USDA-ARS's Scientific Manuscript database
In the carbon market, greenhouse gas (GHG) offset protocols need to ensure that emission reductions are of high quality, quantifiable and real. However, lack of consistency across protocols for quantifying emission reductions compromises the credibility of the offsets generated. Thus, protocol quantifica...
2011-01-01
Purpose Eddy current induced velocity offsets are of concern for accuracy in cardiovascular magnetic resonance (CMR) volume flow quantification. However, currently known theoretical aspects of eddy current behavior have not led to effective guidelines for the optimization of flow quantification sequences. This study is aimed at identifying correlations between protocol parameters and the resulting velocity error in clinical CMR flow measurements in a multi-vendor study. Methods Nine 1.5T scanners of three different types/vendors were studied. Measurements were performed on a large stationary phantom. Starting from a clinical breath-hold flow protocol, several protocol parameters were varied. Acquisitions were made in three clinically relevant orientations. Additionally, a time delay between the bipolar gradient and read-out, asymmetric versus symmetric velocity encoding, and gradient amplitude and slew rate were studied in adapted sequences as exploratory measurements beyond the protocol. Image analysis determined the worst-case offset for a typical great-vessel flow measurement. Results The results showed great variation in offset behavior among scanners (standard deviation among samples of 0.3, 0.4, and 0.9 cm/s for the three different scanner types), even for small changes in the protocol. In absolute terms, none of the tested protocol settings consistently reduced the velocity offsets below the critical level of 0.6 cm/s for all three orientations or for all three scanner types. Using multilevel linear model analysis, oblique aortic and pulmonary slices showed systematically higher offsets than the transverse aortic slices (oblique aortic 0.6 cm/s, and pulmonary 1.8 cm/s, higher than transverse aortic). 
The exploratory measurements beyond the protocol yielded some new leads for further sequence development towards reduction of velocity offsets; however, those protocols were not always compatible with the time constraints of breath-hold imaging and the avoidance of flow-related artefacts. Conclusions This study showed that with current systems there was no generic protocol that resulted in acceptable flow offset values. Protocol optimization would have to be performed on a per-scanner and per-protocol basis. Proper optimization might make accurate (transverse) aortic flow quantification possible for most scanners. Pulmonary flow quantification would still need further (offline) correction. PMID:21388521
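The per-scanner offline correction suggested in the conclusion amounts to subtracting a phantom-derived offset map and checking the worst-case residual. A minimal sketch; the arrays, mask, and values are illustrative, not data from the study:

```python
import numpy as np

def correct_velocity_offset(velocity_map, phantom_offset_map):
    """Offline correction: subtract an offset map measured on a
    stationary phantom from the in vivo velocity map (cm/s)."""
    return velocity_map - phantom_offset_map

def worst_case_offset(offset_map, roi_mask):
    """Largest-magnitude residual offset inside a region of interest."""
    return float(np.max(np.abs(offset_map[roi_mask])))

# toy 4x4 offset map with one -0.9 cm/s outlier
offsets = np.full((4, 4), 0.2)
offsets[2, 2] = -0.9
mask = np.ones((4, 4), dtype=bool)
print(worst_case_offset(offsets, mask))  # prints 0.9
```

The worst-case value can then be compared against a tolerance such as the 0.6 cm/s level discussed above.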
Towards high-resolution 4D flow MRI in the human aorta using kt-GRAPPA and B1+ shimming at 7T.
Schmitter, Sebastian; Schnell, Susanne; Uğurbil, Kâmil; Markl, Michael; Van de Moortele, Pierre-François
2016-08-01
To evaluate the feasibility of aortic 4D flow magnetic resonance imaging (MRI) at 7T with improved spatial resolution using kt-GRAPPA acceleration while restricting acquisition time and to address radiofrequency (RF) excitation heterogeneities with B1+ shimming. 4D flow MRI data were obtained in the aorta of eight subjects using a 16-channel transmit/receive coil array at 7T. Flow quantification and acquisition time were compared for a kt-GRAPPA accelerated (R = 5) and a standard GRAPPA (R = 2) accelerated protocol. The impact of different dynamic B1+ shimming strategies on flow quantification was investigated. Two kt-GRAPPA accelerated protocols with 1.2 × 1.2 × 1.2 mm(3) and 1.8 × 1.8 × 2.4 mm(3) spatial resolution were compared. Using kt-GRAPPA, we achieved a 4.3-fold reduction in net acquisition time resulting in scan times of about 10 minutes. No significant effect on flow quantification was observed compared to standard GRAPPA with R = 2. Optimizing the B1+ fields for the aorta significantly impacted flow quantification (P < 0.05), while specific B1+ settings were required for respiration navigators. The high-resolution protocol yielded similar flow quantification, but allowed the depiction of branching vessels. 7T in combination with B1+ shimming allows for high-resolution 4D flow MRI acquisitions in the human aorta, while kt-GRAPPA limits total scan times without affecting flow quantification. J. Magn. Reson. Imaging 2016;44:486-499. © 2016 Wiley Periodicals, Inc.
Multiplex Droplet Digital PCR Protocols for Quantification of GM Maize Events.
Dobnik, David; Spilsberg, Bjørn; Bogožalec Košir, Alexandra; Štebih, Dejan; Morisset, Dany; Holst-Jensen, Arne; Žel, Jana
2018-01-01
The standard-curve based simplex quantitative polymerase chain reaction (qPCR) has been the gold standard for DNA target quantification for more than a decade. The large and growing number of individual analyses needed to test for genetically modified organisms (GMOs) is reducing the cost-effectiveness of qPCR. Droplet digital PCR (ddPCR) enables absolute quantification without standard curves, avoids the amplification efficiency bias observed with qPCR, allows more accurate estimations at low target copy numbers and, in combination with multiplexing, significantly improves cost efficiency. Here we describe two protocols for multiplex quantification of GM maize events: (1) nondiscriminating, with multiplex quantification of targets as a group (12 GM maize lines) and (2) discriminating, with multiplex quantification of individual targets (events). The first enables the quantification of twelve European Union authorized GM maize events as a group with only two assays, but does not permit determination of the individual events present. The second protocol enables the quantification of four individual targets (three GM events and one endogene) in a single reaction. Both protocols can be modified for quantification of any other DNA target.
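The absolute quantification that ddPCR provides without standard curves rests on Poisson statistics over the droplet partitions. A minimal sketch of that calculation; the droplet volume and counts below are illustrative assumptions, not values from this protocol:

```python
import math

def ddpcr_copies_per_ul(negative_droplets, total_droplets, droplet_volume_nl=0.85):
    """Absolute target concentration from ddPCR end-point counts.
    Poisson statistics give the mean copies per droplet as
    lambda = -ln(fraction of negative droplets); dividing by the
    droplet volume (converted from nL to uL) yields copies/uL."""
    fraction_negative = negative_droplets / total_droplets
    lam = -math.log(fraction_negative)        # mean copies per droplet
    return lam / (droplet_volume_nl * 1e-3)   # nL -> uL

# e.g. 12,000 negative droplets out of 15,000 accepted
concentration = ddpcr_copies_per_ul(12000, 15000)
```

Because the count of negative partitions is all that matters, the readout is independent of amplification efficiency, which is the bias the abstract notes for qPCR.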
Belo, Luís; Serrano, Isa; Cunha, Eva; Carneiro, Carla; Tavares, Luis; Miguel Carreira, L; Oliveira, Manuela
2018-03-14
Most surgical site infections (SSI) are caused by commensal and pathogenic agents from the patient's microbiota, which may include antibiotic-resistant strains. Pre-surgical asepsis of the skin is one of the preventive measures performed to reduce SSI incidence and also the dissemination of antibiotic resistance. However, in veterinary medicine there is no agreement on which biocide is the most effective. The aim of this study was to evaluate the effectiveness of two pre-surgical skin asepsis protocols in dogs. A total of 46 animals were randomly assigned to an asepsis protocol with an aqueous solution of 7.5% povidone-iodine or with an alcoholic solution of 2% chlorhexidine. For each dog, two skin swab samples were collected, at pre-asepsis and post-asepsis, for bacterial quantification by conventional techniques and isolation of methicillin-resistant species. Most samples collected at post-asepsis did not present bacterial growth, both for the animals subjected to the povidone-iodine (74%) and to the chlorhexidine (70%) protocols. In only 9% of cases was a significant bacterial logarithmic reduction not observed, indicating possible resistance to these agents. Moreover, the logarithmic reduction in bacterial quantification from pre- to post-asepsis was not statistically different between the povidone-iodine (6.51 ± 1.94 log10) and chlorhexidine (6.46 ± 2.62 log10) protocols. Of the 39% of pre-asepsis swabs which showed bacterial growth on MRSA modified chromogenic agar medium, only one isolate was identified as Staphylococcus aureus and one as S. epidermidis. False positives were mainly other staphylococcal species, as well as Enterobacteriaceae. Pre-surgical skin asepsis protocols with povidone-iodine or chlorhexidine showed similar efficacy in eliminating methicillin-resistant bacteria and preventing surgical site infections in dogs undergoing surgery.
Integrated protocol for reliable and fast quantification and documentation of electrophoresis gels.
Rehbein, Peter; Schwalbe, Harald
2015-06-01
Quantitative analysis of electrophoresis gels is an important part of molecular cloning, as well as of protein expression and purification. Parallel quantifications of yield and purity can be most conveniently obtained from densitometric analysis. This communication reports a comprehensive, reliable and simple protocol for gel quantification and documentation, applicable to single samples and with special features for protein expression screens. As the major component of the protocol, the fully annotated code of a proprietary open source computer program for semi-automatic densitometric quantification of digitized electrophoresis gels is disclosed. The program ("GelQuant") is implemented in the C-based macro-language of the widespread integrated development environment IGOR Pro. Copyright © 2014 Elsevier Inc. All rights reserved.
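A densitometric readout of the kind GelQuant computes can be sketched in a few lines. This is not the GelQuant code (which is an IGOR Pro macro); it is a hypothetical Python analogue with invented function names, shown only to illustrate the idea of background-subtracted band integration:

```python
import numpy as np

def lane_band_intensity(gel, band_rows, lane_cols):
    """Integrated band intensity: sum pixel values in the band region
    after subtracting a crude per-lane background (the lane median)."""
    lane = gel[:, lane_cols[0]:lane_cols[1]]
    band = lane[band_rows[0]:band_rows[1], :]
    background = np.median(lane)
    return float(np.sum(band - background))

def relative_purity(band_intensities):
    """Purity of each band as a fraction of the total lane signal."""
    total = sum(band_intensities)
    return [b / total for b in band_intensities]
```

Yield then comes from comparing a band's integrated intensity to a standard of known amount, and purity from the band's share of the lane total.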
NASA Astrophysics Data System (ADS)
Tonitto, C.; Gurwick, N. P.
2012-12-01
Policy initiatives to reduce greenhouse gas (GHG) emissions have promoted the development of agricultural management protocols to increase SOC storage and reduce GHG emissions. We review approaches for quantifying N2O flux from agricultural landscapes. We summarize the temporal and spatial extent of observations across representative soil classes, climate zones, cropping systems, and management scenarios. We review applications of simulation and empirical modeling approaches and compare validation outcomes across modeling tools. Subsequently, we review current model application in agricultural management protocols. In particular, we compare approaches adapted for compliance with the California Global Warming Solutions Act and the Alberta Climate Change and Emissions Management Act, and those adopted by the American Carbon Registry. In the absence of regional data to drive model development, policies that require GHG quantification often use simple empirical models based on highly aggregated data of N2O flux as a function of applied N - Tier 1 models according to the IPCC categorization. As participants in the development of protocols that could be used in carbon offset markets, we observed that stakeholders outside of the biogeochemistry community favored outcomes from simulation modeling (Tier 3) rather than empirical modeling (Tier 2). In contrast, scientific advisors were more accepting of outcomes based on statistical approaches that rely on local observations, and their views sometimes swayed policy practitioners over the course of policy development. Both Tier 2 and Tier 3 approaches have been implemented in current policy development, and it is important that the strengths and limitations of both approaches, in the face of available data, be well understood by those drafting and adopting policies and protocols. The reliability of all models is contingent on sufficient observations for model development and validation. 
Simulation models applied without site calibration generally produce poor validation results, a point that particularly needs to be emphasized during policy development. For cases where sufficient calibration data are available, simulation models have demonstrated the ability to capture seasonal patterns of N2O flux. The reliability of statistical models likewise depends on data availability. Because soil moisture is a significant driver of N2O flux, the best outcomes occur when empirical models are applied to systems with relevant soil classification and climate. The structure of current carbon offset protocols is not well aligned with a budgetary approach to GHG accounting. Current protocols credit field-scale reduction in N2O flux as a result of reduced fertilizer use. Protocols do not award farmers credit for reductions in CO2 emissions resulting from reduced production of synthetic N fertilizer. Achieving the greatest GHG emission reductions through reduced synthetic N production and reduced landscape N saturation requires a re-envisioning of the agricultural landscape to include cropping systems with legume and manure N sources. The current focus on on-farm GHG sources directs credits toward simple reductions of N applied in conventional systems rather than toward the development of cropping systems that promote higher recycling and retention of N.
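The Tier 1 approach criticized above reduces to a single emission factor applied to N inputs. A sketch using the IPCC default direct emission factor (EF1 = 0.01); the function name and units are illustrative:

```python
def tier1_direct_n2o(n_applied_kg, emission_factor=0.01):
    """IPCC Tier 1 direct N2O estimate: a fixed fraction of applied
    N (default EF1 = 1%) is assumed emitted as N2O-N, then converted
    to N2O mass via the molecular weight ratio 44/28."""
    n2o_n_kg = n_applied_kg * emission_factor
    return n2o_n_kg * 44.0 / 28.0

# 100 kg N applied -> about 1.57 kg N2O
annual_n2o = tier1_direct_n2o(100.0)
```

The contrast with Tier 2/3 approaches is visible here: nothing about soil class, moisture, or climate enters the calculation, which is exactly why site-relevant data improve on it.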
USDA-ARS's Scientific Manuscript database
The pathogen causing corky root on lettuce, Sphingobium suberifaciens, is recalcitrant to standard epidemiological methods. Primers were selected from 16S rDNA sequences useful for the specific detection and quantification of S. suberifaciens. Conventional (PCR) and quantitative (qPCR) PCR protocols...
Psifidi, Androniki; Dovas, Chrysostomos; Banos, Georgios
2011-01-19
Single nucleotide polymorphisms (SNP) have proven to be powerful genetic markers for genetic applications in medicine, life science and agriculture. A variety of methods exist for SNP detection but few can quantify SNP frequencies when the mutated DNA molecules correspond to a small fraction of the wild-type DNA. Furthermore, there is no generally accepted gold standard for SNP quantification, and, in general, currently applied methods give inconsistent results in selected cohorts. In the present study we sought to develop a novel method for accurate detection and quantification of SNP in DNA pooled samples. The development and evaluation of a novel Ligase Chain Reaction (LCR) protocol that uses a DNA-specific fluorescent dye to allow quantitative real-time analysis is described. Different reaction components and thermocycling parameters affecting the efficiency and specificity of LCR were examined. Several protocols, including gap-LCR modifications, were evaluated using plasmid standard and genomic DNA pools. A protocol of choice was identified and applied for the quantification of a polymorphism at codon 136 of the ovine PRNP gene that is associated with susceptibility to a transmissible spongiform encephalopathy in sheep. The real-time LCR protocol developed in the present study showed high sensitivity, accuracy, reproducibility and a wide dynamic range of SNP quantification in different DNA pools. The limits of detection and quantification of SNP frequencies were 0.085% and 0.35%, respectively. The proposed real-time LCR protocol is applicable when sensitive detection and accurate quantification of low copy number mutations in DNA pools is needed. Examples include oncogenes and tumour suppressor genes, infectious diseases, pathogenic bacteria, fungal species, viral mutants, drug resistance resulting from point mutations, and genetically modified organisms in food.
PMID:21283808
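Quantifying SNP frequency against a plasmid-standard dilution series, as described above, amounts to fitting and then inverting a line of Cq versus log10 frequency. A hypothetical sketch with synthetic, perfectly linear standards (the numbers are invented, not from the study):

```python
def fit_standard_curve(log_freqs, cq_values):
    """Least-squares line Cq = slope * log10(freq) + intercept, as
    fitted to a standard dilution series."""
    n = len(log_freqs)
    mx = sum(log_freqs) / n
    my = sum(cq_values) / n
    sxx = sum((x - mx) ** 2 for x in log_freqs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log_freqs, cq_values))
    slope = sxy / sxx
    return slope, my - slope * mx

def snp_frequency(cq, slope, intercept):
    """Invert the standard curve to estimate SNP frequency (%)."""
    return 10 ** ((cq - intercept) / slope)

# synthetic standards on a slope of -3.32 (~100% efficiency)
slope, intercept = fit_standard_curve([0.0, 1.0, 2.0], [20.0, 16.68, 13.36])
freq = snp_frequency(16.68, slope, intercept)  # ~10
```

The reported limits of detection and quantification (0.085% and 0.35%) would correspond to the lowest frequencies at which such a curve remains reliable.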
Otani, Kyoko; Nakazono, Akemi; Salgo, Ivan S; Lang, Roberto M; Takeuchi, Masaaki
2016-10-01
Echocardiographic determination of left heart chamber volumetric parameters by using manual tracings during multiple beats is tedious in atrial fibrillation (AF). The aim of this study was to determine the usefulness of fully automated left chamber quantification software with single-beat three-dimensional transthoracic echocardiographic data sets in patients with AF. Single-beat full-volume three-dimensional transthoracic echocardiographic data sets were prospectively acquired during consecutive multiple cardiac beats (≥10 beats) in 88 patients with AF. In protocol 1, left ventricular volumes, left ventricular ejection fraction, and maximal left atrial volume were validated using automated quantification against the manual tracing method in identical beats in 10 patients. In protocol 2, automated quantification-derived averaged values from multiple beats were compared with the corresponding values obtained from the indexed beat in all patients. Excellent correlations of left chamber parameters between automated quantification and the manual method were observed (r = 0.88-0.98) in protocol 1. The time required for the analysis with the automated quantification method (5 min) was significantly less compared with the manual method (27 min) (P < .0001). In protocol 2, there were excellent linear correlations between the averaged left chamber parameters and the corresponding values obtained from the indexed beat (r = 0.94-0.99), and test-retest variability of left chamber parameters was low (3.5%-4.8%). Three-dimensional transthoracic echocardiography with fully automated quantification software is a rapid and reliable way to measure averaged values of left heart chamber parameters during multiple consecutive beats. Thus, it is a potential new approach for left chamber quantification in patients with AF in daily routine practice. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
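The agreement reported between automated and manual measurements (r = 0.88-0.98) is a Pearson correlation. A minimal sketch with invented toy volumes, not study data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# toy paired left-ventricular volumes (mL): automated vs manual tracing
automated = [118.0, 97.0, 148.0, 112.0]
manual = [120.0, 95.0, 150.0, 110.0]
r = pearson_r(automated, manual)
```

In practice such method-comparison studies pair correlation with agreement analysis (e.g. Bland-Altman), since a high r alone does not rule out systematic bias.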
Rapid and Easy Protocol for Quantification of Next-Generation Sequencing Libraries.
Hawkins, Steve F C; Guest, Paul C
2018-01-01
The emergence of next-generation sequencing (NGS) over the last 10 years has increased the efficiency of DNA sequencing in terms of speed, ease, and price. However, exact quantification of an NGS library is crucial in order to obtain good data on sequencing platforms developed by the current market leader Illumina. Different approaches for DNA quantification are currently available, and the most commonly used are based on analysis of the physical properties of the DNA through spectrophotometric or fluorometric methods. Although these methods are technically simple, they do not allow quantification as exact as that achieved using a real-time quantitative PCR (qPCR) approach. A qPCR protocol for DNA quantification with applications in NGS library preparation studies is presented here. This can be applied in various fields of study, such as medical disorders resulting from nutritional programming disturbances.
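qPCR-based library quantification typically reads a concentration off a standard curve and then size-adjusts it, because at equal mass the molar concentration scales inversely with fragment length. A sketch of that adjustment; the standard and library lengths below are hypothetical, not values from this protocol:

```python
def size_adjusted_conc_pm(raw_conc_pm, standard_bp, library_bp):
    """Size-adjust a concentration read off a qPCR standard curve:
    multiply by the ratio of standard fragment length to average
    library fragment length to get the library's molar concentration."""
    return raw_conc_pm * standard_bp / library_bp

# hypothetical: a 10 pM read against a 452 bp standard, 300 bp library
adjusted = size_adjusted_conc_pm(10.0, 452, 300)
```

The adjusted molarity is what determines cluster density on the flow cell, which is why mass-based spectrophotometric readings alone are insufficient.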
A rapid, ideal, and eco-friendlier protocol for quantifying proline.
Shabnam, Nisha; Tripathi, Indu; Sharmila, P; Pardha-Saradhi, P
2016-11-01
Proline, a stress marker, is routinely quantified by a protocol that essentially uses hazardous toluene. The negative impacts of toluene on human health prompted us to develop a reliable alternative protocol for proline quantification. Absorbance of the proline-ninhydrin condensation product formed by reaction of proline with ninhydrin at 100 °C in the reaction mixture was significantly higher than that recorded after its transfer to toluene, revealing that toluene lowers the sensitivity of this assay. The λmax of the proline-ninhydrin complex in the reaction mixture and in toluene were 508 and 513 nm, respectively. Ninhydrin in glacial acetic acid yielded a higher quantity of the proline-ninhydrin condensation product compared to ninhydrin in a mixture of glacial acetic acid and H3PO4, indicating a negative impact of H3PO4 on proline quantification. Further, the maximum yield of the proline-ninhydrin complex with ninhydrin in glacial acetic acid and with ninhydrin in the mixture of glacial acetic acid and H3PO4 was achieved within 30 and 60 min, respectively. This revealed that H3PO4 negatively affects both the reaction rate and the quantity of the proline-ninhydrin complex formed. In brief, our proline quantification protocol involves reaction of a 1-ml proline sample with 2 ml of 1.25% ninhydrin in glacial acetic acid at 100 °C for 30 min, followed by recording the absorbance of the proline-ninhydrin condensation product in the reaction mixture itself at 508 nm. Amongst proline quantification protocols known to date, our protocol is the simplest, most rapid, reliable, cost-effective, and eco-friendly.
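Converting the A508 readout described above into a proline concentration is a standard-curve calculation. A sketch with invented standards, using a line forced through the origin for simplicity (the assay itself would use its own calibration data):

```python
def proline_concentration(a508_sample, standards):
    """Estimate proline concentration from absorbance at 508 nm via a
    standard curve forced through the origin: slope = sum(c*A)/sum(c^2)
    over (concentration, absorbance) standard pairs."""
    slope = sum(c * a for c, a in standards) / sum(c * c for c, _ in standards)
    return a508_sample / slope

# invented standards (umol/ml, A508) lying exactly on A = 0.5 * c
stds = [(0.1, 0.05), (0.2, 0.10), (0.4, 0.20)]
conc = proline_concentration(0.15, stds)  # ~0.3 umol/ml
```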
Amarasiri, Mohan; Kitajima, Masaaki; Nguyen, Thanh H; Okabe, Satoshi; Sano, Daisuke
2017-09-15
The multiple-barrier concept is widely employed in international and domestic guidelines for wastewater reclamation and reuse for microbiological risk management, in which a wastewater reclamation system is designed to achieve guideline values for the performance target of microbe reduction. Enteric viruses are among the pathogens for which target reduction values are stipulated in guidelines, but frequent monitoring to validate human virus removal efficacy is challenging in daily operation due to the cumbersome procedures for virus quantification in wastewater. Bacteriophages have been the first-choice surrogate for this task, because of the well-characterized nature of the strains and the presence of established protocols for quantification. Here, we performed a meta-analysis to calculate the average log10 reduction values (LRVs) of somatic coliphages, F-specific phages, MS2 coliphage and T4 phage by membrane bioreactor, activated sludge, constructed wetlands, pond systems, microfiltration and ultrafiltration. The calculated LRVs of bacteriophages were then compared with reported human enteric virus LRVs. MS2 coliphage LRVs in MBR processes were shown to be lower than those of norovirus GII and enterovirus, suggesting it as a possible validation and operational monitoring tool. The other bacteriophages provided higher LRVs compared to human viruses. The data sets on LRVs of human viruses and bacteriophages are scarce except for MBR and conventional activated sludge processes, which highlights the necessity of investigating LRVs of human viruses and bacteriophages in multiple treatment unit processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
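The LRVs compared in this meta-analysis are simple log10 ratios of influent to effluent concentrations. A minimal sketch with illustrative numbers:

```python
import math

def log10_reduction(influent, effluent):
    """Log10 reduction value (LRV) between influent and effluent
    concentrations (e.g. PFU/mL or gene copies/mL)."""
    return math.log10(influent / effluent)

def mean_lrv(paired_samples):
    """Average LRV over paired influent/effluent samples, as computed
    per treatment process in a meta-analysis."""
    lrvs = [log10_reduction(i, e) for i, e in paired_samples]
    return sum(lrvs) / len(lrvs)
```

For example, a process reducing 10^6 PFU/mL to 10^3 PFU/mL achieves an LRV of 3; a surrogate is conservative only if its LRV does not exceed that of the target virus.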
Quantification of birefringence readily measures the level of muscle damage in zebrafish
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berger, Joachim, E-mail: Joachim.Berger@Monash.edu; Sztal, Tamar; Currie, Peter D.
2012-07-13
Highlights: • Report of an unbiased quantification of the birefringence of muscle of fish larvae. • Quantification method readily identifies the level of overall muscle damage. • Compares zebrafish muscle mutants for level of phenotype severity. • Proposed tool to survey treatments that aim to ameliorate muscular dystrophy. -- Abstract: Muscular dystrophies are a group of genetic disorders that progressively weaken and degenerate muscle. Many zebrafish models for human muscular dystrophies have been generated and analysed, including the dystrophin-deficient zebrafish mutant dmd, which models Duchenne Muscular Dystrophy. Under polarised light the zebrafish muscle can be detected as a bright area in an otherwise dark background. This light effect, called birefringence, results from the diffraction of polarised light through the pseudo-crystalline array of the muscle sarcomeres. Muscle damage, as seen in zebrafish models for muscular dystrophies, can readily be detected by a reduction in the birefringence. Therefore, birefringence is a very sensitive indicator of overall muscle integrity within larval zebrafish. Unbiased documentation of the birefringence followed by densitometric measurement enables the quantification of the birefringence of zebrafish larvae. Thereby, the overall level of muscle integrity can be detected, allowing the identification and categorisation of zebrafish muscle mutants. In addition, we propose that the established protocol can be used to analyse treatments aimed at ameliorating dystrophic zebrafish models.
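The densitometric birefringence readout can be sketched as a thresholded mean grey value compared between mutant and wild-type larvae. The threshold and images here are invented toy examples, not the published procedure:

```python
import numpy as np

def mean_birefringence(image, threshold):
    """Mean grey value of pixels above a background threshold, a
    simple densitometric readout of muscle birefringence."""
    muscle = image[image > threshold]
    return float(muscle.mean()) if muscle.size else 0.0

def relative_integrity(mutant_img, wildtype_img, threshold):
    """Mutant birefringence as a fraction of wild-type; lower values
    indicate greater overall muscle damage."""
    return mean_birefringence(mutant_img, threshold) / mean_birefringence(wildtype_img, threshold)
```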
de Albuquerque, Carlos Diego L; Sobral-Filho, Regivaldo G; Poppi, Ronei J; Brolo, Alexandre G
2018-01-16
Single molecule surface-enhanced Raman spectroscopy (SM-SERS) has the potential to revolutionize quantitative analysis at ultralow concentrations (less than 1 nM). However, there are no established protocols to generalize the application of this technique in analytical chemistry. Here, a protocol for quantification at ultralow concentrations using SM-SERS is proposed. The approach aims to take advantage of the stochastic nature of the single-molecule regime to achieve lower limits of quantification (LOQ). Two emerging contaminants commonly found in aquatic environments, enrofloxacin (ENRO) and ciprofloxacin (CIPRO), were chosen as nonresonant molecular probes. The methodology involves a multivariate curve resolution known as non-negative matrix factorization with alternating least-squares algorithm (NMF-ALS) to solve spectral overlaps. The key element of the quantification is to realize that, under SM-SERS conditions, the Raman intensity generated by a molecule adsorbed on a "hotspot" can be digitalized. Therefore, the number of SERS event counts (rather than SERS intensities) was shown to be proportional to the solution concentration. This allowed the determination of both ENRO and CIPRO with high accuracy and precision even in the ultralow concentration regime. An LOQ of 2.8 pM was achieved for both ENRO and CIPRO. The digital SERS protocol, suggested here, is a roadmap for the implementation of SM-SERS as a routine tool for quantification at ultralow concentrations.
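The key idea, counting SERS events rather than summing intensities, can be sketched as follows. The threshold and the through-origin calibration are illustrative assumptions, not the published procedure:

```python
import numpy as np

def count_sers_events(peak_intensities, threshold):
    """Digital readout: count acquisitions whose peak SERS intensity
    exceeds a blank-derived threshold; under single-molecule
    conditions the count, not the intensity, tracks concentration."""
    return int(np.sum(np.asarray(peak_intensities) > threshold))

def calibrate_counts(concentrations, counts):
    """Slope of a through-origin line counts = k * concentration."""
    c = np.asarray(concentrations, dtype=float)
    n = np.asarray(counts, dtype=float)
    return float(np.sum(c * n) / np.sum(c ** 2))
```

Digitalizing the signal this way sidesteps the large intensity fluctuations of individual hotspots, which is what makes pM-level LOQs plausible.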
An artificial corrosion protocol for lap-splices in aircraft skin
NASA Technical Reports Server (NTRS)
Shaw, Bevil J.
1994-01-01
This paper reviews the progress to date to formulate an artificial corrosion protocol for the Tinker AFB C/KC-135 Corrosion Fatigue Round Robin Test Program. The project has provided new test methods to faithfully reproduce the corrosion damage within a lap-splice by accelerated means, the rationale for a new laboratory test environment, and a means for corrosion damage quantification. The approach is pragmatic and the resulting artificial corrosion protocol lays the foundation for future research in the assessment of aerospace alloys. The general means for quantification of corrosion damage has been presented in a form which can be directly applied to structural integrity calculations.
Kanitkar, Yogendra H.; Stedtfeld, Robert D.; Steffan, Robert J.; Hashsham, Syed A.
2016-01-01
Real-time quantitative PCR (qPCR) protocols specific to the reductive dehalogenase (RDase) genes vcrA, bvcA, and tceA are commonly used to quantify Dehalococcoides spp. in groundwater from chlorinated solvent-contaminated sites. In this study, loop-mediated isothermal amplification (LAMP) was developed as an alternative approach for the quantification of these genes. LAMP does not require a real-time thermal cycler (i.e., amplification is isothermal), allowing the method to be performed using less-expensive and potentially field-deployable detection devices. Six LAMP primers were designed for each of three RDase genes (vcrA, bvcA, and tceA) using Primer Explorer V4. The LAMP assays were compared to conventional qPCR approaches using plasmid standards and two commercially available bioaugmentation cultures, KB-1 and SDC-9 (both contain Dehalococcoides species). DNA was extracted over a growth cycle from KB-1 and SDC-9 cultures amended with trichloroethene and vinyl chloride, respectively. All three genes were quantified for KB-1, whereas only vcrA was quantified for SDC-9. A comparison of LAMP and qPCR using standard plasmids indicated that quantification results were similar over a large range of gene concentrations. In addition, the quantitative increase in gene concentrations over one growth cycle of KB-1 and SDC-9 using LAMP was comparable to that of qPCR. The developed LAMP assays for vcrA and tceA genes were validated by comparing quantification on the Gene-Z handheld platform and a real-time thermal cycler using DNA isolated from eight groundwater samples obtained from an SDC-9-bioaugmented site (Tulsa, OK). These assays will be particularly useful at sites subject to bioaugmentation with these two commonly used Dehalococcoides species-containing cultures. PMID:26746711
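Building the plasmid standard curves used to compare LAMP and qPCR requires converting plasmid mass to copy number. A sketch using the conventional 650 g/mol average mass per double-stranded base pair; the example plasmid size is illustrative:

```python
AVOGADRO = 6.022e23
BP_MASS_G_PER_MOL = 650.0  # conventional average mass of one dsDNA base pair

def plasmid_copies(mass_ng, plasmid_bp):
    """Copy number of a plasmid standard from its measured mass (ng)
    and length (bp), for building qPCR/LAMP standard curves."""
    moles = (mass_ng * 1e-9) / (plasmid_bp * BP_MASS_G_PER_MOL)
    return moles * AVOGADRO

# 1 ng of a 3,000 bp plasmid is roughly 3.1e8 copies
copies = plasmid_copies(1.0, 3000)
```

Serial dilutions of such a stock give the known-copy-number points against which both amplification methods are calibrated.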
Martín-Sabroso, Cristina; Tavares-Fernandes, Daniel Filipe; Espada-García, Juan Ignacio; Torres-Suárez, Ana Isabel
2013-12-15
In this work a protocol is developed to validate analytical procedures for the quantification of drug substances formulated in polymeric systems, covering both drug entrapped in the polymeric matrix (assay:content test) and drug released from the systems (assay:dissolution test). This protocol is applied to the validation of two isocratic HPLC analytical procedures for the analysis of dexamethasone phosphate disodium microparticles for parenteral administration. Preparation of authentic samples and artificially "spiked" and "unspiked" samples is described. Specificity (ability to quantify dexamethasone phosphate disodium in the presence of constituents of the dissolution medium and other microparticle constituents), linearity, accuracy and precision are evaluated, in the range from 10 to 50 μg/mL in the assay:content test procedure and from 0.25 to 10 μg/mL in the assay:dissolution test procedure. The robustness of the analytical method to extract drug from microparticles is also assessed. The validation protocol developed allows us to conclude that both analytical methods are suitable for their intended purpose, but the lack of proportionality of the assay:dissolution analytical method should be taken into account. The validation protocol designed in this work could be applied to the validation of any analytical procedure for the quantification of drugs formulated in controlled release polymeric microparticles. Copyright © 2013 Elsevier B.V. All rights reserved.
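Two of the validation figures of merit evaluated here, accuracy (percent recovery of spiked samples) and precision (relative standard deviation of replicates), reduce to short calculations; a minimal sketch:

```python
import statistics

def recovery_percent(measured, nominal):
    """Accuracy expressed as percent recovery of a spiked sample."""
    return 100.0 * measured / nominal

def rsd_percent(replicates):
    """Precision expressed as relative standard deviation (%)."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)
```

Typical acceptance criteria (e.g. recovery within 98-102%, RSD below 2%) vary by guideline and are set in the validation protocol itself.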
Nolan, J A; Kinsella, M; Hill, C; Joyce, S A; Gahan, C G M
2016-07-01
Statins are widely prescribed cholesterol-lowering medications and act through inhibition of the human enzyme 3-hydroxy-3-methylglutaryl coenzyme A reductase (HMG-R), which produces mevalonate (MVAL), a key substrate for cholesterol biosynthesis. Some important microbial species also express an isoform of HMG-R; however, the nature of the interaction between statins and bacteria is currently unclear and studies would benefit from protocols to quantify MVAL in complex microbial environments. The objective of this study was to develop a protocol for the analytical quantification of MVAL in bacterial systems and to utilise this approach to analyse the effects of rosuvastatin (RSV) on bacterial MVAL formation. To determine the effective concentration range of RSV, we examined the dose-dependent inhibition of growth in the HMG-R(+) bacterial pathogens Listeria monocytogenes, Staphylococcus aureus and Enterococcus faecium at various concentrations of pure RSV. Growth inhibition generally correlated with a reduction in bacterial MVAL levels, particularly in culture supernatants at high RSV concentrations, as determined using our ultra-performance liquid chromatography mass spectrometry protocol. This work therefore outlines a refined protocol for the analysis of MVAL in microbial cultures and provides evidence for statin-mediated inhibition of bacterial HMG-R. Furthermore, we show that MVAL is readily transported and secreted from bacterial cells into the growth media.
Morschett, Holger; Wiechert, Wolfgang; Oldiges, Marco
2016-02-09
Within the context of microalgal lipid production for biofuels and bulk chemical applications, specialized higher-throughput devices for small-scale parallelized cultivation are expected to boost the time efficiency of phototrophic bioprocess development. However, the increasing number of possible experiments is directly coupled to the demand for lipid quantification protocols that can reliably measure large sets of samples within a short time and can deal with the reduced sample volume typically generated at screening scale. To meet these demands, a dye-based assay was established using a liquid handling robot to provide reproducible high-throughput quantification of lipids with minimized hands-on time. Lipid production was monitored using the fluorescent dye Nile red with dimethyl sulfoxide as a solvent facilitating dye permeation. The staining kinetics of cells at different concentrations and physiological states were investigated to successfully down-scale the assay to 96-well microtiter plates. Gravimetric calibration against a well-established extractive protocol enabled absolute quantification of intracellular lipids, improving precision from ±8% to ±2% on average. Implementation on an automated liquid handling platform allows for measuring up to 48 samples within 6.5 h, reducing hands-on time to a third compared with manual operation. Moreover, it was shown that automation enhances accuracy and precision compared with manual preparation. It was revealed that established protocols relying on optical density or cell number for biomass adjustment prior to staining may suffer from errors due to significant changes in the cells' optical and physiological properties during cultivation. Alternatively, the biovolume was used as a measure of biomass concentration so that errors from morphological changes can be excluded. 
The newly established assay proved to be applicable for absolute quantification of algal lipids, avoiding the limitations of currently established protocols, namely biomass adjustment and limited throughput. Automation was shown to improve data reliability as well as experimental throughput while reducing the required hands-on time to a third. The presented protocol thus meets the demands for the analysis of samples generated by the upcoming generation of devices for higher-throughput phototrophic cultivation and thereby contributes to boosting the time efficiency of setting up algal lipid production processes.
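The gravimetric calibration step described above amounts to a linear conversion from dye fluorescence to absolute lipid content. A minimal sketch of that idea, assuming a simple least-squares fit; all numbers are hypothetical illustrations, not data from the assay:

```python
# Sketch: calibrate Nile red fluorescence (a.u.) against gravimetrically
# determined lipid content (mg/L) with a least-squares line, then use the
# fit to convert new fluorescence readings into absolute lipid values.
# All numbers are hypothetical, not data from the study.

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Calibration standards: fluorescence vs. gravimetric lipid (hypothetical)
fluorescence = [100.0, 200.0, 300.0, 400.0]
lipid_mg_per_l = [12.0, 22.0, 32.0, 42.0]

slope, intercept = linear_fit(fluorescence, lipid_mg_per_l)

def lipid_from_fluorescence(f):
    """Convert a fluorescence reading to absolute lipid content (mg/L)."""
    return slope * f + intercept

estimate = lipid_from_fluorescence(250.0)  # -> 27.0 mg/L with these numbers
```

Once the calibration line is stored, every plate read-out can be converted to absolute lipid content without repeating the extractive reference measurement.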
Rizk, Aurélien; Paul, Grégory; Incardona, Pietro; Bugarski, Milica; Mansouri, Maysam; Niemann, Axel; Ziegler, Urs; Berger, Philipp; Sbalzarini, Ivo F
2014-03-01
Detection and quantification of fluorescently labeled molecules in subcellular compartments is a key step in the analysis of many cell biological processes. Pixel-wise colocalization analyses, however, are not always suitable, because they do not provide object-specific information, and they are vulnerable to noise and background fluorescence. Here we present a versatile protocol for a method named 'Squassh' (segmentation and quantification of subcellular shapes), which is used for detecting, delineating and quantifying subcellular structures in fluorescence microscopy images. The workflow is implemented in freely available, user-friendly software. It works on both 2D and 3D images, accounts for the microscope optics and for uneven image background, computes cell masks and provides subpixel accuracy. The Squassh software enables both colocalization and shape analyses. The protocol can be applied in batch, on desktop computers or computer clusters, and it usually requires <1 min and <5 min for 2D and 3D images, respectively. Basic computer-user skills and some experience with fluorescence microscopy are recommended to successfully use the protocol.
NASA Astrophysics Data System (ADS)
Restaino, Stephen M.; White, Ian M.
2017-03-01
Surface-enhanced Raman spectroscopy (SERS) provides significant improvements over conventional methods for single- and multi-analyte quantification. Specifically, the spectroscopic fingerprint provided by Raman scattering allows for a direct multiplexing potential far beyond that of fluorescence and colorimetry. Additionally, SERS carries a comparatively low financial and spatial footprint relative to common fluorescence-based systems. Despite these advantages, SERS has remained largely an academic pursuit. In the field of biosensing, techniques to apply SERS to molecular diagnostics are constantly under development, but most often assay protocols are redesigned around the use of SERS as a quantification method, which ultimately complicates existing protocols. Our group has sought to rethink common SERS methodologies in order to produce translational technologies capable of allowing SERS to compete in the evolving, yet often inflexible, biosensing field. This work will discuss the development of two techniques for quantification of microRNA, a promising biomarker for homeostatic and disease conditions ranging from cancer to HIV. First, an inkjet-printed paper SERS sensor has been developed to allow on-demand production of a customizable and multiplexable single-step lateral flow assay for miRNA quantification. Second, because miRNAs are commonly present at relatively low concentrations, amplification methods (e.g., PCR) are required to facilitate quantification. This work presents a novel miRNA assay alongside a novel technique for quantification of nuclease-driven nucleic acid amplification strategies that will allow SERS to be used directly with common amplification strategies for quantification of miRNA and other nucleic acid biomarkers.
Stepanović, Srdjan; Vuković, Dragana; Hola, Veronika; Di Bonaventura, Giovanni; Djukić, Slobodanka; Cirković, Ivana; Ruzicka, Filip
2007-08-01
The details of all steps involved in the quantification of biofilm formation in microtiter plates are described. The presented protocol incorporates information on assessment of biofilm production by staphylococci, gained both by direct experience as well as by analysis of methods for assaying biofilm production. The obtained results should simplify quantification of biofilm formation in microtiter plates, and make it more reliable and comparable among different laboratories.
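The microtiter-plate readout described here is typically interpreted against an optical-density cut-off (ODc), commonly defined as the negative-control mean plus three standard deviations. A minimal sketch of that conventional classification scheme; the ODc definition and category boundaries are the widely used convention and should be treated as an assumption, not the exact values of this protocol:

```python
# Sketch of the common microtiter-plate interpretation step: compute a
# cut-off OD (ODc) from negative controls, then classify strains by
# conventional ODc multiples. Values are hypothetical.

def biofilm_cutoff(negative_control_ods):
    """ODc = mean of negative controls + 3 * sample standard deviation."""
    n = len(negative_control_ods)
    mean = sum(negative_control_ods) / n
    sd = (sum((x - mean) ** 2 for x in negative_control_ods) / (n - 1)) ** 0.5
    return mean + 3 * sd

def classify_biofilm(od, odc):
    """Conventional categories: non-producer, weak, moderate, strong."""
    if od <= odc:
        return "non-producer"
    if od <= 2 * odc:
        return "weak"
    if od <= 4 * odc:
        return "moderate"
    return "strong"

odc = biofilm_cutoff([0.08, 0.10, 0.12])   # hypothetical negative controls
category = classify_biofilm(0.35, 0.1)     # hypothetical test well
```

Fixing the cut-off rule in code is one way to make the classification reproducible across plates and laboratories, which is the stated aim of the protocol.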
Quantification of protein carbonylation.
Wehr, Nancy B; Levine, Rodney L
2013-01-01
Protein carbonylation is the most commonly used measure of oxidative modification of proteins. It is most often measured spectrophotometrically or immunochemically by derivatizing proteins with the classical carbonyl reagent 2,4-dinitrophenylhydrazine (DNPH). We present protocols for the derivatization and quantification of protein carbonylation with these two methods, including a newly described dot blot with greatly increased sensitivity.
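The spectrophotometric variant rests on the Beer-Lambert law, using the commonly cited molar absorptivity of roughly 22,000 M⁻¹ cm⁻¹ for the DNPH hydrazone near 370 nm. A minimal sketch of that arithmetic; the sample values are illustrative only:

```python
# Sketch of the spectrophotometric DNPH calculation: carbonyl content is
# derived from hydrazone absorbance via the Beer-Lambert law. The molar
# absorptivity of ~22,000 M^-1 cm^-1 near 370 nm is the commonly cited
# value; sample numbers are hypothetical.

EPSILON = 22_000.0  # M^-1 cm^-1, DNPH hydrazone molar absorptivity

def carbonyl_nmol_per_mg(a370, protein_mg_per_ml, path_cm=1.0):
    """nmol carbonyl per mg protein from hydrazone absorbance."""
    molar = a370 / (EPSILON * path_cm)   # mol/L of carbonyl groups
    nmol_per_ml = molar * 1e6            # nmol per mL of assay volume
    return nmol_per_ml / protein_mg_per_ml

content = carbonyl_nmol_per_mg(a370=0.044, protein_mg_per_ml=1.0)  # 2.0
```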
Bullich, Santiago; Barthel, Henryk; Koglin, Norman; Becker, Georg A; De Santi, Susan; Jovalekic, Aleksandar; Stephens, Andrew W; Sabri, Osama
2017-11-24
Accurate amyloid PET quantification is necessary for monitoring amyloid-beta accumulation and response to therapy. Currently, most studies are analyzed using the static standardized uptake value ratio (SUVR) approach because of its simplicity. However, this approach may be influenced by changes in cerebral blood flow (CBF) or radiotracer clearance. Full tracer kinetic models require arterial blood sampling and dynamic image acquisition. The objectives of this work were: (1) to validate a non-invasive kinetic modeling approach for 18F-florbetaben PET using an acquisition protocol offering the best compromise between quantification accuracy and simplicity, and (2) to assess the impact of CBF changes and radiotracer clearance on SUVRs and non-invasive kinetic modeling data in 18F-florbetaben PET. Methods: Data from twenty subjects (10 patients with probable Alzheimer's dementia, 10 healthy volunteers) were used to compare the binding potential (BPND) obtained from the full kinetic analysis to the SUVR and to non-invasive tracer kinetic methods (simplified reference tissue model (SRTM) and multilinear reference tissue model 2 (MRTM2)). Different approaches using shortened or interrupted acquisitions were compared with the results of the full acquisition (0-140 min). Simulations were carried out to assess the effect of CBF and radiotracer clearance changes on SUVRs and non-invasive kinetic modeling outputs. Results: A 0-30 and 120-140 min dual time-window acquisition protocol using appropriate interpolation of the missing time points provided the best compromise between patient comfort and quantification accuracy. Excellent agreement was found between BPND obtained using the full and dual time-window (2TW) acquisition protocols (BPND,2TW = 0.01 + 1.00·BPND,FULL, R² = 0.97 (MRTM2); BPND,2TW = 0.05 + 0.92·BPND,FULL, R² = 0.93 (SRTM)). Simulations showed a limited impact of CBF and radiotracer clearance changes on MRTM2 parameters and SUVRs. 
Conclusion: This study demonstrates accurate non-invasive kinetic modeling of 18F-florbetaben PET data using a dual time-window acquisition protocol, providing a good compromise between quantification accuracy, scan duration and patient burden. The influence of CBF and radiotracer clearance changes on amyloid-beta load estimates was small. For most clinical research applications, the SUVR approach is appropriate. However, for longitudinal studies in which maximum quantification accuracy is desired, this non-invasive dual time-window acquisition protocol and kinetic analysis are recommended. Copyright © 2017 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
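The static SUVR endpoint that the kinetic models are compared against is a ratio of regional mean uptakes, and at equilibrium SUVR approximates DVR = BPND + 1, which links it to the binding potential. A minimal sketch; all regional uptake values below are hypothetical:

```python
# Sketch of the static SUVR endpoint: mean target-region uptake divided
# by mean reference-region uptake (e.g. cerebellar cortex). The regional
# values are hypothetical, not data from the study.

def suvr(target_uptake, reference_uptake):
    """Standardized uptake value ratio from regional mean uptakes."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(target_uptake) / mean(reference_uptake)

suvr_value = suvr([1.8, 2.0, 2.2], [0.9, 1.0, 1.1])

# At equilibrium SUVR approximates DVR = BPND + 1, so a rough
# non-displaceable binding potential estimate is:
bpnd_estimate = suvr_value - 1.0
```

This ratio form is what makes SUVR sensitive to CBF and clearance changes: anything that perturbs the reference-region uptake shifts the ratio even when specific binding is unchanged.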
Quantification of Global DNA Methylation Levels by Mass Spectrometry.
Fernandez, Agustin F; Valledor, Luis; Vallejo, Fernando; Cañal, Maria Jesús; Fraga, Mario F
2018-01-01
Global DNA methylation was classically considered the relative percentage of 5-methylcytosine (5mC) with respect to total cytosine (C). Early approaches were based on the use of high-performance separation technologies and UV detection. However, the recent development of protocols using mass spectrometry for detection has increased sensitivity and permitted the precise identification of peak compounds based on their molecular masses. This allows work to be conducted with much less genomic DNA starting material and also permits quantification of 5-hydroxymethylcytosine (5hmC), a recently identified form of methylated cytosine that could play an important role in active DNA demethylation. Here, we describe the protocol that we currently use in our laboratory to analyze 5mC and 5hmC by mass spectrometry. The protocol, based on the method originally developed by Le and colleagues using ultra-performance liquid chromatography (UPLC) with triple-quadrupole (QqQ) mass spectrometry detection, allows for the rapid and accurate quantification of relative global 5mC and 5hmC levels starting from just 1 μg of genomic DNA.
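The definition above reduces to simple percentages of total cytosine species once the individual amounts have been quantified (e.g. from calibrated UPLC-MS/MS peak areas). A minimal sketch; the input amounts are illustrative, not measured data:

```python
# Sketch of the global-methylation arithmetic: relative 5mC and 5hmC
# levels as percentages of total cytosine species. Inputs would come
# from calibrated peak areas; the numbers here are hypothetical.

def global_methylation(c, mc5, hmc5):
    """Return (%5mC, %5hmC) relative to total cytosine (C + 5mC + 5hmC)."""
    total = c + mc5 + hmc5
    return 100.0 * mc5 / total, 100.0 * hmc5 / total

pct_5mc, pct_5hmc = global_methylation(c=95.0, mc5=4.5, hmc5=0.5)
```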
Hame, Yrjo; Angelini, Elsa D; Hoffman, Eric A; Barr, R Graham; Laine, Andrew F
2014-07-01
The extent of pulmonary emphysema is commonly estimated from CT scans by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols, and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the presented model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was applied on a longitudinal data set with 87 subjects and a total of 365 scans acquired with varying imaging protocols. The resulting emphysema estimates had very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. The generated emphysema delineations promise advantages for regional analysis of emphysema extent and progression.
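The standard proportional-area baseline that this model improves on can be sketched directly: the percentage of lung voxels below a fixed attenuation threshold. The -950 HU threshold below is a common choice for inspiratory CT and is an assumption here, not a value taken from the paper:

```python
# Sketch of the conventional proportional-area emphysema estimate: the
# percentage of lung voxels whose attenuation falls below a fixed
# threshold (%LAA). The -950 HU threshold is a common convention and is
# assumed here; voxel values are hypothetical.

THRESHOLD_HU = -950

def percent_low_attenuation(lung_voxels_hu, threshold=THRESHOLD_HU):
    """%LAA: share of voxels with attenuation below the threshold."""
    below = sum(1 for v in lung_voxels_hu if v < threshold)
    return 100.0 * below / len(lung_voxels_hu)

laa = percent_low_attenuation([-980, -960, -940, -900, -970])  # 60.0
```

Because this estimate depends entirely on where the intensity distribution sits relative to one fixed cut-off, any protocol- or inspiration-driven shift of that distribution changes the result, which is the limitation the hidden Markov measure field approach addresses.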
Effects of Two Training Modalities on Body Fat and Insulin Resistance in Postmenopausal Women.
Henríquez, Sandra; Monsalves-Alvarez, Matías; Jimenez, Teresa; Barrera, Gladys; Hirsch, Sandra; de la Maza, María Pia; Leiva, Laura; Rodriguez, Juan Manuel; Silva, Claudio; Bunout, Daniel
2017-11-01
Henríquez, S, Monsalves-Alvarez, M, Jimenez, T, Barrera, G, Hirsch, S, de la Maza, MP, Leiva, L, Rodriguez, JM, Silva, C, and Bunout, D. Effects of two training modalities on body fat and insulin resistance in postmenopausal women. J Strength Cond Res 31(11): 2955-2964, 2017-Our objective was to compare the effects of a low-load circuit resistance training protocol and usual aerobic training in postmenopausal women. Postmenopausal women with at least 1 feature of the metabolic syndrome were randomly allocated to a low-load circuit resistance training protocol or traditional aerobic training on a braked cycle ergometer. The intervention consisted of supervised sessions lasting 40 minutes, 3 times per week, for 6 months. At baseline and at the end of the intervention, fasting serum lipid levels, serum interleukin 6, C-reactive protein, 8-isoprostanes, and insulin resistance (assessed through QUICKI and HOMA-IR) were measured. Body fat was measured by dual-energy X-ray absorptiometry and by computed tomography densitometric quantification at the lumbar 3 vertebral level. Twenty-one women aged 58 (54-59) years were allocated to aerobic training and 21 women aged 55 (52-61) years were allocated to the low-load circuit resistance training protocol. Eighteen and 16 women in each group, respectively, completed the 6-month training period. Women in both groups experienced significant reductions in blood pressure and in total body, subcutaneous, and intraabdominal body fat. Reductions in total cholesterol and triacylglycerol levels were also observed. No changes in insulin resistance indexes, 8-isoprostanes, C-reactive protein, or interleukin 6 were observed in either group. No significant differences between treatment groups were observed in any of the measured parameters. We conclude that low-load circuit resistance training and aerobic training resulted in the same reductions in body fat and serum lipid levels.
A universal real-time PCR assay for the quantification of group-M HIV-1 proviral load.
Malnati, Mauro S; Scarlatti, Gabriella; Gatto, Francesca; Salvatori, Francesca; Cassina, Giulia; Rutigliano, Teresa; Volpi, Rosy; Lusso, Paolo
2008-01-01
Quantification of human immunodeficiency virus type-1 (HIV-1) proviral DNA is increasingly used to measure the HIV-1 cellular reservoirs, a helpful marker to evaluate the efficacy of antiretroviral therapeutic regimens in HIV-1-infected individuals. Furthermore, the proviral DNA load represents a specific marker for the early diagnosis of perinatal HIV-1 infection and might be predictive of HIV-1 disease progression independently of plasma HIV-1 RNA levels and CD4(+) T-cell counts. The high degree of genetic variability of HIV-1 poses a serious challenge for the design of a universal quantitative assay capable of detecting all the genetic subtypes within the main (M) HIV-1 group with similar efficiency. Here, we describe a highly sensitive real-time PCR protocol that allows for the correct quantification of virtually all group-M HIV-1 strains with a higher degree of accuracy compared with other methods. The protocol involves three stages, namely DNA extraction/lysis, cellular DNA quantification and HIV-1 proviral load assessment. Owing to the robustness of the PCR design, this assay can be performed on crude cellular extracts, and therefore it may be suitable for the routine analysis of clinical samples even in developing countries. An accurate quantification of the HIV-1 proviral load can be achieved within 1 day of blood withdrawal.
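The cellular DNA quantification stage exists so that the proviral signal can be normalized to cell number, typically via a diploid cellular gene carrying two copies per cell. A minimal sketch of that normalization, assuming the two-copies-per-cell convention; all copy numbers are hypothetical:

```python
# Sketch of proviral-load normalization: HIV-1 copies expressed per 10^6
# cells, with cell number derived from a diploid cellular reference gene
# (2 copies per cell). Copy numbers are hypothetical.

def proviral_load_per_million_cells(hiv_copies, cellular_gene_copies):
    """HIV-1 proviral copies per 10^6 cells."""
    cells = cellular_gene_copies / 2.0   # diploid reference gene
    return hiv_copies / cells * 1e6

load = proviral_load_per_million_cells(hiv_copies=50,
                                       cellular_gene_copies=200_000)
```

Normalizing to cell number rather than to input DNA mass makes results from crude cellular extracts comparable across samples with variable extraction efficiency.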
Evaluation of two methods to determine glyphosate and AMPA in soils of Argentina
NASA Astrophysics Data System (ADS)
De Geronimo, Eduardo; Lorenzon, Claudio; Iwasita, Barbara; Faggioli, Valeria; Aparicio, Virginia; Costa, Jose Luis
2017-04-01
Argentine agricultural production is fundamentally based on a technological package combining no-tillage with dependence on glyphosate applications to control weeds in transgenic crops (soybean, maize and cotton). Glyphosate is therefore the most widely used herbicide in the country, where 180 to 200 million liters are applied every year. Due to its widespread use, it is important to assess its impact on the environment, and reliable analytical methods are therefore mandatory. The glyphosate molecule exhibits unique physical and chemical characteristics that make its quantification difficult, especially in soils with high organic matter content, such as the central-eastern Argentine soils, where strong interferences are normally observed. The objective of this work was to compare two methods for extraction and quantification of glyphosate and AMPA in samples of 8 representative soils of Argentina. The first analytical method (method 1) was based on the use of phosphate buffer as the extracting solution and dichloromethane to minimize matrix organic content. In the second method (method 2), potassium hydroxide was used to extract the analytes, followed by a clean-up step using solid phase extraction (SPE) to minimize strong interferences. Sensitivity, recoveries, matrix effects and robustness were evaluated. Both methodologies involved derivatization with 9-fluorenylmethyl chloroformate (FMOC) in borate buffer and detection based on ultra-high-pressure liquid chromatography coupled to tandem mass spectrometry (UHPLC-MS/MS). Recoveries obtained from soil samples spiked at 0.1 and 1 mg kg-1 were satisfactory for both methods (70%-120%). However, there was a remarkable difference regarding the matrix effect: the SPE clean-up step (method 2) was insufficient to remove the interferences, whereas the dilution and clean-up with dichloromethane (method 1) were more effective in minimizing ionic suppression. 
Moreover, method 1 had fewer sample-processing steps than method 2. This can be highly valuable in routine lab work because it reduces potential errors such as loss of analyte or sample contamination. In addition, replacing SPE with this alternative clean-up considerably reduced the analytical costs of method 1. We conclude that method 1 is simpler and cheaper than method 2, as well as reliable for quantifying glyphosate in Argentinean soils. We hope that this experience can be useful to simplify protocols for glyphosate quantification and contribute to the understanding of the fate of this herbicide in the environment.
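The recovery criterion used to judge both methods is a simple ratio check against the 70-120% acceptance window cited above. A minimal sketch; the measured value is illustrative:

```python
# Sketch of the spiked-recovery check: percent recovery of a known
# spiked amount, judged against the 70-120% acceptance window stated in
# the text. The measured value is hypothetical.

def percent_recovery(measured_mg_kg, spiked_mg_kg):
    """Percent of the spiked analyte amount recovered by the method."""
    return 100.0 * measured_mg_kg / spiked_mg_kg

def within_acceptance(recovery, low=70.0, high=120.0):
    """True if the recovery falls in the stated acceptance window."""
    return low <= recovery <= high

rec = percent_recovery(measured_mg_kg=0.085, spiked_mg_kg=0.1)  # 85.0
ok = within_acceptance(rec)
```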
NASA Astrophysics Data System (ADS)
Nunes, Teresa; Mirante, Fátima; Almeida, Elza; Pio, Casimiro
2010-05-01
Atmospheric carbon consists of organic carbon (OC, including various organic compounds), elemental carbon (EC, or black carbon [BC]/soot, a non-volatile, light-absorbing carbon), and a small quantity of carbonate carbon. Thermal/optical methods (TOM) have been widely used for quantifying total carbon (TC), OC, and EC in ambient and source particulate samples. Unfortunately, the different thermal evolution protocols in use can result in a wide variation in the elemental carbon-to-total carbon ratio. Temperature evolution in thermal carbon analysis is critical to the allocation of carbon fractions. Another critical point in OC and EC quantification by TOM is interference from carbonate carbon (CC) that may be present in particulate samples, mainly in the coarse fraction of atmospheric aerosol. One method used to minimize this interference consists of pre-treating the sample with acid to eliminate CC prior to thermal analysis (Chow et al., 2001; Pio et al., 1994). In Europe, there is currently no standard procedure for determining the carbonaceous aerosol fraction, which implies that data from different laboratories at various sites are of unknown accuracy and cannot be considered comparable. In the framework of the EU project EUSAAR, a comprehensive study was carried out to identify the causes of differences in the EC measured using different thermal evolution protocols. From this study an optimised protocol, the EUSAAR-2 protocol, was defined (Cavali et al., 2009). During the last two decades thousands of aerosol samples have been collected on quartz filters at urban, industrial, rural and background sites, as well as from forest fire plumes and biomass burning in a domestic closed stove. These samples were analysed for OC and EC by a TOM similar to that in use in the IMPROVE network (Pio et al., 2007). More recently we reduced the number of steps in the thermal evolution protocol, without significant repercussions for OC/EC quantification. 
In order to evaluate the possibility of continuing to use the historical data set for trend analysis, we performed an inter-comparison between our method and an adaptation of the EUSAAR-2 protocol, taking into account that this protocol will possibly be recommended for analysing carbonaceous aerosols at European sites. In this inter-comparison we tested different types of samples (PM2.5, PM2.5-10, PM10) with a large spectrum of carbon loadings, with and without acid pre-treatment. For a reduced number of samples, five replicates of each were analysed by each method for statistical purposes. The inter-comparison study revealed that when the sample analyses were performed under similar room conditions, the two thermo-optical methods gave similar results for TC, OC and EC, without significant differences at the 95% confidence level. The correlation between the methods (DAO and EUSAAR-2) is weaker for EC than for TC and OC, although it still shows a correlation coefficient over 0.95, with a slope close to one. For analyses performed in different periods, room temperature seems to have a significant effect on OC quantification. Sample pre-treatment with HCl fumigation tends to decrease the TC quantification, mainly due to the release of the more volatile organic fraction during the first heating step. For a set of 20 domestic biomass burning samples analysed by the DAO method, we observed an average decrease in TC quantification of 3.7% relative to non-acidified samples, even though this decrease is accompanied by an average increase in the less volatile organic fraction. The indirect measurement of carbonate carbon, usually a minor carbon component of the carbonaceous aerosol, based on the difference between the TC measured by TOM for acidified and non-acidified samples, is not robust, considering the biases affecting its quantification. 
The present study shows that the two thermo-optical temperature programs used for OC and EC quantification give similar results, and if the EUSAAR-2 protocol is adopted in the future, past measurements of carbonaceous fractions can still be used for trend analysis. However, this study also demonstrates that temperature control during post-sampling handling is a critical point in OC and TC quantification that must be addressed in the new European protocol. References: Cavali et al., 2009, AMTD 2, 2321-2345. Chow et al., 2001, Aerosol Sci. Technol., 34, 23-34. Pio et al., 1994, Proceedings of the Sixth European Symposium on Physico-Chemical Behaviour of Atmospheric Pollutants, Report EUR 15609/2 EN, pp. 706-711. Pio et al., 2007, J. Geophys. Res. 112, D23S02. Acknowledgement: This work was funded by the Portuguese Science Foundation through the projects POCI/AMB/60267/2004 and PTDC/AMB/65706/2006 (BIOEMI). F. Mirante acknowledges the PhD grant SFRH/BD/45473/2008.
Assessment of cardiac fibrosis: a morphometric method comparison for collagen quantification.
Schipke, Julia; Brandenberger, Christina; Rajces, Alexandra; Manninger, Martin; Alogna, Alessio; Post, Heiner; Mühlfeld, Christian
2017-04-01
Fibrotic remodeling of the heart is a frequent condition linked to various diseases and cardiac dysfunction. Collagen quantification is an important objective in cardiac fibrosis research; however, a variety of different histological methods are currently used that may differ in accuracy. Here, frequently applied collagen quantification techniques were compared. A porcine model of early stage heart failure with preserved ejection fraction was used as an example. Semiautomated threshold analyses were imprecise, mainly due to inclusion of noncollagen structures or failure to detect certain collagen deposits. In contrast, collagen assessment by automated image analysis and light microscopy (LM)-stereology was more sensitive. Depending on the quantification method, the amount of estimated collagen varied and influenced intergroup comparisons. Picrosirius Red, Masson's trichrome, and Azan staining protocols yielded similar results, whereas the measured collagen area increased with increasing section thickness. Whereas none of the LM-based methods showed significant differences between the groups, electron microscopy (EM)-stereology revealed a significant collagen increase between cardiomyocytes in the experimental group, but not at other localizations. In conclusion, in contrast to the staining protocol, the section thickness and the quantification method being used directly influence the estimated collagen content and thus, possibly, intergroup comparisons. EM in combination with stereology is a precise and sensitive method for collagen quantification if certain prerequisites are considered. For subtle fibrotic alterations, consideration of collagen localization may be necessary. Among LM methods, LM-stereology and automated image analysis are appropriate for quantifying fibrotic changes, the latter depending on careful control of the algorithm and on comparable section staining. 
NEW & NOTEWORTHY Direct comparison of frequently applied histological fibrosis assessment techniques revealed a distinct relation of measured collagen and utilized quantification method as well as section thickness. Besides electron microscopy-stereology, which was precise and sensitive, light microscopy-stereology and automated image analysis proved to be appropriate for collagen quantification. Moreover, consideration of collagen localization might be important in revealing minor fibrotic changes. Copyright © 2017 the American Physiological Society.
[Can the degree of renal artery stenosis be automatically quantified?].
Cherrak, I; Jaulent, M C; Azizi, M; Plouin, P F; Degoulet, P; Chatellier, G
2000-08-01
The objective of the reported study is to validate a computer system, QUASAR, dedicated to the quantification of renal artery stenoses. This system automatically estimates the reference diameter and calculates the minimum diameter to compute a degree of stenosis. One hundred and eighty images of atheromatous stenoses between 10% and 80% were collected from two independent French protocols. For the 49 images of the EMMA protocol, the results from QUASAR were compared with the visual estimation of an initial investigator and with the results from a reference method based on a panel of five experienced experts. For the 131 images of the ASTARTE protocol, the results from QUASAR were compared with those from a semi-automatic quantification system and with those from a system based on densitometric analysis. The present work validates QUASAR in a population of tight atheromatous stenoses (>50%). In the context of the EMMA protocol, QUASAR is not significantly different from the mean of the five experts. It is unbiased and more precise than the estimation of a single investigator. In the context of the ASTARTE protocol, there is no significant difference between the three methods for stenoses higher than 50%; however, globally, QUASAR significantly overestimates the degree of stenosis (by up to 10%).
van Frankenhuyzen, Jessica K; Trevors, Jack T; Flemming, Cecily A; Lee, Hung; Habash, Marc B
2013-11-01
Biosolids result from treatment of sewage sludge to meet jurisdictional standards, including pathogen reduction. Once government regulations are met, the materials can be applied to agricultural lands. Culture-based methods are used to enumerate pathogen indicator microorganisms but may underestimate cell densities, partly because bacteria can exist in a viable but non-culturable physiological state. Viable indicators can also be quantified by real-time quantitative polymerase chain reaction (qPCR) used with propidium monoazide (PMA), a dye that inhibits amplification of DNA found extracellularly or in dead cells. The objectives of this study were to test an optimized PMA-qPCR method for viable pathogen detection in wastewater solids and to validate it by comparing results to data obtained by conventional plating. Reporter genes from genetically marked Pseudomonas sp. UG14Lr and Agrobacterium tumefaciens 542 cells were spiked into samples of primary sludge and anaerobically digested and Lystek-treated biosolids as cell-free DNA, dead cells, viable cells, and mixtures of live and dead cells, followed by DNA extraction with and without PMA, and qPCR. The protocol was then used for Escherichia coli quantification in the three matrices, and the results were compared to plate counts. PMA-qPCR selectively detected viable cells, while inhibiting signals from cell-free DNA and DNA found in membrane-compromised cells. PMA-qPCR detected 0.5-1 log unit more viable E. coli cells in both primary solids and dewatered biosolids than plate counts. No viable E. coli was found in Lystek-treated biosolids. These data suggest PMA-qPCR may more accurately estimate pathogen cell numbers than traditional culture methods.
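The "0.5-1 log unit" comparison above is a log10 difference between the two enumeration methods. A minimal sketch of that computation; the input densities are hypothetical, not data from the study:

```python
import math

# Sketch of the method comparison: the log10 difference between
# PMA-qPCR viable-cell estimates and culture-based plate counts.
# The study reports qPCR estimates 0.5-1 log unit higher; the input
# densities here are hypothetical.

def log_unit_difference(qpcr_cells_per_g, plate_cfu_per_g):
    """Log10 excess of the qPCR estimate over the plate count."""
    return math.log10(qpcr_cells_per_g) - math.log10(plate_cfu_per_g)

diff = log_unit_difference(qpcr_cells_per_g=1.0e6, plate_cfu_per_g=1.0e5)
```

A positive difference of this kind is consistent with plate counts missing viable but non-culturable cells that PMA-qPCR still detects.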
Scollo, Francesco; Egea, Leticia A; Gentile, Alessandra; La Malfa, Stefano; Dorado, Gabriel; Hernandez, Pilar
2016-12-15
Olive oil is considered a premium product for its nutritional value and health benefits, and the ability to define its origin and varietal composition is a key step towards ensuring the traceability of the product. However, isolating DNA from such a matrix is a difficult task. In this study, the quality and quantity of olive oil DNA, isolated using four different DNA isolation protocols, was evaluated using the qRT-PCR and ddPCR techniques. The results indicate that CTAB-based extraction methods were the best for unfiltered oil, while NucleoSpin-based extraction protocols showed greater overall reproducibility. The use of both qRT-PCR and ddPCR led to the absolute quantification of the DNA copy number. The results clearly demonstrate the importance of the choice of DNA-isolation protocol, which should take into consideration the qualitative aspects of the DNA and the evaluation of the amplified DNA copy number. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bergmeister, Konstantin D; Gröger, Marion; Aman, Martin; Willensdorfer, Anna; Manzano-Szalai, Krisztina; Salminger, Stefan; Aszmann, Oskar C
2016-08-01
Skeletal muscle consists of different fiber types which adapt to exercise, aging, disease, or trauma. Here we present a protocol for fast staining, automatic acquisition, and quantification of fiber populations with ImageJ. Biceps and lumbrical muscles were harvested from Sprague-Dawley rats. Quadruple immunohistochemical staining was performed on single sections using antibodies against myosin heavy chains and secondary fluorescent antibodies. Slides were scanned automatically with a slide scanner. Manual and automatic analyses were performed and compared statistically. The protocol provided rapid and reliable staining for automated image acquisition. Analyses between manual and automatic data indicated Pearson correlation coefficients for biceps of 0.645-0.841 and 0.564-0.673 for lumbrical muscles. Relative fiber populations were accurate to a degree of ± 4%. This protocol provides a reliable tool for quantification of muscle fiber populations. Using freely available software, it decreases the required time to analyze whole muscle sections. Muscle Nerve 54: 292-299, 2016. © 2016 Wiley Periodicals, Inc.
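The relative fiber-population figure reported above (accuracy to ±4%) is a percentage of each myosin-heavy-chain type among all classified fibers. A minimal sketch of that tally; the type labels and counts are illustrative:

```python
from collections import Counter

# Sketch of the relative fiber-population computation: percentage of
# each fiber type among all classified fibers in a section. Labels and
# counts are hypothetical.

def relative_fiber_populations(fiber_types):
    """Map each fiber type to its percentage of all classified fibers."""
    counts = Counter(fiber_types)
    n = len(fiber_types)
    return {t: 100.0 * c / n for t, c in counts.items()}

pops = relative_fiber_populations(["I", "I", "IIa", "IIb"])
```

In an automated pipeline, the `fiber_types` list would come from per-fiber classification of the fluorescence channels; the percentage step itself is independent of how the labels were produced.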
NASA Astrophysics Data System (ADS)
Roed-Larsen, Trygve; Flach, Todd
The purpose of this chapter is to provide a review of existing national and international requirements for verification of greenhouse gas reductions and associated accreditation of independent verifiers. The credibility of results claimed to reduce or remove anthropogenic emissions of greenhouse gases (GHG) is of utmost importance for the success of emerging schemes to reduce such emissions. Requirements include transparency, accuracy, consistency, and completeness of the GHG data. The many independent verification processes that have developed recently now make up a quite elaborate tool kit of best practices. The UN Framework Convention on Climate Change and the Kyoto Protocol specifications for project mechanisms initiated this work, but other national and international actors also work intensely on these issues. One initiative gaining wide application is that taken by the World Business Council for Sustainable Development with the World Resources Institute to develop a "GHG Protocol" to assist companies in arranging for auditable monitoring and reporting processes of their GHG activities. A set of new international standards developed by the International Organization for Standardization (ISO) provides specifications for the quantification, monitoring, and reporting of company entity and project-based activities. The ISO is also developing specifications for recognizing independent GHG verifiers. This chapter covers this background with the intent of providing a common understanding of the efforts undertaken in different parts of the world to secure the reliability of GHG emission reduction and removal activities. These verification schemes may provide valuable input to current efforts to secure a comprehensive, trustworthy, and robust framework for verification of CO2 capture, transport, and storage activities.
43 CFR 11.71 - Quantification phase-service reduction quantification.
Code of Federal Regulations, 2011 CFR
2011-10-01
...-discharge-or-release condition. (c) Contents of the quantification. The following factors should be included...; and (6) Factors identified in the specific guidance in paragraphs (h), (i), (j), (k), and (l) of this section dealing with the different kinds of natural resources. (d) Selection of resources, services, and...
43 CFR 11.71 - Quantification phase-service reduction quantification.
Code of Federal Regulations, 2010 CFR
2010-10-01
...-discharge-or-release condition. (c) Contents of the quantification. The following factors should be included...; and (6) Factors identified in the specific guidance in paragraphs (h), (i), (j), (k), and (l) of this section dealing with the different kinds of natural resources. (d) Selection of resources, services, and...
The aging of lignin-rich papers upon exposure to light: its quantification and prediction
James S. Bond; Rajai H. Atalla; Umesh P. Agarwal; Chris G. Hunt
1999-01-01
A program was undertaken at the Forest Products Laboratory in conjunction with the American Society for Testing and Materials (ASTM) to develop guidelines for a credible accelerated photoaging protocol for printing and writing papers. In support of this, in-depth studies of photodegradation were undertaken in sufficient detail to establish the validity of the protocol....
Clarifying uncertainty in biogeochemical response to land management
NASA Astrophysics Data System (ADS)
Tonitto, C.; Gurwick, N. P.; Woodbury, P. B.
2013-12-01
We examined the ability of contemporary simulation and empirical modeling tools to describe net greenhouse gas (GHG) emissions resulting from agricultural and forest ecosystem land management, and we examined how key policy institutions use these tools. We focused on quantification of nitrous oxide (N2O) emissions from agricultural systems, as agriculture is the dominant source of anthropogenic N2O emissions. Quantifying the impact of agricultural management on N2O emissions is especially challenging because the controls on N2O emissions (soil aerobic status, inorganic N availability, and C substrate availability) vary as a function of site soil type, climate, and cropping system, and available measurements do not cover all relevant combinations of these controlling system features. Furthermore, N2O emissions are highly non-linear, and threshold values of the controlling soil environmental conditions are not defined across most agricultural site properties. We also examined the multi-faceted challenges of quantifying increased soil organic carbon (SOC) storage as a result of land management in both agricultural and forest systems. Quantifying changes in SOC resulting from land management is difficult because mechanisms of SOC stabilization are not fully understood, SOC measurements have been concentrated in the upper 30 cm of soil, erosion is often ignored when estimating SOC, and few long-term studies exist to track system response to diverse management practices. Furthermore, the permanence of SOC-accumulating management practices is not easily established. For instance, under the Regional Greenhouse Gas Initiative (RGGI), forest land managed for SOC accumulation must remain under permanent conservation easement to ensure that SOC accumulation is not reversed due to changes in land cover.
For agricultural protocols, given that many farmers rent land and that agriculture is driven by an annual management time scale, the difficulty of ensuring that SOC-accumulating land management will be maintained indefinitely has delayed the implementation of SOC-accumulating practices for compliance with the California Global Warming Solutions Act (AB 32). GHG accounting tools are increasingly applied to implement GHG reduction policies. In this policy context, data limitations have affected the implementation of GHG accounting strategies. For example, protocol design in support of AB 32 initially sought to apply simulation models to determine N2O emissions across all major U.S. agricultural landscapes. After discussion with ecosystem scientists, the lack of observations and model validation in most U.S. arable landscapes led to a protocol based on simple empirical models and limited to corn management in 12 states. The distribution of protocol participants is also a potential source of inaccuracy in GHG accounting. Land management protocols are often structured on the assumption that, in the aggregate, the policy achieves an average improvement by promoting specific management. However, it is unclear whether current policy incentives promote participation from a truly random distribution of landscapes. Participation in policy development to support improved land management challenges ecosystem scientists to make recommendations based on best-available information while acknowledging that uncertainty limits accurate quantification of impacts via analysis using either observations or simulation modeling.
Gabrani-Juma, Hanif; Clarkin, Owen J; Pourmoghaddas, Amir; Driscoll, Brandon; Wells, R Glenn; deKemp, Robert A; Klein, Ran
2017-01-01
Simple and robust techniques are lacking to assess the performance of flow quantification using dynamic imaging. We therefore developed a method to qualify flow quantification technologies using a physical compartment-exchange phantom and an image analysis tool. We validate and demonstrate the utility of this method using dynamic PET and SPECT. Dynamic image sequences were acquired on two PET/CT systems and a cardiac-dedicated SPECT system (with and without attenuation and scatter corrections). A two-compartment exchange model was fit to image-derived time-activity curves to quantify flow rates. Flowmeter-measured flow rates (20-300 mL/min) were set prior to imaging and used as reference truth to which image-derived flow rates were compared. Both PET cameras had excellent agreement with truth ([Formula: see text]). High-end PET had no significant bias (p > 0.05), while lower-end PET had minimal slope bias (wash-in and wash-out slopes were 1.02 and 1.01) but no significant reduction in precision relative to high-end PET (<15% vs. <14% limits of agreement, p > 0.3). SPECT (without scatter and attenuation corrections) slope biases were noted (0.85 and 1.32) and attributed to camera saturation in early time frames. Analysis of wash-out rates from non-saturated, late time frames resulted in excellent agreement with truth ([Formula: see text], slope = 0.97). Attenuation and scatter corrections did not significantly impact SPECT performance. The proposed phantom, software and quality assurance paradigm can be used to qualify imaging instrumentation and protocols for quantification of kinetic rate parameters using dynamic imaging.
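The study fits a two-compartment exchange model to time-activity curves; as a simplified stand-in, the sketch below recovers a flow rate from a single well-mixed compartment's wash-in curve by least-squares grid search. The compartment volume, flow candidates and sampling times are assumptions for illustration, not the study's values.

```python
import math

def washin_curve(t, flow, volume):
    """Concentration ratio for a single well-mixed compartment:
    C(t)/C_in = 1 - exp(-(flow/volume) * t), with flow in mL/min."""
    k = flow / volume
    return 1.0 - math.exp(-k * t)

def fit_flow(times, curve, volume, candidates):
    """Recover flow by grid-search least squares (a simplified stand-in
    for the paper's nonlinear two-compartment fit)."""
    def sse(f):
        return sum((washin_curve(t, f, volume) - c) ** 2
                   for t, c in zip(times, curve))
    return min(candidates, key=sse)

# Simulate a phantom run at a "true" flowmeter setting of 150 mL/min
volume = 500.0                          # assumed compartment volume (mL)
times = [i * 0.25 for i in range(40)]   # minutes
data = [washin_curve(t, 150.0, volume) for t in times]
recovered = fit_flow(times, data, volume, candidates=range(20, 301, 5))
```

In the noise-free case the grid search recovers the set flow exactly; with real image-derived curves a nonlinear fitter and a model with both wash-in and wash-out rate constants would be used.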
Dissection and Flat-mounting of the Threespine Stickleback Branchial Skeleton
Ellis, Nicholas A.; Miller, Craig T.
2016-01-01
The posterior pharyngeal segments of the vertebrate head give rise to the branchial skeleton, the primary site of food processing in fish. The morphology of the fish branchial skeleton is matched to a species' diet. Threespine stickleback fish (Gasterosteus aculeatus) have emerged as a model system to study the genetic and developmental basis of evolved differences in a variety of traits. Marine populations of sticklebacks have repeatedly colonized countless new freshwater lakes and creeks. Adaptation to the new diet in these freshwater environments likely underlies a series of craniofacial changes that have evolved repeatedly in independently derived freshwater populations. These include three major patterning changes to the branchial skeleton: reductions in the number and length of gill raker bones, increases in pharyngeal tooth number, and increased branchial bone lengths. Here we describe a detailed protocol to dissect and flat-mount the internal branchial skeleton in threespine stickleback fish. Dissection of the entire three-dimensional branchial skeleton and mounting it flat into a largely two-dimensional prep allows for the easy visualization and quantification of branchial skeleton morphology. This dissection method is inexpensive, fast, relatively easy, and applicable to a wide variety of fish species. In sticklebacks, this efficient method allows the quantification of skeletal morphology in genetic crosses to map genomic regions controlling craniofacial patterning. PMID:27213248
Thaysen-Petersen, D; Barbet-Pfeilsticker, M; Beerwerth, F; Nash, J F; Philipsen, P A; Staubach, P; Haedersdal, M
2015-01-01
At-home laser and intense pulsed-light hair removal continues to grow in popularity and availability. A relatively limited body of evidence is available on the course of hair growth during and after low-fluence laser usage. To assess growing hair counts, thickness and colour quantitatively during and after cessation of low-fluence laser treatment. Thirty-six women with skin phototypes I-IV and light to dark-brown axillary hairs were included. Entire axillary regions were randomized to zero or eight self-administered weekly treatments with an 810-nm home-use laser at 5·0-6·4 J cm(-2). Standardized clinical photographs were taken before each treatment and up to 3 months after the final treatment for computer-aided quantification of growing hair counts, thickness and colour. Thirty-two women completed the study protocol. During sustained treatment, there was a reduction in growing hair that reached a plateau of up to 59%, while remaining hairs became up to 38% thinner and 5% lighter (P < 0·001). The majority of subjects (77%) reported 'moderately' to 'much less hair' in treated than untreated axilla, and assessed remaining hairs as thinner and lighter (≥ 60%). After treatment cessation, hair growth gradually returned to baseline levels, and 3 months after the final treatment the count and thickness of actively growing hair exceeded pretreatment values by 29% and 7%, respectively (P ≤ 0·04). Sustained usage of low-fluence laser induced a stable reduction of growing hair counts, thickness and colour. The reduction was reversible and hairs regrew beyond baseline values after cessation of usage. Computer-aided image analysis was qualified for quantification of hair counts, thickness and colour after laser epilation. © 2014 British Association of Dermatologists.
Zarzycki, Paweł K; Portka, Joanna K
2015-09-01
Pentacyclic triterpenoids, particularly hopanoids, are organism-specific compounds and are generally considered useful biomarkers that allow fingerprinting and classification of biological, environmental and geological samples. Simultaneous quantification of various hopanoids, together with a battery of related non-polar, low-molecular-mass compounds, may provide principal information for geochemical and environmental research on both modern and ancient systems. Target compounds can be derived from microbial biomass, water columns, sediments, coals, crude fossils or rocks. This creates a number of analytical problems owing to differing matrix compositions and interfering compounds, and proper optimization of quantification protocols for such biomarkers therefore remains a challenge. In this work we summarize typical analytical protocols recently applied for the quantification of hopanoid-like compounds from different samples. The main steps, including extraction of the components of interest, pre-purification, fractionation, derivatization and quantification involving gas (1D and 2D) as well as liquid separation techniques (liquid-liquid extraction, solid-phase extraction, planar and low-resolution column chromatography, high-performance liquid chromatography), are described and discussed from a practical point of view, based mainly on experimental papers published within the last two years, in which a significant increase in hopanoid research was noticed. The second aim of this review is to describe the latest research trends concerning the determination of hopanoids and related low-molecular-mass lipids in various samples, including sediments, rocks, coals, crude oils and plant fossils as well as stromatolites and microbial biomass cultivated under different conditions. It has been found that the majority of the most recent papers are based on a uni- or bivariate approach to complex data analysis.
Data interpretation involves a number of physicochemical parameters and hopanoid quantities or biomarker mass ratios derived from high-throughput separation and detection systems, typically GC-MS and HPLC-MS. Based on quantitative data reported in recently published experimental work, it has been demonstrated that multivariate data analysis, e.g. principal component computations, may significantly extend our knowledge concerning proper biomarker selection and sample classification by means of hopanoids and related non-polar compounds. Copyright © 2015 Elsevier Ltd. All rights reserved.
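For the multivariate analysis the review recommends, the core operation is a principal component decomposition of a samples-by-biomarkers table. A self-contained two-variable sketch using the closed-form 2x2 eigendecomposition; the biomarker ratios are invented, and real data with many columns would use a linear algebra library.

```python
import math

def first_pc_2d(data):
    """First principal component of a 2-column table (e.g. two hopanoid
    ratios measured across samples), via the 2x2 covariance eigenproblem."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
    syy = sum((y - my) ** 2 for y in (y for _, y in data)) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    # leading eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    lam = tr / 2 + math.sqrt(tr ** 2 / 4 - det)
    # corresponding eigenvector: (sxy, lam - sxx), normalized
    v = (sxy, lam - sxx) if abs(sxy) > 1e-12 else (1.0, 0.0)
    norm = math.hypot(*v)
    return lam, (v[0] / norm, v[1] / norm)

# Hypothetical biomarker ratios for six samples (two sample groups)
samples = [(0.9, 1.1), (1.0, 1.2), (1.1, 1.3),
           (2.0, 2.4), (2.1, 2.5), (2.2, 2.7)]
lam, pc1 = first_pc_2d(samples)
```

Projecting each sample onto pc1 would give the score used to separate sample classes; with correlated variables like these, both loadings share a sign and the first component captures most of the variance.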
Häme, Yrjö; Angelini, Elsa D.; Hoffman, Eric A.; Barr, R. Graham; Laine, Andrew F.
2014-01-01
The extent of pulmonary emphysema is commonly estimated from CT images by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions in the lung and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the present model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was used to quantify emphysema on a cohort of 87 subjects, with repeated CT scans acquired over a time period of 8 years using different imaging protocols. The scans were acquired approximately annually, and the data set included a total of 365 scans. The results show that the emphysema estimates produced by the proposed method have very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. In addition, the generated emphysema delineations promise great advantages for regional analysis of emphysema extent and progression, possibly advancing disease subtyping. PMID:24759984
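For contrast with the model-based method above, the standard thresholding approach it improves upon can be stated in a few lines. The -950 HU cutoff is a commonly used convention for emphysema at full inspiration, assumed here rather than taken from this study, and the voxel values are invented.

```python
def emphysema_index(hu_values, threshold=-950):
    """Percent of lung voxels below an attenuation threshold (%LAA).

    This is the conventional fixed-threshold estimate whose sensitivity
    to imaging protocol motivates the adaptive model in the abstract.
    """
    below = sum(1 for v in hu_values if v < threshold)
    return 100.0 * below / len(hu_values)

# Toy intensity sample: normal parenchyma near -850 HU plus a few
# emphysematous voxels near -980 HU
voxels = [-860, -840, -855, -980, -975, -870, -990, -830, -845, -965]
laa = emphysema_index(voxels)
```

Because a small shift in the intensity distribution (reconstruction kernel, inspiration level) moves voxels across the fixed cutoff, this estimate varies with protocol even when the lung has not changed.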
Gomes, F.M.; Ramos, I.B.; Wendt, C.; Girard-Dias, W.; De Souza, W.; Machado, E.A.; Miranda, K.
2013-01-01
Inorganic polyphosphate (PolyP) is a biological polymer that plays important roles in the cell physiology of both prokaryotic and eukaryotic organisms. Among the available methods for PolyP localization and quantification, a 4′,6-diamidino-2-phenylindole (DAPI)-based assay has been used for visualization of PolyP-rich organelles. Due to differences in DAPI permeability to different compartments and/or PolyP retention after fixation, a general protocol for DAPI-PolyP staining has not yet been established. Here, we tested different protocols for DAPI-PolyP detection in a range of samples with different levels of DAPI permeability, including subcellular fractions, free-living cells and cryosections of fixed tissues. Subcellular fractions of PolyP-rich organelles yielded DAPI-PolyP fluorescence, although those with a complex external layer usually required longer incubation times, previous aldehyde fixation and/or detergent permeabilization. DAPI-PolyP was also detected in cryosections of OCT-embedded tissues analyzed by multiphoton microscopy. In addition, a semi-quantitative fluorimetric analysis of DAPI-stained fractions showed PolyP mobilization in a similar fashion to what has been demonstrated with the use of enzyme-based quantitative protocols. Taken together, our results support the use of DAPI for both PolyP visualization and quantification, although specific steps are suggested as a general guideline for DAPI-PolyP staining in biological samples with different degrees of DAPI and PolyP permeability. PMID:24441187
Quantitative CT: technique dependence of volume estimation on pulmonary nodules
NASA Astrophysics Data System (ADS)
Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan
2012-03-01
Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.
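The slice-thickness dependence reported above is easy to see in the simplest volume estimator, segmented-voxel count times voxel volume. The counts and spacings below are illustrative assumptions, and LungVCAR's actual segmentation is far more sophisticated than plain voxel counting.

```python
def nodule_volume_mm3(n_voxels, pixel_spacing_mm, slice_thickness_mm):
    """Volume estimate as segmented-voxel count times voxel volume.

    Slice thickness enters the voxel volume directly, which is one reason
    volume quantification is sensitive to this reconstruction parameter:
    thick slices blur the nodule boundary and change which voxels are
    counted, even though the nominal geometry is equivalent.
    """
    voxel_vol = pixel_spacing_mm ** 2 * slice_thickness_mm
    return n_voxels * voxel_vol

# Same hypothetical nodule, two reconstructions: thin vs. thick slices
thin = nodule_volume_mm3(5200, 0.7, 0.625)
thick = nodule_volume_mm3(1300, 0.7, 2.5)
```

Here the two reconstructions happen to agree because the segmented counts were chosen consistently; in practice, partial-volume effects at thick slices bias the count, which is the dependence the phantom study characterizes.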
Shahsavari, Esmaeil; Aburto-Medina, Arturo; Taha, Mohamed; Ball, Andrew S
2016-01-01
Polycyclic aromatic hydrocarbons (PAHs) are major pollutants globally, and due to their carcinogenic and mutagenic properties their clean-up is paramount. Bioremediation, the use of PAH-degrading microorganisms (mainly bacteria) to break down the pollutants, represents a cheap, effective approach. These PAH degraders harbor functional genes that help the microorganisms use PAHs as a source of food and energy. Most probable number (MPN) and plate counting methods are widely used for counting PAH degraders; however, as culture-based methods count only a small fraction (<1%) of the microorganisms capable of PAH degradation, culture-independent methodologies are desirable.
• This protocol presents a robust, rapid and sensitive qPCR method for the quantification of the functional genes involved in the degradation of PAHs in soil samples.
• This protocol enables the screening of a vast number of PAH-contaminated soil samples in a few hours.
• This protocol provides valuable information about the natural attenuation potential of contaminated soil and can be used to monitor the bioremediation process.
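Absolute gene abundance in qPCR assays like this is typically read off a standard curve of Ct against log10 copy number. A hedged sketch: the slope, intercept and Ct value are hypothetical, and the protocol's own calibration details may differ.

```python
def copies_from_ct(ct, slope, intercept):
    """Copy number from a standard curve Ct = slope*log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

def amplification_efficiency(slope):
    """Efficiency from the standard-curve slope; a slope of about -3.32
    corresponds to ~100% (perfect doubling each cycle)."""
    return 10 ** (-1.0 / slope) - 1.0

# Hypothetical standard curve for a PAH-degradation gene assay
slope, intercept = -3.32, 38.0
copies = copies_from_ct(21.4, slope, intercept)  # copies per reaction
eff = amplification_efficiency(slope)
```

Copies per reaction would then be scaled by the DNA extraction yield to report gene copies per gram of soil, the quantity used to track natural attenuation over time.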
Lim, Hyun-ju; Weinheimer, Oliver; Wielpütz, Mark O.; Dinkel, Julien; Hielscher, Thomas; Gompelmann, Daniela; Kauczor, Hans-Ulrich; Heussel, Claus Peter
2016-01-01
Objectives Surgical or bronchoscopic lung volume reduction (BLVR) techniques can be beneficial in heterogeneous emphysema. Post-processing software tools for lobar emphysema quantification are useful for patient and target-lobe selection, treatment planning and post-interventional follow-up. We aimed to evaluate the inter-software variability of emphysema quantification using fully automated lobar segmentation prototypes. Material and Methods 66 patients with moderate to severe COPD who underwent CT for planning of BLVR were included. Emphysema quantification was performed using two modified versions of in-house software (without and with prototype advanced lung vessel segmentation; programs 1 [YACTA v.2.3.0.2] and 2 [YACTA v.2.4.3.1]), as well as one commercial program (3 [Pulmo3D VA30A_HF2]) and one pre-commercial prototype (4 [CT COPD ISP ver7.0]). The following parameters were computed for each segmented anatomical lung lobe and for the whole lung: lobar volume (LV), mean lobar density (MLD), 15th percentile of lobar density (15th), emphysema volume (EV) and emphysema index (EI). Bland-Altman analysis (limits of agreement, LoA) and linear random effects models were used for comparison between the software packages. Results Segmentation using programs 1, 3 and 4 was unsuccessful in 1 (1%), 7 (10%) and 5 (7%) patients, respectively. Program 2 could analyze all datasets. The 53 patients with successful segmentation by all 4 programs were included for further analysis. For LV, programs 1 and 4 showed the largest mean difference of 72 ml and the widest LoA of [-356, 499 ml] (p<0.05). Programs 3 and 4 showed the largest mean difference of 4% and the widest LoA of [-7, 14%] for EI (p<0.001). Conclusions Only a single software program was able to successfully analyze all scheduled datasets. Although the mean biases of LV and EV were relatively low in lobar quantification, the ranges of disagreement were substantial for both.
For longitudinal emphysema monitoring, not only the scanning protocol but also the quantification software needs to be kept constant. PMID:27029047
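The Bland-Altman comparison used above is straightforward to reproduce. A sketch with invented lobar volumes; the 1.96·SD limits assume approximately normally distributed differences.

```python
def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between two
    software programs' estimates of the same quantity."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical lobar volumes (mL) from two programs on the same scans
prog_a = [1510, 980, 1260, 1720, 890, 1340]
prog_b = [1495, 1002, 1248, 1695, 905, 1330]
bias, loa = bland_altman(prog_a, prog_b)
```

A small bias with wide limits of agreement is exactly the pattern the study reports: the programs agree on average but can disagree substantially on any individual lobe.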
Optimization of intra-voxel incoherent motion imaging at 3.0 Tesla for fast liver examination.
Leporq, Benjamin; Saint-Jalmes, Hervé; Rabrait, Cecile; Pilleul, Frank; Guillaud, Olivier; Dumortier, Jérôme; Scoazec, Jean-Yves; Beuf, Olivier
2015-05-01
To optimize a multi-b-value MR protocol for fast intra-voxel incoherent motion (IVIM) imaging of the liver at 3.0 Tesla, a comparison of four different acquisition protocols was carried out based on estimated IVIM parameters (DSlow, DFast, and f) and the ADC in 25 healthy volunteers. The effects of respiratory gating compared with free-breathing acquisition, of the diffusion gradient scheme (simultaneous or sequential), and of weighted averaging across the different b-values were assessed. An optimization study based on Cramer-Rao lower bound theory was then performed to minimize the number of b-values required for suitable quantification. The duration-optimized protocol was evaluated in 12 patients with chronic liver disease. No significant differences in IVIM parameters were observed between the assessed protocols. Only four b-values (0, 12, 82, and 1310 s·mm⁻²) were found necessary for a suitable quantification of the IVIM parameters. DSlow and DFast decreased significantly between nonadvanced and advanced fibrosis (P < 0.05 and P < 0.01), whereas variations in perfusion fraction and ADC were not significant. Results showed that IVIM could be performed in free breathing, with a weighted-averaging procedure, a simultaneous diffusion gradient scheme and only four optimized b-values (0, 10, 80, and 800), reducing scan duration by a factor of nine compared with a nonoptimized protocol. Preliminary results have shown that parameters such as DSlow and DFast based on the optimized IVIM protocol can be relevant biomarkers to distinguish between nonadvanced and advanced fibrosis. © 2014 Wiley Periodicals, Inc.
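The IVIM signal model underlying these fits is a bi-exponential in b. A sketch with assumed liver-like parameter values; f, DFast and DSlow below are illustrative, not the study's estimates.

```python
import math

def ivim_signal(b, f, d_fast, d_slow, s0=1.0):
    """IVIM bi-exponential model:
    S(b) = S0 * (f * exp(-b * Dfast) + (1 - f) * exp(-b * Dslow)),
    where f is the perfusion fraction, Dfast the pseudo-diffusion
    coefficient and Dslow the tissue diffusion coefficient."""
    return s0 * (f * math.exp(-b * d_fast) + (1 - f) * math.exp(-b * d_slow))

# Illustrative liver-like parameters: f = 0.25,
# Dfast = 50e-3 mm^2/s, Dslow = 1.1e-3 mm^2/s
b_values = [0, 10, 80, 800]  # the reduced b-value set retained above
signals = [ivim_signal(b, 0.25, 50e-3, 1.1e-3) for b in b_values]
```

The Cramer-Rao argument is that, for this model, four well-placed b-values already constrain all three parameters: the low-b samples pin down the fast (perfusion) decay while the high-b sample pins down DSlow.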
2012-01-01
Naturally occurring native peptides provide important information about physiological states of an organism and their changes in disease conditions, but protocols and methods for assessing their abundance are not well developed. In this paper, we describe a simple procedure for the quantification of non-tryptic peptides in body fluids. The workflow includes an enrichment step followed by two-dimensional fractionation of native peptides and MS/MS data management facilitating the design and validation of LC-MRM MS assays. The added value of the workflow is demonstrated in the development of a triplex LC-MRM MS assay used for quantification of peptides potentially associated with the progression of liver disease to hepatocellular carcinoma. PMID:22304756
NASA Astrophysics Data System (ADS)
Rosenstock, T. S.; Rufino, M. C.; Butterbach-Bahl, K.; Wollenberg, E.
2013-06-01
Globally, agriculture is directly responsible for 14% of annual greenhouse gas (GHG) emissions and induces an additional 17% through land use change, mostly in developing countries (Vermeulen et al 2012). Agricultural intensification and expansion in these regions is expected to catalyze the most significant relative increases in agricultural GHG emissions over the next decade (Smith et al 2008, Tilman et al 2011). Farms in the developing countries of sub-Saharan Africa and Asia are predominantly managed by smallholders, with 80% of land holdings smaller than ten hectares (FAO 2012). One can therefore posit that smallholder farming significantly impacts the GHG balance of these regions today and will continue to do so in the near future. However, our understanding of the effect smallholder farming has on the Earth's climate system is remarkably limited. Data quantifying existing and reduced GHG emissions and removals of smallholder production systems are available for only a handful of crops, livestock, and agroecosystems (Herrero et al 2008, Verchot et al 2008, Palm et al 2010). For example, fewer than fifteen studies of nitrous oxide emissions from soils have taken place in sub-Saharan Africa, leaving the rate of emissions virtually undocumented. Due to a scarcity of data on GHG sources and sinks, most developing countries currently quantify agricultural emissions and reductions using IPCC Tier 1 emissions factors. However, current Tier 1 emissions factors are either calibrated to data primarily derived from developed countries, where agricultural production conditions are dissimilar to those in which the majority of smallholders operate, or from data that are sparse or of mixed quality in developing countries (IPCC 2006). For the most part, there are insufficient emissions data characterizing smallholder agriculture to evaluate the level of accuracy or inaccuracy of current emissions estimates.
Consequently, there is no reliable information on the agricultural GHG budgets of developing economies. This dearth of information constrains the capacity to transition to low-carbon agricultural development, opportunities for smallholders to capitalize on carbon markets, and the negotiating position of developing countries in global climate policy discourse. Concerns over the poor state of information, in terms of data availability and representation, have fueled appeals for new approaches to quantifying GHG emissions and removals from smallholder agriculture, for both existing conditions and mitigation interventions (Berry and Ryan 2013, Olander et al 2013). Considering the dependence of quantification approaches on data and the current data deficit for smallholder systems, it is clear that in situ measurements must be a core part of initial and future strategies to improve GHG inventories and develop mitigation measures for smallholder agriculture. Once more data are available, especially for farming systems of high priority (e.g., those identified through global and regional rankings of emission hotspots or mitigation leverage points), better cumulative estimates and targeted actions will become possible. Greenhouse gas measurements in agriculture are expensive, time-consuming, and error-prone. These challenges are exacerbated by the heterogeneity of smallholder systems and landscapes and the diversity of methods used. Concerns over methodological rigor, measurement costs, and the diversity of approaches, coupled with the demand for robust information, suggest it is timely for the scientific community to establish standards of measurement (a 'protocol') for quantifying GHG emissions from smallholder agriculture. A standard protocol for use by scientists and development organizations will help generate consistent, comparable, and reliable data on emissions baselines and allow rigorous comparisons of mitigation options.
Besides enhancing data utility, a protocol serves as a benchmark for non-experts to easily assess data quality. Obviously many such protocols already exist (e.g., GraceNet, Parkin and Venterea 2010). None, however, account for the diversity and complexity of smallholder agriculture, quantify emissions and removals from crops, livestock, and biomass together to calculate the net balance, or are adapted to the research environment of developing countries; conditions that warrant developing specific methods. Here we summarize an approach being developed by the Consultative Group on International Agricultural Research's (CGIAR) Climate Change, Agriculture, and Food Security Program (CCAFS) and partners. The CGIAR-CCAFS smallholder GHG quantification protocol aims to improve quantification of baseline emission levels and support mitigation decisions. The protocol introduces five novel quantification elements relevant for smallholder agriculture (figure 1). First, it stresses the systematic collection of 'activity data' to describe the type, distribution, and extent of land management activities in landscapes cultivated by smallholders. Second, it advocates an informed sampling approach that concentrates measurement activities on emission hotspots and leverage points to capture heterogeneity and account for the diversity and complexity of farming activities. Third, it quantifies emissions at multiple spatial scales, whole-farm and landscape, to provide information targeted to household and community decisions. Fourth, it encourages GHG research to document farm productivity and economics in addition to emissions, in recognition of the importance of agriculture to livelihoods. Fifth, it develops cost-differentiated measurement solutions that optimize the relationships among scale, cost, and accuracy. Each of the five innovations is further described below. Figure 1. The quantification approach.
The protocol includes comparative evaluations of various methodologies for each element (e.g., biophysical context, profitability evaluation, etc), recommends methods specific to end users' objectives and constraints, and provides field manuals for implementing the recommended methods. Items with an asterisk indicate novel aspects of this protocol by comparison to others. Systematizing collection of activity data. Data describing smallholder farming systems, their relative distribution in space and time, and typical management practices are largely unavailable for smallholder agriculture in developing countries. This matters because empirical and process-based models rely on information about the nature and extent of production systems, so-called 'activity data'. Without it, it is not possible to run models, scale flux data to larger spatial extents, or target measurements with any certainty. In some cases, uncertainty in the extent and management of farming activities may be equal to or greater than the uncertainty associated with the GHG fluxes themselves. The CGIAR-CCAFS protocol therefore provides guidelines for using remote sensing, targeted social and soil surveys, and proxies that correlate with socio-ecological conditions and farm management to improve the quantity and quality of available activity data. Informed sampling. Smallholder agriculture typically involves multiple farming activities taking place in a field, nested within higher levels of organization (e.g., farm or landscape), each having interactive impacts on the cumulative GHG balance. To understand the net effect, attention must be paid to the full range of sources and sinks. Yet it is generally too resource intensive to measure them all. 
The CGIAR-CCAFS protocol deconstructs what is already known about nutrient stock changes and GHG fluxes to guide measurements toward emission hotspots or leverage points (e.g., methane emissions from cows in crop-livestock systems) within complex agroecosystems and landscapes. The premise underlying this approach is that information from other systems can be used to match the intensity of quantification effort with the predicted intensity of the source or sink. By reducing the uncertainty of the largest fluxes, an informed sampling approach should yield a more accurate and more precise estimate of the total system's GHG balance. Multi-scale. Farming activities take place at the field level, but climate impacts and the decision-making of smallholders extend to larger spatial scales. Households frequently manage farming activities across several fields, while institutions at the village or higher levels can determine land use practices across entire landscapes, as is the case with communal grazing lands or woodlands. Decisions by households and social organizations unite climate impacts across space. It is therefore important to consider spatial scales greater than the farming activity or field to understand GHG impacts and mitigation opportunities. The CGIAR-CCAFS protocol therefore targets quantification and mitigation efforts at the whole-farm and landscape levels to align data describing emissions and removals with the decision units of households and communities. Linking productivity and emissions. Smallholder farmers depend on farm production for food and income, and farm productivity is inextricably linked to food security. The importance of productivity must be taken into account in mitigation decision-making and in the GHG research agenda supporting those decisions. So far, livelihood benefits, farmers' own priorities, and other social benefits have been mostly ignored in GHG research. 
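The informed-sampling idea above (concentrating measurement effort where predicted fluxes are largest and most uncertain) can be viewed as a stratified-sampling allocation problem. The following minimal Python sketch uses a Neyman-style rule; the strata, flux values, and budget are invented for illustration and are not prescribed by the CGIAR-CCAFS protocol.

```python
# Sketch of "informed sampling": allocate a fixed measurement budget across
# emission sources in proportion to their predicted variability (a Neyman-style
# allocation). Strata names and numbers are invented, not taken from the
# CGIAR-CCAFS protocol.

def allocate(budget, strata):
    """strata maps source name -> (predicted mean flux, predicted SD).
    Samples are allocated proportional to SD (assuming equal stratum sizes)."""
    total_sd = sum(sd for _, sd in strata.values())
    return {name: round(budget * sd / total_sd)
            for name, (_, sd) in strata.items()}

# Hypothetical prior knowledge: enteric methane dominates and is most uncertain.
strata = {
    "enteric_CH4": (120.0, 40.0),  # (mean, SD), e.g. kg CO2e per farm -- invented
    "soil_N2O":    (35.0, 15.0),
    "biomass_C":   (20.0, 5.0),
}
plan = allocate(60, strata)
print(plan)  # most of the 60 measurements go to the largest, most uncertain source
```

Concentrating samples on the high-variance strata is what reduces the uncertainty of the largest fluxes, and hence of the total balance, for a fixed budget.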
Quantification of GHG reductions from mitigation options is arguably irrelevant if the livelihood effects of those options are ignored, and scaling GHG emissions per unit area is agronomically meaningless if yields are not considered (Linquist et al 2012). The CGIAR-CCAFS protocol therefore recommends that future GHG quantification efforts for assessing mitigation options adopt a multi-criteria approach that includes data on indicators of household benefits (e.g., productivity and nutrition). In that way, the research captures the balance of benefits between the private landholder and the global public good. Joint assessment of food production and emissions may yield optimal management strategies that balance the competing demands of food production and climate stabilization. For example, nitrous oxide emissions per unit of product are lowest at intermediate (not the lowest) fertilization rates (Van Groenigen et al 2010), which differs from the optimal strategy for reducing emissions per unit area. Costs associated with collecting the additional data are likely to be small relative to the operational budget for GHG field research, so this could viably become standard practice. Cost-differentiated measurements. Potential end users of the protocol are diverse in their purposes, available resources, and capacity to carry out research. For example, development organizations may want to determine the relative difference in emission impacts between mitigation options, while governments may be interested in quantifying impacts across landscapes to develop Nationally Appropriate Mitigation Actions. The most useful approach to quantification therefore lies at the nexus of three key constraints: objectives, resources, and capacity. The protocol develops a system of 'tiered' entry points for greenhouse gas accounting, with explicit attention directed toward the uncertainty introduced by the various measurement choices. 
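The contrast between area-scaled and yield-scaled emissions can be made concrete with a small numerical sketch. Both response curves below are hypothetical illustrations, not data from Van Groenigen et al (2010); the point is only that when yield saturates while emissions rise non-linearly, emissions per unit product are minimized at an intermediate fertilization rate.

```python
import math

# Sketch of area-scaled vs yield-scaled N2O emissions across N fertilization
# rates. Both response curves are hypothetical illustrations.

def n2o_per_area(n_rate):
    """Assumed emission response: rises non-linearly with N rate (kg N2O-N/ha)."""
    return 0.5 + 0.01 * n_rate + 0.0002 * n_rate ** 2

def crop_yield(n_rate):
    """Assumed yield response: saturates at high N rates (t grain/ha)."""
    return 8.0 * (1.0 - math.exp(-0.02 * n_rate))

rates = range(20, 301, 20)  # candidate N rates, kg N/ha
per_area = {r: n2o_per_area(r) for r in rates}
per_product = {r: n2o_per_area(r) / crop_yield(r) for r in rates}

best_area = min(per_area, key=per_area.get)           # lowest rate wins per area
best_product = min(per_product, key=per_product.get)  # an intermediate rate wins per product
print(best_area, best_product)
```

With these assumed curves the per-area optimum is the lowest rate tested, while the per-product optimum falls at an intermediate rate, mirroring the qualitative result cited above.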
The protocol will include decision pathways to help users quickly determine the quantification options suitable for their goals and constraints, optimizing the relationship among accuracy, cost, and scale. The CGIAR-CCAFS protocol is being developed and field-tested in mixed crop-livestock systems of Kenya and intensive rice production in the Philippines, with plans to expand to other sites and agroecosystems in the next year. These initial pilot projects provide a trial of the approach and methods, highlighting technical gaps and promising directions, while generating valuable emissions data. The role smallholder farming plays in Earth's climate system is uncertain due to lack of data. Better information is needed to calibrate the research, policy, and development communities' thinking on the importance of this issue. Generating the high-value information that policy makers, development organizations, and farmers demand, however, pivots on creating accurate, useful, consistent, and meaningful data. The CGIAR-CCAFS protocol will help advance the scientific community's ability to provide such information by using standard methods of measurement in ways that recognize the data needs and priorities of smallholder farmers.

Acknowledgments
We thank participants of the October 2012 Protocol Development workshop in Garmisch-Partenkirchen, Germany for their previous and ongoing contributions. We also thank CCAFS, Environment Canada, and the Mitigation of Climate Change in Agriculture (MICCA) Program of the United Nations Food and Agriculture Organization for their support of this initiative.

References
Berry N J and Ryan C M 2013 Overcoming the risk of inaction from emissions uncertainty in smallholder agriculture Environ. Res. Lett. 8 011003
FAO 2012 Smallholders and Family Farmers (Rome: FAO) (www.fao.org/fileadmin/templates/nr/sustainability_pathways/docs/Factsheet_SMALLHOLDERS.pdf, accessed 19 March 2013)
Herrero M, Thornton P K, Kruska R and Reid R S 2008 Systems dynamics and the spatial distribution of methane emissions from African domestic ruminants to 2030 Agric. Ecosyst. Environ. 126 122-37
IPCC 2006 2006 IPCC Guidelines for National Greenhouse Gas Inventories ed H S Eggleston, L Buendia, K Miwa, T Ngara and K Tanabe (Hayama: IGES)
Linquist B, Van Groenigen K J, Adviento-Borbe M A, Pittelkow C and Van Kessel C 2012 An agronomic assessment of greenhouse gas emissions from major cereal crops Glob. Change Biol. 18 194-209
Olander L, Wollenberg L, Tubiello F and Herold M 2013 Advancing agricultural greenhouse gas quantification Environ. Res. Lett. 8 011002
Palm C A, Smukler S M, Sullivan C C, Mutuo P K, Nyadzi G I and Walsh M G 2010 Identifying potential synergies and trade-offs for meeting food security and climate change objectives in sub-Saharan Africa Proc. Natl Acad. Sci. 107 19661-6
Parkin T B and Venterea R T 2010 Chamber-based trace gas flux measurements Sampling Protocols ed R F Follett chapter 3, pp 3-1-3-39 (available at: www.ars.usda.gov/research/GRACEnet)
Smith P et al 2008 Greenhouse gas mitigation in agriculture Phil. Trans. R. Soc. B 363 789-813
Tilman D, Balzer C, Hill J and Befort B 2011 Global food demand and the sustainable intensification of agriculture Proc. Natl Acad. Sci. 108 20260-4
Van Groenigen J W, Velthof G L, Oenema O, Van Groenigen K J and Van Kessel C 2010 Towards an agronomic assessment of N2O emissions: a case study for arable crops Eur. J. Soil Sci. 61 903-13
Verchot L V, Brienza Júnior S, De Oliveira V, Mutegi J, Cattânio J H and Davidson E A 2008 Fluxes of CH4, CO2, NO, and N2O in an improved fallow agroforestry system in eastern Amazonia Agric. Ecosyst. Environ. 126 113-21
Vermeulen S J, Campbell B M and Ingram J S I 2012 Climate change and food systems Annu. Rev. Environ. Resour. 37 195-222
Siqueira, José F; Alves, Flávio R F; Versiani, Marco A; Rôças, Isabela N; Almeida, Bernardo M; Neves, Mônica A S; Sousa-Neto, Manoel D
2013-08-01
This ex vivo study evaluated the disinfecting and shaping ability of 3 protocols used in the preparation of mesial root canals of mandibular molars by means of correlative bacteriologic and micro-computed tomographic (μCT) analysis. The mesial canals of extracted mandibular molars were contaminated with Enterococcus faecalis for 30 days and, based on their anatomic configuration as determined by μCT analysis, assigned to 3 groups according to the preparation technique (Self-Adjusting File [ReDent-Nova, Ra'anana, Israel], Reciproc [VDW, Munich, Germany], and Twisted File [SybronEndo, Orange, CA]). In all groups, 2.5% NaOCl was the irrigant. Canal samples were taken before (S1) and after instrumentation (S2), and bacterial quantification was performed using culture. Next, mesial roots were subjected to additional μCT analysis in order to evaluate shaping of the canals. All instrumentation protocols promoted a highly significant intracanal bacterial reduction (P < .001). Intergroup quantitative and qualitative comparisons disclosed no significant differences between groups (P > .05). As for shaping, no statistical difference was observed between the techniques regarding the mean percentage of volume increase, the surface area increase, the unprepared surface area, and the relative unprepared surface area (P > .05). Correlative analysis showed no statistically significant relationship between bacterial reduction and the mean percentage increase of the analyzed parameters (P > .05). The 3 instrumentation systems have similar disinfecting and shaping performance in the preparation of mesial canals of mandibular molars. Copyright © 2013 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Chen, Yi; Fisher, Kate J.; Lloyd, Mark; Wood, Elizabeth R.; Coppola, Domenico; Siegel, Erin; Shibata, David; Chen, Yian A.; Koomen, John M.
2017-01-01
Quantitative evaluation of protein expression across multiple cancer-related signaling pathways (e.g. Wnt/β-catenin, TGF-β, receptor tyrosine kinases (RTKs), MAP kinases, NF-κB, and apoptosis) in tumor tissues may enable the development of a molecular profile for each individual tumor that can aid in the selection of appropriate targeted cancer therapies. Here, we describe a broadly applicable protocol for developing and implementing quantitative mass spectrometry assays using cell line models and frozen tissue specimens from colon cancer patients. Cell lines are used to develop peptide-based assays for protein quantification, which are incorporated into a method based on SDS-PAGE protein fractionation, in-gel digestion, and liquid chromatography-multiple reaction monitoring mass spectrometry (LC-MRM/MS). This analytical platform is then applied to frozen tumor tissues. This protocol can be broadly applied to the study of human disease using multiplexed LC-MRM assays. PMID:28808993
Modi, Arpan; Kumar, Nitish; Narayanan, Subhash
2016-01-01
Stevia (Stevia rebaudiana Bertoni) is a medicinal plant containing sweet diterpenoid glycosides, known as steviol glycosides, which are 200-300 times sweeter than sucrose (0.4% solution). They are synthesized mainly in the leaves via the plastid-localized 2-C-methyl-D-erythritol-4-phosphate (MEP) pathway. Fifteen genes are involved in the formation of these glycosides. The present protocol describes a method for quantifying the transcripts of these genes. The work involves RNA extraction and cDNA preparation, and therefore procedures for confirming DNA-free cDNA preparation are also illustrated. Details of plant treatments are not given, as this protocol may be applied to relative gene expression profiling in any medicinal plant with any treatment. The treatments are numbered T0 (Control), T1, T2, T3, and T4.
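Relative transcript quantification from RT-qPCR data of the kind described above is commonly computed with the 2^(-ΔΔCt) method; the abstract does not name the exact calculation used, so the following Python sketch, with invented Ct values, is illustrative only.

```python
# Sketch of relative transcript quantification by the 2^(-ddCt) method, a
# common choice for RT-qPCR data. The Ct values below are invented and the
# exact calculation used in the protocol above is not specified.

def fold_change(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Expression of a target gene under treatment relative to control (T0),
    normalized to a reference gene; assumes ~100% amplification efficiency."""
    d_ct_treat = ct_target_treat - ct_ref_treat  # normalize within treatment
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl     # normalize within control
    dd_ct = d_ct_treat - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for one MEP-pathway gene under treatment T1 vs T0:
fc = fold_change(ct_target_treat=22.0, ct_ref_treat=18.0,
                 ct_target_ctrl=24.0, ct_ref_ctrl=18.5)
print(round(fc, 2))  # → 2.83, i.e. ~2.8-fold up-regulation under T1
```

Confirming DNA-free cDNA, as the protocol stresses, matters here because genomic DNA carry-over would lower the apparent Ct of the target and inflate the fold change.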
Sun, Bing; Shen, Feng; McCalla, Stephanie E; Kreutz, Jason E; Karymov, Mikhail A; Ismagilov, Rustem F
2013-02-05
Here we used a SlipChip microfluidic device to evaluate the performance of digital reverse transcription-loop-mediated isothermal amplification (dRT-LAMP) for quantification of HIV viral RNA. Tests are needed for monitoring HIV viral load to control the emergence of drug resistance and to diagnose acute HIV infections. In resource-limited settings, in vitro measurement of HIV viral load in a simple format is especially needed, and single-molecule counting using a digital format could provide a potential solution. We showed here that when one-step dRT-LAMP is used for quantification of HIV RNA, the digital count is lower than expected and is limited by the yield of desired cDNA. We were able to overcome the limitations by developing a microfluidic protocol to manipulate many single molecules in parallel through a two-step digital process. In the first step we compartmentalize the individual RNA molecules (based on Poisson statistics) and perform reverse transcription on each RNA molecule independently to produce DNA. In the second step, we perform the LAMP amplification on all individual DNA molecules in parallel. Using this new protocol, we increased the absolute efficiency (the ratio between the concentration calculated from the actual count and the expected concentration) of dRT-LAMP 10-fold, from ∼2% to ∼23%, by (i) using a more efficient reverse transcriptase, (ii) introducing RNase H to break up the DNA:RNA hybrid, and (iii) adding only the BIP primer during the RT step. We also used this two-step method to quantify HIV RNA purified from four patient samples and found that in some cases, the quantification results were highly sensitive to the sequence of the patient's HIV RNA. We learned the following three lessons from this work: (i) digital amplification technologies, including dLAMP and dPCR, may give adequate dilution curves and yet have low efficiency, thereby providing quantification values that underestimate the true concentration. 
Careful validation is essential before a method is considered to provide absolute quantification; (ii) the sensitivity of dLAMP to the sequence of the target nucleic acid necessitates additional validation with patient samples carrying the full spectrum of mutations; (iii) for multistep digital amplification chemistries, such as a combination of reverse transcription with amplification, microfluidic devices may be used to decouple these steps from one another and to perform them under different, individually optimized conditions for improved efficiency.
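The compartmentalization step described above follows standard digital-assay statistics: the mean number of molecules per compartment is recovered from the fraction of positive compartments via a Poisson correction. The sketch below uses invented well counts and volumes, not the paper's data.

```python
import math

# Sketch of Poisson-corrected digital quantification (dPCR/dLAMP): the mean
# number of molecules per compartment is recovered from the fraction of
# positive compartments. Counts and volumes below are invented.

def molecules_per_well(n_positive, n_total):
    """Poisson correction: P(well positive) = 1 - exp(-lam), so
    lam = -ln(1 - fraction_positive)."""
    p = n_positive / n_total
    return -math.log(1.0 - p)

def copies_per_ul(n_positive, n_total, well_volume_nl):
    lam = molecules_per_well(n_positive, n_total)
    return lam * 1000.0 / well_volume_nl  # 1000 nl per ul

lam = molecules_per_well(300, 1280)       # mean molecules per well
measured = copies_per_ul(300, 1280, 2.6)  # copies/ul inferred from the counts

# "Absolute efficiency" as defined in the abstract: measured / expected.
expected = 500.0  # copies/ul loaded (hypothetical)
efficiency = measured / expected
```

As the first lesson above warns, a dilution series can look perfectly linear even when this efficiency is far below 1, because every point is scaled down by the same factor.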
Nguyen, An Thi-Binh; Nigen, Michaël; Jimenez, Luciana; Ait-Abderrahim, Hassina; Marchesseau, Sylvie; Picart-Palmade, Laetitia
2018-01-15
Dextran and xanthan were used as model exocellular polysaccharides (EPS) to compare the efficiency of three different protocols for extracting EPS from skim milk acid gels. Extraction yields, residual protein concentrations and the macromolecular properties of the extracted EPS were determined. For both model EPS, the highest extraction yield (∼80%) was obtained when samples were heated under acidic conditions in the first step of extraction (Protocol 1). Protocols containing acid/ethanol precipitation steps without heating (Protocols 2 and 3) gave lower extraction yields (∼55%) but better preserved the macromolecular properties of the EPS. Raising the pH of the acid gels to 7 before extraction (Protocol 3) improved the extraction yield of anionic EPS without affecting the macromolecular properties of the EPS. Protocol 1 was therefore applied for the quantification of EPS produced during yogurt fermentation, while Protocol 3 was dedicated to their macromolecular characterization. Copyright © 2017 Elsevier Ltd. All rights reserved.
Motosugi, Utaroh; Hernando, Diego; Wiens, Curtis; Bannas, Peter; Reeder, Scott B.
2017-01-01
Purpose: To determine whether high signal-to-noise ratio (SNR) acquisitions improve the repeatability of liver proton density fat fraction (PDFF) measurements using confounder-corrected chemical shift-encoded magnetic resonance imaging (CSE-MRI). Materials and Methods: Eleven fat-water phantoms were scanned with 8 different protocols of varying SNR. After repositioning the phantoms, the same scans were repeated to evaluate test-retest repeatability. Next, an in vivo study was performed with 20 volunteers and 28 patients scheduled for liver magnetic resonance imaging (MRI). Two CSE-MRI protocols, with standard and high SNR, were repeated to assess test-retest repeatability. MR spectroscopy (MRS)-based PDFF was acquired as a reference standard. The standard deviation (SD) of the difference (Δ) in PDFF between the two repeated scans was used to assess repeatability. The correlation between CSE-MRI and MRS PDFF was calculated to assess accuracy. The SD of Δ and the correlation coefficients of the two protocols (standard and high SNR) were compared using an F-test and a t-test, respectively. Two reconstruction algorithms (complex-based and magnitude-based) were used for both the phantom and in vivo experiments. Results: The phantom study demonstrated that higher SNR improved repeatability for both complex- and magnitude-based reconstruction. Similarly, the in vivo study demonstrated that the repeatability of the high-SNR protocol (SD of Δ = 0.53 for complex- and 0.85 for magnitude-based fitting) was significantly higher than that of the standard-SNR protocol (0.77 for complex-based, P < 0.001; and 0.94 for magnitude-based fitting, P = 0.003). No significant difference was observed in accuracy between the standard- and high-SNR protocols. Conclusion: Higher SNR improves the repeatability of fat quantification using confounder-corrected CSE-MRI. PMID:28190853
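The repeatability metric described above (the SD of the scan-rescan difference Δ, with protocols compared via an F-test on the variances) can be sketched as follows; all PDFF values are invented for illustration.

```python
import statistics

# Sketch of the test-retest repeatability metric: the SD of the scan-rescan
# difference (delta) in PDFF, and a variance ratio for an F-test between two
# protocols. All PDFF values (%) below are invented.

def sd_of_difference(scan1, scan2):
    deltas = [a - b for a, b in zip(scan1, scan2)]
    return statistics.stdev(deltas)  # sample SD of paired differences

std_scan1 = [5.1, 12.3, 7.8, 20.4, 3.2]   # standard-SNR protocol, scan 1
std_scan2 = [5.9, 11.1, 8.9, 19.0, 4.4]   # standard-SNR protocol, scan 2
hi_scan1  = [5.2, 12.1, 8.0, 20.2, 3.4]   # high-SNR protocol, scan 1
hi_scan2  = [5.5, 12.4, 7.7, 20.0, 3.1]   # high-SNR protocol, scan 2

sd_std = sd_of_difference(std_scan1, std_scan2)
sd_hi = sd_of_difference(hi_scan1, hi_scan2)
f_ratio = (sd_std / sd_hi) ** 2  # variance ratio, compared against an F distribution
print(sd_std, sd_hi, f_ratio)    # smaller SD of delta = better repeatability
```

A smaller SD of Δ means the two scans agree more closely, which is why the high-SNR protocol's lower SD corresponds to better repeatability in the abstract.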
Osteogenic differentiation of equine adipose tissue derived mesenchymal stem cells using CaCl2.
Elashry, Mohamed I; Baulig, Nadine; Heimann, Manuela; Bernhardt, Caroline; Wenisch, Sabine; Arnhold, Stefan
2018-04-01
Adipose tissue derived mesenchymal stem cells (ASCs) may be used to repair bone defects after osteogenic differentiation. In this study we sought to optimize osteogenic differentiation of equine ASCs using various concentrations of CaCl2 in comparison to the standard osteogenic protocol. ASCs were isolated from subcutaneous adipose tissue of mixed-breed horses. The osteogenic induction protocols were (1) the standard osteogenic medium (OM), composed of dexamethasone, ascorbic acid and β-glycerol phosphate; and (2) CaCl2-based protocols with 3, 5 and 7.5 mM CaCl2. Differentiation and proliferation were evaluated at 7, 10, 14 and 21 days post-induction using alizarin red staining (ARS) to detect matrix calcification. Semi-quantification of cell protein content, ARS and alkaline phosphatase (ALP) activity was performed using an ELISA reader. Quantification of the transcription levels of the common osteogenic markers alkaline phosphatase (ALP) and osteopontin (OP) was performed using RT-qPCR. In the presence of CaCl2, a concentration-dependent effect on osteogenic differentiation capacity was evident from the ARS evaluation and OP gene expression. We provide evidence that 5 and 7.5 mM CaCl2 enhance osteogenic differentiation compared to the OM protocol. Although there was a clear commitment of ASCs to the osteogenic fate in the presence of 5 and 7.5 mM CaCl2, cell proliferation was increased compared to OM. We report that an optimized CaCl2 protocol reliably promotes ASC osteogenesis while conserving proliferation capacity. These protocols thus provide a platform for using ASCs as a cell source in bone tissue engineering. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kirschner, A K T; Rameder, A; Schrammel, B; Indra, A; Farnleitner, A H; Sommer, R
2012-06-01
Open cooling towers are frequent sources of infections with Legionella pneumophila. The gold standard for the detection of Leg. pneumophila is based on cultivation, lasts up to 10 days, and detects only culturable cells. Alternative fluorescence in situ hybridization (FISH) protocols have been proposed, but they result in faint fluorescence signals and lack specificity because of cross-hybridization with other Legionella species. Our aim was thus to develop a new FISH protocol for rapid and specific detection of Leg. pneumophila in water samples. A novel catalysed reporter deposition FISH (CARD-FISH) protocol for the detection of Leg. pneumophila was developed, which significantly enhanced both the signal intensity and the specificity of the probe through the use of a novel competitor probe. The developed protocol was compared with the culture method for monitoring the seasonal development of culturable and nonculturable Leg. pneumophila in two hospital cooling tower systems. Seasonal fluctuations of Leg. pneumophila concentrations detected via CARD-FISH were related to the development of the total bacterial community in both cooling towers, with temperature and biocide as the main factors controlling this development. Our results clearly showed that the majority of the Leg. pneumophila cells were in a nonculturable state. Thus, detection of Leg. pneumophila with culture methods may underestimate the total numbers of Leg. pneumophila present. Rapid, sensitive and specific detection and quantification of Leg. pneumophila in water systems is a prerequisite for reliable risk estimation. The new protocol significantly improves current methodology and can be used to monitor and screen for Leg. pneumophila concentrations in cooling towers or other water systems. © 2012 The Authors. Journal of Applied Microbiology © 2012 The Society for Applied Microbiology.
Hampf, Mathias; Gossen, Manfred
2006-09-01
We established a quantitative reporter gene protocol, the P/Rluc assay system, allowing the sequential measurement of Photinus and Renilla luciferase activities from the same extract. Unlike comparable commercial reporter assay systems and their noncommercial counterparts, the P/Rluc assay system was formulated for full compatibility with standard methods for protein assays. This feature greatly expands the range of applications for assay systems quantifying the expression of multiple luciferase reporters.
The MaxQuant computational platform for mass spectrometry-based shotgun proteomics.
Tyanova, Stefka; Temu, Tikira; Cox, Juergen
2016-12-01
MaxQuant is one of the most frequently used platforms for mass-spectrometry (MS)-based proteomics data analysis. Since its first release in 2008, it has grown substantially in functionality and can be used in conjunction with more MS platforms. Here we present an updated protocol covering the most important basic computational workflows, including those designed for quantitative label-free proteomics, MS1-level labeling and isobaric labeling techniques. This protocol presents a complete description of the parameters used in MaxQuant, as well as of the configuration options of its integrated search engine, Andromeda. This protocol update describes an adaptation of an existing protocol that substantially modifies the technique. Important concepts of shotgun proteomics and their implementation in MaxQuant are briefly reviewed, including different quantification strategies and the control of false-discovery rates (FDRs), as well as the analysis of post-translational modifications (PTMs). The MaxQuant output tables, which contain information about quantification of proteins and PTMs, are explained in detail. Furthermore, we provide a short version of the workflow that is applicable to data sets with simple and standard experimental designs. The MaxQuant algorithms are efficiently parallelized on multiple processors and scale well from desktop computers to servers with many cores. The software is written in C# and is freely available at http://www.maxquant.org.
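The FDR control mentioned above is typically implemented by target-decoy competition: the FDR at a score threshold is estimated as the ratio of decoy to target hits above it. The Python sketch below illustrates the general technique with invented scores; it is not MaxQuant's exact implementation.

```python
# Sketch of target-decoy FDR estimation, the general technique behind FDR
# control in shotgun-proteomics search engines; an illustration only, not
# MaxQuant's exact implementation. Scores are invented; True marks a decoy hit.

def fdr_at_threshold(scores, threshold):
    """scores: list of (score, is_decoy). FDR ~ decoys / targets above cutoff."""
    targets = sum(1 for s, d in scores if s >= threshold and not d)
    decoys = sum(1 for s, d in scores if s >= threshold and d)
    return decoys / targets if targets else 1.0

def threshold_for_fdr(scores, max_fdr):
    """Lowest score cutoff whose estimated FDR does not exceed max_fdr."""
    for s, _ in sorted(scores):  # try cutoffs from the lowest score upward
        if fdr_at_threshold(scores, s) <= max_fdr:
            return s
    return None

scores = [(95, False), (90, False), (88, True), (85, False),
          (80, False), (75, True), (70, False)]
cutoff = threshold_for_fdr(scores, max_fdr=0.25)
print(cutoff)  # accept identifications scoring at or above this cutoff
```

In practice the same idea is applied separately at the peptide-spectrum-match and protein levels, with far larger score lists and a much stricter FDR (commonly 1%).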
Yu, Kate; Little, David; Plumb, Rob; Smith, Brian
2006-01-01
A quantitative ultra-performance liquid chromatography/tandem mass spectrometry (UPLC/MS/MS) protocol was developed for a five-compound mixture in rat plasma. A similar high-performance liquid chromatography/tandem mass spectrometry (HPLC/MS/MS) quantification protocol was developed for comparison purposes. Among the five test compounds, three preferred positive electrospray ionization (ESI) and two preferred negative ESI. As a result, both UPLC/MS/MS and HPLC/MS/MS analyses were performed with the mass spectrometer collecting ESI multiple reaction monitoring (MRM) data in both positive and negative ion modes during a single injection. Peak widths for most standards were 4.8 s for the HPLC analysis and 2.4 s for the UPLC analysis. There were 17 to 20 data points obtained for each LC peak. Compared with the HPLC/MS/MS method, the UPLC/MS/MS method offered a 3-fold decrease in retention time and up to a 10-fold increase in detected peak height, with a 2-fold decrease in peak width. Limits of quantification (LOQs) for both the HPLC and UPLC methods were evaluated. For the UPLC/MS/MS analysis, a linear range of up to four orders of magnitude was obtained, with r2 values ranging from 0.991 to 0.998. The LOQs for the five analytes ranged from 0.08 to 9.85 ng/mL. Three levels of quality control (QC) samples were analyzed. For the UPLC/MS/MS protocol, the percent relative standard deviation (RSD%) for the low QC (2 ng/mL) ranged from 3.42 to 8.67% (N = 18). Carryover in the UPLC/MS/MS protocol was negligible, and the robustness of the UPLC/MS/MS system was evaluated with up to 963 QC injections. Copyright 2006 John Wiley & Sons, Ltd.
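Quantification against standards of this kind rests on a calibration curve and its r². A minimal Python sketch with invented standards (not the paper's data):

```python
# Sketch of the calibration-curve arithmetic behind LC-MS/MS quantification:
# an ordinary least-squares line through standards and its r^2, then
# back-calculation of an unknown. Concentrations and peak areas are invented.

def linfit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

conc = [0.1, 1.0, 10.0, 100.0, 1000.0]           # standards, ng/mL (~4 orders)
area = [12.0, 118.0, 1190.0, 11800.0, 119500.0]  # detector response (invented)
slope, intercept, r2 = linfit(conc, area)

unknown_area = 5400.0
unknown_conc = (unknown_area - intercept) / slope  # back-calculated ng/mL
print(r2, unknown_conc)
```

In practice, calibration over such a wide concentration range is usually fitted with weighted regression (e.g. 1/x² weights) so that the low standards are not swamped by the high ones; the unweighted fit here keeps the sketch minimal.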
Musante, Luca; Tataruch-Weinert, Dorota; Kerjaschki, Dontscho; Henry, Michael; Meleady, Paula; Holthofer, Harry
2017-01-01
Urinary extracellular vesicles (UEVs) appear to be an ideal source of biomarkers for kidney and urogenital diseases. The majority of protocols designed for their isolation are based on differential centrifugation steps. However, little is known of the type and amount of vesicles left in the supernatant. Here we used an isolation protocol for UEVs which uses hydrostatic filtration dialysis as a first pre-enrichment step, followed by differential centrifugation. Transmission electron microscopy (TEM), mass spectrometry (MS), western blot, ELISA assays and tuneable resistive pulse sensing (TRPS) were used to characterise and quantify UEVs in the ultracentrifugation supernatant. TEM showed the presence of a variety of small vesicles in the supernatant, while protein identification by MS matched accurately with the protein list available in Vesiclepedia. Screening and relative quantification of specific vesicle markers showed that the supernatant was preferentially positive for CD9 and TSG101. ELISA tests for exosome quantification revealed that 14% was left in the supernatant, with a particle diameter of 110 nm and a concentration of 1.54 × 10^10/ml. Here we show a comprehensive characterisation of exosomes and other small urinary vesicles which the conventional differential centrifugation protocol may lose.
Yu, Yang; Rajagopal, Ram
2015-02-17
Two dispatch protocols have been adopted by electricity markets to deal with the uncertainty of wind power, but the effects of choosing between them have not been comprehensively analyzed. We establish a framework to compare the impacts of adopting different dispatch protocols on the efficacy of using wind power and of implementing a carbon tax to reduce emissions. We suggest that a market has high potential to achieve greater emission reduction by adopting the stochastic dispatch protocol instead of the static protocol when the wind energy in the market is highly uncertain or the market has enough adjustable generators, such as gas-fired combustion generators. Furthermore, a carbon-tax policy is more cost-efficient for reducing CO2 emissions when the market operates according to the stochastic protocol rather than the static protocol. An empirical study, calibrated with data from the Electric Reliability Council of Texas market, confirms that using wind energy in the Texas market results in a 12% CO2 emission reduction when the market uses the stochastic dispatch protocol, compared with the 8% reduction associated with the static protocol. In addition, if a $6/ton carbon tax is implemented in the Texas market operated according to the stochastic protocol, the CO2 emission level is similar to that of the same market with a $16/ton carbon tax operated according to the static protocol. Correspondingly, the $16/ton carbon tax associated with the static protocol costs 42.6% more than the $6/ton carbon tax associated with the stochastic protocol.
Radionuclide-fluorescence Reporter Gene Imaging to Track Tumor Progression in Rodent Tumor Models
Volpe, Alessia; Man, Francis; Lim, Lindsay; Khoshnevisan, Alex; Blower, Julia; Blower, Philip J.; Fruhwirth, Gilbert O.
2018-01-01
Metastasis is responsible for most cancer deaths. Despite extensive research, the mechanistic understanding of the complex processes governing metastasis remains incomplete. In vivo models are paramount for metastasis research, but require refinement. Tracking spontaneous metastasis by non-invasive in vivo imaging is now possible, but remains challenging as it requires long-term observation and high sensitivity. We describe a longitudinal combined radionuclide and fluorescence whole-body in vivo imaging approach for tracking tumor progression and spontaneous metastasis. This reporter gene methodology employs the sodium iodide symporter (NIS) fused to a fluorescent protein (FP). Cancer cells are engineered to stably express NIS-FP, followed by selection based on fluorescence-activated cell sorting. Corresponding tumor models are established in mice. NIS-FP expressing cancer cells are tracked non-invasively in vivo at the whole-body level by positron emission tomography (PET) using the NIS radiotracer [18F]BF4-. PET is currently the most sensitive in vivo imaging technology available at this scale and enables reliable and absolute quantification. Current methods either rely on large cohorts of animals that are euthanized for metastasis assessment at varying time points, or on barely quantifiable 2D imaging. The advantages of the described method are: (i) highly sensitive non-invasive in vivo 3D PET imaging and quantification, (ii) automated PET tracer production, (iii) a significant reduction in required animal numbers due to repeat imaging options, (iv) the acquisition of paired data from subsequent imaging sessions providing better statistical data, and (v) the intrinsic option for ex vivo confirmation of cancer cells in tissues by fluorescence microscopy or cytometry. In this protocol, we describe all steps required for routine NIS-FP-afforded non-invasive in vivo cancer cell tracking using PET/CT and ex vivo confirmation of in vivo results. 
This protocol has applications beyond cancer research whenever in vivo localization, expansion and long-term monitoring of a cell population is of interest. PMID:29608157
The effect of CT technical factors on quantification of lung fissure integrity
NASA Astrophysics Data System (ADS)
Chong, D.; Brown, M. S.; Ochs, R.; Abtin, F.; Brown, M.; Ordookhani, A.; Shaw, G.; Kim, H. J.; Gjertson, D.; Goldin, J. G.
2009-02-01
A new emphysema treatment uses endobronchial valves to perform lobar volume reduction. The degree of fissure completeness may predict treatment efficacy. This study investigated the behavior of a semiautomated algorithm for quantifying lung fissure integrity in CT with respect to reconstruction kernel and dose. Raw CT data were obtained for six asymptomatic patients from a high-risk population for lung cancer. The patients were scanned on either a Siemens Sensation 16 or 64, using a low-dose protocol of 120 kVp, 25 mAs. Images were reconstructed using kernels ranging from smooth to sharp (B10f, B30f, B50f, B70f). Research software was used to simulate an even lower-dose acquisition of 15 mAs, and images were generated at the same kernels, resulting in 8 series per patient. The left major fissure was manually contoured axially at regular intervals, yielding 37 contours across all patients. These contours were read into an image analysis and pattern classification system which computed a Fissure Integrity Score (FIS) for each kernel and dose. FIS values were analyzed using a mixed-effects model with kernel and dose as fixed effects and patient as a random effect to test for differences due to kernel and dose. Analysis revealed no difference in FIS between the smooth kernels (B10f, B30f) or between the sharp kernels (B50f, B70f), but there was a significant difference between the sharp and smooth groups (p = 0.020). There was no significant difference in FIS between the two low-dose reconstructions (p = 0.882). Using a cutoff of 90%, the number of incomplete fissures increased from 5 to 10 when the imaging protocol changed from B50f to B30f. Reconstruction kernel has a significant effect on quantification of fissure integrity in CT. This has potential implications when selecting patients for endobronchial valve therapy.
The protocol describes the Environmental Technology Verification (ETV) Program's considerations and requirements for verification of emissions reduction provided by selective catalytic reduction (SCR) technologies. The basis of the ETV will be comparison of the emissions and perf...
Wang, Hanghang; Muehlbauer, Michael J.; O’Neal, Sara K.; Newgard, Christopher B.; Hauser, Elizabeth R.; Shah, Svati H.
2017-01-01
The field of metabolomics as applied to human disease and health is rapidly expanding. In recent efforts of metabolomics research, greater emphasis has been placed on quality control and method validation. In this study, we report an experience with quality control and a practical application of method validation. Specifically, we sought to identify and modify steps in gas chromatography-mass spectrometry (GC-MS)-based, non-targeted metabolomic profiling of human plasma that could influence metabolite identification and quantification. Our experimental design included two studies: (1) a limiting-dilution study, which investigated the effects of dilution on analyte identification and quantification; and (2) a concentration-specific study, which compared the optimal plasma extract volume established in the first study with the volume used in the current institutional protocol. We confirmed that contaminants, concentration, repeatability and intermediate precision are major factors influencing metabolite identification and quantification. In addition, we established methods for improved metabolite identification and quantification, which were summarized to provide recommendations for experimental design of GC-MS-based non-targeted profiling of human plasma. PMID:28841195
Development of a method for detection and quantification of B. brongniartii and B. bassiana in soil
NASA Astrophysics Data System (ADS)
Canfora, L.; Malusà, E.; Tkaczuk, C.; Tartanus, M.; Łabanowska, B. H.; Pinzari, F.
2016-03-01
A culture independent method based on qPCR was developed for the detection and quantification of two fungal inoculants in soil. The aim was to adapt a genotyping approach based on SSR (Simple Sequence Repeat) marker to a discriminating tracing of two different species of bioinoculants in soil, after their in-field release. Two entomopathogenic fungi, Beauveria bassiana and B. brongniartii, were traced and quantified in soil samples obtained from field trials. These two fungal species were used as biological agents in Poland to control Melolontha melolontha (European cockchafer), whose larvae live in soil menacing horticultural crops. Specificity of SSR markers was verified using controls consisting of: i) soil samples containing fungal spores of B. bassiana and B. brongniartii in known dilutions; ii) the DNA of the fungal microorganisms; iii) soil samples singly inoculated with each fungus species. An initial evaluation of the protocol was performed with analyses of soil DNA and mycelial DNA. Further, the simultaneous detection and quantification of B. bassiana and B. brongniartii in soil was achieved in field samples after application of the bio-inoculants. The protocol can be considered as a relatively low cost solution for the detection, identification and traceability of fungal bio-inoculants in soil.
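qPCR-based detection and quantification like the SSR-marker assay above rests on a standard curve built from known dilutions (here, soils spiked with known spore counts). The following is a minimal sketch of the standard-curve arithmetic only; the marker design and chemistry are specific to the study, and the function names and example Ct values are illustrative:

```python
def fit_standard_curve(log10_copies, ct_values):
    """Least-squares fit of Ct = slope * log10(copies) + intercept."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    return slope, my - slope * mx

def copies_from_ct(ct, slope, intercept):
    """Invert the standard curve to estimate template copies in a sample."""
    return 10.0 ** ((ct - intercept) / slope)

def amplification_efficiency(slope):
    """E = 10^(-1/slope) - 1; an ideal assay (slope of about -3.32) gives ~100%."""
    return 10.0 ** (-1.0 / slope) - 1.0
```

A curve fit to serial dilutions of spiked soil DNA can then be inverted for field samples, with the slope doubling as a routine check on assay efficiency.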
NASA Astrophysics Data System (ADS)
Illien, Françoise; Rodriguez, Nicolas; Amoura, Mehdi; Joliot, Alain; Pallerla, Manjula; Cribier, Sophie; Burlina, Fabienne; Sagan, Sandrine
2016-11-01
The mechanism by which cell-penetrating peptides (CPPs) enter cells is unclear, preventing the development of more efficient vectors for biotechnological or therapeutic purposes. Here, we developed a protocol relying on fluorometry to distinguish endocytosis from direct membrane translocation, using Penetratin, TAT and R9. The quantities of internalized CPPs measured by fluorometry in cell lysates converge with those obtained by our previously reported mass spectrometry quantification method. By contrast, flow cytometry quantification faces several limitations due to fluorescence quenching processes that depend on the cell line and occur at peptide/cell ratios >6×10⁸ for CF-Penetratin. The analysis of cellular internalization of a doubly labeled fluorescent and biotinylated Penetratin analogue by the two independent techniques, fluorometry and mass spectrometry, gave consistent results at the quantitative and qualitative levels. Both techniques revealed the use of two alternative translocation and endocytosis pathways, whose relative efficacy depends on cell-surface sugars and peptide concentration. We confirmed that Penetratin translocates at low concentration and uses endocytosis at high μM concentrations. We further demonstrate that the hydrophobic/hydrophilic nature of the N-terminal extremity affects the internalization efficiency of CPPs. We expect these results and the associated protocols to help unravel the translocation pathway to the cytosol of cells.
This ETV program generic verification protocol was prepared and reviewed for the Verification of Pesticide Drift Reduction Technologies project. The protocol provides a detailed methodology for conducting and reporting results from a verification test of pesticide drift reductio...
43 CFR 11.71 - Quantification phase-service reduction quantification.
Code of Federal Regulations, 2012 CFR
2012-10-01
... identified geohydrological units, which are aquifers or confining layers, within the assessment area. (2)(i... determined by determining: (1) The surface area of soil with reduced ability to sustain the growth of vegetation from the baseline level; (2) The surface area or volume of soil with reduced suitability as...
43 CFR 11.71 - Quantification phase-service reduction quantification.
Code of Federal Regulations, 2013 CFR
2013-10-01
... identified geohydrological units, which are aquifers or confining layers, within the assessment area. (2)(i... determined by determining: (1) The surface area of soil with reduced ability to sustain the growth of vegetation from the baseline level; (2) The surface area or volume of soil with reduced suitability as...
43 CFR 11.71 - Quantification phase-service reduction quantification.
Code of Federal Regulations, 2014 CFR
2014-10-01
... identified geohydrological units, which are aquifers or confining layers, within the assessment area. (2)(i... determined by determining: (1) The surface area of soil with reduced ability to sustain the growth of vegetation from the baseline level; (2) The surface area or volume of soil with reduced suitability as...
Quantification and bioaccessibility of California pistachio bioactives
USDA-ARS?s Scientific Manuscript database
The content of carotenoids, chlorophylls, phenolics, and tocols in pistachios (Pistacia vera L.) has not been methodically quantified. The objective of this study was to first optimize extraction protocols for lipophilic nutrients and then quantify the content of two phenolic acids, nine flavonoids,...
Zehentmeier, Sandra; Cseresnyes, Zoltan; Escribano Navarro, Juan; Niesner, Raluca A.; Hauser, Anja E.
2015-01-01
Confocal microscopy is the method of choice for the analysis of localization of multiple cell types within complex tissues such as the bone marrow. However, the analysis and quantification of cellular localization is difficult, as in many cases it relies on manual counting, thus bearing the risk of introducing a rater-dependent bias and reducing interrater reliability. Moreover, it is often difficult to judge whether the co-localization between two cells results from random positioning, especially when cell types differ strongly in the frequency of their occurrence. Here, a method for unbiased quantification of cellular co-localization in the bone marrow is introduced. The protocol describes the sample preparation used to obtain histological sections of whole murine long bones including the bone marrow, as well as the staining protocol and the acquisition of high-resolution images. An analysis workflow spanning from the recognition of hematopoietic and non-hematopoietic cell types in 2-dimensional (2D) bone marrow images to the quantification of the direct contacts between those cells is presented. This also includes a neighborhood analysis, to obtain information about the cellular microenvironment surrounding a certain cell type. In order to evaluate whether co-localization of two cell types is the mere result of random cell positioning or reflects preferential associations between the cells, a simulation tool which is suitable for testing this hypothesis in the case of hematopoietic as well as stromal cells, is used. This approach is not limited to the bone marrow, and can be extended to other tissues to permit reproducible, quantitative analysis of histological data. PMID:25938636
Digital Droplet PCR: CNV Analysis and Other Applications.
Mazaika, Erica; Homsy, Jason
2014-07-14
Digital droplet PCR (ddPCR) is an assay that combines state-of-the-art microfluidics technology with TaqMan-based PCR to achieve precise target DNA quantification at high levels of sensitivity and specificity. Because quantification is achieved without the need for standard assays in an easy to interpret, unambiguous digital readout, ddPCR is far simpler, faster, and less error prone than real-time qPCR. The basic protocol can be modified with minor adjustments to suit a wide range of applications, such as CNV analysis, rare variant detection, SNP genotyping, and transcript quantification. This unit describes the ddPCR workflow in detail for the Bio-Rad QX100 system, but the theory and data interpretation are generalizable to any ddPCR system. Copyright © 2014 John Wiley & Sons, Inc.
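The "digital" readout described above rests on Poisson statistics: each droplet is scored positive or negative, and the mean number of target copies per droplet is recovered as −ln(1 − p), where p is the positive fraction. A minimal sketch of that arithmetic follows; the 0.85 nL droplet volume is the nominal figure for Bio-Rad droplet generators and is an assumption here, as are the function names:

```python
import math

def ddpcr_concentration(positive, total, droplet_volume_ul=0.00085):
    """Estimate target concentration (copies/uL) from ddPCR droplet counts.

    Droplet occupancy is Poisson-distributed, so mean copies per droplet
    is -ln(1 - p), with p the fraction of positive droplets. The default
    droplet volume (0.85 nL) is a nominal vendor value, not universal.
    """
    p = positive / total
    copies_per_droplet = -math.log(1.0 - p)
    return copies_per_droplet / droplet_volume_ul

def cnv_ratio(target_conc, reference_conc, reference_copies=2):
    """Copy-number estimate relative to a two-copy reference locus,
    as in the CNV application mentioned above."""
    return reference_copies * target_conc / reference_conc
```

Because the Poisson correction accounts for droplets holding more than one copy, quantification needs no standard curve, which is the source of ddPCR's "absolute" readout.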
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamichhane, N; Padgett, K; Li, X
Purpose: To present a simple method for quantification of dual-energy CT metal artifact reduction capabilities. Methods: A phantom was constructed from solid water and a steel cylinder. Solid water is commonly used for radiotherapy QA, while steel cylinders are readily available in hardware stores. The phantom was scanned on a Siemens Somatom 64-slice dual-energy CT system. Three CTs were acquired at energies of 80 kV (low), 120 kV (nominal), and 140 kV (high). The low- and high-energy acquisitions were used to generate dual-energy (DE) monoenergetic image sets, which also utilized a metal artifact reduction algorithm (Maris). Several monoenergetic DE image sets, ranging from 70 keV to 190 keV, were generated. The size of the metal artifact was measured by two different approaches. The first approach measured the distance from the center of the steel cylinder to a location with nominal (undisturbed by metal) HU value for the 120 kV, DE 70 keV, and DE 190 keV image sets. In the second approach, the distance from the center of the cylinder to the edge of the air pocket was measured for the same three image sets. Results: The DE 190 keV synthetic image set demonstrated the largest reduction of the metal artifacts. The size of the artifact was more than three times the actual size of the milled hole in the solid water in the DE 190 keV image set, compared with more than 7.5 times larger as estimated from the uncorrected 120 kV image. Conclusion: A simple phantom for quantification of dual-energy CT metal artifact reduction capabilities was presented. This inexpensive phantom can be easily built from components available in every radiation oncology department. It allows quick assessment and quantification of the properties of different metal artifact reduction algorithms available on modern dual-energy CT scanners.
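The first measurement approach above (walking outward from the cylinder center until HU values return to the nominal background) can be sketched along a 1D HU profile. The ±50 HU tolerance and unit pixel spacing below are illustrative defaults, not values from the study:

```python
def artifact_radius(profile, center_idx, nominal_hu, tol=50.0, pixel_mm=1.0):
    """Distance from the metal center to the first pixel whose HU value
    is within +/-tol of the nominal (undisturbed) value.

    `profile` is a 1D list of HU values sampled outward through the
    artifact; tol and pixel_mm are illustrative assumptions.
    """
    for i in range(center_idx, len(profile)):
        if abs(profile[i] - nominal_hu) <= tol:
            return (i - center_idx) * pixel_mm
    # never recovered within the profile: report its full extent
    return (len(profile) - 1 - center_idx) * pixel_mm
```

Running the same measurement on the 120 kV and synthetic DE image sets gives directly comparable artifact sizes, which is the ratio the abstract reports.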
NASA Astrophysics Data System (ADS)
Viswanath, Satish; Bloch, B. Nicholas; Chappelow, Jonathan; Patel, Pratik; Rofsky, Neil; Lenkinski, Robert; Genega, Elizabeth; Madabhushi, Anant
2011-03-01
Currently, there is significant interest in developing methods for quantitative integration of multi-parametric (structural, functional) imaging data with the objective of building automated meta-classifiers to improve disease detection, diagnosis, and prognosis. Such techniques are required to address the differences in dimensionalities and scales of individual protocols, while deriving an integrated multi-parametric data representation which best captures all disease-pertinent information available. In this paper, we present a scheme called Enhanced Multi-Protocol Analysis via Intelligent Supervised Embedding (EMPrAvISE); a powerful, generalizable framework applicable to a variety of domains for multi-parametric data representation and fusion. Our scheme utilizes an ensemble of embeddings (via dimensionality reduction, DR); thereby exploiting the variance amongst multiple uncorrelated embeddings in a manner similar to ensemble classifier schemes (e.g. Bagging, Boosting). We apply this framework to the problem of prostate cancer (CaP) detection on twelve 3-Tesla pre-operative in vivo multi-parametric (T2-weighted, Dynamic Contrast Enhanced, and Diffusion-weighted) magnetic resonance imaging (MRI) studies, in turn comprising a total of 39 2D planar MR images. We first align the different imaging protocols via automated image registration, followed by quantification of image attributes from individual protocols. Multiple embeddings are generated from the resultant high-dimensional feature space which are then combined intelligently to yield a single stable solution. Our scheme is employed in conjunction with graph embedding (for DR) and probabilistic boosting trees (PBTs) to detect CaP on multi-parametric MRI. Finally, a probabilistic pairwise Markov Random Field algorithm is used to apply spatial constraints to the result of the PBT classifier, yielding a per-voxel classification of CaP presence. 
Per-voxel evaluation of detection results against ground truth for CaP extent on MRI (obtained by spatially registering pre-operative MRI with available whole-mount histological specimens) reveals that EMPrAvISE yields a statistically significant improvement (AUC=0.77) over classifiers constructed from individual protocols (AUC=0.62, 0.62, 0.65, for T2w, DCE, DWI respectively) as well as one trained using multi-parametric feature concatenation (AUC=0.67).
ECOLOGICALLY-RELEVANT QUANTIFICATION OF STREAMFLOW REGIMES IN WESTERN STREAMS
This report describes the rationale for and application of a protocol for estimation of ecologically-relevant streamflow metrics that quantify streamflow regime for ungaged sites subject to a range of human impact. The analysis presented here is focused on sites sampled by the U....
Connor, Kip M; Krah, Nathan M; Dennison, Roberta J; Aderman, Christopher M; Chen, Jing; Guerin, Karen I; Sapieha, Przemyslaw; Stahl, Andreas; Willett, Keirnan L; Smith, Lois E H
2013-01-01
The mouse model of oxygen-induced retinopathy (OIR) has been widely used in studies related to retinopathy of prematurity, proliferative diabetic retinopathy and in studies evaluating the efficacy of antiangiogenic compounds. In this model, 7-d-old (P7) mouse pups with nursing mothers are subjected to hyperoxia (75% oxygen) for 5 d, which inhibits retinal vessel growth and causes significant vessel loss. On P12, mice are returned to room air and the hypoxic avascular retina triggers both normal vessel regrowth and retinal neovascularization (NV), which is maximal at P17. Neovascularization spontaneously regresses between P17 and P25. Although the OIR model has been the cornerstone of studies investigating proliferative retinopathies, there is currently no harmonized protocol to assess aspects of angiogenesis and treatment outcome. In this protocol we describe standards for mouse size, sample size, retinal preparation, quantification of vascular loss, vascular regrowth, NV and neovascular regression. PMID:19816419
Assaying Cellular Viability Using the Neutral Red Uptake Assay.
Ates, Gamze; Vanhaecke, Tamara; Rogiers, Vera; Rodrigues, Robim M
2017-01-01
The neutral red uptake assay is a cell viability assay that allows in vitro quantification of xenobiotic-induced cytotoxicity. The assay relies on the ability of living cells to incorporate and bind neutral red, a weak cationic dye, in lysosomes. As such, cytotoxicity is expressed as a concentration-dependent reduction of the uptake of neutral red after exposure to the xenobiotic under investigation. The neutral red uptake assay is mainly used for hazard assessment in in vitro toxicology applications. The method has also been introduced in regulatory recommendations as part of the 3T3 NRU phototoxicity assay, which gained regulatory acceptance in all EU member states in 2000 and in the OECD member states in 2004 as a test guideline (TG 432). The present protocol describes the neutral red uptake assay using the human hepatoma cell line HepG2, which is often employed as an alternative in vitro model for human hepatocytes. As an example, the cytotoxicity of acetaminophen and acetylsalicylic acid is assessed.
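The concentration-dependent reduction in dye uptake described above is typically reported as percent viability relative to vehicle controls, from which a half-maximal inhibitory concentration can be derived. A minimal sketch, using log-linear interpolation for the 50% point; real analyses (e.g. under OECD TG 432) fit a full dose-response model, and the function names here are illustrative:

```python
import math

def percent_viability(od_treated, od_vehicle, od_blank=0.0):
    """Neutral red uptake of treated wells relative to vehicle controls,
    after subtracting the blank absorbance."""
    return 100.0 * (od_treated - od_blank) / (od_vehicle - od_blank)

def ic50_from_curve(concentrations, viabilities):
    """Interpolate the 50%-viability concentration on a log scale.

    A sketch only: regulatory work fits a parametric dose-response
    curve rather than interpolating between adjacent points.
    """
    points = list(zip(concentrations, viabilities))
    for (c1, v1), (c2, v2) in zip(points, points[1:]):
        if v1 >= 50.0 >= v2:
            f = (v1 - 50.0) / (v1 - v2)
            return math.exp(math.log(c1) + f * (math.log(c2) - math.log(c1)))
    return None  # curve never crosses 50% in the tested range
```

Concentrations are assumed sorted ascending with monotonically decreasing viability, which matches the typical shape of a cytotoxicity curve.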
Cremer, Signe E; Krogh, Anne K H; Hedström, Matilda E K; Christiansen, Liselotte B; Tarnow, Inge; Kristensen, Annemarie T
2018-06-01
Platelet microparticles (PMPs) are subcellular procoagulant vesicles released upon platelet activation. In people with clinical diseases, alterations in PMP concentrations have been extensively investigated, but few canine studies exist. This study aims to validate a canine flow cytometric protocol for PMP quantification and to assess the influence of calcium on PMP concentrations. Microparticles (MPs) were quantified in citrated whole blood (WB) and platelet-poor plasma (PPP) using flow cytometry. Anti-CD61 antibody and Annexin V (AnV) were used to detect platelets and phosphatidylserine, respectively. In 13 healthy dogs, CD61+/AnV− concentrations were analyzed with and without a calcium buffer. CD61+/AnV−, CD61+/AnV+, and CD61−/AnV+ MP quantification was validated in 10 healthy dogs. The coefficient of variation (CV) for duplicate (intra-assay) and parallel (inter-assay) analyses and detection limits (DLs) were calculated. CD61+/AnV− concentrations were higher in calcium buffer, 841,800 MP/μL (526,000-1,666,200), than without, 474,200 MP/μL (278,800-997,500), P < .05. In WB, PMPs were above DLs and demonstrated acceptable (<20%) intra-assay and inter-assay CVs in 9/10 dogs: 1.7% (0.5-8.9) and 9.0% (0.9-11.9), respectively, for CD61+/AnV−, and 2.4% (0.2-8.7) and 7.8% (0.0-12.8), respectively, for CD61+/AnV+. Acceptable CVs were not seen for the CD61−/AnV+ MPs. In PPP, quantification was challenged by high inter-assay CVs and overlapping DLs, and hemolysis and lipemia interfered with quantification in 5/10 dogs. Calcium induced higher in vitro PMP concentrations, likely due to platelet activation. PMP concentrations were reliably quantified in WB, indicating the potential for clinical applications. PPP analyses were unreliable due to high inter-assay CVs and DL overlap, or were unobtainable due to hemolysis and lipemia interference. © 2018 American Society for Veterinary Clinical Pathology.
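The repeatability figures above (intra-assay CVs from duplicate runs, inter-assay CVs from parallel runs) are coefficients of variation. A minimal sketch of that computation, with illustrative function names:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation (%): sample SD over mean, as used for
    assay repeatability; needs at least two replicate measurements."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def duplicate_cv(pairs):
    """Intra-assay CV from duplicate measurements: the CV of each
    duplicate pair, averaged across samples."""
    return statistics.mean(cv_percent(p) for p in pairs)
```

Against an acceptance threshold such as the <20% used in the study, each sample's duplicate CV and the between-run CV can then be checked directly.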
Randrianjatovo, I; Girbal-Neuhauser, E; Marcato-Romain, C-E
2015-06-01
Biofilms are ecosystems of closely associated bacteria encapsulated in an extracellular matrix mainly composed of polysaccharides and proteins. A novel approach was developed for in situ quantification of extracellular proteins (ePN) in various bacterial biofilms using epicocconone, a natural, fluorescent compound that binds amine residues of proteins. Six commercial proteins were tested for their reaction with epicocconone, and bovine serum albumin (BSA) was selected for assay optimization. The optimized protocol, performed as a microassay, allowed protein amounts from as little as 0.7 μg to as much as 50 μg per well to be detected. Addition of monosaccharides or polysaccharides (glucose, dextran or alginate) to the standard BSA solutions (0 to 250 μg ml⁻¹) showed little or no sugar interference up to 2000 μg ml⁻¹, thus providing an assessment of the specificity of epicocconone for proteins. The optimized protocol was then applied to three different biofilms, and in situ quantification of ePN showed contrasting protein amounts of 22.1 ± 3.1, 38.3 ± 7.1 and 0.3 ± 0.1 μg equivalent BSA of proteins for 48-h biofilms of Pseudomonas aeruginosa, Bacillus licheniformis and Weissella confusa, respectively. Possible interference of the matrix compounds as a whole with the in situ quantification of proteins was also investigated by applying the standard addition method (SAM). Low error percentages were obtained, indicating accurate quantification of both the ePN and the added proteins. For the first time, a specific and sensitive assay has been developed for in situ determination of ePN produced by bacterial cells. This advance should lead to an accurate, rapid tool for further protein labelling and microscopic observation of the extracellular matrix of biofilms.
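The standard addition method (SAM) used above to check for matrix interference quantifies an analyte by spiking known amounts into the sample and extrapolating the regression line back to its x-intercept. A minimal sketch, assuming a linear signal response; the values and function name are illustrative:

```python
def standard_addition(added, signal):
    """Standard-addition quantification: regress signal on the spiked
    amount by least squares and extrapolate to the x-intercept; its
    magnitude is the analyte amount already present in the unspiked
    sample (valid only while the response stays linear)."""
    n = len(added)
    mx = sum(added) / n
    my = sum(signal) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(added, signal)) \
        / sum((x - mx) ** 2 for x in added)
    intercept = my - slope * mx
    return intercept / slope
```

Because the calibration is built inside the sample itself, matrix effects scale both the spiked and native analyte equally, which is what makes SAM a useful interference check.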
Van der Fels-Klerx, Ine H J; Goossens, Louis H J; Saatkamp, Helmut W; Horst, Suzan H S
2002-02-01
This paper presents a protocol for a formal expert judgment process using a heterogeneous expert panel aimed at the quantification of continuous variables. The emphasis is on the process's requirements related to the nature of expertise within the panel, in particular the heterogeneity of both substantive and normative expertise. The process provides the opportunity for interaction among the experts so that they fully understand and agree upon the problem at hand, including qualitative aspects relevant to the variables of interest, prior to the actual quantification task. Individual experts' assessments on the variables of interest, cast in the form of subjective probability density functions, are elicited with a minimal demand for normative expertise. The individual experts' assessments are aggregated into a single probability density function per variable, thereby weighting the experts according to their expertise. Elicitation techniques proposed include the Delphi technique for the qualitative assessment task and the ELI method for the actual quantitative assessment task. Appropriately, the Classical model was used to weight the experts' assessments in order to construct a single distribution per variable. Applying this model, the experts' quality typically was based on their performance on seed variables. An application of the proposed protocol in the broad and multidisciplinary field of animal health is presented. Results of this expert judgment process showed that the proposed protocol in combination with the proposed elicitation and analysis techniques resulted in valid data on the (continuous) variables of interest. In conclusion, the proposed protocol for a formal expert judgment process aimed at the elicitation of quantitative data from a heterogeneous expert panel provided satisfactory results. Hence, this protocol might be useful for expert judgment studies in other broad and/or multidisciplinary fields of interest.
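Aggregating the individual experts' probability density functions into a single distribution per variable, with performance-based weights as in the Classical (Cooke) model mentioned above, amounts to drawing from a weighted mixture (a linear opinion pool). A minimal sampling sketch; the seed-variable scoring that produces the weights is omitted here, and the API is illustrative:

```python
import random

def linear_pool_sample(experts, weights, n=10000, rng=None):
    """Draw n samples from the weighted mixture of expert distributions.

    `experts` are zero-argument samplers (one per expert); `weights`
    sketch performance-based weights such as those the Classical model
    derives from seed variables (that scoring is not implemented here).
    """
    rng = rng or random.Random(0)
    total = sum(weights)
    cum, acc = [], 0.0
    for w in weights:
        acc += w / total
        cum.append(acc)
    out = []
    for _ in range(n):
        u = rng.random()
        for c, draw in zip(cum, experts):
            if u <= c:
                out.append(draw())
                break
        else:
            out.append(experts[-1]())  # guard against float rounding
    return out
```

Quantiles of the pooled sample then give the single per-variable distribution that the protocol feeds into downstream analysis.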
Miller, Janis M.; Garcia, Caroline E.; Hortsch, Sarah Becker; Guo, Ying; Schimpf, Megan O.
2016-01-01
Purpose Common advice for lower urinary tract symptoms (LUTS) of frequency, urgency and related bother includes elimination of potentially irritating beverages (coffee, tea, alcohol, and carbonated and/or artificially sweetened beverages). The purpose of this study was to determine compliance with standardized instruction to eliminate these potentially irritating beverages, whether LUTS improved after instruction, and if symptoms worsened with partial reintroduction. Design The three-phase fixed sequence design was: 1) baseline, 2) eliminate potentially irritating beverages listed above, and 3) reintroduce at 50% of baseline volume, with a washout period between each 3-day phase. We asked participants to maintain total intake volume by swapping in equal amounts of non-potentially irritating beverages (primarily water). Subjects and Setting The study sample comprised 30 community-dwelling women recruited through newspaper advertisement. Methods Quantification measures included 3-day voiding diaries and detailed beverage intake, and LUTS questionnaires completed during each phase. Results During Phase 2, we found significant reduction in potentially irritating beverages but complete elimination was rare. Despite the protocol demands, total beverage intake was not stable; mean (± standard deviation) daily total intake volume dropped by 6.2±14.9oz (p=0.03) during Phase 2. In Phase 3, the volume of total beverage intake returned to baseline, but intake of potentially irritating beverages also returned to near baseline rather than 50% as requested by protocol. Despite this incomplete adherence to study protocols, women reported reduction in symptoms of urge, inability to delay voiding, and bother during both phases (p≤0.01). The number of voids per day decreased on average by 1.3 and 0.9 voids during phases 2 and 3 respectively (p=0.002 and p=0.035). Conclusions Education to reduce potentially irritating beverages resulted in improvement in LUTS. 
However, eliminating potentially irritating beverages was difficult to achieve and maintain. Study findings do not allow us to determine if LUTS improvement was attributable to intake of fewer potentially irritating beverages, reduced intake of all beverages, the effect of self-monitoring, or some combination of these factors. PMID:26727685
Miller, Janis M; Garcia, Caroline E; Hortsch, Sarah Becker; Guo, Ying; Schimpf, Megan O
2016-01-01
Common advice for lower urinary tract symptoms (LUTS) such as frequency, urgency, and related bother includes elimination of potentially irritating beverages (coffee, tea, alcohol, and carbonated and/or artificially sweetened beverages). The purpose of this study was to determine compliance with standardized instruction to eliminate these potentially irritating beverages, whether LUTS improved after instruction, and whether symptoms worsened with partial reintroduction. The 3-phase fixed sequence design was (1) baseline, (2) eliminate potentially irritating beverages listed above, and (3) reintroduce at 50% of baseline volume, with a washout period between each 3-day phase. We asked participants to maintain total intake volume by swapping in equal amounts of nonpotentially irritating beverages (primarily water). The study sample comprised 30 community-dwelling women recruited through newspaper advertisement. Quantification measures included 3-day voiding diaries and detailed beverage intake, and LUTS questionnaires completed during each phase. During Phase 2, we found significant reduction in potentially irritating beverages but complete elimination was rare. Despite protocol demands, total beverage intake was not stable; mean (± standard deviation) daily total intake volume dropped by 6.2 ± 14.9 oz (P = .03) during Phase 2. In Phase 3, the volume of total beverage intake returned to baseline, but the intake of potentially irritating beverages also returned to near baseline rather than 50% as requested by protocol. Despite this incomplete adherence to study protocols, women reported reduction in symptoms of urge, inability to delay voiding, and bother during both phases (P ≤ .01). The number of voids per day decreased on average by 1.3 and 0.9 voids during Phases 2 and 3, respectively (P = .002 and P = .035). Education to reduce potentially irritating beverages resulted in improvement in LUTS. 
However, eliminating potentially irritating beverages was difficult to achieve and maintain. Study findings do not allow us to determine whether LUTS improvement was attributable to intake of fewer potentially irritating beverages, reduced intake of all beverages, the effect of self-monitoring, or some combination of these factors.
Detection and quantification of extracellular microRNAs in murine biofluids
2014-01-01
Background MicroRNAs (miRNAs) are short RNA molecules which regulate gene expression in eukaryotic cells, and are abundant and stable in biofluids such as blood serum and plasma. As such, there has been heightened interest in the utility of extracellular miRNAs as minimally invasive biomarkers for diagnosis and monitoring of a wide range of human pathologies. However, quantification of extracellular miRNAs is subject to a number of specific challenges, including the relatively low RNA content of biofluids, the possibility of contamination with serum proteins (including RNases and PCR inhibitors), hemolysis, platelet contamination/activation, a lack of well-established reference miRNAs and the biochemical properties of miRNAs themselves. Protocols for the detection and quantification of miRNAs in biofluids are therefore of high interest. Results The following protocol was validated by quantifying miRNA abundance in C57 (wild-type) and dystrophin-deficient (mdx) mice. Important differences in miRNA abundance were observed depending on whether blood was taken from the jugular or tail vein. Furthermore, efficiency of miRNA recovery was reduced when sample volumes greater than 50 μl were used. Conclusions Here we describe robust and novel procedures to harvest murine serum/plasma, extract biofluid RNA, amplify specific miRNAs by RT-qPCR and analyze the resulting data, enabling the determination of relative and absolute miRNA abundance in extracellular biofluids with high accuracy, specificity and sensitivity. PMID:24629058
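Relative miRNA abundance from RT-qPCR data of the kind described above is commonly computed by the ΔΔCt method against a reference miRNA and a calibrator sample (absolute abundance instead uses a standard curve). A minimal ΔΔCt sketch, assuming equal amplification efficiencies for target and reference; the function name is illustrative:

```python
def delta_delta_ct(ct_target, ct_ref, ct_target_cal, ct_ref_cal,
                   efficiency=2.0):
    """Livak ddCt: fold change of a target miRNA versus a reference
    miRNA, relative to a calibrator sample (e.g. wild-type serum).

    efficiency=2.0 assumes perfect doubling per cycle; a measured
    per-assay efficiency can be substituted.
    """
    d_ct_sample = ct_target - ct_ref          # normalize within sample
    d_ct_cal = ct_target_cal - ct_ref_cal     # normalize within calibrator
    return efficiency ** -(d_ct_sample - d_ct_cal)
```

For biofluids, where well-established reference miRNAs are lacking (as the abstract notes), a spiked-in synthetic miRNA is often used as the reference instead.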
Gono, Takahisa; Okazaki, Yuka; Murakami, Akihiro; Kuwana, Masataka
2018-04-09
To compare the quantitative performance of two enzyme-linked immunosorbent assay (ELISA) systems for measuring anti-MDA5 antibody titer: an in-house ELISA and the commercial MESACUP™ anti-MDA5 test. Anti-MDA5 antibody titer was measured in sera from 70 patients with dermatomyositis using the in-house ELISA and the MESACUP™ anti-MDA5 test side by side. For the commercial ELISA kit, serum samples diluted 1:101 were used according to the manufacturer's protocol, but serial dilutions of sera were also examined to identify the optimal serum dilution for quantification. The anti-MDA5 antibody titers measured by the in-house and commercial ELISAs were positively correlated with each other (r = 0.53, p = .0001), but the antibody titer measured by the commercial ELISA was less sensitive to change after medical treatment, and 37 (80%) of 46 anti-MDA5-positive sera had antibody titers exceeding the quantification range specified by the manufacturer (≥150 index). Experiments using diluted serum samples revealed that diluting the sera 1:5050 improved the quantitative performance of the MESACUP™ anti-MDA5 test, including a better correlation with the in-house ELISA results and an increased sensitivity to change. We improved the ability of the commercial ELISA kit to quantify anti-MDA5 antibody titer by altering its protocol.
Portnoy, Orith; Guranda, Larisa; Apter, Sara; Eiss, David; Amitai, Marianne Michal; Konen, Eli
2011-11-01
The purpose of this study was to compare opacification of the urinary collecting system and radiation dose associated with three-phase 64-MDCT urographic protocols and those associated with a split-bolus dual-phase protocol including furosemide. Images from 150 CT urographic examinations performed with three scanning protocols were retrospectively evaluated. Group A consisted of 50 sequentially registered patients who underwent a three-phase protocol with saline infusion. Group B consisted of 50 sequentially registered patients who underwent a reduced-radiation three-phase protocol with saline. Group C consisted of 50 sequentially registered patients who underwent a dual-phase split-bolus protocol that included a low-dose furosemide injection. Opacification of the urinary collecting system was evaluated with segmental binary scoring. Contrast artifacts were evaluated, and radiation doses were recorded. Results were compared by analysis of variance. A significant reduction in mean effective radiation dose was found between groups A and B (p < 0.001) and between groups B and C (p < 0.001), resulting in 65% reduction between groups A and C (p < 0.001). This reduction did not significantly affect opacification score in any of the 12 urinary segments (p = 0.079). In addition, dense contrast artifacts overlying the renal parenchyma observed with the three-phase protocols (groups A and B) were avoided with the dual-phase protocol (group C) (p < 0.001). A dual-phase protocol with furosemide injection is the preferable technique for CT urography. In comparison with commonly used three-phase protocols, the dual-phase protocol significantly reduces radiation exposure dose without reduction in image quality.
[Building Mass Spectrometry Spectral Libraries of Human Cancer Cell Lines].
Faktor, J; Bouchal, P
Cancer research often focuses on protein quantification in model cancer cell lines and cancer tissues. SWATH (sequential windowed acquisition of all theoretical fragment ion spectra), a state-of-the-art method, enables the quantification of all proteins included in a spectral library. The spectral library contains the fragmentation patterns of each detectable protein in a sample. Thorough spectral library preparation improves the quantification of low-abundance proteins, which often play an important role in cancer. Our research focuses on optimizing spectral library preparation to maximize the number of proteins identified in the MCF-7 breast cancer cell line. First, we optimized sample preparation prior to entering the mass spectrometer: we examined the effects of lysis buffer composition, the peptide dissolution protocol, and the material of the sample vial on the number of proteins identified in the spectral library. Next, we optimized the mass spectrometry (MS) method for spectral library data acquisition. Our thoroughly optimized protocol for spectral library building enabled the identification of 1,653 proteins (FDR < 1%) in 1 µg of MCF-7 lysate. This work contributes to enhancing protein coverage in SWATH digital biobanks, which enable quantification of arbitrary proteins from physically unavailable samples. In the future, high-quality spectral libraries could play a key role in preparing digital fingerprints of patient proteomes. Key words: biomarker - mass spectrometry - proteomics - digital biobanking - SWATH - protein quantification. This work was supported by the project MEYS - NPS I - LO1413. The authors declare they have no potential conflicts of interest concerning drugs, products, or services used in the study. The Editorial Board declares that the manuscript met the ICMJE recommendation for biomedical papers. Submitted: 7. 5. 2016. Accepted: 9. 6. 2016.
Go, Young-Mi; Walker, Douglas I; Liang, Yongliang; Uppal, Karan; Soltow, Quinlyn A; Tran, ViLinh; Strobel, Frederick; Quyyumi, Arshed A; Ziegler, Thomas R; Pennell, Kurt D; Miller, Gary W; Jones, Dean P
2015-12-01
The exposome is the cumulative measure of environmental influences and associated biological responses throughout the lifespan, including exposures from the environment, diet, behavior, and endogenous processes. A major challenge for exposome research lies in the development of robust and affordable analytic procedures to measure the broad range of exposures and associated biologic impacts occurring over a lifetime. Biomonitoring is an established approach to evaluate internal body burden of environmental exposures, but use of biomonitoring for exposome research is often limited by the high costs associated with quantification of individual chemicals. High-resolution metabolomics (HRM) uses ultra-high resolution mass spectrometry with minimal sample preparation to support high-throughput relative quantification of thousands of environmental, dietary, and microbial chemicals. HRM also measures metabolites in most endogenous metabolic pathways, thereby providing simultaneous measurement of biologic responses to environmental exposures. The present research examined quantification strategies to enhance the usefulness of HRM data for cumulative exposome research. The results provide a simple reference standardization protocol in which individual chemical concentrations in unknown samples are estimated by comparison to a concurrently analyzed, pooled reference sample with known chemical concentrations. The approach was tested using blinded analyses of amino acids in human samples and was found to be comparable to independent laboratory results based on surrogate standardization or internal standardization. Quantification was reproducible over a 13-month period and extrapolated to thousands of chemicals. The results show that reference standardization protocol provides an effective strategy that will enhance data collection for cumulative exposome research. In principle, the approach can be extended to other types of mass spectrometry and other analytical methods. 
© The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
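The reference standardization scheme described above reduces, for each chemical, to a single-point calibration against the pooled reference analyzed in the same batch. A minimal sketch of that calculation (function and variable names are illustrative, not from the paper):

```python
def reference_standardize(sample_intensity, ref_intensity, ref_concentration):
    """Estimate an analyte's concentration in an unknown sample by scaling
    its measured intensity to a concurrently analyzed pooled reference
    sample with a known concentration (assumes a linear response through
    zero over the working range)."""
    return sample_intensity / ref_intensity * ref_concentration

# An amino acid with twice the peak intensity of the pooled reference,
# whose known concentration is 50 uM, is estimated at 100 uM.
print(reference_standardize(2.0e6, 1.0e6, 50.0))  # → 100.0
```

The appeal of the approach is that one set of known concentrations in the pooled reference extrapolates to every chemical detected in it, avoiding per-analyte calibration curves.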
PMID:26358001
Knoll, Florian; Raya, José G; Halloran, Rafael O; Baete, Steven; Sigmund, Eric; Bammer, Roland; Block, Tobias; Otazo, Ricardo; Sodickson, Daniel K
2015-01-01
Radial spin echo diffusion imaging allows motion-robust imaging of tissues with very low T2 values like articular cartilage with high spatial resolution and signal-to-noise ratio (SNR). However, in vivo measurements are challenging due to the significantly slower data acquisition speed of spin-echo sequences and the less efficient k-space coverage of radial sampling, which raises the demand for accelerated protocols by means of undersampling. This work introduces a new reconstruction approach for undersampled DTI. A model-based reconstruction implicitly exploits redundancies in the diffusion weighted images by reducing the number of unknowns in the optimization problem and compressed sensing is performed directly in the target quantitative domain by imposing a Total Variation (TV) constraint on the elements of the diffusion tensor. Experiments were performed for an anisotropic phantom and the knee and brain of healthy volunteers (3 and 2 volunteers, respectively). Evaluation of the new approach was conducted by comparing the results to reconstructions performed with gridding, combined parallel imaging and compressed sensing, and a recently proposed model-based approach. The experiments demonstrated improvement in terms of reduction of noise and streaking artifacts in the quantitative parameter maps as well as a reduction of angular dispersion of the primary eigenvector when using the proposed method, without introducing systematic errors into the maps. This may enable an essential reduction of the acquisition time in radial spin echo diffusion tensor imaging without degrading parameter quantification and/or SNR. PMID:25594167
Cancino-Faure, Beatriz; Fisa, Roser; Alcover, M. Magdalena; Jimenez-Marco, Teresa; Riera, Cristina
2016-01-01
Molecular techniques based on real-time polymerase chain reaction (qPCR) allow the detection and quantification of DNA but are unable to distinguish between signals from dead or live cells. Because of the lack of simple techniques to differentiate between viable and nonviable cells, the aim of this study was to optimize and evaluate a straightforward test based on propidium monoazide (PMA) dye action combined with a qPCR assay (PMA-qPCR) for the selective quantification of viable/nonviable epimastigotes of Trypanosoma cruzi. PMA has the ability to penetrate the plasma membrane of dead cells and covalently cross-link to the DNA during exposure to bright visible light, thereby inhibiting PCR amplification. Different concentrations of PMA (50–200 μM) and epimastigotes of the Maracay strain of T. cruzi (1 × 105–10 parasites/mL) were assayed; viable and nonviable parasites were tested and quantified by qPCR with a TaqMan probe specific for T. cruzi. In the PMA-qPCR assay optimized at 100 μM PMA, a significant qPCR signal reduction was observed in the nonviable versus viable epimastigotes treated with PMA, with a mean signal reduction of 2.5 logarithm units and a percentage of signal reduction > 98%, in all concentrations of parasites assayed. This signal reduction was also observed when PMA-qPCR was applied to a mixture of live/dead parasites, which allowed the detection of live cells, except when the concentration of live parasites was low (10 parasites/mL). The PMA-qPCR developed allows differentiation between viable and nonviable epimastigotes of T. cruzi and could thus be a potential method of parasite viability assessment and quantification. PMID:27139452
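The "logarithm units" of signal reduction quoted above map directly onto qPCR cycle-threshold shifts. A sketch of that conversion, assuming close-to-ideal (100%) amplification efficiency; the specific ΔCt value below is illustrative:

```python
import math

def qpcr_signal_reduction(delta_ct, efficiency=1.0):
    """Convert a Ct shift (treated minus untreated) into a log10 signal
    reduction and a percent reduction. With 100% efficiency the template
    doubles each cycle, so one log10 unit is ~3.32 cycles."""
    log_reduction = delta_ct * math.log10(1.0 + efficiency)
    percent = (1.0 - 10.0 ** -log_reduction) * 100.0
    return log_reduction, percent

# A ~8.3-cycle delay corresponds to ~2.5 log units, i.e. >98% reduction,
# consistent with the values reported for nonviable epimastigotes.
log_red, pct = qpcr_signal_reduction(8.3)
```

In practice the efficiency term should be taken from the standard curve of the TaqMan assay rather than assumed to be exactly 1.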
The application of quantitative real-time PCR (qPCR) technologies for the rapid identification of fecal bacteria in environmental waters is being considered for use as a national water quality metric in the United States. The transition from research tool to a standardized proto...
Lipid Extraction and Cholesterol Quantification: A Simple Protocol
ERIC Educational Resources Information Center
Barreto, M. Carmo
2005-01-01
Enzymatic methods are typically used to measure cholesterol levels, but here a simple and inexpensive alternative is described, which is particularly useful when repeated lab sessions are held during the week. It can be carried out on the organic phase containing the lipids, without evaporating the solvent, and yields quick results.
Jurado-Román, Alfonso; Sánchez-Pérez, Ignacio; Lozano Ruíz-Poveda, Fernando; López-Lluva, María T; Pinilla-Echeverri, Natalia; Moreno Arciniegas, Andrea; Agudo-Quilez, Pilar; Gil Agudo, Antonio
2016-01-01
A reduction in radiation doses at the catheterization laboratory while maintaining the quality of procedures is essential. Our objective was to analyze the results of a simple radiation reduction protocol at a high-volume interventional cardiology unit. We analyzed 1160 consecutive procedures: 580 performed before the implementation of the protocol and 580 after it. The protocol consisted of: a reduction in the number of ventriculographies and aortographies, optimization of the collimation and of the geometry of the X-ray tube, patient, and receptor, the use of low dose-rate fluoroscopy, and a reduction in the number of cine sequences using the "last fluoroscopy hold" software feature. There were no significant differences in baseline clinical features or in the procedural characteristics, with the exception of a higher percentage of radial approach (30.7% vs 69.6%; p<0.001) and of percutaneous coronary interventions of chronic total occlusions after the implementation of the protocol (2.1% vs 6.7%; p=0.001). Angiographic success was similar during both periods (98.3% vs 99.2%; p=0.2). There were no significant differences between the two periods in the overall duration of procedures (26.9 vs 29.6 min; p=0.14) or in fluoroscopy time (13.3 vs 13.2 min; p=0.8). We observed a reduction in the percentage of procedures with ventriculography (80.9% vs 7.1%; p<0.0001) or aortography (15.4% vs 4.4%; p<0.0001), in the number of cine runs (21.8 vs 6.9; p<0.0001), and in the dose-area product (165 vs 71 Gy·cm²; p<0.0001). With the implementation of a simple radiation reduction protocol, a 57% reduction in dose-area product was observed without a reduction in the quality or the complexity of procedures. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez-Cardona, Daniel; Nagle, Scott K.; Department of Radiology, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53792
Purpose: Wall thickness (WT) is an airway feature of great interest for the assessment of morphological changes in the lung parenchyma. Multidetector computed tomography (MDCT) has recently been used to evaluate airway WT, but the potential risk of radiation-induced carcinogenesis, particularly in younger patients, might limit a wider use of this imaging method in clinical practice. The recent commercial implementation of the statistical model-based iterative reconstruction (MBIR) algorithm, instead of the conventional filtered back projection (FBP) algorithm, has enabled considerable radiation dose reduction in many other clinical applications of MDCT. The purpose of this work was to study the impact of radiation dose and MBIR in the MDCT assessment of airway WT. Methods: An airway phantom was scanned using a clinical MDCT system (Discovery CT750 HD, GE Healthcare) at 4 kV levels and 5 mAs levels. Both FBP and a commercial implementation of MBIR (Veo™, GE Healthcare) were used to reconstruct CT images of the airways. For each kV–mAs combination and each reconstruction algorithm, the contrast-to-noise ratio (CNR) of the airways was measured, and the WT of each airway was measured and compared with the nominal value; the relative bias and the angular standard deviation in the measured WT were calculated. For each airway and reconstruction algorithm, the overall performance of WT quantification across all of the 20 kV–mAs combinations was quantified by the sum of squares (SSQs) of the difference between the measured and nominal WT values. Finally, the particular kV–mAs combination and reconstruction algorithm that minimized radiation dose while still achieving a reference WT quantification accuracy level was chosen as the optimal acquisition and reconstruction settings. Results: The wall thicknesses of seven airways of different sizes were analyzed in the study. 
Compared with FBP, MBIR improved the CNR of the airways, particularly at low radiation dose levels. For FBP, the relative bias and the angular standard deviation of the measured WT increased steeply with decreasing radiation dose. Except for the smallest airway, MBIR enabled significant reduction in both the relative bias and angular standard deviation of the WT, particularly at low radiation dose levels; the SSQ was reduced by 50%–96% by using MBIR. The optimal reconstruction algorithm was found to be MBIR for the seven airways being assessed, and the combined use of MBIR and optimal kV–mAs selection resulted in a radiation dose reduction of 37%–83% compared with a reference scan protocol with a dose level of 1 mGy. Conclusions: The quantification accuracy of airway WT is strongly influenced by radiation dose and reconstruction algorithm. The MBIR algorithm potentially allows the desired WT quantification accuracy to be achieved with reduced radiation dose, which may enable a wider clinical use of MDCT for the assessment of airway WT, particularly for younger patients who may be more sensitive to exposures with ionizing radiation.
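The overall-performance metric used above, a sum of squared differences between measured and nominal wall thickness across all kV–mAs combinations, is straightforward to express in code. A sketch with illustrative values (the numbers below are hypothetical, not from the study):

```python
def wall_thickness_ssq(measured_wt, nominal_wt):
    """Sum of squares of (measured - nominal) wall thickness over all
    acquisition settings; lower values indicate better overall accuracy
    of WT quantification across the dose range."""
    return sum((m - nominal_wt) ** 2 for m in measured_wt)

# Hypothetical measurements over three kV-mAs settings for a 1.0 mm wall:
print(wall_thickness_ssq([1.1, 0.9, 1.0], 1.0))  # ~0.02
```

Comparing this SSQ between FBP and MBIR reconstructions of the same airway is what yields the 50%–96% reduction figure quoted in the results.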
Surinova, Silvia; Hüttenhain, Ruth; Chang, Ching-Yun; Espona, Lucia; Vitek, Olga; Aebersold, Ruedi
2013-08-01
Targeted proteomics based on selected reaction monitoring (SRM) mass spectrometry is commonly used for accurate and reproducible quantification of protein analytes in complex biological mixtures. Strictly hypothesis-driven, SRM assays quantify each targeted protein by collecting measurements on its peptide fragment ions, called transitions. To achieve sensitive and accurate quantitative results, experimental design and data analysis must consistently account for the variability of the quantified transitions. This consistency is especially important in large experiments, which increasingly require profiling up to hundreds of proteins over hundreds of samples. Here we describe a robust and automated workflow for the analysis of large quantitative SRM data sets that integrates data processing, statistical protein identification and quantification, and dissemination of the results. The integrated workflow combines three software tools: mProphet for peptide identification via probabilistic scoring; SRMstats for protein significance analysis with linear mixed-effect models; and PASSEL, a public repository for storage, retrieval and query of SRM data. The input requirements for the protocol are files with SRM traces in mzXML format, and a file with a list of transitions in a text tab-separated format. The protocol is especially suited for data with heavy isotope-labeled peptide internal standards. We demonstrate the protocol on a clinical data set in which the abundances of 35 biomarker candidates were profiled in 83 blood plasma samples of subjects with ovarian cancer or benign ovarian tumors. The time frame to realize the protocol is 1-2 weeks, depending on the number of replicates used in the experiment.
NASA Astrophysics Data System (ADS)
Migliozzi, D.; Nguyen, H. T.; Gijs, M. A. M.
2018-02-01
Immunohistochemistry (IHC) is one of the main techniques currently used in the clinic for biomarker characterization. It consists of colorimetric labeling with specific antibodies followed by microscopy analysis. The results are then used for diagnosis and therapeutic targeting. Well-known drawbacks of such protocols are their limited accuracy and precision, which prevent clinicians from obtaining quantitative and robust IHC results. In our work, we combined rapid microfluidic immunofluorescent staining with efficient image-based cell segmentation and signal quantification to increase the robustness of both the experimental and the analytical protocols. The experimental protocol is very simple and based on fast fluidic exchange in a microfluidic chamber created on top of the formalin-fixed paraffin-embedded (FFPE) slide by clamping a silicon chip with a polydimethylsiloxane (PDMS) sealing ring against it. The image-processing protocol is based on enhancement and subsequent thresholding of the local contrast of the obtained fluorescence image. As a case study, given that the human epidermal growth factor receptor 2 (HER2) protein is often used as a biomarker for breast cancer, we applied our method to HER2+ and HER2- cell lines. We report very fast (5 minutes) immunofluorescence staining of both HER2 and cytokeratin (a marker used to define the tumor region) on FFPE slides. The image-processing program can segment cells correctly and give a cell-based quantitative immunofluorescent signal. With this method, we found a reproducible, well-defined separation of the HER2-to-cytokeratin ratio between positive and negative control samples.
Quantification and characterization of leakage errors
NASA Astrophysics Data System (ADS)
Wood, Christopher J.; Gambetta, Jay M.
2018-03-01
We present a general framework for the quantification and characterization of leakage errors that result when a quantum system is encoded in the subspace of a larger system. To do this we introduce metrics for quantifying the coherent and incoherent properties of the resulting errors and we illustrate this framework with several examples relevant to superconducting qubits. In particular, we propose two quantities, the leakage and seepage rates, which together with average gate fidelity allow for characterizing the average performance of quantum gates in the presence of leakage and show how the randomized benchmarking protocol can be modified to enable the robust estimation of all three quantities for a Clifford gate set.
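In the leakage randomized benchmarking analysis this framework builds on, the population remaining in the computational subspace decays with sequence length m as p(m) = A + B·λ^m, and the leakage and seepage rates follow from the fitted A and λ. A sketch under that parametrization (the closed-form expressions are my reading of the framework, stated here as an assumption rather than a quotation):

```python
def leakage_seepage(A, lam):
    """Given the fitted asymptote A and decay constant lam of the
    computational-subspace population p(m) = A + B * lam**m from a
    leakage RB experiment, return estimates of the per-Clifford
    leakage rate L1 and seepage rate L2."""
    L1 = (1.0 - A) * (1.0 - lam)  # rate of leaving the computational subspace
    L2 = A * (1.0 - lam)          # rate of returning to the subspace
    return L1, L2

# A population curve settling at A = 0.9 with lam = 0.99 per Clifford:
L1, L2 = leakage_seepage(0.9, 0.99)
```

Together with the average gate fidelity from the standard RB decay, these two rates give the three quantities the abstract describes.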
Novel One-step Immunoassays to Quantify α-Synuclein
Bidinosti, Michael; Shimshek, Derya R.; Mollenhauer, Brit; Marcellin, David; Schweizer, Tatjana; Lotz, Gregor P.; Schlossmacher, Michael G.; Weiss, Andreas
2012-01-01
Familial Parkinson disease (PD) can result from α-synuclein gene multiplication, implicating the reduction of neuronal α-synuclein as a therapeutic target. Moreover, α-synuclein content in human cerebrospinal fluid (CSF) represents a PD biomarker candidate. However, capture-based assays for α-synuclein quantification in CSF (such as by ELISA) have shown discrepancies and have limited suitability for high-throughput screening. Here, we describe two sensitive, in-solution, time-resolved Förster's resonance energy transfer (TR-FRET)-based immunoassays for total and oligomeric α-synuclein quantification. CSF analysis showed strong concordance for total α-synuclein content between two TR-FRET assays and, in agreement with a previously characterized 36 h protocol-based ELISA, demonstrated lower α-synuclein levels in PD donors. Critically, the assay suitability for high-throughput screening of siRNA constructs and small molecules aimed at reducing endogenous α-synuclein levels was established and validated. In a small-scale proof of concept compound screen using 384 well plates, signals ranged from <30 to >120% of the mean of vehicle-treated cells for molecules known to lower and increase cellular α-synuclein, respectively. Furthermore, a reverse genetic screen of a kinase-directed siRNA library identified seven genes that modulated α-synuclein protein levels (five whose knockdown increased and two that decreased cellular α-synuclein protein). This provides critical new biological insight into cellular pathways regulating α-synuclein steady-state expression that may help guide further drug discovery efforts. Moreover, we describe an inherent limitation in current α-synuclein oligomer detection methodology, a finding that will direct improvement of future assay design. 
Our one-step TR-FRET-based platform for α-synuclein quantification provides a novel platform with superior performance parameters for the rapid screening of large biomarker cohorts and of compound and genetic libraries, both of which are essential to the development of PD therapies. PMID:22843695
Uncertainty Quantification in Alchemical Free Energy Methods.
Bhati, Agastya P; Wan, Shunzhou; Hu, Yuan; Sherborne, Brad; Coveney, Peter V
2018-06-12
Alchemical free energy methods have recently gained much importance from several reports of improved ligand-protein binding affinity predictions based on their implementation using molecular dynamics simulations. A large number of variants of such methods are available, implementing different accelerated sampling techniques and free energy estimators, each claimed to be better than the others in its own way. However, the key issues of reproducibility and quantification of the associated uncertainties in such methods have barely been discussed. Here, we apply a systematic protocol for uncertainty quantification to a number of popular alchemical free energy methods, covering both absolute and relative free energy predictions. We show that a reliable measure of error estimation is provided by ensemble simulation, that is, an ensemble of independent MD simulations, and that this applies irrespective of the free energy method. The need to use ensemble methods is fundamental and holds regardless of the duration of the molecular dynamics simulations performed.
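The ensemble-based error estimate advocated above amounts to running N independent replicas and reporting the spread of their free energy predictions rather than trusting a single trajectory. A minimal sketch (the replica values below are invented for illustration):

```python
import statistics

def ensemble_estimate(delta_gs):
    """Mean binding free energy and its standard error from an ensemble
    of independent MD replicas; a method-agnostic uncertainty measure
    that works for both absolute and relative predictions."""
    mean = statistics.mean(delta_gs)
    sem = statistics.stdev(delta_gs) / len(delta_gs) ** 0.5
    return mean, sem

# Five hypothetical replica predictions (kcal/mol):
dg, err = ensemble_estimate([-7.1, -7.4, -6.9, -7.3, -7.0])
```

Because the replicas differ only in their initial velocities (or starting conformations), their scatter directly reflects the sampling variability that a single long simulation hides.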
Jia, Xin; Fontaine, Benjamin M.; Strobel, Fred; Weinert, Emily E.
2014-01-01
A sensitive, versatile and economical method to extract and quantify cyclic nucleotide monophosphates (cNMPs), including both 3',5'-cNMPs and 2',3'-cNMPs, in mammalian tissues and cellular systems has been developed using LC-MS/MS. Challenges such as matrix effects from complex biological samples are addressed, and the protocol has been optimized to mitigate them. This protocol allows comparison of multiple cNMPs in the same system and was used to examine the relationship between tissue levels of cNMPs in a panel of rat organs. In addition, the study reports the first identification and quantification of 2',3'-cIMP. The developed method will allow quantification of cNMP levels in cells and tissues with varying disease states, providing insight into the role(s) and interplay of cNMP signalling pathways. PMID:25513747
Fantozzi, Anna; Ermolli, Monica; Marini, Massimiliano; Scotti, Domenico; Balla, Branko; Querci, Maddalena; Langrell, Stephen R H; Van den Eede, Guy
2007-02-21
An innovative covalent microsphere immunoassay, based on the use of fluorescent beads coupled to a specific antibody, was developed for the quantification of the Cry1Ab endotoxin present in MON810 and Bt11 genetically modified (GM) maize lines. In particular, a specific protocol was developed to assess the presence of Cry1Ab over a very broad range of GM maize concentrations, from 0.1 to 100% [weight of genetically modified organism (GMO)/weight]. Test linearity was achieved in the range from 0.1 to 3%, whereas the fluorescence signal increased following a nonlinear model, reaching a plateau at 25%. The limits of detection and quantification were 0.018 and 0.054%, respectively. The present study describes the first application of quantitative high-throughput immunoassays in GMO analysis.
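The reported limits (0.018% and 0.054%) sit in the ~1:3 ratio expected from the common ICH convention LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the blank (or intercept) standard deviation and S the calibration slope. A sketch assuming that convention was the one applied (the abstract does not state it explicitly, and the σ/S value below is back-calculated for illustration):

```python
def detection_limits(sigma_blank, slope):
    """ICH-style limits of detection and quantification from the blank
    standard deviation and the calibration-curve slope (same units as
    the concentration axis, here % GMO w/w)."""
    lod = 3.3 * sigma_blank / slope
    loq = 10.0 * sigma_blank / slope
    return lod, loq

# With sigma/S = 0.0054 (% GMO units), the limits match the reported values:
lod, loq = detection_limits(0.0054, 1.0)
```
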
Quantification of free circulating tumor DNA as a diagnostic marker for breast cancer.
Catarino, Raquel; Ferreira, Maria M; Rodrigues, Helena; Coelho, Ana; Nogal, Ana; Sousa, Abreu; Medeiros, Rui
2008-08-01
To determine whether the amount of circulating DNA could discriminate between breast cancer patients and healthy individuals using real-time PCR quantification methodology. Our standard protocol for quantification of cell-free plasma DNA involved 175 consecutive patients with breast cancer and 80 healthy controls. We found increased levels of circulating DNA in breast cancer patients compared to control individuals (105.2 vs. 77.06 ng/mL, p < 0.001). We also found statistically significant differences in circulating DNA amounts in patients before and after breast surgery (105.2 vs. 59.0 ng/mL, p = 0.001). Increased plasma cell-free DNA concentration was a strong risk factor for breast cancer, conferring an increased risk for the presence of this disease (OR, 12.32; 95% CI, 2.09-52.28; p < 0.001). Quantification of circulating DNA by real-time PCR may be a good and simple tool for detection of breast cancer, with potential for clinical applicability alongside other current methods used for monitoring the disease.
Keyhani, R; Scheede, S; Thielecke, I; Wenck, H; Schmucker, R; Schreiner, V; Ennen, J; Herpens, A
2009-06-01
A time- and cost-effective sweat casting method using the forearm as the test site to assess the efficacy of several anti-perspirant formulations with a low number of test subjects has been evaluated and qualified. The imprint sweat casting method is based on a two-component silicone-imprint technique and can measure the efficacy of more than eight products in parallel on the same test subject. In studies using aluminum chlorohydrate (ACH) formulations as test anti-perspirants, a clear-cut correlation could be demonstrated between sweat gland activities measured by the imprint method and gravimetric measurement of sweat gland activities. Concentration-dependent inhibition of sweat gland activity could be observed with the imprint technique up to an ACH concentration of 15%, and all formulations containing 2% ACH or above resulted in a statistically significant reduction of sweat gland activity (P < 0.001) when compared with untreated control areas. Furthermore, the SDs of individual studies using the imprint technique were in a range of ±20% of sweat gland activity, which can be regarded as rather low for in vivo measurements of a complex process like sweat secretion. A group-wise comparison between the measurements of anti-perspirant activity as determined by the imprint protocol and the Food and Drug Administration (FDA) Guideline-compliant gravimetric hot-room protocol revealed that the test results for anti-perspirant activity obtained with the imprint protocol are similar to those obtained with the hot-room protocol. Moreover, the data generated with the imprint protocol have a high predictive value for the outcome of a later guideline-compliant hot-room test. 
As the imprint casting method tends to be somewhat more sensitive for formulations with low anti-perspirant activity, and appears to be associated with less inter-assay variability than the standard gravimetric hot-room test, it may select products which later fail to pass the standard gravimetric hot-room test. Meanwhile, imprint sweat casting has proven to be a robust method for supporting efficacy-oriented product development. In later stages of utilization it might therefore even evolve into an efficient claim substantiation tool.
Favazza, Christopher P; Ferrero, Andrea; Yu, Lifeng; Leng, Shuai; McMillan, Kyle L; McCollough, Cynthia H
2017-07-01
The use of iterative reconstruction (IR) algorithms in CT generally decreases image noise and enables dose reduction. However, the amount of dose reduction possible using IR without sacrificing diagnostic performance is difficult to assess with conventional image quality metrics. Through this investigation, achievable dose reduction using a commercially available IR algorithm without loss of low contrast spatial resolution was determined with a channelized Hotelling observer (CHO) model and used to optimize a clinical abdomen/pelvis exam protocol. A phantom containing 21 low contrast disks-three different contrast levels and seven different diameters-was imaged at different dose levels. Images were created with filtered backprojection (FBP) and IR. The CHO was tasked with detecting the low contrast disks. CHO performance indicated dose could be reduced by 22% to 25% without compromising low contrast detectability (as compared to full-dose FBP images) whereas 50% or more dose reduction significantly reduced detection performance. Importantly, default settings for the scanner and protocol investigated reduced dose by upward of 75%. Subsequently, CHO-based protocol changes to the default protocol yielded images of higher quality and doses more consistent with values from a larger, dose-optimized scanner fleet. CHO assessment provided objective data to successfully optimize a clinical CT acquisition protocol.
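The channelized Hotelling observer reduces each image to a small vector of channel responses and scores detectability from the class means and pooled channel covariance. A toy sketch on simulated 1-D "images" (a random orthonormal basis stands in for the Gabor or Laguerre-Gauss channels used in practice; all data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
n, npix, nch = 200, 64, 8

# toy channel matrix: random orthonormal columns as stand-in channels
U, _ = np.linalg.qr(rng.standard_normal((npix, nch)))

signal = np.zeros(npix)
signal[20:30] = 1.0                              # low-contrast "disk"
absent = rng.standard_normal((n, npix))          # noise-only images
present = rng.standard_normal((n, npix)) + signal

v_a, v_p = absent @ U, present @ U               # channel outputs
dmean = v_p.mean(axis=0) - v_a.mean(axis=0)
S = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))        # pooled channel covariance
d_prime = float(np.sqrt(dmean @ np.linalg.solve(S, dmean)))
```

Comparing d' across dose levels and reconstruction algorithms, as in the study, is then a matter of repeating this estimate on each image set.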
Quantification of 5'-deoxy-5'-methylthioadenosine in heat-treated natural rubber latex serum.
Pitakpornpreecha, Thanawat; Plubrukarn, Anuchit; Wititsuwannakul, Rapepun
2012-01-01
5'-Deoxy-5'-methylthioadenosine (MTA) is one of the biologically active components found in natural rubber latex (NRL) serum, a common waste product from rubber plantations. In this study, the content of MTA in heat-treated NRL serum was measured in order to assess the potential of the serum as an alternative source of MTA. The objectives were to devise an HPLC/UV-based quantitative analytical protocol for the determination of MTA, and to determine the effect of heat treatment on the MTA content of NRL serum from various sources. An HPLC/UV-based determination of MTA using an acidic eluant was devised and validated. For the heat treatment, the effect of refluxing time on MTA liberation was evaluated. The quantification protocol was validated with satisfactory linearity, limits of detection and quantitation, peak-area precision, and recovery percentages from intra- and inter-day operations. The amount of MTA in the NRL sera from various sources increased with heat treatment, yielding 5-12 μg MTA/mL of serum. The devised protocol was found to be well suited to the routine determination of MTA in NRL serum. The effect of heat treatment on the MTA content also indicated another possible use for NRL serum, normally discarded in vast amounts by the rubber industry, as an alternative source of MTA. Copyright © 2011 John Wiley & Sons, Ltd.
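The limits of detection and quantitation in a validation of this kind are commonly derived from the calibration curve via the ICH Q2(R1) conventions, LOD = 3.3·σ/S and LOQ = 10·σ/S, where S is the slope and σ the residual standard deviation. A sketch with hypothetical calibration data (not the published values):

```python
import numpy as np

# hypothetical calibration: peak area vs MTA concentration (ug/mL)
conc = np.array([1.0, 2.5, 5.0, 7.5, 10.0])
area = np.array([12.1, 29.8, 60.4, 89.9, 120.6])

slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
sigma = resid.std(ddof=2)            # residual SD of the regression

lod = 3.3 * sigma / slope            # limit of detection (ug/mL)
loq = 10.0 * sigma / slope           # limit of quantitation (ug/mL)
```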
Prado-Cabrero, Alfonso; Beatty, Stephen; Stack, Jim; Howard, Alan; Nolan, John M
2016-07-01
In our previous work we identified the presence of meso-zeaxanthin [(3R,3'S)-zeaxanthin] in trout flesh and skin (Nolan et al., 2014), but were not able to quantify this carotenoid with the method used at that time. In the present study, we developed a protocol that allows for the quantification of lutein and the three stereoisomers of zeaxanthin [(3R,3'R)-zeaxanthin, meso-zeaxanthin and (3S,3'S)-zeaxanthin] in fish flesh. We tested this protocol in two species of farmed trout (Oncorhynchus mykiss and Salmo trutta), and we detected and quantified these carotenoids. The concentrations of each carotenoid detected (ranging from 1.18 ± 0.68 ng/g flesh for meso-zeaxanthin to 38.72 ± 15.87 ng/g flesh for lutein) were highly comparable between the two fish species tested. In conclusion, we report, for the first time, the concentrations of zeaxanthin stereoisomers (including meso-zeaxanthin) and lutein in trout flesh. This work adds to the knowledge of the presence of these carotenoids in the human food chain.
Tessonnière, H; Vidal, S; Barnavon, L; Alexandre, H; Remize, F
2009-02-28
Because the yeast Brettanomyces produces volatile phenols and acetic acid, it is responsible for wine spoilage. The uncontrolled accumulation of these molecules in wine leads to sensorial defects that compromise wine quality. The need for a rapid, specific, sensitive and reliable method to detect this spoilage yeast has increased over the last decade. All these requirements are met by real-time PCR. Here we propose improvements to existing methods to enhance the robustness of the assay. Six different protocols for isolating DNA from wine and three PCR mix compositions were tested, and the best method was selected. Adding insoluble PVPP during DNA extraction by a classical phenol:chloroform protocol succeeded in removing PCR inhibitors from the wine. We developed an internal control that was effective in avoiding false-negative results due to decreases in the efficiency of DNA isolation and/or amplification. The method was evaluated in an intra-laboratory study for its specificity, linearity, repeatability and reproducibility. A standard curve was established from 14 different artificially inoculated wines. The quantification limit was 31 cfu/mL.
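A real-time PCR standard curve of this kind is a linear fit of quantification cycle (Cq) against log10 cell density, which is then inverted to read unknowns. A minimal sketch with hypothetical slope and intercept values (the abstract does not report the fitted curve):

```python
import math

# hypothetical standard curve from serially diluted inoculated wine:
# Cq = slope * log10(cfu/mL) + intercept
slope, intercept = -3.4, 38.0

def cq_from_cfu(cfu):
    """Expected Cq for a given Brettanomyces density (cfu/mL)."""
    return slope * math.log10(cfu) + intercept

def cfu_per_ml(cq):
    """Invert the standard curve: cfu/mL from a measured Cq."""
    return 10 ** ((cq - intercept) / slope)
```

A quantification limit such as the reported 31 cfu/mL corresponds to the highest Cq at which the curve remains linear and reproducible.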
Unice, Kenneth M; Kreider, Marisa L; Panko, Julie M
2012-11-08
Pyrolysis (pyr)-GC/MS analysis of characteristic thermal decomposition fragments has previously been used for qualitative fingerprinting of organic sources in environmental samples. A quantitative pyr-GC/MS method based on characteristic tire polymer pyrolysis products was developed for tread particle quantification in environmental matrices including soil, sediment, and air. The feasibility of quantitative pyr-GC/MS analysis of tread was confirmed in a method evaluation study using artificial soil spiked with known amounts of cryogenically generated tread. Tread concentration determined by blinded analyses was highly correlated (r2 ≥ 0.88) with the known tread spike concentration. Two critical refinements to the initial pyrolysis protocol were identified, including the use of an internal standard and quantification by the dimeric markers vinylcyclohexene and dipentene, which have good specificity for rubber polymer and no other appreciable environmental sources. A novel use of deuterated internal standards of similar polymeric structure was developed to correct for the variable analyte recovery caused by sample size, matrix effects, and ion source variability. The resultant quantitative pyr-GC/MS protocol is reliable and transferable between laboratories.
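Internal-standard quantification of the kind described works on the analyte-to-standard peak-area ratio, which cancels recovery and instrument-response variation shared by both species. A generic sketch (the response factor and concentrations are hypothetical placeholders, not values from the study):

```python
def is_quantify(area_analyte, area_istd, conc_istd, response_factor):
    """Internal-standard quantification: the analyte/IS peak-area ratio,
    normalized by a calibration response factor, scales the known
    concentration of the spiked internal standard. Because analyte and
    deuterated standard share recovery losses, matrix and ion-source
    effects largely cancel in the ratio."""
    return (area_analyte / area_istd) / response_factor * conc_istd
```

For example, an analyte peak twice the area of a 5 µg/g deuterated spike, at unit response factor, implies roughly 10 µg/g of marker.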
How to Make Data a Blessing to Parametric Uncertainty Quantification and Reduction?
NASA Astrophysics Data System (ADS)
Ye, M.; Shi, X.; Curtis, G. P.; Kohler, M.; Wu, J.
2013-12-01
From a Bayesian point of view, the probabilities of model parameters and predictions are conditioned on the data used for parameter inference and prediction analysis. It is critical to use appropriate data for quantifying parametric uncertainty and its propagation to model predictions. However, data are always limited and imperfect. When a dataset cannot properly constrain the model parameters, it may lead to inaccurate uncertainty quantification. While in this case data appear to be a curse to uncertainty quantification, a comprehensive modeling analysis may help understand the cause and characteristics of parametric uncertainty and thus turn data into a blessing. In this study, we illustrate the impacts of data on uncertainty quantification and reduction using the example of a surface complexation model (SCM) developed to simulate uranyl (U(VI)) adsorption. The model includes two adsorption sites, referred to as strong and weak sites. The amount of uranium adsorption on these sites determines both the mean arrival time and the long tail of the breakthrough curves. There is one reaction on the weak site but two reactions on the strong site. The unknown parameters include the fractions of the total surface site density of the two sites and the surface complex formation constants of the three reactions. A total of seven experiments were conducted under different geochemical conditions to estimate these parameters. The experiments with low initial concentration of U(VI) result in a large amount of parametric uncertainty. A modeling analysis shows that this is because those experiments cannot distinguish the relative adsorption affinity of the strong and weak sites for uranium. Therefore, experiments with high initial concentration of U(VI) are needed, because in those experiments the strong site is nearly saturated and the weak site can be determined.
The experiments with high initial concentration of U(VI) are a blessing to uncertainty quantification, and the experiments with low initial concentration help modelers turn a curse into a blessing. The data impacts on uncertainty quantification and reduction are quantified using probability density functions of model parameters obtained from Markov Chain Monte Carlo simulation using the DREAM algorithm. This study provides insights to model calibration, uncertainty quantification, experiment design, and data collection in groundwater reactive transport modeling and other environmental modeling.
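DREAM is an adaptive multi-chain MCMC sampler; the underlying idea of data-conditioned posteriors can be illustrated with a single-chain Metropolis sketch on a synthetic one-parameter problem (all values here are invented for illustration and bear no relation to the SCM parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "experiment": 50 noisy observations of a parameter theta
theta_true = 2.0
y = theta_true + 0.3 * rng.standard_normal(50)

def log_post(theta):
    # flat prior on [0, 10]; Gaussian likelihood with known sigma = 0.3
    if not 0.0 <= theta <= 10.0:
        return -np.inf
    return -0.5 * np.sum((y - theta) ** 2) / 0.3 ** 2

theta, lp = 5.0, log_post(5.0)
samples = []
for _ in range(5000):
    prop = theta + 0.2 * rng.standard_normal()     # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis acceptance
        theta, lp = prop, lp_prop
    samples.append(theta)

posterior = np.array(samples[1000:])               # discard burn-in
```

With informative data the posterior concentrates near the true value; with weakly constraining data (the "curse" above) it stays broad.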
Vilela, Simone Furgeri Godinho; Junqueira, Juliana Campos; Barbosa, Junia Oliveira; Majewski, Marta; Munin, Egberto; Jorge, Antonio Olavo Cardoso
2012-06-01
The organization of biofilms in the oral cavity gives them added resistance to antimicrobial agents. The action of phenothiazinic photosensitizers on oral biofilms has already been reported. However, the action of the malachite green photosensitizer on biofilm-organized microorganisms has not been described. The objective of the present work was to compare the action of malachite green with that of the phenothiazinic photosensitizers (methylene blue and toluidine blue) on Staphylococcus aureus and Escherichia coli biofilms. The biofilms were grown on sample pieces of acrylic resin and subjected to photodynamic therapy using a 660-nm diode laser and photosensitizer concentrations ranging from 37.5 to 3000 μM. After photodynamic therapy, cells from the biofilms were dispersed in a homogenizer and cultured in Brain Heart Infusion broth for quantification of colony-forming units, per the experimental protocol. For each tested microorganism, two control groups were maintained: one exposed to the laser radiation without the photosensitizer (L+PS-) and the other treated with the photosensitizer without exposure to the red laser light (L-PS+). The results were subjected to descriptive statistical analysis. The best results for S. aureus and E. coli biofilms were obtained with photosensitizer concentrations of approximately 300 μM methylene blue, with microbial reductions of 0.8-1.0 log10; 150 μM toluidine blue, with microbial reductions of 0.9-1.0 log10; and 3000 μM malachite green, with microbial reductions of 1.6-4.0 log10. Greater microbial reduction was achieved with the malachite green photosensitizer when used at higher concentrations than those employed for the phenothiazinic dyes. Copyright © 2011 Elsevier Ltd. All rights reserved.
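The log10 reductions quoted in studies like this one are computed from control and treated colony counts; a two-line sketch makes the scale explicit:

```python
import math

def log10_reduction(cfu_control, cfu_treated):
    """Microbial reduction in log10 units, as reported for PDT biofilms.
    A 4.0-log10 reduction corresponds to killing 99.99% of cells."""
    return math.log10(cfu_control / cfu_treated)
```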
Frew, John A.; Grue, Christian E.
2012-01-01
The neonicotinoid insecticide imidacloprid (IMI) has been proposed as an alternative to carbaryl for controlling indigenous burrowing shrimp on commercial oyster beds in Willapa Bay and Grays Harbor, Washington. A focus of concern over the use of this insecticide in an aquatic environment is the potential for adverse effects from exposure in non-target species residing in the Bay, such as juvenile Chinook (Oncorhynchus tshawytscha) and cutthroat trout (O. clarki). Federal registration and State permitting approval for the use of IMI will require confirmation that the compound does not adversely impact these salmonids following field applications. This will necessitate an environmental monitoring program for evaluating exposure in salmonids following the treatment of beds. Quantification of IMI residues in tissue can be used to determine salmonid exposure to the insecticide. Refinement of an existing protocol using liquid chromatography-mass spectrometry (LC-MS) detection would provide the low limits of quantification necessary for determining exposure in individual fish, given the relatively small tissue sample sizes. However, such an approach would not be viable for the environmental monitoring effort in Willapa Bay and Grays Harbor because of the high costs associated with running multiple analyses. A new sample preparation protocol was therefore developed for use with a commercially available enzyme-linked immunosorbent assay (ELISA) for the quantification of IMI, providing a low-cost alternative to LC-MS for environmental monitoring in Willapa Bay and Grays Harbor. Extraction of the analyte from salmonid brain tissue was achieved by Dounce homogenization in 4.0 mL of 20.0 mM Triton X-100, followed by a 6 h incubation at 50-55 °C. Centrifugal ultrafiltration and reversed-phase solid phase extraction were used for sample cleanup.
The limit of quantification for an average 77.0 mg whole-brain sample was calculated at 18.2 μg kg-1 (ppb), with an average recovery of 79%. This relatively low limit of quantification allows for the analysis of individual fish. In controlled laboratory studies, a curvilinear relationship was found between the measured IMI residue concentrations in brain tissue and exposure concentrations in seawater. Additionally, a range of IMI brain residue concentrations was associated with an overt effect, illustrating the utility of the IMI tissue residue quantification approach for linking exposure with defined effects.
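Back-calculating a tissue residue from an ELISA reading on the extract involves the extract volume, the extraction recovery, and the tissue mass; a sketch using the sample size and recovery quoted above (the extract reading itself is a hypothetical value chosen to land near the reported quantification limit):

```python
def tissue_conc_ug_per_kg(extract_ng_per_ml, extract_vol_ml,
                          tissue_mg, recovery):
    """Back-calculate an IMI tissue residue (ug/kg, i.e. ppb) from an
    ELISA reading on the cleaned-up extract, correcting for extraction
    recovery. Illustrative arithmetic, not the published calibration."""
    total_ng = extract_ng_per_ml * extract_vol_ml / recovery
    return total_ng / (tissue_mg / 1000.0)   # ng per g == ug per kg

# 77.0 mg brain extracted into 4.0 mL, 79% recovery,
# hypothetical extract reading of 0.277 ng/mL
residue = tissue_conc_ug_per_kg(0.277, 4.0, 77.0, 0.79)
```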
Image processing tools dedicated to quantification in 3D fluorescence microscopy
NASA Astrophysics Data System (ADS)
Dieterlen, A.; De Meyer, A.; Colicchio, B.; Le Calvez, S.; Haeberlé, O.; Jacquey, S.
2006-05-01
3-D optical fluorescence microscopy has become an efficient tool for the volume investigation of living biological samples. Developments in instrumentation have made it possible to beat the conventional Abbe limit. In all cases, the recorded image can be described by the convolution of the original object with the Point Spread Function (PSF) of the acquisition system. Due to the finite resolution of the instrument, the original object is recorded with distortion and blurring, and contaminated by noise. As a consequence, relevant biological information cannot be extracted directly from raw data stacks. If the goal is 3-D quantitative analysis, system characterization is mandatory to assess the optimal performance of the instrument and to ensure the reproducibility of data acquisition. The PSF represents the properties of the image acquisition system; we have proposed the use of statistical tools and Zernike moments to describe the system's 3-D PSF and to quantify its variation. This first step toward standardization helps define an acquisition protocol that optimizes the exploitation of the microscope for the biological sample under study. Before extracting geometrical information and/or quantifying intensities, data restoration is mandatory. Reduction of out-of-focus light is carried out computationally by a deconvolution process. But other phenomena occur during acquisition, such as fluorescence photodegradation ("bleaching"), which alter the information needed for restoration. We have therefore developed a protocol to pre-process the data before applying deconvolution algorithms. A large number of deconvolution methods have been described and are now available in commercial packages. One major difficulty in using this software is that the user must supply the "best" regularization parameters.
We have shown that automating the choice of the regularization level greatly improves the reliability of the measurements while also facilitating use. Furthermore, pre-filtering the images increases the quality and repeatability of quantitative measurements by stabilizing the deconvolution process; pre-filtering the PSF stabilizes deconvolution in the same way. We have also shown that Zernike polynomials can be used to reconstruct an experimental PSF, preserving the system characteristics while removing the noise contained in the PSF.
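Richardson-Lucy is one classic member of the iterative deconvolution family discussed above; a 1-D sketch on synthetic data shows the multiplicative update at its core (this is a generic textbook scheme, not the authors' specific algorithm):

```python
import numpy as np
from numpy.fft import fft, ifft

def cconv(a, b):
    """Circular convolution via FFT."""
    return np.real(ifft(fft(a) * fft(b)))

def richardson_lucy(observed, psf, iters=200):
    """1-D Richardson-Lucy deconvolution with circular boundaries.
    The multiplicative update keeps the estimate nonnegative."""
    psf_mirror = np.roll(psf[::-1], 1)        # circular correlation kernel
    est = np.full_like(observed, observed.mean())
    for _ in range(iters):
        ratio = observed / np.maximum(cconv(est, psf), 1e-12)
        est = est * cconv(ratio, psf_mirror)
    return est

# synthetic object: two point sources blurred by a Gaussian "PSF"
n = 64
x = np.arange(n)
psf = np.exp(-0.5 * (np.minimum(x, n - x) / 2.0) ** 2)
psf /= psf.sum()
truth = np.zeros(n)
truth[20], truth[40] = 1.0, 0.5
observed = np.maximum(cconv(truth, psf), 0.0)  # clamp FFT round-off
restored = richardson_lucy(observed, psf)
```

On noisy data this iteration amplifies noise if run too long, which is exactly why the regularization level (here, the stopping point) benefits from the automated selection described in the text.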
Single-Molecule Fluorescence In Situ Hybridization (FISH) of Circular RNA CDR1as.
Kocks, Christine; Boltengagen, Anastasiya; Piwecka, Monika; Rybak-Wolf, Agnieszka; Rajewsky, Nikolaus
2018-01-01
Individual mRNA molecules can be imaged in fixed cells by hybridization with multiple, singly labeled oligonucleotide probes, followed by computational identification of fluorescent signals. This approach, called single-molecule RNA fluorescence in situ hybridization (smRNA FISH), allows subcellular localization and absolute quantification of RNA molecules in individual cells. Here, we describe a simple smRNA FISH protocol for two-color imaging of a circular RNA, CDR1as, simultaneously with an unrelated messenger RNA. The protocol can be adapted to circRNAs that coexist with overlapping, noncircular mRNA isoforms produced from the same genetic locus.
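The computational identification of fluorescent signals mentioned above reduces, at its simplest, to finding above-threshold local maxima in the image. A toy 2-D sketch (real smRNA FISH pipelines add Gaussian/LoG filtering and intensity fitting; this minimal detector is illustrative only):

```python
import numpy as np

def count_spots(img, thresh):
    """Naive single-molecule spot counting: pixels exceeding a threshold
    that are strict local maxima within their 3x3 neighbourhood."""
    pad = np.pad(img, 1, constant_values=-np.inf)
    # stack the 8 neighbours of every pixel
    neigh = np.stack([pad[1 + dy:pad.shape[0] - 1 + dy,
                          1 + dx:pad.shape[1] - 1 + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)])
    is_max = (img > thresh) & np.all(img > neigh, axis=0)
    return int(is_max.sum())

# toy image with two bright diffraction-limited "spots"
img = np.zeros((10, 10))
img[3, 3], img[7, 2] = 5.0, 4.0
n_spots = count_spots(img, thresh=1.0)
```

Counting such spots per segmented cell is what yields the absolute per-cell RNA quantification the protocol describes.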
De Keyser, Ellen; Desmet, Laurence; Van Bockstaele, Erik; De Riek, Jan
2013-06-24
Flower colour variation is one of the most crucial selection criteria in the breeding of flowering pot plants, as is the case for azalea (Rhododendron simsii hybrids). Flavonoid biosynthesis has been studied intensively in several species. In azalea, flower colour can be described by means of a 3-gene model. However, this model does not clarify pink coloration. Over the last decade, gene expression studies have been widely used to study flower colour. However, the methods used were often only semi-quantitative, or quantification was not done according to the MIQE guidelines. We aimed to develop an accurate protocol for RT-qPCR and to validate it in a study of flower colour in an azalea mapping population. An accurate RT-qPCR protocol first had to be established. RNA quality was evaluated in a combined approach by means of different techniques, e.g. the SPUD assay and Experion analysis. We demonstrated the importance of testing noRT samples for all genes under study to detect contaminating DNA. In spite of the limited sequence information available, we prepared a set of 11 reference genes which was validated in flower petals; a combination of three reference genes was optimal. Finally, we used plasmids for the construction of standard curves. This allowed us to calculate gene-specific PCR efficiencies for every gene to ensure accurate quantification. The validity of the protocol was demonstrated in a study of six genes of the flavonoid biosynthesis pathway. No correlations were found between flower colour and the individual expression profiles. However, the combined expression of the early pathway genes (CHS, F3H, F3'H and FLS) is clearly related to co-pigmentation with flavonols. The late pathway genes DFR and ANS are involved to a minor extent in differentiating between coloured and white flowers. Concerning pink coloration, we could demonstrate that the lower intensity in this type of flower is correlated with the expression of F3'H.
Validated, quality-assured RT-qPCR protocols are still rare in plant research. The protocol in this study can be implemented in any plant species to ensure accurate quantification of gene expression. We have been able to correlate flower colour with the combined regulation of structural genes in both the early and late branches of the pathway. This allowed us to differentiate between flower colours in a broader genetic background than in flower colour studies so far. These data will now be used for eQTL mapping to further elucidate the regulation of this pathway.
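The gene-specific PCR efficiency derived from a plasmid standard curve follows from the slope of Cq versus log10 template copies, E = 10^(-1/slope) - 1 (E = 1.0 is perfect doubling per cycle). A sketch with a hypothetical dilution series:

```python
import numpy as np

def pcr_efficiency(log10_copies, cq):
    """Gene-specific amplification efficiency from a plasmid standard
    curve: E = 10**(-1/slope) - 1."""
    slope, _ = np.polyfit(log10_copies, cq, 1)
    return 10 ** (-1.0 / slope) - 1.0

# hypothetical 10-fold dilution series: 10^7 ... 10^3 plasmid copies
log10_copies = np.array([7.0, 6.0, 5.0, 4.0, 3.0])
cq = np.array([15.0, 18.4, 21.8, 25.2, 28.6])   # slope of -3.4
eff = pcr_efficiency(log10_copies, cq)           # ~0.97, i.e. ~97%
```

Using each gene's own efficiency, rather than assuming E = 1 for all genes, is what makes the efficiency-corrected quantification in the protocol accurate.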
Karakatsanis, Nicolas A; Lodge, Martin A; Tahari, Abdel K; Zhou, Y; Wahl, Richard L; Rahmim, Arman
2013-10-21
Static whole-body PET/CT, employing the standardized uptake value (SUV), is considered the standard clinical approach to diagnosis and treatment response monitoring for a wide range of oncologic malignancies. Alternative PET protocols involving dynamic acquisition of temporal images have been implemented in the research setting, allowing quantification of tracer dynamics, an important capability for tumor characterization and treatment response monitoring. Nonetheless, dynamic protocols have been confined to single-bed coverage, limiting the axial field-of-view to ~15-20 cm, and have not been translated to the routine clinical context of whole-body PET imaging for the inspection of disseminated disease. Here, we pursue a transition to dynamic whole-body PET parametric imaging by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. We investigate solutions to address the challenges of (i) long acquisitions, (ii) a small number of dynamic frames per bed, and (iii) non-invasive quantification of kinetics in the plasma. In the present study, a novel dynamic (4D) whole-body PET acquisition protocol of ~45 min total length is presented, composed of (i) an initial 6 min dynamic PET scan (24 frames) over the heart, followed by (ii) a sequence of multi-pass multi-bed PET scans (six passes × seven bed positions, each scanned for 45 s). Standard Patlak linear graphical analysis modeling was employed, coupled with image-derived plasma input function measurements. Ordinary least squares Patlak estimation was used as the baseline regression method to quantify the physiological parameters of tracer uptake rate Ki and total blood distribution volume V on an individual voxel basis.
Extensive Monte Carlo simulation studies, using a wide set of published kinetic FDG parameters and GATE and XCAT platforms, were conducted to optimize the acquisition protocol from a range of ten different clinically acceptable sampling schedules examined. The framework was also applied to six FDG PET patient studies, demonstrating clinical feasibility. Both simulated and clinical results indicated enhanced contrast-to-noise ratios (CNRs) for Ki images in tumor regions with notable background FDG concentration, such as the liver, where SUV performed relatively poorly. Overall, the proposed framework enables enhanced quantification of physiological parameters across the whole body. In addition, the total acquisition length can be reduced from 45 to ~35 min and still achieve improved or equivalent CNR compared to SUV, provided the true Ki contrast is sufficiently high. In the follow-up companion paper, a set of advanced linear regression schemes is presented to particularly address the presence of noise, and attempt to achieve a better trade-off between the mean-squared error and the CNR metrics, resulting in enhanced task-based imaging.
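Patlak graphical analysis rewrites the tissue time-activity curve as Ct(t)/Cp(t) = Ki·(∫Cp dτ)/Cp(t) + V, so that an ordinary least squares line fit yields Ki (slope) and V (intercept). A minimal sketch on synthetic data generated from the Patlak model itself (the input function and parameter values are invented for illustration; clinically, only late frames after tracer equilibration are fit):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.5, 45.0, 40)                   # minutes post-injection

# hypothetical FDG-like plasma input (fast + slow clearance terms)
cp = 8.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-0.01 * t)
int_cp = np.concatenate(                          # cumulative trapezoid
    ([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))

Ki_true, V_true = 0.05, 0.6                       # uptake rate, blood volume
ct = Ki_true * int_cp + V_true * cp               # tissue TAC (Patlak model)

x = int_cp / cp                                   # "Patlak time"
y = ct / cp + 0.01 * rng.standard_normal(t.size)  # add measurement noise
Ki_hat, V_hat = np.polyfit(x, y, 1)               # OLS Patlak estimates
```

Applied voxel-by-voxel to the multi-pass dynamic frames, this regression produces the Ki parametric images whose CNR the study compares against SUV.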
Gaubert, Alexandra; Jeudy, Jérémy; Rougemont, Blandine; Bordes, Claire; Lemoine, Jérôme; Casabianca, Hervé; Salvador, Arnaud
2016-07-01
In a stricter legislative context, greener detergent formulations are being developed. Synthetic surfactants are frequently replaced by bio-sourced surfactants and/or used at lower concentrations in combination with enzymes. In this paper, an LC-MS/MS method was developed for the identification and quantification of enzymes in laundry detergents. Prior to the LC-MS/MS analyses, a specific sample preparation protocol was developed owing to matrix complexity (high surfactant percentages). For each enzyme family mainly used in detergent formulations (protease, amylase, cellulase, and lipase), specific peptides were identified on a high-resolution platform. An LC-MS/MS method was then developed in selected reaction monitoring (SRM) MS mode for the light and corresponding heavy peptides. The method was linear over the peptide concentration ranges 25-1000 ng/mL for protease, lipase, and cellulase; 50-1000 ng/mL for amylase; and 5-1000 ng/mL for cellulase, in both water and laundry detergent matrices. The application of the developed analytical strategy to real commercial laundry detergents enabled enzyme identification and absolute quantification. For the first time, identification and absolute quantification of enzymes in laundry detergent were realized by LC-MS/MS in a single run.
Kim, Jong-Seo; Fillmore, Thomas L; Liu, Tao; Robinson, Errol; Hossain, Mahmud; Champion, Boyd L; Moore, Ronald J; Camp, David G; Smith, Richard D; Qian, Wei-Jun
2011-12-01
Selected reaction monitoring (SRM)-MS is an emerging technology for high-throughput targeted protein quantification and verification in biomarker discovery studies; however, the cost associated with the application of stable isotope-labeled synthetic peptides as internal standards can be prohibitive for screening a large number of candidate proteins, as often required in the preverification phase of discovery studies. Herein we present a proof-of-concept study using an (18)O-labeled proteome reference as global internal standards (GIS) for SRM-based relative quantification. The (18)O-labeled proteome reference (or GIS) can be readily prepared and contains a heavy isotope ((18)O)-labeled internal standard for every possible tryptic peptide. Our results showed that the percentage of heavy isotope ((18)O) incorporation applying an improved protocol was >99.5% for most peptides investigated. The accuracy, reproducibility, and linear dynamic range of quantification were further assessed based on known ratios of standard proteins spiked into the labeled mouse plasma reference. Reliable quantification was observed with high reproducibility (i.e., coefficient of variation <10%) for analyte concentrations that were set at 100-fold higher or lower than those of the GIS, based on the light ((16)O)/heavy ((18)O) peak area ratios. The utility of the (18)O-labeled GIS was further illustrated by accurate relative quantification of 45 major human plasma proteins. Moreover, quantification of the concentrations of C-reactive protein and prostate-specific antigen was illustrated by coupling the GIS with standard additions of purified protein standards. Collectively, our results demonstrated that the use of an (18)O-labeled proteome reference as GIS provides a convenient, low-cost, and effective strategy for relative quantification of a large number of candidate proteins in biological or clinical samples using SRM.
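Because every sample is spiked with the same (18)O-labeled proteome reference, relative quantification reduces to comparing light/heavy peak-area ratios across samples: the shared internal standard cancels out in the ratio of ratios. A minimal sketch with hypothetical peak areas (not data from the study):

```python
def light_heavy_ratio(area_16o, area_18o):
    """Light (16O analyte) / heavy (18O GIS) peak area ratio for one peptide."""
    return area_16o / area_18o

def relative_abundance(ratio_sample, ratio_reference):
    """Ratio of ratios: the common 18O-labeled GIS cancels, leaving the
    relative peptide abundance between the two samples."""
    return ratio_sample / ratio_reference

# Hypothetical SRM peak areas for the same peptide in two samples
r_a = light_heavy_ratio(2.0e6, 1.0e6)   # sample A: light/heavy = 2.0
r_b = light_heavy_ratio(5.0e5, 1.0e6)   # sample B: light/heavy = 0.5
fold = relative_abundance(r_a, r_b)     # sample A has 4x the peptide of B
```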
Simple, Fast, and Sensitive Method for Quantification of Tellurite in Culture Media
Molina, Roberto C.; Burra, Radhika; Pérez-Donoso, José M.; Elías, Alex O.; Muñoz, Claudia; Montes, Rebecca A.; Chasteen, Thomas G.; Vásquez, Claudio C.
2010-01-01
A fast, simple, and reliable chemical method for tellurite quantification is described. The procedure is based on the NaBH4-mediated reduction of TeO32− followed by the spectrophotometric determination of elemental tellurium in solution. The method is highly reproducible, is stable at different pH values, and exhibits linearity over a broad range of tellurite concentrations. PMID:20525868
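Spectrophotometric quantification of this kind typically relies on a linear standard curve: absorbance is fit against known tellurite concentrations, and the fit is inverted to read off unknowns. A hedged sketch; the standards, slope, and intercept below are hypothetical, not values from the paper:

```python
def fit_line(conc, absorb):
    """Least squares calibration line: absorbance = slope*conc + intercept."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(absorb) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(conc, absorb))
             / sum((x - mx) ** 2 for x in conc))
    return slope, my - slope * mx

def conc_from_absorbance(a, slope, intercept):
    """Invert the calibration line for an unknown sample."""
    return (a - intercept) / slope

# Hypothetical tellurite standards (µM) with a linear absorbance response
stds = [0.0, 50.0, 100.0, 200.0, 400.0]
abss = [0.01 + 0.002 * c for c in stds]
m, b = fit_line(stds, abss)
unknown = conc_from_absorbance(0.21, m, b)   # hypothetical sample reading
```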
Kakkar, Chandan; Sripathi, Smiti; Parakh, Anushri; Shrivastav, Rajendra
2016-01-01
Introduction Urolithiasis is one of the major recurring problems in young individuals, and CT is the most common diagnostic modality used. Because these patients are young and stone formation is a recurring process, one of the simplest ways to reduce the radiation dose would be low-dose CT with tube current modulation. Aim The aim of this study was to compare the sensitivity and specificity of a low-dose (70 mAs) protocol with a standard-dose (250 mAs) protocol in detecting urolithiasis, and to determine the tube current and mean effective patient dose for these protocols. Materials and Methods A prospective study was conducted in 200 patients presenting with acute flank pain over a period of 2 years. CT was performed with the standard-dose protocol in 100 cases and with the low-dose protocol using tube current modulation in another 100. Sensitivity and specificity for calculus detection, and the percentage reductions in dose and tube current with the low-dose protocol, were calculated. Results Urolithiasis was detected in 138 patients: 67 examined with the standard-dose and 71 with the low-dose protocol. The sensitivity and specificity of the low-dose protocol were 97.1% and 96.4%, with similar results in high-BMI patients. Tube current modulation reduced the effective tube current by 12.17%. The mean effective patient dose was 10.33 mSv for the standard-dose protocol and 2.92 mSv for the low-dose protocol, a 51.13-53.8% reduction with the low-dose protocol. Conclusion The study reinforced that low-dose CT with tube current modulation is appropriate for the diagnosis of urolithiasis, with significant reductions in tube current and effective patient dose. PMID:27437322
Koteshwar, Prakashini; Kakkar, Chandan; Sripathi, Smiti; Parakh, Anushri; Shrivastav, Rajendra
2016-05-01
Urolithiasis is one of the major recurring problems in young individuals, and CT is the most common diagnostic modality used. Because these patients are young and stone formation is a recurring process, one of the simplest ways to reduce the radiation dose would be low-dose CT with tube current modulation. The aim of this study was to compare the sensitivity and specificity of a low-dose (70 mAs) protocol with a standard-dose (250 mAs) protocol in detecting urolithiasis, and to determine the tube current and mean effective patient dose for these protocols. A prospective study was conducted in 200 patients presenting with acute flank pain over a period of 2 years. CT was performed with the standard-dose protocol in 100 cases and with the low-dose protocol using tube current modulation in another 100. Sensitivity and specificity for calculus detection, and the percentage reductions in dose and tube current with the low-dose protocol, were calculated. Urolithiasis was detected in 138 patients: 67 examined with the standard-dose and 71 with the low-dose protocol. The sensitivity and specificity of the low-dose protocol were 97.1% and 96.4%, with similar results in high-BMI patients. Tube current modulation reduced the effective tube current by 12.17%. The mean effective patient dose was 10.33 mSv for the standard-dose protocol and 2.92 mSv for the low-dose protocol, a 51.13-53.8% reduction with the low-dose protocol. The study reinforced that low-dose CT with tube current modulation is appropriate for the diagnosis of urolithiasis, with significant reductions in tube current and effective patient dose.
SU-F-J-48: Effect of Scan Length On Magnitude of Imaging Dose in KV CBCT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deshpande, S; Naidu, S; Sutar, A
Purpose: To study the effect of scan length on the magnitude of imaging dose deposition in Varian kV CBCT for head-and-neck and pelvis CBCT. Methods: To study the effect of scan length, we measured imaging dose at a depth of 8 cm for head-and-neck cone beam computed tomography (CBCT) acquisition (100 kV X-ray beam energy, 200-degree gantry rotation) and at a depth of 16 cm for pelvis CBCT acquisition (125 kV X-ray beam energy, 360-degree gantry rotation) in a specially designed phantom. A Farmer chamber calibrated in the kV X-ray range was used for the measurements. Dose was measured with the default field size and with the field size reduced along the y direction to 10 cm and 5 cm. Results: As the energy of the beam decreases, scattered radiation increases and contributes significantly to the dose deposited in the patient. Reducing the scan length from the default 20.6 cm to 10 cm gave a dose reduction of 14% for the head-and-neck CBCT protocol and 26% for the pelvis CBCT protocol. Similarly, for a scan length of 5 cm compared with the default, the dose reduction was 36% for the head-and-neck CBCT protocol and 50% for the pelvis CBCT protocol. Conclusion: By limiting the scan length we can control the scatter radiation generated and hence the dose to the patient. The difference in dose reduction between the two protocols for the same scan length is due to the scan geometry: the pelvis CBCT protocol uses a full rotation, while the head-and-neck CBCT protocol uses a partial rotation.
Orthogonal-state-based cryptography in quantum mechanics and local post-quantum theories
NASA Astrophysics Data System (ADS)
Aravinda, S.; Banerjee, Anindita; Pathak, Anirban; Srikanth, R.
2014-02-01
We introduce the concept of cryptographic reduction, in analogy with a similar concept in computational complexity theory. In this framework, class A of crypto-protocols reduces to protocol class B in a scenario X, if for every instance a of A, there is an instance b of B and a secure transformation X that reproduces a given b, such that the security of b guarantees the security of a. Here we employ this reductive framework to study the relationship between security in quantum key distribution (QKD) and quantum secure direct communication (QSDC). We show that by replacing the streaming of independent qubits in a QKD scheme with block encoding and transmission of qubits (permuting the order of particles block by block), we can construct a QSDC scheme. This forms the basis for the block reduction from a QSDC class of protocols to a QKD class of protocols, whereby if the latter is secure, then so is the former. Conversely, given a secure QSDC protocol, we can of course construct a secure QKD scheme by transmitting a random key as the direct message. Then the QKD class of protocols is secure, assuming the security of the QSDC class from which it is built. We refer to this method of deducing security for this class of QKD protocols as key reduction. Finally, we propose an orthogonal-state-based deterministic key distribution (KD) protocol which is secure in some local post-quantum theories. Its security arises neither from geographic splitting of a code state nor from Heisenberg uncertainty, but from post-measurement disturbance.
Chaudhry, Waseem; Hussain, Nasir; Ahlberg, Alan W; Croft, Lori B; Fernandez, Antonio B; Parker, Mathew W; Swales, Heather H; Slomka, Piotr J; Henzlova, Milena J; Duvall, W Lane
2017-06-01
A stress-first myocardial perfusion imaging (MPI) protocol saves time, is cost effective, and decreases radiation exposure. A limitation of this protocol is the requirement for physician review of the stress images to determine the need for rest images. This hurdle could be eliminated if an experienced technologist and/or automated computer quantification could make this determination. Images from consecutive patients who were undergoing a stress-first MPI with attenuation correction at two tertiary care medical centers were prospectively reviewed independently by a technologist and cardiologist blinded to clinical and stress test data. Their decision on the need for rest imaging along with automated computer quantification of perfusion results was compared with the clinical reference standard of an assessment of perfusion images by a board-certified nuclear cardiologist that included clinical and stress test data. A total of 250 patients (mean age 61 years and 55% female) who underwent a stress-first MPI were studied. According to the clinical reference standard, 42 (16.8%) and 208 (83.2%) stress-first images were interpreted as "needing" and "not needing" rest images, respectively. The technologists correctly classified 229 (91.6%) stress-first images as either "needing" (n = 28) or "not needing" (n = 201) rest images. Their sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 66.7%, 96.6%, 80.0%, and 93.5%, respectively. An automated stress TPD score ≥1.2 was associated with optimal sensitivity and specificity and correctly classified 179 (71.6%) stress-first images as either "needing" (n = 31) or "not needing" (n = 148) rest images. Its sensitivity, specificity, PPV, and NPV were 73.8%, 71.2%, 34.1%, and 93.1%, respectively. In a model whereby the computer or technologist could correct for the other's incorrect classification, 242 (96.8%) stress-first images were correctly classified. 
The composite sensitivity, specificity, PPV, and NPV were 83.3%, 99.5%, 97.2%, and 96.7%, respectively. Technologists and automated quantification software had a high degree of agreement with the clinical reference standard for determining the need for rest images in a stress-first imaging protocol. Utilizing an experienced technologist and automated systems to screen stress-first images could expand the use of stress-first MPI to sites where the cardiologist is not immediately available for interpretation.
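The technologist performance reported above can be reproduced from the counts given in the abstract: of 42 scans needing rest images, 28 were caught (14 missed), and of 208 not needing them, 201 were correctly deferred (7 over-called). A short sketch of the standard diagnostic metrics:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 diagnostic test metrics as fractions."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Counts implied by the abstract's technologist results
m = diagnostic_metrics(tp=28, fp=7, tn=201, fn=14)
```

Evaluating these fractions recovers the abstract's 66.7% sensitivity, 96.6% specificity, 80.0% PPV, and 93.5% NPV.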
Traub-Dargatz, Josie L; Weese, J Scott; Rousseau, Joyce D; Dunowska, Magdalena; Morley, Paul S; Dargatz, David A
2006-07-01
Reduction factors (RFs) for bacterial counts on examiners' hands were compared when performing a standardized equine physical examination, followed by the use of one of 3 hand-hygiene protocols (washing with soap, ethanol gel application, and chlorhexidine-ethanol application). The mean RFs were 1.29 log10 and 1.44 log10 at 2 study sites for the alcohol-gel (62% ethyl alcohol active ingredient) protocols and 1.47 log10 and 1.94 log10 at 2 study sites for the chlorhexidine-alcohol (61% ethyl alcohol plus 1% chlorhexidine active ingredients) protocols, respectively. The RFs were significantly different (P < 0.0001) between the hand-washing group and the other 2 treatment groups (the alcohol-gel and the chlorhexidine-alcohol lotion). The use of alcohol-based gels or chlorhexidine-alcohol hand hygiene protocols must still be proven effective in equine practice settings, but in this study, these protocols were equivalent or superior to hand washing for reduction in bacterial load on the hands of people after they perform routine physical examinations.
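A log10 reduction factor is the base-10 logarithm of the ratio of bacterial counts before and after treatment, so an RF of 1.29 log10 corresponds to roughly a 19.5-fold reduction in count. A sketch with hypothetical colony counts (the study reports only the RFs, not raw counts):

```python
from math import log10

def reduction_factor(cfu_before, cfu_after):
    """Log10 reduction factor of a hand-hygiene treatment."""
    return log10(cfu_before / cfu_after)

# Hypothetical counts chosen to reproduce the alcohol-gel site-1 RF of 1.29
cfu_before = 1.0e6
cfu_after = cfu_before / 10 ** 1.29
rf = reduction_factor(cfu_before, cfu_after)
fold = 10 ** rf   # fold reduction implied by the RF (~19.5x for 1.29 log10)
```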
Traub-Dargatz, Josie L.; Weese, J. Scott; Rousseau, Joyce D.; Dunowska, Magdalena; Morley, Paul S.; Dargatz, David A.
2006-01-01
Abstract Reduction factors (RFs) for bacterial counts on examiners’ hands were compared when performing a standardized equine physical examination, followed by the use of one of 3 hand-hygiene protocols (washing with soap, ethanol gel application, and chlorhexidine-ethanol application). The mean RFs were 1.29 log10 and 1.44 log10 at 2 study sites for the alcohol-gel (62% ethyl alcohol active ingredient) protocols and 1.47 log10 and 1.94 log10 at 2 study sites for the chlorhexidine-alcohol (61% ethyl alcohol plus 1% chlorhexidine active ingredients) protocols, respectively. The RFs were significantly different (P < 0.0001) between the hand-washing group and the other 2 treatment groups (the alcohol-gel and the chlorhexidine-alcohol lotion). The use of alcohol-based gels or chlorhexidine-alcohol hand hygiene protocols must still be proven effective in equine practice settings, but in this study, these protocols were equivalent or superior to hand washing for reduction in bacterial load on the hands of people after they perform routine physical examinations. PMID:16898109
NASA Astrophysics Data System (ADS)
Balcerzyk, Marcin; Fernández-López, Rosa; Parrado-Gallego, Ángel; Pachón-Garrudo, Víctor Manuel; Chavero-Royan, José; Hevilla, Juan; Jiménez-Ortega, Elisa; Leal, Antonio
2017-11-01
Tumour uptake value is a critical result of quantitative [18F]FDG-PET/CT ([18F]fluorodeoxyglucose) scans, for example in dose prescription for radiotherapy and oncology. Quantification is highly dependent on the image acquisition and reconstruction protocol, especially in low-activity tumours. While adjusting the acquisition and reconstruction protocols available in our Siemens Biograph mCT scanner to the EARL (ResEARch 4 Life®) [18F]FDG-PET/CT accreditation requirements, we developed reconstruction protocols, to be used in PET-based radiotherapy planning, that reduce inter-/intra-institute variability in Standard Uptake Value (SUV) results and bring the Recovery Coefficient as close as possible to 1 for the Image Quality NEMA 2007 phantom. Primary and secondary tumours from two patients were assessed by four independent evaluators, and the influence of the reconstruction protocols on clinical tumour assessment is presented. We propose an improvement route for EARL-accredited protocols so that they may be developed in classes to take advantage of scanner capabilities. Application of the optimized reconstruction protocol eliminates the need for partial volume corrections.
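The Recovery Coefficient referred to above is the ratio of measured to true activity concentration in a phantom object; values below 1 reflect partial-volume losses, which are most severe for small spheres. A minimal illustration with hypothetical NEMA sphere readings (not data from the study):

```python
def recovery_coefficient(measured_conc, true_conc):
    """RC = measured / true activity concentration; 1.0 means full recovery."""
    return measured_conc / true_conc

# Hypothetical sphere readings (kBq/mL) against a known fill of 20 kBq/mL
true_c = 20.0
rc_small = recovery_coefficient(12.0, true_c)   # small sphere: strong PVE loss
rc_large = recovery_coefficient(19.0, true_c)   # large sphere: near-full recovery
```

Driving these ratios toward 1 across sphere sizes is what removes the need for explicit partial volume corrections.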
Suhr, Anna Catharina; Vogeser, Michael; Grimm, Stefanie H
2016-05-30
For reliable quantitative analysis of endogenous analytes in complex biological samples by isotope dilution LC-MS/MS, the creation of appropriate calibrators is a challenge, since analyte-free authentic material is in general not available. Thus, surrogate matrices are often used to prepare calibrators and controls. However, currently employed validation protocols do not include specific experiments to verify the suitability of a surrogate matrix calibration for quantification of authentic matrix samples. The aim of the study was the development of a novel validation experiment to test whether surrogate matrix based calibrators enable correct quantification of authentic matrix samples. The key element of the novel validation experiment is the inversion of nonlabelled analytes and their stable isotope labelled (SIL) counterparts with respect to their functions, i.e., the SIL compound is the analyte and the nonlabelled substance is employed as internal standard. As a consequence, both surrogate and authentic matrix are analyte-free regarding the SIL analytes, which allows a comparison of both matrices. We called this approach the Isotope Inversion Experiment. As a figure of merit we defined the accuracy of inverse quality controls in authentic matrix quantified by means of a surrogate matrix calibration curve. As a proof-of-concept application, an LC-MS/MS assay addressing six corticosteroids (cortisol, cortisone, corticosterone, 11-deoxycortisol, 11-deoxycorticosterone, and 17-OH-progesterone) was chosen. The integration of the Isotope Inversion Experiment into the validation protocol for the steroid assay was successfully realized. The accuracy results of the inverse quality controls were altogether very satisfactory. As a consequence, the suitability of a surrogate matrix calibration for quantification of the targeted steroids in human serum as authentic matrix could be successfully demonstrated.
The Isotope Inversion Experiment fills a gap in the validation process for LC-MS/MS assays quantifying endogenous analytes. We consider it a valuable and convenient tool to evaluate the correct quantification of authentic matrix samples based on a calibration curve in surrogate matrix. Copyright © 2016 Elsevier B.V. All rights reserved.
Jakubowska, Natalia; Beldì, Giorgia; Peychès Bach, Aurélie; Simoneau, Catherine
2014-01-01
This paper presents the outcome of the development, optimisation and validation at European Union level of an analytical method for using poly(2,6-diphenyl phenylene oxide) (PPPO), which is stipulated in Regulation (EU) No. 10/2011 as food simulant E, for testing specific migration from plastics into dry foodstuffs. Two methods were developed for fortifying, respectively, PPPO and a low-density polyethylene (LDPE) film with surrogate substances relevant to food contact. A protocol for cleaning the PPPO and an efficient analytical method were developed for the quantification of butylhydroxytoluene (BHT), benzophenone (BP), diisobutylphthalate (DiBP), bis(2-ethylhexyl) adipate (DEHA) and 1,2-cyclohexanedicarboxylic acid, diisononyl ester (DINCH) from PPPO. A protocol for a migration test from plastics using small migration cells was also developed. The method was validated by an inter-laboratory comparison (ILC) with 16 national reference laboratories for food contact materials in the European Union. This allowed, for the first time, data to be obtained on the precision and laboratory performance of both migration and quantification. The results showed that the validation ILC was successful even when taking into account the complexity of the exercise. The method performance was 7-9% repeatability standard deviation (rSD) for most substances (regardless of concentration), with 12% rSD for the high level of BHT and for DiBP at very low levels. The reproducibility standard deviation results for the 16 European Union laboratories were in the range of 20-30% for the quantification from PPPO (for the three levels of concentrations of the five substances) and 15-40% for migration experiments from the fortified plastic at 60°C for 10 days and subsequent quantification.
Considering the lack of data previously available in the literature, this work has demonstrated that the validation of a method is possible both for migration from a film and for quantification into a corresponding simulant for specific migration.
Gámez-Cenzano, Cristina; Pino-Sorroche, Francisco
2014-04-01
There is a growing interest in using quantification in FDG-PET/CT in oncology, especially for evaluating response to therapy. Complex full quantitative procedures with blood sampling and dynamic scanning have been clinically replaced by the use of standardized uptake value measurements that provide an index of regional tracer uptake normalized to the administered dose of FDG. Some approaches have been proposed for assessing quantitative metabolic response, such as EORTC and PERCIST criteria in solid tumors. When using standardized uptake value in clinical routine and multicenter trials, standardization of protocols and quality control procedures of instrumentation is required. Copyright © 2014 Elsevier Inc. All rights reserved.
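The standardized uptake value mentioned here normalizes regional tracer uptake to the administered dose: in its common body-weight form, SUV is the tissue activity concentration divided by the injected dose per gram of body weight, with tissue density taken as ~1 g/mL. A sketch with illustrative numbers:

```python
def suv_bw(tissue_kbq_per_ml, injected_mbq, weight_kg):
    """Body-weight SUV: tissue activity concentration divided by
    injected dose per gram of body weight (tissue density ~1 g/mL,
    decay correction assumed already applied)."""
    dose_kbq = injected_mbq * 1000.0
    weight_g = weight_kg * 1000.0
    return tissue_kbq_per_ml / (dose_kbq / weight_g)

# Illustrative values: a 5 kBq/mL lesion, 370 MBq injected, 70 kg patient
s = suv_bw(5.0, 370.0, 70.0)
```

An SUV of 1 corresponds to uptake equal to a uniform whole-body distribution of the dose, which is why protocol and instrumentation standardization matters when SUV thresholds (as in EORTC or PERCIST criteria) are compared across centers.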
Disinfection of human cardiac valve allografts in tissue banking: systematic review report.
Germain, M; Strong, D M; Dowling, G; Mohr, J; Duong, A; Garibaldi, A; Simunovic, N; Ayeni, O R
2016-12-01
Cardiovascular allografts are usually disinfected using antibiotics, but protocols vary significantly between tissue banks. It is likely that different disinfection protocols will not have the same level of efficacy; they may also have varying effects on the structural integrity of the tissue, which could lead to significant differences in clinical outcome in recipients. Ideally, a disinfection protocol should achieve the greatest bioburden reduction with the lowest possible impact on tissue integrity. We conducted a systematic review of methods applied to disinfect cardiovascular tissues. The use of multiple broad-spectrum antibiotics in conjunction with an antifungal agent resulted in the greatest reduction in bioburden. Antibiotic incubation periods were limited to less than 24 h, and most protocols incubated tissues at 4 °C; however, one study demonstrated a greater reduction of microbial load at 37 °C. None of the reviewed studies examined the impact of these disinfection protocols on the risk of infection or any other clinical outcome in recipients.
Fan, Lu; Brett, Michael T; Jiang, Wenju; Li, Bo
2017-10-01
The objective of this study was to determine the composition of nitrogen (N) in the effluents of advanced N removal (ANR) wastewater treatment plants (WWTPs). This study also tested two different experimental protocols for determining dissolved N recalcitrance. An analysis of 15 effluent samples from five WWTPs showed that effluent concentrations, and especially effluent composition, varied greatly from one system to the other, with total nitrogen (TN) ranging between 1.05 and 8.10 mg L-1. Nitrate (NO3-) accounted for 38 ± 32% of TN, and ammonium for a further 29 ± 28%. All of these samples were dominated by dissolved inorganic nitrogen (DIN; NO3- + NH4+), and uptake experiments indicated the DIN fraction was, as expected, highly bioavailable. Dissolved organic N (DON) accounted for 20 ± 11% of the total dissolved N in these effluents, and uptake experiments indicated the bioavailability of this fraction averaged 27 ± 26% depending on the WWTP assessed. These results indicate near-complete DIN removal should be the primary goal of ANR treatment systems. The comparison of bioavailable nitrogen (BAN) quantification protocols showed that the dissolved nitrogen uptake bioassay approach was clearly a more reliable way to determine BAN concentrations than the conventional cell yield protocol. Moreover, because the nitrogen uptake experiment was much more sensitive, this protocol made it easier to detect extrinsic factors (such as biological contamination or toxicity) that could affect the accuracy of these bioassays. Based on these results, we recommend the nitrogen uptake bioassay using filtered and autoclaved samples to quantify BAN concentrations. However, for effluent samples indicating toxicity, algal bioassays will not accurately quantify BAN. Copyright © 2017 Elsevier Ltd. All rights reserved.
How Efficient Is My (Medicinal) Chemistry?
Vanden Eynde, Jean Jacques
2016-01-01
“Greening” a chemical transformation is not about only changing the nature of a solvent or decreasing the reaction temperature. There are metrics enabling a critical quantification of the efficiency of an experimental protocol. Some of them are applied to different sequences for the preparation of paracetamol in order to understand their performance parameters and elucidate pathways for improvement. PMID:27196914
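Two metrics commonly used for such efficiency quantification are atom economy and Sheldon's E-factor. As an illustration, the final acetylation step of one paracetamol route (4-aminophenol + acetic anhydride -> paracetamol + acetic acid) has an atom economy of about 72%; the E-factor masses below are hypothetical:

```python
def atom_economy(mw_product, mw_reactants_sum):
    """Atom economy (%) = MW of desired product / total MW of reactants x 100."""
    return 100.0 * mw_product / mw_reactants_sum

def e_factor(total_waste_kg, product_kg):
    """Sheldon's E-factor: kg of waste per kg of product (lower is greener)."""
    return total_waste_kg / product_kg

# Acetylation of 4-aminophenol (109.13 g/mol) with acetic anhydride
# (102.09 g/mol) to paracetamol (151.16 g/mol); acetic acid is the byproduct.
ae = atom_economy(151.16, 109.13 + 102.09)

# Hypothetical process masses, purely for illustration of the metric
ef = e_factor(total_waste_kg=5.0, product_kg=1.0)
```

Comparing such metrics across alternative sequences is exactly how the pathways for improvement mentioned above are ranked.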
Neyroud, Daria; Cheng, Arthur J; Bourdillon, Nicolas; Kayser, Bengt; Place, Nicolas; Westerblad, Håkan
2016-01-01
The interpolated twitch technique (ITT) is the gold standard to assess voluntary activation and central fatigue. Yet, its validity has been questioned. Here we studied how peripheral fatigue can affect the ITT. Repeated contractions at submaximal frequencies were produced by supramaximal electrical stimulations of the human adductor pollicis muscle in vivo and of isolated rat soleus fiber bundles; an extra stimulation pulse was given during contractions to induce a superimposed twitch. Human muscles fatigued by repeated 30-Hz stimulation trains (3 s on-1 s off) showed an ~80% reduction in the superimposed twitch force accompanied by a severely reduced EMG response (M-wave amplitude), which implies action potential failure. Subsequent experiments combined a less intense stimulation protocol (1.5 s on-3 s off) with ischemia to cause muscle fatigue, but which preserved M-wave amplitude. However, the superimposed twitch force still decreased markedly more than the potentiated twitch force; with ITT this would reflect increased "voluntary activation." In contrast, the superimposed twitch force was relatively spared when a similar protocol was performed in rat soleus bundles. Force relaxation was slowed by >150% in fatigued human muscles, whereas it was unchanged in rat soleus bundles. Accordingly, results similar to those in the human muscle were obtained when relaxation was slowed by cooling the rat soleus muscles. In conclusion, our data demonstrate that muscle fatigue can confound the quantification of central fatigue using the ITT.
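With the ITT, voluntary activation is classically computed as VA (%) = (1 - superimposed twitch / potentiated twitch) x 100. The confound described above follows directly from this formula: if peripheral fatigue depresses the superimposed twitch proportionally more than the potentiated twitch, apparent VA rises with no change in true central drive. A numeric sketch (the twitch forces are hypothetical, chosen only to mirror the abstract's qualitative pattern):

```python
def voluntary_activation(superimposed_twitch, potentiated_twitch):
    """Classic ITT estimate: VA (%) = (1 - SIT/PT) x 100."""
    return (1.0 - superimposed_twitch / potentiated_twitch) * 100.0

# Fresh muscle: hypothetical 2 N superimposed twitch, 10 N potentiated twitch
va_fresh = voluntary_activation(2.0, 10.0)

# Fatigue cuts the superimposed twitch by 80% but the potentiated
# twitch by only 40%: apparent VA rises although drive is unchanged.
va_fatigued = voluntary_activation(0.4, 6.0)
```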
Catalano, Valentina; Moreno-Sanz, Paula; Lorenzi, Silvia; Grando, Maria Stella
2016-09-21
The genetic varietal authentication of wine was investigated according to DNA isolation procedures reported for enological matrices and also by testing 11 commercial extraction kits and various protocol modifications. Samples were collected at different stages of the winemaking process of renowned Italian wines Brunello di Montalcino, Lambruschi Modenesi, and Trento DOC. Results demonstrated not only that grape DNA loss is produced by the fermentation process but also that clarification and stabilization operations contribute to the reduction of double-stranded DNA content on wine. Despite the presence of inhibitors, downstream PCR genotyping yielded reliable nuclear and chloroplast SSR markers for must samples, whereas no amplification or inconsistent results were obtained at later stages of the vinification. In addition, a TaqMan genotyping assay based on cultivar-specific single-nucleotide polymorphisms (SNPs) was designed, which allowed assessment of grapevine DNA mixtures. Once the wine matrix limitations are overcome, this sensitive tool may be implemented for the relative quantification of cultivars used for blend wines or frauds.
Nicolás, Paula; Lassalle, Verónica L; Ferreira, María L
2017-02-01
The aim of this manuscript was to study the application of a new method of protein quantification in Candida antarctica lipase B commercial solutions. Error sources associated with the traditional Bradford technique were demonstrated. Eight biocatalysts based on C. antarctica lipase B (CALB) immobilized onto magnetite nanoparticles were used. Magnetite nanoparticles were coated with chitosan (CHIT) and modified with glutaraldehyde (GLUT) and aminopropyltriethoxysilane (APTS). CALB was then adsorbed on the modified support. The proposed novel protein quantification method included the determination of sulfur (from protein in the CALB solution) by means of atomic emission with inductively coupled plasma (AE-ICP). Four different protocols were applied, combining AE-ICP and classical Bradford assays, as well as carbon, hydrogen, and nitrogen (CHN) analysis. The calculated error in protein content using the "classic" Bradford method with bovine serum albumin as standard ranged from 400 to 1200% when protein in CALB solution was quantified. These errors were calculated considering as "true protein content values" the results of the amount of immobilized protein obtained with the improved method. The optimum quantification procedure involved the combination of the Bradford method, ICP, and CHN analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
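The sulfur-based approach rests on a simple proportion: the total sulfur measured by AE-ICP, divided by the protein's sulfur mass fraction (computable from its cysteine and methionine content), yields the protein concentration. A hedged sketch; both the sulfur reading and the 1% sulfur mass fraction below are assumed illustrative values, not data from the paper:

```python
def protein_from_sulfur(s_conc_mg_l, s_mass_fraction):
    """Protein concentration inferred from total sulfur measured by AE-ICP,
    given the protein's sulfur mass fraction derived from its sequence
    (Cys + Met residues). Both inputs here are illustrative assumptions."""
    return s_conc_mg_l / s_mass_fraction

# Hypothetical: 0.30 mg/L sulfur measured, assumed 1% sulfur by mass
protein_mg_l = protein_from_sulfur(0.30, 0.01)
```

The same proportionality is why a sulfur-containing contaminant, unlike in Bradford assays, shows up directly as an overestimate rather than a dye-binding artifact.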
Moody, Jonathan B; Lee, Benjamin C; Corbett, James R; Ficaro, Edward P; Murthy, Venkatesh L
2015-10-01
A number of exciting advances in PET/CT technology and improvements in methodology have recently converged to enhance the feasibility of routine clinical quantification of myocardial blood flow and flow reserve. Recent promising clinical results are pointing toward an important role for myocardial blood flow in the care of patients. Absolute blood flow quantification can be a powerful clinical tool, but its utility will depend on maintaining precision and accuracy in the face of numerous potential sources of methodological errors. Here we review recent data and highlight the impact of PET instrumentation, image reconstruction, and quantification methods, and we emphasize (82)Rb cardiac PET which currently has the widest clinical application. It will be apparent that more data are needed, particularly in relation to newer PET technologies, as well as clinical standardization of PET protocols and methods. We provide recommendations for the methodological factors considered here. At present, myocardial flow reserve appears to be remarkably robust to various methodological errors; however, with greater attention to and more detailed understanding of these sources of error, the clinical benefits of stress-only blood flow measurement may eventually be more fully realized.
Seyer, Alexandre; Fenaille, François; Féraudet-Tarisse, Cecile; Volland, Hervé; Popoff, Michel R; Tabet, Jean-Claude; Junot, Christophe; Becher, François
2012-06-05
Epsilon toxin (ETX) is one of the most lethal toxins produced by Clostridium species and is considered a potential bioterrorist weapon. Here, we present a rapid mass spectrometry-based method for ETX quantification in complex matrixes. As a prerequisite, naturally occurring prototoxin and toxin species were first structurally characterized by top-down and bottom-up experiments to identify the most pertinent peptides for quantification. Following selective ETX immunoextraction and trypsin digestion, two proteotypic peptides shared by all the toxin forms were separated by ultraperformance liquid chromatography (UPLC) and monitored by ESI-MS (electrospray ionization-mass spectrometry) operating in multiple reaction monitoring (MRM) mode with collision-induced dissociation. Thorough protocol optimization, i.e., a 15 min immunocapture, a 2 h enzymatic digestion, and a UPLC-MS/MS detection, allowed the whole quantification process, including the calibration curve, to be performed in less than 4 h without compromising assay robustness and sensitivity. The assay sensitivity in milk and serum was estimated at 5 ng·mL(-1) for ETX, making this approach complementary to enzyme-linked immunosorbent assay (ELISA) techniques.
Rossano, Adam J; Romero, Michael F
2017-08-11
Epithelial ion transport is vital to systemic ion homeostasis as well as maintenance of essential cellular electrochemical gradients. Intracellular pH (pHi) is influenced by many ion transporters and thus monitoring pHi is a useful tool for assessing transporter activity. Modern Genetically Encoded pH-Indicators (GEpHIs) provide optical quantification of pHi in intact cells on a cellular and subcellular scale. This protocol describes real-time quantification of cellular pHi regulation in Malpighian Tubules (MTs) of Drosophila melanogaster through ex vivo live-imaging of pHerry, a pseudo-ratiometric GEpHI with a pKa well-suited to track pH changes in the cytosol. Extracted adult fly MTs are composed of morphologically and functionally distinct sections of single-cell layer epithelia, and can serve as an accessible and genetically tractable model for investigation of epithelial transport. GEpHIs offer several advantages over conventional pH-sensitive fluorescent dyes and ion-selective electrodes. GEpHIs can label distinct cell populations provided appropriate promoter elements are available. This labeling is particularly useful in ex vivo, in vivo, and in situ preparations, which are inherently heterogeneous. GEpHIs also permit quantification of pHi in intact tissues over time without need for repeated dye treatment or tissue externalization. The primary drawback of current GEpHIs is the tendency to aggregate in cytosolic inclusions in response to tissue damage and construct over-expression. These shortcomings, their solutions, and the inherent advantages of GEpHIs are demonstrated in this protocol through assessment of basolateral proton (H+) transport in functionally distinct principal and stellate cells of extracted fly MTs.
The techniques and analysis described are readily adaptable to a wide variety of vertebrate and invertebrate preparations, and the sophistication of the assay can be scaled from teaching labs to intricate determination of ion flux via specific transporters.
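For a pseudo-ratiometric indicator such as pHerry, the fluorescence ratio is typically converted to pHi through a sigmoidal calibration of the standard form shown below. The calibration constants (R_min, R_max, pKa) here are hypothetical, not values from the protocol:

```python
import math

def ratio_to_pH(R, R_min, R_max, pKa):
    """Convert a (pseudo-)ratiometric fluorescence ratio R to intracellular pH
    using the standard sigmoidal calibration pH = pKa + log10((R - Rmin)/(Rmax - R)).
    All constants are hypothetical illustrative values."""
    return pKa + math.log10((R - R_min) / (R_max - R))

# At the midpoint ratio, the computed pH equals the indicator's pKa.
mid = (0.2 + 1.8) / 2.0
print(round(ratio_to_pH(mid, R_min=0.2, R_max=1.8, pKa=7.0), 6))  # 7.0
```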
Specific and quantitative detection of human polyomaviruses BKV, JCV, and SV40 by real time PCR.
McNees, Adrienne L; White, Zoe S; Zanwar, Preeti; Vilchez, Regis A; Butel, Janet S
2005-09-01
The polyomaviruses that infect humans, BK virus (BKV), JC virus (JCV), and simian virus 40 (SV40), typically establish subclinical persistent infections. However, reactivation of these viruses in immunocompromised hosts is associated with renal nephropathy and hemorrhagic cystitis (HC) caused by BKV and with progressive multifocal leukoencephalopathy (PML) caused by JCV. Additionally, SV40 is associated with several types of human cancers including primary brain and bone cancers, mesotheliomas, and non-Hodgkin's lymphoma. Advancements in detection of these viruses may contribute to improved diagnosis and treatment of affected patients. Our objective was to develop sensitive and specific real-time quantitative polymerase chain reaction (RQ-PCR) assays for the detection of T-antigen DNA sequences of the human polyomaviruses BKV, JCV, and SV40 using the ABI Prism 7000 Sequence Detection System. Assays for absolute quantification of the viral T-ag sequences were designed and their sensitivity and specificity were evaluated. A quantitative assay to measure the single-copy human RNAse P gene was also developed and evaluated in order to normalize viral gene copy numbers to cell numbers. Quantification of the target genes is sensitive and specific over a 7-log dynamic range. Ten copies each of the viral and cellular genes are reproducibly and accurately detected. The sensitivity of detection of the RQ-PCR assays is increased 10- to 100-fold compared to conventional PCR and agarose gel protocols. The primers and probes used to detect the viral genes are specific for each virus, and there is no cross-reactivity within the dynamic range of the standard dilutions. The sensitivity of detection for these assays is not reduced in human cellular extracts; however, different DNA extraction protocols may affect quantification. These assays provide a technique for rapid and specific quantification of polyomavirus genomes per cell in human samples.
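The normalization step described above, converting viral copies to copies per cell via the single-copy RNAse P gene, can be sketched as follows, under the standard assumption of two RNAse P copies per diploid cell (the copy numbers below are hypothetical):

```python
def viral_copies_per_cell(viral_copies, rnasep_copies, rnasep_per_cell=2):
    """Normalize viral genome copies to cell number using the single-copy
    RNAse P gene (two alleles per diploid cell)."""
    cells = rnasep_copies / rnasep_per_cell
    return viral_copies / cells

# 1000 viral genomes and 200 RNAse P copies -> 100 cells -> 10 copies/cell.
print(viral_copies_per_cell(1000, 200))  # 10.0
```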
Wallace, Adam N; Vyhmeister, Ross; Bagade, Swapnil; Chatterjee, Arindam; Hicks, Brandon; Ramirez-Giraldo, Juan Carlos; McKinstry, Robert C
2015-06-01
Cerebrospinal fluid shunts are primarily used for the treatment of hydrocephalus. Shunt complications may necessitate multiple non-contrast head CT scans, resulting in potentially high levels of radiation dose starting at an early age. A new head CT protocol using automatic exposure control and automated tube potential selection has been implemented at our institution to reduce radiation exposure. The purpose of this study was to evaluate the reduction in radiation dose achieved by this protocol compared with a protocol with fixed parameters. A retrospective sample of 60 non-contrast head CT scans assessing for cerebrospinal fluid shunt malfunction was identified, 30 of which were performed with each protocol. The radiation doses of the two protocols were compared using the volume CT dose index and dose-length product. The diagnostic acceptability and quality of each scan were evaluated by three independent readers. The new protocol lowered the average volume CT dose index from 15.2 to 9.2 mGy, representing a 39% reduction (P < 0.01; 95% CI 35-44%), and lowered the dose-length product from 259.5 to 151.2 mGy·cm, representing a 42% reduction (P < 0.01; 95% CI 34-50%). The new protocol produced diagnostically acceptable scans with image quality comparable to the fixed-parameter protocol. A pediatric shunt non-contrast head CT protocol using automatic exposure control and automated tube potential selection reduced patient radiation dose compared with a fixed-parameter protocol while producing diagnostic images of comparable quality.
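The reported percentage reductions follow directly from the dose figures in the abstract; a quick check:

```python
def percent_reduction(before, after):
    """Percentage reduction between a baseline and a new value."""
    return (before - after) / before * 100.0

ctdi = percent_reduction(15.2, 9.2)    # volume CT dose index, mGy
dlp = percent_reduction(259.5, 151.2)  # dose-length product, mGy*cm
print(round(ctdi), round(dlp))  # 39 42, matching the reported reductions
```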
Capraro, Geoffrey A; Mader, Timothy J; Coughlin, Bret F; Lovewell, Carolanne; St Louis, Myron R L; Tirabassi, Michael; Wadie, George; Smithline, Howard A
2007-04-01
To assess whether near-infrared spectroscopy can detect testicular hypoxia in a sheep model of testicular torsion within 6 hours of experimental torsion. This was a randomized, controlled, nonblinded study. Trans-scrotal near-infrared spectroscopy-derived testicular tissue oxygen saturation values were obtained from the posterior hemiscrota of 6 anesthetized sheep at baseline and every 15 minutes for 6 hours after either an experimental-side, 720-degree, unilateral, medial testicular torsion and orchidopexy or a control-side sham procedure with orchidopexy, and then for 75 minutes after reduction of torsion and pexy. Color Doppler ultrasonography was performed every 30 minutes to confirm loss of vascular flow on the experimental side, return of flow after torsion reduction, and preserved flow on the control side. Near-infrared spectroscopy detected a prompt, sustained reduction in testicular tissue oxygen saturation after experimental torsion. Further, it documented a rapid return of these values to pretorsion levels after reduction of torsion. Experimental-side testicular tissue oxygen saturation fell from a median value of 59% (interquartile range [IQR] 57% to 69%) at baseline to 14% (IQR 11% to 29%) at 2.5 hours of torsion, and postreduction values were approximately 70%. Control-side testicular tissue oxygen saturation values increased from a median value of 67% (IQR 59% to 68%) at baseline to 77% (IQR 77% to 94%) at 2.5 hours and remained at approximately 80% for the entire protocol. The difference in median testicular tissue oxygen saturation between experimental and control sides, using the Friedman test, was found to be significant (P=.017). This study demonstrates the feasibility, in a sheep model, of using near-infrared spectroscopy for the noninvasive diagnosis of testicular torsion and for quantification of reperfusion after torsion reduction.
The applicability of these findings, from an animal model using complete torsion, to the clinical setting remains to be established.
Piletska, Elena V; Karim, Kal; Cutler, Malcolm; Piletsky, Sergey A
2013-01-01
A polymeric adsorbent for extraction of the antimalarial drug artemisinin from Artemisia annua L. was computationally designed. This polymer demonstrated a high capacity for artemisinin (120 mg g(-1)), quantitative recovery (87%), and was found to be an effective material for purification of artemisinin from a complex plant matrix. Artemisinin quantification was conducted using an optimised HPLC-MS protocol characterised by high precision and linearity in the concentration range between 0.05 and 2 μg mL(-1). Optimisation of the purification protocol also involved screening of commercial adsorbents for the removal of waxes and other interfering natural compounds, which inhibit the crystallisation of artemisinin. As a result of the two-step purification protocol, crystals of artemisinin were obtained, and artemisinin purity was evaluated as 75%. By performing the second stage of purification twice, the purity of artemisinin can be further improved to 99%. The developed protocol produced high-purity artemisinin using only a few purification steps, making it suitable for a large-scale industrial manufacturing process. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
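The HPLC-MS quantification relies on a linear calibration over 0.05-2 μg mL(-1). A minimal sketch of fitting a calibration line and back-calculating an unknown, with invented peak areas (the study's actual calibration data are not given in the abstract):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b for a calibration curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration standards spanning 0.05-2 ug/mL (toy linear data).
conc = [0.05, 0.5, 1.0, 2.0]
area = [10.0, 100.0, 200.0, 400.0]
slope, intercept = fit_line(conc, area)

# Back-calculate the concentration of an unknown sample from its peak area.
unknown = (260.0 - intercept) / slope
print(round(unknown, 2))  # 1.3 (ug/mL)
```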
Pelz, Johann Otto; Weinreich, Anna; Karlas, Thomas; Saur, Dorothee
2017-01-01
Currently, colour-coded duplex sonography (2D-CDS) is the clinical standard for detection and grading of internal carotid artery stenosis (ICAS). However, unlike angiographic imaging modalities, 2D-CDS assesses ICAS by its hemodynamic effects rather than by luminal changes. The aim of this study was to evaluate freehand 3D ultrasound (3DUS) for direct visualisation and quantification of ICAS. Thirty-seven patients with 43 ICAS were examined with 2D-CDS as the reference standard and with freehand B-mode and power-mode 3DUS. The stenotic value of 3D-reconstructed ICAS was calculated as the distal diameter or distal cross-sectional area (CSA) reduction percentage and compared with 2D-CDS. There was a trend towards, but no significant difference in, successful 3D reconstruction of ICAS between B-mode and power-mode (examiner 1 {Ex1} 81% versus 93%, examiner 2 {Ex2} 84% versus 88%). Inter-rater agreement was best for power-mode 3DUS with assessment of the stenotic value as distal CSA reduction percentage (intraclass correlation coefficient {ICC} 0.90), followed by power-mode 3DUS with distal diameter reduction percentage (ICC 0.81). Inter-rater agreement was poor for B-mode 3DUS (ICC, distal CSA reduction 0.36; distal diameter reduction 0.51). Intra-rater agreement for power-mode 3DUS was good for both measuring methods (ICC, distal CSA reduction 0.88 {Ex1} and 0.78 {Ex2}; distal diameter reduction 0.83 {Ex1} and 0.76 {Ex2}). In comparison to 2D-CDS, inter-method agreement was good and clearly better for power-mode 3DUS (ICC, distal diameter reduction percentage: Ex1 0.85, Ex2 0.78; distal CSA reduction percentage: Ex1 0.63, Ex2 0.57) than for B-mode 3DUS (ICC, distal diameter reduction percentage: Ex1 0.40, Ex2 0.52; distal CSA reduction percentage: Ex1 0.15, Ex2 0.51). Non-invasive power-mode 3DUS is superior to B-mode 3DUS for imaging and quantification of ICAS.
Further studies are therefore warranted to compare power-mode 3DUS with the angiographic gold-standard imaging modalities for quantification of ICAS, i.e. CTA or CE-MRA.
Security of Continuous-Variable Quantum Key Distribution via a Gaussian de Finetti Reduction
NASA Astrophysics Data System (ADS)
Leverrier, Anthony
2017-05-01
Establishing the security of continuous-variable quantum key distribution against general attacks in a realistic finite-size regime is an outstanding open problem in the field of theoretical quantum cryptography if we restrict our attention to protocols that rely on the exchange of coherent states. Indeed, techniques based on the uncertainty principle are not known to work for such protocols, and the usual tools based on de Finetti reductions only provide security for unrealistically large block lengths. We address this problem here by considering a new type of Gaussian de Finetti reduction that exploits the invariance of some continuous-variable protocols under the action of the unitary group U(n) (instead of the symmetric group S_n as in usual de Finetti theorems), and by introducing generalized SU(2,2) coherent states. Crucially, combined with an energy test, this allows us to truncate the Hilbert space globally instead of at the single-mode level as in previous approaches that failed to provide security in realistic conditions. Our reduction shows that it is sufficient to prove the security of these protocols against Gaussian collective attacks in order to obtain security against general attacks, thereby confirming rigorously the widely held belief that Gaussian attacks are indeed optimal against such protocols.
Arellano, Cécile; Allal, Ben; Goubaa, Anwar; Roché, Henri; Chatelut, Etienne
2014-11-01
A selective and accurate analytical method is needed to quantify tamoxifen and its phase I metabolites in a prospective clinical protocol, for evaluation of the pharmacokinetic parameters of tamoxifen and its metabolites in adjuvant treatment of breast cancer. The selectivity of the analytical method is a fundamental criterion, allowing the main active (Z)-isomer metabolites to be quantified separately from the (Z)'-isomers. A UPLC-MS/MS method was developed and validated for the quantification of (Z)-tamoxifen, (Z)-endoxifen, (E)-endoxifen, Z'-endoxifen, (Z)'-endoxifen, (Z)-4-hydroxytamoxifen, (Z)-4'-hydroxytamoxifen, N-desmethyltamoxifen, and tamoxifen-N-oxide. The validation range was set between 0.5 ng/mL and 125 ng/mL for the 4-hydroxytamoxifen and endoxifen isomers, and between 12.5 ng/mL and 300 ng/mL for tamoxifen, N-desmethyltamoxifen, and tamoxifen-N-oxide. The method was applied to patient plasma samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Bach, H-J; Jessen, I; Schloter, M; Munch, J C
2003-01-01
Real-time TaqMan-PCR assays were developed for detection, differentiation, and absolute quantification of the pathogenic subspecies of Clavibacter michiganensis (Cm) in a single PCR run. The designed primer pair, targeting intergenic sequences of the rRNA operon (ITS) common to all subspecies, was suitable for amplification of the expected 223-nt DNA fragments of all subspecies. Closely related bacteria were completely discriminated, except for Rathayibacter iranicus, for which weak PCR product bands appeared on agarose gels after 35 PCR cycles. Sufficient specificity of PCR detection was achieved by introducing the additional subspecies-specific probes used in TaqMan-PCR. Only Cm species were detected, and there was clear differentiation among the subspecies C. michiganensis sepedonicus (Cms), C. michiganensis michiganensis (Cmm), C. michiganensis nebraskensis (Cmn), C. michiganensis insidiosus (Cmi), and C. michiganensis tessellarius (Cmt). The TaqMan assays were optimized to enable simultaneous quantification of each subspecies. Validity is shown by comparison with cell counts.
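Absolute quantification in real-time PCR is typically read off a standard curve of quantification cycle (Cq) versus log10 copy number. A sketch with a hypothetical curve (the study's actual curve parameters are not reported; a slope of -3.32 corresponds to 100% amplification efficiency):

```python
def copies_from_cq(cq, slope=-3.32, intercept=38.0):
    """Back-calculate starting copy number N0 from a Cq value using a
    standard curve Cq = slope*log10(N0) + intercept (hypothetical curve)."""
    return 10 ** ((cq - intercept) / slope)

# One full slope-step below the intercept corresponds to ten starting copies.
print(round(copies_from_cq(34.68)))  # 10
```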
This generic verification protocol provides a detailed method for conducting and reporting results from verification testing of pesticide application technologies. It can be used to evaluate technologies for their potential to reduce spray drift, hence the term “drift reduction t...
Abad, Neetu; Malik, Tasneem; Ariyarajah, Archchun; Ongpin, Patricia; Hogben, Matthew; McDonald, Suzanna L R; Marrinan, Jaclyn; Massaquoi, Thomas; Thorson, Anna; Ervin, Elizabeth; Bernstein, Kyle; Ross, Christine; Liu, William J; Kroeger, Karen; Durski, Kara N; Broutet, Nathalie; Knust, Barbara; Deen, Gibrilla F
2017-09-01
During the 2014-2016 West Africa Ebola Virus Disease (EVD) epidemic, the public health community had concerns that sexual transmission of the Ebola virus (EBOV) from EVD survivors was a risk, due to EBOV persistence in body fluids of EVD survivors, particularly semen. The Sierra Leone Ebola Virus Persistence Study was initiated to investigate this risk by assessing EBOV persistence in numerous body fluids of EVD survivors and providing risk reduction counseling based on test results for semen, vaginal fluid, menstrual blood, urine, rectal fluid, sweat, tears, saliva, and breast milk. This publication describes implementation of the counseling protocol and the key lessons learned. The Ebola Virus Persistence Risk Reduction Behavioral Counseling Protocol was developed from a framework used to prevent transmission of HIV and other sexually transmitted infections. The framework helped to identify barriers to risk reduction and facilitated the development of a personalized risk-reduction plan, particularly around condom use and abstinence. Pre-test and post-test counseling sessions included risk reduction guidance, and post-test counseling was based on the participants' individual test results. The behavioral counseling protocol enabled study staff to translate the study's body fluid test results into individualized information for study participants. The Ebola Virus Persistence Risk Reduction Behavioral Counseling Protocol provided guidance to mitigate the risk of EBOV transmission from EVD survivors. It has since been shared with and adapted by other EVD survivor body fluid testing programs and studies in Ebola-affected countries.
Pore network quantification of sandstones under experimental CO2 injection using image analysis
NASA Astrophysics Data System (ADS)
Berrezueta, Edgar; González-Menéndez, Luís; Ordóñez-Casado, Berta; Olaya, Peter
2015-04-01
Automated image identification and quantification of minerals, pores, and textures, together with petrographic analysis, can be applied to improve pore-system characterization in sedimentary rocks. Our case study focuses on applying these techniques to study the evolution of a rock pore network subjected to supercritical CO2 injection. We propose a Digital Image Analysis (DIA) protocol that guarantees measurement reproducibility and reliability. It can be summarized in the following stages: (i) detailed description of mineralogy and texture (before and after CO2 injection) by optical and scanning electron microscopy (SEM) techniques using thin sections; (ii) adjustment and calibration of DIA tools; (iii) a data-acquisition protocol based on image capture under different polarization conditions (synchronized movement of polarizers); and (iv) study and quantification by DIA that allow (a) identification and isolation of pixels that belong to the same category (minerals vs. pores in each sample) and (b) measurement of changes in the pore network after the samples have been exposed to new conditions (in our case, SC-CO2 injection). Finally, interpretation of the petrography and the measured data by an automated approach was performed. In our applied study, the DIA results highlight the changes observed by SEM and microscopic techniques, which consisted of a porosity increase after CO2 treatment. Other changes were minor: variations in the roughness and roundness of pore edges and in pore aspect ratio, seen in the larger pore population. Additionally, statistical tests on the measured pore parameters were applied to verify that the differences observed between samples before and after CO2 injection were significant.
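Stage (iv), counting pore pixels in segmented images before and after injection, reduces to a porosity fraction. A toy sketch on small binary images (the grids below are invented for illustration):

```python
def porosity(binary_image):
    """Fraction of pore pixels (value 1) in a segmented image given as a
    list of rows of 0/1 values."""
    pore = sum(sum(row) for row in binary_image)
    total = sum(len(row) for row in binary_image)
    return pore / total

# Toy segmented thin-section images before and after SC-CO2 injection.
before = [[0, 0, 1, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 0]]
after  = [[0, 1, 1, 0],
          [0, 1, 0, 1],
          [1, 0, 0, 0]]
print(round(porosity(before), 2), round(porosity(after), 2))  # 0.17 0.42
```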
The Tool for the Reduction and Assessment of Chemical and other environmental Impacts (TRACI) was developed to allow the quantification of environmental impacts for a variety of impact categories which are necessary for a comprehensive impact assessment. See Figure 1. TRACI is c...
The report gives results of a quantification of the level of fugitive emission reductions resulting from the use of enclosed doctor blade (EDB) systems in place of traditional ink feed systems at flexographic and rotogravure printing operations. An EDB system is an innovative ink...
Censi, F; Barbaro, V; Bartolini, P; Calcagnini, G; Michelucci, A; Gensini, G F; Cerutti, S
2000-01-01
The aim of this study was to determine the presence of organization of atrial activation processes during atrial fibrillation (AF) by assessing whether the activation sequences are wholly random or are governed by deterministic mechanisms. We performed both linear and nonlinear analyses based on the cross correlation function (CCF) and recurrence plot quantification (RPQ), respectively. Recurrence plots were quantified by three variables: percent recurrence (PR), percent determinism (PD), and entropy of recurrences (ER). We recorded bipolar intra-atrial electrograms in two atrial sites during chronic AF in 19 informed subjects, following two protocols. In one, both recording sites were in the right atrium; in the other protocol, one site was in the right atrium, the other one in the left atrium. We extracted 19 episodes of type I AF (Wells' classification). RPQ detected transient recurrent patterns in all the episodes, while CCF was significant only in ten episodes. Surrogate data analysis, based on a cross-phase randomization procedure, decreased PR, PD, and ER values. The detection of spatiotemporal recurrent patterns together with the surrogate data results indicate that during AF a certain degree of local organization exists, likely caused by deterministic mechanisms of activation.
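Percent recurrence (PR) and percent determinism (PD) can be computed from a thresholded distance matrix of the signal. A toy implementation, without phase-space embedding and with the main diagonal included for simplicity (full recurrence quantification analysis usually excludes it):

```python
def recurrence_matrix(x, eps):
    """Thresholded distance matrix: R[i][j] = 1 if |x[i]-x[j]| < eps."""
    n = len(x)
    return [[1 if abs(x[i] - x[j]) < eps else 0 for j in range(n)] for i in range(n)]

def percent_recurrence(R):
    """Percentage of matrix points that are recurrent."""
    n = len(R)
    return 100.0 * sum(map(sum, R)) / (n * n)

def percent_determinism(R, lmin=2):
    """Percentage of recurrent points lying on diagonal lines of length >= lmin."""
    n = len(R)
    total = sum(map(sum, R))
    in_lines = 0
    for d in range(-(n - 1), n):
        diag = [R[i][i + d] for i in range(n) if 0 <= i + d < n]
        run = 0
        for v in diag + [0]:  # trailing 0 flushes the final run
            if v:
                run += 1
            else:
                if run >= lmin:
                    in_lines += run
                run = 0
    return 100.0 * in_lines / total if total else 0.0

# Strictly periodic toy signal: every recurrent point lies on a diagonal line,
# so determinism is maximal.
x = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
R = recurrence_matrix(x, eps=0.5)
print(percent_recurrence(R), percent_determinism(R))  # 50.0 100.0
```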
Unice, Kenneth M.; Kreider, Marisa L.; Panko, Julie M.
2012-01-01
Pyrolysis (pyr)-GC/MS analysis of characteristic thermal decomposition fragments has previously been used for qualitative fingerprinting of organic sources in environmental samples. A quantitative pyr-GC/MS method based on characteristic tire polymer pyrolysis products was developed for tread particle quantification in environmental matrices including soil, sediment, and air. The feasibility of quantitative pyr-GC/MS analysis of tread was confirmed in a method evaluation study using artificial soil spiked with known amounts of cryogenically generated tread. Tread concentration determined by blinded analyses was highly correlated (r2 ≥ 0.88) with the known tread spike concentration. Two critical refinements to the initial pyrolysis protocol were identified: use of an internal standard, and quantification by the dimeric markers vinylcyclohexene and dipentene, which have good specificity for rubber polymer with no other appreciable environmental sources. A novel use of deuterated internal standards of similar polymeric structure was developed to correct for the variable analyte recovery caused by sample size, matrix effects, and ion-source variability. The resultant quantitative pyr-GC/MS protocol is reliable and transferable between laboratories. PMID:23202830
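Internal-standard quantification scales the analyte response by the known spike of the deuterated standard, cancelling recovery and ion-source drift. A minimal sketch with hypothetical peak areas and an assumed relative response factor of 1:

```python
def analyte_amount(analyte_area, istd_area, istd_amount, rrf=1.0):
    """Internal-standard quantification: analyte/IS peak-area ratio, scaled by
    the spiked IS amount and a relative response factor (hypothetical values)."""
    return (analyte_area / istd_area) * istd_amount / rrf

# A deuterated marker of similar polymeric structure corrects for variable
# recovery; hypothetical peak areas and a 10 ug spike:
print(analyte_amount(5000.0, 2500.0, 10.0))  # 20.0 (ug)
```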
Flow cytometry for enrichment and titration in massively parallel DNA sequencing
Sandberg, Julia; Ståhl, Patrik L.; Ahmadian, Afshin; Bjursell, Magnus K.; Lundeberg, Joakim
2009-01-01
Massively parallel DNA sequencing is revolutionizing genomics research throughout the life sciences. However, the reagent costs and labor requirements in current sequencing protocols are still substantial, although improvements are continuously being made. Here, we demonstrate an effective alternative to existing sample titration protocols for the Roche/454 system using Fluorescence Activated Cell Sorting (FACS) technology to determine the optimal DNA-to-bead ratio prior to large-scale sequencing. Our method, which eliminates the need for costly pilot sequencing of samples during titration, is capable of rapidly providing accurate DNA-to-bead ratios that are not biased by the quantification and sedimentation steps included in current protocols. Moreover, we demonstrate that FACS sorting can be readily used to highly enrich fractions of beads carrying template DNA, with near-total elimination of empty beads and no downstream sacrifice of DNA sequencing quality. Automated enrichment by FACS is a simple approach to obtain pure samples for bead-based sequencing systems, and offers an efficient, low-cost alternative to current enrichment protocols. PMID:19304748
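Titration seeks a DNA-to-bead ratio that balances the fraction of loaded beads against the fraction carrying a single (monoclonal) template. Under the usual Poisson-loading assumption for emulsion PCR (an assumption of this sketch, not stated in the abstract), the trade-off looks like:

```python
import math

def fraction_of_beads_with_template(lam):
    """Under Poisson loading with mean lam templates/bead, P(>=1 template)."""
    return 1.0 - math.exp(-lam)

def fraction_monoclonal_among_loaded(lam):
    """Share of template-carrying beads that received exactly one molecule."""
    return (lam * math.exp(-lam)) / (1.0 - math.exp(-lam))

lam = 0.5  # hypothetical DNA-to-bead ratio
print(round(fraction_of_beads_with_template(lam), 3))   # ~0.393 loaded
print(round(fraction_monoclonal_among_loaded(lam), 3))  # ~0.771 of those monoclonal
```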
Protocol biopsies in renal transplantation: prognostic value of structural monitoring.
Serón, D; Moreso, F
2007-09-01
The natural history of renal allograft damage has been characterized in serial protocol biopsies. The prevalence of subclinical rejection (SCR) is maximal during the first months, and it is associated with the progression of interstitial fibrosis/tubular atrophy (IF/TA) and decreased graft survival. IF/TA progresses rapidly during the first months and constitutes an independent predictor of graft survival. IF/TA associated with transplant vasculopathy, SCR, or transplant glomerulopathy implies a poorer prognosis than IF/TA without additional lesions. These observations suggest that protocol biopsies could be considered a surrogate of graft survival. Preliminary data suggest that the predictive value of protocol biopsies is not inferior to that of acute rejection or renal function. Additionally, protocol biopsies have been employed as a secondary efficacy variable in clinical trials. This strategy has been useful to demonstrate a decrease in the progression of IF/TA in some calcineurin-free regimens. Quantification of renal damage is associated with graft survival, suggesting that quantitative parameters might improve the predictive value of protocol biopsies. Validation of protocol biopsies as a surrogate of graft survival is actively pursued, as classical surrogates of graft outcome such as acute rejection have become less useful because of their decreased prevalence with current immunosuppression.
The importance of the Montreal Protocol in protecting climate.
Velders, Guus J M; Andersen, Stephen O; Daniel, John S; Fahey, David W; McFarland, Mack
2007-03-20
The 1987 Montreal Protocol on Substances that Deplete the Ozone Layer is a landmark agreement that has successfully reduced the global production, consumption, and emissions of ozone-depleting substances (ODSs). ODSs are also greenhouse gases that contribute to the radiative forcing of climate change. Using historical ODSs emissions and scenarios of potential emissions, we show that the ODS contribution to radiative forcing most likely would have been much larger if the ODS link to stratospheric ozone depletion had not been recognized in 1974 and followed by a series of regulations. The climate protection already achieved by the Montreal Protocol alone is far larger than the reduction target of the first commitment period of the Kyoto Protocol. Additional climate benefits that are significant compared with the Kyoto Protocol reduction target could be achieved by actions under the Montreal Protocol, by managing the emissions of substitute fluorocarbon gases and/or implementing alternative gases with lower global warming potentials.
Cost-effectiveness analysis of risk-reduction measures to reach water safety targets.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof; Pettersson, Thomas J R
2011-01-01
Identifying the most suitable risk-reduction measures in drinking water systems requires a thorough analysis of possible alternatives. In addition to the effects on the risk level, the economic aspects of the risk-reduction alternatives are also commonly considered important. Drinking water supplies are complex systems and, to avoid sub-optimisation of risk-reduction measures, the entire system from source to tap needs to be considered. There is a lack of methods for quantification of water supply risk reduction in an economic context for entire drinking water systems. The aim of this paper is to present a novel approach for risk assessment in combination with economic analysis to evaluate risk-reduction measures based on a source-to-tap approach. The approach combines a probabilistic and dynamic fault tree method with cost-effectiveness analysis (CEA). The developed approach comprises the following main parts: (1) quantification of the risk reduction of alternatives using a probabilistic fault tree model of the entire system; (2) combination of the modelling results with CEA; and (3) evaluation of the alternatives with respect to the risk reduction, the probability of not reaching water safety targets, and the cost-effectiveness. The fault tree method and CEA enable comparison of risk-reduction measures in the same quantitative unit and consider costs and uncertainties. The approach provides a structured and thorough analysis of risk-reduction measures that facilitates transparency and long-term planning of drinking water systems in order to avoid sub-optimisation of available resources for risk reduction. Copyright © 2010 Elsevier Ltd. All rights reserved.
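Parts (2)-(3), combining modelled risk reduction with CEA, amount to ranking alternatives by cost per unit of risk reduction. A toy sketch with invented measures, costs, and risk-reduction values (not from the paper):

```python
def cost_effectiveness(alternatives):
    """Rank risk-reduction alternatives by cost per unit of risk reduction.
    `alternatives` maps a name to (annual_cost, risk_reduction)."""
    ratios = {name: cost / reduction
              for name, (cost, reduction) in alternatives.items()}
    return sorted(ratios.items(), key=lambda kv: kv[1])

# Hypothetical measures for a source-to-tap system (cost in currency units/yr,
# risk reduction in expected failure days/yr avoided):
ranked = cost_effectiveness({
    "UV disinfection":     (120_000, 0.40),
    "extra barrier":       (300_000, 0.60),
    "improved monitoring": (50_000, 0.10),
})
print(ranked[0][0])  # UV disinfection: cheapest per unit of risk reduced
```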
Gariani, Joanna; Martin, Steve P; Botsikas, Diomidis; Becker, Christoph D; Montet, Xavier
2018-06-14
To compare radiation dose and image quality of thoracoabdominal scans obtained with a high-pitch protocol (pitch 3.2) and iterative reconstruction (Sinogram Affirmed Iterative Reconstruction, SAFIRE) with those of a standard-pitch protocol reconstructed with filtered back projection (FBP) using dual source CT. A total of 114 CT scans were performed (Somatom Definition Flash, Siemens Healthineers, Erlangen, Germany): 39 thoracic, 54 thoracoabdominal and 21 abdominal scans. Three protocols were analysed: a pitch of 1 reconstructed with FBP; a pitch of 3.2 reconstructed with SAFIRE; and a pitch of 3.2 with stellar detectors reconstructed with SAFIRE. Objective and subjective image analyses were performed, and the doses of the protocols were compared. Dose was reduced when comparing scans with a pitch of 1 reconstructed with FBP to high-pitch scans (pitch 3.2) reconstructed with SAFIRE, with a reduction in volume CT dose index of 75% for thoracic scans, 64% for thoracoabdominal scans and 67% for abdominal scans. There was a further reduction after the implementation of stellar detectors, reflected in a 36% reduction of the dose-length product for thoracic scans. This was not to the detriment of image quality: contrast-to-noise ratio, signal-to-noise ratio and the qualitative image analysis revealed superior image quality in the high-pitch protocols. The combination of a high-pitch protocol with iterative reconstruction allows significant dose reduction in routine chest and abdominal scans whilst maintaining or improving diagnostic image quality, with a further reduction in thoracic scans with stellar detectors. Advances in knowledge: High-pitch imaging with iterative reconstruction is a tool that can be used to reduce dose without sacrificing image quality.
Santos, Hugo M; Reboiro-Jato, Miguel; Glez-Peña, Daniel; Nunes-Miranda, J D; Fdez-Riverola, Florentino; Carvallo, R; Capelo, J L
2010-09-15
The decision peptide-driven (DPD) tool implements a software application for assisting the user in a protocol for accurate protein quantification based on the following steps: (1) protein separation through gel electrophoresis; (2) in-gel protein digestion; (3) direct and inverse (18)O-labeling; and (4) matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI) analysis. The DPD software compares the MALDI results of the direct and inverse (18)O-labeling experiments and quickly identifies those peptides with paralleled losses in different sets of a typical proteomic workflow. Those peptides are used for subsequent accurate protein quantification. The interpretation of the MALDI data from direct and inverse labeling experiments is time-consuming, as all comparisons must be done manually. The DPD software shortens and simplifies the search for the peptides that must be used for quantification from a week to just a few minutes. To do so, it takes as input several MALDI spectra and aids the researcher in an automatic mode (i) to compare data from direct and inverse (18)O-labeling experiments, calculating the corresponding ratios to determine those peptides with paralleled losses throughout different sets of experiments; and (ii) to use those peptides as internal standards for subsequent accurate protein quantification using (18)O-labeling. In this work the DPD software is presented and explained with the quantification of the protein carbonic anhydrase. Copyright (c) 2010 Elsevier B.V. All rights reserved.
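The ratio comparison that the DPD tool automates can be sketched as follows. This is a hypothetical Python illustration, not the tool's actual code: the peptide names, ratio values, and tolerance are invented, and the assumption that the labeling ratio flips in the inverse experiment is only a plausible reading of the direct/inverse comparison described above.

```python
# Hedged sketch: flag peptides whose direct and inverse (18)O-labeling
# ratios agree, i.e. peptides with "paralleled losses" across experiments.
# Names, values, and the tolerance are hypothetical.

def paralleled_peptides(direct, inverse, tolerance=0.1):
    """Select peptides whose direct ratio matches the reciprocal inverse ratio."""
    selected = []
    for peptide, d_ratio in direct.items():
        i_ratio = inverse.get(peptide)
        if i_ratio is None or i_ratio == 0:
            continue
        # Assumption: in the inverse-labeling experiment the ratio flips,
        # so the direct ratio is compared against 1/inverse.
        if abs(d_ratio - 1.0 / i_ratio) <= tolerance:
            selected.append(peptide)
    return selected

direct = {"PEPTIDEA": 0.95, "PEPTIDEB": 1.40, "PEPTIDEC": 1.02}
inverse = {"PEPTIDEA": 1.04, "PEPTIDEB": 0.98, "PEPTIDEC": 0.99}

print(paralleled_peptides(direct, inverse))  # peptides usable as internal standards
```

Peptides passing this screen would then serve as internal standards for the subsequent (18)O-based quantification, as the abstract describes.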
This test/QA plan for evaluating the generic high-speed wind tunnel test protocol, representing aerial application, for pesticide spray drift reduction technologies (DRT) for row and field crops is in conformance with EPA Requirements for Quality Assurance Project Plans (EPA QA/R...
Tenten-Diepenmaat, Marloes; Dekker, Joost; Steenbergen, Menno; Huybrechts, Elleke; Roorda, Leo D; van Schaardenburg, Dirkjan; Bus, Sicco A; van der Leeden, Marike
2016-03-01
Improving foot orthoses (FOs) in patients with rheumatoid arthritis (RA) by using in-shoe plantar pressure measurements seems promising. The objectives of this study were to evaluate (1) the outcome on plantar pressure distribution of FOs that were adapted using in-shoe plantar pressure measurements according to a protocol and (2) the protocol feasibility. Forty-five RA patients with foot problems were included in this observational proof-of-concept study. FOs were custom-made by a podiatrist according to usual care. Regions of Interest (ROIs) for plantar pressure reduction were selected. According to a protocol, usual care FOs were evaluated using in-shoe plantar pressure measurements and, if necessary, adapted. Plantar pressure-time integrals at the ROIs were compared between the following conditions: (1) no-FO versus usual care FO and (2) usual care FO versus adapted FO. Semi-structured interviews were held with patients and podiatrists to evaluate the feasibility of the protocol. Adapted FOs were developed in 70% of the patients. In these patients, usual care FOs showed a mean 9% reduction in pressure-time integral at forefoot ROIs compared to no-FOs (p=0.01). FO adaptation led to an additional mean 3% reduction in pressure-time integral (p=0.05). The protocol was considered feasible by patients. Podiatrists considered the protocol more useful for achieving individual rather than general treatment goals. A final protocol was proposed. Using in-shoe plantar pressure measurements for adapting foot orthoses for patients with RA leads to a small additional plantar pressure reduction in the forefoot. Further research on the clinical relevance of this outcome is required. Copyright © 2016. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Gurney, K. R.
2014-12-01
Scientific research on quantification of anthropogenic greenhouse gas emissions at national and sub-national scales within the US has advanced considerably in the last decade. Large investment has been made in building systems capable of observing greenhouse gases in the atmosphere at multiple scales, measuring direct anthropogenic fluxes near sources and modeling the linkages between fluxes and observed concentrations. Much of this research has been focused on improving the "verification" component of "monitoring, reporting, and verification" and indeed, has achieved successes in recent years. However, there are opportunities for ongoing scientific research to contribute critical new information to policymakers. In order to realize this contribution, additional, but complementary, research foci must be emphasized. Examples include more focus on anthropogenic emission drivers, quantification at scales relevant to human decision-making, and exploration of cost versus uncertainty in observing/modeling systems. I will review what I think are the opportunities to better align scientific research with current and emerging US climate change policymaking. I will then explore a few examples of where expansion or alteration of greenhouse gas flux quantification research focus could better align with current and emerging US climate change policymaking, such as embodied in the proposed EPA rule aimed at reducing emissions from US power plants, California's ongoing emissions reduction policymaking and aspirational emission reduction efforts in multiple US cities.
WE-AB-204-05: Harmonizing PET/CT Quantification in Multicenter Studies: A Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marques da Silva, A; Fischer, A
2015-06-15
Purpose: To present the implementation of a strategy to harmonize FDG PET/CT quantification (SUV) performed with different scanner models and manufacturers. Methods: The strategy was based on Boellaard (2011) and the EARL FDG-PET/CT accreditation program, which propose quality control measurements for harmonizing scanner performance. A NEMA IEC Body phantom study was performed using four different devices: PHP-1 (Gemini TF Base, Philips); PHP-2 (Gemini GXL, Philips); GEH (Discovery 600, General Electric); SMS (Biograph Hi-Rez 16, Siemens). The SUV Recovery Coefficient (RC) was calculated using the clinical protocol and other clinically relevant reconstruction parameters. The most appropriate reconstruction parameters (MARP) for SUV harmonization in each scanner are those which achieve the EARL harmonizing standards. They were identified using the lowest root mean square errors (RMSE). To evaluate the strategy's effectiveness, the Maximum Differences (MD) between the clinical and MARP RC values were calculated. Results: The reconstruction parameters that obtained the lowest RMSE are: FBP 5mm (PHP-1); LOR-RAMLA 2i0.008l (PHP-2); VuePointHD 2i32s10mm (GEH); and FORE+OSEM 4i8s6mm (SMS). Thus, to ensure that quantitative PET image measurements are interchangeable between these sites, images must be reconstructed with the above-mentioned parameters. However, a decoupling between the best image for PET/CT qualitative analysis and the best image for quantification studies was observed. The MD showed that the strategy was effective in reducing the variability of SUV quantification for small structures (<17mm). Conclusion: The harmonization strategy of the SUV quantification implemented with these devices was effective in reducing the variability of small-structure quantification, minimizing the inter-scanner and inter-institution differences in quantification.
However, in addition to the harmonization of quantification, the standardization of the methodology of patient preparation must be maintained, in order to minimize the SUV variability due to biological factors. Financial support by CAPES.
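The MARP selection step, picking the reconstruction whose recovery coefficients best match the EARL targets by lowest RMSE, can be sketched as below. All numeric values are illustrative placeholders, not the study's measurements or the actual EARL specifications.

```python
# Hedged sketch: choose the reconstruction parameter set whose recovery
# coefficients (RC) have the lowest RMSE against harmonizing target values.
# Target and candidate RC values are hypothetical.
import math

def rmse(measured, target):
    """Root mean square error between measured and target RC values."""
    return math.sqrt(sum((m - t) ** 2 for m, t in zip(measured, target)) / len(target))

# Hypothetical target RCs per phantom sphere size (smallest to largest).
earl_target = [0.31, 0.59, 0.73, 0.83, 0.91, 0.95]

candidates = {
    "OSEM 2i32s 10mm": [0.28, 0.55, 0.70, 0.82, 0.90, 0.94],
    "OSEM 4i16s 5mm":  [0.45, 0.75, 0.88, 0.97, 1.05, 1.10],
}

best = min(candidates, key=lambda k: rmse(candidates[k], earl_target))
print(best)
```

Note the trade-off the abstract reports: the parameters minimizing RMSE for quantification need not produce the visually best image for qualitative reading.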
Chaudhry, Waseem; Hussain, Nasir; Ahlberg, Alan W.; Croft, Lori B.; Fernandez, Antonio B.; Parker, Mathew W.; Swales, Heather H.; Slomka, Piotr J.; Henzlova, Milena J.; Duvall, W. Lane
2016-01-01
Background A stress-first myocardial perfusion imaging (MPI) protocol saves time, is cost effective, and decreases radiation exposure. A limitation of this protocol is the requirement for physician review of the stress images to determine the need for rest images. This hurdle could be eliminated if an experienced technologist and/or automated computer quantification could make this determination. Methods Images from consecutive patients who were undergoing a stress-first MPI with attenuation correction at two tertiary care medical centers were prospectively reviewed independently by a technologist and cardiologist blinded to clinical and stress test data. Their decision on the need for rest imaging along with automated computer quantification of perfusion results was compared with the clinical reference standard of an assessment of perfusion images by a board-certified nuclear cardiologist that included clinical and stress test data. Results A total of 250 patients (mean age 61 years and 55% female) who underwent a stress-first MPI were studied. According to the clinical reference standard, 42 (16.8%) and 208 (83.2%) stress-first images were interpreted as “needing” and “not needing” rest images, respectively. The technologists correctly classified 229 (91.6%) stress-first images as either “needing” (n = 28) or “not needing” (n = 201) rest images. Their sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 66.7%, 96.6%, 80.0%, and 93.5%, respectively. An automated stress TPD score ≥1.2 was associated with optimal sensitivity and specificity and correctly classified 179 (71.6%) stress-first images as either “needing” (n = 31) or “not needing” (n = 148) rest images. Its sensitivity, specificity, PPV, and NPV were 73.8%, 71.2%, 34.1%, and 93.1%, respectively. In a model whereby the computer or technologist could correct for the other's incorrect classification, 242 (96.8%) stress-first images were correctly classified. 
The composite sensitivity, specificity, PPV, and NPV were 83.3%, 99.5%, 97.2%, and 96.7%, respectively. Conclusion Technologists and automated quantification software had a high degree of agreement with the clinical reference standard for determining the need for rest images in a stress-first imaging protocol. Utilizing an experienced technologist and automated systems to screen stress-first images could expand the use of stress-first MPI to sites where the cardiologist is not immediately available for interpretation. PMID:26566774
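The technologist's accuracy figures can be reproduced from the counts reported in the abstract (42 scans needing rest images, of which 28 were correctly flagged; 208 not needing them, of which 201 were correctly cleared):

```python
# Reproducing the reported diagnostic metrics from the abstract's counts.

def diagnostic_metrics(tp, fn, tn, fp):
    """Standard diagnostic accuracy metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# 42 "needing" scans: 28 true positives, 14 false negatives.
# 208 "not needing" scans: 201 true negatives, 7 false positives.
m = diagnostic_metrics(tp=28, fn=14, tn=201, fp=7)
for name, value in m.items():
    print(f"{name}: {value:.1%}")
# sensitivity 66.7%, specificity 96.6%, ppv 80.0%, npv 93.5%, matching the abstract
```

The same function applied to the automated TPD counts (31 true positives, 148 true negatives) reproduces the quantification software's reported values as well.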
Malik, Tasneem; Ariyarajah, Archchun; Ongpin, Patricia; Hogben, Matthew; McDonald, Suzanna L. R.; Marrinan, Jaclyn; Massaquoi, Thomas; Thorson, Anna; Ervin, Elizabeth; Bernstein, Kyle; Ross, Christine; Liu, William J.; Kroeger, Karen; Durski, Kara N.; Broutet, Nathalie; Knust, Barbara; Deen, Gibrilla F.
2017-01-01
Background During the 2014–2016 West Africa Ebola Virus Disease (EVD) epidemic, the public health community had concerns that sexual transmission of the Ebola virus (EBOV) from EVD survivors was a risk, due to EBOV persistence in body fluids of EVD survivors, particularly semen. The Sierra Leone Ebola Virus Persistence Study was initiated to investigate this risk by assessing EBOV persistence in numerous body fluids of EVD survivors and providing risk reduction counseling based on test results for semen, vaginal fluid, menstrual blood, urine, rectal fluid, sweat, tears, saliva, and breast milk. This publication describes implementation of the counseling protocol and the key lessons learned. Methodology/Principal findings The Ebola Virus Persistence Risk Reduction Behavioral Counseling Protocol was developed from a framework used to prevent transmission of HIV and other sexually transmitted infections. The framework helped to identify barriers to risk reduction and facilitated the development of a personalized risk-reduction plan, particularly around condom use and abstinence. Pre-test and post-test counseling sessions included risk reduction guidance, and post-test counseling was based on the participants’ individual test results. The behavioral counseling protocol enabled study staff to translate the study’s body fluid test results into individualized information for study participants. Conclusions/Significance The Ebola Virus Persistence Risk Reduction Behavioral Counseling Protocol provided guidance to mitigate the risk of EBOV transmission from EVD survivors. It has since been shared with and adapted by other EVD survivor body fluid testing programs and studies in Ebola-affected countries. PMID:28892490
Roy R. Rosenberger; Carl J. Houtman
2000-01-01
The USPS Image Analysis (IA) protocol recommends the use of hydrophobic dyes to develop contrast between pressure sensitive adhesive (PSA) particles and cellulosic fibers before using a dirt counter to detect all contaminants that have contrast with the handsheet background. Unless the sample contains no contaminants other than those of interest, two measurement steps...
Abbatiello, Susan E; Mani, D R; Schilling, Birgit; Maclean, Brendan; Zimmerman, Lisa J; Feng, Xingdong; Cusack, Michael P; Sedransk, Nell; Hall, Steven C; Addona, Terri; Allen, Simon; Dodder, Nathan G; Ghosh, Mousumi; Held, Jason M; Hedrick, Victoria; Inerowicz, H Dorota; Jackson, Angela; Keshishian, Hasmik; Kim, Jong Won; Lyssand, John S; Riley, C Paige; Rudnick, Paul; Sadowski, Pawel; Shaddox, Kent; Smith, Derek; Tomazela, Daniela; Wahlander, Asa; Waldemarson, Sofia; Whitwell, Corbin A; You, Jinsam; Zhang, Shucha; Kinsinger, Christopher R; Mesri, Mehdi; Rodriguez, Henry; Borchers, Christoph H; Buck, Charles; Fisher, Susan J; Gibson, Bradford W; Liebler, Daniel; Maccoss, Michael; Neubert, Thomas A; Paulovich, Amanda; Regnier, Fred; Skates, Steven J; Tempst, Paul; Wang, Mu; Carr, Steven A
2013-09-01
Multiple reaction monitoring (MRM) mass spectrometry coupled with stable isotope dilution (SID) and liquid chromatography (LC) is increasingly used in biological and clinical studies for precise and reproducible quantification of peptides and proteins in complex sample matrices. Robust LC-SID-MRM-MS-based assays that can be replicated across laboratories and ultimately in clinical laboratory settings require standardized protocols to demonstrate that the analysis platforms are performing adequately. We developed a system suitability protocol (SSP), which employs a predigested mixture of six proteins, to facilitate performance evaluation of LC-SID-MRM-MS instrument platforms, configured with nanoflow-LC systems interfaced to triple quadrupole mass spectrometers. The SSP was designed for use with low multiplex analyses as well as high multiplex approaches when software-driven scheduling of data acquisition is required. Performance was assessed by monitoring of a range of chromatographic and mass spectrometric metrics including peak width, chromatographic resolution, peak capacity, and the variability in peak area and analyte retention time (RT) stability. The SSP, which was evaluated in 11 laboratories on a total of 15 different instruments, enabled early diagnoses of LC and MS anomalies that indicated suboptimal LC-MRM-MS performance. The observed range in variation of each of the metrics scrutinized serves to define the criteria for optimized LC-SID-MRM-MS platforms for routine use, with pass/fail criteria for system suitability performance measures defined as peak area coefficient of variation <0.15, peak width coefficient of variation <0.15, standard deviation of RT <0.15 min (9 s), and RT drift <0.5 min (30 s). The deleterious effect of a marginally performing LC-SID-MRM-MS system on the limit of quantification (LOQ) in targeted quantitative assays illustrates the use and need for an SSP to establish robust and reliable system performance.
Use of a SSP helps to ensure that analyte quantification measurements can be replicated with good precision within and across multiple laboratories and should facilitate more widespread use of MRM-MS technology by the basic biomedical and clinical laboratory research communities.
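The pass/fail criteria defined above translate directly into a check on summary statistics from a suitability run. In this sketch the threshold values are the ones reported in the abstract; the example run's metric values are hypothetical.

```python
# Hedged sketch: applying the SSP pass/fail criteria to one platform's
# summary statistics. Thresholds are from the abstract; the example
# measurements are hypothetical.

CRITERIA = {
    "peak_area_cv": 0.15,   # coefficient of variation of peak area (unitless)
    "peak_width_cv": 0.15,  # coefficient of variation of peak width (unitless)
    "rt_sd_min": 0.15,      # standard deviation of retention time, minutes
    "rt_drift_min": 0.5,    # retention-time drift, minutes
}

def system_suitable(metrics):
    """Return (passed, failures) for one LC-SID-MRM-MS platform."""
    failures = [name for name, limit in CRITERIA.items()
                if metrics[name] >= limit]
    return (not failures, failures)

run = {"peak_area_cv": 0.08, "peak_width_cv": 0.11,
       "rt_sd_min": 0.05, "rt_drift_min": 0.62}
passed, failures = system_suitable(run)
print(passed, failures)  # → False ['rt_drift_min']
```

A run failing any single criterion would be flagged for troubleshooting before targeted quantification proceeds, in line with the protocol's stated purpose of early diagnosis of LC and MS anomalies.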
ERIC Educational Resources Information Center
Orchowski, Lindsay M.; Gidycz, Christine A.; Raffle, Holly
2008-01-01
The current study extends the development and evaluation of an existing and previously evaluated sexual assault risk reduction program with a self-defense component for college women (N = 300). The program protocol was revised to address psychological barriers to responding assertively to risky dating situations, and a placebo-control group was…
Pourhajibagher, Maryam; Raoofian, Reza; Ghorbanzadeh, Roghayeh; Bahador, Abbas
2018-03-01
The infected root canal system harbors one of the highest accumulations of polymicrobial infections. Since the eradication of endopathogenic microbiota is a major goal in endodontic infection therapy, photo-activated disinfection (PAD) can be used as an alternative therapeutic method in endodontic treatment. Compared to cultivation-based approaches, molecular techniques are more reliable for identifying microbial agents associated with endodontic infections. The purpose of this study was to evaluate the ability of a designed multiplex real-time PCR protocol for the rapid detection and quantification of six common microorganisms involved in endodontic infection before and after PAD. Samples were taken from the root canals of 50 patients with primary and secondary/persistent endodontic infections using sterile paper points. PAD with toluidine blue O (TBO) plus a diode laser was performed on root canals. Resampling was then performed, and the samples were transferred to transport medium. The six target microorganisms were then detected using multiplex real-time PCR before and after PAD. Veillonella parvula was found using multiplex real-time PCR to have the highest frequency among samples collected before PAD (29.4%), followed by Porphyromonas gingivalis (23.1%), Aggregatibacter actinomycetemcomitans (13.6%), Actinomyces naeslundii (13.0%), Enterococcus faecalis (11.5%), and Lactobacillus rhamnosus (9.4%). After TBO-mediated PAD, P. gingivalis strains, the most resistant microorganisms, were recovered in 41.7% of the samples using the molecular approach (P > 0.05). As the results show, multiplex real-time PCR, as an accurate high-throughput detection approach, and TBO-mediated PAD, as an efficient antimicrobial strategy given the significant reduction of the endopathogenic count, can be used for the detection and treatment, respectively, of microbiota involved in infected root canals. Copyright © 2018 Elsevier B.V. All rights reserved.
15 CFR 990.52 - Injury assessment-quantification.
Code of Federal Regulations, 2011 CFR
2011-01-01
... REGULATIONS NATURAL RESOURCE DAMAGE ASSESSMENTS Restoration Planning Phase § 990.52 Injury assessment... extent of injury to a natural resource, with subsequent translation of that adverse change to a reduction...
NASA Astrophysics Data System (ADS)
Terzidis, Michael; Chatgilialoglu, Chryssostomos
2015-07-01
5',8-Cyclo-2'-deoxyadenosine (cdA) and 5',8-cyclo-2'-deoxyguanosine (cdG) are lesions resulting from hydroxyl radical (HO•) attack on the 5'H of the nucleoside sugar moiety and exist in both 5'R and 5'S diastereomeric forms. Increased levels of cdA and cdG are linked to Nucleotide Excision Repair mechanism deficiency and mutagenesis. Discrepancies in the damage measurements reported over recent years indicated the weakness of the existing protocols, in particular for ensuring the quantitative release of these lesions from the DNA sample and the appropriate method for their analysis. Herein we report a detailed revision leading to a cost-effective and efficient protocol for DNA damage measurement, consisting of the enzymatic combination of nuclease benzonase and nuclease P1 for DNA digestion, followed by liquid chromatography isotope dilution tandem mass spectrometry analysis.
Immunoelectron microscopy in embryos.
Sierralta, W D
2001-05-01
Immunogold labeling of proteins in sections of embryos embedded in acrylate media provides an important analytical tool when the resolving power of the electron microscope is required to define sites of protein function. The protocol presented here was established to analyze the role and dynamics of the activated protein kinase C/Rack1 regulatory system in the patterning and outgrowth of limb bud mesenchyme. With minor changes, especially in the composition of the fixative solution, the protocol should be easily adaptable for the postembedding immunogold labeling of any other antigen in tissues of embryos of diverse species. Quantification of the labeling can be achieved by using electron microscope systems capable of supporting digital image analysis. Copyright 2001 Academic Press.
Maier, Barbara; Vogeser, Michael
2013-04-01
Isotope dilution LC-MS/MS methods used in the clinical laboratory typically involve multi-point external calibration in each analytical series. Our aim was to test the hypothesis that determination of target analyte concentrations directly derived from the relation of the target analyte peak area to the peak area of a corresponding stable isotope labelled internal standard compound [direct isotope dilution analysis (DIDA)] may not be inferior to conventional external calibration with respect to accuracy and reproducibility. Quality control samples and human serum pools were analysed by LC-MS/MS in a comparative validation protocol for cortisol as an exemplary analyte. Accuracy and reproducibility were compared between quantification involving either a six-point external calibration function or a result calculation merely based on peak area ratios of unlabelled and labelled analyte. Both quantification approaches resulted in similar accuracy and reproducibility. For specified analytes, reliable analyte quantification directly derived from the ratio of peak areas of labelled and unlabelled analyte, without the need for a time-consuming multi-point calibration series, is possible. This DIDA approach is of considerable practical importance for the application of LC-MS/MS in the clinical laboratory, where short turnaround times often have high priority.
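The two quantification routes being compared can be sketched in a few lines. This is a generic illustration of direct isotope dilution and linear external calibration, not the study's implementation; all numbers are invented.

```python
# Hedged sketch of the two quantification routes compared in the study.
# All peak areas, concentrations, and calibration coefficients are hypothetical.

def quantify_dida(analyte_area, is_area, is_concentration):
    """DIDA: concentration straight from the peak-area ratio to the
    stable-isotope-labelled internal standard."""
    return (analyte_area / is_area) * is_concentration

def quantify_external(analyte_area, slope, intercept):
    """Conventional route: invert a linear external calibration function
    (area = slope * concentration + intercept)."""
    return (analyte_area - intercept) / slope

# Hypothetical cortisol measurement with internal standard spiked at 100 nmol/L:
print(quantify_dida(analyte_area=52_000, is_area=40_000, is_concentration=100.0))
```

The practical appeal reported in the abstract is that the DIDA route needs no six-point calibration series in each analytical run, only a well-characterized labelled internal standard.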
de Paulo, Jéssica Fiorotti; Camargo, Mariana Guedes; Coutinho-Rodrigues, Caio Junior Balduino; Marciano, Allan Felipe; de Freitas, Maria Clemente; da Silva, Emily Mesquita; Gôlo, Patrícia Silva; Morena, Diva Denelle Spadacci; da Costa Angelo, Isabele; Bittencourt, Vânia Rita Elias Pinheiro
2018-06-01
Hemocytes, cells present in the hemocoel, are involved in the immune response of arthropods challenged with entomopathogens. The present study established the best methodology for harvesting hemocytes from Rhipicephalus microplus, evaluated the number of hemocytes in fungus-infected females in addition to histological analysis of their ovaries, and tested the virulence of GFP-fungus transformants. Different centrifugation protocols were tested, and the one that presented fewer disrupted cells and higher cell recovery was applied for evaluating the effect of Metarhizium spp. on hemocytes of R. microplus. After processing, protocol number 1 (hemolymph samples centrifuged at 500×g for 3 min at 4 °C) was considered the most efficient. Two isolates (Metarhizium robertsii ARSEF 2575 and Metarhizium anisopliae ARSEF 549), both wild type and GFP-transformed, were used to assess virulence. In the biological assays, the GFP-fungi were as virulent as the wild types, showing no significant differences. Subsequently, hemocyte quantification performed after inoculation exhibited notable changes: the number of hemocytes was reduced by approximately 80% in females previously treated with Metarhizium isolates in comparison to non-treated females. Complementarily, at 48 h after inoculation, when hemolymph could no longer be obtained, histological analysis showed the high competence of these fungi to colonize the ovaries of ticks. Here, for the first time, the best protocol (i.e., very low cell disruption and high cell recovery) for obtaining R. microplus hemocytes was established, aiming to guide other studies involving cellular responses of ticks to fungal infection.
Goulas, Vlasios; Manganaris, George A
2012-01-01
Triterpenic acids, such as maslinic acid and oleanolic acid, are commonly found in olive fruits and have been associated with many health benefits. The drying and extraction methods, as well as the solvents used, are critical factors in the determination of their concentration in plant tissues. Thus, there is an emerging need for standardisation of an efficient extraction protocol that determines triterpenic acid content in olive fruits. To evaluate common extraction methods of triterpenic acids from olive fruits and to determine the effect of the drying method on their content in order to propose an optimum protocol for their quantification. The efficacy of different drying and extraction methods was evaluated through the quantification of maslinic acid and oleanolic acid contents using the reversed-phase HPLC technique. Data showed that ultrasonic assisted extraction with ethanol or a mixture of ethanol:methanol (1:1, v/v) resulted in the recovery of significantly higher amounts of triterpenic acids than other methods used. The drying method also affected the estimated triterpenic acid content; frozen or lyophilised olive fruit material gave higher yields of triterpenic acids compared with air-dried material at both 35°C and 105°C. This study provides a rapid and low-cost extraction method, i.e. ultrasonic assisted extraction with an eco-friendly solvent such as ethanol, from frozen or lyophilised olive fruit for the accurate determination of the triterpenic acid content in olive fruit. Copyright © 2011 John Wiley & Sons, Ltd.
Miklosi, Andras G; Del Favero, Giorgia; Bulat, Tanja; Höger, Harald; Shigemoto, Ryuichi; Marko, Doris; Lubec, Gert
2018-06-01
Although dopamine receptors D1 and D2 play key roles in hippocampal function, their synaptic localization within the hippocampus has not been fully elucidated. In order to understand the precise functions of pre- or postsynaptic dopamine receptors (DRs), the development of protocols to differentiate pre- and postsynaptic DRs is essential. So far, most studies on the determination and quantification of DRs have not discriminated between subsynaptic localizations. Therefore, the aim of the study was to generate a robust workflow for the localization of DRs. This work provides the basis for future work on hippocampal DRs, given that DRs may have different functions at pre- or postsynaptic sites. Synaptosomes from rat hippocampi isolated by a sucrose gradient protocol were prepared for super-resolution direct stochastic optical reconstruction microscopy (dSTORM) using Bassoon as a presynaptic active zone marker and Homer1 as a postsynaptic density marker. Direct labeling of validated primary antibodies against dopamine receptors D1 (D1R) and D2 (D2R) with Alexa Fluor 594 enabled unequivocal assignment of D1R and D2R to both pre- and postsynaptic sites. D1R immunoreactivity clusters were observed within the presynaptic active zone as well as at perisynaptic sites at the edge of the presynaptic active zone. The results may be useful for the interpretation of previous studies and the design of future work on DRs in the hippocampus. Moreover, the reduction of the complexity of brain tissue by the use of synaptosomal preparations and dSTORM technology may represent a useful tool for the synaptic localization of brain proteins.
Weakley, Jonathon Js; Till, Kevin; Read, Dale B; Phibbs, Padraic J; Roe, Gregory; Darrall-Jones, Joshua; Jones, Ben L
2017-08-04
Training that is efficient and effective is of great importance to an athlete. One method of improving efficiency is by incorporating supersets into resistance training routines. However, the structuring of supersets is still unexplored. Therefore, the purpose of this study was to assess the effects of agonist-antagonist (A-A), alternate peripheral (A-P), and similar biomechanical (SB) superset configurations on rating of perceived exertion (RPE) and on kinetic and kinematic changes during the bench press. Ten subjects performed resistance training protocols in a randomized-crossover design, with magnitude-based inferences assessing changes/differences within and between protocols. Changes in RPE were very likely and almost certainly greater in the A-P and SB protocols when compared with the A-A, while all superset protocols had very likely to almost certain reductions in mean velocity and power from baseline. Reductions in mean velocity and power were almost certainly greater in the SB protocol, with differences between the A-A and A-P protocols being unclear. Decreases in peak force were likely and almost certain in the A-A and SB protocols, respectively, with changes in A-P being unclear. Differences between these protocols showed likely greater decreases in SB peak forces when compared to A-A, with all other superset comparisons being unclear. This study demonstrates the importance of exercise selection when incorporating supersets into a training routine. It is suggested that the practitioner use A-A supersets when aiming to improve training efficiency and minimize reductions in kinetic and kinematic output of the agonist musculature while completing the barbell bench press.
SU-F-J-16: Planar KV Imaging Dose Reduction Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gershkevitsh, E; Zolotuhhin, D
Purpose: IGRT has become an indispensable tool in modern radiotherapy, with kV imaging used in many departments due to its superior image quality and lower dose compared to MV imaging. Many departments use manufacturer-supplied imaging protocols, which are not always optimised between image quality and radiation dose (ALARA). Methods: A whole-body phantom PBU-50 (Kyoto Kagaku Ltd., Japan) for imaging in radiology was imaged on a Varian iX accelerator (Varian Medical Systems, USA) with the OBI 1.5 system. The manufacturer's default protocols were adapted by modifying kV and mAs values when imaging different anatomical regions of the phantom (head, thorax, abdomen, pelvis, extremities). Images with different settings were independently reviewed by two persons and their suitability for IGRT set-up correction protocols was evaluated. The suitable images with the lowest mAs were then selected. The entrance surface dose (ESD) for the manufacturer's default protocols and the modified protocols was measured with an RTI Black Piranha (RTI Group, Sweden) and compared. Image quality was also measured with a kVQC phantom (Standard Imaging, USA) for the different protocols. The modified protocols have been applied in clinical work. Results: For most cases the optimized protocols reduced the ESD on average by a factor of 3 (range 0.9–8.5). Further reduction in ESD was observed by applying a bow-tie filter designed for CBCT. The largest reduction in dose (12.2 times) was observed for the Thorax lateral protocol. The dose was slightly increased (by 10%) for the large pelvis AP protocol. Conclusion: Manufacturer's default IGRT protocols can be optimised to reduce the ESD to the patient without losing the image quality necessary for patient set-up correction. For patient set-up with planar kV imaging the bony anatomy is mostly used, and optimization should focus on this aspect. 
Therefore, the current approach with an anthropomorphic phantom is more advantageous for optimization than standard kV quality-control phantoms and SNR metrics.
Zarb, Francis; McEntee, Mark F; Rainford, Louise
2015-06-01
To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that the optimised protocols gave image quality similar to that of the current protocols. Ordinal logistic regression analysis provided an in-depth, criterion-by-criterion evaluation, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
Absolute Quantification of Middle- to High-Abundant Plasma Proteins via Targeted Proteomics.
Dittrich, Julia; Ceglarek, Uta
2017-01-01
The increasing number of peptide and protein biomarker candidates requires expeditious and reliable quantification strategies. The utilization of liquid chromatography coupled to quadrupole tandem mass spectrometry (LC-MS/MS) for the absolute quantitation of plasma proteins and peptides facilitates the multiplexed verification of tens to hundreds of biomarkers from the smallest sample quantities. Targeted proteomics assays derived from bottom-up proteomics principles rely on the identification and analysis of proteotypic peptides formed in an enzymatic digestion of the target protein. This protocol proposes a procedure for the establishment of a targeted absolute quantitation method for middle- to high-abundant plasma proteins without depletion or enrichment steps. Essential topics such as proteotypic peptide identification and LC-MS/MS method development, as well as sample preparation and calibration strategies, are described in detail.
Chloroxyanion residue quantification in cantaloupes treated with chlorine dioxide gas
USDA-ARS?s Scientific Manuscript database
Previous studies show that treatment of cantaloupes with chlorine dioxide (ClO2) gas at 5 mg/L for 10 minutes, results in a significant reduction (p<0.05) in initial microflora, an increase in shelf life without any alteration in color, and a 4.6 and 4.3 log reduction of E. coli O157:H7 and L. monoc...
This report sets standards by which the emissions reduction provided by fuel and lubricant technologies can be tested in a comparable way. It is a generic protocol under the Environmental Technology Verification program.
Idilman, Ilkay S; Keskin, Onur; Celik, Azim; Savas, Berna; Elhan, Atilla Halil; Idilman, Ramazan; Karcaaltincaba, Musturay
2016-03-01
Many imaging methods have been defined for quantification of hepatic steatosis in non-alcoholic fatty liver disease (NAFLD). However, studies comparing the efficiency of magnetic resonance imaging-proton density fat fraction (MRI-PDFF), magnetic resonance spectroscopy (MRS), and liver histology for quantification of liver fat content are limited. To compare the efficiency of MRI-PDFF and MRS in the quantification of liver fat content in individuals with NAFLD. A total of 19 NAFLD patients underwent MRI-PDFF, MRS, and liver biopsy for quantification of liver fat content. The MR examinations were performed on a 1.5T HDx MRI system. The MRI protocol included T1-independent volumetric multi-echo gradient-echo imaging with T2* correction and spectral fat modeling, and MRS with the STEAM technique. A close correlation was observed between liver MRI-PDFF- and histology-determined steatosis (r = 0.743, P < 0.001) and between liver MRS- and histology-determined steatosis (r = 0.712, P < 0.001), with no superiority between them (z = 0.19, P = 0.849). For quantification of hepatic steatosis, a high correlation was observed between the two MRI methods (r = 0.986, P < 0.001). MRI-PDFF and MRS accurately differentiated moderate/severe steatosis from mild/no hepatic steatosis (P = 0.007 and 0.013, respectively), with no superiority between them (AUC(MRI-PDFF) = 0.881 ± 0.0856 versus AUC(MRS) = 0.857 ± 0.0924, P = 0.461). Both MRI-PDFF and MRS can be used for accurate quantification of hepatic steatosis. © The Foundation Acta Radiologica 2015.
Automatic Segmentation and Quantification of Filamentous Structures in Electron Tomography
Loss, Leandro A.; Bebis, George; Chang, Hang; Auer, Manfred; Sarkar, Purbasha; Parvin, Bahram
2016-01-01
Electron tomography is a promising technology for imaging ultrastructures at nanoscale resolutions. However, image and quantitative analyses are often hindered by high levels of noise, staining heterogeneity, and material damage either as a result of the electron beam or sample preparation. We have developed and built a framework that allows for automatic segmentation and quantification of filamentous objects in 3D electron tomography. Our approach consists of three steps: (i) local enhancement of filaments by Hessian filtering; (ii) detection and completion (e.g., gap filling) of filamentous structures through tensor voting; and (iii) delineation of the filamentous networks. Our approach allows for quantification of filamentous networks in terms of their compositional and morphological features. We first validate our approach using a set of specifically designed synthetic data. We then apply our segmentation framework to tomograms of plant cell walls that have undergone different chemical treatments for polysaccharide extraction. The subsequent compositional and morphological analyses of the plant cell walls reveal their organizational characteristics and the effects of the different chemical protocols on specific polysaccharides. PMID:28090597
Quantification of concentrated Chinese medicine granules by quantitative polymerase chain reaction.
Lo, Yat-Tung; Shaw, Pang-Chui
2017-10-25
Determination of the amount of each constituent in a multi-herb product is important for quality control. In concentrated Chinese medicine granules (CCMG), no dregs are left after dissolution of the CCMG. This study is the first to examine the feasibility of using quantitative polymerase chain reaction (qPCR) to find the amount of CCMG in solution form. DNA was extracted from Hirudo and Zaocys CCMG mixed at different ratios and amplified in qPCR using species-specific primers. The threshold cycle (Ct) obtained was compared with the respective standard curves. Results showed that reproducible quantification results could be obtained (1) for 5-50 mg CCMG using a modified DNA extraction protocol, (2) amongst DNA extracted from the same batch of CCMG and (3) amongst different batches of CCMG from the same company. This study demonstrated that the constituent amount of CCMG in a mixture could be determined using qPCR. This work has extended the application of DNA techniques to the quantification of herbal products, and this approach may be developed for quality assurance in the CCMG industry. Copyright © 2017 Elsevier B.V. All rights reserved.
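The standard-curve step in this qPCR protocol reduces to a linear fit of Ct against log10 of the template amount, which is then inverted for unknown samples. The sketch below shows the arithmetic in Python; the Ct values and function names are invented for illustration, not taken from the study:

```python
import math

def fit_standard_curve(amounts_mg, ct_values):
    """Least-squares fit of Ct = slope * log10(amount) + intercept."""
    xs = [math.log10(a) for a in amounts_mg]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ct_values) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ct_values))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

def quantify(ct, slope, intercept):
    """Invert the standard curve: recover the CCMG amount (mg) from a Ct."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical dilution series over the 5-50 mg range used in the study;
# an ideal assay loses ~3.3 Ct per 10-fold increase in template.
standards = [5, 10, 20, 50]
cts = [28.3, 27.3, 26.3, 25.0]  # made-up measurements
slope, intercept = fit_standard_curve(standards, cts)
```

An unknown sample's Ct is then passed through `quantify` to read its amount off the fitted curve.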
Becker, François; Fourgeau, Patrice; Carpentier, Patrick H; Ouchène, Amina
2018-06-01
We postulate that blue telangiectasia and brownish pigmentation at ankle level, early markers of chronic venous insufficiency, can be quantified for longitudinal studies of chronic venous disease in Caucasian people. Objectives and methods: To describe a photographic technique specially developed for this purpose. The pictures were acquired using a dedicated photo stand to position the foot in a reproducible way, with a normalized lighting and acquisition protocol. The image analysis was performed with a tool developed using algorithms optimized to detect and quantify blue telangiectasia and brownish pigmentation and their relative surface in the region of interest. A further objective was to test the short-term reproducibility of the measures. Results: The quantification of blue telangiectasia and of brownish pigmentation using automated digital photo analysis is feasible. The short-term reproducibility is good for blue telangiectasia quantification; it is less accurate for brownish pigmentation. Conclusion: The blue telangiectasia of the corona phlebectatica and the ankle flare can be assessed using a clinimetric approach based on automated digital photo analysis.
Nuriel, Tal; Deeb, Ruba S.; Hajjar, David P.; Gross, Steven S.
2008-01-01
Nitration of tyrosine residues by nitric oxide (NO)-derived species results in the accumulation of 3-nitrotyrosine in proteins, a hallmark of nitrosative stress in cells and tissues. Tyrosine nitration is recognized as one of the multiple signaling modalities used by NO-derived species for the regulation of protein structure and function in health and disease. Various methods have been described for the quantification of protein 3-nitrotyrosine residues, and several strategies have been presented toward the goal of proteome-wide identification of protein tyrosine modification sites. This chapter details a useful protocol for the quantification of 3-nitrotyrosine in cells and tissues using high-pressure liquid chromatography with electrochemical detection. Additionally, this chapter describes a novel biotin-tagging strategy for specific enrichment of 3-nitrotyrosine-containing peptides. Application of this strategy, in conjunction with high-throughput MS/MS-based peptide sequencing, is anticipated to fuel efforts in developing comprehensive inventories of nitrosative stress-induced protein-tyrosine modification sites in cells and tissues. PMID:18554526
Smol, Thomas; Nibourel, Olivier; Marceau-Renaut, Alice; Celli-Lebras, Karine; Berthon, Céline; Quesnel, Bruno; Boissel, Nicolas; Terré, Christine; Thomas, Xavier; Castaigne, Sylvie; Dombret, Hervé; Preudhomme, Claude; Renneville, Aline
2015-12-01
EVI1 overexpression confers poor prognosis in acute myeloid leukemia (AML). Quantification of EVI1 expression has been mainly assessed by real-time quantitative PCR (RT-qPCR) based on relative quantification of EVI1-1D splice variant. In this study, we developed a RT-qPCR assay to perform quantification of EVI1 expression covering the different splice variants. A sequence localized in EVI1 exons 14 and 15 was cloned into plasmids that were used to establish RT-qPCR standard curves. Threshold values to define EVI1 overexpression were determined using 17 bone marrow (BM) and 31 peripheral blood (PB) control samples and were set at 1% in BM and 0.5% in PB. Samples from 64 AML patients overexpressing EVI1 included in the ALFA-0701 or -0702 trials were collected at diagnosis and during follow-up (n=152). Median EVI1 expression at AML diagnosis was 23.3% in BM and 3.6% in PB. EVI1 expression levels significantly decreased between diagnostic and post-induction samples, with an average variation from 21.6% to 3.56% in BM and from 4.0% to 0.22% in PB, but did not exceed 1 log10 reduction. Our study demonstrates that the magnitude of reduction in EVI1 expression levels between AML diagnosis and follow-up is not sufficient to allow sensitive detection of minimal residual disease. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bayesian deconvolution and quantification of metabolites in complex 1D NMR spectra using BATMAN.
Hao, Jie; Liebeke, Manuel; Astle, William; De Iorio, Maria; Bundy, Jacob G; Ebbels, Timothy M D
2014-01-01
Data processing for 1D NMR spectra is a key bottleneck for metabolomic and other complex-mixture studies, particularly where quantitative data on individual metabolites are required. We present a protocol for automated metabolite deconvolution and quantification from complex NMR spectra by using the Bayesian automated metabolite analyzer for NMR (BATMAN) R package. BATMAN models resonances on the basis of a user-controllable set of templates, each of which specifies the chemical shifts, J-couplings and relative peak intensities for a single metabolite. Peaks are allowed to shift position slightly between spectra, and peak widths are allowed to vary by user-specified amounts. NMR signals not captured by the templates are modeled non-parametrically by using wavelets. The protocol covers setting up user template libraries, optimizing algorithmic input parameters, improving prior information on peak positions, quality control and evaluation of outputs. The outputs include relative concentration estimates for named metabolites together with associated Bayesian uncertainty estimates, as well as the fit of the remainder of the spectrum using wavelets. Graphical diagnostics allow the user to examine the quality of the fit for multiple spectra simultaneously. This approach offers a workflow to analyze large numbers of spectra and is expected to be useful in a wide range of metabolomics studies.
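BATMAN's Bayesian template fit is far richer than can be shown briefly (chemical-shift priors, variable peak widths, wavelet modeling of untemplated signal), but the core idea of expressing a spectrum as a weighted sum of per-metabolite templates and recovering the weights can be sketched with ordinary least squares on synthetic data. All peak positions and intensities below are invented for illustration:

```python
import math
import random

# Toy per-metabolite "templates": peak patterns on a shared ppm axis.
ppm = [i * 0.01 for i in range(400)]  # 0.00 .. 3.99 ppm

def lorentzian(center, width=0.02):
    """Unit-height Lorentzian line shape sampled on the ppm grid."""
    return [1.0 / (1.0 + ((p - center) / width) ** 2) for p in ppm]

def add(*specs):
    return [sum(vals) for vals in zip(*specs)]

def scale(c, spec):
    return [c * v for v in spec]

# Hypothetical metabolite templates (relative peak intensities fixed).
template_a = add(lorentzian(1.2), scale(0.5, lorentzian(3.1)))
template_b = add(lorentzian(2.0), lorentzian(2.1))

# Synthetic mixture spectrum: 2.0 x A + 0.7 x B plus noise.
rng = random.Random(0)
spectrum = [2.0 * a + 0.7 * b + rng.gauss(0, 0.01)
            for a, b in zip(template_a, template_b)]

def deconvolve(spec, t1, t2):
    """Solve the 2x2 normal equations for the least-squares weights."""
    g11 = sum(x * x for x in t1)
    g22 = sum(y * y for y in t2)
    g12 = sum(x * y for x, y in zip(t1, t2))
    b1 = sum(x * s for x, s in zip(t1, spec))
    b2 = sum(y * s for y, s in zip(t2, spec))
    det = g11 * g22 - g12 * g12
    return (g22 * b1 - g12 * b2) / det, (g11 * b2 - g12 * b1) / det

w_a, w_b = deconvolve(spectrum, template_a, template_b)  # ~2.0 and ~0.7
```

BATMAN replaces this point estimate with MCMC sampling, which is what yields the Bayesian uncertainty estimates the abstract mentions.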
Katharopoulos, Efstathios; Touloupi, Katerina; Touraki, Maria
2016-08-01
The present study describes the development of a simple and efficient screening system that allows identification and quantification of nine bacteriocins produced by Lactococcus lactis. Cell-free L. lactis extracts presented a broad spectrum of antibacterial activity, including Gram-negative bacteria, Gram-positive bacteria, and fungi. The characterization of their sensitivity to pH, and heat, showed that the extracts retained their antibacterial activity at extreme pH values and in a wide temperature range. The loss of antibacterial activity following treatment of the extracts with lipase or protease suggests a lipoproteinaceous nature of the produced antimicrobials. The extracts were subjected to a purification protocol that employs a two phase extraction using ammonium sulfate precipitation and organic solvent precipitation, followed by ion exchange chromatography, solid phase extraction and HPLC. In the nine fractions that presented antimicrobial activity, bacteriocins were quantified by the turbidometric method using a standard curve of nisin and by the HPLC method with nisin as the external standard, with both methods producing comparable results. Turbidometry appears to be unique in the qualitative determination of bacteriocins but the only method suitable to both separate and quantify the bacteriocins providing increased sensitivity, accuracy, and precision is HPLC. Copyright © 2016 Elsevier B.V. All rights reserved.
This handbook contains protocols that compare the immediate performance of subslab depressurization (SSD) mitigation system with performance months or years later. These protocols provide a methodology to test SSD radon mitigation systems in situ to determine long-term performanc...
de Kinkelder, R; van der Veen, R L P; Verbaak, F D; Faber, D J; van Leeuwen, T G; Berendschot, T T J M
2011-01-01
Purpose Accurate assessment of the amount of macular pigment (MPOD) is necessary to investigate the role of carotenoids and their assumed protective functions. High repeatability and reliability are important to monitor patients in studies investigating the influence of diet and supplements on MPOD. We evaluated the Macuscope (Macuvision Europe Ltd., Lapworth, Solihull, UK), a recently introduced device for measuring MPOD using the technique of heterochromatic flicker photometry (HFP). We determined agreement with another HFP device (QuantifEye; MPS 9000 series: Tinsley Precision Instruments Ltd., Croydon, Essex, UK) and a fundus reflectance method. Methods The right eyes of 23 healthy subjects (mean age 33.9±15.1 years) were measured. We determined agreement with QuantifEye and correlation with a fundus reflectance method. Repeatability of QuantifEye was assessed in 20 other healthy subjects (mean age 32.1±7.3 years). Repeatability was also compared with measurements by a fundus reflectance method in 10 subjects. Results We found low agreement between test and retest measurements with Macuscope. The average difference and the limits of agreement were −0.041±0.32. We found high agreement between test and retest measurements of QuantifEye (−0.02±0.18) and the fundus reflectance method (−0.04±0.18). MPOD data obtained by Macuscope and QuantifEye showed poor agreement: −0.017±0.44. For Macuscope and the fundus reflectance method, the correlation coefficient was r=0.05 (P=0.83). A significant correlation of r=0.87 (P<0.001) was found between QuantifEye and the fundus reflectance method. Conclusions Because repeatability of Macuscope measurements was low (ie, wide limits of agreement) and MPOD values correlated poorly with the fundus reflectance method, and agreed poorly with QuantifEye, the tested Macuscope protocol seems less suitable for studying MPOD. PMID:21057522
NASA Astrophysics Data System (ADS)
Lee, Hyun-Seok; Heun Kim, Sook; Jeong, Ji-Seon; Lee, Yong-Moon; Yim, Yong-Hyeon
2015-10-01
An element-based reductive approach provides an effective means of realizing International System of Units (SI) traceability for high-purity biological standards. Here, we develop an absolute protein quantification method using double isotope dilution (ID) inductively coupled plasma mass spectrometry (ICP-MS) combined with microwave-assisted acid digestion for the first time. We validated the method and applied it to certify the candidate protein certified reference material (CRM) of human growth hormone (hGH). The concentration of hGH was determined by analysing the total amount of sulfur in hGH. Next, size-exclusion chromatography was used with ICP-MS to characterize and quantify sulfur-containing impurities. By subtracting the contribution of sulfur-containing impurities from the total sulfur content in the hGH CRM, we obtained a SI-traceable certification value. The quantification result obtained with the present method based on sulfur analysis was in excellent agreement with the result determined via a well-established protein quantification method based on amino acid analysis using conventional acid hydrolysis combined with ID liquid chromatography-tandem mass spectrometry. The element-based protein quantification method developed here can be generally used for SI-traceable absolute quantification of proteins, especially pure-protein standards.
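The arithmetic of the element-based certification step can be sketched as follows: subtract the impurity sulfur from the total sulfur, then convert the remaining protein-bound sulfur to protein mass via the sulfur count per molecule. The sulfur count used here (seven, from four Cys and three Met in hGH) and all numeric inputs are assumptions for illustration, not values from the paper:

```python
# Element-based protein quantification: total sulfur minus impurity sulfur,
# converted to protein amount via the number of sulfur atoms per molecule.
S_MOLAR_MASS = 32.06        # g/mol
HGH_MOLAR_MASS = 22_124.0   # g/mol, approximate for the 22 kDa hGH isoform
S_PER_HGH = 7               # assumed: 4 Cys + 3 Met per hGH molecule

def hgh_mass_conc(total_s_ug_per_g, impurity_s_ug_per_g):
    """Convert sulfur mass fractions (ug S per g solution) to ug hGH per g."""
    protein_s_ug_per_g = total_s_ug_per_g - impurity_s_ug_per_g
    mol_s_per_g = protein_s_ug_per_g * 1e-6 / S_MOLAR_MASS
    mol_hgh_per_g = mol_s_per_g / S_PER_HGH
    return mol_hgh_per_g * HGH_MOLAR_MASS * 1e6

# Hypothetical inputs: 12.0 ug/g total S, of which 0.5 ug/g is impurity S.
conc = hgh_mass_conc(12.0, 0.5)
```

The impurity subtraction is exactly what the size-exclusion ICP-MS step supplies; without it, any free sulfur species would inflate the certified value.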
Nagayama, Y; Nakaura, T; Oda, S; Tsuji, A; Urata, J; Furusawa, M; Tanoue, S; Utsunomiya, D; Yamashita, Y
2018-02-01
To perform an intra-individual investigation of the usefulness of a contrast medium (CM) and radiation dose-reduction protocol using single-source computed tomography (CT) combined with 100 kVp and sinogram-affirmed iterative reconstruction (SAFIRE) for whole-body CT (WBCT; chest-abdomen-pelvis CT) in oncology patients. Forty-three oncology patients who had undergone WBCT under both 120 and 100 kVp protocols at different time points (mean interscan interval: 98 days) were included retrospectively. The CM doses for the 120 and 100 kVp protocols were 600 and 480 mg iodine/kg, respectively; 120 kVp images were reconstructed with filtered back-projection (FBP), whereas 100 kVp images were reconstructed with FBP (100 kVp-F) and SAFIRE (100 kVp-S). The size-specific dose estimate (SSDE), iodine load, and image quality of each protocol were compared. The SSDE and iodine load of the 100 kVp protocol were 34% and 21%, respectively, lower than those of the 120 kVp protocol (SSDE: 10.6±1.1 versus 16.1±1.8 mGy; iodine load: 24.8±4 versus 31.5±5.5 g iodine, p<0.01). Contrast enhancement, objective image noise, contrast-to-noise ratio, and visual score of 100 kVp-S were similar to or better than those of the 120 kVp protocol. Compared with the 120 kVp protocol, the combined use of 100 kVp and SAFIRE in WBCT for oncology assessment with an SSCT facilitated a substantial reduction in CM and radiation dose while maintaining image quality. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
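The weight-based contrast dosing in the two protocols is simple arithmetic, sketched below. The 70 kg patient is hypothetical; note that the nominal per-kilogram reduction from 600 to 480 mg iodine/kg is 20%, while the 21% in the abstract reflects measured per-patient averages:

```python
def contrast_dose_g(dose_mg_per_kg, weight_kg):
    """Total iodine load in grams for a weight-based CM protocol."""
    return dose_mg_per_kg * weight_kg / 1000.0

def relative_reduction(old, new):
    """Fractional reduction of `new` relative to `old`."""
    return (old - new) / old

# Protocol doses from the study: 600 (120 kVp) vs 480 (100 kVp) mg iodine/kg.
load_120 = contrast_dose_g(600, 70)  # hypothetical 70 kg patient
load_100 = contrast_dose_g(480, 70)
reduction = relative_reduction(600, 480)  # nominal 20% iodine-load reduction
```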
Marchi, S; Bonora, M; Patergnani, S; Giorgi, C; Pinton, P
2017-01-01
It is widely acknowledged that mitochondria are highly active structures that rapidly respond to cellular and environmental perturbations by changing their shape, number, and distribution. Mitochondrial remodeling is a key component of diverse biological processes, ranging from cell cycle progression to autophagy. In this chapter, we describe different methodologies for the morphological study of the mitochondrial network. Instructions are given for the preparation of samples for fluorescent microscopy, based on genetically encoded strategies or the employment of synthetic fluorescent dyes. We also propose detailed protocols to analyze mitochondrial morphometric parameters from both three-dimensional and bidimensional datasets. Finally, we describe a protocol for the visualization and quantification of mitochondrial structures through electron microscopy. © 2017 Elsevier Inc. All rights reserved.
Shan, Jinyu; Clokie, Martha
2009-01-01
Bacteriophages manipulate bacterial gene expression in order to express their own genes or influence bacterial metabolism. Gene expression can be studied using real-time PCR or microarrays. Either technique requires the prior isolation of high quality RNA uncontaminated by the presence of genomic DNA. We outline the considerations necessary when working with bacteriophage infected bacterial cells. We also give an example of a protocol for extraction and quantification of high quality RNA from infected bacterial cells, using the marine cyanobacterium WH7803 and the phage S-PM2 as a case study. This protocol can be modified to extract RNA from the host/bacteriophage of interest.
RNA-Seq for Bacterial Gene Expression.
Poulsen, Line Dahl; Vinther, Jeppe
2018-06-01
RNA sequencing (RNA-seq) has become the preferred method for global quantification of bacterial gene expression. With the continued improvements in sequencing technology and data analysis tools, the most labor-intensive and expensive part of an RNA-seq experiment is the preparation of sequencing libraries, which is also essential for the quality of the data obtained. Here, we present a straightforward and inexpensive basic protocol for preparation of strand-specific RNA-seq libraries from bacterial RNA as well as a computational pipeline for the data analysis of sequencing reads. The protocol is based on the Illumina platform and allows easy multiplexing of samples and the removal of sequencing reads that are PCR duplicates. © 2018 John Wiley & Sons, Inc.
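PCR-duplicate removal of the kind this protocol supports is commonly implemented by keeping one read per unique mapping key. A minimal sketch follows; the record fields and the use of a molecular barcode (`umi`) are illustrative assumptions, not the protocol's actual implementation:

```python
def dedup_reads(reads):
    """Keep one read per (chrom, pos, strand, umi) key, preserving the
    order of first occurrence. Reads sharing all four fields are treated
    as PCR copies of the same original molecule."""
    seen = set()
    unique = []
    for read in reads:
        key = (read["chrom"], read["pos"], read["strand"], read["umi"])
        if key not in seen:
            seen.add(key)
            unique.append(read)
    return unique

# Toy example: two PCR copies of one molecule plus one distinct molecule
# mapping to the same position but carrying a different barcode.
reads = [
    {"chrom": "chr1", "pos": 100, "strand": "+", "umi": "ACGT"},
    {"chrom": "chr1", "pos": 100, "strand": "+", "umi": "ACGT"},  # duplicate
    {"chrom": "chr1", "pos": 100, "strand": "+", "umi": "TTGA"},  # distinct
]
unique_reads = dedup_reads(reads)
```

Without the barcode, position-only deduplication would wrongly collapse the third read, which is why barcode-aware schemes are preferred for expression quantification.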
Lin, Yi-Reng; Huang, Mei-Fang; Wu, You-Ying; Liu, Meng-Chieh; Huang, Jing-Heng; Chen, Ziyu; Shiue, Yow-Ling; Wu, Chia-En; Liang, Shih-Shin
2017-09-01
In this work, we synthesized internal standards for four garlic organosulfur compounds (OSCs) by reductive amination with 13C,D2-formaldehyde, and developed an isotope dilution analysis method to quantitate these organosulfur components in garlic samples. Internal standards were synthesized for the absolute quantification of S-allylcysteine (SAC), S-allylcysteine sulfoxide (alliin), S-methylcysteine (SMC), and S-ethylcysteine (SEC). We used multiple reaction monitoring (MRM) to detect 13C,D2-formaldehyde-modified OSCs by ultrahigh-performance liquid chromatography coupled with tandem mass spectrometry (UHPLC-MS/MS) and obtained MS spectra showing different ratios of 13C,D2-formaldehyde-modified and H2-formaldehyde-modified compounds. The labeled and unlabeled OSCs exhibited correlation coefficients (R2) ranging from 0.9989 to 0.9994. The average recoveries for the four OSCs at three concentration levels ranged from 89% to 105%. With 13C,D2-formaldehyde and sodium cyanoborohydride, the reductive amination-based method can be used to generate novel internal standards for isotope dilution and to extend its quantitative application. Copyright © 2017 Elsevier Ltd. All rights reserved.
Seibert, Cathrin; Davidson, Brian R; Fuller, Barry J; Patterson, Laurence H; Griffiths, William J; Wang, Yuqin
2009-04-01
Here we report the identification and approximate quantification of cytochrome P450 (CYP) proteins in human liver microsomes as determined by nano-LC-MS/MS with application of the exponentially modified protein abundance index (emPAI) algorithm during database searching. Protocols based on 1D-gel protein separation and 2D-LC peptide separation gave comparable results. In total, 18 CYP isoforms were unambiguously identified based on unique peptide matches. Further, we have determined the absolute quantity of two CYP enzymes (2E1 and 1A2) in human liver microsomes using stable-isotope dilution mass spectrometry, where microsomal proteins were separated by 1D-gel electrophoresis, digested with trypsin in the presence of either a CYP2E1- or 1A2-specific stable-isotope labeled tryptic peptide and analyzed by LC-MS/MS. Using multiple reaction monitoring (MRM) for the isotope-labeled tryptic peptides and their natural unlabeled analogues quantification could be performed over the range of 0.1-1.5 pmol on column. Liver microsomes from four individuals were analyzed for CYP2E1 giving values of 88-200 pmol/mg microsomal protein. The CYP1A2 content of microsomes from a further three individuals ranged from 165 to 263 pmol/mg microsomal protein. Although, in this proof-of-concept study for CYP quantification, the two CYP isoforms were quantified from different samples, there are no practical reasons to prevent multiplexing the method to allow the quantification of multiple CYP isoforms in a single sample.
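The stable-isotope dilution quantification in this record reduces to a peak-area ratio against a known labeled spike. A minimal sketch with hypothetical numbers follows; the 1:1 peptide-to-protein stoichiometry is the usual assumption for a proteotypic peptide released by complete tryptic digestion:

```python
def isotope_dilution_amount(area_unlabeled, area_labeled, spike_pmol):
    """Endogenous peptide amount (pmol) from MRM peak areas, given a known
    amount of stable-isotope-labeled peptide spiked before digestion."""
    return spike_pmol * (area_unlabeled / area_labeled)

def per_mg_microsomal(peptide_pmol, microsomal_protein_mg):
    """Normalize to pmol CYP per mg microsomal protein, assuming one
    proteotypic peptide released per protein molecule."""
    return peptide_pmol / microsomal_protein_mg

# Hypothetical run: 1.0 pmol labeled spike; unlabeled/labeled MRM peak
# areas of 1.5e6 and 1.0e6 imply 1.5 pmol endogenous peptide on column.
amount = isotope_dilution_amount(1.5e6, 1.0e6, 1.0)
cyp_content = per_mg_microsomal(amount, 0.01)  # for 0.01 mg loaded
```

The resulting 150 pmol/mg for this invented example sits within the 88-200 pmol/mg CYP2E1 range the study reports, which is only meant to show the normalization is on the right scale.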
A refined methodology for modeling volume quantification performance in CT
NASA Astrophysics Data System (ADS)
Chen, Baiyu; Wilson, Joshua; Samei, Ehsan
2014-03-01
The utility of the CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR as impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, the validation of the model was extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the non-linearity of iterative reconstruction in structured backgrounds, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.
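The noise measurement described above relies on a standard trick: subtracting two repeated scans cancels the (identical) anatomy while the independent quantum noise adds in quadrature, so dividing the difference-image standard deviation by √2 recovers the single-image noise. A minimal sketch on synthetic data, not the study's phantom images:

```python
import numpy as np

def quantum_noise_from_repeats(scan1, scan2):
    # anatomy cancels in the difference; independent noise adds in
    # quadrature, so std(diff) = sqrt(2) * single-image noise
    diff = scan1.astype(float) - scan2.astype(float)
    return diff.std(ddof=1) / np.sqrt(2)

# synthetic demonstration: fixed "anatomy" plus noise of known sigma
rng = np.random.default_rng(0)
anatomy = np.outer(np.linspace(0, 100, 256), np.ones(256))
sigma = 10.0
est = quantum_noise_from_repeats(anatomy + rng.normal(0, sigma, (256, 256)),
                                 anatomy + rng.normal(0, sigma, (256, 256)))
print(est)  # close to 10
```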
An efficacious oral health care protocol for immunocompromised patients.
Solomon, C S; Shaikh, A B; Arendorf, T M
1995-01-01
A twice-weekly oral and perioral examination was provided to 120 patients receiving antineoplastic therapy. Sixty patients were monitored while following the traditional hospital oral care protocol (chlorhexidine, hydrogen peroxide, sodium bicarbonate, thymol glycol, benzocaine mouthrinse, and nystatin). The mouth care protocol was then changed (experimental protocol = chlorhexidine, benzocaine lozenges, amphotericin B lozenges), and patients were monitored until the sample size matched that of the hospital mouth care regime. There was a statistically significant reduction in oral complications upon introduction and maintenance of the experimental protocol.
NASA Astrophysics Data System (ADS)
Shafiq, Natis
Energy transfer (ET) based sensitization of silicon (Si) using proximal nanocrystal quantum dots (NQDs) has been studied extensively in recent years as a means to develop thin and flexible Si based solar cells. The driving force for this research activity is a reduction in materials cost. To date, the main method for determining the role of ET in sensitizing Si has been optical spectroscopic studies. The quantitative contribution from two modes of ET (namely, nonradiative and radiative) has been reported using time-resolved photoluminescence (TRPL) spectroscopy coupled with extensive theoretical modelling. Thus, optical techniques have established the potential for utilizing ET based sensitization of Si as a feasible way to develop novel NQD-Si hybrid solar cells. However, the ultimate measure of the efficiency of ET-based mechanisms is the generation of electron-hole pairs by the impinging photons. It is therefore important to perform electrical measurements. However, only a couple of studies have attempted electrical quantification of ET modes. A few studies have focused on photocurrent measurements, without considering industrially relevant photovoltaic (PV) systems. Therefore, there is a need to develop a systematic approach for the electrical quantification of ET-generated charges and to help engineer new PV architectures optimized for harnessing the full advantages of ET mechanisms. Within this context, the work presented in this dissertation aims to develop an experimental testing protocol that can be applied to different PV structures for quantifying ET contributions from electrical measurements. We fabricated bulk Si solar cells (SCs) as a test structure and utilized CdSe/ZnS NQDs for ET based sensitization. The NQD-bulk Si hybrid devices showed ˜30% PV enhancement after NQD deposition. We measured external quantum efficiency (EQE) of these devices to quantify ET-generated charges. 
Reflectance measurements were also performed to decouple contributions of intrinsic optical effects (i.e., anti-reflection) from NQD-mediated ET processes. Our analysis indicates that the contribution of ET-generated charges cannot be detected by EQE measurements. Instead, changes in the optical properties (i.e., anti-reflection property) due to the NQD layer are found to be the primary source of the photocurrent enhancement. Based on this finding, we propose to minimize bulk Si absorption by using an ultrathin (˜300 nm) Si PV architecture, which should enable measurements of ET-generated charges. We describe an optimized process flow for fabricating such ultrathin Si devices. The devices fabricated by this method behave like photo-detectors and show enhanced sensitivity under 1 Sun AM1.5G illumination. The geometry and process flow of these devices make it possible to incorporate NQDs for sensitization. Overall, this dissertation provides a protocol for the quantification of ET-generated charges and documents an optimized process flow for the development of ultrathin Si solar cells.
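External quantum efficiency, used above to search for ET-generated charges, is the ratio of collected electrons to incident photons at a given wavelength. A minimal sketch of that conversion from photocurrent and optical power (generic photovoltaics arithmetic, not the dissertation's measurement code):

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
Q = 1.602e-19   # elementary charge, C

def eqe(photocurrent_a, optical_power_w, wavelength_m):
    # electrons collected per second / photons incident per second
    electrons_per_s = photocurrent_a / Q
    photons_per_s = optical_power_w * wavelength_m / (H * C)
    return electrons_per_s / photons_per_s
```

An EQE enhancement after NQD deposition that persists once reflectance changes are accounted for would indicate ET-generated charge, which is precisely the decoupling performed above.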
Iyama, Yuji; Nakaura, Takeshi; Yokoyama, Koichi; Kidoh, Masafumi; Harada, Kazunori; Oda, Seitaro; Tokuyasu, Shinichi; Yamashita, Yasuyuki
This study aimed to evaluate the feasibility of a low-contrast, low-radiation-dose protocol of 80 peak kilovoltage (kVp) with prospective electrocardiography-gated cardiac computed tomography (CT) using knowledge-based iterative model reconstruction (IMR). Thirty patients underwent an 80-kVp prospective electrocardiography-gated cardiac CT with a low contrast agent dose (222-mg iodine per kilogram of body weight). We also enrolled 30 consecutive patients who were scanned with a 120-kVp cardiac CT with filtered back projection using the standard contrast agent dose (370-mg iodine per kilogram of body weight) as a historical control group. We evaluated the radiation dose for the 2 groups. The 80-kVp images were reconstructed with filtered back projection (protocol A), hybrid iterative reconstruction (HIR, protocol B), and IMR (protocol C). We compared CT numbers, image noise, and contrast-to-noise ratio among the 120-kVp protocol, protocol A, protocol B, and protocol C. In addition, we compared the noise reduction rate between HIR and IMR. Two independent readers compared image contrast, image noise, image sharpness, unfamiliar image texture, and overall image quality among the 4 protocols. The estimated effective dose (ED) of the 80-kVp protocol was 74% lower than that of the 120-kVp protocol (1.4 vs 5.4 mSv). The contrast-to-noise ratio of protocol C was significantly higher than that of protocol A. The noise reduction rate of IMR was significantly higher than that of HIR (P < 0.01). There was no significant difference between the 120-kVp protocol and protocol C in any qualitative image-quality score except image contrast. An 80-kVp protocol with IMR yields higher image quality with a 74% lower radiation dose and a 40% lower contrast agent dose compared with a 120-kVp protocol, while further reducing image noise compared with the 80-kVp protocol with HIR.
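The dose figures quoted above follow from simple percent-reduction arithmetic; as a quick check of the 74% and 40% claims against the reported values (5.4 vs 1.4 mSv effective dose; 370 vs 222 mg iodine/kg):

```python
def percent_reduction(baseline, reduced):
    # fractional decrease relative to the baseline value, in percent
    return 100 * (baseline - reduced) / baseline

print(round(percent_reduction(5.4, 1.4)))  # effective dose: 74
print(round(percent_reduction(370, 222)))  # iodine dose: 40
```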
2011-02-01
Reductive dechlorination is a promising process for biodegradation of chlorinated solvents. The successful field evaluation and implementation of the...population. These specialized bacteria use the chlorinated ethenes as electron acceptors and gain energy for growth from the reductive...This guidance protocol addresses the use of MBTs to quantitatively assess the Dhc population at chlorinated ethene sites and aims at providing
Absolute quantification of DcR3 and GDF15 from human serum by LC-ESI MS
Lancrajan, Ioana; Schneider-Stock, Regine; Naschberger, Elisabeth; Schellerer, Vera S; Stürzl, Michael; Enz, Ralf
2015-01-01
Biomarkers are widely used in clinical diagnosis, prognosis and therapy monitoring. Here, we developed a protocol for the efficient and selective enrichment of small, low-concentration biomarkers from human serum, involving a 95% effective depletion of high-abundance serum proteins by partial denaturation and enrichment of low-abundance biomarkers by size exclusion chromatography. The recovery of low-abundance biomarkers was above 97%. Using this protocol, we quantified the tumour markers DcR3 and growth/differentiation factor (GDF)15 from 100 μl human serum by isotope dilution mass spectrometry, using 15N metabolically labelled and concatamerized fingerprint peptides for both proteins. Analysis of three different fingerprint peptides for each protein by liquid chromatography electrospray ionization mass spectrometry resulted in comparable concentrations in three healthy human serum samples (DcR3: 27.23 ± 2.49 fmol/ml; GDF15: 98.11 ± 0.49 fmol/ml). In contrast, serum levels were significantly elevated in tumour patients for DcR3 (116.94 ± 57.37 fmol/ml) and GDF15 (164.44 ± 79.31 fmol/ml). Obtained data were in good agreement with ELISA and qPCR measurements, as well as with literature data. In summary, our protocol allows the reliable quantification of biomarkers, shows a higher resolution at low biomarker concentrations than antibody-based strategies, and offers the possibility of multiplexing. Our proof-of-principle studies in patient sera encourage the future analysis of the prognostic value of DcR3 and GDF15 for colon cancer patients in larger patient cohorts. PMID:25823874
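The isotope-dilution step above reduces to scaling the light/heavy peak-area ratio by the known amount of labelled standard spiked into the sample. A minimal sketch with hypothetical peak areas and spike amounts, not the paper's data:

```python
def isotope_dilution_conc(area_light, area_heavy, heavy_spike_fmol, serum_volume_ml):
    # endogenous amount = (light/heavy peak-area ratio) * spiked heavy amount,
    # expressed per ml of serum
    return (area_light / area_heavy) * heavy_spike_fmol / serum_volume_ml

# e.g. light peak half the heavy peak, 20 fmol spike into 100 ul serum
print(isotope_dilution_conc(50.0, 100.0, 20.0, 0.1))  # 100.0 fmol/ml
```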
Real-time PCR assays for detection and quantification of aflatoxin-producing molds in foods.
Rodríguez, Alicia; Rodríguez, Mar; Luque, M Isabel; Martín, Alberto; Córdoba, Juan J
2012-08-01
Aflatoxins are among the most toxic mycotoxins. Early detection and quantification of aflatoxin-producing species is crucial to improve food safety. In the present work, two protocols of real-time PCR (qPCR) based on SYBR Green and TaqMan were developed, and their sensitivity and specificity were evaluated. Primers and probes were designed from the o-methyltransferase gene (omt-1) involved in aflatoxin biosynthesis. Fifty-three mold strains representing aflatoxin producers and non-producers of different species, usually reported in food products, were used as references. All strains were tested for aflatoxins production by high-performance liquid chromatography-mass spectrometry (HPLC-MS). The functionality of the proposed qPCR method was demonstrated by the strong linear relationship of the standard curves constructed with the omt-1 gene copy number and Ct values for the different aflatoxin producers tested. The ability of the qPCR protocols to quantify aflatoxin-producing molds was evaluated in different artificially inoculated foods. A good linear correlation was obtained over the range 4 to 1 log cfu/g per reaction for all qPCR assays in the different food matrices (peanuts, spices and dry-fermented sausages). The detection limit in all inoculated foods ranged from 1 to 2 log cfu/g for SYBR Green and TaqMan assays. No significant effect was observed due to the different equipment, operator, and qPCR methodology used in the tests of repeatability and reproducibility for different foods. The proposed methods quantified with high efficiency the fungal load in foods. These qPCR protocols are proposed for use to quantify aflatoxin-producing molds in food products. Copyright © 2012 Elsevier Ltd. All rights reserved.
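The standard curves described above relate Ct to the log of target quantity; the slope also yields amplification efficiency (a slope of about −3.32 corresponds to 100%). A minimal sketch of fitting and inverting such a curve on synthetic dilution-series values, not the study's data:

```python
import numpy as np

def fit_standard_curve(log_quantities, ct_values):
    slope, intercept = np.polyfit(log_quantities, ct_values, 1)
    efficiency = 10 ** (-1 / slope) - 1  # 1.0 means 100% per cycle
    return slope, intercept, efficiency

def quantify(ct, slope, intercept):
    # invert the curve: Ct -> log10 quantity
    return (ct - intercept) / slope

# synthetic dilution series with ideal efficiency
logq = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
ct = 38.0 - 3.32 * logq
slope, intercept, eff = fit_standard_curve(logq, ct)
print(round(eff, 2))  # ~1.0, i.e. 100% efficiency
```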
Kahn, Johannes; Kaul, David; Böning, Georg; Rotzinger, Roman; Freyhardt, Patrick; Schwabe, Philipp; Maurer, Martin H; Renz, Diane Miriam; Streitparth, Florian
2017-09-01
Purpose As a supra-regional level-I trauma center, we evaluated computed tomography (CT) acquisitions of polytraumatized patients for quality and dose optimization purposes. Adapted statistical iterative reconstruction [(AS)IR] levels, tube voltage reduction as well as a split-bolus contrast agent (CA) protocol were applied. Materials and Methods 61 patients were split into 3 different groups that differed with respect to tube voltage (120 - 140 kVp) and level of applied ASIR reconstruction (ASIR 20 - 50 %). The CT protocol included a native acquisition of the head followed by a single contrast-enhanced acquisition of the whole body (64-MSCT). CA (350 mg/ml iodine) was administered as a split bolus injection of 100 ml (2 ml/s), 20 ml NaCl (1 ml/s), 60 ml (4 ml/s), 40 ml NaCl (4 ml/s) with a scan delay of 85 s to detect injuries of both the arterial system and parenchymal organs in a single acquisition. Both the quantitative (SNR/CNR) and qualitative (5-point Likert scale) image quality was evaluated in parenchymal organs that are often injured in trauma patients. Radiation exposure was assessed. Results The use of IR combined with a reduction of tube voltage resulted in good qualitative and quantitative image quality and a significant reduction in radiation exposure of more than 40 % (DLP 1087 vs. 647 mGy·cm). Image quality could be improved due to a dedicated protocol that included different levels of IR adapted to different slice thicknesses, kernels and the examined area for the evaluation of head, lung, body and bone injury patterns. In synopsis of our results, we recommend the implementation of a polytrauma protocol with a tube voltage of 120 kVp and the following IR levels: cCT 5 mm: ASIR 20; cCT 0.625 mm: ASIR 40; lung 2.5 mm: ASIR 30; body 5 mm: ASIR 40; body 1.25 mm: ASIR 50; body 0.625 mm: ASIR 0.
Conclusion A dedicated adaptation of the CT trauma protocol (level of tube voltage reduction and of IR) according to the examined body region (head, lung, body, bone), combined with a split-bolus CA injection protocol, allows for a high-quality CT examination and a relevant reduction of radiation exposure in the examination of polytraumatized patients. Key Points: · Dedicated adaptation of the CT trauma protocol allows for an optimized examination. · Different levels of iterative reconstruction, tube voltage and the CA injection protocol are crucial. · A reduction of radiation exposure of more than 40 % with good image quality is possible. Citation Format: Kahn J, Kaul D, Böning G et al. Quality and Dose Optimized CT Trauma Protocol - Recommendation from a University Level-I Trauma Center. Fortschr Röntgenstr 2017; 189: 844-854. © Georg Thieme Verlag KG Stuttgart · New York.
Valderrama, Katherine; Castellanos, Leonardo; Zea, Sven
2010-08-01
The sponge Discodermia dissoluta is the source of the potent antimitotic compound (+)-discodermolide. The relatively abundant and shallow populations of this sponge in Santa Marta, Colombia, allow for studies to evaluate the natural and biotechnological supply options of (+)-discodermolide. In this work, an RP-HPLC-UV methodology for the quantification of (+)-discodermolide from sponge samples was tested and validated. Our protocol for extracting this compound from the sponge included lyophilization, exhaustive methanol extraction, partitioning using water and dichloromethane, purification of the organic fraction in RP-18 cartridges, and finally retrieving the (+)-discodermolide in the methanol-water (80:20 v/v) fraction. This fraction was injected into an HPLC system with an Xterra RP-18 column and a detection wavelength of 235 nm. The calibration curve was linear, making it possible to calculate the limits of detection and quantification in these experiments. The intra-day and inter-day precision showed relative standard deviations lower than 5%. The accuracy, determined as the percentage recovery, was 99.4%. Nine samples of the sponge from the Bahamas, Bonaire, Curaçao and Santa Marta had concentrations of (+)-discodermolide ranging from 5.3 to 29.3 μg/g of wet sponge. This methodology is quick and simple, allowing for quantification in sponges from natural environments, in situ cultures or dissociated cells.
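Validation quantities like those above (linearity, detection and quantification limits) follow from the calibration fit: with residual standard deviation s and slope b, the limits are commonly taken as LOD ≈ 3.3 s/b and LOQ ≈ 10 s/b. A minimal sketch with hypothetical calibration points, not the paper's data:

```python
import numpy as np

def calibrate(conc, response):
    conc = np.asarray(conc, float)
    response = np.asarray(response, float)
    slope, intercept = np.polyfit(conc, response, 1)
    resid_sd = np.std(response - (slope * conc + intercept), ddof=2)
    lod = 3.3 * resid_sd / slope   # limit of detection
    loq = 10 * resid_sd / slope    # limit of quantification
    return slope, intercept, lod, loq
```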
Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert. A.
2016-01-01
Purpose To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods Select head examinations (brain, orbits, sinus, maxilla and temporal bones) were investigated. Dose-reduced head protocols using adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 pre- and post-dose-reduction examinations. Results Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to an average reduction in CTDIvol of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: −3% to 5%), improving CNR of low-contrast soft-tissue targets, and improving spatial resolution of high-contrast bony anatomy, as compared to FBP. Conclusion This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425
Herts, Brian R; Baker, Mark E; Obuchowski, Nancy; Primak, Andrew; Schneider, Erika; Rhana, Harpreet; Dong, Frank
2013-06-01
The purpose of this article is to determine the decrease in volume CT dose index (CTDI(vol)) and dose-length product (DLP) achieved by switching from fixed quality reference tube current protocols with automatic tube current modulation to protocols adjusting the quality reference tube current, slice collimation, and peak kilovoltage according to patient weight. All adult patients who underwent CT examinations of the abdomen or abdomen and pelvis during 2010 using weight-based protocols who also underwent a CT examination in 2008 or 2009 using fixed quality reference tube current protocols were identified from the radiology information system. Protocol pages were electronically retrieved, and the CT model, examination date, scan protocol, CTDI(vol), and DLP were extracted from the DICOM header or by optical character recognition. There were 15,779 scans with dose records for 2700 patients. Changes in CTDI(vol) and DLP were compared only between examinations of the same patient and same CT system model for examinations performed in 2008 or 2009 and those performed in 2010. The final analysis consisted of 1117 comparisons in 1057 patients, and 1209 comparisons in 988 patients for CTDI(vol) and DLP, respectively. The change to a weight-based protocol resulted in a statistically significant reduction in CTDI(vol) and DLP on three MDCT system models (p < 0.001). The largest average CTDI(vol) decrease was 13.9%, and the largest average DLP decrease was 16.1% on a 64-MDCT system. Both the CTDI(vol) and DLP decreased the most for patients who weighed less than 250 lb (112.5 kg). Adjusting the CT protocol by selecting parameters according to patient weight is a viable method for reducing CT radiation dose. The largest reductions occurred in the patients weighing less than 250 lb.
Medrano-Gracia, Pau; Cowan, Brett R; Bluemke, David A; Finn, J Paul; Kadish, Alan H; Lee, Daniel C; Lima, Joao A C; Suinesiaputra, Avan; Young, Alistair A
2013-09-13
Cardiovascular imaging studies generate a wealth of data which is typically used only for individual study endpoints. By pooling data from multiple sources, quantitative comparisons can be made of regional wall motion abnormalities between different cohorts, enabling reuse of valuable data. Atlas-based analysis provides precise quantification of shape and motion differences between disease groups and normal subjects. However, subtle shape differences may arise due to differences in imaging protocol between studies. A mathematical model describing regional wall motion and shape was used to establish a coordinate system registered to the cardiac anatomy. The atlas was applied to data contributed to the Cardiac Atlas Project from two independent studies which used different imaging protocols: steady state free precession (SSFP) and gradient recalled echo (GRE) cardiovascular magnetic resonance (CMR). Shape bias due to imaging protocol was corrected using an atlas-based transformation which was generated from a set of 46 volunteers who were imaged with both protocols. Shape bias between GRE and SSFP was regionally variable, and was effectively removed using the atlas-based transformation. Global mass and volume bias was also corrected by this method. Regional shape differences between cohorts were more statistically significant after removing regional artifacts due to imaging protocol bias. Bias arising from imaging protocol can be both global and regional in nature, and is effectively corrected using an atlas-based transformation, enabling direct comparison of regional wall motion abnormalities between cohorts acquired in separate studies.
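The atlas-based bias correction above can be illustrated as a per-coordinate regression learned from the volunteers imaged with both protocols, then applied to map GRE-derived shape coordinates into SSFP space. A deliberately simplified sketch (linear, per-coordinate, on synthetic data; the study's actual atlas transformation may be more elaborate):

```python
import numpy as np

def fit_bias_correction(gre_shapes, ssfp_shapes):
    # one linear map per shape coordinate, learned from paired scans
    n_coords = gre_shapes.shape[1]
    params = np.empty((n_coords, 2))  # (slope, offset) per coordinate
    for j in range(n_coords):
        params[j] = np.polyfit(gre_shapes[:, j], ssfp_shapes[:, j], 1)
    return params

def apply_correction(shape, params):
    # map a GRE-derived shape vector into SSFP space
    return params[:, 0] * shape + params[:, 1]

# 46 paired volunteer shapes, 5 coordinates, with a known regional bias
rng = np.random.default_rng(1)
gre = rng.normal(size=(46, 5))
ssfp = 1.1 * gre + 0.2
params = fit_bias_correction(gre, ssfp)
```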
Design and assessment of engineered CRISPR-Cpf1 and its use for genome editing.
Li, Bin; Zeng, Chunxi; Dong, Yizhou
2018-05-01
Cpf1, a CRISPR endonuclease discovered in Prevotella and Francisella 1 bacteria, offers an alternative platform for CRISPR-based genome editing beyond the commonly used CRISPR-Cas9 system originally discovered in Streptococcus pyogenes. This protocol enables the design of engineered CRISPR-Cpf1 components, both CRISPR RNAs (crRNAs) to guide the endonuclease and Cpf1 mRNAs to express the endonuclease protein, and provides experimental procedures for effective genome editing using this system. We also describe quantification of genome-editing activity and off-target effects of the engineered CRISPR-Cpf1 in human cell lines using both T7 endonuclease I (T7E1) assay and targeted deep sequencing. This protocol enables rapid construction and identification of engineered crRNAs and Cpf1 mRNAs to enhance genome-editing efficiency using the CRISPR-Cpf1 system, as well as assessment of target specificity within 2 months. This protocol may also be appropriate for fine-tuning other types of CRISPR systems.
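Quantifying editing from a T7E1 gel is commonly done from band intensities with the binomial correction indel % = 100 × (1 − √(1 − f_cut)), where f_cut is the cleaved fraction. This is a standard formula from the genome-editing literature, shown here as a generic sketch rather than this protocol's exact procedure:

```python
def indel_percent(uncut_band, cleaved_bands_sum):
    # f_cut: fraction of total band intensity in the cleaved products
    f_cut = cleaved_bands_sum / (uncut_band + cleaved_bands_sum)
    return 100 * (1 - (1 - f_cut) ** 0.5)

# hypothetical densitometry values: 25% intensity uncut, 75% cleaved
print(indel_percent(25.0, 75.0))  # 50.0
```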
Novel SPECT Technologies and Approaches in Cardiac Imaging
Slomka, Piotr; Hung, Guang-Uei; Germano, Guido; Berman, Daniel S.
2017-01-01
Recent novel approaches in myocardial perfusion single photon emission CT (SPECT) have been facilitated by new dedicated high-efficiency hardware with solid-state detectors and optimized collimators. New protocols include very low-dose (1 mSv) stress-only, two-position imaging to mitigate attenuation artifacts, and simultaneous dual-isotope imaging. Attenuation correction can be performed by specialized low-dose systems or by previously obtained CT coronary calcium scans. Hybrid protocols using CT angiography have been proposed. Image quality improvements have been demonstrated by novel reconstructions and motion correction. Fast SPECT acquisition facilitates dynamic flow and early function measurements. Image processing algorithms have become automated with virtually unsupervised extraction of quantitative imaging variables. This automation facilitates integration with clinical variables derived by machine learning to predict patient outcome or diagnosis. In this review, we describe new imaging protocols made possible by the new hardware developments. We also discuss several novel software approaches for the quantification and interpretation of myocardial perfusion SPECT scans. PMID:29034066
Martin-Sanchez, Pedro M; Gorbushina, Anna A; Kunte, Hans-Jörg; Toepel, Jörg
2016-07-01
A wide variety of fungi and bacteria are known to contaminate fuels and fuel systems. These microbial contaminants have been linked to fuel system fouling and corrosion. The fungus Hormoconis resinae, a common jet fuel contaminant, is used in this study as a model for developing innovative risk assessment methods. A novel qPCR protocol to detect and quantify H. resinae in, and together with, total fungal contamination of fuel systems is reported. Two primer sets, targeting the markers RPB2 and ITS, were selected for their remarkable specificity and sensitivity. These primers were successfully applied on fungal cultures and diesel samples demonstrating the validity and reliability of the established qPCR protocol. This novel tool allows clarification of the current role of H. resinae in fuel contamination cases, as well as providing a technique to detect fungal outbreaks in fuel systems. This tool can be expanded to other well-known fuel-deteriorating microorganisms.
Tagg, Alexander S; Sapp, Melanie; Harrison, Jesse P; Ojeda, Jesús J
2015-06-16
Microplastics (<5 mm) have been documented in environmental samples on a global scale. While these pollutants may enter aquatic environments via wastewater treatment facilities, the abundance of microplastics in these matrices has not been investigated. Although efficient methods for the analysis of microplastics in sediment samples and marine organisms have been published, no methods have been developed for detecting these pollutants within organic-rich wastewater samples. In addition, there is no standardized method for analyzing microplastics isolated from environmental samples. In many cases, part of the identification protocol relies on visual selection before analysis, which is open to bias. In order to address this, a new method for the analysis of microplastics in wastewater was developed. A pretreatment step using 30% hydrogen peroxide (H2O2) was employed to remove biogenic material, and focal plane array (FPA)-based reflectance micro-Fourier-transform (FT-IR) imaging was shown to successfully image and identify different microplastic types (polyethylene, polypropylene, nylon-6, polyvinyl chloride, polystyrene). Microplastic-spiked wastewater samples were used to validate the methodology, resulting in a robust protocol which was nonselective and reproducible (the overall success identification rate was 98.33%). The use of FPA-based micro-FT-IR spectroscopy also provides a considerable reduction in analysis time compared with previous methods, since samples that could take several days to be mapped using a single-element detector can now be imaged in less than 9 h (circular filter with a diameter of 47 mm). This method for identifying and quantifying microplastics in wastewater is likely to provide an essential tool for further research into the pathways by which microplastics enter the environment.
EPA will provide substantial involvement in the form of technical assistance, development of outputs, and oversight. EPA and the recipient will collaborate on program content, review of project progress, and quantification and reporting of results.
Automated lobar quantification of emphysema in patients with severe COPD.
Revel, Marie-Pierre; Faivre, Jean-Baptiste; Remy-Jardin, Martine; Deken, Valérie; Duhamel, Alain; Marquette, Charles-Hugo; Tacelli, Nunzia; Bakai, Anne-Marie; Remy, Jacques
2008-12-01
Automated lobar quantification of emphysema has not yet been evaluated. Unenhanced 64-slice MDCT was performed in 47 patients evaluated before bronchoscopic lung-volume reduction. CT images reconstructed with a standard (B20) and high-frequency (B50) kernel were analyzed using a dedicated prototype software (MevisPULMO) allowing lobar quantification of emphysema extent. Lobar quantification was obtained following (a) a fully automatic delineation of the lobar limits by the software and (b) a semiautomatic delineation with manual correction of the lobar limits when necessary, and was compared with the visual scoring of emphysema severity per lobe. No statistically significant difference existed between automated and semiautomated lobar quantification (p > 0.05 in the five lobes), with differences ranging from 0.4 to 3.9%. The agreement between the two methods (intraclass correlation coefficient, ICC) was excellent for the left upper lobe (ICC = 0.94), left lower lobe (ICC = 0.98), and right lower lobe (ICC = 0.80). The agreement was good for the right upper lobe (ICC = 0.68) and moderate for the middle lobe (ICC = 0.53). The Bland and Altman plots confirmed these results. A good agreement was observed between the software and visually assessed lobar predominance of emphysema (kappa 0.78; 95% CI 0.64-0.92). Automated and semiautomated lobar quantifications of emphysema are concordant and show good agreement with visual scoring.
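The Bland-Altman agreement check cited above reduces to the mean difference (bias) between paired measurements and its 95% limits of agreement (bias ± 1.96 SD). A minimal sketch with made-up paired values, not the study's measurements:

```python
import numpy as np

def bland_altman(a, b):
    # bias and 95% limits of agreement for paired measurements
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical automated vs. semiautomated emphysema extents (%)
auto = [22.1, 35.4, 18.0, 40.2]
semi = [21.5, 36.0, 17.2, 39.8]
print(bland_altman(auto, semi))
```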
Hahn, Andreas; Nics, Lukas; Baldinger, Pia; Ungersböck, Johanna; Dolliner, Peter; Frey, Richard; Birkfellner, Wolfgang; Mitterhauser, Markus; Wadsak, Wolfgang; Karanikas, Georgios; Kasper, Siegfried; Lanzenberger, Rupert
2012-08-01
Image-derived input functions (IDIFs) represent a promising technique for a simpler and less invasive quantification of PET studies as compared to arterial cannulation. However, a number of limitations complicate the routine use of IDIFs in clinical research protocols, and the full substitution of manual arterial samples by venous ones has hardly been evaluated. This study aims for a direct validation of IDIFs and venous data for the quantification of serotonin-1A (5-HT(1A)) receptor binding with [carbonyl-(11)C]WAY-100635 before and after hormone treatment. Fifteen PET measurements with arterial and venous blood sampling were obtained from 10 healthy women, 8 scans before and 7 after eight weeks of hormone replacement therapy. Image-derived input functions were derived automatically from cerebral blood vessels, corrected for partial volume effects and combined with venous manual samples from 10 min onward (IDIF+VIF). Corrections for plasma/whole-blood ratio and metabolites were done separately with arterial and venous samples. 5-HT(1A) receptor quantification was achieved with arterial input functions (AIF) and IDIF+VIF using a two-tissue compartment model. Comparison between arterial and venous manual blood samples yielded excellent reproducibility. Variability (VAR) was less than 10% for whole-blood activity (p>0.4) and below 2% for plasma to whole-blood ratios (p>0.4). Variability was slightly higher for parent fractions (VARmax=24% at 5 min, p<0.05, and VAR<13% after 20 min, p>0.1) but still within previously reported values. IDIFs after partial volume correction had peak values comparable to AIFs (mean difference Δ=-7.6 ± 16.9 kBq/ml, p>0.1), whereas AIFs exhibited a delay (Δ=4 ± 6.4s, p<0.05) and higher peak width (Δ=15.9 ± 5.2s, p<0.001).
Linear regression analysis showed strong agreement for 5-HT(1A) binding as obtained with AIF and IDIF+VIF at baseline (R(2)=0.95), after treatment (R(2)=0.93) and when pooling all scans (R(2)=0.93), with slopes and intercepts in the range of 0.97 to 1.07 and -0.05 to 0.16, respectively. In addition to the region of interest analysis, the approach yielded virtually identical results for voxel-wise quantification as compared to the AIF. Despite the fast metabolism of the radioligand, manual arterial blood samples can be substituted by venous ones for parent fractions and plasma to whole-blood ratios. Moreover, the combination of image-derived and venous input functions provides a reliable quantification of 5-HT(1A) receptors. This holds true for 5-HT(1A) binding estimates before and after treatment for both regions of interest-based and voxel-wise modeling. Taken together, the approach provides less invasive receptor quantification by full independence of arterial cannulation. This offers great potential for the routine use in clinical research protocols and encourages further investigation for other radioligands with different kinetic characteristics. Copyright © 2012 Elsevier Inc. All rights reserved.
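The combined input function above splices the image-derived curve (which captures the early arterial peak) with manual venous samples from 10 min onward. A minimal sketch of that stitching step on synthetic time-activity values; the study additionally applied partial-volume, plasma-ratio and metabolite corrections not shown here:

```python
import numpy as np

def combined_input_function(t, idif, venous_t, venous, switch_s=600.0):
    # IDIF before the switch time, venous manual samples after it
    early = t < switch_s
    late = venous_t >= switch_s
    t_out = np.concatenate([t[early], venous_t[late]])
    y_out = np.concatenate([idif[early], venous[late]])
    return t_out, y_out

# hypothetical time-activity data (s, kBq/ml)
t = np.array([0.0, 60.0, 300.0, 600.0, 1800.0])
idif = np.array([0.0, 95.0, 30.0, 18.0, 6.0])
venous_t = np.array([600.0, 1200.0, 2400.0])
venous = np.array([17.0, 10.0, 5.0])
tc, yc = combined_input_function(t, idif, venous_t, venous)
```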
QUALITY CONTROL - VARIABILITY IN PROTOCOLS
The EPA Risk Reduction Engineering Laboratory’s Quality Assurance Office, which published the popular pocket guide Preparing Perfect Project Plans, is now introducing another quality assurance reference aid. The document Variability in Protocols (VIP) was initially designed as a ...
Evolution of Natural Attenuation Evaluation Protocols
Traditionally the evaluation of the efficacy of natural attenuation was based on changes in contaminant concentrations and mass reduction. Statistical tools and models such as Bioscreen provided evaluation protocols which now are being approached via other vehicles including m...
Fox, Cherie; Wavra, Teresa; Drake, Diane Ash; Mulligan, Debbie; Bennett, Yvonne Pacheco; Nelson, Carla; Kirkwood, Peggy; Jones, Louise; Bader, Mary Kay
2015-05-01
Critically ill patients are at marked risk of hospital-acquired infections, which increase patients' morbidity and mortality. Registered nurses are the main health care providers of physical care, including hygiene to reduce and prevent hospital-acquired infections, for hospitalized critically ill patients. To investigate a new patient hand hygiene protocol designed to reduce hospital-acquired infection rates and improve nurses' hand-washing compliance in an intensive care unit. A preexperimental study design was used to compare 12-month rates of 2 common hospital-acquired infections, central catheter-associated bloodstream infection and catheter-associated urinary tract infection, and nurses' hand-washing compliance measured before and during use of the protocol. Reductions in 12-month infection rates were reported for both types of infections, but neither reduction was statistically significant. Mean 12-month nurse hand-washing compliance also improved, but not significantly. A hand hygiene protocol for patients in the intensive care unit was associated with reductions in hospital-acquired infections and improvements in nurses' hand-washing compliance. Prevention of such infections requires continuous quality improvement efforts to monitor lasting effectiveness as well as investigation of strategies to eliminate these infections. ©2015 American Association of Critical-Care Nurses.
2003-07-01
blood in the presence and absence of selective (huperzine A and Iso-OMPA) and non-selective (pyridostigmine bromide) cholinesterase inhibitors...cholinesterases after exposure to CWAs such as GD and pharmaceuticals such as huperzine A and pyridostigmine have been determined in animals and man...activity. Since urban terrorism is on the rise, Federal, State, and local authorities need a reliable, fast, inexpensive method for confirming such an
Slimani, Sami; Robyns, Audrey; Jarraud, Sophie; Molmeret, Maëlle; Dusserre, Eric; Mazure, Céline; Facon, Jean Pierre; Lina, Gérard; Etienne, Jerome; Ginevra, Christophe
2012-02-01
A PMA (propidium monoazide) pretreatment protocol, in which PMA is applied directly to membrane filters, was developed for the PCR-based quantification (PMA-qPCR) of viable Legionella pneumophila. Using this method, the amplification of DNA from membrane-damaged L. pneumophila was strongly inhibited for samples containing a small number of dead bacteria. Copyright © 2011 Elsevier B.V. All rights reserved.
Lapin, Guilherme Abbud Franco; Hochman, Bernardo; Nishioka, Michele Akemi; Maximino, Jessica Ruivo; Chadi, Gerson; Ferreira, Lydia Masako
2015-06-01
To describe and standardize a protocol that overcomes the technical limitations of Western blot (WB) analysis in the quantification of the neuropeptides substance P (SP) and calcitonin gene-related peptide (CGRP) following nociceptive stimuli in rat skin. Male Wistar rats (Rattus norvegicus albinus) weighing 250 to 350 g were used in this study. Elements of WB analysis were adapted by using specific manipulation of samples, repeated cycles of freezing and thawing, more thorough maceration, and a more potent homogenizer; increasing lytic reagents; promoting greater inhibition of protease activity; and using polyvinylidene fluoride membranes as transfer means for skin-specific protein. Other changes were also made to adapt the WB analysis to a rat model. University research center. Western blot analysis adapted to a rat model. This research design has proven effective in collecting and preparing skin samples to quantify SP and CGRP using WB analysis in rat skin. This study described a research design that uses WB analysis as a reproducible, technically accessible, and cost-effective method for the quantification of SP and CGRP in rat skin that overcomes technical biases.
Hoofnagle, Andrew N; Whiteaker, Jeffrey R; Carr, Steven A; Kuhn, Eric; Liu, Tao; Massoni, Sam A; Thomas, Stefani N; Townsend, R Reid; Zimmerman, Lisa J; Boja, Emily; Chen, Jing; Crimmins, Daniel L; Davies, Sherri R; Gao, Yuqian; Hiltke, Tara R; Ketchum, Karen A; Kinsinger, Christopher R; Mesri, Mehdi; Meyer, Matthew R; Qian, Wei-Jun; Schoenherr, Regine M; Scott, Mitchell G; Shi, Tujin; Whiteley, Gordon R; Wrobel, John A; Wu, Chaochao; Ackermann, Brad L; Aebersold, Ruedi; Barnidge, David R; Bunk, David M; Clarke, Nigel; Fishman, Jordan B; Grant, Russ P; Kusebauch, Ulrike; Kushnir, Mark M; Lowenthal, Mark S; Moritz, Robert L; Neubert, Hendrik; Patterson, Scott D; Rockwood, Alan L; Rogers, John; Singh, Ravinder J; Van Eyk, Jennifer E; Wong, Steven H; Zhang, Shucha; Chan, Daniel W; Chen, Xian; Ellis, Matthew J; Liebler, Daniel C; Rodland, Karin D; Rodriguez, Henry; Smith, Richard D; Zhang, Zhen; Zhang, Hui; Paulovich, Amanda G
2016-01-01
For many years, basic and clinical researchers have taken advantage of the analytical sensitivity and specificity afforded by mass spectrometry in the measurement of proteins. Clinical laboratories are now beginning to deploy these workflows as well. For assays that use proteolysis to generate peptides for protein quantification and characterization, synthetic stable isotope-labeled internal standard peptides are of central importance. No general recommendations are currently available surrounding the use of peptides in protein mass spectrometric assays. The Clinical Proteomic Tumor Analysis Consortium of the National Cancer Institute has collaborated with clinical laboratorians, peptide manufacturers, metrologists, representatives of the pharmaceutical industry, and other professionals to develop a consensus set of recommendations for peptide procurement, characterization, storage, and handling, as well as approaches to the interpretation of the data generated by mass spectrometric protein assays. Additionally, the importance of carefully characterized reference materials, in particular peptide standards for the improved concordance of amino acid analysis methods across the industry, is highlighted. The alignment of practices around the use of peptides and the transparency of sample preparation protocols should allow for the harmonization of peptide and protein quantification in research and clinical care. © 2015 American Association for Clinical Chemistry.
Schauer, Sonja; Sommer, Regina; Farnleitner, Andreas H.
2012-01-01
A new protocol for rapid, specific, and sensitive cell-based quantification of Vibrio cholerae/Vibrio mimicus in water samples was developed. The protocol is based on catalyzed reporter deposition fluorescence in situ hybridization (CARD-FISH) in combination with solid-phase cytometry. For pure cultures, we were able to quantify down to 6 V. cholerae cells on one membrane with a relative precision of 39% and down to 12 cells with a relative precision of 17% after hybridization with the horseradish peroxidase (HRP)-labeled probe Vchomim1276 (specific for V. cholerae and V. mimicus) and signal amplification. The corresponding position of the probe on the 16S rRNA is highly accessible even when labeled with HRP. For the first time, we were also able to successfully quantify V. cholerae/V. mimicus via solid-phase cytometry in extremely turbid environmental water samples collected in Austria. Cell numbers ranged from 4.5 × 10^1 cells ml^-1 in the large saline lake Neusiedler See to 5.6 × 10^4 cells ml^-1 in an extremely turbid shallow soda lake situated nearby. We therefore suggest CARD-FISH in combination with solid-phase cytometry as a powerful tool to quantify V. cholerae/V. mimicus in ecological studies as well as for risk assessment and monitoring programs. PMID:22885749
de Abreu, Emanuelle Maria Sávio; Machado, Carla Jorge; Pastore Neto, Mario; de Rezende Neto, João Baptista; Sanches, Marcelo Dias
2015-01-01
To investigate the effect of standardized interventions in the management of tube thoracostomy patients and to assess the independent effect of each intervention. A chest tube management protocol was assessed in a retrospective cohort study. The tube thoracostomy protocol (TTP) was implemented in August 2012, and consisted of: antimicrobial prophylaxis, chest tube insertion in the operating room (OR), admission post chest tube thoracostomy (CTT) on a hospital floor separate from the emergency department (ED), and daily respiratory therapy (RT) sessions post-CTT. The inclusion criteria were hemodynamic stability, age between 15 and 59 years, and injury severity score (ISS) < 17. All patients had isolated injuries to the chest wall, lung, and pleura. During the study period, 92 patients were managed according to the standardized protocol. The outcomes of those patients were compared to those of 99 patients treated before the TTP. Multivariate logistic regression analysis was performed to assess the independent effect of each variable of the protocol on selected outcomes. Demographics, injury severity, and trauma mechanisms were similar among the groups. As expected, protocol compliance increased after the implementation of the TTP. There was a significant reduction (p<0.05) in the incidence of retained hemothoraces, empyemas, pneumonias, surgical site infections, post-procedural complications, hospital length of stay, and number of chest tube days. Respiratory therapy was independently linked to a significant reduction (p<0.05) in the incidence of seven out of eight undesired outcomes after CTT. Antimicrobial prophylaxis was linked to a significant decrease (p<0.05) in retained hemothoraces, despite no significant (p<0.10) reductions in empyema and surgical site infections. Conversely, OR chest tube insertion was associated with significant (p<0.05) reduction of both complications, and also significantly decreased the incidence of pneumonias. 
Implementation of a TTP effectively reduced complications after CTT in trauma patients.
Hiler, Daniel J.; Barabas, Marie E.; Griffiths, Lyra M.; Dyer, Michael A.
2017-01-01
Postmitotic differentiated neurons are among the most difficult cells to reprogram into induced pluripotent stem cells (iPSCs) because they have poor viability when cultured as dissociated cells. Other protocols to reprogram postmitotic neurons have required the inactivation of the p53 tumor suppressor. We describe a method that does not require p53 inactivation and induces reprogramming in cells purified from the retinae of reprogrammable mice in aggregates with wild-type retinal cells. After the first 10 days of reprogramming, the aggregates are then dispersed and plated on irradiated feeder cells to propagate and isolate individual iPSC clones. The reprogramming efficiency of different neuronal populations at any stage of development can be quantitated using this protocol. Reprogramming retinal neurons with this protocol will take 56 days, and these retina-derived iPSCs can undergo retinal differentiation to produce retinae in 34 days. In addition, we describe a quantitative assessment of retinal differentiation from these neuron-derived iPSCs called STEM-RET. The procedure quantitates eye field specification, optic cup formation, and retinal differentiation in 3-dimensional cultures using molecular, cellular and morphological criteria. An advanced level of cell culture experience is required to carry out this protocol. PMID:27658012
Modeling and analysis of walkability in suburban neighborhoods in Las Vegas.
DOT National Transportation Integrated Search
2017-05-01
Walking has sound health benefits and can be a pleasurable experience requiring neither fuel, fare, license, nor registration. : Society also benefits by the associated reduction of motorized vehicle travel. The objective of this study was to quantif...
Substantial federal involvement will take the form of monitoring the project by EPA, participation and collaboration between EPA and the recipient in program content, review of project progress, and quantification and reporting of results.
Carleton, R Nicholas; Teale Sapach, Michelle J N; Oriet, Chris; LeBouthillier, Daniel M
2017-01-01
Social anxiety disorder (SAD) models posit vigilance for external social threat cues and exacerbated self-focused attention as key in disorder development and maintenance. Evidence indicates a modified dot-probe protocol may reduce symptoms of SAD; however, the efficacy when compared to a standard protocol and long-term maintenance of treatment gains remains unclear. Furthermore, the efficacy of such protocols on SAD-related constructs remains relatively unknown. The current investigation clarified these associations using a randomized control trial replicating and extending previous research. Participants with SAD (n = 113; 71% women) were randomized to complete a standard (i.e. control) or modified (i.e. active) dot-probe protocol consisting of 15-min sessions twice weekly for four weeks. Self-reported symptoms were measured at baseline, post-treatment, and 4-month and 8-month follow-ups. Hierarchical linear modeling indicated significant self-reported reductions in symptoms of social anxiety, fear of negative evaluation, trait anxiety, and depression, but no such reductions in fear of positive evaluation. Symptom changes did not differ based on condition and were maintained at 8-month follow-up. Attentional biases during the dot-probe task were not related to symptom change. Overall, our results replicate support for the efficacy of both protocols in reducing symptoms of SAD and specific related constructs, and suggest a role of exposure, expectancy, or practice effects, rather than attention modification, in effecting such reductions. The current results also support distinct relationships between fears of negative and positive evaluation and social anxiety. Further research focused on identifying the mechanisms of change in attention modification protocols appears warranted.
Linearization of the bradford protein assay.
Ernst, Orna; Zor, Tsaffrir
2010-04-12
Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. This widely used assay enables rapid and simple protein quantification in cell lysates, cellular fractions, or recombinant protein samples, for the purpose of normalization of biochemical measurements. However, an intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbance measurements at 590 nm and 450 nm is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantification down to 50 ng of bovine serum albumin. Furthermore, the interference commonly introduced by detergents that are used to create the cell lysates is greatly reduced by the new protocol. A linear equation developed on the basis of mass action and Beer's law perfectly fits the experimental data.
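The linearization described above can be sketched numerically. The calibration values below are illustrative made-up numbers, not data from the paper; the point is that fitting the A590/A450 ratio (rather than A590 alone) against concentration gives a linear calibration that can be inverted for unknown samples:

```python
import numpy as np

# Hypothetical calibration series (BSA standards, in µg) with example
# absorbance readings; these values are invented for illustration only.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])        # µg BSA
a590 = np.array([0.40, 0.46, 0.52, 0.63, 0.83, 1.15])  # absorbance at 590 nm
a450 = np.array([0.70, 0.67, 0.64, 0.59, 0.52, 0.43])  # absorbance at 450 nm

# Per the linearized Bradford protocol, the ratio A590/A450 is linear in
# protein concentration, so an ordinary least-squares line suffices.
ratio = a590 / a450
slope, intercept = np.polyfit(conc, ratio, 1)

def protein_ug(a590_sample, a450_sample):
    """Invert the linear ratio calibration to estimate protein amount (µg)."""
    return (a590_sample / a450_sample - intercept) / slope
```

Reading back one of the calibration points through `protein_ug` recovers its known concentration to within the fit residuals, which is the practical benefit of the ratio transformation.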
DOE Office of Scientific and Technical Information (OSTI.GOV)
Letant, S E; Kane, S R; Murphy, G A
2008-05-30
This note presents a comparison of Most-Probable-Number Rapid Viability (MPN-RV) PCR and traditional culture methods for the quantification of Bacillus anthracis Sterne spores in macrofoam swabs generated by the Centers for Disease Control and Prevention (CDC) for a multi-center validation study aimed at testing environmental swab processing methods for recovery, detection, and quantification of viable B. anthracis spores from surfaces. Results show that spore numbers provided by the MPN-RV PCR method were in statistical agreement with the CDC conventional culture method for all three levels of spores tested (10^4, 10^2, and 10 spores), even in the presence of dirt. In addition to detecting low levels of spores in environmental conditions, the MPN-RV PCR method is specific, and compatible with automated high-throughput sample processing and analysis protocols.
Candida tropicalis biofilm and human epithelium invasion is highly influenced by environmental pH.
Ferreira, Carina; Gonçalves, Bruna; Vilas Boas, Diana; Oliveira, Hugo; Henriques, Mariana; Azeredo, Joana; Silva, Sónia
2016-11-01
The main goal of this study was to investigate the role of pH on Candida tropicalis virulence determinants, namely the ability to form biofilms and to colonize/invade reconstituted human vaginal epithelia. Biofilm formation was evaluated by enumeration of cultivable cells, total biomass quantification and structural analysis by scanning electron microscopy and confocal laser scanning microscopy. Candida tropicalis human vaginal epithelium colonization and invasiveness were examined qualitatively by epifluorescence microscopy and quantitatively by a novel quantitative real-time PCR protocol for Candida quantification in tissues. The results revealed that environmental pH influences C. tropicalis biofilm formation as well as the colonization and potential to invade human epithelium with intensification at neutral and alkaline conditions compared to acidic conditions. For the first time, we have demonstrated that C. tropicalis biofilm formation and invasion is highly influenced by environmental pH. © Crown copyright 2016.
Skeletal Muscle Ultrasound in Critical Care: A Tool in Need of Translation.
Mourtzakis, Marina; Parry, Selina; Connolly, Bronwen; Puthucheary, Zudin
2017-10-01
With the emerging interest in documenting and understanding muscle atrophy and function in critically ill patients and survivors, ultrasonography has transformational potential for measurement of muscle quantity and quality. We discuss the importance of quantifying skeletal muscle in the intensive care unit setting. We also identify the merits and limitations of various modalities that are capable of accurately and precisely measuring muscularity. Ultrasound is emerging as a potentially powerful tool for skeletal muscle quantification; however, there are key challenges that need to be addressed in future work to ensure useful interpretation and comparability of results across diverse observational and interventional studies. Ultrasound presents several methodological challenges, and ultimately muscle quantification combined with metabolic, nutritional, and functional markers will allow optimal patient assessment and prognosis. Moving forward, we recommend that publications include greater detail on landmarking, repeated measures, identification of muscle that was not assessable, and reproducible protocols to more effectively compare results across different studies.
Elsholtz, Fabian Henry Jürgen; Kamp, Julia Evi-Katrin; Vahldiek, Janis Lucas; Hamm, Bernd; Niehues, Stefan Markus
2018-06-18
CT-guided periradicular infiltration of the cervical spine is an effective symptomatic treatment in patients with radiculopathy-associated pain syndromes. This study evaluates the robustness and safety of a low-dose protocol on a CT scanner with iterative reconstruction software. A total of 183 patients who underwent periradicular infiltration therapy of the cervical spine were included in this study. Of these, 82 interventions were performed on a new CT scanner with a new intervention protocol using an iterative reconstruction algorithm. Spot scanning was implemented for planning and a basic low-dose setup of 80 kVp and 5 mAs was established during intermittent fluoroscopy. The comparison group included 101 prior interventions on a scanner without iterative reconstruction. The dose-length product (DLP), number of acquisitions, pain reduction on a numeric analog scale, and protocol changes to achieve a safe intervention were recorded. The median DLP for the whole intervention was 24.3 mGy*cm in the comparison group and 1.8 mGy*cm in the study group. The median pain reduction was -3 in the study group and -2 in the comparison group. A 5 mAs increase in the tube current-time product was required in 5 patients of the study group. Implementation of a new scanner and intervention protocol resulted in a 92.6 % dose reduction without a compromise in safety and pain relief. The dose needed here is more than 75 % lower than doses used for similar interventions in published studies. An increase of the tube current-time product was needed in only 6 % of interventions. The presented ultra-low-dose protocol allows for a significant dose reduction without compromising outcome. The protocol includes spot scanning for planning purposes and a basic setup of 80 kVp and 5 mAs. The iterative reconstruction algorithm is activated during fluoroscopy. Elsholtz FH, Kamp JE, Vahldiek JL et al. 
Periradicular Infiltration of the Cervical Spine: How New CT Scanner Techniques and Protocol Modifications Contribute to the Achievement of Low-Dose Interventions. Fortschr Röntgenstr 2018; DOI: 10.1055/a-0632-3930. © Georg Thieme Verlag KG Stuttgart · New York.
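The reported 92.6 % figure follows directly from the two median DLP values quoted in the abstract; a quick arithmetic check:

```python
# Median dose-length products from the abstract (mGy*cm).
dlp_comparison = 24.3  # scanner without iterative reconstruction
dlp_study = 1.8        # new low-dose protocol with iterative reconstruction

# Percentage dose reduction relative to the comparison group.
reduction_pct = (dlp_comparison - dlp_study) / dlp_comparison * 100
rounded = round(reduction_pct, 1)  # 92.6, matching the reported value
```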
Gupta, Ruma; Sundararajan, Mahesh; Gamare, Jayashree S
2017-08-01
Reduction of UO2^2+ ions to U^4+ ions is difficult due to the involvement of two axially bonded oxygen atoms, and often requires a catalyst to lower the activation barrier. Noble metal nanoparticles (NPs) exhibit high electrocatalytic activity and could be employed for the sensitive and rapid quantification of UO2^2+ ions in the aqueous matrix. Therefore, Pd, Ru, and Rh NP-decorated glassy carbon electrodes were examined for their efficacy toward electrocatalytic reduction of UO2^2+ ions, and Ru NPs were observed to mediate the electro-reduction of UO2^2+ ions most efficiently. The mechanism of the electroreduction of UO2^2+ by the RuNPs/GC was studied using density functional theory calculations, which pointed to a different electroreduction pathway for 5f metal ions than for 4p metal ions such as As(III). RuNPs decorated on the glassy carbon would be hydrated, which in turn assists in adsorbing the uranyl sulfates through hydrogen bonding, thus facilitating electro-reduction. The differential pulse voltammetric (DPV) technique was used for rapid and sensitive quantification of UO2^2+ ions. The RuNPs/GC-based DPV technique could be used to determine the concentration of uranyl in a few minutes with a detection limit of 1.95 ppb. The RuNPs/GC-based DPV was evaluated for its analytical performance using seawater as well as lake water and groundwater spiked with known amounts of UO2^2+.
Verhegghe, M; Rasschaert, G; Herman, L; Goossens, K; Vandaele, L; De Bleecker, K; Vlaemynck, G; Heyndrickx, M; De Block, J
2017-05-01
The aim of this study was to develop and validate 2 protocols (for use on-farm and at a central location) for the reduction of Mycobacterium avium ssp. paratuberculosis (MAP) in colostrum while preserving beneficial immunoglobulins (IgG). The on-farm protocol was based on curdling of the colostrum, where the IgG remain in the whey and the MAP bacteria are trapped in the curd. First, the colostrum was diluted with water (2 volumes colostrum to 1 volume water) and 2% rennet was added. After incubation (1 h at 32°C), the curd was cut and incubated again, after which whey and curd were separated using a cheesecloth. The curd was removed and milk powder was added to the whey. Approximately 1 log reduction in MAP counts was achieved. A reduction in total proteins and IgG was observed due to initial dilution of the colostrum. After curd formation, more than 95% of the immunoglobulins remained in the whey fraction. The semi-industrial protocol was based on centrifugation, which causes MAP to precipitate, while the IgG remain in the supernatant. This protocol was first developed in the laboratory. The colostrum was diluted with skimmed colostrum (2 volumes colostrum to 1 volume skimmed colostrum), then skimmed and centrifuged (at 15,600 × g for 30 min at room temperature). We observed on average 1.5 log reduction in the MAP counts and a limited reduction in proteins and IgG in the supernatant. To obtain a semi-industrial protocol, dairy pilot appliances were evaluated and the following changes were applied to the protocol: after 2:1 dilution as above, the colostrum was skimmed and subsequently clarified, after which the cream was heat treated and added to the supernatant. To investigate the effect of the colostrum treatment on the nutritional value and palatability of the colostrum and the IgG transfer, an animal experiment was conducted with 24 calves. 
Six received the dam's colostrum, 6 were given untreated purchased colostrum (control), and 2 groups of 6 calves received colostrum treated according to both of the above-mentioned methods. No significant differences were found between the test groups and the dam's colostrum group in terms of animal health, IgG uptake in the blood serum, milk, or forage uptake. Two protocols to reduce MAP in colostrum (for use on-farm or at a central location) were developed. Both methods preserve the vital IgG. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
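The "1 log" and "1.5 log" reductions in MAP counts quoted above can be related to percent removal with a small helper; this is generic log-reduction arithmetic, not code from the study:

```python
import math

def log_reduction(count_before, count_after):
    """Log10 reduction factor, the unit used for the MAP counts above."""
    return math.log10(count_before / count_after)

def percent_removed(logs):
    """Fraction of organisms removed (in %) for a given log reduction."""
    return (1 - 10 ** (-logs)) * 100

# A 1 log reduction corresponds to ~90% removal; the ~1.5 log reduction
# achieved by the centrifugation protocol corresponds to ~96.8% removal.
```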
Mass Spectrometric Quantification of N-Linked Glycans by Reference to Exogenous Standards.
Mehta, Nickita; Porterfield, Mindy; Struwe, Weston B; Heiss, Christian; Azadi, Parastoo; Rudd, Pauline M; Tiemeyer, Michael; Aoki, Kazuhiro
2016-09-02
Environmental and metabolic processes shape the profile of glycoprotein glycans expressed by cells, whether in culture, developing tissues, or mature organisms. Quantitative characterization of glycomic changes associated with these conditions has been achieved historically by reductive coupling of oligosaccharides to various fluorophores following release from glycoprotein and subsequent HPLC or capillary electrophoretic separation. Such labeling-based approaches provide a robust means of quantifying glycan amount based on fluorescence yield. Mass spectrometry, on the other hand, has generally been limited to relative quantification in which the contribution of the signal intensity for an individual glycan is expressed as a percent of the signal intensity summed over the total profile. Relative quantification has been valuable for highlighting changes in glycan expression between samples; sensitivity is high, and structural information can be derived by fragmentation. We have investigated whether MS-based glycomics is amenable to absolute quantification by referencing signal intensities to well-characterized oligosaccharide standards. We report the qualification of a set of N-linked oligosaccharide standards by NMR, HPLC, and MS. We also demonstrate the dynamic range, sensitivity, and recovery from complex biological matrices for these standards in their permethylated form. Our results indicate that absolute quantification for MS-based glycomic analysis is reproducible and robust utilizing currently available glycan standards.
A Secure Authenticated Key Exchange Protocol for Credential Services
NASA Astrophysics Data System (ADS)
Shin, Seonghan; Kobara, Kazukuni; Imai, Hideki
In this paper, we propose a leakage-resilient and proactive authenticated key exchange (called LRP-AKE) protocol for credential services which provides not only a higher level of security against leakage of stored secrets but also secrecy of the private key with respect to the involved server. We also show that the LRP-AKE protocol is provably secure in the random oracle model, with a reduction to the computational Diffie-Hellman problem. In addition, we discuss some possible applications of the LRP-AKE protocol.
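The computational Diffie-Hellman assumption underlying that reduction can be illustrated with a toy key agreement. This sketch deliberately omits the authentication, leakage resilience, and proactive refresh that define LRP-AKE; the small Mersenne prime is for demonstration only (real deployments use standardized groups, e.g. RFC 3526):

```python
import secrets

# Demo parameters: M127 = 2^127 - 1 is prime, but far too small for real use.
p = (1 << 127) - 1
g = 5

# Each party picks a secret exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 2   # Alice's secret
b = secrets.randbelow(p - 2) + 2   # Bob's secret
A = pow(g, a, p)                   # sent Alice -> Bob
B = pow(g, b, p)                   # sent Bob -> Alice

# Both sides derive g^(ab) mod p; an eavesdropper seeing only (g, A, B)
# faces the computational Diffie-Hellman problem.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
```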
2018-01-01
Objective To compare radiation doses between conventional and chest pain protocols using dual-source retrospectively electrocardiography (ECG)-gated cardiothoracic computed tomography (CT) in children and adults and assess the effect of tube current saturation on radiation dose reduction. Materials and Methods This study included 104 patients (16.6 ± 7.7 years, range 5–48 years) that were divided into two groups: those with and those without tube current saturation. The estimated radiation doses of retrospectively ECG-gated spiral cardiothoracic CT were compared between conventional, uniphasic, and biphasic chest pain protocols acquired with the same imaging parameters in the same patients by using paired t tests. Dose reduction percentages, patient ages, volume CT dose index values, and tube current time products per rotation were compared between the two groups by using unpaired t tests. A p value < 0.05 was considered significant. Results The volume CT dose index values of the biphasic chest pain protocol (10.8 ± 3.9 mGy) were significantly lower than those of the conventional protocol (12.2 ± 4.7 mGy, p < 0.001) and those of the uniphasic chest pain protocol (12.9 ± 4.9 mGy, p < 0.001). The dose-saving effect of biphasic chest pain protocol was significantly less with a saturated tube current (4.5 ± 10.2%) than with unsaturated tube current method (14.8 ± 11.5%, p < 0.001). In 76 patients using 100 kVp, patient age showed no significant differences between the groups with and without tube current saturation in all protocols (p > 0.05); the groups with tube current saturation showed significantly higher volume CT dose index values (p < 0.01) and tube current time product per rotation (p < 0.001) than the groups without tube current saturation in all protocols. 
Conclusion The radiation dose of dual-source retrospectively ECG-gated spiral cardiothoracic CT can be reduced by approximately 15% by using the biphasic chest pain protocol instead of the conventional protocol in children and adults if radiation dose parameters are further optimized to avoid tube current saturation. PMID:29353996
Ocké, Marga; Brants, Henny; Dofkova, Marcela; Freisling, Heinz; van Rossum, Caroline; Ruprich, Jiri; Slimani, Nadia; Temme, Elisabeth; Trolle, Ellen; Vandevijvere, Stefanie; Huybrechts, Inge; de Boer, Evelien
2015-08-01
To test the feasibility of tools and procedures for a pan-European food consumption survey among children 0-10 years and to recommend one of two tested dietary assessment methods. Two pilot studies including 378 children were conducted in Belgium and the Czech Republic in the Pilot studies for Assessment of Nutrient intake and food Consumption among Kids in Europe. One protocol included a 3-day food diary which was checked with a parent, and data were entered afterwards using EPIC-Soft. The alternative protocol consisted of two non-consecutive 1-day food diaries followed by EPIC-Soft completion interviews. Both protocols included general and food propensity questionnaires and anthropometric measurements. The protocols were compared using evaluation questionnaires among the participating parents and study personnel. The parents found the questionnaires and instructions for filling in the food diaries understandable. Food description and food quantification was evaluated as problematic by 29 and 15% of the participants for the 3-day diaries versus 15 and 12% for the 1-day diaries. The protocol with 1-day food diaries was evaluated as less burdensome by the parents and logistically more challenging by the interviewers. Both dietary assessment methods with related tools and administration protocols were evaluated as feasible. The administration protocol with two 1-day food diaries with completion interviews offers more advantages for the future pan-European survey in children 0-10 years. The positive evaluation of feasibility of tools and materials is an important step towards harmonised food consumption data at European level among the younger age groups.
Anitua, Eduardo; Prado, Roberto; Troya, María; Zalduendo, Mar; de la Fuente, María; Pino, Ander; Muruzabal, Francisco; Orive, Gorka
2016-07-01
Plasma rich in growth factors (PRGF) is a biological therapy that uses the patient's own growth factors to promote tissue regeneration. Given the current European regulatory framework, in which the anticoagulant solution in blood extraction tubes could be considered a medicinal product, a new PRGF protocol has been developed. The current protocol (PRGF-A) and the new one (PRGF-B) were performed and compared under Good Laboratory Practices. The PRGF-A protocol uses extraction tubes with 0.9 mL of trisodium citrate as anticoagulant and 50 μL of calcium chloride/mL PRGF for activation. PRGF-B reduces the amounts of sodium citrate and calcium chloride to 0.4 mL and 20 μL, respectively. Basic hematological parameters, platelet function, the scaffold-obtaining process, growth factor content, and the biological effect were compared between the two protocols. The PRGF-B protocol led to a statistically significant higher enrichment and recovery of platelets relative to PRGF-A. The hypotonic stress response of platelets was significantly better with the new protocol. A statistically significant decrease in the basal platelet activation status of PRGF-B compared to PRGF-A was also observed. The duration of the lag phase in the platelet aggregation assay was significantly lower for the PRGF-B protocol. Both the clotting time and the clot retraction time were significantly reduced with the B protocol. A higher growth factor concentration was detected in the plasma obtained using the PRGF-B protocol. The new PRGF protocol, with reduced amounts of anticoagulant and activator, thus improves on the current one.
Moreira, Otacilio C; Yadon, Zaida E; Cupolillo, Elisa
2017-09-29
Cutaneous leishmaniasis (CL) is spread worldwide and is the most common manifestation of leishmaniasis. Diagnosis is performed by combining clinical and epidemiological features, and through the detection of Leishmania parasites (or DNA) in tissue specimens or through parasite isolation in culture medium. Diagnosis of CL is challenging, reflecting the pleomorphic clinical manifestations of this disease. Skin lesions vary in severity, clinical appearance, and duration, and in some cases, they can be indistinguishable from lesions related to other diseases. Over the past few decades, PCR-based methods, including real-time PCR assays, have been developed for Leishmania detection, quantification and species identification, improving the molecular diagnosis of CL. This review provides an overview of many real-time PCR methods reported for the diagnostic evaluation of CL and some recommendations for the application of these methods for quantification purposes for clinical management and epidemiological studies. Furthermore, the use of real-time PCR for Leishmania species identification is also presented. The advantages of real-time PCR protocols are numerous, including increased sensitivity and specificity and simpler standardization of diagnostic procedures. However, despite the numerous assays described, there is still no consensus regarding the methods employed. Furthermore, the analytical and clinical validation of CL molecular diagnosis has not followed international guidelines so far. A consensus methodology comprising a DNA extraction protocol with an exogenous quality control and an internal reference to normalize parasite load is still needed. In addition, the analytical and clinical performance of any consensus methodology must be accurately assessed.
This review shows that a standardization initiative is essential to guide researchers and clinical laboratories towards the achievement of a robust and reproducible methodology, which will permit further evaluation of parasite load as a surrogate marker of prognosis and monitoring of aetiological treatment, particularly in multi-centric observational studies and clinical trials. Copyright © 2017 Elsevier B.V. All rights reserved.
Haegi, Anita; Catalano, Valentina; Luongo, Laura; Vitale, Salvatore; Scotton, Michele; Ficcadenti, Nadia; Belisario, Alessandra
2013-08-01
A reliable and species-specific real-time quantitative polymerase chain reaction (qPCR) assay was developed for detection of the complex soilborne anamorphic fungus Fusarium oxysporum. The new primer pair, designed on the translation elongation factor 1-α gene with an amplicon of 142 bp, was highly specific to F. oxysporum, without cross-reactions with other Fusarium spp. The protocol was applied to grafted melon plants for the detection and quantification of F. oxysporum f. sp. melonis, a devastating pathogen of this cucurbit. Grafting technologies are widely used in melon to confer resistance against new virulent races of F. oxysporum f. sp. melonis while maintaining the properties of valuable commercial varieties. However, the effects on vascular pathogen colonization have not been fully investigated. Analyses were performed on 'Charentais-T' (susceptible) and 'Nad-1' (resistant) melon cultivars, each used both as rootstock and as scion, and inoculated with F. oxysporum f. sp. melonis race 1 and race 1,2. Pathogen development was compared using qPCR and isolations from stem tissues. Early asymptomatic melon infections were detected with a quantification limit of 1 pg of fungal DNA. The qPCR protocol clearly showed that fungal development was highly affected by host-pathogen interaction (compatible or incompatible) and time (days postinoculation). The principal significant effect (P ≤ 0.01) on fungal development was due to the melon genotype used as rootstock, and this effect had a significant interaction with time and F. oxysporum f. sp. melonis race. In particular, the amount of race 1,2 DNA was significantly higher compared with that estimated for race 1 in the incompatible interaction at 18 days postinoculation. The two fungal races were always present in both the rootstock and scion of grafted plants in either the compatible or incompatible interaction.
HPLC-MRM relative quantification analysis of fatty acids based on a novel derivatization strategy.
Cai, Tie; Ting, Hu; Xin-Xiang, Zhang; Jiang, Zhou; Jin-Lan, Zhang
2014-12-07
Fatty acids (FAs) are associated with a series of diseases including tumors, diabetes, and heart diseases. As potential biomarkers, FAs have attracted increasing attention from both biological researchers and the pharmaceutical industry. However, poor ionization efficiency, extreme diversity, strict dependence on internal standards and complicated multiple reaction monitoring (MRM) optimization protocols have challenged efforts to quantify FAs. In this work, a novel derivatization strategy based on 2,4-bis(diethylamino)-6-hydrazino-1,3,5-triazine was developed to enable quantification of FAs. The sensitivity of FA detection was significantly enhanced as a result of the derivatization procedure. FA quantities as low as 10 fg could be detected by high-performance liquid chromatography coupled with triple-quadrupole mass spectrometry. General MRM conditions were developed for any FA, which facilitated the quantification and extended the application of the method. The FA quantification strategy based on HPLC-MRM was carried out using deuterated derivatization reagents. "Heavy" derivatization reagents were used as internal standards (ISs) to minimize matrix effects. Prior to statistical analysis, amounts of each FA species were normalized by their corresponding IS, which guaranteed the accuracy and reliability of the method. FA changes in plasma induced by ageing were studied using this strategy. Several FA species were identified as potential ageing biomarkers. The sensitivity, accuracy, reliability, and full coverage of the method ensure that this strategy has strong potential for both biomarker discovery and lipidomic research.
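The normalization step described above, dividing each FA response by its matched "heavy" internal standard before statistical analysis, can be sketched as follows. All peak areas and the FA names are hypothetical illustration values, not data from the study:

```python
# Sketch of internal-standard (IS) normalization as used in the HPLC-MRM
# strategy: each fatty acid (FA) peak area is divided by the area of its
# deuterated "heavy" counterpart to correct for matrix effects.
# All peak-area values below are hypothetical.

def normalize_by_is(fa_areas, is_areas):
    """Return FA response ratios (light FA area / matched heavy-IS area)."""
    return {fa: area / is_areas[fa] for fa, area in fa_areas.items()}

sample_areas = {"FA 16:0": 1.2e6, "FA 18:1": 8.4e5}    # endogenous (light) FAs
heavy_is_areas = {"FA 16:0": 4.0e5, "FA 18:1": 4.2e5}  # spiked deuterated ISs

ratios = normalize_by_is(sample_areas, heavy_is_areas)
print(ratios)  # {'FA 16:0': 3.0, 'FA 18:1': 2.0}
```

Because each ratio is taken against a co-eluting heavy analog, ionization suppression affects numerator and denominator alike and largely cancels, which is why the abstract credits this step with the method's accuracy and reliability.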
Rapid quantification of plant-powdery mildew interactions by qPCR and conidiospore counts.
Weßling, Ralf; Panstruga, Ralph
2012-08-31
The powdery mildew disease represents a valuable patho-system to study the interaction between plant hosts and obligate biotrophic fungal pathogens. Numerous discoveries have been made on the basis of the quantitative evaluation of plant-powdery mildew interactions, especially in the context of hyper-susceptible and/or resistant plant mutants. However, the presently available methods to score the pathogenic success of powdery mildew fungi are laborious and thus not well suited for medium- to high-throughput analysis. Here we present two new protocols that allow the rapid quantitative assessment of powdery mildew disease development. One procedure depends on quantitative polymerase chain reaction (qPCR)-based evaluation of fungal biomass, while the other relies on the quantification of fungal conidiospores. We validated both techniques using the powdery mildew pathogen Golovinomyces orontii on a set of hyper-susceptible and resistant Arabidopsis thaliana mutants and found that they cover wide dynamic ranges of one to two (qPCR) and four to five (quantification of conidia) orders of magnitude, respectively. The two approaches yield reproducible results and are easy to perform without specialized equipment. The qPCR and spore count assays rapidly and reproducibly quantify powdery mildew pathogenesis. Our methods are performed at later stages of infection and discern mutant phenotypes accurately. The assays therefore complement currently used procedures of powdery mildew quantification and can overcome some of their limitations. In addition, they can easily be adapted to other plant-powdery mildew patho-systems.
León, Ileana R.; Schwämmle, Veit; Jensen, Ole N.; Sprenger, Richard R.
2013-01-01
The majority of mass spectrometry-based protein quantification studies uses peptide-centric analytical methods and thus strongly relies on efficient and unbiased protein digestion protocols for sample preparation. We present a novel objective approach to assess protein digestion efficiency using a combination of qualitative and quantitative liquid chromatography-tandem MS methods and statistical data analysis. In contrast to previous studies we employed both standard qualitative as well as data-independent quantitative workflows to systematically assess trypsin digestion efficiency and bias using mitochondrial protein fractions. We evaluated nine trypsin-based digestion protocols, based on standard in-solution or on spin filter-aided digestion, including new optimized protocols. We investigated various reagents for protein solubilization and denaturation (dodecyl sulfate, deoxycholate, urea), several trypsin digestion conditions (buffer, RapiGest, deoxycholate, urea), and two methods for removal of detergents before analysis of peptides (acid precipitation or phase separation with ethyl acetate). Our data-independent quantitative liquid chromatography-tandem MS workflow quantified over 3700 distinct peptides with 96% completeness between all protocols and replicates, with an average 40% protein sequence coverage and an average of 11 peptides identified per protein. Systematic quantitative and statistical analysis of physicochemical parameters demonstrated that deoxycholate-assisted in-solution digestion combined with phase transfer allows for efficient, unbiased generation and recovery of peptides from all protein classes, including membrane proteins. This deoxycholate-assisted protocol was also optimal for spin filter-aided digestions as compared with existing methods. PMID:23792921
Effect of Artificial Aging Protocols on Surface Gloss of Resin Composites
Rocha, Rafael Santos; Oliveira, Amanda Carvalho; Caneppele, Taciana Marco Ferraz; Bresciani, Eduardo
2017-01-01
The purpose of this study was to evaluate the effect of aging protocols on the surface gloss of composites. Cylindrical resin composite specimens (6 mm in diameter, 1 mm thick) were fabricated and divided into three groups (N = 60): microfilled (MiFi), nanohybrid (NaHy), and nanofilled (NaFi). Specimens were distributed into four aging subgroups: thermocycling (5°C to 55°C, 15,000 cycles); ethanol immersion (15 days); brushing (10,750 cycles); and light aging (216 h). Surface gloss readings (Novo-Curve, Rhopoint™, England) were performed at baseline (R0) and after each one-third of the aging protocols (R1 to R3). Data were submitted to one-way repeated measures ANOVA and Tukey's test (5%). Overall, surface gloss alterations were detected over time (p < 0.001). Thermocycling reduced surface gloss, except for NaHy. Ethanol immersion resulted in surface gloss reduction after R1 for MiFi and NaFi, while reduction after R1 and R2 was detected for NaHy. For brushing, gloss reduction was detected after R1 and R3 for all composites. For light aging, gloss was reduced after R1 and R2 for MiFi and NaFi, while a reduction only after R1 was detected for NaHy. The studied aging protocols affect surface gloss differently, in a material- and protocol-dependent manner. In general, surface gloss is reduced with aging. PMID:28611843
Spray drift reduction evaluations of spray nozzles using a standardized testing protocol
USDA-ARS?s Scientific Manuscript database
The development and testing of drift reduction technologies has come to the forefront of application research in the past few years in the United States. Drift reduction technologies (DRTs) can be spray nozzles, sprayer modifications, spray delivery assistance, spray property modifiers (adjuvants),...
Monti, Monia; Martini, Marta; Tedeschi, Rosemarie
2013-01-01
In this paper, the validation and implementation of a real-time PCR protocol based on ribosomal protein genes is described for sensitive and specific quantification of 'Candidatus (Ca.) Phytoplasma mali' (apple proliferation phytoplasma, APP) in insects. The method combines EvaGreen® dye as the detection chemistry with the specific primer pair rpAP15f-mod/rpAP15r3, which amplifies a 238-bp fragment of the ribosomal protein rplV (rpl22) gene of APP. Primer specificity was demonstrated by running, in the same real-time PCR, 'Ca. Phytoplasma mali' samples alongside phytoplasmas belonging to the same group (16SrX), namely 'Ca. Phytoplasma pyri' and 'Ca. Phytoplasma prunorum', as well as phytoplasmas from different groups, namely 'Ca. Phytoplasma phoenicium' (16SrIX) and the Flavescence dorée phytoplasma (16SrV). 'Ca. Phytoplasma mali' titre in insects was quantified using a specific approach that relates the concentration of the phytoplasma to insect 18S rDNA. Absolute quantities of APP and insect 18S rDNA were calculated using standard curves prepared from serial dilutions of plasmids containing rplV-rpsC and a portion of the 18S rDNA gene, respectively. APP titre in insects was expressed as genome units (GU) of phytoplasma per picogram (pg) of individual insect 18S rDNA. 'Ca. Phytoplasma mali' concentration in the examined samples (Cacopsylla melanoneura overwintered adults) ranged from 5.94 × 10² to 2.51 × 10⁴ GU/pg of insect 18S rDNA. Repeatability and reproducibility of the method were evaluated by calculating the coefficient of variation (CV%) of GU of phytoplasma and pg of 18S rDNA fragment for both assays: CVs below 14% and 9% (reproducibility) and below 10% and 11% (repeatability) were obtained for the phytoplasma and insect qPCR assays, respectively. Sensitivity of the method was also evaluated in comparison with the conventional 16S rDNA-based nested-PCR procedure.
The method described has been demonstrated to be reliable, sensitive and specific for the quantification of 'Ca. Phytoplasma mali' in insects. The ability to follow the trend of phytoplasma titre in the vectors will allow deeper investigation of the epidemiology of the disease. Copyright © 2013 Elsevier Ltd. All rights reserved.
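The absolute quantification scheme in this record, reading a Ct value against a plasmid standard curve for each target and then expressing phytoplasma load as GU per pg of insect 18S rDNA, can be sketched as below. The slope, intercept, and Ct values are hypothetical placeholders, not parameters from the study:

```python
# Sketch of qPCR absolute quantification from a standard curve.
# A dilution series of plasmid standards yields Ct = slope * log10(q) + intercept;
# inverting that line converts an unknown sample's Ct into a quantity q.
# Curve parameters and Ct values below are hypothetical.

def quantity_from_ct(ct, slope, intercept):
    """Invert the standard curve: q = 10 ** ((Ct - intercept) / slope)."""
    return 10 ** ((ct - intercept) / slope)

# A slope near -3.32 corresponds to ~100% amplification efficiency.
app_gu = quantity_from_ct(ct=24.0, slope=-3.32, intercept=38.0)     # phytoplasma genome units
insect_pg = quantity_from_ct(ct=18.0, slope=-3.35, intercept=30.0)  # pg of insect 18S rDNA

titre = app_gu / insect_pg  # GU per pg of 18S rDNA, the unit used in the paper
```

The division by the insect 18S rDNA quantity is the normalization step that lets titres be compared across individual insects of different sizes.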
Computation of Calcium Score with Dual Energy CT: A Phantom Study
Kumar, Vidhya; Min, James K.; He, Xin; Raman, Subha V.
2016-01-01
Dual energy computed tomography (DECT) improves material and tissue characterization compared to single energy CT (SECT); we sought to validate coronary calcium quantification in advancing cardiovascular DECT. In an anthropomorphic phantom, agreement between measurements was excellent, and Bland-Altman analysis demonstrated minimal bias. Compared to the known calcium mass for each phantom, calcium mass by DECT was highly accurate. Noncontrast DECT yields accurate calcium measures, and warrants consideration in cardiac protocols for additional tissue characterizations. PMID:27680414
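A minimal sketch of the Bland-Altman bias computation used in such agreement analyses; the paired calcium-mass values below are invented for illustration, not data from the phantom study:

```python
# Bland-Altman analysis: bias is the mean of the paired differences, and the
# 95% limits of agreement are bias +/- 1.96 * SD of the differences.
# Example values are hypothetical calcium masses in mg.

def bland_altman(measured, reference):
    diffs = [m - r for m, r in zip(measured, reference)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

dect_mass = [101.2, 98.7, 103.4, 99.9]    # hypothetical DECT-derived masses
known_mass = [100.0, 99.0, 103.0, 100.0]  # hypothetical known phantom masses
bias, limits = bland_altman(dect_mass, known_mass)
```

A bias near zero with narrow limits of agreement is what the abstract means by "minimal bias" and "excellent" agreement.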
Bozyigit, Deniz; Volk, Sebastian; Yarema, Olesya; Wood, Vanessa
2013-11-13
We implement three complementary techniques to quantify the number, energy, and electronic properties of trap states in nanocrystal (NC)-based devices. We demonstrate that, for a given technique, the ability to observe traps depends on the Fermi level position, highlighting the importance of a multitechnique approach that probes trap coupling to both the conduction and the valence bands. We then apply our protocol for characterizing traps to quantitatively explain the measured performances of PbS NC-based solar cells.
Santos, Sonia; de Moraes, Maria de Lourdes Leite; da Silva Souza Filho, Antonio Pedro; Rezende, Maria Olímpia Oliveira
2005-01-01
This article describes the assessment of the possible allelopathic potential of organic extracts obtained from leaves of Canavalia ensiformis under laboratory conditions. Furthermore, a systematic evaluation of these extracts was carried out using specific protocols developed in capillary electrophoresis (CE) to determine some groups of secondary metabolites. After the identification and quantification of compounds, the effects of these compounds on the germination of some common weeds, which are becoming a real problem in pastures in the state of Pará, Brazil, were investigated.
Quantification of light screening by anthocyanins in leaves of Berberis thunbergii.
Nichelmann, Lars; Bilger, Wolfgang
2017-12-01
Up to 40% of incident light was screened in red Berberis leaves in vivo by anthocyanins, resulting in up to a 40% reduction of light-limited photosynthesis. The biological function of anthocyanins in leaves has been much debated, but the hypothesis of a screening function is favored by most authors. For an evaluation of their function as photoprotective pigments, a quantification of their screening of the mesophyll is important. Here, chlorophyll fluorescence excitation of leaves of a red and a green variety of Berberis thunbergii was used to estimate the extent of screening by anthocyanins at 545 nm and over the whole photosynthetically active wavelength range. Growth at high light (430 µmol m⁻² s⁻¹) resulted in 90% screening at 545 nm, corresponding to 40-50% screening over the whole wavelength range, depending on the light source. The concomitant reduction of photosynthetic quantum yield was of the same magnitude as the calculated reduction of light reaching the chloroplasts. The induction of anthocyanins in the red variety also enhanced the epoxidation state of the violaxanthin cycle under growth conditions, indicating that red leaves suffered less from excessive irradiance. Pool sizes of violaxanthin cycle carotenoids indicated a shade acclimation of the light harvesting complexes in red leaves. The observed reduction of internal light in anthocyanic leaves has by necessity a photoprotective effect.
Rondelli, Rafaella Rezende; Dal Corso, Simone; Simões, Alexandre; Malaguti, Carla
2009-11-01
It has been well established that, in addition to the pulmonary involvement, COPD has systemic consequences that can lead to peripheral muscle dysfunction, with greater muscle fatigue, lower exercise tolerance and lower survival in these patients. In view of the negative repercussions of early muscle fatigue in COPD, the objective of this review was to discuss the principal findings in the literature on the metabolic and bioenergetic determinants of muscle fatigue, its functional repercussions, and the methods for its identification and quantification. The anatomical and functional substrate of greater muscle fatigue in COPD appears to include lower levels of high-energy phosphates, lower mitochondrial density, early lactacidemia, higher serum ammonia and reduced muscle perfusion. These alterations can be revealed by contraction failure, decreased firing rates of motor units and increased recruitment of motor units in a given activity, which can be functionally detected by a reduction in muscle strength, power and endurance. This review article also shows that various types of muscle contraction regimens and protocols have been used in order to detect muscle fatigue in this population. With this understanding, rehabilitation strategies can be developed in order to improve resistance to muscle fatigue in this population.
Vismarra, Alice; Barilli, Elena; Miceli, Maura; Mangia, Carlo; Bacci, Cristina; Brindani, Franco; Kramer, Laura
2017-01-24
Toxoplasmosis is a zoonotic disease caused by the protozoan Toxoplasma gondii. Ingestion of raw milk has been suggested as a risk for transmission to humans. Here the authors evaluated pre-treatment protocols for DNA extraction on T. gondii tachyzoite-spiked sheep milk with the aim of identifying the method that resulted in the most rapid and reliable polymerase chain reaction (PCR) positivity. This protocol was then used to analyse milk samples from sheep of three different farms in Southern Italy, including real-time PCR for DNA quantification and PCR-restriction fragment length polymorphism for genotyping. The pre-treatment protocol using ethylenediaminetetraacetic acid and Tris-HCl to remove casein gave the best results in the least amount of time compared to the others on spiked milk samples. One sample of the 21 collected from sheep farms was positive on one-step PCR and real-time PCR, and resulted in a Type I genotype at one locus (SAG3). Milk usually contains a low number of tachyzoites, and this can be a limiting factor for molecular identification. Our preliminary work evaluated a rapid, cost-effective and sensitive protocol to treat milk before DNA extraction. The results of the present study also confirm the possibility of T. gondii transmission through consumption of raw milk and its unpasteurised derivatives.
Kennedy, Jacob J.; Whiteaker, Jeffrey R.; Schoenherr, Regine M.; Yan, Ping; Allison, Kimberly; Shipley, Melissa; Lerch, Melissa; Hoofnagle, Andrew N.; Baird, Geoffrey Stuart; Paulovich, Amanda G.
2016-01-01
Despite a clinical, economic, and regulatory imperative to develop companion diagnostics, precious few new biomarkers have been successfully translated into clinical use, due in part to inadequate protein assay technologies to support large-scale testing of hundreds of candidate biomarkers in formalin-fixed paraffin embedded (FFPE) tissues. While the feasibility of using targeted, multiple reaction monitoring-mass spectrometry (MRM-MS) for quantitative analyses of FFPE tissues has been demonstrated, protocols have not been systematically optimized for robust quantification across a large number of analytes, nor has the performance of peptide immuno-MRM been evaluated. To address this gap, we used a test battery approach coupled to MRM-MS with the addition of stable isotope labeled standard peptides (targeting 512 analytes) to quantitatively evaluate the performance of three extraction protocols in combination with three trypsin digestion protocols (i.e. 9 processes). A process based on RapiGest buffer extraction and urea-based digestion was identified to enable similar quantitation results from FFPE and frozen tissues. Using the optimized protocols for MRM-based analysis of FFPE tissues, median precision was 11.4% (across 249 analytes). There was excellent correlation between measurements made on matched FFPE and frozen tissues, both for direct MRM analysis (R2 = 0.94) and immuno-MRM (R2 = 0.89). The optimized process enables highly reproducible, multiplex, standardizable, quantitative MRM in archival tissue specimens. PMID:27462933
Del-Valle-Soto, Carolina; Mex-Perera, Carlos; Orozco-Lugo, Aldo; Lara, Mauricio; Galván-Tejada, Giselle M; Olmedo, Oscar
2014-12-02
Wireless sensor networks deliver valuable information for long periods, so it is desirable to have optimum performance, reduced delays, low overhead, and reliable delivery of information. In this work, proposed metrics that influence energy consumption are used for a performance comparison among our proposed routing protocol, called Multi-Parent Hierarchical (MPH), and the well-known sensor network protocols Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Zigbee Tree Routing (ZTR), all of them working with the IEEE 802.15.4 MAC layer. Results show how some communication metrics affect performance, throughput, reliability and energy consumption. It can be concluded that MPH is an efficient protocol, since it reaches the best performance among the four protocols under evaluation, with a 19.3% reduction in packet retransmissions, a 26.9% decrease in overhead, and a 41.2% improvement in the capacity of the protocol to recover the topology from failures with respect to the AODV protocol. We implemented and tested MPH in a real network of 99 nodes during ten days and analyzed parameters such as number of hops, connectivity and delay, in order to validate our simulator and obtain reliable results. Moreover, an energy model of the CC2530 chip is proposed and used for simulations of the four aforementioned protocols, showing that MPH has a 15.9% reduction in energy consumption with respect to AODV, 13.7% versus DSR, and 5% against ZTR.
This seminar will present previous work on the Tool for the Reduction and Assessment of Chemical and other environmental Impacts (TRACI) along with interim research on the quantification of land use modifications for comprehensive impact assessment. Various research options ...
Fuller, Maria; Duplock, Stephen; Hein, Leanne K; Rigat, Brigitte A; Mahuran, Don J
2014-08-01
GM2 gangliosidosis is a group of inherited neurodegenerative disorders resulting primarily from the excessive accumulation of GM2 gangliosides (GM2) in neuronal cells. As biomarkers for categorising patients and monitoring the effectiveness of developing therapies are lacking for this group of disorders, we sought to develop methodology to quantify GM2 levels in more readily attainable patient samples such as plasma, leukocytes, and cultured skin fibroblasts. Following organic extraction, gangliosides were partitioned into the aqueous phase and isolated using C18 solid-phase extraction columns. Relative quantification of three species of GM2 was achieved using LC/ESI-MS/MS with d35GM1 18:1/18:0 as an internal standard. The assay was linear over the biological range, and all GM2 gangliosidosis patients were demarcated from controls by elevated GM2 in cultured skin fibroblast extracts. However, in leukocytes only some molecular species could be used for differentiation and in plasma only one was informative. A reduction in GM2 was easily detected in patient skin fibroblasts after a short treatment with media from normal cells enriched in secreted β-hexosaminidase. This method may show promise for measuring the effectiveness of experimental therapies for GM2 gangliosidosis by allowing quantification of a reduction in the primary storage burden. Copyright © 2014 Elsevier Inc. All rights reserved.
Watson, Paul Andrew; Watson, Luke Robert; Torress-Cook, Alfonso
2016-07-01
Environmental contamination has been associated with over half of methicillin-resistant Staphylococcus aureus (MRSA) outbreaks in hospitals. This study evaluates the impact of implementing a hospital-wide environmental and patient cleaning protocol on the rate of MRSA infection and the potential cost benefit of the intervention. A retrospective, pre-post interventional study design was used. The intervention comprised a combination of enhanced environmental cleaning of high-touch surfaces, daily washing of patients with benzalkonium chloride, and targeted isolation of patients with active infection. The rate of MRSA infection per 1000 patient days (PD) before the intervention (Steiros Algorithm®) was compared with the rate after it was implemented. A cost-benefit analysis based on the number of MRSA infections avoided was conducted. The MRSA rate decreased by 96%, from 3.04 per 1000 PD to 0.11 per 1000 PD (P < 0.0001). This reduction in MRSA infections avoided an estimated $1,655,143 in healthcare costs. Implementation of this hospital-wide protocol appears to be associated with a reduction in the rate of MRSA infection and therefore in associated healthcare costs.
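The headline figures in this record follow from simple rate arithmetic, sketched here. Only the 3.04 and 0.11 per-1000-PD rates come from the abstract; the infection and patient-day counts in the first example are invented for illustration:

```python
# Rate per 1000 patient days (PD) and percent reduction, as used in
# pre-post infection-control studies like this one.

def rate_per_1000_pd(infections, patient_days):
    """Infection rate expressed per 1000 patient days."""
    return 1000 * infections / patient_days

def percent_reduction(before, after):
    """Relative reduction of a rate, as a percentage of the baseline."""
    return 100 * (before - after) / before

# Hypothetical counts: 5 infections over 2000 patient days.
print(rate_per_1000_pd(5, 2000))             # 2.5

# Reduction from 3.04 to 0.11 per 1000 PD, the rates reported above:
print(round(percent_reduction(3.04, 0.11)))  # 96
```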
A Unique Approach to Dissemination of Evidence-Based Protocols: A Successful CAUTI Reduction Pilot.
Dols, Jean Dowling; White, Sondra K; Timmons, Amy L; Bush, Michelle; Tripp, Joanne; Childers, Amanda Kay; Mathers, Nicholas; Tobias, Maria M
2016-01-01
A unique approach to disseminate an evidence-based protocol for urinary catheter management was led by a staff-driven catheter-associated urinary tract infection (CAUTI) reduction team in one hospital. The nurse educators, faculty from a local university, and the facility's clinical nurse leader mentored the team. As an approach to reduce CAUTIs in the transplant care and intensive care units, the team developed an interdisciplinary CAUTI Education Fair, which provided a safe, nonthreatening environment to unlearn prior behaviors and show competency in new evidence-based ones.
Rapid Quantification of Abscisic Acid by GC-MS/MS for Studies of Abiotic Stress Response.
Verslues, Paul E
2017-01-01
Drought and low water potential induce large increases in the abscisic acid (ABA) content of plant tissue. This increased ABA content is essential to regulate downstream stress resistance responses; however, the mechanisms regulating ABA accumulation are incompletely known. Thus, the ability to accurately quantify ABA at high throughput and low cost is important for plant stress research. We have combined and modified several previously published protocols to establish a rapid ABA analysis protocol using gas chromatography-tandem mass spectrometry (GC-MS/MS). Derivatization of ABA is performed with (trimethylsilyl)-diazomethane rather than the harder-to-prepare diazomethane. Sensitivity of the analysis is sufficient that small samples of low-water-potential-treated Arabidopsis thaliana seedlings can be routinely analyzed in reverse genetic studies of putative stress regulators as well as studies of natural variation in ABA accumulation.
In Situ Quantification of [Re(CO)3]+ by Fluorescence Spectroscopy in Simulated Hanford Tank Waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
Branch, Shirmir D.; French, Amanda D.; Lines, Amanda M.
A pretreatment protocol is presented that allows for the quantitative conversion and subsequent in situ spectroscopic analysis of [Re(CO)3]+ species in simulated Hanford tank waste. The protocol encompasses adding a simulated waste sample containing the non-emissive [Re(CO)3]+ species to a developer solution that enables the rapid, quantitative conversion of the non-emissive species to a luminescent species which can then be detected spectroscopically. The [Re(CO)3]+ species concentration in an alkaline, simulated Hanford tank waste supernatant can be quantified by the standard addition method. In a test case, the [Re(CO)3]+ species was measured at a concentration of 38.9 µM, a difference of 2.01% from the actual concentration of 39.7 µM.
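The standard addition method named above amounts to fitting a line to signal versus spiked concentration and reading the unknown concentration off the magnitude of the x-intercept. A minimal sketch with hypothetical spike amounts and responses (not the study's data):

```python
import numpy as np

# Known spike amounts (µM) added to aliquots of the sample, and
# hypothetical fluorescence responses measured for each aliquot.
added = np.array([0.0, 10.0, 20.0, 30.0])
signal = np.array([80.0, 100.0, 120.0, 140.0])

# Least-squares line through (added, signal); the unknown sample
# concentration is |x-intercept| = intercept / slope.
slope, intercept = np.polyfit(added, signal, 1)
unknown = intercept / slope
print(round(unknown, 1))  # → 40.0 µM
```

Here the synthetic responses were generated from a true concentration of 40 µM, which the x-intercept recovers exactly.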
Chatterson, Leslie C; Leswick, David A; Fladeland, Derek A; Hunt, Megan M; Webster, Stephen; Lim, Hyun
2014-07-01
Custom bismuth-antimony shields were previously shown to reduce fetal dose by 53% on an 8DR (detector row) CT scanner without dynamic adaptive section collimation (DASC), automatic tube current modulation (ATCM) or adaptive statistical iterative reconstruction (ASiR). The purpose of this study is to compare the effective maternal and average fetal organ dose reduction both with and without bismuth-antimony shields on a 64DR CT scanner using DASC, ATCM and ASiR during maternal CTPA. A phantom with a gravid prosthesis and a bismuth-antimony shield were used. Thermoluminescent dosimeters (TLDs) measured fetal radiation dose. The average fetal organ dose and effective maternal dose were determined using 100 kVp, scanning from the lung apices to the diaphragm utilizing DASC, ATCM and ASiR on a 64DR CT scanner with and without shielding in the first and third trimesters. Isolated assessment of DASC was done by comparing a new 8DR scan without DASC to a similar scan on the 64DR with DASC. Average third trimester unshielded fetal dose was reduced from 0.22 mGy ± 0.02 on the 8DR to 0.13 mGy ± 0.03 with the conservative 64DR protocol that included 30% ASiR, DASC and ATCM (42% reduction, P<0.01). Use of a shield further reduced average third trimester fetal dose to 0.04 mGy ± 0.01 (69% reduction, P<0.01). The average fetal organ dose reduction attributable to DASC alone was modest (6% reduction from 0.17 mGy ± 0.02 to 0.16 mGy ± 0.02, P=0.014). First trimester fetal organ dose on the 8DR protocol was 0.07 mGy ± 0.03. This was reduced to 0.05 mGy ± 0.03 on the 64DR protocol without shielding (30% reduction, P=0.009). Shields further reduced this dose to below accurately detectable levels. Effective maternal dose was reduced from 4.0 mSv on the 8DR to 2.5 mSv on the 64DR scanner using the conservative protocol (38% dose reduction). ASiR, ATCM and DASC combined significantly reduce effective maternal and fetal organ dose during CTPA.
Shields continue to be an effective means of fetal dose reduction. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
This generic verification protocol provides a detailed method to conduct and report results from a verification test of pesticide application technologies that can be used to evaluate these technologies for their potential to reduce spray drift.
This protocol describes the Environmental Technology Verification Program's considerations and requirements for verification of emissions reduction provided by cleaner outdoor wood-fired hydronic heaters. Outdoor wood-burning units provide heat and hot water for homes and other b...
Sorenmo, Karin; Overley, B; Krick, E; Ferrara, T; LaBlanc, A; Shofer, F
2010-09-01
A dose-intensified/dose-dense chemotherapy protocol for canine lymphoma was designed and implemented at the Veterinary Hospital of the University of Pennsylvania. In this study, we describe the clinical characteristics, prognostic factors, efficacy and toxicity in 130 dogs treated with this protocol. The majority of the dogs had advanced stage disease (63.1% stage V) and sub-stage b (58.5%). The median time to progression (TTP) and lymphoma-specific survival were 219 and 323 days, respectively. These results are similar to previous less dose-intense protocols. Sub-stage was a significant negative prognostic factor for survival. The incidence of toxicity was high; 53.9 and 45% of the dogs needed dose reductions and treatment delays, respectively. Dogs that required dose reductions and treatment delays had significantly longer TTP and lymphoma-specific survival times. These results suggest that dose density is important, but likely relative, and needs to be adjusted according to the individual patient's toxicity for optimal outcome.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gill, K; Aldoohan, S; Collier, J
Purpose: To study image optimization and radiation dose reduction in a pediatric shunt CT scanning protocol through the use of different beam-hardening filters. Methods: A 64-slice CT scanner at OU Children's Hospital was used to evaluate CT image contrast-to-noise ratio (CNR) and to measure effective doses based on the concept of the CT dose index (CTDIvol) using the pediatric head shunt scanning protocol. The routine axial pediatric head shunt scanning protocol, optimized for the intrinsic x-ray tube filter, was used to evaluate CNR by acquiring images of the ACR-approved CT phantom and the radiation dose CT phantom, which was used to measure CTDIvol. These results were set as reference points to study and evaluate the effects of adding different filtering materials (i.e., tungsten, tantalum, titanium, nickel, and copper filters) to the existing filter on image quality and radiation dose. To ensure optimal image quality, the scanner's routine air calibration was run for each added filter. The image CNR was evaluated for different kVp settings and a wide range of mAs values using the above-mentioned beam-hardening filters. These scanning protocols were run under both axial and helical techniques. The CTDIvol and the effective dose were measured and calculated for all scanning protocols and added filtration, including the intrinsic x-ray tube filter. Results: The beam-hardening filters shape the energy spectrum, reducing the dose by 27% with no noticeable change in low-contrast detectability. Conclusion: Effective dose depends strongly on CTDIvol, which in turn depends strongly on beam-hardening filtration. A substantial reduction in effective dose is realized using beam-hardening filters as compared to the intrinsic filter alone. This phantom study showed that significant radiation dose reduction could be achieved in pediatric shunt CT scanning protocols without compromising the diagnostic value of image quality.
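The CNR figure of merit used above is commonly computed as the difference between the mean ROI and mean background signal divided by the background noise (its standard deviation); the HU samples below are hypothetical, not this study's phantom data:

```python
import numpy as np

def cnr(roi: np.ndarray, background: np.ndarray) -> float:
    """Contrast-to-noise ratio: |mean(ROI) - mean(background)|
    divided by the background noise (standard deviation)."""
    return abs(roi.mean() - background.mean()) / background.std()

# Hypothetical HU samples from a contrast insert and its background.
insert_hu = np.array([118.0, 122.0, 121.0, 119.0, 120.0])
background_hu = np.array([98.0, 102.0, 100.0, 99.0, 101.0])
print(round(cnr(insert_hu, background_hu), 1))  # → 14.1
```

Added filtration that lowers dose while leaving such CNR values essentially unchanged is the trade-off the phantom study is probing.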
Solving iTOUGH2 simulation and optimization problems using the PEST protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.A.; Zhang, Y.
2011-02-01
The PEST protocol has been implemented into the iTOUGH2 code, allowing the user to link any simulation program (with ASCII-based inputs and outputs) to iTOUGH2's sensitivity analysis, inverse modeling, and uncertainty quantification capabilities. These application models can be pre- or post-processors of the TOUGH2 non-isothermal multiphase flow and transport simulator, or programs that are unrelated to the TOUGH suite of codes. PEST-style template and instruction files are used, respectively, to pass input parameters updated by the iTOUGH2 optimization routines to the model, and to retrieve the model-calculated values that correspond to observable variables. We summarize the iTOUGH2 capabilities and demonstrate the flexibility added by the PEST protocol for the solution of a variety of simulation-optimization problems. In particular, the combination of loosely coupled and tightly integrated simulation and optimization routines provides both the flexibility and control needed to solve challenging inversion problems for the analysis of multiphase subsurface flow and transport systems.
Rota, Paola; Anastasia, Luigi; Allevi, Pietro
2015-05-07
The current analytical protocol used for the GC-MS determination of free or 1,7-lactonized natural sialic acids (Sias), as heptafluorobutyrates, overlooks several transformations. Using authentic reference standards and by combining GC-MS and NMR analyses, flaws in the analytical protocol were pinpointed and elucidated, thus establishing the scope and limitations of the method. It was demonstrated that (a) Sias 1,7-lactones, even if present in biological samples, decompose under the acidic hydrolysis conditions used for their release; (b) Sias 1,7-lactones are unpredicted artifacts, accidentally generated from their parent acids; (c) the N-acetyl group is quantitatively exchanged with that of the derivatizing perfluorinated anhydride; (d) the partial or complete failure of the Sias esterification-step with diazomethane leads to the incorrect quantification and structure attribution of all free Sias. While these findings prompt an urgent correction and improvement of the current analytical protocol, they could be instrumental for a critical revision of many incorrect claims reported in the literature.
Shanks, O.C.; Sivaganesan, M.; Peed, L.; Kelty, C.A.; Blackwood, A.D.; Greene, M.R.; Noble, R.T.; Bushon, R.N.; Stelzer, E.A.; Kinzelman, J.; Anan'Eva, T.; Sinigalliano, C.; Wanless, D.; Griffith, J.; Cao, Y.; Weisberg, S.; Harwood, V.J.; Staley, C.; Oshima, K.H.; Varma, M.; Haugland, R.A.
2012-01-01
The application of quantitative real-time PCR (qPCR) technologies for the rapid identification of fecal bacteria in environmental waters is being considered for use as a national water quality metric in the United States. The transition from research tool to a standardized protocol requires information on the reproducibility and sources of variation associated with qPCR methodology across laboratories. This study examines interlaboratory variability in the measurement of enterococci and Bacteroidales concentrations from standardized, spiked, and environmental sources of DNA using the Entero1a and GenBac3 qPCR methods, respectively. Comparisons are based on data generated from eight different research facilities. Special attention was placed on the influence of the DNA isolation step and effect of simplex and multiplex amplification approaches on interlaboratory variability. Results suggest that a crude lysate is sufficient for DNA isolation unless environmental samples contain substances that can inhibit qPCR amplification. No appreciable difference was observed between simplex and multiplex amplification approaches. Overall, interlaboratory variability levels remained low (<10% coefficient of variation) regardless of qPCR protocol. © 2011 American Chemical Society.
Phuong, Nam Ngoc; Zalouk-Vergnoux, Aurore; Kamari, Abderrahmane; Mouneyrac, Catherine; Amiard, Frederic; Poirier, Laurence; Lagarde, Fabienne
2018-03-01
Microplastics (MPs) constitute a major environmental issue due to the threat they pose to marine organisms and potentially to humans. The lack of a fast, standard protocol for MP isolation and identification from living organisms poses a challenge for the field. In this paper, an optimized protocol using potassium hydroxide 10% (KOH 10%; m/v) for digestion of mussel soft tissues (Mytilus edulis) followed by multi-step sedimentation has been developed. Elimination of more than 99.9% of organic and mineral matter was demonstrated by application to mussels sampled on the French Atlantic coast. The identification of MPs was performed by FTIR microscopy directly on the filter, and the whole analysis is compatible with routine use. Fourteen MPs of four different chemical natures were found and identified in 5 pools of 3 sampled mussels. Their size ranged from 30 to 200 μm. Further investigations are now needed to evaluate the potential risk of such particles to this marine bivalve species and other filter feeders.
LC-MS based analysis of endogenous steroid hormones in human hair.
Gao, Wei; Kirschbaum, Clemens; Grass, Juliane; Stalder, Tobias
2016-09-01
The quantification of endogenous steroid hormone concentrations in hair is increasingly used as a method for obtaining retrospective information on long-term integrated hormone exposure. Several different analytical procedures have been employed for hair steroid analysis, with liquid chromatography-mass spectrometry (LC-MS) being recognized as a particularly powerful analytical tool. Several methodological aspects affect the performance of LC-MS systems for hair steroid analysis, including sample preparation and pretreatment, steroid extraction, post-incubation purification, LC methodology, ionization techniques and MS specifications. Here, we critically review the differential value of such protocol variants for hair steroid hormones analysis, focusing on both analytical quality and practical feasibility issues. Our results show that, when methodological challenges are adequately addressed, LC-MS protocols can not only yield excellent sensitivity and specificity but are also characterized by relatively simple sample processing and short run times. This makes LC-MS based hair steroid protocols particularly suitable as a high-quality option for routine application in research contexts requiring the processing of larger numbers of samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
Wolfe, Marlene K; Gallandat, Karin; Daniels, Kyle; Desmarais, Anne Marie; Scheinman, Pamela; Lantagne, Daniele
2017-01-01
To prevent Ebola transmission, frequent handwashing is recommended in Ebola Treatment Units and communities. However, little is known about which handwashing protocol is most efficacious. We evaluated six handwashing protocols (soap and water, alcohol-based hand sanitizer (ABHS), and 0.05% sodium dichloroisocyanurate, high-test hypochlorite (HTH), and stabilized and non-stabilized sodium hypochlorite solutions) for: 1) the efficacy of handwashing in the removal and inactivation of non-pathogenic model organisms and 2) the persistence of organisms in rinse water. Model organisms E. coli and bacteriophage Phi6 were used to evaluate handwashing with and without organic load added to simulate bodily fluids. Hands were inoculated with test organisms, washed, and rinsed using a glove juice method to retrieve remaining organisms. Impact was estimated by comparing the log reduction in organisms after handwashing to the log reduction without handwashing. Rinse water was collected to test for persistence of organisms. Handwashing resulted in a 1.94-3.01 log reduction in E. coli concentration without, and 2.18-3.34 with, soil load; and a 2.44-3.06 log reduction in Phi6 without, and 2.71-3.69 with, soil load. HTH performed most consistently well, with significantly greater log reductions than other handwashing protocols in three models. However, the magnitude of the differences in handwashing efficacy was small, suggesting the protocols are similarly efficacious. Rinse water demonstrated a 0.28-4.77 log reduction in remaining E. coli without, and 0.21-4.49 with, soil load, and a 1.26-2.02 log reduction in Phi6 without, and 1.30-2.20 with, soil load. Chlorine resulted in significantly less persistence of E. coli in both conditions and of Phi6 without soil load in rinse water (p<0.001). Thus, chlorine-based methods may offer the benefit of reducing persistence in rinse water.
We recommend responders use the most practical handwashing method to ensure hand hygiene in Ebola contexts, considering the potential benefit of chlorine-based methods in rinse water persistence.
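The log-reduction metric used throughout the study is the base-10 logarithm of the ratio of organism concentrations before and after treatment. A minimal sketch (the CFU values are illustrative, not the study's counts):

```python
import math

def log_reduction(conc_before: float, conc_after: float) -> float:
    """Log10 reduction in recoverable organisms after a treatment."""
    return math.log10(conc_before / conc_after)

# Hypothetical example: 1e6 CFU on inoculated hands reduced to
# 1e3 CFU after washing is a 3-log (99.9%) reduction.
print(round(log_reduction(1e6, 1e3), 2))  # → 3.0
```

On this scale, the study's 1.94-3.01 range for E. coli corresponds to roughly 99% to 99.9% removal.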
Preparation, Imaging, and Quantification of Bacterial Surface Motility Assays
Morales-Soto, Nydia; Anyan, Morgen E.; Mattingly, Anne E.; Madukoma, Chinedu S.; Harvey, Cameron W.; Alber, Mark; Déziel, Eric; Kearns, Daniel B.; Shrout, Joshua D.
2015-01-01
Bacterial surface motility, such as swarming, is commonly examined in the laboratory using plate assays that necessitate specific concentrations of agar and sometimes inclusion of specific nutrients in the growth medium. The preparation of such explicit media and surface growth conditions serves to provide the favorable conditions that allow not just bacterial growth but coordinated motility of bacteria over these surfaces within thin liquid films. Reproducibility of swarm plate and other surface motility plate assays can be a major challenge. Especially for more “temperate swarmers” that exhibit motility only within agar ranges of 0.4%-0.8% (wt/vol), minor changes in protocol or laboratory environment can greatly influence swarm assay results. “Wettability”, or water content at the liquid-solid-air interface of these plate assays, is often a key variable to be controlled. An additional challenge in assessing swarming is how to quantify observed differences between any two (or more) experiments. Here we detail a versatile two-phase protocol to prepare and image swarm assays. We include guidelines to circumvent the challenges commonly associated with swarm assay media preparation and quantification of data from these assays. We specifically demonstrate our method using bacteria that express fluorescent or bioluminescent genetic reporters like green fluorescent protein (GFP), luciferase (lux operon), or cellular stains to enable time-lapse optical imaging. We further demonstrate the ability of our method to track competing swarming species in the same experiment. PMID:25938934
Florin, Cécile; Garraud, Olivier; Molliex, Serge; Tardy, Brigitte; Campos, Lydia; Scherrer, Carine
2016-06-01
The Innovance VWF:Ac assay (Siemens) assesses the binding capacity of von Willebrand factor (VWF) to a mutated recombinant platelet GPIb in the absence of ristocetin. Our study aimed to evaluate and validate, according to standard NF EN ISO 15189, the adaptation of the original protocol to the STA-R Evolution series analyser (Diagnostica Stago). We evaluated the performance in terms of imprecision and validated the additional parameters required in scope B as recommended by the SH GTA 04 guide (Cofrac). We compared the new assay with the reference assay, ristocetin cofactor activity (VWF:RCo) performed on the BCS-XP analyser, by retrospectively testing samples from 82 healthy normal subjects and 61 patients with von Willebrand disease (VWD). The new assay meets the imprecision objectives, with CVs around 4%. Except for a higher limit of quantification, the additional parameters evaluated in scope B were validated. The Innovance VWF:Ac assay detected all VWF deficits already detected by the VWF:RCo test on the BCS-XP. This adaptation to the STA-R analyser therefore has satisfactory analytical performance. Apart from the limit of quantification, the reagent can be used according to the recommendations specified in the original protocol adaptation. Its performance and compatibility with spot measurement allow the diagnosis and therapeutic monitoring of VWD according to current requirements and guidelines.
Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank
2015-01-01
Quantification of setup errors is vital to define appropriate setup margins that prevent geographical misses. The no-action-level (NAL) correction protocol reduces the systematic setup errors and, hence, the setup margins. The manual entry of the setup corrections in the record-and-verify software, however, increases the susceptibility of the NAL protocol to human error. Moreover, the impact of skin mobility on the anteroposterior patient setup reproducibility in whole-breast radiotherapy (WBRT) is unknown. In this study, we therefore investigated the potential of fixed vertical couch position-based patient setup in WBRT. The possibility of introducing a threshold for correction of the systematic setup errors was also explored. We measured the anteroposterior, mediolateral, and superior-inferior setup errors during fractions 1-12 and weekly thereafter with tangential angled single modality paired imaging. These setup data were used to simulate the residual setup errors of the NAL protocol, the fixed vertical couch position protocol, and the fixed-action-level protocol with different correction thresholds. Population statistics of the setup errors of 20 breast cancer patients and 20 breast cancer patients with additional regional lymph node (LN) irradiation were calculated to determine the setup margins of each off-line correction protocol. Our data showed the potential of the fixed vertical couch position protocol to restrict the systematic and random anteroposterior residual setup errors to 1.8 mm and 2.2 mm, respectively. Compared to the NAL protocol, a correction threshold of 2.5 mm reduced the frequency of mediolateral and superior-inferior setup corrections by 40% and 63%, respectively. The implementation of the correction threshold did not deteriorate the accuracy of the off-line setup correction compared to the NAL protocol.
The combination of the fixed vertical couch position protocol, for correction of the anteroposterior setup error, and the fixed‐action–level protocol with 2.5 mm correction threshold, for correction of the mediolateral and the superior–inferior setup errors, was proved to provide adequate and comparable patient setup accuracy in WBRT and WBRT with additional LN irradiation. PACS numbers: 87.53.Kn, 87.57.‐s
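The NAL scheme the simulations rest on can be sketched numerically: image the first k fractions, take the mean measured setup error as the systematic correction, and apply that single correction to all later fractions. The per-fraction error values below are hypothetical:

```python
import numpy as np

# Hypothetical per-fraction anteroposterior setup errors (mm):
# a 3 mm systematic component plus day-to-day random variation.
errors = 3.0 + np.array([1.2, -0.8, 0.1, 0.9, -1.5, 0.4, -0.3, 1.1])

k = 3                               # fractions imaged before correcting
correction = errors[:k].mean()      # NAL: mean error of the first k fractions
residual = errors[k:] - correction  # residual errors after the correction
print(round(residual.mean(), 2))    # → -0.05 (3 mm systematic error removed)
```

The remaining spread of `residual` reflects the random component that the off-line correction cannot remove, which is why the study reports systematic and random residuals separately.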
Rondel, Caroline; Marcato-Romain, Claire-Emmanuelle; Girbal-Neuhauser, Elisabeth
2013-05-15
A colorimetric assay based on the conventional anthrone reaction was investigated for specific quantification of uronic acids (UA) in the presence of neutral sugars and/or proteins. Scanning of glucose (Glu) and glucuronic acid (GlA) was performed after the reaction with anthrone and a double absorbance reading was made, at 560 nm and at 620 nm, in order to quantify the UA and neutral sugars separately. The assay was implemented on binary or ternary solutions containing Glu, GlA and bovine serum albumin (BSA) in order to validate its specificity towards sugars and check possible interference with other biochemical components such as proteins. Statistical analysis indicated that this assay provided correct quantification of uronic sugars from 50 to 400 mg/l and of neutral sugars from 20 to 80 mg/l, in the presence of proteins with concentrations reaching 600 mg/l. The proposed protocol can be of great interest for simultaneous determination of uronic and neutral sugars in complex biological samples. In particular, it can be used to correctly quantify the Extracellular Polymeric Substances (EPS) isolated from the biological matrix of many bacterial aggregates, even in the presence of EPS extractant such as EDTA. Copyright © 2013 Elsevier Ltd. All rights reserved.
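The double absorbance reading described above works because the two sugar classes respond differently at the two wavelengths, so two readings give two simultaneous Beer-Lambert equations in two unknown concentrations. A sketch with hypothetical response coefficients (the matrix values are illustrative, not the paper's calibration):

```python
import numpy as np

# Hypothetical response coefficients (absorbance units per mg/L)
# of uronic acid (UA) and neutral sugar at 560 nm and 620 nm.
K = np.array([[0.004, 0.009],   # 560 nm: [UA, neutral sugar]
              [0.002, 0.012]])  # 620 nm: [UA, neutral sugar]

def concentrations(a560: float, a620: float) -> np.ndarray:
    """Solve K @ [c_ua, c_neutral] = [A560, A620] for the two unknowns."""
    return np.linalg.solve(K, np.array([a560, a620]))

# Forward-simulate a mixture (200 mg/L UA, 50 mg/L neutral sugar),
# then recover both concentrations from the two absorbance readings.
a560, a620 = K @ np.array([200.0, 50.0])
print(concentrations(a560, a620))
```

Solving the 2x2 system recovers [200, 50] mg/L; in practice the coefficients would come from the glucose and glucuronic acid calibration scans the paper describes.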
Razavi, Morteza; Leigh Anderson, N; Pope, Matthew E; Yip, Richard; Pearson, Terry W
2016-09-25
Efficient robotic workflows for trypsin digestion of human plasma and subsequent antibody-mediated peptide enrichment (the SISCAPA method) were developed with the goal of improving assay precision and throughput for multiplexed protein biomarker quantification. First, an 'addition only' tryptic digestion protocol was simplified from classical methods, eliminating the need for sample cleanup, while improving reproducibility, scalability and cost. Second, methods were developed to allow multiplexed enrichment and quantification of peptide surrogates of protein biomarkers representing a very broad range of concentrations and widely different molecular masses in human plasma. The total workflow coefficients of variation (including the 3 sequential steps of digestion, SISCAPA peptide enrichment and mass spectrometric analysis) for 5 proteotypic peptides measured in 6 replicates of each of 6 different samples repeated over 6 days averaged 3.4% within-run and 4.3% across all runs. An experiment to identify sources of variation in the workflow demonstrated that MRM measurement and tryptic digestion steps each had average CVs of ∼2.7%. Because of the high purity of the peptide analytes enriched by antibody capture, the liquid chromatography step is minimized and in some cases eliminated altogether, enabling throughput levels consistent with requirements of large biomarker and clinical studies. Copyright © 2016 Elsevier B.V. All rights reserved.
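The workflow CVs quoted above are coefficients of variation: the standard deviation of replicate measurements expressed as a percentage of their mean. A minimal sketch with hypothetical replicate peak areas (not the study's data):

```python
import statistics

def cv_percent(values) -> float:
    """Coefficient of variation: sample SD as a percent of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate peak areas for one peptide within a run.
replicates = [10250, 9980, 10110, 10340, 9890, 10070]
print(round(cv_percent(replicates), 1))  # → 1.7
```

A within-run CV near 2-3%, as reported for the MRM and digestion steps, means replicate signals scatter by only a few percent around their mean.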
Leipert, Jan; Treitz, Christian; Leippe, Matthias; Tholey, Andreas
2017-12-01
N-acyl homoserine lactones (AHL) are small signal molecules involved in the quorum sensing of many gram-negative bacteria, and play an important role in biofilm formation and pathogenesis. Present analytical methods for identification and quantification of AHL require time-consuming sample preparation steps and are hampered by the lack of appropriate standards. Aiming at a fast and straightforward method for AHL analytics, we investigated the applicability of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Suitable MALDI matrices, including crystalline and ionic liquid matrices, were tested, and the fragmentation of different AHL in collision-induced dissociation MS/MS was studied, providing information about characteristic marker fragment ions. Employing small-scale synthesis protocols, we established a versatile and cost-efficient procedure for fast generation of isotope-labeled AHL standards, which can be used without extensive purification and yielded accurate standard curves. Quantitative analysis was possible in the low picomolar range, with lower limits of quantification ranging from 1 to 5 pmol for different AHL. The developed methodology was successfully applied in a quantitative MALDI MS analysis of low-volume culture supernatants of Pseudomonas aeruginosa.
Gao, Yuan; Zhang, Haijun; Zou, Lili; Wu, Ping; Yu, Zhengkun; Lu, Xianbo; Chen, Jiping
2016-04-05
Analysis of short-chain chlorinated paraffins (SCCPs) is extremely difficult because of their complex composition, comprising thousands of isomers and homologues. A novel analytical method, deuterodechlorination combined with high-resolution gas chromatography-high-resolution mass spectrometry (HRGC-HRMS), was developed. In this protocol, SCCPs are deuterodechlorinated with LiAlD4, and the resulting deuterated n-alkanes of different chain lengths can be readily distinguished from one another on the basis of their retention times and fragment masses ([M]+) by HRGC-HRMS. Internal standard quantification of individual SCCP congeners was achieved, in which branched C10-CPs and branched C12-CPs served as the extraction and reaction internal standards, respectively. The target SCCP concentrations determined by this method deviated by a maximum factor of 1.26, and the relative standard deviations for quantification of total SCCPs were within 10%. After method validation, this method was applied to determine the congener compositions of SCCPs in commercial chlorinated paraffins and in environmental and biota samples. Low-chlorinated SCCP congeners (Cl1-4) were found to account for 32.4%-62.4% of the total SCCPs. The present method provides an attractive perspective for further studies on the toxicological and environmental characteristics of SCCPs.
Kim, Hyun Gi; Lee, Young Han; Choi, Jin-Young; Park, Mi-Suk; Kim, Myeong-Jin; Kim, Ki Whang
2015-01-01
Purpose To investigate the optimal blending percentage of adaptive statistical iterative reconstruction (ASIR) in a reduced radiation dose while preserving a degree of image quality and texture that is similar to that of standard-dose computed tomography (CT). Materials and Methods The CT performance phantom was scanned with standard and dose reduction protocols including reduced mAs or kVp. Image quality parameters including noise, spatial, and low-contrast resolution, as well as image texture, were quantitatively evaluated after applying various blending percentages of ASIR. The optimal blending percentage of ASIR that preserved image quality and texture compared to standard dose CT was investigated in each radiation dose reduction protocol. Results As the percentage of ASIR increased, noise and spatial-resolution decreased, whereas low-contrast resolution increased. In the texture analysis, an increasing percentage of ASIR resulted in an increase of angular second moment, inverse difference moment, and correlation and in a decrease of contrast and entropy. The 20% and 40% dose reduction protocols with 20% and 40% ASIR blending, respectively, resulted in an optimal quality of images with preservation of the image texture. Conclusion Blending the 40% ASIR to the 40% reduced tube-current product can maximize radiation dose reduction and preserve adequate image quality and texture. PMID:25510772
Naccarato, Attilio; Elliani, Rosangela; Cavaliere, Brunella; Sindona, Giovanni; Tagarelli, Antonio
2018-05-11
Polyamines are aliphatic amines with low molecular weight that are widely recognized as one of the most important cancer biomarkers for early diagnosis and treatment. The goal of the work herein presented is the development of a rapid and simple method for the quantification of free polyamines (i.e., putrescine, cadaverine, spermidine, spermine) and N-monoacetylated polyamines (i.e., N1-acetylspermidine, N8-acetylspermidine, and N1-acetylspermine) in human urine. A preliminary derivatization with propyl chloroformate combined with the use of solid phase microextraction (SPME) allowed for an easy and automatable protocol involving minimal sample handling and no consumption of organic solvents. The affinity of the analytes toward five commercial SPME coatings was evaluated in univariate mode, and the best result in terms of analyte extraction was achieved using the divinylbenzene/carboxen/polydimethylsiloxane fiber. The variables affecting the performance of SPME analysis were optimized by the multivariate approach of experimental design and, in particular, using a central composite design (CCD). The optimal working conditions in terms of response values are the following: extraction temperature 40 °C, extraction time of 15 min, and no addition of NaCl. Analyses were carried out by gas chromatography-triple quadrupole mass spectrometry (GC-QqQ-MS) in selected reaction monitoring (SRM) acquisition mode. The developed method was validated according to the guidelines issued by the Food and Drug Administration (FDA). The satisfactory performances reached in terms of linearity, sensitivity (LOQs between 0.01 and 0.1 μg/mL), matrix effect (68-121%), accuracy, and precision (inter-day values between -24% and +16% and in the range 3.3-28.4%, respectively) make the proposed protocol suitable for the quantification of these important biomarkers in urine samples. Copyright © 2018 Elsevier B.V. All rights reserved.
de Cremoux, P; Bieche, I; Tran-Perennou, C; Vignaud, S; Boudou, E; Asselain, B; Lidereau, R; Magdelénat, H; Becette, V; Sigal-Zafrani, B; Spyratos, F
2004-09-01
Quantitative reverse transcription-polymerase chain reaction (RT-PCR) used to detect minor changes in specific mRNA concentrations may be associated with poor reproducibility. Stringent quality control is therefore essential at each step of the protocol, including the PCR procedure. We performed inter-laboratory quality control of quantitative PCR between two independent laboratories, using in-house RT-PCR assays on a series of hormone-related target genes in a retrospective consecutive series of 79 breast tumors. Total RNA was reverse transcribed in a single center. Calibration curves were performed for five target genes (estrogen receptor (ER)alpha, ERbeta, progesterone receptor (PR), CYP19 (aromatase) and Ki 67) and for two reference genes (human acidic ribosomal phosphoprotein PO (RPLPO) and TATA box-binding protein (TBP)). Amplification efficiencies of the calibrator were determined for each run and used to calculate mRNA expression. Correlation coefficients were evaluated for each target and each reference gene. A good correlation was observed for all target and reference genes in both centers using their own protocols and kits (P < 0.0001). The correlation coefficients ranged from 0.90 to 0.98 for the various target genes in the two centers. A good correlation was observed between the level of expression of the ERalpha and the PR transcripts (P < 0.001). A weak inverse correlation was observed in both centers between ERalpha and ERbeta levels, but only when TBP was the reference gene. No other correlation was observed with other parameters. Real-time PCR assays allow convenient quantification of target mRNA transcripts and quantification of target-derived nucleic acids in clinical specimens. This study addresses the importance of inter-laboratory quality controls for the use of a panel of real-time PCR assays devoted to clinical samples and protocols and to ensure their appropriate accuracy. 
This can also facilitate exchanges and multicenter comparison of data.
Contractor, Kaiyumars B; Kenny, Laura M; Coombes, Charles R; Turkheimer, Federico E; Aboagye, Eric O; Rosso, Lula
2012-03-24
Quantification of kinetic parameters of positron emission tomography (PET) imaging agents normally requires collecting arterial blood samples, which is inconvenient for patients and difficult to implement in routine clinical practice. The aim of this study was to investigate whether a population-based input function (POP-IF) reliant on only a few individual discrete samples allows accurate estimates of tumour proliferation using [18F]fluorothymidine (FLT). Thirty-six historical FLT-PET datasets with concurrent arterial sampling were available for this study. A population average of baseline-scan blood data was constructed using leave-one-out cross-validation for each scan and used in conjunction with individual blood samples. Three limited sampling protocols were investigated, including, respectively, only seven (POP-IF7), five (POP-IF5) and three (POP-IF3) discrete samples of the historical dataset. Additionally, using the three-point protocol, we derived a POP-IF3M, the only input function not corrected for the fraction of radiolabelled metabolites present in blood. The kinetic parameter for net FLT retention at steady state, Ki, was derived using the modified Patlak plot and compared with the original full arterial set for validation. Small percentage differences in the area under the curve between all the POP-IFs and the full arterial sampling IF were found over 60 min (4.2%-5.7%), while there were, as expected, larger differences in peak position and peak height. A high correlation between Ki values calculated using the original arterial input function and all the population-derived IFs was observed (R2 = 0.85-0.98). The population-based input showed good intra-subject reproducibility of Ki values (R2 = 0.81-0.94) and good correlation (R2 = 0.60-0.85) with Ki-67. Input functions generated using these simplified protocols over a scan duration of 60 min estimate net PET-FLT retention with reasonable accuracy.
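The modified Patlak analysis mentioned above reduces to a linear regression. As a minimal sketch (with invented synthetic curves, not the study's data), the net retention Ki is the slope of Ct(t)/Cp(t) plotted against the cumulative integral of Cp divided by Cp(t):

```python
# Minimal sketch of Patlak graphical analysis on hypothetical data:
# Ki is the slope of Ct/Cp vs. integral(Cp)/Cp.

def patlak_ki(times, c_tissue, c_plasma):
    """Estimate net uptake Ki by ordinary least squares on Patlak axes."""
    integral, xs, ys = 0.0, [], []
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        integral += 0.5 * (c_plasma[i] + c_plasma[i - 1]) * dt  # trapezoid rule
        xs.append(integral / c_plasma[i])
        ys.append(c_tissue[i] / c_plasma[i])
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Synthetic example: constant plasma input, true Ki = 0.05/min, V0 = 0.2
times = [float(t) for t in range(0, 61, 5)]
cp = [10.0] * len(times)
ct = [10.0 * (0.05 * t + 0.2) for t in times]
print(round(patlak_ki(times, ct, cp), 3))  # → 0.05
```

With a constant plasma input the Patlak axes form an exact straight line, so the known slope is recovered; real data would add noise and require restricting the fit to the linear (steady-state) portion of the curve.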
Interactions of Mean Climate Change and Climate Variability on Food Security Extremes
NASA Technical Reports Server (NTRS)
Ruane, Alexander C.; McDermid, Sonali; Mavromatis, Theodoros; Hudson, Nicholas; Morales, Monica; Simmons, John; Prabodha, Agalawatte; Ahmad, Ashfaq; Ahmad, Shakeel; Ahuja, Laj R.
2015-01-01
Recognizing that climate change will affect agricultural systems both through mean changes and through shifts in climate variability and associated extreme events, we present preliminary analyses of climate impacts from a network of 1137 crop modeling sites contributed to the AgMIP Coordinated Climate-Crop Modeling Project (C3MP). At each site, sensitivity tests were run according to a common protocol, which enables the fitting of crop model emulators across a range of carbon dioxide, temperature, and water (CTW) changes. C3MP can elucidate several aspects of these changes and quantify crop responses across a wide diversity of farming systems. Here we test the hypothesis that climate change and variability interact in three main ways. First, mean climate changes can affect yields across an entire time period. Second, extreme events (when they do occur) may be more sensitive to climate changes than a year with normal climate. Third, mean climate changes can alter the likelihood of climate extremes, leading to more frequent seasons with anomalies outside of the expected conditions for which management was designed. In this way, shifts in climate variability can result in an increase or reduction of mean yield, as extreme climate events tend to have lower yield than years with normal climate. C3MP maize simulations across 126 farms reveal a clear indication and quantification (as response functions) of mean climate impacts on mean yield and clearly show that mean climate changes will directly affect the variability of yield. Yield reductions from increased climate variability are not as clear, as crop models tend to be less sensitive to dangers on the cool and wet extremes of climate variability, likely underestimating losses from water-logging, floods, and frosts.
New Protocol Based on UHPLC-MS/MS for Quantitation of Metabolites in Xylose-Fermenting Yeasts
NASA Astrophysics Data System (ADS)
Campos, Christiane Gonçalves; Veras, Henrique César Teixeira; de Aquino Ribeiro, José Antônio; Costa, Patrícia Pinto Kalil Gonçalves; Araújo, Katiúscia Pereira; Rodrigues, Clenilson Martins; de Almeida, João Ricardo Moreira; Abdelnur, Patrícia Verardi
2017-12-01
Xylose fermentation is a bottleneck in second-generation ethanol production. As such, a comprehensive understanding of xylose metabolism in naturally xylose-fermenting yeasts is essential for prospection and construction of recombinant yeast strains. The objective of the current study was to establish a reliable metabolomics protocol for quantification of key metabolites of xylose catabolism pathways in yeast, and to apply this protocol to Spathaspora arborariae. Ultra-high performance liquid chromatography coupled to tandem mass spectrometry (UHPLC-MS/MS) was used to quantify metabolites, and afterwards, sample preparation was optimized to examine yeast intracellular metabolites. S. arborariae was cultivated using xylose as a carbon source under aerobic and oxygen-limited conditions. Ion pair chromatography (IPC) and hydrophilic interaction liquid chromatography-tandem mass spectrometry (HILIC-MS/MS) were shown to efficiently quantify 14 and 5 metabolites, respectively, in a more rapid chromatographic protocol than previously described. Thirteen and eleven metabolites were quantified in S. arborariae under aerobic and oxygen-limited conditions, respectively. This targeted metabolomics protocol is shown here to quantify a total of 19 metabolites, including sugars, phosphates, coenzymes, monosaccharides, and alcohols, from xylose catabolism pathways (glycolysis, pentose phosphate pathway, and tricarboxylic acid cycle) in yeast. Furthermore, to our knowledge, this is the first time that intracellular metabolites have been quantified in S. arborariae after xylose consumption. The results indicated that fine control of oxygen levels during fermentation is necessary to optimize ethanol production by S. arborariae. The protocol presented here may be applied to other yeast species and could support yeast genetic engineering to improve second generation ethanol production.
SU-G-IeP2-10: Lens Dose Reduction by Patient Position Modification During Neck CT Exams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, E; Lee, C; Butman, J
Purpose: Irradiation of the lens during a neck CT may increase a patient’s risk of developing cataracts later in life. Radiologists and technologists at the National Institutes of Health Clinical Center (NIHCC) have developed new CT imaging protocols that include a reduction in scan range and modifying neck positioning using a head tilt. This study will evaluate the efficacy of this protocol in the reduction of lens dose. Methods: We retrieved CT images of five male patients who had two sets of CT images: before and after the implementation of the new protocol. The lens doses before the new protocol were calculated using an in-house CT dose calculator, National Cancer Institute dosimetry system for CT (NCICT), where computational human phantoms with no head tilt are included. We also calculated the lens dose for the patient CT conducted after the new protocol by using an adult male computational phantom with the neck position deformed to match the angle of the head tilt. We also calculated the doses to other radiosensitive organs including the globes of the eye, brain, pituitary gland and salivary glands before and after head tilt. Results: Our dose calculations demonstrated that modifying neck position reduced dose to the lens by 89% on average (range: 86–96%). Globe, brain, pituitary and salivary gland doses also decreased by an average of 65% (51–95%), 38% (−8–66%), 34% (−43–84%) and 14% (13–14%), respectively. The new protocol resulted in a nearly ten-fold decrease in lens dose. Conclusion: The use of a head tilt and scan range reduction is an easy and effective method to reduce radiation exposure to the lens and other radiosensitive organs, while still allowing for the inclusion of critical neck structures in the CT image. We are expanding our study to a total of 10 males and 10 females.
Large differences in land use emission quantifications implied by definition discrepancies
NASA Astrophysics Data System (ADS)
Stocker, B. D.; Joos, F.
2015-03-01
The quantification of CO2 emissions from anthropogenic land use and land use change (eLUC) is essential to understand the drivers of the atmospheric CO2 increase and to inform climate change mitigation policy. Reported values in synthesis reports are commonly derived from different approaches (observation-driven bookkeeping and process-modelling) but recent work has emphasized that inconsistencies between methods may imply substantial differences in eLUC estimates. However, a consistent quantification is lacking and no concise modelling protocol for the separation of primary and secondary components of eLUC has been established. Here, we review the conceptual differences of eLUC quantification methods and apply an Earth System Model to demonstrate that what is claimed to represent total eLUC differs by up to ~20% when quantified from ESM vs. offline vegetation models. Under a future business-as-usual scenario, differences tend to increase further due to slowing land conversion rates and an increasing impact of altered environmental conditions on land-atmosphere fluxes. We establish how coupled Earth System Models may be applied to separate component fluxes of eLUC arising from the replacement of potential C sinks/sources and the land use feedback and show that secondary fluxes derived from offline vegetation models are conceptually and quantitatively not identical to either, nor their sum. Therefore, we argue that synthesis studies and global carbon budget accountings should resort to the "least common denominator" of different methods, following the bookkeeping approach where only primary land use emissions are quantified under the assumption of constant environmental boundary conditions.
Quantifying differences in land use emission estimates implied by definition discrepancies
NASA Astrophysics Data System (ADS)
Stocker, B. D.; Joos, F.
2015-11-01
The quantification of CO2 emissions from anthropogenic land use and land use change (eLUC) is essential to understand the drivers of the atmospheric CO2 increase and to inform climate change mitigation policy. Reported values in synthesis reports are commonly derived from different approaches (observation-driven bookkeeping and process-modelling) but recent work has emphasized that inconsistencies between methods may imply substantial differences in eLUC estimates. However, a consistent quantification is lacking and no concise modelling protocol for the separation of primary and secondary components of eLUC has been established. Here, we review differences of eLUC quantification methods and apply an Earth System Model (ESM) of Intermediate Complexity to quantify them. We find that the magnitude of effects due to merely conceptual differences between ESM and offline vegetation model-based quantifications is ~ 20 % for today. Under a future business-as-usual scenario, differences tend to increase further due to slowing land conversion rates and an increasing impact of altered environmental conditions on land-atmosphere fluxes. We establish how coupled Earth System Models may be applied to separate secondary component fluxes of eLUC arising from the replacement of potential C sinks/sources and the land use feedback and show that secondary fluxes derived from offline vegetation models are conceptually and quantitatively not identical to either, nor their sum. Therefore, we argue that synthesis studies should resort to the "least common denominator" of different methods, following the bookkeeping approach where only primary land use emissions are quantified under the assumption of constant environmental boundary conditions.
Quantification of upper limb position sense using an exoskeleton and a virtual reality display.
Deblock-Bellamy, Anne; Batcho, Charles Sebiyo; Mercier, Catherine; Blanchette, Andreanne K
2018-03-16
Proprioceptive sense plays a significant role in the generation and correction of skilled movements and, consequently, in most activities of daily living. We developed a new proprioception assessment protocol that enables the quantification of elbow position sense without using the opposite arm, involving active movement of the evaluated limb, or relying on working memory. The aims of this descriptive study were to validate this assessment protocol by quantifying the elbow position sense of healthy adults, before using it in individuals who sustained a stroke, and to investigate its test-retest reliability. Elbow joint position sense was quantified using a robotic device and a virtual reality system. Two assessments were performed, by the same evaluator, with a one-week interval. While the participant's arms and hands were occluded from vision, the exoskeleton passively moved the dominant arm from an initial to a target position. Then, a virtual arm representation was projected on a screen placed over the participant's arm. This virtual representation and the real arm were not perfectly superimposed, however. Participants had to indicate verbally the relative position of their arm (more flexed or more extended; two-alternative forced choice paradigm) compared to the virtual representation. Each participant completed a total of 136 trials, distributed in three phases. The angular differences between the participant's arm and the virtual representation ranged from 1° to 27° and changed pseudo-randomly across trials. No feedback about results was provided to the participants during the task. A discrimination threshold was statistically extracted from a sigmoid curve fit representing the relationship between the angular difference and the percentage of successful trials. Test-retest reliability was evaluated with 3 different complementary approaches, i.e. a Bland-Altman analysis, an intraclass correlation coefficient (ICC) and a standard error of measurement (SEm). 
Thirty participants (mean age 24.6 years; 17 males, 25 right-handed) completed both assessments. The mean discrimination thresholds were 7.0 ± 2.4 (mean ± standard deviation) and 5.9 ± 2.1 degrees for the first and second assessment sessions, respectively. This small difference between assessments (-1.1 ± 2.2 degrees) was nevertheless significant. The assessment protocol was characterized by fair to good test-retest reliability (ICC = 0.47). This study demonstrated the potential of this assessment protocol to objectively quantify elbow position sense in healthy individuals. Future studies will validate this protocol in older adults and in individuals who sustained a stroke.
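The threshold extraction described above uses a sigmoid curve fit; as a simplified stand-in (invented data, and linear interpolation rather than a full sigmoid fit), the sketch below finds the angular difference at a hypothetical 75% correct criterion for the two-alternative forced-choice task:

```python
# Simplified discrimination-threshold extraction: interpolate the angle
# at which % correct crosses a 75% criterion (a common 2AFC choice).
# Success rates below are invented, not the study's data.

def discrimination_threshold(angles, pct_correct, criterion=75.0):
    """Angle at which performance first crosses the criterion."""
    pairs = list(zip(angles, pct_correct))
    for (a0, p0), (a1, p1) in zip(pairs, pairs[1:]):
        if p0 <= criterion <= p1:
            return a0 + (criterion - p0) * (a1 - a0) / (p1 - p0)
    return None  # criterion never reached

angles = [1, 3, 5, 9, 15, 27]    # degrees, spanning the 1-27 degree range
pct = [50, 55, 65, 85, 95, 100]  # invented success rates
print(discrimination_threshold(angles, pct))  # → 7.0
```

A proper sigmoid (e.g. logistic) fit, as in the study, uses all trials at once and is more robust to noisy per-angle success rates than pairwise interpolation.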
Solid-phase reductive amination for glycomic analysis.
Jiang, Kuan; Zhu, He; Xiao, Cong; Liu, Ding; Edmunds, Garrett; Wen, Liuqing; Ma, Cheng; Li, Jing; Wang, Peng George
2017-04-15
Reductive amination is an indispensable method for glycomic analysis, as it greatly facilitates glycan characterization and quantification by coupling functional tags to the reducing ends of glycans. However, the traditional in-solution derivatization approach for the preparation of reductively aminated glycans is quite tedious and time-consuming. Here, a simpler and more efficient strategy termed solid-phase reductive amination was investigated. The general concept underlying this new approach is to streamline glycan extraction, derivatization, and purification on non-porous graphitized carbon sorbents. Neutral and sialylated standard glycans were utilized to test the feasibility of the solid-phase method. As a result, almost complete labeling of those glycans with four common labels (aniline, 2-aminobenzamide (2-AB), 2-aminobenzoic acid (2-AA) and 2-amino-N-(2-aminoethyl)-benzamide (AEAB)) was obtained, and negligible desialylation occurred during sample preparation. The labeled glycans derived from glycoproteins showed excellent reproducibility in high performance liquid chromatography (HPLC) and matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) analysis. Direct comparisons based on fluorescent absorbance and relative quantification using isotopic labeling demonstrated that the solid-phase strategy enabled a 20-30% increase in sample recovery. In short, the solid-phase strategy is simple, reproducible, efficient, and sensitive for glycan analysis. This method was also successfully applied to N-glycan profiling of HEK 293 cells with MALDI-TOF MS, showing its attractive application in the high-throughput analysis of the mammalian glycome. Published by Elsevier B.V.
Laboratory grown subaerial biofilms on granite: application to the study of bioreceptivity.
Vázquez-Nion, Daniel; Silva, Benita; Troiano, Federica; Prieto, Beatriz
2017-01-01
Simulated environmental colonisation of granite was induced under laboratory conditions in order to develop an experimental protocol for studying bioreceptivity. The experimental set-up proved suitable for producing subaerial biofilms by inoculating granite blocks with planktonic multi-species phototrophic cultures derived from natural biofilms. The ability of four different cultures to form biofilms was monitored over a three-month growth period via colour measurements, quantification of photosynthetic pigments and EPS, and CLSM observations. One of the cultures under study, which comprised several taxa including Bryophyta, Charophyta, Chlorophyta and Cyanobacteria, was particularly suitable as an inoculum, mainly because of its microbial richness, its rapid adaptability to the substratum and its high colonisation capacity. The use of this culture as an inoculum in the proposed experimental set-up to produce subaerial biofilms under laboratory conditions will contribute to standardising the protocols involved, thus enabling more objective assessment of the bioreceptivity of granite in further experiments.
High-throughput real-time quantitative reverse transcription PCR.
Bookout, Angie L; Cummins, Carolyn L; Mangelsdorf, David J; Pesola, Jean M; Kramer, Martha F
2006-02-01
Extensive detail on the application of the real-time quantitative polymerase chain reaction (QPCR) for the analysis of gene expression is provided in this unit. The protocols are designed for high-throughput, 384-well-format instruments, such as the Applied Biosystems 7900HT, but may be modified to suit any real-time PCR instrument. QPCR primer and probe design and validation are discussed, and three relative quantitation methods are described: the standard curve method, the efficiency-corrected DeltaCt method, and the comparative cycle time, or DeltaDeltaCt method. In addition, a method is provided for absolute quantification of RNA in unknown samples. RNA standards are subjected to RT-PCR in the same manner as the experimental samples, thus accounting for the reaction efficiencies of both procedures. This protocol describes the production and quantitation of synthetic RNA molecules for real-time and non-real-time RT-PCR applications.
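The comparative cycle time (DeltaDeltaCt) method described in the unit amounts to two subtractions and an exponentiation. A minimal sketch, assuming roughly 100% amplification efficiency and invented Ct values:

```python
# Comparative cycle time (DeltaDeltaCt) relative quantification, assuming
# ~100% amplification efficiency for target and reference genes.
# Ct values below are invented for illustration.

def ddct_fold_change(ct_target_test, ct_ref_test,
                     ct_target_ctrl, ct_ref_ctrl):
    d_ct_test = ct_target_test - ct_ref_test  # normalize to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    ddct = d_ct_test - d_ct_ctrl              # relative to calibrator sample
    return 2 ** (-ddct)                       # fold change vs. calibrator

# Target crosses threshold one cycle earlier in the test sample:
print(ddct_fold_change(24.0, 18.0, 25.0, 18.0))  # → 2.0
```

When efficiencies deviate from 100%, the efficiency-corrected DeltaCt method mentioned above replaces the fixed base of 2 with the measured per-gene amplification efficiencies.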
The global gridded crop model intercomparison: Data and modeling protocols for Phase 1 (v1.0)
Elliott, J.; Müller, C.; Deryng, D.; ...
2015-02-11
We present protocols and input data for Phase 1 of the Global Gridded Crop Model Intercomparison, a project of the Agricultural Model Intercomparison and Improvement Project (AgMIP). The project consists of global simulations of yields, phenologies, and many land-surface fluxes by 12-15 modeling groups for many crops, climate forcing data sets, and scenarios over the historical period from 1948 to 2012. The primary outcomes of the project include (1) a detailed comparison of the major differences and similarities among global models commonly used for large-scale climate impact assessment, (2) an evaluation of model and ensemble hindcasting skill, (3) quantification of key uncertainties from climate input data, model choice, and other sources, and (4) a multi-model analysis of the agricultural impacts of large-scale climate extremes from the historical record.
NASA Astrophysics Data System (ADS)
Romo-Cárdenas, Gerardo S.; Sanchez-Lopez, Juan D.; Nieto-Hipolito, Juan I.; Cosio-León, María.; Luque-Morales, Priscy; Vazquez-Briseno, Mabel
2016-09-01
The importance of constant glucose monitoring to maintain regular control in diabetes patients is well established. Several medical studies acknowledge the need to explore alternatives to the traditional digital glucometer, given the pain and discomfort associated with that technique, which can lead to compromised control of the disease. Several efforts based on the application of IR spectroscopy have been made, with favorable yet not conclusive results. It is therefore necessary to apply a comprehensive and interdisciplinary study based on the biochemical and optical properties of glucose in the human body, in order to understand the interaction between this substance, its surroundings, and IR light. This study proposes a comprehensive approach to the interaction of glucose and IR light, considering and combining important biochemical, physiological, and optical properties, as well as machine learning techniques for data analysis. The results of this work will help define the right parameters for an optical glucose quantification system and protocol.
González Paredes, Rosa María; García Pinto, Carmelo; Pérez Pavón, José Luis; Moreno Cordero, Bernardo
2016-09-01
A new method based on headspace programmed-temperature vaporizer gas chromatography with mass spectrometry has been developed and validated for the determination of amino acids (alanine, sarcosine, ethylglycine, valine, leucine, and proline) in human urine samples. Derivatization with ethyl chloroformate was employed successfully to determine the amino acids. The derivatization reaction conditions as well as the variables of the headspace sampling were optimized. The existence of a matrix effect was checked and the analytical characteristics of the method were determined. The limits of detection were 0.15-2.89 mg/L, and the limits of quantification were 0.46-8.67 mg/L. The instrumental repeatability was 1.6-11.5%. The quantification of the amino acids in six urine samples from healthy subjects was performed with the developed method using the one-point standard-additions protocol, with norleucine as the internal standard. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
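The one-point standard-additions calculation mentioned above can be sketched as follows; the numbers are invented, and the small dilution caused by spiking is neglected:

```python
# One-point standard additions (dilution from spiking neglected for
# simplicity): the unknown concentration follows from one unspiked and
# one spiked measurement of the same sample. Numbers are invented.

def standard_additions(signal_sample, signal_spiked, added_conc):
    """C_unknown = A_sample * C_added / (A_spiked - A_sample)."""
    return signal_sample * added_conc / (signal_spiked - signal_sample)

# Spiking 2.0 mg/L raises the signal from 120 to 200 arbitrary units:
print(standard_additions(120.0, 200.0, 2.0))  # → 3.0 mg/L
```

Standard additions compensates for matrix effects because the calibration is built inside the sample matrix itself, which is why it pairs naturally with the matrix-effect check reported above.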
Quantification of sensory and food quality: the R-index analysis.
Lee, Hye-Seong; van Hout, Danielle
2009-08-01
The accurate quantification of sensory difference/similarity between foods, as well as consumer acceptance/preference and concepts, is greatly needed to optimize and maintain food quality. The R-Index is one class of measures of the degree of difference/similarity, and was originally developed for sensory difference tests for food quality control, product development, and so on. The index is based on signal detection theory and is free of the response bias that can invalidate difference testing protocols, including categorization and same-different and A-Not A tests. It is also a nonparametric analysis, making no assumptions about sensory distributions, and is simple to compute and understand. The R-Index is also flexible in its application. Methods based on R-Index analysis have been used as detection and sensory difference tests, as simple alternatives to hedonic scaling, and for the measurement of consumer concepts. This review indicates the various computational strategies for the R-Index and its practical applications to consumer and sensory measurements in food science.
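As an illustration of how the R-Index is computed (ratings invented, not from the review), it equals the probability that a randomly chosen rating of one product out-ranks a rating of the other, with ties counted as half:

```python
# The R-Index as the probability that a randomly chosen "signal" rating
# out-ranks a "noise" rating, ties counted half; numerically this is the
# Mann-Whitney U statistic scaled to [0, 1]. Ratings below are invented.

def r_index(signal_ratings, noise_ratings):
    wins = ties = 0
    for s in signal_ratings:
        for n in noise_ratings:
            if s > n:
                wins += 1
            elif s == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(signal_ratings) * len(noise_ratings))

# Higher category = "more different"; 1.0 would be perfect discrimination.
print(r_index([4, 4, 3, 2], [3, 2, 2, 1]))  # → 0.84375
```

An R-Index of 0.5 indicates no detectable difference. Because only rank order matters, the measure requires no assumptions about the underlying sensory distributions, consistent with the nonparametric character described above.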
2015-09-01
Fragment of a data-dictionary table: Mac8, the Medium Access Control (MAC, Ethernet) address observed as the destination for outgoing packets; subsessionid8, a zero-based index of ...; cts_rowid, an integer index of row; cts_deid, the device (instrument) identifier where the observation took place; cts_collpt, the collection point or logical observation point on ... Subject terms: tactical networks, data reduction, high-performance computing, data analysis, big data.
Mannheim, Julia G; Schmid, Andreas M; Pichler, Bernd J
2017-12-01
Non-invasive in vivo positron emission tomography (PET) provides high detection sensitivity in the nano- to picomolar range and, in addition to other advantages, the possibility to absolutely quantify the acquired data. The present study focuses on the comparison of transmission data acquired with an X-ray computed tomography (CT) scanner or a Co-57 source for the Inveon small animal PET scanner (Siemens Healthcare, Knoxville, TN, USA), and determines their influence on quantification accuracy and the partial volume effect (PVE). A special focus was the impact of the performed calibration on quantification accuracy. Phantom measurements were carried out to determine the quantification accuracy, the influence of the object size on the quantification, and the PVE for different sphere sizes, along the field of view and for different contrast ratios. An influence of the emission activity on the Co-57 transmission measurements was discovered (deviations of measured from true activity of up to 24.06%), whereas no influence of the emission activity on the CT attenuation correction was identified (deviations of measured from true activity <3%). The quantification accuracy was substantially influenced by the applied calibration factor and by the object size. The PVE demonstrated a dependency on the sphere size, the position within the field of view, the reconstruction and correction algorithms, and the count statistics. Depending on the reconstruction algorithm, only ∼30-40% of the true activity within a small sphere could be resolved. The iterative 3D reconstruction algorithms showed substantially higher recovery values than the analytical and 2D iterative reconstruction algorithms (up to 70.46% and 80.82% recovery for the smallest and largest sphere, respectively, using iterative 3D reconstruction algorithms). The transmission measurement (CT or Co-57 source) used to correct for attenuation did not severely influence the PVE. 
The analysis of the quantification accuracy and the PVE revealed an influence of the object size, the reconstruction algorithm and the applied corrections. Particularly, the influence of the emission activity during the transmission measurement performed with a Co-57 source must be considered. To receive comparable results, also among different scanner configurations, standardization of the acquisition (imaging parameters, as well as applied reconstruction and correction protocols) is necessary.
Profiling Abscisic Acid-Induced Changes in Fatty Acid Composition in Mosses.
Shinde, Suhas; Devaiah, Shivakumar; Kilaru, Aruna
2017-01-01
In plants, change in lipid composition is a common response to various abiotic stresses. Lipid constituents of bryophytes are of particular interest as they differ from those of flowering plants. Unlike higher plants, mosses have a high content of very long-chain polyunsaturated fatty acids. Such lipids are considered to be important for the survival of nonvascular plants. Here, using abscisic acid (ABA)-induced changes in lipid composition in Physcomitrella patens as an example, a protocol for total lipid extraction and quantification by gas chromatography (GC) coupled with a flame ionization detector (FID) is described.
Jacomin, Anne-Claire; Nezis, Ioannis P
2016-01-01
Oogenesis is a fundamental biological process for the transmission of genetic information to the next generations. Drosophila has proven to be a valuable model for elucidating the molecular and cellular mechanisms involved in this developmental process. It has been shown that autophagy participates in the maturation of the egg chamber. Here we provide a protocol for monitoring and quantification of the autophagic process in the Drosophila germline cells using the fluorescent reporters mCherry-DmAtg8a and GFP-mCherry-DmAtg8a.
Identification of Phosphorylated Proteins on a Global Scale.
Iliuk, Anton
2018-05-31
Liquid chromatography (LC) coupled with tandem mass spectrometry (MS/MS) has enabled researchers to analyze complex biological samples with unprecedented depth. It facilitates the identification and quantification of modifications within thousands of proteins in a single large-scale proteomic experiment. Analysis of phosphorylation, one of the most common and important post-translational modifications, has particularly benefited from such progress in the field. Here, detailed protocols are provided for a few well-regarded, common sample preparation methods for an effective phosphoproteomic experiment. © 2018 by John Wiley & Sons, Inc. Copyright © 2018 John Wiley & Sons, Inc.
Multi-fidelity methods for uncertainty quantification in transport problems
NASA Astrophysics Data System (ADS)
Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.
2016-12-01
We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, the re-scaled Multi Level Monte Carlo (rMLMC) method. rMLMC is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods, and discuss the advantages of each approach.
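The MLMC idea referenced above can be sketched with a toy model (the level hierarchy here is an invented Taylor-truncation stand-in for coarse/fine simulations, not the authors' flow solver): the finest-level expectation is written as a telescoping sum, with most samples drawn at cheap coarse levels:

```python
import random

# Toy Multi-Level Monte Carlo (MLMC) estimator using the telescoping sum
# E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}]. The "model" is a stand-in for
# coarse/fine simulations: Q_l(z) truncates exp(z) after l + 2 Taylor
# terms, so corrections shrink with level and need fewer samples.

def q(level, z):
    term, total = 1.0, 1.0
    for k in range(1, level + 3):
        term *= z / k
        total += term
    return total

def mlmc_estimate(samples_per_level, rng):
    estimate = 0.0
    for level, n in enumerate(samples_per_level):
        acc = 0.0
        for _ in range(n):
            z = rng.gauss(0.0, 1.0)
            # Level 0 samples Q_0 directly; higher levels sample corrections
            acc += q(level, z) if level == 0 else q(level, z) - q(level - 1, z)
        estimate += acc / n
    return estimate

# Many cheap coarse samples, progressively fewer fine-level corrections;
# the result approximates E[exp(Z)] = e^0.5 ~ 1.65 for Z ~ N(0, 1).
print(mlmc_estimate([100000, 10000, 1000], random.Random(42)))
```

The rMLMC variant proposed by the authors additionally re-scales statistics across resolutions; that step is not reproduced in this sketch.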
Quantifying Demyelination in NK venom treated nerve using its electric circuit model
NASA Astrophysics Data System (ADS)
Das, H. K.; Das, D.; Doley, R.; Sahu, P. P.
2016-03-01
Reduction of myelin in peripheral nerve causes critical demyelinating diseases such as chronic inflammatory demyelinating polyneuropathy, Guillain-Barre syndrome, etc. Clinical monitoring of these diseases requires rapid and non-invasive quantification of demyelination. Here we have developed formulation of nerve conduction velocity (NCV) in terms of demyelination considering electric circuit model of a nerve having bundle of axons for its quantification from NCV measurements. This approach has been validated and demonstrated with toad nerve model treated with crude Naja kaouthia (NK) venom and also shows the effect of Phospholipase A2 and three finger neurotoxin from NK-venom on peripheral nerve. This opens future scope for non-invasive clinical measurement of demyelination.
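The qualitative link between myelin loss and conduction velocity can be illustrated with a toy RC internode model (hypothetical parameter values and scaling; this is not the authors' circuit formulation): losing myelin raises membrane capacitance, so each internode charges more slowly.

```python
# Toy cable-model sketch: each internode is treated as an RC segment.
# Assumption (hypothetical): lost myelin scales capacitance as
# C = C0 / (1 - demyelination_fraction).
def conduction_velocity(internode_len_m, r_axial_ohm, c_myelinated_f, demyelination):
    """demyelination in [0, 1): fraction of myelin lost."""
    c_eff = c_myelinated_f / (1.0 - demyelination)
    tau = r_axial_ohm * c_eff       # charging time per internode (s)
    return internode_len_m / tau    # m/s

v_healthy = conduction_velocity(1e-3, 1e5, 2e-10, 0.0)
v_demyel = conduction_velocity(1e-3, 1e5, 2e-10, 0.5)
print(v_healthy, v_demyel)  # velocity halves when half the myelin is lost
```

Inverting such a relation is the essence of quantifying demyelination from NCV measurements: measure velocity, solve for the demyelination fraction.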
Gyawali, P
2018-02-01
Raw and partially treated wastewater has been widely used to meet the global water demand. The presence of viable helminth ova and larvae in wastewater raises significant public health concern, especially when the water is used for agriculture and aquaculture. Depending on the prevalence of helminth infections in communities, up to 1.0 × 10^3 ova/larvae can be present per litre of wastewater and per 4 g (dry weight) of sludge. Multi-barrier approaches including pathogen reduction, risk assessment, and exposure reduction have been suggested by health regulators to minimise the potential health risk. However, lacking a sensitive and specific method for the quantitative detection of viable helminth ova in wastewater, an accurate health risk assessment is difficult to achieve. As a result, helminth infections remain difficult to eliminate from communities despite two decades of global effort (mass drug administration). Molecular methods can be more sensitive and specific than the currently adopted culture-based and vital-stain methods; they do, however, require further thorough investigation of their ability to accurately quantify viable helminth ova/larvae in wastewater and sludge samples. Understanding the different cell stages and corresponding gene copy numbers is pivotal for accurate quantification of helminth ova/larvae in wastewater samples. Identifying specific markers, including proteins, lipids, and metabolites, using a multiomics approach could enable cheap, rapid, sensitive, specific, and point-of-care detection tools for helminth ova and larvae in wastewater.
Image-guided spatial localization of heterogeneous compartments for magnetic resonance
An, Li; Shen, Jun
2015-01-01
Purpose: Image-guided SPectral Localization Achieved by Sensitivity Heterogeneity (SPLASH) allows rapid measurement of signals from irregularly shaped anatomical compartments without using phase encoding gradients. Here, the authors propose a novel method to address the issue of heterogeneous signal distribution within the localized compartments. Methods: Each compartment was subdivided into multiple subcompartments, and their spectra were solved for by Tikhonov regularization, which enforces smoothness within each compartment. The spectrum of a given compartment was generated by combining the spectra of its subcompartments. The proposed method was first tested using Monte Carlo simulations and then applied to reconstructing in vivo spectra from irregularly shaped ischemic stroke and normal tissue compartments. Results: Monte Carlo simulations demonstrate that the proposed regularized SPLASH method significantly reduces localization and metabolite quantification errors. In vivo results show that the intracompartment regularization yields a ∼40% reduction of error in metabolite quantification. Conclusions: The proposed method significantly reduces localization and metabolite quantification errors caused by intracompartment heterogeneous signal distribution. PMID:26328977
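The smoothing idea can be illustrated with a minimal Tikhonov-regularized least-squares sketch. The two-unknown system, the difference penalty, and the data below are hypothetical stand-ins for sub-compartment signals, not the SPLASH reconstruction itself:

```python
# Solve min ||A x - b||^2 + lam * (x1 - x2)^2 for x = (x1, x2).
# The penalty L = [1, -1] enforces similar values in neighboring
# sub-compartments, mimicking intracompartment smoothness.
def tikhonov_2x2(A, b, lam):
    # Normal equations: (A^T A + lam L^T L) x = A^T b
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(2)]
           for i in range(2)]
    ata[0][0] += lam; ata[0][1] -= lam
    ata[1][0] -= lam; ata[1][1] += lam
    atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x1 = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    x2 = (atb[1] * ata[0][0] - atb[0] * ata[1][0]) / det
    return x1, x2

# Noisy measurements of two sub-compartments (hypothetical values)
A = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
b = [1.05, 1.15, 1.10]
print(tikhonov_2x2(A, b, 0.0))   # unregularized least squares
print(tikhonov_2x2(A, b, 10.0))  # heavy smoothing pulls x1, x2 together
```

Increasing lam trades fidelity to the data for smoothness, which is exactly the regularization parameter choice a method like SPLASH must make.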
Singh, Upasana; Akhtar, Shamim; Mishra, Abhishek; Sarkar, Dhiman
2011-02-01
A microplate-based rapid, inexpensive and robust technique was developed using the tetrazolium salt 2,3-bis[2-methyloxy-4-nitro-5-sulfophenyl]-2H-tetrazolium-5-carboxanilide (XTT) and menadione to determine the viability of Mycobacterium tuberculosis, Mycobacterium bovis BCG and Mycobacterium smegmatis bacilli in microplate format. In general, XTT reduction is an extremely slow process, taking almost 24 h to produce a detectable signal. Menadione drastically accelerates this reduction, to an almost equal extent within a few minutes, in a dose-dependent manner. The reduction of XTT is directly proportional to the cell concentration in the presence of menadione. The standardized protocol used 200 μM XTT and 60 μM menadione in 250 μl of cell suspension grown under either aerobic or anaerobic conditions. The cell suspensions of M. bovis BCG and M. tuberculosis were incubated for 40 min before reading the optical density at 470 nm, whereas M. smegmatis was incubated for 20 min. The Signal/Noise (S/N) ratios obtained with this protocol were 5.4, 6.4 and 9.4 for M. bovis BCG, M. tuberculosis and M. smegmatis, respectively. The calculated Z' factors were >0.8 for all three mycobacteria, indicating the robustness of the XTT Reduction Menadione Assay (XRMA) for rapid screening of inhibitors. The assay protocol was validated by applying 10 standard anti-tubercular agents to M. tuberculosis, M. bovis BCG and M. smegmatis. The Minimum Inhibitory Concentration (MIC) values were similar to values reported from Colony Forming Unit (CFU) and resazurin microplate (REMA) assays. Altogether, XRMA provides a novel anti-tubercular screening protocol that could be useful in high-throughput screening programs against different physiological stages of the bacilli. Copyright © 2010 Elsevier B.V. All rights reserved.
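The Z' factor quoted above follows the standard screening-assay formula Z' = 1 - 3(sd_pos + sd_neg)/|mean_pos - mean_neg|; values above 0.5 are conventionally taken to indicate a screen-ready assay. A minimal sketch, with hypothetical OD470 readings (not data from the paper):

```python
from statistics import mean, stdev

def z_prime(pos, neg):
    """Z' factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    return 1.0 - 3.0 * (stdev(pos) + stdev(neg)) / abs(mean(pos) - mean(neg))

# Hypothetical OD470 readings: viable cells (signal) vs heat-killed (background)
signal = [0.94, 0.90, 0.92, 0.95, 0.91]
background = [0.10, 0.11, 0.09, 0.10, 0.12]
print(round(z_prime(signal, background), 2))
```

A tight separation band between controls, as in this toy data, is what pushes Z' above the 0.8 reported for XRMA.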
A randomized trial of immunotherapy for persistent genital warts
Jardine, David; Lu, Jieqiang; Pang, James; Palmer, Cheryn; Tu, Quanmei; Chuah, John; Frazer, Ian H.
2012-01-01
Aim To determine whether immunotherapy with HPV6 L1 virus-like particles (VLPs) without adjuvant (VLP immunotherapy) reduces recurrence of genital warts following destructive therapy. Trial design A randomized, placebo-controlled, blinded study of treatment of recurrent genital warts amenable to destructive therapy, conducted independently in Australia and China. Methods Patients received conventional destructive therapy of all evident warts together with intramuscular administration of 1, 5 or 25 µg of VLP immunotherapy, or of placebo immunotherapy (0.9% NaCl), at week 0 and week 4. The primary outcome, assessed at week 8, was recurrence of visible warts. Results Of 33 protocol-compliant Brisbane recipients of placebo immunotherapy, 11 were disease free at two months, and a further 9 demonstrated a reduction of >50% in total wart area. Wart area reduction following destructive treatment correlated with prior duration of disease. Among 102 protocol-compliant Brisbane recipients of VLP immunotherapy, disease reduction was significantly greater than among the placebo immunotherapy recipients (50% ± s.e.m. 7%) for subjects receiving 5 µg or 25 µg of VLP immunotherapy/dose (71% ± s.e.m. 7%), but not for those receiving 1 µg VLP immunotherapy/dose (42% ± 7%). Of 52 protocol-compliant placebo immunotherapy recipients in Wenzhou, 37 were disease free at two months, and a further 8 had >50% disease reduction. Prior disease duration was much shorter in Wenzhou subjects (8.1 ± 1.1 mo) than in Brisbane subjects (53.7 ± 5.5 mo). No significant reduction in mean wart area was observed for the 168 Wenzhou protocol-compliant subjects who also received VLP immunotherapy.
Conclusions This study confirms the findings in a previous open label trial that administration of VLP immunotherapy may assist in clearance of recurrent genital warts in patients for whom destructive therapy is unsuccessful and that unsuccessful destructive therapy is more common with increasing prior disease duration. PMID:22634446
Fast, efficient error reconciliation for quantum cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buttler, W.T.; Lamoreaux, S.K.; Torgerson, J.R.
2003-05-01
We describe an error-reconciliation protocol, which we call Winnow, based on the exchange of parity and Hamming's 'syndrome' for N-bit subunits of a large dataset. The Winnow protocol was developed in the context of quantum-key distribution and offers significant advantages and net higher efficiency compared to other widely used protocols within the quantum cryptography community. A detailed mathematical analysis of the Winnow protocol is presented in the context of practical implementations of quantum-key distribution; in particular, the information overhead required for secure implementation is one of the most important criteria in the evaluation of a particular error-reconciliation protocol. The increase in efficiency for the Winnow protocol is largely due to the reduction in authenticated public communication required for its implementation.
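The syndrome exchange at the heart of such reconciliation can be sketched with the (7,4) Hamming code: each party computes the syndrome of its block, and the XOR of the two syndromes locates a single-bit discrepancy. The bit values below are illustrative, and a real protocol would additionally discard bits to limit information leaked to an eavesdropper:

```python
# Parity-check matrix of the (7,4) Hamming code; column i (1-based)
# is the binary representation of i, so the syndrome names the error position.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(bits):
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

alice = [1, 0, 1, 1, 0, 0, 1]
bob = alice.copy()
bob[4] ^= 1  # the channel flips one bit of Bob's copy (0-based index 4)

s = [a ^ b for a, b in zip(syndrome(alice), syndrome(bob))]
# Reading the syndrome bits as a binary number gives the 1-based error position
pos = s[0] + 2 * s[1] + 4 * s[2]
print(pos)  # 5 -> Bob flips bit index pos - 1 to reconcile
```

Only the 3-bit syndrome crosses the authenticated public channel, which is why syndrome-based rounds can use less communication than interactive binary search.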
Multicast Delayed Authentication For Streaming Synchrophasor Data in the Smart Grid
Câmara, Sérgio; Anand, Dhananjay; Pillitteri, Victoria; Carmo, Luiz
2017-01-01
Multicast authentication of synchrophasor data is challenging due to the design requirements of Smart Grid monitoring systems, such as low security overhead, tolerance of lossy networks, time-criticality and high data rates. In this work, we propose inf-TESLA, Infinite Timed Efficient Stream Loss-tolerant Authentication, a multicast delayed authentication protocol for communication links used to stream synchrophasor data for wide area control of electric power networks. Our approach is based on the authentication protocol TESLA but is augmented to accommodate high-frequency transmissions of unbounded length. The inf-TESLA protocol utilizes the Dual Offset Key Chains mechanism to reduce the authentication delay and computational cost associated with key chain commitment. We provide a description of the mechanism using two different modes for disclosing keys and demonstrate its security against a man-in-the-middle attack attempt. We compare our approach against the TESLA protocol in a 2-day simulation scenario, showing reductions of 15.82% and 47.29% in computational cost for the sender and receiver, respectively, and a cumulative reduction in the communication overhead. PMID:28736582
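The TESLA-style one-way key chain that such protocols commit to can be sketched as follows (a generic hash-chain illustration, not the Dual Offset Key Chains mechanism itself): keys are derived by repeated hashing, the last element is published as the commitment, and keys are later disclosed in reverse order so receivers can verify each one by hashing forward.

```python
import hashlib

def make_chain(seed: bytes, length: int):
    """Build a one-way key chain: chain[i+1] = SHA-256(chain[i])."""
    chain = [seed]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain  # chain[-1] is the public commitment; disclose backwards

chain = make_chain(b"secret-seed", 5)
commitment = chain[-1]

def verify(key: bytes, steps: int) -> bool:
    """Check a disclosed key by hashing it 'steps' times up to the commitment."""
    for _ in range(steps):
        key = hashlib.sha256(key).digest()
    return key == commitment

print(verify(chain[1], 3))  # True: chain[1] hashed 3 times reaches chain[4]
```

The one-way property is what makes delayed disclosure safe: a receiver can verify any disclosed key against the commitment, but cannot derive keys that have not yet been released.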
Plani, Natascha; Becker, Piet; van Aswegen, Helena
2013-04-01
Many patients who have suffered traumatic injuries require mechanical ventilation (MV). Weaning is the transition from ventilatory support to spontaneous breathing. The purpose of this study was to determine whether the use of a nurse- and physiotherapist-driven protocol to wean and extubate patients from MV resulted in decreased MV days and intensive care unit (ICU) length of stay (LOS). A prospective cohort of 28 patients (Phase I), weaned according to the protocol developed for the Union Hospital Trauma Unit, was matched retrospectively with a historical cohort of 28 patients (Phase II), weaned according to physician preference. Pairs in the two groups were matched for gender, age, and type and severity of injury. For mean MV days, the groups did not differ statistically significantly (p = 0.3; 14.4 days vs. 16.3 days), although the reduction in MV is clinically significant in view of the complications of additional MV days. The difference of 0.2 days for ICU LOS was not statistically significant (p = 0.9; 20.8 days vs. 21.0 days), demonstrating that a reduction in MV days may not result in a reduction of ICU LOS. The rate of re-intubation was similar between the groups (Phase I = 3/28 vs. Phase II = 4/24). The use of a weaning and extubation protocol led by nursing staff and physiotherapists resulted in a clinically significant reduction in MV time, reducing the risk of ventilator-associated complications. The role of physiotherapists and nursing staff in weaning and extubation from MV could be greatly expanded in South African ICUs.
Gonadotrophin-releasing hormone antagonists for assisted conception.
Al-Inany, H G; Abou-Setta, A M; Aboulghar, M
2006-07-19
Gonadotrophin-releasing hormone antagonists produce immediate suppression of gonadotrophin secretion; hence, they can be given after starting gonadotrophin administration. This has resulted in a dramatic reduction in the duration of the treatment cycle. Two different regimens have been described. The multiple-dose protocol involves the administration of 0.25 mg cetrorelix (or ganirelix) daily from day six to seven of stimulation, or when the leading follicle is 14 to 15 mm, until human chorionic gonadotrophin (HCG) administration; the single-dose protocol involves a single administration of 3 mg cetrorelix on day seven to eight of stimulation. Assuming comparable clinical outcome, these benefits would justify a change from the standard long protocol of GnRH agonists to the new GnRH antagonist regimens. To evaluate the evidence regarding the efficacy of gonadotrophin-releasing hormone (GnRH) antagonists compared with the standard long protocol of GnRH agonists for controlled ovarian hyperstimulation in assisted conception, we searched the Cochrane Menstrual Disorders and Subfertility Group's Specialised Register, MEDLINE and EMBASE databases from 1987 to February 2006, and handsearched bibliographies of relevant publications and reviews and abstracts of scientific meetings. We also contacted manufacturers in the field. Randomized controlled studies comparing different protocols of GnRH antagonists with GnRH agonists in assisted conception cycles were included in this review. Two authors independently assessed trial quality and extracted data. If relevant data were missing or unclear, the authors were consulted. Twenty-seven RCTs comparing the GnRH antagonist to the long protocol of GnRH agonist fulfilled the inclusion criteria. The clinical pregnancy rate was significantly lower in the antagonist group (OR = 0.84, 95% CI = 0.72 to 0.97).
The ongoing pregnancy/live-birth rate was likewise significantly lower in the antagonist group (P = 0.03; OR 0.82, 95% CI 0.69 to 0.98). However, there was a statistically significant reduction in the incidence of severe OHSS with the antagonist protocol (P = 0.01; RR 0.61, 95% CI 0.42 to 0.89). In addition, interventions to prevent OHSS (e.g. coasting, cycle cancellation) were administered more frequently in the agonist group (P = 0.03; OR 0.44, 95% CI 0.21 to 0.93). The GnRH antagonist protocol is a short and simple protocol with good clinical outcome and significant reductions in the incidence of severe ovarian hyperstimulation syndrome and the amount of gonadotrophins, but the lower pregnancy rate compared with the GnRH agonist long protocol necessitates counseling subfertile couples before recommending a change from GnRH agonist to antagonist.
Dynamic whole body PET parametric imaging: II. Task-oriented statistical estimation
Karakatsanis, Nicolas A.; Lodge, Martin A.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman
2013-01-01
In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15–20 cm) of a single bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole-body PET parametric imaging by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole-body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve a superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) relative to OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or to quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria by adjusting the Patlak correlation-coefficient (WR) reference value.
The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical FDG patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection. PMID:24080994
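The Patlak OLS step that the hybrid framework builds on is an ordinary linear fit: at quasi-steady state, y = C_tissue/C_plasma is linear in the "stretched time" x = integral(C_plasma)/C_plasma, with slope Ki and intercept V. A minimal sketch with synthetic, noiseless data (the Ki and V values are made up for illustration):

```python
from statistics import fmean

def patlak_ols(x, y):
    """OLS fit of the Patlak plot: returns (Ki, V) as slope and intercept."""
    xm, ym = fmean(x), fmean(y)
    ki = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
         sum((xi - xm) ** 2 for xi in x)
    return ki, ym - ki * xm

# Synthetic data on the line y = 0.05 * x + 0.4 (Ki = 0.05 /min, V = 0.4)
x = [10, 20, 30, 40, 50]                 # stretched time (min)
y = [0.05 * xi + 0.4 for xi in x]
ki, v = patlak_ols(x, y)
print(round(ki, 3), round(v, 3))  # 0.05 0.4
```

In a voxel-wise whole-body setting this fit runs independently at every voxel, which is precisely where noise makes plain OLS fragile and motivates the hybrid, correlation-driven regression described above.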
ABIOTIC REACTIONS MAY BE THE MOST IMPORTANT MECHANISM IN NATURAL ATTENUATION OF CHLORINATED SOLVENTS
The EPA Technical Protocol for Evaluating Natural Attenuation of Chlorinated Solvents in Ground Water was developed with the assumption that natural biological reductive dechlorination was the only important mechanism for destruction of chlorinated solvents and their reduction ...
Evaluation of a cleanser for petroleum-contaminated skin.
Phieffer, Laura S; Banks, David M; Bosse, Michael J; Meyer, Martha H; Meyer, Ralph A; Smith, Kevin
2003-12-01
Extremity injuries contaminated with petroleum products pose clinical dilemmas. This project was designed to evaluate the efficacy of a dioctyl sulfosuccinate (DS) solution for cleansing petroleum-contaminated skin. One hundred Sprague-Dawley rats were subjected to a contamination protocol followed by a cleansing procedure. Four petroleum contaminants and five cleansing solutions were selected. The protocol consisted of shaving, initial punch biopsy, contamination, precleansing punch biopsy, standardized scrub protocol, and postcleansing punch biopsy. Biopsy samples were analyzed for petroleum residue using fluorometry. The 10% DS solution had the highest reduction of crude oil, grease, and tar: 99.6 +/- 0.4% (mean +/- SD) contaminant reduction for crude oil, 99.8 +/- 0.2% for grease, and 99.8 +/- 0.2% for tar. The other cleansers showed less efficacy (p < 0.05). Concentrated DS appears to be significantly more effective at cleaning petroleum products from skin than the commonly chosen surgical and commercial cleansers.
García-Ramos, Amador; Torrejón, Alejandro; Feriche, Belén; Morales-Artacho, Antonio J; Pérez-Castilla, Alejandro; Padial, Paulino; Jaric, Slobodan
2018-02-01
This study explored the feasibility of the force-velocity (F-V) relationship for detecting the acute effects of different fatigue protocols on selective changes in the maximal capacities of upper body muscles to produce force, velocity, and power. After determining the bench press one-repetition maximum (1RM), participants' F-V relationships were assessed during the bench press throw exercise in five separate sessions after performing one of the following fatiguing protocols: 60%1RM failure, 60%1RM non-failure, 80%1RM failure, 80%1RM non-failure, and no-fatigue. In the non-failure protocols, participants performed half the maximum number of repetitions of their respective failure protocols. The main findings revealed that (1) all F-V relationships were highly linear (median r = 0.997 and r = 0.982 for data averaged across participants and individual data, respectively), (2) the fatiguing protocols ranked by the magnitude of power loss as follows: 60%1RM failure > 80%1RM failure > 60%1RM non-failure > 80%1RM non-failure, and (3) the assessed maximum force and velocity outputs showed particularly prominent reductions in the protocols with the lowest and highest levels of fatigue (i.e., 80%1RM non-failure and 60%1RM failure), respectively. The results support the use of the F-V relationship to assess the effects of fatigue on the distinctive capacities of the muscles to produce force, velocity, and power output while performing multi-joint tasks.
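Fitting the linear F-V relationship and deriving the maximal capacities can be sketched as follows. The load-velocity pairs are hypothetical, and the identity Pmax = F0 * V0 / 4 holds only under the linear F-V model assumed here:

```python
from statistics import fmean

def fv_parameters(velocities, forces):
    """Fit F = F0 - (F0/V0) * V by OLS; return (F0, V0, Pmax)."""
    vm, fm = fmean(velocities), fmean(forces)
    slope = sum((v - vm) * (f - fm) for v, f in zip(velocities, forces)) / \
            sum((v - vm) ** 2 for v in velocities)
    f0 = fm - slope * vm           # force-axis intercept (N)
    v0 = -f0 / slope               # velocity-axis intercept (m/s)
    return f0, v0, f0 * v0 / 4.0   # Pmax of a linear F-V profile (W)

# Hypothetical bench-press-throw data (m/s, N), lying on F = 920 - 500 * V
v = [0.6, 0.9, 1.2, 1.5]
f = [620, 470, 320, 170]
f0, v0, pmax = fv_parameters(v, f)
print(round(f0), round(v0, 2), round(pmax))
```

Comparing F0, V0, and Pmax before and after a fatiguing protocol is what allows force loss, velocity loss, and power loss to be separated, as in the findings above.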
Effectiveness of a myocardial infarction protocol in reducing door-to-balloon time.
Correia, Luis Cláudio Lemos; Brito, Mariana; Kalil, Felipe; Sabino, Michael; Garcia, Guilherme; Ferreira, Felipe; Matos, Iracy; Jacobs, Peter; Ronzoni, Liliana; Noya-Rabelo, Márcia
2013-07-01
An adequate door-to-balloon time (<120 minutes) is a necessary condition for the efficacy of primary angioplasty in infarction to translate into effectiveness. We describe the effectiveness of a quality-of-care protocol in reducing the door-to-balloon time. Between May 2010 and August 2012, all individuals undergoing primary angioplasty in our hospital were analyzed. The door time was electronically recorded at the moment the patient took a number to be evaluated in the emergency room, which occurred prior to filling in the check-in forms and to triage. The balloon time was defined as the beginning of artery opening (introduction of the first device). The first 5 months of monitoring corresponded to the pre-implementation period of the protocol. The protocol comprised the definition of a flowchart of actions from patient arrival at the hospital, raising the team's awareness of the prioritization of time, and provision of periodic feedback on the results and possible inadequacies. A total of 50 individuals were assessed, divided into five groups of 10 sequential patients (one group pre- and four groups post-protocol). The door-to-balloon time for the 10 cases recorded before protocol implementation was 200 ± 77 minutes. After protocol implementation, there was a progressive reduction of the door-to-balloon time to 142 ± 78 minutes in the first 10 patients, then to 150 ± 50 minutes, 131 ± 37 minutes and, finally, 116 ± 29 minutes in the three subsequent groups of 10 patients, respectively. Linear regression of the door-to-balloon time on sequential patients (r = -0.41) showed a regression coefficient of -1.74 minutes per patient. The protocol implementation proved effective in reducing the door-to-balloon time.
Bocchiardo, M; Alciati, M; Buscemi, A; Cravetto, A; Richiardi, E; Gaita, F
1995-05-01
Carotid sinus massage is a first-level test when investigating the cause of syncope. It is normally performed in the supine and erect positions; however, there is no standard complete protocol. We therefore devised a new protocol to evaluate the utility of carotid sinus massage in different postures and the influence of patient age on the response. Two groups of subjects were selected: a group of 167 patients (mean age 50 ± 18 years; 105 males, 62 females) with a history of syncope without cardiovascular or neurological disease, and 20 asymptomatic control subjects (mean age 52 ± 13 years; 11 males, 9 females). Carotid sinus massage was performed supine, just after passive tilt, after 5 minutes of tilt, and just after passive return to supine. If a pause > 3 s was detected, the protocol was repeated after intravenous atropine injection. Responses were classified as borderline vasodepressor (blood pressure reduction > 30 but < 50 mm Hg without symptoms), vasodepressor (blood pressure reduction > 50 mm Hg, or > 30 mm Hg with symptoms such as dizziness, vertigo or syncope), cardioinhibitory (pause > 3 s), or mixed (cardioinhibitory with blood pressure reduction > 30 mm Hg after atropine). Carotid sinus massage provided all the information in the supine position in 14 (12%) patients, after passive tilt in 67 (57%), after 5 minutes of tilt in 30 (26%), and after return to supine in 6 (5%). The responses were: 13 (8%) borderline vasodepressor, 32 (19%) vasodepressor, 2 (1%) cardioinhibitory, 70 (42%) mixed, and 50 (30%) negative. Positive responses were more frequent in patients over 45 years (90% versus 43%). In the control group only 3 (15%) positive responses were elicited (2 borderline vasodepressor and 1 vasodepressor, all in subjects over 45). This protocol for carotid sinus massage elicited positive responses in 70% of patients with syncope without cardiovascular or neurological disease; cardioinhibitory responses were rare (2%); positive responses were more frequent in patients over 45 years; and the protocol specificity was 85%.
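The response categories described above can be expressed as a small decision function. This is an illustrative sketch only; the boundary handling (e.g. a drop of exactly 50 mm Hg) and the function signature are assumptions, not part of the original protocol:

```python
def classify(pause_s, bp_drop, symptomatic, bp_drop_after_atropine=None):
    """Classify a carotid sinus massage response (pause in s, BP drop in mmHg)."""
    if pause_s > 3:
        # cardioinhibitory; "mixed" if BP still falls > 30 mmHg after atropine
        if bp_drop_after_atropine is not None and bp_drop_after_atropine > 30:
            return "mixed"
        return "cardioinhibitory"
    if bp_drop > 50 or (bp_drop > 30 and symptomatic):
        return "vasodepressor"
    if 30 < bp_drop <= 50:
        return "borderline vasodepressor"
    return "negative"

print(classify(4.0, 60, True, bp_drop_after_atropine=40))  # mixed
print(classify(1.0, 35, False))  # borderline vasodepressor
print(classify(1.0, 20, False))  # negative
```

Encoding the thresholds this way also makes the ordering explicit: the pause criterion is checked first, so a cardioinhibitory response is never mislabeled as vasodepressor.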
Hoshino, Tatsuhiko; Inagaki, Fumio
2017-01-01
Next-generation sequencing (NGS) is a powerful tool for analyzing environmental DNA and provides a comprehensive molecular view of microbial communities. For obtaining the copy number of particular sequences in an NGS library, however, additional quantitative analysis such as quantitative PCR (qPCR) or digital PCR (dPCR) is required. Furthermore, the number of sequences in a library does not always reflect the original copy number of a target gene because of biases caused by PCR amplification, making it difficult to convert the proportion of particular sequences in the NGS library to a copy number using the mass of input DNA. To address this issue, we applied a stochastic labeling approach with random-tag sequences and developed an NGS-based quantification protocol, which enables simultaneous sequencing and quantification of the targeted DNA. This quantitative sequencing (qSeq) is initiated by single-primer extension (SPE) using a primer with a random tag adjacent to the 5' end of the target-specific sequence. During SPE, each DNA molecule is stochastically labeled with the random tag. Subsequently, first-round PCR is conducted, specifically targeting the SPE product, followed by second-round PCR to index for NGS. The number of random tags is determined only during the SPE step and is therefore not affected by the two rounds of PCR, which may introduce amplification biases. In the case of 16S rRNA genes, after NGS sequencing and taxonomic classification, the absolute number of 16S rRNA gene copies of a target phylotype can be estimated by Poisson statistics by counting the random tags incorporated at the end of each sequence. To test the feasibility of this approach, the 16S rRNA gene of Sulfolobus tokodaii was subjected to qSeq, which resulted in accurate quantification of 5.0 × 10^3 to 5.0 × 10^4 copies of the 16S rRNA gene.
Furthermore, qSeq was applied to mock microbial communities and environmental samples, and the results were comparable to those obtained using digital PCR and to relative abundances based on a standard sequence library. We demonstrated that the qSeq protocol proposed here is advantageous in providing less-biased absolute copy numbers of each target DNA in a single NGS run. With this new experimental scheme in microbial ecology, microbial community compositions can be explored in a more quantitative manner, thus expanding our knowledge of microbial ecosystems in natural environments.
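The Poisson estimate underlying stochastic labeling can be sketched as follows: if each molecule independently draws one of M possible random tags, the number of distinct tags observed (k) underestimates the number of molecules as the pool saturates, and the molecule count can be recovered as N = -M·ln(1 - k/M). A minimal sketch in Python; the tag pool size of 4^6 used in the example is an illustrative assumption, not a parameter from the study.

```python
import math

def estimate_copies(distinct_tags: int, tag_pool_size: int) -> float:
    """Poisson/occupancy estimate of template molecules from unique random tags.

    If N molecules each draw one of M tags uniformly at random, the expected
    number of distinct tags is M * (1 - exp(-N/M)); inverting this gives
    N = -M * ln(1 - k/M) for k distinct tags observed.
    """
    k, m = distinct_tags, tag_pool_size
    if not 0 <= k < m:
        raise ValueError("observed tags must be fewer than the tag pool size")
    return -m * math.log(1.0 - k / m)

# Hypothetical example: a 6-base random tag gives a pool of 4**6 = 4096 tags.
pool = 4 ** 6
print(estimate_copies(10, pool))    # ~10: far from saturation, k tracks N
print(estimate_copies(1000, pool))  # ~1146: tag collisions already substantial
```

Far from saturation the estimate reduces to the raw tag count; near saturation the correction grows quickly, which is why the usable quantification range depends on tag length.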
Zuway, Khaled Y; Smith, Jamie P; Foster, Christopher W; Kapur, Nikil; Banks, Craig E; Sutcliffe, Oliver B
2015-09-21
The global increase in the production and abuse of cathinone-derived New Psychoactive Substances (NPSs) has created a need for rapid, selective and sensitive protocols for their separation and detection. Electrochemical sensing of these compounds has been demonstrated to be an effective method for the in-field detection of these substances, either in their pure form or in the presence of common adulterants; however, the technique is limited in its ability to discriminate between structurally related cathinone derivatives (for example, (±)-4′-methylmethcathinone (4-MMC, 2a) and (±)-4′-methyl-N-ethylmethcathinone (4-MEC, 2b)) when both are present in a mixture. In this paper we demonstrate, for the first time, the combination of HPLC-UV with amperometric detection (HPLC-AD) for the qualitative and quantitative analysis of 4-MMC and 4-MEC using either a commercially available impinging jet (LC-FC-A) or custom-made iCell channel (LC-FC-B) flow-cell system incorporating embedded graphite screen-printed macroelectrodes. The protocol offers a cost-effective, reproducible and reliable sensor platform for the simultaneous HPLC-UV and amperometric detection of the target analytes. The two systems have similar limits of detection for amperometric detection [LC-FC-A: 14.66 μg mL(-1) (2a) and 9.35 μg mL(-1) (2b); LC-FC-B: 57.92 μg mL(-1) (2a) and 26.91 μg mL(-1) (2b)] to the previously reported oxidative electrochemical protocol [39.8 μg mL(-1) (2a) and 84.2 μg mL(-1) (2b)] for two synthetic cathinones prevalent on the recreational drugs market. Though not as sensitive as standard HPLC-UV detection, both flow cells show good agreement between the quantitative electroanalytical data, making them suitable for the detection and quantification of 4-MMC and 4-MEC, either in their pure form or within complex mixtures. 
Additionally, the simultaneous HPLC-UV and amperometric detection protocol detailed herein shows a marked improvement and advantage over previously reported electroanalytical methods, which were either unable to selectively discriminate between structurally related synthetic cathinones (e.g. 4-MMC and 4-MEC) or utilised harmful and restrictive materials in their design.
Sunderland, Nicholas; Kaura, Amit; Li, Anthony; Kamdar, Ravi; Petzer, Ed; Dhillon, Para; Murgatroyd, Francis; Scott, Paul A
2016-09-01
Randomised trials have shown that empiric ICD programming, using long detection times and high detection zones, reduces device therapy in ICD recipients. However, there are fewer data on its effectiveness in a "real-world" setting, especially in secondary prevention patients. Our aim was to evaluate the introduction of a standardised programming protocol in a real-world setting of unselected ICD recipients. We analysed 270 consecutive ICD recipients implanted in a single centre: 135 implanted prior to protocol implementation (physician-led group) and 135 after (standardised group). The protocol included long arrhythmia detection times (30/40 or equivalent) and high rate detection zones (primary prevention lower treatment zone 200 bpm). Programming in the physician-led group was at the discretion of the implanter. The primary endpoint was time-to-any therapy (ATP or shocks). Secondary endpoints were time-to-inappropriate therapy and time-to-appropriate therapy. The safety endpoints were syncopal episodes, hospital admissions and death. At 12 months of follow-up, 47 patients had received any ICD therapy (physician-led group, n = 31 vs. standardised group, n = 16). There was a 47 % risk reduction in any device therapy (p = 0.04) and an 86 % risk reduction in inappropriate therapy (p = 0.009) in the standardised compared to the physician-led group. There was a non-significant 30 % risk reduction in appropriate therapy (p = 0.32). Results were consistent across primary and secondary prevention patients. There were no significant differences in the rates of syncope, hospitalisation, and death. In unselected patients in a real-world setting, introduction of a standardised programming protocol, using long detection times and high detection zones, significantly reduces the burden of ICD therapy without an increase in adverse outcomes.
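As a back-of-the-envelope check on the reported effect sizes, a crude relative risk reduction can be computed from the raw event counts (31/135 vs. 16/135). This ignores censoring and follow-up time, which the study's time-to-event analysis accounts for, so it only approximates the reported 47%. A sketch in Python:

```python
def relative_risk_reduction(events_ctrl: int, n_ctrl: int,
                            events_trt: int, n_trt: int) -> float:
    """Crude relative risk reduction, 1 - RR, from raw event counts.

    Ignores censoring and differing follow-up, so it only approximates
    a proper time-to-event (e.g. Cox model) estimate.
    """
    risk_ctrl = events_ctrl / n_ctrl
    risk_trt = events_trt / n_trt
    return 1.0 - risk_trt / risk_ctrl

# Event counts from the abstract: 31/135 physician-led vs. 16/135 standardised.
print(round(100 * relative_risk_reduction(31, 135, 16, 135)))  # 48 (%)
```

The crude estimate of ~48% sits close to the 47% time-to-event figure, which is expected when follow-up is similar between groups.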
Computer-aided Assessment of Regional Abdominal Fat with Food Residue Removal in CT
Makrogiannis, Sokratis; Caturegli, Giorgio; Davatzikos, Christos; Ferrucci, Luigi
2014-01-01
Rationale and Objectives: Separate quantification of abdominal subcutaneous and visceral fat regions is essential to understand the role of regional adiposity as a risk factor in epidemiological studies. Fat quantification is often based on computed tomography (CT) because fat density is distinct from other tissue densities in the abdomen. However, the presence of intestinal food residues with densities similar to fat may reduce fat quantification accuracy. We introduce an abdominal fat quantification method in CT with a focus on food residue removal. Materials and Methods: Total fat was identified in the feature space of Hounsfield units and divided into subcutaneous and visceral components using model-based segmentation. Regions of food residues were identified and removed from visceral fat using a machine learning method integrating intensity, texture, and spatial information. Cost-weighting and bagging techniques were investigated to address class imbalance. Results: We validated our automated food residue removal technique against semimanual quantifications. Our feature selection experiments indicated that joint intensity and texture features produce the highest classification accuracy, at 95%. We explored generalization capability using k-fold cross-validation and receiver operating characteristic (ROC) analysis with variable k. Losses in accuracy and area under the ROC curve between maximum and minimum k were limited to 0.1% and 0.3%. We validated tissue segmentation against reference semimanual delineations. The Dice similarity scores were as high as 93.1 for subcutaneous fat and 85.6 for visceral fat. Conclusions: Computer-aided regional abdominal fat quantification is a reliable computational tool for large-scale epidemiological studies. Our proposed intestinal food residue reduction scheme is an original contribution of this work. Validation experiments indicate very good accuracy and generalization capability. PMID:24119354
Computer-aided assessment of regional abdominal fat with food residue removal in CT.
Makrogiannis, Sokratis; Caturegli, Giorgio; Davatzikos, Christos; Ferrucci, Luigi
2013-11-01
Separate quantification of abdominal subcutaneous and visceral fat regions is essential to understand the role of regional adiposity as a risk factor in epidemiological studies. Fat quantification is often based on computed tomography (CT) because fat density is distinct from other tissue densities in the abdomen. However, the presence of intestinal food residues with densities similar to fat may reduce fat quantification accuracy. We introduce an abdominal fat quantification method in CT with a focus on food residue removal. Total fat was identified in the feature space of Hounsfield units and divided into subcutaneous and visceral components using model-based segmentation. Regions of food residues were identified and removed from visceral fat using a machine learning method integrating intensity, texture, and spatial information. Cost-weighting and bagging techniques were investigated to address class imbalance. We validated our automated food residue removal technique against semimanual quantifications. Our feature selection experiments indicated that joint intensity and texture features produce the highest classification accuracy, at 95%. We explored generalization capability using k-fold cross-validation and receiver operating characteristic (ROC) analysis with variable k. Losses in accuracy and area under the ROC curve between maximum and minimum k were limited to 0.1% and 0.3%. We validated tissue segmentation against reference semimanual delineations. The Dice similarity scores were as high as 93.1 for subcutaneous fat and 85.6 for visceral fat. Computer-aided regional abdominal fat quantification is a reliable computational tool for large-scale epidemiological studies. Our proposed intestinal food residue reduction scheme is an original contribution of this work. Validation experiments indicate very good accuracy and generalization capability. Published by Elsevier Inc.
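The Dice similarity score used for validation compares an automated segmentation mask A against a reference delineation B as DSC = 2|A∩B|/(|A|+|B|), reported here on a 0-100 scale. A minimal sketch, with hypothetical voxel index sets:

```python
def dice_score(auto_mask: set, reference_mask: set) -> float:
    """Dice similarity coefficient between two voxel sets, scaled to 0-100."""
    if not auto_mask and not reference_mask:
        return 100.0  # two empty masks agree trivially
    overlap = len(auto_mask & reference_mask)
    return 100.0 * 2.0 * overlap / (len(auto_mask) + len(reference_mask))

# Hypothetical voxel indices for illustration:
auto = {1, 2, 3, 4}
ref = {2, 3, 4, 5}
print(dice_score(auto, ref))  # 75.0 (3 shared voxels out of 4 + 4)
```

In practice the masks are boolean 3-D arrays rather than index sets, but the arithmetic is identical: twice the overlap divided by the sum of the two region sizes.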
Gallandat, Karin; Daniels, Kyle; Desmarais, Anne Marie; Scheinman, Pamela; Lantagne, Daniele
2017-01-01
To prevent Ebola transmission, frequent handwashing is recommended in Ebola Treatment Units and communities. However, little is known about which handwashing protocol is most efficacious. We evaluated six handwashing protocols (soap and water, alcohol-based hand sanitizer (ABHS), and 0.05% sodium dichloroisocyanurate, high-test hypochlorite (HTH), and stabilized and non-stabilized sodium hypochlorite solutions) for: 1) the efficacy of handwashing in the removal and inactivation of non-pathogenic model organisms and 2) the persistence of organisms in rinse water. The model organisms E. coli and bacteriophage Phi6 were used to evaluate handwashing with and without an organic load added to simulate bodily fluids. Hands were inoculated with test organisms, washed, and rinsed using a glove juice method to retrieve the remaining organisms. Impact was estimated by comparing the log reduction in organisms after handwashing to the log reduction without handwashing. Handwashing resulted in a 1.94–3.01 log reduction in E. coli concentration without, and 2.18–3.34 with, soil load; and a 2.44–3.06 log reduction in Phi6 without, and 2.71–3.69 with, soil load. HTH performed most consistently well, with significantly greater log reductions than the other handwashing protocols in three models. However, the magnitude of the differences in handwashing efficacy was small, suggesting the protocols are similarly efficacious. Rinse water demonstrated a 0.28–4.77 log reduction in remaining E. coli without, and 0.21–4.49 with, soil load and a 1.26–2.02 log reduction in Phi6 without, and 1.30–2.20 with, soil load. Chlorine resulted in significantly less persistence of E. coli in both conditions, and of Phi6 without soil load, in rinse water (p<0.001). Thus, chlorine-based methods may offer the benefit of reducing persistence in rinse water. 
We recommend responders use the most practical handwashing method to ensure hand hygiene in Ebola contexts, considering the potential benefit of chlorine-based methods in reducing persistence in rinse water. PMID:28231311
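The log-reduction metric used throughout is the base-10 logarithm of the ratio of organism concentrations before and after an intervention, and the handwashing impact is the difference between the washed and unwashed log reductions. A minimal sketch with made-up CFU counts:

```python
import math

def log_reduction(conc_before: float, conc_after: float) -> float:
    """Base-10 log reduction: 3.0 means a 1000-fold drop in concentration."""
    return math.log10(conc_before / conc_after)

# Hypothetical counts (CFU/mL): inoculum 1e6, 1e3 remaining after washing,
# 10**5.5 remaining with no wash (rinse alone / natural die-off).
washed = log_reduction(1e6, 1e3)
unwashed = log_reduction(1e6, 10 ** 5.5)
print(round(washed - unwashed, 2))  # 2.5 log10 attributable to handwashing
```

Subtracting the no-wash log reduction isolates the effect of the handwashing protocol itself, mirroring the comparison described in the abstract.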
Catalyst-free reductive amination of aromatic aldehydes with ammonium formate and Hantzsch ester.
Zhao, Pan-Pan; Zhou, Xin-Feng; Dai, Jian-Jun; Xu, Hua-Jian
2014-12-07
A protocol for the reductive amination of aromatic aldehydes using ammonium formate and Hantzsch ester is described. It is a mild, convenient, acid- and catalyst-free system applied to the synthesis of both symmetric and asymmetric aromatic secondary amines.
Auffret, Marc; Pilote, Alexandre; Proulx, Emilie; Proulx, Daniel; Vandenberg, Grant; Villemur, Richard
2011-12-15
Geosmin and 2-methylisoborneol (MIB) have been associated with off-flavour problems in fish and seafood products, generating a strong negative impact for aquaculture industries. Although most of the producers of geosmin and MIB have been identified as Streptomyces species or cyanobacteria, Streptomyces spp. are thought to be responsible for the synthesis of these compounds in indoor recirculating aquaculture systems (RAS). The detection of genes involved in the synthesis of geosmin and MIB can be a relevant indicator of the beginning of off-flavour events in RAS. Here, we report a real-time polymerase chain reaction (qPCR) protocol targeting geoA sequences that encode a germacradienol synthase involved in geosmin synthesis. New geoA-related sequences were retrieved from eleven geosmin-producing Actinomycete strains, among them two Streptomyces strains isolated from two RAS. Combined with geoA-related sequences available in gene databases, we designed primers and standards suitable for qPCR assays targeting mainly Streptomyces geoA. Using our qPCR protocol, we succeeded in measuring the level of geoA copies in sand filter and biofilters in two RAS. This study is the first to apply qPCR assays to detect and quantify the geosmin synthesis gene (geoA) in RAS. Quantification of geoA in RAS could permit the monitoring of the level of geosmin producers prior to the occurrence of geosmin production. This information will be most valuable for fish producers to manage further development of off-flavour events. Copyright © 2011 Elsevier Ltd. All rights reserved.
A Tissue-Specific Approach to the Analysis of Metabolic Changes in Caenorhabditis elegans
Pujol, Claire; Ipsen, Sabine; Brodesser, Susanne; Mourier, Arnaud; Tolnay, Markus; Frank, Stephan; Trifunović, Aleksandra
2011-01-01
The majority of metabolic principles are evolutionarily conserved from nematodes to humans. Caenorhabditis elegans has widely accelerated the discovery of new genes important to maintain organismic metabolic homeostasis. Various methods exist to assess the metabolic state in worms, yet they often require large numbers of animals and tend to be performed as bulk analyses of whole-worm homogenates, thereby largely precluding detailed studies of metabolic changes in specific worm tissues. Here, we have adapted well-established histochemical methods for use on C. elegans fresh-frozen sections and demonstrate their validity for analyses of morphological and metabolic changes at the tissue level in wild-type and various mutant strains. We show how the worm presents on hematoxylin and eosin (H&E)-stained sections and demonstrate their usefulness in monitoring and identifying morphological abnormalities. In addition, we demonstrate how Oil-Red-O staining on frozen worm cross-sections permits quantification of lipid storage, avoiding the artifact-prone fixation and permeabilization procedures of traditional whole-mount protocols. We also adjusted standard enzymatic stains for respiratory chain subunits (NADH, SDH, and COX) to monitor the metabolic states of various C. elegans tissues. In summary, the protocols presented here provide technical guidance to obtain robust, reproducible and quantifiable tissue-specific data on worm morphology as well as carbohydrate, lipid and mitochondrial energy metabolism that cannot be obtained through traditional biochemical bulk analyses of worm homogenates. Furthermore, analysis of worm cross-sections overcomes the common problem of quantification in three-dimensional whole-mount specimens. PMID:22162770
Cortesi, Marilisa; Bandiera, Lucia; Pasini, Alice; Bevilacqua, Alessandro; Gherardi, Alessandro; Furini, Simone; Giordano, Emanuele
2017-01-01
Quantifying gene expression at the single-cell level is fundamental for the complete characterization of synthetic gene circuits, due to the significant impact of noise and inter-cellular variability on the system's functionality. Commercial set-ups that allow the acquisition of fluorescent signals at the single-cell level (flow cytometers or quantitative microscopes) are expensive apparatuses that are hardly affordable for small laboratories. A protocol that makes a standard optical microscope capable of acquiring quantitative, single-cell fluorescence data from a bacterial population transformed with synthetic gene circuitry is presented. Single-cell fluorescence values, acquired with the microscope set-up and processed with custom-made software, are compared with results obtained with a flow cytometer in a bacterial population transformed with the same gene circuitry. The high correlation between data from the two experimental set-ups, with a correlation coefficient computed over the tested dynamic range > 0.99, proves that a standard optical microscope, when coupled with appropriate software for image processing, can be used for quantitative single-cell fluorescence measurements. The calibration of the set-up, together with its validation, is described. The experimental protocol described in this paper makes quantitative measurement of single-cell fluorescence accessible to laboratories equipped with standard optical microscope set-ups. Our method allows for an affordable measurement and quantification of intercellular variability; a better understanding of this phenomenon will improve our comprehension of cellular behaviors and the design of synthetic gene circuits. All the required software is freely available to the synthetic biology community (MUSIQ: Microscope flUorescence SIngle cell Quantification).
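The agreement between the two set-ups is summarized by a correlation coefficient computed over the tested dynamic range. Assuming the usual Pearson definition, it can be computed from paired per-cell fluorescence values; a minimal sketch with hypothetical paired measurements:

```python
import math

def pearson(xs: list, ys: list) -> float:
    """Pearson correlation coefficient between paired measurements."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical per-cell fluorescence: microscope (a.u.) vs. flow cytometer (a.u.)
microscope = [120, 340, 560, 810, 1030]
cytometer = [100, 300, 520, 760, 980]
print(round(pearson(microscope, cytometer), 4))  # close to 1 for linear agreement
```

A coefficient above 0.99 across the dynamic range, as reported, indicates the microscope readout is an affine transform of the cytometer readout to within noise.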
Peyron, Pierre-Antoine; Baccino, Éric; Nagot, Nicolas; Lehmann, Sylvain; Delaby, Constance
2017-02-01
Determination of skin wound vitality is an important issue in forensic practice, but no reliable biomarker currently exists. Quantification of inflammatory cytokines in injured skin with MSD® technology is an innovative and promising approach. This preliminary study aims to develop a protocol for the preparation and analysis of skin samples. Samples from ante-mortem wounds, post-mortem wounds, and intact skin ("control samples") were taken from corpses at autopsy. After the pre-analytical protocol had been optimized in terms of skin homogenization and protein extraction, the concentration of TNF-α was measured in each sample with the MSD® approach. Five other cytokines of interest (IL-1β, IL-6, IL-10, IL-12p70 and IFN-γ) were then quantified simultaneously with an MSD® multiplex assay. The optimal pre-analytical conditions consist of protein extraction from a 6 mm diameter skin sample in PBS buffer with 0.05% Triton. Our results show the linearity and reproducibility of TNF-α quantification with MSD®, and an inter- and intra-individual variability in protein concentrations. The MSD® multiplex assay is likely to detect differential skin concentrations for each cytokine of interest. This preliminary study was used to develop and optimize the pre-analytical and analytical conditions of the MSD® method using injured and healthy skin samples, with the aim of identifying the cytokine, or set of cytokines, that may serve as biomarkers of skin wound vitality.
Human immunodeficiency virus bDNA assay for pediatric cases.
Avila, M M; Liberatore, D; Martínez Peralta, L; Biglione, M; Libonatti, O; Coll Cárdenas, P; Hodara, V L
2000-01-01
Techniques to quantify plasma HIV-1 RNA viral load (VL) are commercially available, and they are adequate for monitoring adults infected by HIV and treated with antiretroviral drugs. Little experience with HIV VL has been reported in pediatric cases. In Argentina, the evaluation of several assays for VL in pediatric cases is now being considered. To evaluate the pediatric protocol for the bDNA assay in HIV-infected children, 25 samples from HIV-infected children (according to CDC criteria for pediatric AIDS) were analyzed using the Quantiplex HIV RNA 2.0 Assay (Chiron Corporation), following the manufacturer's recommendations, in a protocol that uses 50 microliters of the patient's plasma (sensitivity: 10,000 copies/ml). When HIV RNA was not detected, samples were run with the 1 ml standard bDNA protocol (sensitivity: 500 HIV-RNA copies/ml). Nine samples were from infants under 12 months of age (group A) and 16 from children over 12 months (group B). All infants under one year of age had high HIV-RNA copy numbers in plasma. VL ranged from 30,800 to 2,560,000 RNA copies/ml (median = 362,000 c/ml) for group A and < 10,000 to 554,600 c/ml (median < 10,000) for group B. Only 25% of children in group B had detectable HIV RNA. Using the standard quantification test, none of the patients had undetectable HIV RNA, with values ranging between 950 and 226,200 c/ml for group B (median = 23,300 RNA c/ml). The suggested pediatric protocol could be useful in children under 12 months of age, but the 1 ml standard protocol must be used for older children. Samples with undetectable results from children under one year of age should be repeated using the standard protocol.
Evaluation of telomere length in human cardiac tissues using cardiac quantitative FISH.
Sharifi-Sanjani, Maryam; Meeker, Alan K; Mourkioti, Foteini
2017-09-01
Telomere length has been correlated with various diseases, including cardiovascular disease and cancer. The use of currently available telomere-length measurement techniques is often restricted by the requirement of a large amount of cells (Southern-based techniques) or the lack of information on individual cells or telomeres (PCR-based methods). Although several methods have been used to measure telomere length in tissues as a whole, the assessment of cell-type-specific telomere length provides valuable information on individual cell types. The development of fluorescence in situ hybridization (FISH) technologies enables the quantification of telomeres in individual chromosomes, but the use of these methods is dependent on the availability of isolated cells, which prevents their use with fixed archival samples. Here we describe an optimized quantitative FISH (Q-FISH) protocol for measuring telomere length that bypasses the previous limitations by avoiding contributions from undesired cell types. We have used this protocol on small paraffin-embedded cardiac-tissue samples. This protocol describes step-by-step procedures for tissue preparation, permeabilization, cardiac-tissue pretreatment and hybridization with a Cy3-labeled, telomeric-repeat-complementary (CCCTAA)3 peptide nucleic acid (PNA) probe coupled with cardiac-specific antibody staining. We also describe how to quantify telomere length by means of the fluorescence intensity and area of each telomere within individual nuclei. This protocol provides comparative cell-type-specific telomere-length measurements in relatively small human cardiac samples and offers an attractive technique to test hypotheses implicating telomere length in various cardiac pathologies. The current protocol (from tissue collection to image procurement) takes ∼28 h, including three overnight incubations. We anticipate that the protocol could be easily adapted for use on different tissue types.
NASA Astrophysics Data System (ADS)
Cook, G. D.; Liedloff, A. C.; Richards, A. E.; Meyer, M.
2016-12-01
Australia is the only OECD country with a significant area of tropical savannas within its borders. Approximately 220 000 km2 of these savannas burn every year, releasing 2 to 4% of Australia's accountable greenhouse gas emissions. Reduction of uncertainty in the quantification of these emissions of methane and nitrous oxide has been fundamental to improving both the national GHG inventory and developing approaches to better manage land to reduce these emissions. Projects to reduce pyrogenic emissions have been adopted across 30% of Australia's high-rainfall savannas. Recent work has focussed on quantifying the additional benefit of increased carbon stocks in fine fuel and coarse woody debris (CWD) resulting from improvements in fire management. An integrated set of equations has been developed to enable seamless quantification of emissions and sequestration in these frequently burnt savannas. These show that the increase in carbon stored in fine fuel and CWD comprises about 3 times the emissions abatement from the improvements in fire management that have been achieved in a project area of 28 000 km2. Future work is focussing on improving the understanding of spatial and temporal variation in fire behaviour across Australia's savanna biome, improvements in the quantification of carbon dynamics of CWD, and improved quantification of the effects of fire on carbon dynamics in savanna soils.
Variation of quantitative emphysema measurements from CT scans
NASA Astrophysics Data System (ADS)
Keller, Brad M.; Reeves, Anthony P.; Henschke, Claudia I.; Barr, R. Graham; Yankelevitz, David F.
2008-03-01
Emphysema is a lung disease characterized by destruction of the alveolar air sacs and is associated with long-term respiratory dysfunction. CT scans allow for imaging of the anatomical basis of emphysema, and several measures have been introduced for the quantification of the extent of disease. In this paper we compare these measures for repeatability over time. The measures of interest in this study are the emphysema index, mean lung density, histogram percentile, and fractal dimension. To allow for direct comparisons, the measures were normalized to a 0-100 scale. These measures were computed for a set of 2,027 scan pairs in which the mean interval between scans was 1.15 years (σ: 93 days). These independent pairs were considered under three different scanning conditions: (a) 223 pairs where both scans used a 5 mm slice thickness protocol, (b) 695 pairs where the first scan used the 5 mm protocol and the second a 1.25 mm protocol, and (c) 1,109 pairs scanned both times using a 1.25 mm protocol. We found that the average normalized emphysema index and histogram percentile scores increased by 5.9 and 11 points, respectively, while the fractal dimension showed stability with a mean difference of 1.2. We also found a 7-point bias introduced for the emphysema index under condition (b), and that the fractal dimension measure is least affected by changes in scanner parameters.
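As background for the comparison, the emphysema index is typically defined as the percentage of lung voxels whose attenuation falls below a fixed threshold; the commonly used cutoff of -950 HU is an assumption here, since the abstract does not state the threshold used. A minimal sketch:

```python
def emphysema_index(lung_hu: list, threshold_hu: float = -950.0) -> float:
    """Percentage of lung voxels below an attenuation threshold (in HU).

    -950 HU is a commonly used cutoff for emphysema-like tissue on thin-slice
    CT; the appropriate threshold depends on the acquisition protocol, which
    is one reason slice-thickness changes bias the measure.
    """
    below = sum(1 for hu in lung_hu if hu < threshold_hu)
    return 100.0 * below / len(lung_hu)

# Hypothetical voxel attenuations (HU) from a segmented lung region:
voxels = [-980, -960, -940, -900, -870, -990, -850, -820]
print(emphysema_index(voxels))  # 37.5: 3 of 8 voxels below -950 HU
```

Because thinner slices preserve more noise, more voxels cross a fixed threshold, which is consistent with the protocol-dependent bias reported under condition (b).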
Ziegler, Jörg; Abel, Steffen
2014-12-01
A new method for the determination of amino acids is presented. It combines established methods for the derivatization of primary and secondary amino groups with 9-fluorenylmethoxycarbonyl chloride (Fmoc-Cl) with the subsequent amino acid specific detection of the derivatives by LC-ESI-MS/MS using multiple reaction monitoring (MRM). The derivatization proceeds within 5 min, and the resulting amino acid derivatives can be rapidly purified from matrix by solid-phase extraction (SPE) on HR-X resin and separated by reversed-phase HPLC. The Fmoc derivatives yield several amino acid specific fragment ions which opened the possibility to select amino acid specific MRM transitions. The method was applied to all 20 proteinogenic amino acids, and the quantification was performed using L-norvaline as standard. A limit of detection as low as 1 fmol/µl with a linear range of up to 125 pmol/µl could be obtained. Intraday and interday precisions were lower than 10 % relative standard deviations for most of the amino acids. Quantification using L-norvaline as internal standard gave very similar results compared to the quantification using deuterated amino acid as internal standards. Using this protocol, it was possible to record the amino acid profiles of only a single root from Arabidopsis thaliana seedlings and to compare it with the amino acid profiles of 20 dissected root meristems (200 μm).
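Internal-standard quantification of this kind rests on the ratio of analyte to internal-standard peak areas, scaled by the known internal-standard concentration and a response factor from calibration. A minimal sketch (the response factor of 1.0 and the peak areas are illustrative assumptions, not values from the study):

```python
def quantify_with_istd(analyte_area: float, istd_area: float,
                       istd_conc: float, response_factor: float = 1.0) -> float:
    """Single-point internal-standard quantification.

    conc_analyte = (analyte_area / istd_area) * istd_conc / response_factor,
    where the response factor corrects for analyte-vs-standard ionization
    differences and is determined from a calibration series.
    """
    return (analyte_area / istd_area) * istd_conc / response_factor

# Hypothetical MRM peak areas with 50 pmol/µl L-norvaline spiked as standard:
conc = quantify_with_istd(analyte_area=2.4e5, istd_area=1.2e5, istd_conc=50.0)
print(conc)  # 100.0 pmol/µl
```

Ratioing to a co-injected standard cancels run-to-run variation in injection volume and ionization efficiency, which is why a single non-isotopic standard such as L-norvaline can track deuterated standards so closely.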
Biofilm Quantification on Nasolacrimal Silastic Stents After Dacryocystorhinostomy.
Murphy, Jae; Ali, Mohammed Javed; Psaltis, Alkis James
2015-01-01
Biofilms are now recognized as potential factors in the pathogenesis of chronic inflammatory and infective diseases. The aim of this study was to examine the presence of biofilms and quantify their biomass on silastic nasolacrimal duct stents inserted after dacryocystorhinostomy (DCR). A prospective study was performed on a series of patients undergoing DCR with O'Donoghue stent insertion. After removal, the stents were subjected to biofilm analysis using standard protocols for confocal laser scanning microscopy (CLSM) and scanning electron microscopy. The stents were compared against negative controls and positive in vitro controls established using Staphylococcus aureus strain ATCC 25923. Biofilm quantification was performed using the COMSTAT2 software, and the total biofilm biomass was calculated. A total of nine consecutive patient samples were included in this prospective study. None of the patients had any evidence of postoperative infection. All the stents demonstrated evidence of biofilm formation in both imaging modalities. The presence of various different-sized organisms within a common exopolysaccharide matrix on CLSM suggested the existence of polymicrobial communities. The mean biomass of the patient samples was 0.9385 μm³/μm² (range: 0.3901-1.9511 μm³/μm²). This is the first study to report the quantification of biomass on lacrimal stents. The presence of biofilms on lacrimal stents after DCR is a common finding, but this need not translate into postoperative clinical infection.
Development of a method for measuring femoral torsion using real-time ultrasound.
Hafiz, Eliza; Hiller, Claire E; Nicholson, Leslie L; Nightingale, E Jean; Clarke, Jillian L; Grimaldi, Alison; Eisenhuth, John P; Refshauge, Kathryn M
2014-07-01
Excessive femoral torsion has been associated with various musculoskeletal and neurological problems. To explore this relationship, it is essential to be able to measure femoral torsion accurately in the clinic. Computerized tomography (CT) and magnetic resonance imaging (MRI) are thought to provide the most accurate measurements, but CT involves significant radiation exposure and MRI is expensive. The aim of this study was to design a method for measuring femoral torsion in the clinic and to determine its reliability. Details of the design process, including construction of a jig, the protocol developed, and the reliability of the method are presented. The protocol uses ultrasound to image a ridge on the greater trochanter, with a customized jig placed on the femoral condyles as reference points. An inclinometer attached to the customized jig allows quantification of the degree of femoral torsion. Measurements taken with this protocol had excellent intra- and inter-rater reliability (ICC(2,1) = 0.98 and 0.97, respectively), and the method permitted measurement of femoral torsion with a high degree of accuracy. This method is applicable to the research setting and, with minor adjustments, will be applicable to the clinical setting.
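The reported reliability statistic, ICC(2,1), is the two-way random-effects, absolute-agreement, single-measurement intraclass correlation, computed from the mean squares of a two-way ANOVA. A minimal sketch, assuming that standard definition (the rating matrix below is invented for illustration):

```python
def icc_2_1(ratings: list) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is a list of rows, one per subject, each holding the scores
    given by the same k raters. Computed from two-way ANOVA mean squares:
    ICC = (MSR - MSE) / (MSR + (k-1)*MSE + k*(MSC - MSE)/n).
    """
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)  # between subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)  # between raters
    sst = sum((x - grand) ** 2 for row in ratings for x in row)
    mse = (sst - ssr - ssc) / ((n - 1) * (k - 1))       # residual
    msr, msc = ssr / (n - 1), ssc / (k - 1)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical torsion angles (degrees): 4 subjects measured by 2 raters.
angles = [[12.0, 13.0], [18.0, 18.5], [25.0, 24.0], [31.0, 31.5]]
print(round(icc_2_1(angles), 3))  # 0.995
```

Unlike a plain correlation, this form penalizes systematic offsets between raters (via the MSC term), which matters for claims of absolute agreement between measurements.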
Lu, Jingrang; Gerke, Tammie L; Buse, Helen Y; Ashbolt, Nicholas J
2014-12-01
A quantitative polymerase chain reaction assay (115 bp amplicon) specific to Escherichia coli K12 with an ABI(TM) internal control was developed based on sequence data encoding the rfb gene cluster. Assay specificity was evaluated using three E. coli K12 strains (ATCC W3110, MG1655 & DH1), 24 non-K12 E. coli and 23 bacterial genera. The biofilm detection limit was 10(3) colony-forming units (CFU) E. coli K12 mL(-1), but required a modified protocol, which included a bio-blocker Pseudomonas aeruginosa with ethylenediaminetetraacetic acid buffered to pH 5 prior to cell lysis/DNA extraction. The novel protocol yielded the same sensitivity for drinking water biofilms associated with Fe3O4 (magnetite)-coated SiO2 (quartz) grains and biofilm-surface iron corrosion products from a drinking water distribution system. The novel DNA extraction protocol and specific E. coli K12 assay are sensitive and robust enough for detection and quantification within iron drinking water pipe biofilms, and are particularly well suited for studying enteric bacterial interactions within biofilms.
Garcia Hejl, Carine; Ramirez, Jose Manuel; Vest, Philippe; Chianea, Denis; Renard, Christophe
2014-09-01
Laboratories working towards accreditation under the International Organization for Standardization (ISO) 15189 standard are required to demonstrate the validity of their analytical methods. The differing guidelines set by various accreditation organizations make it difficult to provide objective evidence that an in-house method is fit for the intended purpose. Moreover, the required performance characteristics tests and acceptance criteria are not always detailed, so the laboratory must choose the most suitable validation protocol and set the acceptance criteria. We therefore propose a validation protocol to evaluate the performance of an in-house method. As an example, we validated the process for the detection and quantification of lead in whole blood by electrothermal atomic absorption spectrometry (ETAAS). The fundamental parameters tested were selectivity, calibration model, precision, accuracy (and uncertainty of measurement), contamination, stability of the sample, reference interval, and analytical interference. The protocol has been applied successfully to quantify lead in whole blood by ETAAS; in particular, our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics.
Brignole-Baudouin, Françoise; Baudouin, Christophe; Aragona, Pasquale; Rolando, Maurizio; Labetoulle, Marc; Pisella, Pierre Jean; Barabino, Stefano; Siou-Mermet, Raphaele; Creuzot-Garcher, Catherine
2011-11-01
To determine whether oral supplementation with omega-3 and omega-6 fatty acids can reduce conjunctival epithelium expression of the inflammatory marker human leucocyte antigen-DR (HLA-DR) in patients with dry eye syndrome (DES). This 3-month, double-masked, parallel-group, controlled study was conducted in nine centres, in France and Italy. Eligible adult patients with mild to moderate DES were randomized to receive a placebo containing medium-chain triglycerides or treatment supplement containing omega-3 and omega-6 fatty acids, vitamins and zinc. Treatment regimen was three capsules daily. Impression cytology (IC) was performed at baseline and at month 3 to assess the percentage of cells expressing HLA-DR and to evaluate fluorescence intensity, an alternate measure of HLA-DR. Dry eye symptoms and objective signs were also evaluated. Analyses were performed on the full analysis set (FAS) and per-protocol set (PPS). In total, 138 patients were randomized; 121 patients with available IC were included in the FAS, and of these, 106 patients had no major protocol deviations (PPS). In the PPS, there was a significant reduction in the percentage of HLA-DR-positive cells in the fatty acids group (p = 0.021). Expression of HLA-DR as measured by fluorescence intensity quantification was also significantly reduced in the fatty acids group [FAS (p = 0.041); PPS (p = 0.017)]. No significant difference was found for the signs and symptoms, but there was a tendency for improvement in patients receiving the fatty acids treatment. This study demonstrates that supplementation with omega-3 and omega-6 fatty acids can reduce expression of HLA-DR conjunctival inflammatory marker and may help improve DES symptoms. © 2011 The Authors. Acta Ophthalmologica © 2011 Acta Ophthalmologica Scandinavica Foundation.
Touron-Bodilis, A; Pougnard, C; Frenkiel-Lebossé, H; Hallier-Soulier, S
2011-08-01
This study was designed to evaluate the usefulness of quantification by real-time PCR as a management tool to monitor concentrations of Legionella spp. and Legionella pneumophila in industrial cooling systems, and its ability to anticipate culture trends obtained by the French standard method (AFNOR T90-431). Legionella bacteria were quantified by both methods in samples from nine cooling systems with different water qualities. The proportion of samples positive for L. pneumophila by PCR was clearly lower in deionized or river waters subjected to a biocide treatment than in raw river waters, whereas Legionella spp. were quantified in almost all samples. For some samples containing PCR inhibitors, high quantification limits (up to 4.80 × 10(5) GU l(-1)) prevented quantification of L. pneumophila by PCR even though it was quantified by culture. Finally, monitoring of L. pneumophila concentrations by both methods showed similar trends for 57-100% of the samples. These results suggest that, if methodological steps designed to reduce inhibition and thus lower the quantification limits could be developed for complex waters, real-time PCR could be a valuable complementary tool to monitor the evolution of L. pneumophila concentrations. This study shows the feasibility of using real-time PCR to monitor L. pneumophila proliferation in cooling systems and the importance of adapting nucleic acid extraction and purification protocols to raw waters. Journal of Applied Microbiology © 2011 The Society for Applied Microbiology. No claim to French Government works.
Apparent exchange rate for breast cancer characterization.
Lasič, Samo; Oredsson, Stina; Partridge, Savannah C; Saal, Lao H; Topgaard, Daniel; Nilsson, Markus; Bryskhe, Karin
2016-05-01
Although diffusion MRI has shown promise for the characterization of breast cancer, it has low specificity to malignant subtypes. Higher specificity might be achieved if the effects of cell morphology and molecular exchange across cell membranes could be disentangled. The quantification of exchange might thus allow the differentiation of different types of breast cancer cells. Based on differences in diffusion rates between the intra- and extracellular compartments, filter exchange spectroscopy/imaging (FEXSY/FEXI) provides non-invasive quantification of the apparent exchange rate (AXR) of water between the two compartments. To test the feasibility of FEXSY for the differentiation of different breast cancer cells, we performed experiments on several breast epithelial cell lines in vitro. Furthermore, we performed the first in vivo FEXI measurement of water exchange in human breast. In cell suspensions, pulsed gradient spin-echo experiments with large b values and variable pulse duration allow the characterization of the intracellular compartment, whereas FEXSY provides a quantification of AXR. These experiments are very sensitive to the physiological state of cells and can be used to establish reliable protocols for the culture and harvesting of cells. Our results suggest that different breast cancer subtypes can be distinguished on the basis of their AXR values in cell suspensions. Time-resolved measurements allow the monitoring of the physiological state of cells in suspensions over the time-scale of hours, and reveal an abrupt disintegration of the intracellular compartment. In vivo, exchange can be detected in a tumor, whereas, in normal tissue, the exchange rate is outside the range experimentally accessible for FEXI. At present, low signal-to-noise ratio and limited scan time allows the quantification of AXR only in a region of interest of relatively large tumors. © 2016 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.
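The FEXSY/FEXI quantification described above rests on fitting the recovery of the filtered apparent diffusion coefficient toward its equilibrium value as a function of mixing time. The sketch below is a minimal illustration of that fit on synthetic data; the model form ADC'(tm) = ADC_eq·(1 − σ·exp(−AXR·tm)) follows the published FEXI literature, while all numerical values (mixing times, noise level, true parameters) are invented for demonstration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# FEXI recovery model: after a diffusion filter suppresses the fast
# (extracellular) compartment, the measured ADC relaxes back toward its
# equilibrium value with rate AXR (the apparent exchange rate):
#   ADC'(tm) = ADC_eq * (1 - sigma * exp(-AXR * tm))
def adc_recovery(tm, adc_eq, sigma, axr):
    return adc_eq * (1.0 - sigma * np.exp(-axr * tm))

# synthetic mixing times (s) and noisy "measured" ADC values -- illustrative
rng = np.random.default_rng(0)
tm = np.linspace(0.02, 1.0, 12)
adc = adc_recovery(tm, adc_eq=1.0, sigma=0.4, axr=3.0)  # true AXR = 3 s^-1
adc = adc + rng.normal(0.0, 0.005, tm.size)

# nonlinear least-squares fit recovers AXR from the recovery curve
(adc_eq_fit, sigma_fit, axr_fit), _ = curve_fit(
    adc_recovery, tm, adc, p0=[0.9, 0.3, 2.0])
print(f"fitted AXR = {axr_fit:.2f} s^-1")
```

With reasonable SNR the fit recovers the exchange rate used to generate the data, which is the quantity the authors use to distinguish breast cancer subtypes.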
McCollough, Cynthia H; Ulzheimer, Stefan; Halliburton, Sandra S; Shanneik, Kaiss; White, Richard D; Kalender, Willi A
2007-05-01
To develop a consensus standard for quantification of coronary artery calcium (CAC). A standard for CAC quantification was developed by a multi-institutional, multimanufacturer international consortium of cardiac radiologists, medical physicists, and industry representatives. This report specifically describes the standardization of scan acquisition and reconstruction parameters, the use of patient size-specific tube current values to achieve a prescribed image noise, and the use of the calcium mass score to eliminate scanner- and patient size-based variations. An anthropomorphic phantom containing calibration inserts and additional phantom rings were used to simulate small, medium-size, and large patients. The three phantoms were scanned by using the recommended protocols for various computed tomography (CT) systems to determine the calibration factors that relate measured CT numbers to calcium hydroxyapatite density and to determine the tube current values that yield comparable noise values. Calculation of the calcium mass score was standardized, and the variance in Agatston, volume, and mass scores was compared among CT systems. Use of the recommended scanning parameters resulted in similar noise for small, medium-size, and large phantoms with all multi-detector row CT scanners. Volume scores had greater interscanner variance than did Agatston and calcium mass scores. Use of a fixed calcium hydroxyapatite density threshold (100 mg/cm(3)), as compared with use of a fixed CT number threshold (130 HU), reduced interscanner variability in Agatston and calcium mass scores. With use of a density segmentation threshold, the calcium mass score had the smallest variance as a function of patient size. Standardized quantification of CAC yielded comparable image noise, spatial resolution, and mass scores among different patient sizes and different CT systems and facilitated reduced radiation dose for small and medium-size patients.
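The calcium mass score standardized above combines a phantom-derived calibration factor with lesion volume and CT number. The snippet below is a sketch of that calculation under stated assumptions: the lesion HU values, voxel volume, and calibration factor are hypothetical; only the structure of the computation (calibration factor × volume × CT number, summed over thresholded voxels) reflects the standard described.

```python
import numpy as np

def calcium_mass_mg(lesion_hu, voxel_vol_cm3, c):
    """Calibrated calcium mass score: sum over lesion voxels of
    c * voxel_volume * CT_number, where c (mg/cm^3 per HU) is the
    scanner- and patient-size-specific calibration factor derived
    from the hydroxyapatite phantom inserts."""
    lesion_hu = np.asarray(lesion_hu, dtype=float)
    return float(c * voxel_vol_cm3 * lesion_hu.sum())

# hypothetical lesion: 40 voxels averaging 200 HU, 0.001 cm^3 voxels,
# hypothetical calibration factor c = 0.8e-3 mg/cm^3 per HU unit scale
mass = calcium_mass_mg(np.full(40, 200.0), 0.001, 0.8)
print(f"calcium mass score = {mass:.1f} mg")  # -> 6.4 mg
```

Because c is measured per scanner and per phantom size, the resulting mass score is comparable across CT systems and patient sizes, unlike the Agatston score.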
Frébet, Elise; Abraham, Julie; Geneviève, Franck; Lepelley, Pascale; Daliphard, Sylvie; Bardet, Valérie; Amsellem, Sophie; Guy, Julien; Mullier, Francois; Durrieu, Francoise; Venon, Marie-Dominique; Leleu, Xavier; Jaccard, Arnaud; Faucher, Jean-Luc; Béné, Marie C; Feuillard, Jean
2011-05-01
Flow cytometry is the sole available technique for quantification of tumor plasma-cells in plasma-cell disorders, but so far, no consensus technique has been proposed. Here, we report on a standardized, simple, robust five-color flow cytometry protocol developed to characterize and quantify bone marrow tumor plasma-cells, validated in a multicenter manner. CD36 was used to exclude red blood cell debris and erythroblasts; CD38 and CD138 to detect plasma-cells; and immunoglobulin light chains, CD45, CD56, CD19, and CD117 + CD34 to simultaneously characterize abnormal plasma-cells and quantify bone marrow precursors. This approach was applied in nine centers to 229 cases, including 25 controls. Tumor plasma-cells were detected in 96.8% of cases, all exhibiting an immunoglobulin peak over 1 g/L. Calculation of a plasma-cells/precursors (PC/P) ratio allowed quantification of the plasma-cell burden independently from bone marrow hemodilution. The PC/P ratio yielded the best results in terms of sensitivity (81%) and specificity (84%) for the differential diagnosis between MGUS and myeloma, when compared with other criteria. Combining the PC/P ratio and the percentage of abnormal plasma-cells allowed the best differential diagnosis, but these criteria were discordant in 25% of cases. Indirect calculation of a CD19-negative PC/P ratio gave the best results in terms of sensitivity (87%). This standardized multiparameter flow cytometric approach allows for the detection and quantification of bone marrow tumor plasma-cell infiltration in nearly all cases of MGUS and myeloma, independently of debris and hemodilution. This approach may also prove useful for the detection of minimal residual disease. Copyright © 2010 International Clinical Cytometry Society.
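The PC/P ratio and the diagnostic performance figures above reduce to simple arithmetic on gated event counts. The sketch below illustrates both computations; the event counts and confusion-matrix numbers are hypothetical (chosen only so the derived sensitivity and specificity match the 81%/84% reported), and no decision threshold from the study is implied.

```python
def pc_p_ratio(n_plasma_cells, n_precursors):
    """Plasma-cell burden normalized to CD117+/CD34+ precursor events,
    which compensates for bone marrow hemodilution."""
    return n_plasma_cells / n_precursors

def sensitivity_specificity(tp, fn, tn, fp):
    """Standard definitions from a 2x2 diagnostic confusion matrix."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical gated event counts from one sample
ratio = pc_p_ratio(1200, 300)
# hypothetical myeloma-vs-MGUS classification counts
sens, spec = sensitivity_specificity(tp=81, fn=19, tn=84, fp=16)
print(ratio, sens, spec)  # -> 4.0 0.81 0.84
```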
Ermacora, Alessia; Hrnčiřík, Karel
2014-01-01
Substantial progress has been recently made in the development and optimisation of analytical methods for the quantification of 2-MCPD, 3-MCPD and glycidyl esters in oils and fats, and there are a few methods currently available that allow a reliable quantification of these contaminants in bulk oils and fats. On the other hand, no standard method for the analysis of foodstuffs has yet been established. The aim of this study was the development and validation of a new method for the simultaneous quantification of 2-MCPD, 3-MCPD and glycidyl esters in oil-based food products. The developed protocol includes a first step of liquid-liquid extraction and purification of the lipophilic substances of the sample, followed by the application of a previously developed procedure based on acid transesterification, for the indirect quantification of these contaminants in oils and fats. The method validation was carried out on food products (fat-based spreads, creams, margarine, mayonnaise) manufactured in-house, in order to control the manufacturing process and account for any food matrix-analyte interactions (the sample spiking was carried out on the single components used for the formulations rather than the final products). The method showed good accuracy (the recoveries ranged from 97% to 106% for bound 3-MCPD and 2-MCPD and from 88% to 115% for bound glycidol) and sensitivity (the LOD was 0.04 and 0.05 mg kg(-1) for bound MCPD and glycidol, respectively). Repeatability and reproducibility were satisfactory (RSD below 2% and 5%, respectively) for all analytes. The levels of salts and surface-active compounds in the formulation were found to have no impact on the accuracy and the other parameters of the method.
Quantification of phytochelatins in Chlamydomonas reinhardtii using ferrocene-based derivatization.
Bräutigam, Anja; Bomke, Susanne; Pfeifer, Thorben; Karst, Uwe; Krauss, Gerd-Joachim; Wesenberg, Dirk
2010-08-01
A method for the identification and quantification of canonical phytochelatins (PCs) and their isoforms from Chlamydomonas reinhardtii was developed. After disulfide reduction with tris(2-carboxyethyl)phosphine (TCEP), PCs were derivatized with ferrocenecarboxylic acid (2-maleimidoyl)ethylamide (FMEA) in order to avoid oxidation of the free thiol functions during analysis. Liquid chromatography (LC) coupled to electrospray mass spectrometry (ESI-MS) and inductively coupled plasma-mass spectrometry (ICP-MS) was used for rapid and quantitative analysis of the precolumn-derivatized PCs. PC(2-4), CysGSH, CysPC(2-4), CysPC(2)desGly, CysPC(2)Glu and CysPC(2)Ala were determined in the algal samples depending on the exposure of the cells to cadmium ions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Battaglia, G.; Yeh, S.Y.; O'Hearn, E.
1987-09-01
This study examines the effects of repeated systemic administration (20 mg/kg s.c., twice daily for 4 days) of 3,4-methylenedioxymethamphetamine (MDMA) and 3,4-methylenedioxyamphetamine (MDA) on levels of brain monoamines, their metabolites and on the density of monoamine uptake sites in various regions of rat brain. Marked reductions (30-60%) in the concentration of 5-hydroxyindoleacetic acid were observed in cerebral cortex, hippocampus, striatum, hypothalamus and midbrain at 2 weeks after a 4-day treatment regimen of MDMA or MDA; less consistent reductions in serotonin (5-HT) content were observed in these brain regions. In addition, both MDMA and MDA caused comparable and substantial reductions (50-75%) in the density of [³H]paroxetine-labeled 5-HT uptake sites in all brain regions examined. In contrast, neither MDMA nor MDA caused any widespread or long-term changes in the content of the catecholaminergic markers (i.e., norepinephrine, dopamine, 3,4-dihydroxyphenylacetic acid and homovanillic acid) or in the number of [³H]mazindol-labeled norepinephrine or dopamine uptake sites in the brain regions examined. These data demonstrate that MDMA and MDA cause long-lasting neurotoxic effects with respect to both the functional and structural integrity of serotonergic neurons in brain. Furthermore, our measurement of reductions in the density of 5-HT uptake sites provides a means for quantification of the neurodegenerative effects of MDMA and MDA on presynaptic 5-HT terminals.
SU-E-P-03: Implementing a Low Dose Lung Screening CT Program Meeting Regulatory Requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFrance, M; Marsh, S; O'Donnell, G
Purpose: Provide guidance to Radiology Departments with the intent of implementing a Low Dose CT Screening Program using different CT scanners with multiple techniques within the framework of the required state regulations. Method: Implementing a Low Dose CT Lung Protocol required working with the Radiology and Pulmonary Departments to set up a Low Dose Screening Protocol designed to reduce the radiation burden to the patients enrolled. Radiation dose measurements (CTDIvol) were made for various CT manufacturers (Siemens 16, Siemens 64, Philips 64, and Neusoft 128) for three different weight-based protocols. All scans were reviewed by the Radiologist. Prior to starting a low dose lung screening protocol, information had to be submitted to the state for approval. Performing a Healing Arts protocol requires extensive information, including not only the name and address of the applicant but a detailed description of the disease, the x-ray examination, and the population to be examined. The unit had to be tested by a qualified expert using the technique charts. The credentials of all the operators, the supervisors, and the Radiologists had to be submitted to the state. Results: All the appropriate documentation was sent to the state for review. The measured results for the Low Dose Protocol versus the default Adult Chest Protocol showed a dose reduction of 65% for the small patient (100-150 lb), 75% for the medium patient (151-250 lb), and 55% for the large patient (over 250 lb). Conclusion: Measured results indicated that the Low Dose Protocol indeed lowered the screening patient's radiation dose, and the institution was able to submit the protocol to the State's regulators.
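The percent dose reductions reported above follow from comparing CTDIvol between the default and low-dose protocols. The sketch below shows that calculation; the CTDIvol pair used is hypothetical (chosen so the result matches the reported 65% small-patient reduction), since the abstract gives only the percentages.

```python
def dose_reduction_pct(ctdi_default_mgy, ctdi_low_mgy):
    """Percent reduction in CTDIvol between two protocols."""
    return 100.0 * (1.0 - ctdi_low_mgy / ctdi_default_mgy)

# hypothetical small-patient pair: 10.0 mGy default chest vs 3.5 mGy low-dose
reduction = dose_reduction_pct(10.0, 3.5)
print(f"dose reduction = {reduction:.0f}%")  # -> 65%
```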
Sentinel Lymph Node Biopsy: Quantification of Lymphedema Risk Reduction
2006-10-01
USDA-ARS?s Scientific Manuscript database
The introduction of drift reduction technology (DRT) guidelines by the U. S. Environmental Protection Agency (EPA) has established testing protocols for nozzles, agrochemicals, application parameters, and combinations thereof for applying agrochemicals by certified individuals in the United States....
Functionalization of Organotrifluoroborates: Reductive Amination
Cooper, David J.
2010-01-01
Herein we report the conversion of aldehyde-containing potassium and tetrabutylammonium organotrifluoroborates to the corresponding amines through reductive amination protocols. Potassium formate facilitated by catalytic palladium acetate, sodium triacetoxyborohydride, and pyridine borane have all served as effective hydride donors, reducing the initially formed imines or iminium ions to provide the corresponding amines. PMID:18412389
Rasotto, Chiara; Bergamin, Marco; Sieverdes, John C; Gobbo, Stefano; Alberton, Cristine L; Neunhaeuserer, Daniel; Maso, Stefano; Zaccaria, Marco; Ermolao, Andrea
2015-02-01
The aim of this study was to evaluate a tailored physical activity protocol performed in a work environment with a group of female workers employed in manual precision tasks, to reduce upper limb pain. Sixty female subjects were randomly assigned to an intervention group (IG) or a control group (CG). The IG received a 6-month, twice-a-week, tailored exercise program, whereas the CG received no intervention. The IG showed a reduction in shoulder pain accompanied by increases in range-of-motion measures. In addition, reductions in upper limb pain and neck disability were detected, with concomitant increases in grip strength. This study indicated positive effects of a tailored workplace exercise protocol in female workers exposed to moderate risk for work-related musculoskeletal disorders, showing clinically meaningful reductions of pain symptoms and disability in the upper limb and neck regions.
Roy, Priyabrata; Bodhak, Chandan; Pramanik, Animesh
2017-02-01
A one-pot three-component protocol has been developed for the synthesis of amino ester-embedded benzimidazoles under metal-free neutral conditions. Sequentially, the methodology involves coupling of an amino ester with 1-fluoro-2-nitrobenzene, reduction of the coupled nitroarene by sodium dithionite, and cyclization of the corresponding diamine with an aldehyde.
Building America House Simulation Protocols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendron, Robert; Engebrecht, Cheryn
2010-09-01
The House Simulation Protocol document was developed to track and manage progress toward Building America's multi-year, average whole-building energy reduction research goals for new construction and existing homes, using a consistent analytical reference point. This report summarizes the guidelines for developing and reporting these analytical results in a consistent and meaningful manner for all home energy uses using standard operating conditions.
ERIC Educational Resources Information Center
Birch & Davis Associates, Inc., Silver Spring, MD.
A substantial knowledge base exists on reduction of tobacco use by youth. Effective prevention in this area can have major health and economic benefits. Information from research and prevention practice, organized by means of the Prevention Enhancement Protocols System (PEPS), is provided in the form of guidelines and recommendations for planning…
The introduction of a protocol for the use of biobrane for facial burns in children.
Rogers, A D; Adams, S; Rode, H
2011-01-01
Biobrane has become an indispensable dressing with three established indications in acute burns care at our institution: (1) as the definitive dressing of superficial partial thickness facial burns, (2) after tangential excision of deep burns when autograft or cadaver skin is unavailable, and (3) for graft reduction. This paper details our initial experience of Biobrane for the management of superficial partial thickness facial burns in children and the protocol that was compiled for its optimal use. A retrospective analysis of theatre records, case notes and photographs was performed to evaluate our experience with Biobrane over a one-year period. Endpoints included length of stay, analgesic requirements, time to application of Biobrane, healing times, and aesthetic results. Historical controls were used to compare the results with our previous standard of care. 87 patients with superficial partial thickness burns of the face had Biobrane applied during this period. By adhering to the protocol we were able to demonstrate significant reductions in hospital stay, healing time, analgesic requirements, and nursing care, with excellent cosmetic results. The protocol is widely accepted by all involved in the optimal management of these patients, including parents, anaesthetists, and nursing staff.
Head CT: Image quality improvement with ASIR-V using a reduced radiation dose protocol for children.
Kim, Hyun Gi; Lee, Ho-Joon; Lee, Seung-Koo; Kim, Hyun Ji; Kim, Myung-Joon
2017-09-01
To investigate the quality of images reconstructed with adaptive statistical iterative reconstruction V (ASIR-V), using pediatric head CT protocols. A phantom was scanned at decreasing 20% mA intervals using our standard pediatric head CT protocols, and each study was then reconstructed at 10% ASIR-V intervals. After the phantom study, we reduced mA by 10% in the protocol for <3-year-old patients and applied 30% ASIR-V, and by 30% in the protocol for 3- to 15-year-old patients and applied 40% ASIR-V. Increasing the percentage of ASIR-V resulted in lower noise and higher contrast-to-noise ratio (CNR) with preserved spatial resolution in the phantom study. Compared to the conventional protocol, the reduced-dose protocol with ASIR-V achieved a 12.8% to 34.0% dose reduction and showed images of lower noise (9.22 vs. 10.73, P = 0.043) and higher CNR at different levels (centrum semiovale, 2.14 vs. 1.52, P = 0.003; basal ganglia, 1.46 vs. 1.07, P = 0.001; and cerebellum, 2.18 vs. 1.33, P < 0.001). Qualitative analysis showed higher gray-white matter differentiation and sharpness and preserved overall diagnostic quality in the images with ASIR-V. Use of ASIR-V allowed a 12.8% to 34.0% dose reduction in each age group with the potential to improve image quality. • It is possible to reduce radiation dose and improve image quality with ASIR-V. • We improved noise and CNR and decreased radiation dose. • Sharpness improved with ASIR-V. • Total radiation dose was decreased by 12.8% to 34.0%.
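The CNR figures quoted above are typically computed as the absolute difference between two ROI means divided by the image noise (the SD in a uniform region). The sketch below shows that standard calculation; the HU samples and noise value are hypothetical, as the abstract reports only the final CNR numbers.

```python
import numpy as np

def cnr(roi_a_hu, roi_b_hu, noise_sd):
    """Contrast-to-noise ratio between two tissue ROIs:
    |mean(A) - mean(B)| / image noise (SD in a uniform region)."""
    return abs(np.mean(roi_a_hu) - np.mean(roi_b_hu)) / noise_sd

# hypothetical gray- and white-matter HU samples at the centrum semiovale
gray = [38.0, 39.5, 38.8]
white = [28.0, 29.0, 28.5]
value = cnr(gray, white, noise_sd=5.0)
print(f"CNR = {value:.2f}")
```

Note that because noise appears in the denominator, the lower noise achieved with ASIR-V directly raises CNR even when the tissue contrast itself is unchanged.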
Bova, G Steven; Eltoum, Isam A; Kiernan, John A; Siegal, Gene P; Frost, Andra R; Best, Carolyn J M; Gillespie, John W; Su, Gloria H; Emmert-Buck, Michael R
2005-02-01
Isolation of well-preserved pure cell populations is a prerequisite for sound studies of the molecular basis of any tissue-based biological phenomenon. This article reviews current methods for obtaining anatomically specific signals from molecules isolated from tissues, a basic requirement for productive linking of phenotype and genotype. The quality of samples isolated from tissue and used for molecular analysis is often glossed over or omitted from publications, making interpretation and replication of data difficult or impossible. Fortunately, recently developed techniques allow life scientists to better document and control the quality of samples used for a given assay, creating a foundation for improvement in this area. Tissue processing for molecular studies usually involves some or all of the following steps: tissue collection, gross dissection/identification, fixation, processing/embedding, storage/archiving, sectioning, staining, microdissection/annotation, and pure analyte labeling/identification and quantification. We provide a detailed comparison of some current tissue microdissection technologies, and provide detailed example protocols for tissue component handling upstream and downstream from microdissection. We also discuss some of the physical and chemical issues related to optimal tissue processing, and include methods specific to cytology specimens. We encourage each laboratory to use these as a starting point for optimization of their overall process of moving from collected tissue to high quality, appropriately anatomically tagged scientific results. A lack of optimized protocols is a source of inefficiency in current life science research, and improvement in this area will significantly increase life science quality and productivity. The article is divided into introduction, materials, protocols, and notes sections. Because many protocols are covered in each of these sections, information relating to a single protocol is not contiguous.
To get the greatest benefit from this article, readers are advised to read through the entire article first, identify protocols appropriate to their laboratory for each step in their workflow, and then reread entries in each section pertaining to each of these single protocols.
Rahman, Shafiq; Griffin, Michelle; Naik, Anish; Szarko, Matthew; Butler, Peter E M
2018-02-15
Decellularized scaffolds can induce chondrogenic differentiation of stem cells. This study compares different methods to optimise the decellularization of auricular cartilage. The process consisted of an initial 12-hour dry freeze-thaw, in which the cartilage specimens were frozen in an empty tube at -20 °C and allowed to thaw at room temperature, followed by submersion in phosphate buffer solution, freezing at -20 °C for a further 12-hour period, and thawing at room temperature as before. Protocol A subsequently involved subjecting specimens to both deoxyribonuclease and sodium deoxycholate. Protocols B and C were adaptations of this, using 0.25% trypsin (7 cycles) and a 0.5 M solution of ethylenediaminetetraacetic acid (3 hours per cycle), respectively, as additional steps. Trypsin accelerated the decellularization process, with a reduction in DNA content from 55.4 ng/μL (native) to 17.3 ng/μL (P-value < 0.05) after 14 days. Protocol B showed a faster reduction in DNA content than protocol A. In comparison to protocol C after 14 days, trypsin also showed greater decellularization, with a mean difference of 11.7 ng/μL (P-value < 0.05). Histological analysis with H&E and DAPI confirmed depletion of cells at 14 days with trypsin.
Representativeness of laboratory sampling procedures for the analysis of trace metals in soil.
Dubé, Jean-Sébastien; Boudreault, Jean-Philippe; Bost, Régis; Sona, Mirela; Duhaime, François; Éthier, Yannic
2015-08-01
This study was conducted to assess the representativeness of laboratory sampling protocols for purposes of trace metal analysis in soil. Five laboratory protocols were compared, including conventional grab sampling, to assess the influence of sectorial splitting, sieving, and grinding on measured trace metal concentrations and their variability. It was concluded that grinding was the most important factor in controlling the variability of trace metal concentrations. Grinding increased the reproducibility of sample mass reduction by rotary sectorial splitting by up to two orders of magnitude. Combined with rotary sectorial splitting, grinding increased the reproducibility of trace metal concentrations by almost three orders of magnitude compared to grab sampling. Moreover, results showed that if grinding is used as part of a mass reduction protocol by sectorial splitting, the effect of sieving on reproducibility became insignificant. Gy's sampling theory and practice was also used to analyze the aforementioned sampling protocols. While the theoretical relative variances calculated for each sampling protocol qualitatively agreed with the experimental variances, their quantitative agreement was very poor. It was assumed that the parameters used in the calculation of theoretical sampling variances may not correctly estimate the constitutional heterogeneity of soils or soil-like materials. Finally, the results have highlighted the pitfalls of grab sampling, namely, the fact that it does not exert control over incorrect sampling errors and that it is strongly affected by distribution heterogeneity.
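The theoretical sampling variances mentioned above come from Gy's fundamental sampling error, whose cubic dependence on particle size explains why grinding dominated the observed variability. The sketch below uses the standard textbook form of the formula with generic default factor values, not the parameters of this study.

```python
def fse_rel_variance(d_cm, m_sample_g, m_lot_g,
                     f=0.5, g=0.25, c=1000.0, liberation=1.0):
    """Gy's fundamental sampling error (relative variance):
        s^2 = f * g * c * l * d^3 * (1/M_sample - 1/M_lot)
    f: shape factor, g: granulometric factor, c: mineralogical
    factor (g/cm^3), l: liberation factor, d: top particle size (cm).
    All factor values here are generic defaults for illustration."""
    C = f * g * c * liberation  # sampling constant (g/cm^3)
    return C * d_cm**3 * (1.0 / m_sample_g - 1.0 / m_lot_g)

# grinding from d = 2 mm to 0.2 mm for a 10 g subsample of a 1 t lot
coarse = fse_rel_variance(0.2, 10.0, 1e6)
fine = fse_rel_variance(0.02, 10.0, 1e6)
print(coarse / fine)  # d^3 scaling: a 10x size reduction -> 1000x lower variance
```

The d³ term makes the qualitative agreement with the experiments unsurprising, even though, as the authors note, the factor values may poorly estimate the constitutional heterogeneity of soil-like materials.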
Whole farm quantification of GHG emissions within smallholder farms in developing countries
NASA Astrophysics Data System (ADS)
Seebauer, Matthias
2014-03-01
The IPCC has compiled the best available scientific methods into published guidelines for estimating greenhouse gas emissions and emission removals from the land-use sector. To evaluate whether existing GHG quantification tools can comprehensively quantify GHG emissions and removals under smallholder conditions, farm-scale quantification was tested with farm data from Western Kenya. After conducting a cluster analysis to identify different farm typologies, GHG quantification was exercised using the VCS SALM methodology complemented with IPCC livestock emission factors and the Cool Farm Tool. The emission profiles of four farm clusters representing the baseline conditions in the year 2009 are compared with 2011, when farmers had adopted sustainable land management practices (SALM). The results demonstrate the variation both in the magnitude of the estimated GHG emissions per ha between different smallholder farm typologies and in the emissions estimated by applying two different accounting tools. The farm-scale quantification further shows that the adoption of SALM has a significant impact on emission reductions and removals: the mitigation benefits range between 4 and 6.5 tCO2 ha-1 yr-1, with significantly different benefits depending on the typologies of the crop-livestock systems, their agricultural practices, and adoption rates of improved practices. However, the inherent uncertainty in the emission factors applied by accounting tools has substantial implications for reported agricultural emissions. With regard to uncertainty in activity data, the assessment confirms the high variability within different farm types as well as between the different parameters surveyed to comprehensively quantify GHG emissions within smallholder farms.
A study of a self diagnostic platform for the detection of A2 biomarker for Leishmania donovani
NASA Astrophysics Data System (ADS)
Roche, Philip J. R.; Cheung, Maurice C.; Najih, Mohamed; McCall, Laura-Isobel; Fakih, Ibrahim; Chodavarapu, Vamsy P.; Ward, Brian; Ndao, Momar; Kirk, Andrew G.
2012-03-01
Visceral leishmaniasis (Leishmania donovani) is a protozoan infection that attacks mononuclear phagocytes and causes liver and spleen damage that can be fatal. The investigation presented is a proof-of-concept development of a plasmonic diagnostic platform with simple microfluidic sample delivery and optical readout. An immunoassay method is applied to the quantification of A2 protein, a highly immunogenic biomarker for the pathogen. Quantification of A2 was performed in the ng/ml range; analysis by ELISA suggested that a limit of 0.1 ng/ml of A2 corresponds to approximately 1 pathogen per ml, and the sensing system shows the potential to deliver a similar level of quantification. Assay complexity is significantly reduced because no further enzyme-linked enhancement is required when a plasmonic methodology is applied to an immunoassay. The basic instrumentation required for a portable device is considered, along with a potential dual optical readout in which both plasmonic and photoluminescent responses are assessed; the design also addresses performance issues and the non-literate communication of test results.
Lorenz, Dominic; Erasmy, Nicole; Akil, Youssef; Saake, Bodo
2016-04-20
A new method for the chemical characterization of xylans is presented to overcome the difficulties in quantification of 4-O-methyl-α-D-glucuronic acid (meGlcA). To this end, the hydrolysis behavior of xylans from beech and birch wood was investigated to obtain optimum conditions for hydrolysis with sulfuric acid. Owing to varying linkage strengths and degradation, no general method for complete hydrolysis can be designed. Therefore, partial hydrolysis was applied, yielding monosaccharides and small meGlcA-containing oligosaccharides. For a new HPAEC-UV/VIS method, these samples were reductively aminated with 2-aminobenzoic acid. By quantifying monosaccharides and oligosaccharides, and comparing with borate-HPAEC and (13)C NMR spectroscopy, we show that concentrations of meGlcA are significantly underestimated by conventional methods: the detected concentrations were 85.4% (beech) and 76.3% (birch) higher with the new procedure. Furthermore, the quantified concentrations of xylose were 9.3% (beech) and 6.5% (birch) higher when the unhydrolyzed oligosaccharides were taken into account. Copyright © 2015 Elsevier Ltd. All rights reserved.
Delwing-de Lima, Daniela; Ulbricht, Ariene Sampaio Souza Farias; Werlang-Coelho, Carla; Delwing-Dal Magro, Débora; Joaquim, Victor Hugo Antonio; Salamaia, Eloise Mariani; de Quevedo, Silvana Rodrigues; Desordi, Larissa
2017-12-08
We evaluated the effects of moderate-intensity continuous training (MICT) and high-intensity interval training (HIIT) protocols on the alterations in oxidative stress parameters caused by a high-fat diet (HFD) in the blood and liver of rats. The HFD enhanced thiobarbituric acid reactive substances (TBA-RS) and protein carbonyl content, while reducing total sulfhydryl content and catalase (CAT) and glutathione peroxidase (GSH-Px) activities in the blood. Both training protocols prevented the increase in TBA-RS and protein carbonyl content and the reduction in CAT; the HIIT protocol additionally enhanced superoxide dismutase (SOD) activity. In the liver, the HFD did not alter TBA-RS, total sulfhydryl content or SOD, but it increased protein carbonyl content and CAT activity and decreased GSH-Px. Both exercise protocols prevented the increase in protein carbonyl content, and the MICT protocol prevented the alteration in CAT. In conclusion, an HFD elicits oxidative stress in the blood and liver, and both training protocols prevented most of the alterations in the oxidative stress parameters.
McDaniel, David; Weiss, Robert; Weiss, Margaret; Mazur, Chris; Griffen, Charmaine
2014-09-01
Multiple devices are currently on the market that employ radiofrequency to non-invasively treat skin laxity and wrinkles. The study device was a unique monopolar radiofrequency device, FDA cleared for the treatment of wrinkles and rhytids. Its delivery system constantly monitors real-time local skin impedance changes, which allows radiofrequency energy to be dosed more uniformly over an entire treatment area. The objective was to validate the effectiveness of a modified treatment protocol for this device, which has been engineered with greater power and self-monitoring circuitry. Twenty-four female subjects received bilateral monopolar radiofrequency treatments to the mid and lower face, from the submalar region to the submentum. Subjects completed 1- and 3-month follow-ups with digital imaging; skin biopsies (4 subjects) and ultrasound measurements (12 subjects) were also completed. Assessments demonstrated a 35% reduction in skin laxity, a 42% reduction in fine lines/wrinkles, and a 33% reduction in the appearance of global photodamage. Expert photograding demonstrated at least mild improvement in skin laxity in 92% of subjects at three months post treatment. 50-MHz ultrasound measurements in 12 subjects showed a 19% increase in skin density. Histology showed a marked increase in dermal collagen and elastin fibers in two subjects who demonstrated a clinically noticeable reduction in skin laxity, and minimal changes in two subjects who demonstrated minimal clinical improvement. There were no significant adverse events reported. This modified radiofrequency device and treatment protocol was well tolerated and produced improvements in the appearance of skin laxity and overall anti-aging effects in the majority of subjects. Objective measurements, including ultrasound and histology, help explain the clinical outcome.
da Silva, Wesley Pereira; de Oliveira, Luiz Henrique; Santos, André Luiz Dos; Ferreira, Valdir Souza; Trindade, Magno Aparecido Gonçalves
2018-06-01
A procedure based on liquid-liquid extraction (LLE) and phase separation using magnetically stirred salt-induced high-temperature liquid-liquid extraction (PS-MSSI-HT-LLE) was developed to extract and pre-concentrate ciprofloxacin (CIPRO) and enrofloxacin (ENRO) from animal food samples before electroanalysis. First, simple LLE was used to extract the fluoroquinolones (FQs) from the animal food samples, with dilution performed to reduce interference effects to below a tolerable threshold. Then, adapted PS-MSSI-HT-LLE protocols allowed re-extraction and further pre-concentration of the target analytes in the diluted acid samples for simultaneous electrochemical quantification at low concentration levels. To improve peak separation in simultaneous detection, a baseline-corrected second-order derivative approach was applied. These approaches allowed quantification of the target FQs in animal food samples spiked at levels of 0.80 to 2.00 µmol L⁻¹ in chicken meat, with recovery values always higher than 80.5%, as well as in milk samples spiked at 4.00 µmol L⁻¹, with recovery values close to 70.0%. Copyright © 2018 Elsevier Ltd. All rights reserved.
Dotette: Programmable, high-precision, plug-and-play droplet pipetting.
Fan, Jinzhen; Men, Yongfan; Hao Tseng, Kuo; Ding, Yi; Ding, Yunfeng; Villarreal, Fernando; Tan, Cheemeng; Li, Baoqing; Pan, Tingrui
2018-05-01
Manual micropipettes are the most heavily used liquid handling devices in biological and chemical laboratories; however, they suffer from low precision for volumes under 1 μl and inevitable human error. For a manual device, the human errors introduced pose potential risks of failed experiments, inaccurate results, and financial costs. Meanwhile, low precision under 1 μl can cause severe quantification errors and high heterogeneity of outcomes, becoming a bottleneck of reaction miniaturization for quantitative research in biochemical labs. Here, we report Dotette, a programmable, plug-and-play microfluidic pipetting device based on nanoliter liquid printing. With automated control, protocols designed on computers can be downloaded directly into Dotette, enabling programmable operation. Utilizing continuous nanoliter droplet dispensing, the precision of volume control has been improved from the traditional 20%-50% to less than 5% in the range of 100 nl to 1000 nl. Such a highly automated, plug-and-play add-on to existing pipetting devices not only improves precise quantification in low-volume liquid handling and reduces chemical consumption but also facilitates and automates a variety of biochemical and biological operations.
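The precision figures quoted (20%-50% versus under 5%) are coefficients of variation (CV = standard deviation / mean) over repeated dispenses. A minimal sketch of that calculation, using made-up volume readings rather than the paper's data, is:

```python
from statistics import mean, stdev

def cv_percent(volumes_nl):
    """Coefficient of variation of repeated dispense volumes, in percent."""
    return 100.0 * stdev(volumes_nl) / mean(volumes_nl)

# Hypothetical repeated 500 nl dispenses: manual pipetting vs. droplet printing.
manual = [380, 610, 455, 540, 700]
dotette = [495, 505, 498, 502, 500]
print(round(cv_percent(manual), 1), round(cv_percent(dotette), 1))
```

Note that `stdev` is the sample (n-1) standard deviation, the usual choice for a small set of replicate dispenses.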
Nebot, Carolina; Regal, Patricia; Miranda, Jose Manuel; Fente, Cristina; Cepeda, Alberto
2013-12-01
Sulfonamides are antimicrobial agents widely employed in animal production, and their residues in food can pose an important risk to human health. In the dairy industry, large quantities of milk are monitored daily for the presence of sulfonamides. A simple, low-cost extraction protocol followed by a liquid chromatography-tandem mass spectrometry method was developed for the simultaneous detection of nine sulfonamides in whole milk. The method was validated at the maximum residue limits established by European legislation. The limits of quantification obtained for most sulfonamides were between 12.5 and 25 μg kg(-1), detection capabilities ranged from 116 to 145 μg kg(-1), and recoveries at 100 μg kg(-1) were greater than 89±12.5%. The method was employed to analyse 100 raw whole bovine milk samples collected from dairy farms in the northwest region of Spain. All of the samples were found to be compliant, although two contained detectable residues: one of sulfadiazine and the other of sulfamethoxypyridazine. Copyright © 2013 Elsevier Ltd. All rights reserved.
Smit, C A J; Haverkamp, G L G; de Groot, S; Stolwijk-Swuste, J M; Janssen, T W J
2012-08-01
Ten participants underwent two electrical stimulation (ES) protocols applied using a custom-made electrode garment with built-in electrodes, with interface pressure measured using a force-sensitive array. In one protocol, both the gluteal and hamstring (g+h) muscles were activated; in the other, the gluteal (g) muscles only. The aim was to study and compare the effects of electrically induced activation of g+h muscles versus g muscles only on sitting pressure distribution in individuals with a spinal cord injury (SCI), with ischial tuberosity interface pressure (IT pressure) and pressure gradient as outcome measures. In all participants, both the g and the g+h ES-induced activation protocols caused a significant decrease in IT pressure. IT pressure after g+h muscle activation was reduced significantly, by 34.5% compared with rest pressure, whereas activation of g muscles only gave a significant reduction of 10.2%. The pressure gradient was reduced significantly only after stimulation of g+h muscles (49.3%). g+h muscle activation showed a decrease in pressure relief (ΔIT) over time compared with g muscles only. Both surface ES protocols (g and g+h activation) provided pressure relief at the ITs, and activation of g+h muscles resulted in better IT pressure reduction in sitting individuals with SCI than activation of g muscles only. ES might therefore be a promising method for preventing pressure ulcers (PUs) on the ITs in people with SCI. Further research needs to establish what degree of pressure reduction is sufficient to prevent PUs.
Cetuximab-induced skin exanthema: prophylactic and reactive skin therapy are equally effective.
Wehler, Thomas C; Graf, Claudine; Möhler, Markus; Herzog, Jutta; Berger, Martin R; Gockel, Ines; Lang, Hauke; Theobald, Matthias; Galle, Peter R; Schimanski, Carl C
2013-10-01
Treatment with cetuximab is accompanied by the development of an acneiform follicular skin exanthema in more than 80% of patients. Severe exanthema (grade III/IV) develops in about 9-19% of patients, necessitating cetuximab dose reduction or cessation. The present study was a retrospective analysis of 50 gastrointestinal cancer patients treated with cetuximab in combination with either FOLFIRI or FOLFOX. One cohort of 15 patients received an in-house reactive skin protocol upon development of an exanthema. A second cohort of 15 patients received skin prophylaxis starting with the first dose of cetuximab, before clinical signs of toxicity. A third, historic group of 20 patients had received no skin prophylaxis or reactive treatment. Nineteen of the 20 patients in the historic group developed a skin exanthema; grade III/IV exanthema was observed six times, and 40% discontinued cetuximab therapy. The average time to exanthema onset was 14.7 days. When the reactive skin protocol was applied after the first occurrence of an exanthema, the exanthema was downgraded as follows: no patient developed a grade IV exanthema, and two patients developed a grade II/III exanthema. In the majority of cases, the reactive skin protocol controlled the exanthema (grade 0-I). No cetuximab dose reductions were necessary. Applying the prophylactic skin protocol from the start of cetuximab application was not superior to the reactive skin protocol. Cetuximab-induced skin exanthema can thus be managed with a reactive protocol as effectively as with prophylactic skin treatment. A prospective study with higher patient numbers is planned.
Soto-Muñoz, Lourdes; Teixidó, Neus; Usall, Josep; Viñas, Inmaculada; Crespo-Sempere, Ana; Torres, Rosario
2014-06-16
Dilution plating is the quantification method commonly used to estimate the population level of postharvest biocontrol agents, but it does not distinguish introduced from indigenous strains. Recently, molecular techniques based on DNA amplification, such as quantitative real-time PCR (qPCR), have been successfully applied for their highly strain-specific detection. However, the ability of qPCR to distinguish viable from nonviable cells is limited. A promising strategy to avoid this issue relies on nucleic acid intercalating dyes, such as propidium monoazide (PMA), as a sample pretreatment prior to qPCR. The objective of this study was to optimize a protocol based on PMA pretreatment of samples combined with qPCR to distinguish and quantify viable cells of the biocontrol agent Pantoea agglomerans CPA-2 applied as a postharvest treatment on orange. The efficiency of the PMA-qPCR method under the established conditions (30 μM PMA for 20 min of incubation followed by 30 min of LED light exposure) was evaluated on an orange matrix. Results showed no difference in viable-cell counts between PMA-qPCR and dilution plating. Samples of orange matrix inoculated with a mixture of viable and dead cells gave 5.59 log10 CFU/ml by dilution plating, 8.25 log10 cells/ml by qPCR, and 5.93 log10 cells/ml by PMA-qPCR. Furthermore, samples inoculated with heat-killed cells were not detected by dilution plating or PMA-qPCR, whereas qPCR gave 8.16 log10 cells/ml. The difference in quantification cycles (Cq) between qPCR and PMA-qPCR was approximately 16 cycles, corresponding to a 65,536-fold reduction in the dead-cell signal detected. In conclusion, the PMA-qPCR method is a suitable tool for quantifying viable CPA-2 cells and could be useful for estimating the ability of this antagonist to colonize the orange surface. Copyright © 2014 Elsevier B.V. All rights reserved.
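The 16-cycle Cq shift maps to a fold change through the standard assumption of one doubling of template per qPCR cycle, fold = (1 + E)^ΔCq with efficiency E = 1, which is where the 65,536-fold figure comes from:

```python
import math

def fold_change(delta_cq, efficiency=1.0):
    """Fold difference in template implied by a Cq shift.
    efficiency = 1.0 means perfect doubling per cycle: fold = (1+E)^dCq."""
    return (1.0 + efficiency) ** delta_cq

# ~16-cycle difference between qPCR and PMA-qPCR on dead cells:
print(fold_change(16))  # -> 65536.0

# The same shift expressed in log10 units, for comparison with the
# log10 cells/ml figures reported in the abstract:
print(round(16 * math.log10(2), 2))  # -> 4.82
```

A 16-cycle shift is thus about 4.8 log10 units, consistent in magnitude with the gap between qPCR detecting dead cells at 8.16 log10 cells/ml and PMA-qPCR suppressing them below detection.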
Reductive amination with zinc powder in aqueous media
Imperio, Daniela; Penoni, Andrea; Palmisano, Giovanni
2011-01-01
Zinc powder in aqueous alkaline media was employed to perform reductive amination of aldehydes with primary amines. The corresponding secondary amines were obtained in good yields along with minor amounts of hydrodimerization byproducts. The protocol is a green alternative to the use of complex hydrides in chlorinated or highly flammable solvents. PMID:21915212
Traumatic Incident Reduction I: Traumatized Women Inmates: Particulars of Practice and Research.
ERIC Educational Resources Information Center
Valentine, Pamela Vest
2000-01-01
Traumatic Incident Reduction (TIR) is an intervention that proved effective in reducing symptoms of depression, anxiety, and posttraumatic stress disorder and in increasing self-efficacy in women inmates (N=123). Discusses both the treatment protocol and the research design of this study. Offers suggestions for practice and research related to…
Gaston, Kirk W; Limbach, Patrick A
2014-01-01
The analysis of ribonucleic acids (RNA) by mass spectrometry has been a valuable analytical approach for more than 25 years. In fact, mass spectrometry has become a method of choice for the analysis of modified nucleosides from RNA isolated out of biological samples. This review summarizes recent progress that has been made in both nucleoside and oligonucleotide mass spectral analysis. Applications of mass spectrometry in the identification, characterization and quantification of modified nucleosides are discussed. At the oligonucleotide level, advances in modern mass spectrometry approaches combined with the standard RNA modification mapping protocol enable the characterization of RNAs of varying lengths ranging from low molecular weight short interfering RNAs (siRNAs) to the extremely large 23 S rRNAs. New variations and improvements to this protocol are reviewed, including top-down strategies, as these developments now enable qualitative and quantitative measurements of RNA modification patterns in a variety of biological systems. PMID:25616408
Saha, Tanumoy; Rathmann, Isabel; Galic, Milos
2017-07-11
Filopodia are dynamic, finger-like cellular protrusions associated with migration and cell-cell communication. In order to better understand the complex signaling mechanisms underlying filopodial initiation, elongation and subsequent stabilization or retraction, it is crucial to determine the spatio-temporal protein activity in these dynamic structures. To analyze protein function in filopodia, we recently developed a semi-automated tracking algorithm that adapts to filopodial shape-changes, thus allowing parallel analysis of protrusion dynamics and relative protein concentration along the whole filopodial length. Here, we present a detailed step-by-step protocol for optimized cell handling, image acquisition and software analysis. We further provide instructions for the use of optional features during image analysis and data representation, as well as troubleshooting guidelines for all critical steps along the way. Finally, we also include a comparison of the described image analysis software with other programs available for filopodia quantification. Together, the presented protocol provides a framework for accurate analysis of protein dynamics in filopodial protrusions using image analysis software.
Modern bioanalysis of proteins by electrophoretic techniques.
Krizkova, Sona; Ryvolova, Marketa; Masarik, Michal; Zitka, Ondrej; Adam, Vojtech; Hubalek, Jaromir; Eckschlager, Tomas; Kizek, Rene
2014-01-01
In 1957, a cysteine-rich protein able to bind cadmium was isolated from horse kidney and, in line with its structural properties, named metallothionein (MT). This protein and metallothionein-like proteins have since been found in the tissues of other animal species, yeasts, fungi and plants. MT is in the focus of interest as a potential cancer marker, and its properties, functions, and behavior under various conditions are intensively studied. Our protocol describes the separation of the two major mammalian MT isoforms (MT-1 and MT-2) using capillary electrophoresis (CE) coupled with a UV detector. It enables separation of the MT isoforms, study of their basic behavior, and their quantification with a detection limit of units of ng per μL. Sodium borate buffer (20 mM, pH 9.5) was optimized as the background electrolyte, and the separation was carried out in a fused silica capillary with an internal diameter of 75 μm at an electric field intensity of 350 V/cm. The optimal detection wavelength was 254 nm.
In Situ Quantification of [Re(CO)3]+ by Fluorescence Spectroscopy in Simulated Hanford Tank Waste.
Branch, Shirmir D; French, Amanda D; Lines, Amanda M; Rapko, Brian M; Heineman, William R; Bryan, Samuel A
2018-02-06
A pretreatment protocol is presented that allows for the quantitative conversion and subsequent in situ spectroscopic analysis of [Re(CO)3]+ species in simulated Hanford tank waste. In this test case, the nonradioactive metal rhenium is substituted for technetium (Tc-99), a weak beta emitter, to demonstrate proof of concept for a method to measure a nonpertechnetate form of technetium in Hanford tank waste. The protocol encompasses adding a simulated waste sample containing the nonemissive [Re(CO)3]+ species to a developer solution that enables the rapid, quantitative conversion of the nonemissive species to a luminescent species, which can then be detected spectroscopically. The [Re(CO)3]+ species concentration in an alkaline, simulated Hanford tank waste supernatant can be quantified by the standard addition method. In a test case, the [Re(CO)3]+ species was measured at a concentration of 38.9 μM, a difference of 2.01% from the actual concentration of 39.7 μM.
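Standard addition, as used here, spikes the sample with known increments of analyte, fits a line to signal versus added concentration, and reads the unknown concentration from the x-intercept (intercept/slope). A minimal sketch with hypothetical luminescence readings, not the paper's data:

```python
def standard_addition(added, signal):
    """Least-squares line through (added concentration, signal) points;
    the unknown concentration is the magnitude of the x-intercept,
    i.e. intercept / slope."""
    n = len(added)
    xbar = sum(added) / n
    ybar = sum(signal) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(added, signal))
             / sum((x - xbar) ** 2 for x in added))
    intercept = ybar - slope * xbar
    return intercept / slope

# Hypothetical spikes (uM) and a linear detector response for a ~38.9 uM sample:
added = [0.0, 10.0, 20.0, 30.0]
signal = [2.0 * (38.9 + a) for a in added]
print(round(standard_addition(added, signal), 1))  # -> 38.9
```

Extrapolating from within the sample's own matrix is what makes the method robust to the matrix effects of a waste simulant.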
Detection of proteins using a colorimetric bio-barcode assay.
Nam, Jwa-Min; Jang, Kyung-Jin; Groves, Jay T
2007-01-01
The colorimetric bio-barcode assay is a red-to-blue color-change-based protein detection method with ultrahigh sensitivity. It combines the bio-barcode amplification method, which allows detection of minuscule amounts of target with attomolar sensitivity, with gold nanoparticle-based colorimetric DNA detection, which allows simple and straightforward detection of biomolecules of interest (here, interleukin-2, an important biomarker (cytokine) for many immunodeficiency-related diseases and cancers). The protocol is composed of the following steps: (i) conjugation of target-capture molecules and barcode DNA strands onto silica microparticles, (ii) target capture with probes, (iii) separation and release of barcode DNA strands from the separated probes, (iv) detection of the released barcode DNA using DNA-modified gold nanoparticle probes and (v) red-to-blue color-change analysis with graphics software. The actual target detection and quantification steps with premade probes take approximately 3 h; the whole protocol, including probe preparation, takes approximately 3 days.
Cultivation, detection, and ecophysiology of anaerobic ammonium-oxidizing bacteria.
Kartal, Boran; Geerts, Wim; Jetten, Mike S M
2011-01-01
Anaerobic ammonium-oxidizing (anammox) bacteria oxidize ammonium with nitrite under anoxic conditions. The anammox process is currently used to remove ammonium from wastewater and contributes significantly to the loss of fixed nitrogen from the oceans. In this chapter, we focus on the ecophysiology of anammox bacteria and describe new methodologies to grow these microorganisms. Now, it is possible to enrich anammox bacteria up to 95% with a membrane bioreactor that removes forces of selection for fast settling aggregates and facilitates the growth of planktonic cells. The biomass from this system has a high anaerobic ammonium oxidation rate (50 fmol NH(4)(+) · cell(-1) day(-1)) and is suitable for many ecophysiological and molecular experiments. A high throughput Percoll density gradient centrifugation protocol may be applied on this biomass for further enrichment (>99.5%) of anammox bacteria. Furthermore, we provide an up-to-date list of commonly used primers and introduce protocols for quantification and detection of functional genes of anammox bacteria in their natural environment. Copyright © 2011 Elsevier Inc. All rights reserved.
Microscopic quantification of bacterial invasion by a novel antibody-independent staining method.
Agerer, Franziska; Waeckerle, Stephanie; Hauck, Christof R
2004-10-01
Microscopic discrimination between extracellular and invasive, intracellular bacteria is a valuable technique in microbiology and immunology. We describe a novel fluorescence staining protocol, called FITC-biotin-avidin (FBA) staining, which allows differentiation between extracellular and intracellular bacteria and is independent of specific antibodies directed against the microorganisms. FBA staining of eukaryotic cells infected with Gram-negative bacteria of the genus Neisseria or with the Gram-positive pathogen Staphylococcus aureus is employed to validate the novel technique. Quantitative evaluation of intracellular pathogens by the FBA staining protocol yields results identical to those of parallel samples stained with conventional, antibody-dependent methods. FBA staining eliminates the need for cell permeabilization, resulting in robust and rapid detection of invasive microbes. Taken together, FBA staining provides a reliable and convenient alternative for the differential detection of intracellular and extracellular bacteria and should be a valuable technical tool for the quantitative analysis of the invasive properties of pathogenic bacteria and other microorganisms.
Mutch, Sarah A.; Gadd, Jennifer C.; Fujimoto, Bryant S.; Kensel-Hammes, Patricia; Schiro, Perry G.; Bajjalieh, Sandra M.; Chiu, Daniel T.
2013-01-01
This protocol describes a method to determine both the average number and variance of proteins in the few to tens of copies in isolated cellular compartments, such as organelles and protein complexes. Other currently available protein quantification techniques either provide an average number but lack information on the variance or are not suitable for reliably counting proteins present in the few to tens of copies. This protocol entails labeling the cellular compartment with fluorescent primary-secondary antibody complexes, TIRF (total internal reflection fluorescence) microscopy imaging of the cellular compartment, digital image analysis, and deconvolution of the fluorescence intensity data. A minimum of 2.5 days is required to complete the labeling, imaging, and analysis of a set of samples. As an illustrative example, we describe in detail the procedure used to determine the copy number of proteins in synaptic vesicles. The same procedure can be applied to other organelles or signaling complexes. PMID:22094731
Variability of microchip capillary electrophoresis with conductivity detection.
Tantra, Ratna; Robinson, Kenneth; Sikora, Aneta
2014-02-01
Microfluidic CE with conductivity detection platforms could have an impact on the future development of smaller, faster and portable devices. However, for the purpose of reliable identification and quantification, there is a need to understand the degree of irreproducibility associated with the analytical technique. In this study, a protocol was developed to remove the baseline drift problems sometimes observed in such devices. The protocol, which consisted of pre-conditioning steps prior to analysis, was used to further assess measurement variability across 24 individual microchips fabricated from six separate batches of glass substrate. Results show acceptable percentage RSDs for retention time measurements but large variability in the corresponding peak areas (some microchips showing variability of ∼50%). The sources of variability were not related to substrate batch but possibly to factors such as applied-voltage fluctuations or variations in microchannel quality, for example surface roughness, which in turn affects microchannel dimensions. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Tibial plafond fractures: limited incision reduction with percutaneous fixation.
Salton, Heather L; Rush, Shannon; Schuberth, John
2007-01-01
This study was a retrospective review of 18 patients with 19 pilon fractures treated with limited incision reduction and percutaneous plate fixation of the tibia. Patients were treated with either a 1- or 2-stage protocol, the latter consisting of placement of an external fixator followed by definitive reduction. The analysis focused on identifying complications of the soft tissue envelope or bone-healing problems within the first 6 months after surgery. A major complication was defined as an unplanned operation within the first 6 months; minor complications were any superficial wound defects that did not require operative intervention to resolve, or any malunion or delayed union. With this protocol, no major complications were encountered. Minor complications were identified in 4 patients (4 fractures), of which 2 were minor wound problems; one patient developed a malunion, and the other had a delayed union. Four patients requested removal of prominent hardware. These results indicate that limited incision reduction and percutaneous plate fixation constitute a safe method of stabilization. The authors also provide guidance and strategies for the consistent execution of this technique.
NASA Astrophysics Data System (ADS)
Abercromby, Andrew F. J.; Conkin, Johnny; Gernhardt, Michael L.
2015-04-01
NASA's plans for future human exploration missions utilize a new atmosphere of 56.5 kPa (8.2 psia), 34% O2, 66% N2 to enable rapid extravehicular activity (EVA) capability with minimal gas losses; however, existing EVA prebreathe protocols to mitigate risk of decompression sickness (DCS) are not applicable to the new exploration atmosphere. We provide preliminary analysis of a 15-min prebreathe protocol and examine the potential benefits of intermittent recompression (IR) and an abbreviated N2 purge on crew time and gas consumables usage. A probabilistic model of decompression stress based on an established biophysical model of DCS risk was developed, providing significant (p<0.0001) prediction and goodness-of-fit with 84 cases of DCS in 668 human altitude exposures including a variety of pressure profiles. DCS risk for a 15-min prebreathe protocol was then estimated under different exploration EVA scenarios. Estimated DCS risk for all EVA scenarios modeled using the 15-min prebreathe protocol ranged between 6.1% and 12.1%. Supersaturation in neurological tissues (5- and 10-min half-time compartments) is prevented and tissue tensions in faster half-time compartments (≤40 min), where the majority of whole-body N2 is located, are reduced to about the levels (30.0 vs. 27.6 kPa) achieved during a standard Shuttle prebreathe protocol. IR reduced estimated DCS risk from 9.7% to 7.9% (1.8% reduction) and from 8.4% to 6.1% (2.3% reduction) for the scenarios modeled; the penalty of N2 reuptake during IR may be outweighed by the benefit of decreased bubble size. Savings of 75% of purge gas and time (0.22 kg gas and 6 min of crew time per person per EVA) are achievable by abbreviating the EVA suit purge to 20% N2 vs. 5% N2 at the expense of an increase in estimated DCS risk from 9.7% to 12.1% (2.4% increase). A 15-min prebreathe protocol appears feasible using the new exploration atmosphere. 
IR between EVAs may enable reductions in suit purge and prebreathe requirements, decompression stress, and/or suit operating pressures. Ground trial validation is required before operational implementation.
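The compartmental tissue-tension reasoning in the abstract above can be illustrated with a minimal Haldanean washout sketch. This is not the authors' probabilistic DCS model; the 79 kPa cabin N2 tension and the assumption of zero alveolar N2 during a 100% O2 prebreathe are illustrative simplifications.

```python
import math

def tissue_n2_tension(p0_kpa, pa_kpa, t_min, half_time_min):
    """Single-compartment Haldanean N2 washout/uptake:
    P(t) = Pa + (P0 - Pa) * exp(-k*t), with k = ln(2) / half-time."""
    k = math.log(2) / half_time_min
    return pa_kpa + (p0_kpa - pa_kpa) * math.exp(-k * t_min)

# Illustrative numbers: sea-level cabin N2 partial pressure ~79 kPa,
# alveolar N2 ~0 kPa while breathing 100% O2 during prebreathe.
for half_time in (5, 10, 40):
    p = tissue_n2_tension(79.0, 0.0, 15.0, half_time)
    print(f"{half_time:>2}-min compartment after 15 min prebreathe: {p:.1f} kPa")
```

The sketch shows why a short prebreathe mainly protects the fast (neurological) compartments: after 15 min the 5-min compartment has shed seven-eighths of its N2 load, while the 40-min compartment has barely begun to desaturate.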
Cetuximab-induced skin exanthema: Improvement by a reactive skin therapy.
Schimanski, Carl C; Moehler, Markus; Zimmermann, Tim; Wörns, Markus A; Steinbach, Alma; Baum, Michael; Galle, Peter R
2010-01-01
More than 80% of patients treated with cetuximab develop an acneiform follicular skin exanthema. Grade 3 exanthema develops in 9-19% of these cases, bearing the risk of cetuximab dose-reduction or cessation. We retrospectively analysed a cohort of 20 patients treated with cetuximab and an in-house reactive skin protocol upon development of an exanthema. The reactive skin protocol was built up as follows: grade 1 exanthema: topical cleansing syndet (Dermowas®) + topical metronidazole cream (Rosiced®); grade 2 exanthema: grade 1 treatment + oral minocycline 50 mg twice per day; grade 3 exanthema: grade 2 treatment + topical corticoid (Dermatop®) + topical nadifloxacin (Nadixa®). As soon as a grade 3 had improved to a grade less than or equal to 2, the application of the topical corticoid was ceased. During the initial 12 weeks of therapy with cetuximab, all patients developed a skin exanthema (20/20; 100%). Of these, 2 patients (10%) developed a grade 3 exanthema, 10 patients (50%) experienced a grade 2 and 8 patients (40%) a grade 1 exanthema. Time to onset ranged from 1 to 4 weeks, with the average time to onset being 2.8 weeks. Applying the reactive skin protocol after the first occurrence of an exanthema, the grade of exanthema was downgraded as follows: no patients (0%) had a persisting grade 3 exanthema, while only 2 patients (10%) experienced a persisting grade 2 exanthema and 8 patients (40%) a persisting grade 1 exanthema. In the majority of cases (10 patients; 50%), the reactive skin protocol completely controlled the exanthema (grade 0). The average time to exanthema reduction by one grade was 9.5 days. No dose reductions of cetuximab were necessary. Cetuximab-induced skin exanthema is effectively managed by applying our reactive protocol. 
The simple protocol is based on a topical cleansing syndet and topical metronidazole and is to be intensified, in cases of high-grade exanthema, by the addition of oral minocycline, a topical corticoid and topical nadifloxacin. More comprehensive results are expected from a prospective study with higher patient numbers that is currently being planned.
Study of boron detection limit using the in-air PIGE set-up at LAMFI-USP
NASA Astrophysics Data System (ADS)
Moro, M. V.; Silva, T. F.; Trindade, G. F.; Added, N.; Tabacniks, M. H.
2014-11-01
The quantification of small amounts of boron in materials is of extreme importance in different areas of materials science. Boron is an important contaminant and also a silicon dopant in the semiconductor industry. Boron is also extensively used in nuclear power plants, either for neutron shielding or for safety control, and it is an essential nutrient for life, whether plant or animal. The production of silicon solar cells by refining metallurgical-grade silicon (MG-Si) requires the control and reduction of several silicon contaminants to very low concentration levels. Boron is one of the contaminants of solar-grade silicon (SG-Si) that must be controlled and quantified at sub-ppm levels. In the metallurgical purification, boron quantification is usually made by Inductively Coupled Plasma Mass Spectrometry (ICP-MS), but the results need to be verified by an independent analytical method. In this work we present the results of the analysis of silicon samples by Particle Induced Gamma-Ray Emission (PIGE) aiming at the quantification of low concentrations of boron. PIGE analysis was carried out using the in-air external beam line of the Laboratory for Materials Analysis with Ion Beams (LAMFI-USP), exploiting the 10B(p,αγ)7Be nuclear reaction and measuring the 429 keV γ-ray. The in-air PIGE measurements at LAMFI have a quantification limit of the order of 10¹⁶ at/cm².
Koncsos, Gábor; Varga, Zoltán V; Baranyai, Tamás; Ferdinandy, Péter; Schulz, Rainer; Giricz, Zoltán; Boengler, Kerstin
In the heart, subsarcolemmal (SSM), interfibrillar (IFM) and perinuclear mitochondria represent three subtypes of mitochondria. The protease most commonly used during IFM isolation is nagarse; however, its effect on the detection of mitochondrial proteins is still unclear. Therefore, we investigated whether nagarse treatment influences the quantification of mitochondrial proteins. SSM and IFM were isolated from hearts of mice and rats. During IFM isolation, nagarse activity was either stopped by centrifugation (common protocol, IFM+N) or inhibited by phenylmethylsulfonyl fluoride (PMSF, IFM+N+I). The amounts of proteins located in different mitochondrial compartments (outer membrane: mitofusin 1 (MFN1) and 2 (MFN2); intermembrane space: p66shc; inner membrane: connexin 43 (Cx43)), and of protein deglycase DJ-1, were determined by Western blot. MFN2 and Cx43 were found predominantly in SSM isolated from mouse and rat hearts. MFN1 and p66shc were present in similar amounts in SSM and IFM+N, whereas the level of DJ-1 was higher in IFM+N compared to SSM. In IFM+N+I samples from mice, the amount of MFN2, but not that of Cx43, increased. Nagarse or nagarse inhibition by PMSF had no effect on oxygen consumption of SSM or IFM. Whereas the common protocol indicates localization of MFN2 predominantly in SSM, inhibition of nagarse by PMSF increases the MFN2 signal in IFM to that in SSM, indicating an underestimation of MFN2 in IFM. Therefore, protease sensitivity should be considered when assessing the distribution of mitochondrial proteins using nagarse-based isolation. Copyright © 2018 Elsevier Inc. All rights reserved.
Prest, E I; Hammes, F; Kötzsch, S; van Loosdrecht, M C M; Vrouwenvelder, J S
2013-12-01
Flow cytometry (FCM) is a rapid, cultivation-independent tool to assess and evaluate bacteriological quality and biological stability of water. Here we demonstrate that a stringent, reproducible staining protocol combined with fixed FCM operational and gating settings is essential for reliable quantification of bacteria and detection of changes in aquatic bacterial communities. Triplicate measurements of diverse water samples with this protocol typically showed relative standard deviation values and 95% confidence interval values below 2.5% on all the main FCM parameters. We propose a straightforward and instrument-independent method for the characterization of water samples based on the combination of bacterial cell concentration and fluorescence distribution. Analysis of the fluorescence distribution (or so-called fluorescence fingerprint) was accomplished firstly through a direct comparison of the raw FCM data and subsequently simplified by quantifying the percentage of large and brightly fluorescent high nucleic acid (HNA) content bacteria in each sample. Our approach enables fast differentiation of dissimilar bacterial communities (less than 15 min from sampling to final result), and allows accurate detection of even small changes in aquatic environments (detection above 3% change). Demonstrative studies on (a) indigenous bacterial growth in water, (b) contamination of drinking water with wastewater, (c) household drinking water stagnation and (d) mixing of two drinking water types, unequivocally showed that this FCM approach enables detection and quantification of relevant bacterial water quality changes with high sensitivity. This approach has the potential to be used as a new tool for application in the drinking water field, e.g. for rapid screening of the microbial water quality and stability during water treatment and distribution in networks and premise plumbing. Copyright © 2013 Elsevier Ltd. All rights reserved.
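The replicate-precision criterion and the HNA fingerprint metric described above can be sketched as follows; the triplicate cell concentrations and HNA count are hypothetical, not data from the study.

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation (%) of replicate measurements."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def hna_percent(hna_count, total_count):
    """Percentage of high nucleic acid (HNA) content bacteria."""
    return 100.0 * hna_count / total_count

# Hypothetical triplicate total cell concentrations (cells/mL)
triplicate = [3.05e5, 3.10e5, 3.02e5]
print(f"RSD: {rsd_percent(triplicate):.2f}%")   # should fall below the ~2.5% criterion
print(f"HNA fraction: {hna_percent(1.4e5, 3.05e5):.1f}%")
```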
Amendola, Alessandra; Bloisi, Maria; Marsella, Patrizia; Sabatini, Rosella; Bibbò, Angela; Angeletti, Claudio; Capobianchi, Maria Rosaria
2011-09-01
Numerous studies investigating the clinical significance of HIV-1 minimal residual viremia (MRV) suggest the potential utility of assays more sensitive than those routinely used to monitor viral suppression. However, currently available methods, based on different technologies, show great variation in detection limit and input plasma volume, and generally suffer from lack of standardization. In order to establish new tools suitable for routine quantification of minimal residual viremia in patients under virological suppression, some modifications were introduced into the standard procedure of the Abbott RealTime HIV-1 assay, leading to a "modified" and an "ultrasensitive" protocol. The following modifications were introduced: a calibration curve extended towards low HIV-1 RNA concentrations; a 4-fold increase in sample volume achieved by concentrating starting material; a reduced volume of internal control; and adoption of "open-mode" software for quantification. Analytical performances were evaluated using the HIV-1 RNA Working Reagent 1 for NAT assays (NIBSC). Both tests were applied to clinical samples from virologically suppressed patients. The "modified" and the "ultrasensitive" configurations of the assay reached a limit of detection of 18.8 cp/mL (95% CI: 11.1-51.0 cp/mL) and 4.8 cp/mL (95% CI: 2.6-9.1 cp/mL), respectively, with high precision and accuracy. In clinical samples from virologically suppressed patients, the "modified" and "ultrasensitive" protocols allowed detection and quantification of HIV RNA in 12.7% and 46.6%, respectively, of samples reported "not detectable", and in 70.0% and 69.5%, respectively, of samples "detected <40 cp/mL" in the standard assay. The "modified" and "ultrasensitive" assays are precise and accurate, and easily adoptable in routine diagnostic laboratories for measuring MRV. Copyright © 2011 Elsevier B.V. All rights reserved.
Using image analysis for quantitative assessment of needle bladder rust disease of Norway spruce.
Ganthaler, A; Losso, A; Mayr, S
2018-06-01
High elevation spruce forests of the European Alps are frequently infected by the needle rust Chrysomyxa rhododendri, a pathogen causing remarkable defoliation, reduced tree growth and limited rejuvenation. Exact quantification of the disease severity on different spatial scales is crucial for monitoring, management and resistance breeding activities. Based on the distinct yellow discolouration of attacked needles, it was investigated whether image analysis of digital photographs can be used to quantify disease severity and to improve phenotyping compared to conventional assessment in terms of time, effort and application range. The developed protocol for preprocessing and analysis of digital RGB images enabled identification of disease symptoms and healthy needle areas on images obtained in ground surveys (total number of analysed images n = 62) and by the use of a semiprofessional quadcopter (n = 13). Obtained disease severities correlated linearly with results obtained by manual counting of healthy and diseased needles for all approaches, including images of individual branches with natural background (R² = 0.87) and with black background (R² = 0.95), juvenile plants (R² = 0.94), and top views and side views of entire tree crowns of adult trees (R² = 0.98 and 0.88, respectively). Results underline that a well-defined signal related to needle bladder rust symptoms of Norway spruce can be extracted from images recorded by standard digital cameras and using drones. The presented protocol enables precise and time-efficient quantification of disease symptoms caused by C. rhododendri and provides several advantages compared to conventional assessment by manual counting or visual estimations.
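A minimal sketch of the yellow-discolouration idea, assuming simple RGB thresholds. The thresholds, the helper name, and the toy image are illustrative assumptions, not the published protocol.

```python
import numpy as np

def disease_severity(rgb):
    """Classify pixels of an RGB image (H x W x 3, uint8) into symptomatic
    (yellowish) vs. healthy (greenish) needle area and return the
    symptomatic fraction of total needle area. Thresholds are illustrative."""
    r, g, b = rgb[..., 0].astype(int), rgb[..., 1].astype(int), rgb[..., 2].astype(int)
    yellow = (r > 120) & (g > 120) & (b < 100)          # discoloured needles
    green = (g > r) & (g > b) & (g > 60) & ~yellow      # healthy needles
    needle_area = yellow.sum() + green.sum()
    return yellow.sum() / needle_area if needle_area else 0.0

# Tiny synthetic image: two yellow pixels, two green pixels
img = np.array([[[200, 200, 40], [200, 190, 50]],
                [[30, 150, 40], [20, 140, 30]]], dtype=np.uint8)
print(disease_severity(img))  # 0.5
```

A real pipeline would additionally mask out background (sky, soil, stems) before the needle classification, which is what the black-background branch images in the study simplify away.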
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sporty, J; Kabir, M M; Turteltaub, K
A robust redox extraction protocol for quantitative and reproducible metabolite isolation and recovery has been developed for simultaneous measurement of nicotinamide adenine dinucleotide (NAD) and its reduced form, NADH, from Saccharomyces cerevisiae. Following culture in liquid media, approximately 10⁸ yeast cells were harvested by centrifugation and then lysed under non-oxidizing conditions by bead blasting in ice-cold, nitrogen-saturated 50-mM ammonium acetate. To enable protein denaturation, ice-cold nitrogen-saturated CH₃CN + 50-mM ammonium acetate (3:1; v:v) was added to the cell lysates. After sample centrifugation to pellet precipitated proteins, organic solvent removal was performed on supernatants by chloroform extraction. The remaining aqueous phase was dried and resuspended in 50-mM ammonium acetate. NAD and NADH were separated by HPLC and quantified using UV-VIS absorbance detection. Applicability of this procedure for quantifying NAD and NADH levels was evaluated by culturing yeast under normal (2% glucose) and calorie-restricted (0.5% glucose) conditions. NAD and NADH contents are similar to previously reported levels in yeast obtained using enzymatic assays performed separately on acid (for NAD) and alkali (for NADH) extracts. Results demonstrate that it is possible to perform a single preparation to reliably and robustly quantitate both NAD and NADH contents in the same sample. Robustness of the protocol suggests it will be (1) applicable to quantification of these metabolites in mammalian and bacterial cell cultures; and (2) amenable to isotope labeling strategies to determine the relative contribution of specific metabolic pathways to total NAD and NADH levels in cell cultures.
Dunning, F Mark; Piazza, Timothy M; Zeytin, Füsûn N; Tucker, Ward C
2014-03-03
Accurate detection and quantification of botulinum neurotoxin (BoNT) in complex matrices is required for pharmaceutical, environmental, and food sample testing. Rapid BoNT testing of foodstuffs is needed during outbreak forensics, patient diagnosis, and food safety testing while accurate potency testing is required for BoNT-based drug product manufacturing and patient safety. The widely used mouse bioassay for BoNT testing is highly sensitive but lacks the precision and throughput needed for rapid and routine BoNT testing. Furthermore, the bioassay's use of animals has resulted in calls by drug product regulatory authorities and animal-rights proponents in the US and abroad to replace the mouse bioassay for BoNT testing. Several in vitro replacement assays have been developed that work well with purified BoNT in simple buffers, but most have not been shown to be applicable to testing in highly complex matrices. Here, a protocol for the detection of BoNT in complex matrices using the BoTest Matrix assays is presented. The assay consists of three parts: The first part involves preparation of the samples for testing, the second part is an immunoprecipitation step using anti-BoNT antibody-coated paramagnetic beads to purify BoNT from the matrix, and the third part quantifies the isolated BoNT's proteolytic activity using a fluorogenic reporter. The protocol is written for high throughput testing in 96-well plates using both liquid and solid matrices and requires about 2 hr of manual preparation with total assay times of 4-26 hr depending on the sample type, toxin load, and desired sensitivity. Data are presented for BoNT/A testing with phosphate-buffered saline, a drug product, culture supernatant, 2% milk, and fresh tomatoes and includes discussion of critical parameters for assay success.
Cortes-Rodicio, J; Sanchez-Merino, G; Garcia-Fidalgo, M A; Tobalina-Larrea, I
To identify those textural features that are insensitive to both technical and biological factors in order to standardise heterogeneity studies on 18F-FDG PET imaging. Two different studies were performed. First, nineteen series from a cylindrical phantom filled with different 18F-FDG activity concentrations were acquired and reconstructed using three different protocols. Seventy-two texture features were calculated inside a circular region of interest. The variability of each feature was obtained. Second, the data for 15 patients showing non-pathological liver were acquired. Anatomical and physiological features such as patient's weight, height, body mass index, metabolic active volume (MAV), blood glucose level, SUV and SUV standard deviation were also recorded. A liver-covering region of interest was delineated and low-variability textural features calculated in each patient. Finally, a multivariate Spearman's correlation analysis between biological factors and texture features was performed. Only eight of the texture features analysed show small variability (<5%) with activity concentration and reconstruction protocol, making them suitable for heterogeneity quantification. On the other hand, there is a highly statistically significant correlation between MAV and entropy (P<0.05). The entropy feature is, indeed, correlated (P<0.05) with all patient parameters except body mass index. The textural features that are correlated with neither technical nor biological factors are run percentage, short-zone emphasis and intensity, making them suitable for quantifying functional changes or classifying patients. Other textural features are correlated with technical and biological factors and are, therefore, a source of errors if used for this purpose. Copyright © 2016 Elsevier España, S.L.U. y SEMNIM. All rights reserved.
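The phantom-based variability screen described above (keeping only features whose variation across acquisitions stays under 5%) can be sketched as a coefficient-of-variation filter; the feature names and values below are hypothetical.

```python
import statistics

def low_variability_features(feature_series, threshold_pct=5.0):
    """Keep features whose coefficient of variation across repeated phantom
    acquisitions stays below the threshold (5%, as in the study's criterion)."""
    selected = {}
    for name, values in feature_series.items():
        cv = 100.0 * statistics.stdev(values) / abs(statistics.mean(values))
        if cv < threshold_pct:
            selected[name] = cv
    return selected

# Hypothetical feature values over repeated phantom acquisitions
series = {
    "run_percentage": [0.91, 0.92, 0.90, 0.91],   # stable -> usable
    "entropy":        [4.1, 5.0, 3.2, 4.6],       # protocol-dependent -> rejected
}
print(low_variability_features(series))
```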
Logan, Ryan W.; McCulley, Walter D.; Seggio, Joseph A.; Rosenwasser, Alan M.
2011-01-01
Background Alcohol withdrawal is associated with behavioral and chronobiological disturbances that may persist during protracted abstinence. We previously reported that C57BL/6J (B6) mice show marked but temporary reductions in running-wheel activity, and normal free-running circadian rhythms, following a 4-day chronic intermittent ethanol vapor (CIE) exposure (16 hours of ethanol vapor exposure alternating with 8 hours of withdrawal). In the present experiments, we extend these observations in two ways: (1) by examining post-CIE locomotor activity in C3H/HeJ (C3H) mice, an inbred strain characterized by high sensitivity to ethanol withdrawal, and (2) by directly comparing the responses of B6 and C3H mice to a longer-duration CIE protocol. Methods In Experiment 1, C3H mice were exposed to the same 4-day CIE protocol used in our previous study with B6 mice (referred to here as the 1-cycle CIE protocol). In Experiment 2, C3H and B6 mice were exposed to three successive 4-day CIE cycles, each separated by 2 days of withdrawal (the 3-cycle CIE protocol). Running-wheel activity was monitored prior to and following CIE, and post-CIE activity was recorded in constant darkness to allow assessment of free-running circadian period and phase. Results C3H mice displayed pronounced reductions in running-wheel activity that persisted for the duration of the recording period (up to 30 days) following both 1-cycle (Experiment 1) and 3-cycle (Experiment 2) CIE protocols. In contrast, B6 mice showed reductions in locomotor activity that persisted for about one week following the 3-cycle CIE protocol, similar to the results of our previous study using a 1-cycle protocol in this strain. Additionally, C3H mice showed significant shortening of free-running period following the 3-cycle, but not the 1-cycle, CIE protocol, while B6 mice showed normal free-running rhythms. Conclusions These results reveal genetic differences in the persistence of ethanol withdrawal-induced hypo-locomotion. 
In addition, chronobiological alterations during extended abstinence may depend on both genetic susceptibility and an extended prior withdrawal history. The present data establish a novel experimental model for long-term behavioral and circadian disruptions associated with ethanol withdrawal. PMID:22013893
Guerrero-Barajas, Claudia; Ordaz, Alberto; García-Solares, Selene Montserrat; Garibay-Orijel, Claudio; Bastida-González, Fernando; Zárate-Segura, Paola Berenice
2015-01-01
The importance of microbial sulfate reduction lies in the various applications it offers in environmental biotechnology. Engineered sulfate reduction is used in industrial wastewater treatment to remove large concentrations of sulfate along with the chemical oxygen demand (COD) and heavy metals. The most common approach to the process is with anaerobic bioreactors in which sulfidogenic sludge is obtained through adaptation of predominantly methanogenic granular sludge to sulfidogenesis. This process may take a long time and does not always eliminate the competition for substrate due to the presence of methanogens in the sludge. In this work, we propose a novel approach to obtain sulfidogenic sludge in which hydrothermal vent sediments are the original source of microorganisms. The microbial community developed in the presence of sulfate and volatile fatty acids is diverse enough to sustain sulfate reduction over a long period of time without exhibiting inhibition due to sulfide. This protocol describes the procedure to generate the sludge from the sediments in an upflow anaerobic sludge blanket (UASB) type of reactor. Furthermore, the protocol presents the procedure to demonstrate the capability of the sludge to remove, by reductive dechlorination, a model highly toxic organic pollutant such as trichloroethylene (TCE). The protocol is divided in three stages: (1) the formation of the sludge and the determination of its sulfate-reducing activity in the UASB, (2) the experiment to remove the TCE by the sludge, and (3) the identification of microorganisms in the sludge after the TCE reduction. Although in this case the sediments were taken from a site located in Mexico, the generation of a sulfidogenic sludge by this procedure may work with a different source of sediments, since marine sediments are a natural pool of microorganisms that may be enriched in sulfate-reducing bacteria. PMID:26555802
Using analogues to quantify geological uncertainty in stochastic reserve modelling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wells, B.; Brown, I.
1995-08-01
The petroleum industry seeks to minimize exploration risk by employing the best possible expertise, methods and tools. Is it possible to quantify the success of this process of risk reduction? Due to inherent uncertainty in predicting geological reality and due to changing environments for hydrocarbon exploration, it is not enough simply to record the proportion of successful wells drilled; in various parts of the world it has been noted that pseudo-random drilling would apparently have been as successful as the actual drilling programme. How, then, should we judge the success of risk reduction? For many years the E&P industry has routinely used Monte Carlo modelling to generate a probability distribution for prospect reserves. One aspect of Monte Carlo modelling which has received insufficient attention, but which is essential for quantifying risk reduction, is the consistency and repeatability with which predictions can be made. Reducing the subjective element inherent in the specification of geological uncertainty allows better quantification of uncertainty in the prediction of reserves, in both exploration and appraisal. Building on work reported at the AAPG annual conventions in 1994 and 1995, the present paper incorporates analogue information into uncertainty modelling. Analogues provide a major step forward in the quantification of risk, but their significance is potentially greater still. The two principal contributors to uncertainty in field and prospect analysis are the hydrocarbon life-cycle and the geometry of the trap. These are usually treated separately. Combining them into a single model is a major contribution to the reduction of risk. This work is based in part on a joint project with Oryx Energy UK Ltd., and thanks are due in particular to Richard Benmore and Mike Cooper.
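A minimal Monte Carlo volumetric sketch in the spirit described above. The input distributions and ranges are hypothetical stand-ins for analogue-constrained inputs, not the authors' model.

```python
import math
import random

def monte_carlo_reserves(n_trials=100_000, seed=42):
    """Stochastic volumetric in-place estimate: GRV * N/G * porosity * (1 - Sw) / Bo.
    All input distributions are hypothetical, in the style of analogue-derived ranges."""
    random.seed(seed)
    results = []
    for _ in range(n_trials):
        grv = random.lognormvariate(math.log(50e6), 0.3)   # gross rock volume, m3
        ntg = random.triangular(0.4, 0.9, 0.7)             # net-to-gross
        phi = random.triangular(0.12, 0.28, 0.20)          # porosity
        sw = random.triangular(0.2, 0.5, 0.35)             # water saturation
        bo = random.triangular(1.1, 1.4, 1.2)              # formation volume factor
        results.append(grv * ntg * phi * (1 - sw) / bo)
    results.sort()
    # Industry convention: P90 (high confidence, low value) to P10 (low confidence, high value)
    p90, p50, p10 = (results[int(n_trials * q)] for q in (0.10, 0.50, 0.90))
    return p90, p50, p10

p90, p50, p10 = monte_carlo_reserves()
print(f"P90={p90:.3g}  P50={p50:.3g}  P10={p10:.3g} m3 (in place)")
```

The repeatability point in the abstract corresponds here to fixing the seed and, more importantly, to constraining the input distributions with analogue data rather than leaving them to per-analyst judgment.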
Van Meter, Kimberly J.; Basu, Nandita B.
2015-01-01
Nutrient legacies in anthropogenic landscapes, accumulated over decades of fertilizer application, lead to time lags between implementation of conservation measures and improvements in water quality. Quantification of such time lags has remained difficult, however, due to an incomplete understanding of controls on nutrient depletion trajectories after changes in land-use or management practices. In this study, we have developed a parsimonious watershed model for quantifying catchment-scale time lags based on both soil nutrient accumulations (biogeochemical legacy) and groundwater travel time distributions (hydrologic legacy). The model accurately predicted the time lags observed in an Iowa watershed that had undergone a 41% conversion of area from row crop to native prairie. We explored the time scales of change for stream nutrient concentrations as a function of both natural and anthropogenic controls, from topography to spatial patterns of land-use change. Our results demonstrate that the existence of biogeochemical nutrient legacies increases time lags beyond those due to hydrologic legacy alone. In addition, we show that the maximum concentration reduction benefits vary according to the spatial pattern of intervention, with preferential conversion of land parcels having the shortest catchment-scale travel times providing proportionally greater concentration reductions as well as faster response times. In contrast, a random pattern of conversion results in a 1:1 relationship between percent land conversion and percent concentration reduction, irrespective of denitrification rates within the landscape. Our modeling framework allows for the quantification of tradeoffs between costs associated with implementation of conservation measures and the time needed to see the desired concentration reductions, making it of great value to decision makers regarding optimal implementation of watershed conservation measures. PMID:25985290
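A toy version of the two-legacy idea above: a first-order decline of the soil nutrient source (biogeochemical legacy) convolved with an exponential groundwater travel-time distribution (hydrologic legacy). All parameter values are hypothetical, and this is far simpler than the watershed model in the study.

```python
import math

def stream_concentration(t_years, c0=10.0, k_soil=0.05, mean_travel=15.0):
    """Stream concentration t years after intervention: the soil source declines
    first-order (rate k_soil) and is routed through an exponential travel-time
    distribution with the given mean. Parameters are hypothetical."""
    dt = 0.1
    total, weight, tau = 0.0, 0.0, 0.0
    while tau < 10 * mean_travel:
        f = math.exp(-tau / mean_travel) / mean_travel            # travel-time pdf
        source = c0 * math.exp(-k_soil * max(t_years - tau, 0.0))  # legacy source decline
        total += f * source * dt
        weight += f * dt
        tau += dt
    return total / weight

# Time lag to halve the stream concentration: longer than either
# legacy alone would suggest, because the two delays compound.
t = 0.0
while stream_concentration(t) > 5.0:
    t += 1.0
print(f"~{t:.0f} years to halve stream concentration")
```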
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thor, Daniel; Brismar, Torkel B., E-mail: torkel.brismar@gmail.com; Fischer, Michael A.
Purpose: To evaluate the potential of low tube voltage dual source (DS) single energy (SE) and dual energy (DE) computed tomography (CT) to reduce contrast media (CM) dose in adult abdominal examinations of various sizes while maintaining soft tissue and iodine contrast-to-noise ratio (CNR). Methods: Four abdominal phantoms simulating a body mass index of 16 to 35 kg/m² with four inserted syringes of 0, 2, 4, and 8 mgI/ml CM were scanned using a 64-slice DS-CT scanner. Six imaging protocols were used: one single source (SS) reference protocol (120 kV, 180 reference mAs), four low-kV SE protocols (70 and 80 kV using both SS and DS), and one DE protocol at 80/140 kV. Potential CM reduction with unchanged CNRs relative to the 120 kV protocol was calculated along with the corresponding increase in radiation dose. Results: The potential contrast media reductions were determined to be approximately 53% for DS 70 kV, 51% for SS 70 kV, 44% for DS 80 kV, 40% for SS 80 kV, and 20% for DE (all differences were significant, P < 0.05). Constant CNR could be achieved by using DS 70 kV for small to medium phantom sizes (16–26 kg/m²) and for all sizes (16–35 kg/m²) when using DS 80 kV and DE. Corresponding radiation doses increased by 60%–107%, 23%–83%, and 6%–12%, respectively. Conclusions: DS single energy CT can be used to reduce CM dose by 44%–53% with maintained CNR in adult abdominal examinations at the cost of an increased radiation dose. DS dual energy CT allows reduction of CM dose by 20% at similar radiation dose as compared to a standard 120 kV single source.
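The CNR-matching logic behind the reported CM reductions can be sketched as follows. Iodine attenuation per unit concentration rises at lower tube voltage, so less iodine yields the same contrast at matched noise. The HU-per-mgI values below are illustrative textbook-style approximations, not the study's measurements.

```python
# Approximate iodine contrast enhancement per unit concentration (HU per mgI/mL)
# at different tube voltages; values are illustrative only.
IODINE_HU_PER_MGI = {70: 51.0, 80: 42.0, 120: 24.0}

def potential_cm_reduction(low_kv, ref_kv=120):
    """Fraction by which contrast media can be reduced at low_kv while keeping
    the same iodine signal (and hence the same CNR at matched image noise)."""
    return 1.0 - IODINE_HU_PER_MGI[ref_kv] / IODINE_HU_PER_MGI[low_kv]

for kv in (70, 80):
    print(f"{kv} kV: ~{100 * potential_cm_reduction(kv):.0f}% less contrast media")
```

With these illustrative values the sketch lands near the study's ~53% (70 kV) and ~44% (80 kV) figures; the radiation-dose penalty comes from the extra mAs needed to hold noise constant at low kV.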
Lucas, Rebekah A. I.; Pearson, James; Schlader, Zachary J.; Crandall, Craig G.
2016-01-01
This study tested the hypothesis that baroreceptor unloading during passive hyperthermia contributes to increases in ventilation and decreases in end-tidal partial pressure of carbon dioxide (PET,CO2) during that exposure. Two protocols were performed, in which healthy subjects underwent passive hyperthermia (increasing intestinal temperature by ~1.8°C) to cause a sustained increase in ventilation and reduction in PET,CO2. Upon attaining hyperthermic hyperventilation, in protocol 1 (n = 10; three females) a bolus (19 ± 2 ml kg−1) of warm (~38°C) isotonic saline was rapidly (5–10 min) infused intravenously to restore reductions in central venous pressure, whereas in protocol 2 (n = 11; five females) phenylephrine was infused intravenously (60–120 μg min−1) to return mean arterial pressure to normothermic levels. In protocol 1, hyperthermia increased ventilation (by 2.2 ± 1.7 l min−1, P < 0.01), while reducing PET,CO2 (by 4 ± 3 mmHg, P = 0.04) and central venous pressure (by 5 ± 1 mmHg, P < 0.01). Saline infusion increased central venous pressure by 5 ± 1 mmHg (P < 0.01), restoring it to normothermic values, but did not change ventilation or PET,CO2 (P > 0.05). In protocol 2, hyperthermia increased ventilation (by 5.0 ± 2.7 l min−1, P < 0.01) and reduced PET,CO2 (by 5 ± 2 mmHg, P < 0.01) and mean arterial pressure (by 9 ± 7 mmHg, P < 0.01). Phenylephrine infusion increased mean arterial pressure by 12 ± 3 mmHg (P < 0.01), restoring it to normothermic values, but did not change ventilation or PET,CO2 (P > 0.05). The absence of a reduction in ventilation upon reloading the cardiopulmonary and arterial baroreceptors to pre-hyperthermic levels indicates that baroreceptor unloading with hyperthermia is unlikely to contribute to hyperthermic hyperventilation in humans. PMID:26299270
2017-01-01
To improve point-of-care quantification using microchip capillary electrophoresis (MCE), the chip-to-chip variabilities inherent in disposable, single-use devices must be addressed. This work proposes to integrate an internal standard (ISTD) into the microchip by adding it to the background electrolyte (BGE) instead of the sample—thus eliminating the need for additional sample manipulation, microchip redesigns, and/or system expansions required for traditional ISTD usage. Cs and Li ions were added as integrated ISTDs to the BGE, and their effects on the reproducibility of Na quantification were explored. Results were then compared to the conclusions of our previous publication which used Cs and Li as traditional ISTDs. The in-house fabricated microchips, electrophoretic protocols, and solution matrixes were kept constant, allowing the proposed method to be reliably compared to the traditional method. Using the integrated ISTDs, both Cs and Li improved the Na peak area reproducibility approximately 2-fold, to final RSD values of 2.2–4.7% (n = 900). In contrast (to previous work), Cs as a traditional ISTD resulted in final RSDs of 2.5–8.8%, while the traditional Li ISTD performed poorly with RSDs of 6.3–14.2%. These findings suggest integrated ISTDs are a viable method to improve the precision of disposable MCE devices—giving matched or superior results to the traditional method in this study while neither increasing system cost nor complexity. PMID:28192985
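The benefit of carrying an internal standard in the background electrolyte can be illustrated numerically. A minimal sketch with simulated peak areas (the injection-variability and noise magnitudes are assumptions, not the paper's data), showing how normalizing the analyte signal to an ISTD cancels chip-to-chip variability:

```python
import numpy as np

# Hypothetical Na and ISTD peak areas across 50 replicate single-use chips
# (arbitrary units). Chip-to-chip injection variability scales the analyte and
# ISTD peaks by the same factor, so the ratio largely cancels it out.
rng = np.random.default_rng(0)
injection_factor = rng.normal(1.0, 0.08, size=50)            # per-chip variability
na_area = 1000 * injection_factor * rng.normal(1.0, 0.02, size=50)
istd_area = 400 * injection_factor * rng.normal(1.0, 0.02, size=50)

def rsd(x):
    """Relative standard deviation in percent."""
    return 100 * np.std(x, ddof=1) / np.mean(x)

raw_rsd = rsd(na_area)               # dominated by injection variability
norm_rsd = rsd(na_area / istd_area)  # ISTD-normalized ratio
print(f"raw RSD {raw_rsd:.1f}%, normalized RSD {norm_rsd:.1f}%")
```

The roughly severalfold improvement in RSD mirrors the ~2-fold gain in Na peak-area reproducibility reported above.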
A critical view on microplastic quantification in aquatic organisms.
Vandermeersch, Griet; Van Cauwenberghe, Lisbeth; Janssen, Colin R; Marques, Antonio; Granby, Kit; Fait, Gabriella; Kotterman, Michiel J J; Diogène, Jorge; Bekaert, Karen; Robbens, Johan; Devriese, Lisa
2015-11-01
Microplastics, plastic particles and fragments smaller than 5 mm, are ubiquitous in the marine environment. Ingestion and accumulation of microplastics have previously been demonstrated for diverse marine species ranging from zooplankton to bivalves and fish, implying the potential for microplastics to accumulate in the marine food web. In this way, microplastics can potentially impact food safety and human health. Although a few methods to quantify microplastics in biota have been described, no comparison and/or intercalibration of these techniques have been performed. Here we conducted a literature review on all available extraction and quantification methods. Two of these methods, involving wet acid destruction, were used to evaluate the presence of microplastics in field-collected mussels (Mytilus galloprovincialis) from three different "hotspot" locations in Europe (Po estuary, Italy; Tagus estuary, Portugal; Ebro estuary, Spain). An average of 0.18±0.14 total microplastics g⁻¹ w.w. for the Acid Mix Method and 0.12±0.04 total microplastics g⁻¹ w.w. for the Nitric Acid Method was established. Additionally, in a pilot study an average load of 0.13±0.14 total microplastics g⁻¹ w.w. was recorded in commercial mussels (Mytilus edulis and M. galloprovincialis) from five European countries (France, Italy, Denmark, Spain and The Netherlands). A detailed analysis and comparison of methods indicated the need for further research to develop a standardised operating protocol for microplastic quantification and monitoring. Copyright © 2015 Elsevier Inc. All rights reserved.
Open-path FTIR data reduction algorithm with atmospheric absorption corrections: the NONLIN code
NASA Astrophysics Data System (ADS)
Phillips, William; Russwurm, George M.
1999-02-01
This paper describes the progress made to date in developing, testing, and refining a data reduction computer code, NONLIN, that alleviates many of the difficulties experienced in the analysis of open path FTIR data. Among the problems that currently affect FTIR open path data quality are: the inability to obtain a true I₀, or background, spectrum; spectral interferences of atmospheric gases such as water vapor and carbon dioxide; and matching the spectral resolution and shift of the reference spectra to a particular field instrument. This algorithm is based on a non-linear fitting scheme and is therefore not constrained by many of the assumptions required for the application of linear methods such as classical least squares (CLS). As a result, a more realistic mathematical model of the spectral absorption measurement process can be employed in the curve fitting process. Applications of the algorithm have proven successful in circumventing open path data reduction problems. However, recent studies, by one of the authors, of the temperature and pressure effects on atmospheric absorption indicate that there exist temperature and water partial pressure effects that should be incorporated into the NONLIN algorithm for accurate quantification of gas concentrations. This paper investigates the sources of these phenomena. As a result of this study, a partial pressure correction has been employed in the NONLIN computer code. Two typical field spectra are examined to determine what effect the partial pressure correction has on gas quantification.
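The advantage of a nonlinear scheme over CLS is that Beer-Lambert absorption and multiplicative baseline effects can be modeled directly in transmittance space. A toy sketch of this idea (synthetic cross-sections, noise level, and baseline term are all assumptions; this is not the NONLIN code itself):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic reference absorption cross-sections for two gases (assumed shapes)
# on an arbitrary wavenumber grid.
nu = np.linspace(0, 10, 200)
sigma = np.array([np.exp(-(nu - 3) ** 2), np.exp(-(nu - 6) ** 2 / 2)])

def transmittance(nu_grid, c1, c2, baseline):
    # Beer-Lambert: I/I0 = exp(-sum_i c_i * sigma_i). A multiplicative
    # baseline term stands in for an imperfect background spectrum, something
    # a model linear in absorbance (CLS) cannot represent directly.
    return baseline * np.exp(-(c1 * sigma[0] + c2 * sigma[1]))

true_params = (0.8, 0.3, 1.05)
rng = np.random.default_rng(1)
measured = transmittance(nu, *true_params) + rng.normal(0, 0.002, nu.size)

# Nonlinear least-squares fit of the concentrations and baseline.
popt, _ = curve_fit(transmittance, nu, measured, p0=(0.1, 0.1, 1.0))
print(popt)  # recovered (c1, c2, baseline), close to the true values
```

The same structure extends to temperature- and pressure-dependent cross-sections, which is where the partial pressure correction discussed above enters.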
Quantitative analysis of the major constituents of St John's wort with HPLC-ESI-MS.
Chandrasekera, Dhammitha H; Welham, Kevin J; Ashton, David; Middleton, Richard; Heinrich, Michael
2005-12-01
A method was developed to profile the major constituents of St John's wort extracts using high-performance liquid chromatography-electrospray mass spectrometry (HPLC-ESI-MS). The objective was to simultaneously separate, identify and quantify hyperforin, hypericin, pseudohypericin, rutin, hyperoside, isoquercetrin, quercitrin and chlorogenic acid using HPLC-MS. Quantification was performed using an external standardisation method with reference standards. The method consisted of two protocols: one for the analysis of flavonoids and glycosides and the other for the analysis of the more lipophilic hypericins and hyperforin. Both protocols used a reverse phase Luna phenyl hexyl column. The separation of the flavonoids and glycosides was achieved within 35 min and that of the hypericins and hyperforin within 9 min. The linear response range in ESI-MS was established for each compound and all had linear regression coefficient values greater than 0.97. Both protocols proved to be very specific for the constituents analysed. MS analysis showed no other signals within the analyte peaks. The method was robust and applicable to alcoholic tinctures, tablet/capsule extracts in various solvents and herb extracts. The method was applied to evaluate the phytopharmaceutical quality of St John's wort preparations available in the UK in order to test the method and investigate if they contain at least the main constituents and at what concentrations.
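External standardisation reduces to a calibration curve per analyte. A sketch with hypothetical hyperforin calibration points (illustrative values only, not the paper's data), applying the linearity criterion of r² > 0.97 mentioned above:

```python
import numpy as np

# Hypothetical external-standard calibration: concentration (ug/mL) of a
# reference standard vs. ESI-MS peak area. Values are illustrative only.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
area = np.array([1.1e4, 2.0e4, 4.2e4, 10.3e4, 20.1e4])

slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]
assert r ** 2 > 0.97  # linear-response acceptance criterion

# Quantify an unknown sample by inverting the calibration line.
unknown_area = 6.0e4
unknown_conc = (unknown_area - intercept) / slope
print(f"estimated concentration: {unknown_conc:.2f} ug/mL")
```

In practice one curve of this kind is built per constituent, within the linear response range established for that compound.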
Design of primers and probes for quantitative real-time PCR methods.
Rodríguez, Alicia; Rodríguez, Mar; Córdoba, Juan J; Andrade, María J
2015-01-01
Design of primers and probes is one of the most crucial factors affecting the success and quality of quantitative real-time PCR (qPCR) analyses, since accurate and reliable quantification depends on using efficient primers and probes. Primer and probe design should meet several criteria to identify suitable candidates for specific qPCR assays. The formation of primer-dimers and other non-specific products should be avoided or reduced. This factor is especially important when designing primers for SYBR® Green protocols, but also in designing probes to ensure specificity of the developed qPCR protocol. To design primers and probes for qPCR, multiple software programs and websites are available, many of them free. These tools often apply default requirements for primers and probes, although new research advances in primer and probe design should be progressively added to the different algorithms. After a proper design, a precise validation of the primers and probes is necessary. Specific considerations apply when designing primers and probes for multiplex qPCR and reverse transcription qPCR (RT-qPCR). This chapter provides guidelines for the design of suitable primers and probes and their subsequent validation through the development of singlex qPCR, multiplex qPCR, and RT-qPCR protocols.
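Several of the screening criteria mentioned (GC content, melting temperature, primer-dimer risk) are simple enough to sketch. A minimal illustration using the Wallace rule for Tm (reasonable only for short oligos) and a naive 3'-end complementarity check; real design tools use nearest-neighbor thermodynamics and far more thorough dimer searches:

```python
COMP = str.maketrans("ACGT", "TGCA")  # base-pair complement table

def gc_content(seq):
    """GC content of a primer, in percent."""
    return 100 * sum(b in "GC" for b in seq) / len(seq)

def wallace_tm(seq):
    """Wallace-rule melting temperature: 2*(A+T) + 4*(G+C) degrees C."""
    return 2 * sum(b in "AT" for b in seq) + 4 * sum(b in "GC" for b in seq)

def dimer_risk(p1, p2, n=4):
    """Flag a primer-dimer risk if the last n bases of p1 (its 3' end)
    can anneal anywhere on p2 (reverse-complement match)."""
    tail = p1[-n:].translate(COMP)[::-1]
    return tail in p2

print(gc_content("ATGC"), wallace_tm("ATGC"))
print(dimer_risk("AAAAGCGC", "TTGCGCTT"))  # GC-rich 3' tail anneals to p2
```

Checks of this kind are what design software runs by default; the chapter's point is that the default thresholds should be revisited as design research advances.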
Refined protocols of tamoxifen injection for inducible DNA recombination in mouse astroglia.
Jahn, Hannah M; Kasakow, Carmen V; Helfer, Andreas; Michely, Julian; Verkhratsky, Alexei; Maurer, Hans H; Scheller, Anja; Kirchhoff, Frank
2018-04-12
Inducible DNA recombination of floxed alleles in vivo by liver metabolites of tamoxifen (TAM) is an important tool to study gene functions. Here, we describe protocols for optimal DNA recombination in astrocytes, based on the GLAST-CreERT2/loxP system. In addition, we demonstrate that quantification of genomic recombination allows determination of the proportion of cell types in various brain regions. We analyzed the presence and clearance of TAM and its metabolites (N-desmethyl-tamoxifen, 4-hydroxytamoxifen and endoxifen) in brain and serum of mice by liquid chromatography-high resolution-tandem mass spectrometry (LC-HR-MS/MS) and assessed optimal injection protocols by quantitative RT-PCR of several floxed target genes (p2ry1, gria1, gabbr1 and the Rosa26-tdTomato locus). Maximal recombination could be achieved in cortex and cerebellum by single daily injections for five and three consecutive days, respectively. Furthermore, quantifying the loss of floxed alleles predicted the percentage of GLAST-positive cells (astroglia) per brain region. We found that astrocytes contributed 20 to 30% of the total cell number in cortex, hippocampus, brainstem and optic nerve, while in the cerebellum Bergmann glia, velate astrocytes and white matter astrocytes accounted for only 8% of all cells.
Rius, Cristina; Attaf, Meriem; Tungatt, Katie; Bianchi, Valentina; Legut, Mateusz; Bovay, Amandine; Donia, Marco; Thor Straten, Per; Peakman, Mark; Svane, Inge Marie; Ott, Sascha; Connor, Tom; Szomolay, Barbara; Dolton, Garry; Sewell, Andrew K
2018-04-01
Peptide-MHC (pMHC) multimers, usually used as streptavidin-based tetramers, have transformed the study of Ag-specific T cells by allowing direct detection, phenotyping, and enumeration within polyclonal T cell populations. These reagents are now a standard part of the immunology toolkit and have been used in many thousands of published studies. Unfortunately, the TCR-affinity threshold required for staining with standard pMHC multimer protocols is higher than that required for efficient T cell activation. This discrepancy makes it possible for pMHC multimer staining to miss fully functional T cells, especially where low-affinity TCRs predominate, such as in MHC class II-restricted responses or those directed against self-antigens. Several recent, somewhat alarming, reports indicate that pMHC staining might fail to detect the majority of functional T cells and have prompted suggestions that T cell immunology has become biased toward the type of cells amenable to detection with multimeric pMHC. We use several viral- and tumor-specific pMHC reagents to compare populations of human T cells stained by standard pMHC protocols and optimized protocols that we have developed. Our results confirm that optimized protocols recover greater populations of T cells that include fully functional T cell clonotypes that cannot be stained by regular pMHC-staining protocols. These results highlight the importance of using optimized procedures that include the use of protein kinase inhibitor and Ab cross-linking during staining to maximize the recovery of Ag-specific T cells and serve to further highlight that many previous quantifications of T cell responses with pMHC reagents are likely to have considerably underestimated the size of the relevant populations. Copyright © 2018 The Authors.
Efficient ultrafiltration-based protocol to deplete extracellular vesicles from fetal bovine serum
Kornilov, Roman; Puhka, Maija; Mannerström, Bettina; Hiidenmaa, Hanna; Peltoniemi, Hilkka; Siljander, Pia; Seppänen-Kaijansinkko, Riitta; Kaur, Sippy
2018-01-01
Fetal bovine serum (FBS) is the most commonly used supplement in studies involving cell-culture experiments. However, FBS contains large numbers of bovine extracellular vesicles (EVs), which hamper the analyses of secreted EVs from the cell type of preference and, thus, also the downstream analyses. Therefore, a prior elimination of EVs from FBS is crucial. However, the current methods of EV depletion by ultracentrifugation are cumbersome and the commercial alternatives expensive. In this study, our aim was to develop a protocol to completely deplete EVs from FBS, which may have wide applicability in cell-culture applications. We investigated different EV-depleted FBS prepared by our novel ultrafiltration-based protocol, by conventionally used overnight ultracentrifugation, or commercially available depleted FBS, and compared them with regular FBS. All sera were characterized by nanoparticle tracking analysis, electron microscopy, Western blotting and RNA quantification. Next, adipose-tissue mesenchymal stem cells (AT-MSCs) and cancer cells were grown in the media supplemented with the three different EV-depleted FBS and compared with cells grown in regular FBS media to assess the effects on cell proliferation, stress, differentiation and EV production. The novel ultrafiltration-based protocol depleted EVs from FBS clearly more efficiently than ultracentrifugation and commercial methods. Cell proliferation, stress, differentiation and EV production of AT-MSCs and cancer cell lines were similarly maintained in all three EV-depleted FBS media up to 96 h. In summary, our ultrafiltration protocol efficiently depletes EVs, is easy to use and maintains cell growth and metabolism. Since the method is also cost-effective and easy to standardize, it could be used in a wide range of cell-culture applications helping to increase comparability of EV research results between laboratories. PMID:29410778
Han, Yongming; Chen, Antony; Cao, Junji; Fung, Kochy; Ho, Fai; Yan, Beizhan; Zhan, Changlin; Liu, Suixin; Wei, Chong; An, Zhisheng
2013-01-01
Quantifying elemental carbon (EC) content in geological samples is challenging due to interferences of crustal, salt, and organic material. Thermal/optical analysis, combined with acid pretreatment, represents a feasible approach. However, the consistency of various thermal/optical analysis protocols for this type of samples has never been examined. In this study, urban street dust and soil samples from Baoji, China were pretreated with acids and analyzed with four thermal/optical protocols to investigate how analytical conditions and optical correction affect EC measurement. The EC values measured with reflectance correction (ECR) were found always higher and less sensitive to temperature program than the EC values measured with transmittance correction (ECT). A high-temperature method with extended heating times (STN120) showed the highest ECT/ECR ratio (0.86) while a low-temperature protocol (IMPROVE-550), with heating time adjusted for sample loading, showed the lowest (0.53). STN ECT was higher than IMPROVE ECT, in contrast to results from aerosol samples. A higher peak inert-mode temperature and extended heating times can elevate ECT/ECR ratios for pretreated geological samples by promoting pyrolyzed organic carbon (PyOC) removal over EC under trace levels of oxygen. Considering that PyOC within filter increases ECR while decreases ECT from the actual EC levels, simultaneous ECR and ECT measurements would constrain the range of EC loading and provide information on method performance. Further testing with standard reference materials of common environmental matrices supports the findings. Char and soot fractions of EC can be further separated using the IMPROVE protocol. The char/soot ratio was lower in street dusts (2.2 on average) than in soils (5.2 on average), most likely reflecting motor vehicle emissions. The soot concentrations agreed with EC from CTO-375, a pure thermal method.
An Organic Decontamination Method for Sampling Devices used in Life-detection Studies
NASA Technical Reports Server (NTRS)
Eigenbrode, Jennifer; Maule, Jake; Wainwright, Norm; Steele, Andrew; Amundsen, Hans E.F.
2008-01-01
Organic decontamination of sampling and storage devices is a crucial step for life-detection, habitability, and ecological investigations of extremophiles living in the most inhospitable niches of Earth, Mars and elsewhere. However, one of the main stumbling blocks for Mars-analogue life-detection studies in terrestrial remote field sites is the capability to clean instruments and sampling devices to organic levels consistent with null values. Here we present a new seven-step, multi-reagent cleaning and decontamination protocol that was adapted and tested on a glacial ice-coring device and on a rover-guided scoop used for sediment sampling, both deployed multiple times during two field seasons of the Arctic Mars Analog Svalbard Expedition (AMASE). The effectiveness of the protocols for both devices was tested by (1) in situ metabolic measurements via ATP assays, (2) in situ lipopolysaccharide (LPS) quantification via low-level endotoxin assays, and (3) laboratory-based molecular detection via gas chromatography-mass spectrometry. Our results show that the combination and step-wise application of disinfectants with oxidative and solvation properties for sterilization are effective at removing cellular remnants and other organic traces to levels necessary for molecular organic- and life-detection studies. The validation of this seven-step protocol - specifically for ice sampling - allows us to proceed with confidence in Mars-analogue investigations of icy environments. Moreover, results from a rover scoop test showed that this protocol is also suitable for null-level decontamination of sample acquisition devices. Thus, this protocol may be applicable to a variety of sampling devices and analytical instrumentation used for future astrobiology missions to Enceladus and Europa, as well as for sample-return missions.
NASA Astrophysics Data System (ADS)
McDougald, Wendy A.; Collins, Richard; Green, Mark; Tavares, Adriana A. S.
2017-10-01
Obtaining accurate quantitative measurements in preclinical Positron Emission Tomography/Computed Tomography (PET/CT) imaging is of paramount importance in biomedical research and helps support efficient translation of preclinical results to the clinic. The purpose of this study was two-fold: (1) to investigate the effects of different CT acquisition protocols on PET/CT image quality and data quantification; and (2) to evaluate the absorbed dose associated with varying CT parameters. Methods: An air/water quality control CT phantom, a tissue equivalent material phantom, an in-house 3D printed phantom and an image quality PET/CT phantom were imaged using a Mediso nanoPET/CT scanner. Collected data were analyzed using PMOD software, VivoQuant software and National Electrical Manufacturers Association (NEMA) software implemented by Mediso. Measured Hounsfield Units (HU) in collected CT images were compared to the known HU values and image noise was quantified. PET recovery coefficients (RC), uniformity and quantitative bias were also measured. Results: Fewer than 2% and 1% of CT acquisition protocols yielded water HU values < -80 and air HU values < -840, respectively. Four out of eleven CT protocols resulted in more than 100 mGy absorbed dose. Different CT protocols did not impact PET uniformity and RC, and resulted in <4% overall bias relative to the expected radioactive concentration. Conclusion: Preclinical CT protocols with increased exposure times can result in high absorbed doses to small animals. These should be avoided, as they do not contribute towards improved microPET/CT image quantitative accuracy and could limit longitudinal scanning of small animals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sommer, Karsten, E-mail: sommerk@uni-mainz.de, E-mail: Schreiber-L@ukw.de; Bernat, Dominik; Schmidt, Regine
Purpose: The extent to which atherosclerotic plaques affect contrast agent (CA) transport in the coronary arteries and, hence, quantification of myocardial blood flow (MBF) using magnetic resonance imaging (MRI) is unclear. The purpose of this work was to evaluate the influence of plaque-induced stenosis both on CA transport and on the accuracy of MBF quantification. Methods: Computational fluid dynamics simulations in a highly detailed, realistic vascular model were employed to investigate CA bolus transport in the coronary arteries. The impact of atherosclerosis was analyzed by inserting various medium- to high-grade stenoses in the vascular model. The influence of stenosis morphology was examined by varying the stenosis shapes but keeping the area reduction constant. Errors due to CA bolus transport were analyzed using the tracer-kinetic model MMID4. Results: Dispersion of the CA bolus was found in all models and for all outlets, but with a varying magnitude. The impact of stenosis was complex: while high-grade stenoses amplified dispersion, mild stenoses reduced the effect. Morphology was found to have a marked influence on dispersion for a small number of outlets in the post-stenotic region. Despite this marked influence on the concentration–time curves, MBF errors were less affected by stenosis. In total, MBF was underestimated by −7.9% to −44.9%. Conclusions: The presented results reveal that local hemodynamics in the coronary vasculature appears to have a direct impact on CA bolus dispersion. Inclusion of atherosclerotic plaques resulted in a complex alteration of this effect, with both degree of area reduction and stenosis morphology affecting the amount of dispersion. This strong influence of vascular transport effects impairs the accuracy of MRI-based MBF quantification techniques and, potentially, other bolus-based perfusion measurement techniques like computed tomography perfusion imaging.
NASA Astrophysics Data System (ADS)
Badawy, B.; Fletcher, C. G.
2017-12-01
The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with the largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that the snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable narrowing of the plausible parameter ranges, and hence of their uncertainties, which can lead to a significant reduction of model uncertainty. The implementation and results of this study will be presented and discussed in detail.
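The emulator-plus-importance workflow described can be sketched on a toy problem: an SVR surrogate trained on a few hundred "model runs", then a random-forest permutation-importance ranking computed over many emulated cases. The parameter layout and response below are invented stand-ins for the CLASS setup, not its actual variables:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
# Toy stand-in for the dynamical model: the output depends strongly on
# parameter 0, weakly and nonlinearly on parameter 2, and not at all on
# parameter 1 (all parameters sampled over their assumed uncertainty range).
X_train = rng.uniform(0, 1, size=(400, 3))          # 400 "model runs"
y_train = (3.0 * X_train[:, 0]
           + 0.3 * np.sin(6 * X_train[:, 2])
           + rng.normal(0, 0.05, 400))

emulator = SVR(C=10.0).fit(X_train, y_train)        # cheap surrogate

# Evaluate the emulator over far more cases than the model could afford,
# then rank parameters by permutation importance of an RF fit to its output.
X_big = rng.uniform(0, 1, size=(2000, 3))
y_big = emulator.predict(X_big)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_big, y_big)
imp = permutation_importance(rf, X_big, y_big, n_repeats=5, random_state=0)
print(imp.importances_mean)  # parameter 0 should dominate the ranking
```

The same two-stage pattern (expensive runs → surrogate → cheap importance analysis) is what makes the high-dimensional snow-parameter screening tractable.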
Hadar, Eran; Mansur, Nariman; Ambar, Irit; Hod, Moshe
2011-06-01
Preterm delivery is a significant cause of neonatal morbidity and mortality. Pregnant women with symptoms and signs consistent with preterm labor can be treated with various tocolytic drugs. Atosiban is one of several drugs indicated to arrest imminent preterm labor. Various studies show that the efficacy of atosiban is similar to that of other tocolytic drugs. The main advantage of atosiban is a relatively low incidence of adverse maternal reactions. Its considerable shortcoming is the financial cost compared to other available drugs. In view of its cost, we decided to implement a strict protocol to direct the use of atosiban, with the intent of reducing costs without hampering quality of care. The protocol was implemented from July 2009, and it outlines the medical and procedural terms for using atosiban. We compared similar time periods before and after implementation of the protocol. The outcomes compared included treatment success, rates of preterm deliveries and financial costs. Within the timeframe that the protocol was implemented, we were able to demonstrate a 40% reduction in atosiban-related costs compared to a parallel period when the clinical guidelines were not implemented. This translates into savings of about NIS 40,000 (New Israeli Shekels) (approximately $10,000). This was achieved without an increase in the rate of preterm deliveries. Implementing and enforcing a simple protocol of supervision on the use of atosiban enables a considerable reduction of financial costs related to atosiban, without hampering medical care.
Leitch, Heather A; Fibach, Eitan; Rachmilewitz, Eliezer
2017-05-01
Iron is an essential element for key cellular metabolic processes. However, transfusional iron overload (IOL) may result in significant cellular toxicity. IOL occurs in transfusion dependent hematologic malignancies (HM), may lead to pathological clinical outcomes, and IOL reduction may improve outcomes. In hematopoietic stem cell transplantation (SCT) for HM, IOL may have clinical importance; endpoints examined regarding an impact of IOL and IOL reduction include transplant-related mortality, organ function, infection, relapse risk, and survival. Here we review the clinical consequences of IOL and effects of IOL reduction before, during and following SCT for HM. IOL pathophysiology is discussed as well as available tests for IOL quantification including transfusion history, serum ferritin level, transferrin saturation, hepcidin, labile plasma iron and other parameters of iron-catalyzed oxygen free radicals, and organ IOL by imaging. Data-based recommendations for IOL measurement, monitoring and reduction before, during and following SCT for HM are made. Copyright © 2017 Elsevier B.V. All rights reserved.
Lin, Huifa; Shin, Won-Yong
2017-01-01
We study secondary random access in multi-input multi-output cognitive radio networks, where a slotted ALOHA-type protocol and successive interference cancellation are used. We first introduce three types of transmit beamforming performed by secondary users, where multiple antennas are used to suppress the interference at the primary base station and/or to increase the received signal power at the secondary base station. Then, we show a simple decentralized power allocation along with the equivalent single-antenna conversion. To exploit the multiuser diversity gain, an opportunistic transmission protocol is proposed, where the secondary users generating less interference are opportunistically selected, resulting in a further reduction of the interference temperature. The proposed methods are validated via computer simulations. Numerical results show that increasing the number of transmit antennas can greatly reduce the interference temperature, while increasing the number of receive antennas leads to a reduction of the total transmit power. Optimal parameter values of the opportunistic transmission protocol are examined according to three types of beamforming and different antenna configurations, in terms of maximizing the cognitive transmission capacity. All the beamforming, decentralized power allocation, and opportunistic transmission protocol are performed by the secondary users in a decentralized manner, thus resulting in an easy implementation in practice. PMID:28076402
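The interference-suppressing transmit beamforming described (nulling the signal at the primary base station while preserving gain toward the secondary base station) can be sketched as a null-space projection. The i.i.d. complex Gaussian channel model below is an illustrative assumption, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(3)
M = 4  # transmit antennas at a secondary user

# Assumed flat-fading channel vectors (i.i.d. complex Gaussian entries):
h_p = rng.normal(size=M) + 1j * rng.normal(size=M)  # channel to primary BS
h_s = rng.normal(size=M) + 1j * rng.normal(size=M)  # channel to secondary BS

# Project the secondary channel onto the null space of the primary channel:
# the resulting beam causes (ideally) zero interference at the primary BS
# while retaining most of the useful gain toward the secondary BS.
P_null = np.eye(M) - np.outer(h_p, h_p.conj()) / np.vdot(h_p, h_p)
w = P_null @ h_s
w /= np.linalg.norm(w)  # unit transmit power

print(abs(np.vdot(h_p, w)))  # ~0: interference nulled at the primary BS
print(abs(np.vdot(h_s, w)))  # substantial gain retained at the secondary BS
```

With one spatial degree of freedom spent on the null, M-1 dimensions remain for the useful signal, which is why adding transmit antennas reduces the interference temperature in the simulations above.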
Silva, Raquel V S; Tessarolo, Nathalia S; Pereira, Vinícius B; Ximenes, Vitor L; Mendes, Fábio L; de Almeida, Marlon B B; Azevedo, Débora A
2017-03-01
The elucidation of bio-oil composition is important to evaluate the processes of biomass conversion and its upgrading, and to suggest the proper use for each sample. Comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry (GC×GC-TOFMS) is a widely applied analytical approach for bio-oil investigation due to the higher separation and resolution capacity of this technique. This work addresses the issue of analytical performance to assess the comprehensive characterization of real bio-oil samples via GC×GC-TOFMS. The approach was applied to the individual quantification of compounds of real thermal (PWT), catalytic process (CPO), and hydrodeoxygenation process (HDO) bio-oils. Quantification was performed with reliability using the analytical curves of oxygenated and hydrocarbon standards as well as the deuterated internal standards. The limit of quantification was set at 1 ng µL⁻¹ for major standards, except for hexanoic acid, which was set at 5 ng µL⁻¹. The GC×GC-TOFMS method provided good precision (<10%) and excellent accuracy (recovery range of 70-130%) for the quantification of individual hydrocarbons and oxygenated compounds in real bio-oil samples. Sugars, furans, and alcohols appear as the major constituents of the PWT, CPO, and HDO samples, respectively. In order to obtain bio-oils with better quality, the catalytic pyrolysis process may be a better option than hydrogenation due to the effective reduction of oxygenated compound concentrations and the lower cost of the process, when hydrogen is not required to promote deoxygenation in the catalytic pyrolysis process. Copyright © 2016 Elsevier B.V. All rights reserved.
Koehler, Christian J; Arntzen, Magnus Ø; Thiede, Bernd
2015-05-15
Stable isotopic labeling techniques are useful for quantitative proteomics. A cost-effective and convenient method for diethylation by reductive amination was established, and the impact of using either carbon-13 or deuterium on quantification accuracy and precision was investigated. We established an effective approach for stable isotope labeling by diethylation of the amino groups of peptides. The approach was validated using matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) and nanospray liquid chromatography/electrospray ionization (nanoLC/ESI)-ion trap/orbitrap mass spectrometry, with MaxQuant for quantitative data analysis. Reaction conditions with low reagent costs, high yields and minor side reactions were established for diethylation. Furthermore, we showed that diethylation can be applied to up to sixplex labeling. For duplex experiments, we compared diethylation in the analysis of the HeLa cell proteome using acetaldehyde-13C2/12C2 and acetaldehyde-2H4/1H4. Equal numbers of proteins could be identified and quantified; however, 13C4/12C4-diethylation revealed a lower variance of quantitative peptide ratios within proteins, resulting in a higher precision of quantified proteins and fewer falsely regulated proteins. The results were compared with dimethylation, which showed minor effects because of the lower number of deuteriums. The described approach for diethylation of primary amines is a cost-effective and accurate method for up to sixplex relative quantification of proteomes. 13C4/12C4-diethylation enables duplex quantification based on chemical labeling without using deuterium, which reduces false-negative identifications and increases the quality of the quantification results. Copyright © 2015 John Wiley & Sons, Ltd.
Efficacy of Chinese auriculotherapy for stress in nursing staff: a randomized clinical trial
Kurebayashi, Leonice Fumiko Sato; da Silva, Maria Júlia Paes
2014-01-01
Objective: this randomized single-blind clinical study aimed to evaluate the efficacy of auriculotherapy with and without a protocol for reducing stress levels among nursing staff. Method: a total of 175 nursing professionals with medium and high scores according to Vasconcelos' Stress Symptoms List were divided into 3 groups: Control (58), Group with protocol (58), Group with no protocol (59). They were assessed at baseline, after 12 sessions, and at follow-up (30 days). Results: in the analysis of variance, statistically significant differences between the Control and Intervention groups were found in the two evaluations (p<0.05), with larger effect sizes (Cohen's d) for the No protocol group. The Yang Liver 1 and 2, Kidney, Brain Stem and Shen Men points were the most frequently used. Conclusion: individualized auriculotherapy, with no protocol, could expand the scope of the technique for stress reduction compared with auriculotherapy with a protocol. NCT: 01420835 PMID:25029046
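The effect-size comparison reported above relies on Cohen's d. A minimal stdlib computation is sketched below; the sample scores are invented for illustration, not the trial's data:

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation of the two groups."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

# Invented stress scores, not the study's measurements.
control = [52.0, 50.0, 55.0, 49.0, 51.0]
treated = [44.0, 42.0, 46.0, 43.0, 45.0]
d = cohens_d(control, treated)  # positive d: treated group scored lower
```

By common interpretations, |d| around 0.2 is small, 0.5 medium, and 0.8 or more large, which is the scale on which the two intervention groups were compared.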
Quantification of Reduction in Forced Vital Capacity of Sand Stone Quarry Workers
Singh, Suresh Kumar; Chowdhary, G. R.; Chhangani, V. D.; Purohit, Gopal
2007-01-01
This study assessed the reduction in forced vital capacity of the lungs of sandstone quarry workers exposed to high concentrations of respirable suspended particulate matter (RSPM). The sandstone quarry workers are engaged in different types of activities, such as drilling, loading and dressing. These working places have different concentrations of RSPM, and the workers are accordingly exposed to different concentrations. It was found that exposure duration and exposure concentration are the main factors responsible for damage to the workers' respiratory tracts. The study also revealed that most workers whose exposure duration exceeds 15 years suffer from silicosis. PMID:18180540
NASA Technical Reports Server (NTRS)
Devismes, D.; Cohen, B. A.; Li, Z.-H.; Miller, J. S.
2014-01-01
In planetary exploration, in situ absolute geochronology is one of the most important measurements to be accomplished. Until now, on Mars, the age of the surface has been determined only by crater density counting, which gives relative ages. These ages carry considerable uncertainty, as they depend on many parameters; moreover, the crater-count curves must be tied to absolute ages. Thus far, only the lost lander Beagle 2 was designed to conduct absolute geochronology measurements, though some recent attempts using MSL Curiosity show that this investigation is feasible and should be strongly encouraged for future flight. Experimental: The Potassium (K)-Argon Laser Experiment (KArLE) is being developed at MSFC through the NASA Planetary Instrument Definition and Development Program (PIDDP). The goal of this experiment is to provide in situ geochronology based on the K-Ar method. A laser ablates a rock under high vacuum, creating a plasma that is sensed by an optical spectrometer for Laser Induced Breakdown Spectroscopy (LIBS). The ablated material frees gases, including radiogenic 40Ar, which is measured by a mass spectrometer (MS). Because potassium is measured as a concentration and 40Ar as an absolute quantity, the ablated mass is needed in order to relate them. The mass is given by the product of the ablated volume and the density of the material. We therefore determine the mineralogy of the ablated material from the LIBS spectra and images and calculate its density; the volume of the pit is measured by microscopy. LIBS measurement of K under high vacuum: Three independent projects [1, 2, 3], including KArLE, are developing geochronological instruments based on this LA-LIBS-MS method. Despite several differences in their setups, all of them have validated the method with analyses and ages. However, they all describe difficulties with the LIBS measurement of K [3,4]. At ambient pressure, the quantification of K by LIBS on geological materials can be accurate [5].
However, the protocol of the LA-LIBS-MS experiment requires hundreds of shots under high vacuum in order to free enough 40Ar* to be measured by the QMS. This long ablation may induce significant changes in the LIBS spectra: the pressure may increase by orders of magnitude within the chamber, and the laser pit geometry can change the effectiveness of ablation and the intensity of plasma light received. These effects introduce variation between the first and last spectra, making the quantification of K more complex. The ablation of one crater can give, depending on the acquisition protocol, from tens to hundreds of spectra. Protocol and results: We are in the process of further characterizing the variation introduced into LIBS spectra by the use of hundreds of laser shots, and defining a protocol that can be used to ensure accuracy and reproducibility of the results. We are using natural rock powder standards fused in a furnace, as well as Mars analog samples with known K content. We will show the results of the calibration and some new statistical approaches for assessing the effects of long-duration ablation of rocks under high vacuum.
Live-Cell Imaging of Mitochondria and the Actin Cytoskeleton in Budding Yeast.
Higuchi-Sanabria, Ryo; Swayne, Theresa C; Boldogh, Istvan R; Pon, Liza A
2016-01-01
Maintenance and regulation of proper mitochondrial dynamics and functions are necessary for cellular homeostasis. Numerous diseases, including neurodegeneration and muscle myopathies, and overall cellular aging are marked by declining mitochondrial function and subsequent loss of multiple other cellular functions. For these reasons, optimized protocols are needed for visualization and quantification of mitochondria and their function and fitness. In budding yeast, mitochondria are intimately associated with the actin cytoskeleton and utilize actin for their movement and inheritance. This chapter describes optimal approaches for labeling mitochondria and the actin cytoskeleton in living budding yeast cells, for imaging the labeled cells, and for analyzing the resulting images.
One-step liquid-liquid extraction of cocaine from urine samples for gas chromatographic analysis.
Farina, Marcelo; Yonamine, Maurício; Silva, Ovandir A
2002-07-17
An improved technique for cocaine extraction from urine samples for gas chromatographic (GC) analysis is described. Employing a simple liquid-liquid extraction (LLE) of cocaine with a mixture of ethyl ether:isopropanol (9:1), the method presents a mean recovery of 74.49%. The limit of detection (LOD) and limit of quantification (LOQ) were 5 and 20 ng/ml, respectively. The method is highly precise (coefficient of variation (CV) <8%) and linear from 20 to 2000 ng/ml. It can be applied to detect the presence of cocaine in urine as a marker of its recent use in drug abuse treatment protocols.
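Quantification against a linear calibration curve, as validated here over 20-2000 ng/ml, can be sketched as follows. The calibration concentrations and detector responses below are hypothetical illustrations, not the paper's data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Hypothetical calibration standards spanning the validated 20-2000 ng/ml range;
# responses are peak-area ratios vs. an internal standard (invented values).
conc = [20.0, 100.0, 500.0, 1000.0, 2000.0]
resp = [0.041, 0.210, 1.010, 2.020, 4.005]
a, b = fit_line(conc, resp)

def quantify(response):
    """Back-calculate a sample concentration (ng/ml) from the calibration line."""
    return (response - a) / b
```

Results below the LOQ (20 ng/ml here) would be reported as detected-but-not-quantifiable rather than back-calculated this way.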
Refined approach for quantification of in vivo ischemia-reperfusion injury in the mouse heart
Medway, Debra J.; Schulz-Menger, Jeanette; Schneider, Jurgen E.; Neubauer, Stefan; Lygate, Craig A.
2009-01-01
Cardiac ischemia-reperfusion experiments in the mouse are important in vivo models of human disease. Infarct size is a particularly important scientific readout as virtually all cardiocirculatory pathways are affected by it. Therefore, such measurements must be exact and valid. The histological analysis, however, remains technically challenging, and the resulting quality is often unsatisfactory. For this report we have scrutinized each step involved in standard double-staining histology. We have tested published approaches and challenged their practicality. As a result, we propose an improved and streamlined protocol, which consistently yields high-quality histology, thereby minimizing experimental noise and group sizes. PMID:19820193
Characterizing SWCNT Dispersion in Polymer Composites
NASA Technical Reports Server (NTRS)
Lillehei, Peter T.; Kim, Jae-Woo; Gibbons, Luke; Park, Cheol
2007-01-01
The new wave of single-wall carbon nanotube (SWCNT) infused composites will yield structurally sound multifunctional nanomaterials. The SWCNT network requires thorough dispersion within the polymer matrix in order to maximize the benefits of the nanomaterial. However, before any nanomaterials can be used in aerospace applications, a means of quality assurance and quality control must be certified. Quality control certification requires a means of quantification; the measurement protocol, however, mandates a method of seeing the dispersion first. We describe here the new tools that we have developed and implemented, first to be able to see carbon nanotubes in polymers and second to measure or quantify the dispersion of the nanotubes.
Optimization of the scan protocols for CT-based material extraction in small animal PET/CT studies
NASA Astrophysics Data System (ADS)
Yang, Ching-Ching; Yu, Jhih-An; Yang, Bang-Hung; Wu, Tung-Hsin
2013-12-01
We investigated the effects of scan protocols on CT-based material extraction to minimize radiation dose while maintaining sufficient image information in small animal studies. The phantom simulation experiments were performed with the high dose (HD), medium dose (MD) and low dose (LD) protocols at 50, 70 and 80 kVp with varying mAs. The reconstructed CT images were segmented based on Hounsfield unit (HU)-physical density (ρ) calibration curves and the dual-energy CT-based (DECT) method. Compared to the (HU,ρ) method performed on CT images acquired with the 80 kVp HD protocol, a 2-fold improvement in segmentation accuracy and a 7.5-fold reduction in radiation dose were observed when the DECT method was performed on CT images acquired with the 50/80 kVp LD protocol, showing the possibility of reducing radiation dose while achieving high segmentation accuracy.
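Segmentation from an HU-density calibration curve amounts to interpolating each voxel's HU value along measured calibration pairs. A sketch with hypothetical calibration points (not the study's curve, which would be measured per tube voltage):

```python
import bisect

# Hypothetical (HU, density in g/cm^3) calibration pairs; a real curve would be
# measured from a density phantom at each kVp setting.
HU_PTS = [-1000.0, -500.0, 0.0, 500.0, 1500.0]
RHO_PTS = [0.001, 0.5, 1.0, 1.3, 1.9]

def hu_to_density(hu):
    """Piecewise-linear interpolation along the HU-density calibration curve."""
    if hu <= HU_PTS[0]:
        return RHO_PTS[0]
    if hu >= HU_PTS[-1]:
        return RHO_PTS[-1]
    i = bisect.bisect_right(HU_PTS, hu)
    frac = (hu - HU_PTS[i - 1]) / (HU_PTS[i] - HU_PTS[i - 1])
    return RHO_PTS[i - 1] + frac * (RHO_PTS[i] - RHO_PTS[i - 1])
```

Tissue classes would then be assigned by thresholding the interpolated densities; the DECT method instead combines two such curves from the low- and high-kVp scans.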
Sabry, Nirmeen; Dawoud, Dalia; Alansary, Adel; Hounsome, Natalia; Baines, Darrin
2015-12-01
Timely switching from intravenous to oral therapy ensures optimized treatment and efficient use of health care resources. Intravenous (IV) paracetamol is widely used for post-operative pain management but not always switched to the oral form in a timely manner, leading to an unnecessary increase in expenditure. This study aims to evaluate the impact of a multifaceted intervention to promote timely switching from the IV to oral form in the post-operative setting. An evidence-based prescribing protocol was designed and implemented by the clinical pharmacy team in a single district general hospital in Egypt. The protocol specified the criteria for appropriate prescribing of IV paracetamol. Doctors were provided with information and educational sessions prior to implementation. A prospective, quasi-experimental study was undertaken to evaluate its impact on IV paracetamol utilization and costs. Data on monthly utilization and costs were recorded for 12 months before and after implementation (January 2012 to December 2013). Data were analysed using interrupted time series analysis. Prior to implementation, in 2012, total spending on IV paracetamol was 674 154.00 Egyptian Pounds (L.E.) ($23,668.00). There was a non-significant (P > 0.05) downward trend in utilization (-32 ampoules per month) and costs [reduction of 632 L.E. ($222) per month]. Following implementation, an immediate decrease in utilization and costs (P < 0.05) and a trend change over the follow-up period were observed. The average monthly reduction was 26% (95% CI: 24% to 28%, P < 0.001). A multifaceted, protocol-based intervention to ensure timely IV-to-oral paracetamol switching achieved a significant reduction in utilization and cost of IV paracetamol in the first 5 months of its implementation. © 2015 John Wiley & Sons, Ltd.
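An interrupted time series analysis of this kind estimates the immediate level change by projecting the pre-intervention trend past the implementation date and comparing it with what was observed. A simplified single-segment sketch with invented utilization figures (the real analysis would fit level and slope terms jointly with significance tests):

```python
def fit_line(ts, ys):
    """Ordinary least-squares fit of y = intercept + slope*t."""
    n = len(ts)
    st, sy = sum(ts), sum(ys)
    stt = sum(t * t for t in ts)
    sty = sum(t * y for t, y in zip(ts, ys))
    slope = (n * sty - st * sy) / (n * stt - st * st)
    intercept = (sy - slope * st) / n
    return intercept, slope

# Invented monthly IV-paracetamol utilization (ampoules) for the 12 pre-protocol
# months, drifting down by 32 ampoules/month as in the reported baseline trend.
months_pre = list(range(12))
pre_use = [5200 - 32 * m for m in months_pre]
first_post_observed = 3600.0  # hypothetical first month after implementation

intercept, slope = fit_line(months_pre, pre_use)
projected = intercept + slope * 12              # counterfactual for month 12
level_change = first_post_observed - projected  # immediate-effect estimate
```

A negative `level_change` corresponds to the immediate post-implementation decrease the study reports; the follow-up trend change would be estimated the same way on the post-period months.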
Towards multilevel mental stress assessment using SVM with ECOC: an EEG approach.
Al-Shargie, Fares; Tang, Tong Boon; Badruddin, Nasreen; Kiguchi, Masashi
2018-01-01
Mental stress has been identified as one of the major contributing factors leading to various diseases such as heart attack, depression, and stroke. To avoid this, stress quantification is important for clinical intervention and disease prevention. This study aims to investigate the feasibility of exploiting electroencephalography (EEG) signals to discriminate between different stress levels. We propose a new assessment protocol whereby the stress level is represented by the difficulty of a mental arithmetic (MA) task at three levels, with time pressure and negative feedback as the stressors. Using 18 male subjects, the experimental results showed that there were significant differences in EEG response between the control and stress conditions at the different levels of the MA task, with p values < 0.001. Furthermore, we found a significant reduction in alpha rhythm power from one stress level to the next, p values < 0.05. In comparison, results from the NASA-TLX self-reporting questionnaire showed no significant differences between stress levels. In addition, we developed a discriminant analysis method based on a multiclass support vector machine (SVM) with error-correcting output codes (ECOC). Different stress levels were detected with an average classification accuracy of 94.79%. The lateral index (LI) results further showed that the right prefrontal cortex (PFC) was dominant in the response to mental stress (reduced alpha rhythm). The study demonstrated the feasibility of using EEG to classify multilevel mental stress and identified alpha rhythm power at the right prefrontal cortex as a suitable index.
Plasma Membrane Permeabilization by Trains of Ultrashort Electric Pulses
Ibey, Bennett L.; Mixon, Dustin G.; Payne, Jason A.; Bowman, Angela; Sickendick, Karl; Wilmink, Gerald J.; Roach, W. Patrick; Pakhomov, Andrei G.
2010-01-01
Ultrashort electric pulses (USEP) cause a long-lasting increase in cell membrane electrical conductance; a single USEP has been shown to increase cell membrane electrical conductance proportionally to the absorbed dose (AD), with a threshold of about 10 mJ/g. The present study extends quantification of the membrane permeabilization effect to multiple USEP and employed a more accurate protocol that identified the USEP effect as the difference between post- and pre-exposure conductance values (Δg) in individual cells. We showed that Δg can be increased either by increasing the number of pulses at a constant E-field, or by increasing the E-field at a constant number of pulses. For 60-ns pulses, an E-field threshold of 6 kV/cm for a single pulse was lowered to less than 1.7 kV/cm by applying trains of 100 pulses or more. However, the reduction of the E-field threshold was only achieved at the expense of a higher AD compared to a single-pulse exposure. Furthermore, the effect of multiple pulses was not fully determined by AD, suggesting that cells permeabilized by the first pulse(s) in the train become less vulnerable to subsequent pulses. This explanation was corroborated by a model that treated multiple-pulse exposures as a series of single-pulse exposures and assumed an exponential decline of cell susceptibility to USEP as Δg increased after each pulse during the course of the train. PMID:20171148
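The train model described in the last sentence, in which susceptibility declines exponentially as Δg grows, can be written out directly. The parameters `s0` and `decay` below are arbitrary toy values, not fitted constants from the study:

```python
import math

def train_response(n_pulses, s0=1.0, decay=0.5):
    """Cumulative conductance change (Delta-g, arbitrary units) over a pulse
    train: each pulse's increment shrinks exponentially as Delta-g grows,
    modeling cells becoming less vulnerable to later pulses."""
    dg = 0.0
    for _ in range(n_pulses):
        dg += s0 * math.exp(-decay * dg)
    return dg
```

The model is sub-additive: a 100-pulse train produces far less than 100 times the single-pulse Δg, consistent with the observation that the multiple-pulse effect is not fully determined by absorbed dose.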
Yousef, Gad G; Brown, Allan F; Funakoshi, Yayoi; Mbeunkui, Flaubert; Grace, Mary H; Ballington, James R; Loraine, Ann; Lila, Mary A
2013-05-22
Anthocyanins and phenolic acids are major secondary metabolites in blueberry with important implications for human health maintenance. An improved protocol was developed for the accurate, efficient, and rapid comparative screening for large blueberry sample sets. Triplicates of six commercial cultivars and four breeding selections were analyzed using the new method. The compound recoveries ranged from 94.2 to 97.5 ± 5.3% when samples were spiked with commercial standards prior to extraction. Eighteen anthocyanins and 4 phenolic acids were quantified in frozen and freeze-dried fruits. Large variations for individual and total anthocyanins, ranging from 201.4 to 402.8 mg/100 g, were assayed in frozen fruits. The total phenolic acid content ranged from 23.6 to 61.7 mg/100 g in frozen fruits. Across all genotypes, freeze-drying resulted in minor reductions in anthocyanin concentration (3.9%) compared to anthocyanins in frozen fruits. However, phenolic acids increased by an average of 1.9-fold (±0.3) in the freeze-dried fruit. Different genotypes frequently had comparable overall levels of total anthocyanins and phenolic acids, but differed dramatically in individual profiles of compounds. Three of the genotypes contained markedly higher concentrations of delphinidin 3-O-glucoside, cyanidin 3-O-glucoside, and malvidin 3-O-glucoside, which have previously been implicated as bioactive principles in this fruit. The implications of these findings for human health benefits are discussed.
García-Magariño, Iván; Lacuesta, Raquel; Lloret, Jaime
2018-03-27
Smart communication protocols are becoming a key mechanism for improving communication performance in networks such as wireless sensor networks. However, the literature lacks mechanisms for simulating smart communication protocols in precision agriculture for decreasing production costs. In this context, the current work presents an agent-based simulator of smart communication protocols for efficiently managing pesticides. The simulator considers the needs of electric power, crop health, percentage of alive bugs and pesticide consumption. The current approach is illustrated with three different communication protocols respectively called (a) broadcast, (b) neighbor and (c) low-cost neighbor. The low-cost neighbor protocol obtained a statistically-significant reduction in the need of electric power over the neighbor protocol, with a very large difference according to the common interpretations about the Cohen's d effect size. The presented simulator is called ABS-SmartComAgri and is freely distributed as open-source from a public research data repository. It ensures the reproducibility of experiments and allows other researchers to extend the current approach.
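The cost difference between a broadcast and a neighbor-style protocol can be illustrated with a crude message-count model; node counts and neighbor degree are hypothetical, and the actual simulator models electric power, crop health, alive bugs and pesticide consumption rather than raw message counts:

```python
def broadcast_msgs(n_nodes):
    """Broadcast protocol: every node sends to every other node."""
    return n_nodes * (n_nodes - 1)

def neighbor_msgs(n_nodes, avg_neighbors):
    """Neighbor protocol: each node exchanges messages only with nearby nodes."""
    return n_nodes * avg_neighbors
```

With transmissions dominating sensor-node energy budgets, the quadratic-versus-linear gap here is one intuition for why the neighbor-style protocols reduce the need of electric power in the reported simulations.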
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siman, W.; Mikell, J. K.; Kappadath, S. C.
Purpose: To develop a practical background compensation (BC) technique to improve quantitative 90Y-bremsstrahlung single-photon emission computed tomography (SPECT)/computed tomography (CT) using a commercially available imaging system. Methods: All images were acquired using medium-energy collimation in six energy windows (EWs), ranging from 70 to 410 keV. The EWs were determined based on the signal-to-background ratio in planar images of an acrylic phantom of different thicknesses (2-16 cm) positioned below a 90Y source and set at different distances (15-35 cm) from a gamma camera. The authors adapted the widely used EW-based scatter-correction technique by modeling the BC as scaled images. The BC EW was determined empirically in SPECT/CT studies using an IEC phantom based on the sphere activity recovery and residual activity in the cold lung insert. The scaling factor was calculated from 20 clinical planar 90Y images. Reconstruction parameters were optimized in the same SPECT images for improved image quantification and contrast. A count-to-activity calibration factor was calculated from 30 clinical 90Y images. Results: The authors found that the most appropriate imaging EW range was 90-125 keV. BC was modeled as 0.53× images in the EW of 310-410 keV. The background-compensated clinical images had higher image contrast than uncompensated images. The maximum deviation of their SPECT calibration in clinical studies was lowest (<10%) for SPECT with attenuation correction (AC) and SPECT with AC + BC. Using the proposed SPECT-with-AC + BC reconstruction protocol, the authors found that the recovery coefficient of a 37-mm sphere (in a 10-mm volume of interest) increased from 39% to 90% and that the residual activity in the lung insert decreased from 44% to 14% over that of SPECT images with AC alone. Conclusions: The proposed EW-based BC model was developed for 90Y bremsstrahlung imaging.
SPECT with AC + BC gave improved lesion detectability and activity quantification compared to SPECT with AC only. The proposed methodology can readily be used to tailor 90Y SPECT/CT acquisition and reconstruction protocols with different SPECT/CT systems for quantification and improved image quality in clinical settings.
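The EW-based compensation reduces, per pixel, to subtracting the scaled 310-410 keV background image from the 90-125 keV imaging window. A minimal sketch using the reported 0.53 scaling factor (the pixel values are invented, and a real implementation would operate on full 2-D/3-D arrays):

```python
# Scaling factor reported for the 310-410 keV background window.
SCALE = 0.53

def compensate(primary, background):
    """Per-pixel background compensation of the 90-125 keV image,
    clamped at zero so compensated counts stay non-negative."""
    return [max(p - SCALE * b, 0.0) for p, b in zip(primary, background)]

corrected = compensate([100.0, 10.0], [50.0, 40.0])  # invented pixel values
```

Subtracting the scaled background raises image contrast in low-count regions such as the cold lung insert, which is the mechanism behind the reported drop in residual activity.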
McCormick, Zachary L; Korn, Marc; Reddy, Rajiv; Marcolina, Austin; Dayanim, David; Mattie, Ryan; Cushman, Daniel; Bhave, Meghan; McCarthy, Robert J; Khan, Dost; Nagpal, Geeta; Walega, David R
2017-09-01
Determine outcomes of cooled radiofrequency ablation (C-RFA) of the genicular nerves for treatment of chronic knee pain due to osteoarthritis (OA). Cross-sectional survey. Academic pain medicine center. Consecutive patients with knee OA and 50% or greater pain relief following genicular nerve blocks who underwent genicular nerve C-RFA. Survey administration six or more months after C-RFA. Pain numeric rating scale (NRS), Medication Quantification Scale III (MQSIII), Patient Global Impression of Change (PGIC), and total knee arthroplasty (TKA) data were collected. Logistic regression was used to identify factors that predicted treatment success. Thirty-three patients (52 discrete knees) met inclusion criteria. Thirty-five percent (95% confidence interval [CI] = 22-48) of procedures resulted in the combined outcome of 50% or greater reduction in NRS score, reduction of 3.4 or more points in MQSIII score, and PGIC score consistent with "very much improved/improved." Nineteen percent (95% CI = 10-33) of procedures resulted in complete pain relief. Greater duration of pain and greater than 80% pain relief from diagnostic blocks were identified as predictors of treatment success. The accuracy of the model was 0.88 (95% CI = 0.78-0.97, P < 0.001). Genicular C-RFA demonstrated a success rate of 35% based on a robust combination of outcome measures, and 19% of procedures resulted in complete relief of pain at a minimum of six months of follow-up. Report of 80% or greater relief from diagnostic blocks and duration of pain of less than five years are associated with high accuracy in predicting treatment success. Further prospective study is needed to optimize the patient selection protocol and success rate of this procedure. © 2017 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Rodriguez-Canales, Jaime; Hanson, Jeffrey C; Hipp, Jason D; Balis, Ulysses J; Tangrea, Michael A; Emmert-Buck, Michael R; Bova, G Steven
2013-01-01
Isolation of well-preserved pure cell populations is a prerequisite for sound studies of the molecular basis of any tissue-based biological phenomenon. This updated chapter reviews current methods for obtaining anatomically specific signals from molecules isolated from tissues, a basic requirement for productive linking of phenotype and genotype. The quality of samples isolated from tissue and used for molecular analysis is often glossed over or omitted from publications, making interpretation and replication of data difficult or impossible. Fortunately, recently developed techniques allow life scientists to better document and control the quality of samples used for a given assay, creating a foundation for improvement in this area. Tissue processing for molecular studies usually involves some or all of the following steps: tissue collection, gross dissection/identification, fixation, processing/embedding, storage/archiving, sectioning, staining, microdissection/annotation, and pure analyte labeling/identification and quantification. We provide a detailed comparison of some current tissue microdissection technologies and provide detailed example protocols for tissue component handling upstream and downstream from microdissection. We also discuss some of the physical and chemical issues related to optimal tissue processing and include methods specific to cytology specimens. We encourage each laboratory to use these as a starting point for optimization of their overall process of moving from collected tissue to high-quality, appropriately anatomically tagged scientific results. Improvement in this area will significantly increase life science quality and productivity. The chapter is divided into introduction, materials, protocols, and notes subheadings. Because many protocols are covered in each of these sections, information relating to a single protocol is not contiguous. 
To get the greatest benefit from this chapter, readers are advised to read through the entire chapter first, identify protocols appropriate to their laboratory for each step in their workflow, and then reread entries in each section pertaining to each of these single protocols.
Widmann, G; Dalla Torre, D; Hoermann, R; Schullian, P; Gassner, E M; Bale, R; Puelacher, W
2015-04-01
The influence of dose reductions on diagnostic quality using a series of high-resolution ultralow-dose computed tomography (CT) scans for computer-assisted planning and surgery including the most recent iterative reconstruction algorithms was evaluated and compared with the fracture detectability of a standard cranial emergency protocol. A human cadaver head including the mandible was artificially prepared with midfacial and orbital fractures and scanned using a 64-multislice CT scanner. The CT dose index volume (CTDIvol) and effective doses were calculated using application software. Noise was evaluated as the standard deviation in Hounsfield units within an identical region of interest in the posterior fossa. Diagnostic quality was assessed by consensus reading of a craniomaxillofacial surgeon and radiologist. Compared with the emergency protocol at CTDIvol 35.3 mGy and effective dose 3.6 mSv, low-dose protocols down to CTDIvol 1.0 mGy and 0.1 mSv (97% dose reduction) may be sufficient for the diagnosis of dislocated craniofacial fractures. Non-dislocated fractures may be detected at CTDIvol 2.6 mGy and 0.3 mSv (93% dose reduction). Adaptive statistical iterative reconstruction (ASIR) 50 and 100 reduced average noise by 30% and 56%, and model-based iterative reconstruction (MBIR) by 93%. However, the detection rate of fractures could not be improved due to smoothing effects. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Low radiation dose in computed tomography: the role of iodine
Aschoff, Andrik J; Catalano, Carlo; Krix, Martin; Albrecht, Thomas
2017-01-01
Recent approaches to reducing radiation exposure during CT examinations typically utilize automated dose modulation strategies on the basis of lower tube voltage combined with iterative reconstruction and other dose-saving techniques. Less clearly appreciated is the potentially substantial role that iodinated contrast media (CM) can play in low-radiation-dose CT examinations. Herein we discuss the role of iodinated CM in low-radiation-dose examinations and describe approaches for the optimization of CM administration protocols to further reduce radiation dose and/or CM dose while maintaining image quality for accurate diagnosis. Similar to the higher iodine attenuation obtained at low-tube-voltage settings, high-iodine-signal protocols may permit radiation dose reduction by permitting a lowering of mAs while maintaining the signal-to-noise ratio. This is particularly feasible in first pass examinations where high iodine signal can be achieved by injecting iodine more rapidly. The combination of low kV and IR can also be used to reduce the iodine dose. Here, in optimum contrast injection protocols, the volume of CM administered rather than the iodine concentration should be reduced, since with high-iodine-concentration CM further reductions of iodine dose are achievable for modern first pass examinations. Moreover, higher concentrations of CM more readily allow reductions of both flow rate and volume, thereby improving the tolerability of contrast administration. PMID:28471242
Wang, Qingqing; Zhang, Suhong; Guo, Lili; Busch, Christine M; Jian, Wenying; Weng, Naidong; Snyder, Nathaniel W; Rangiah, Kannan; Mesaros, Clementina; Blair, Ian A
2015-01-01
Background: Absolute quantification of protein biomarkers such as serum apolipoprotein A1 by both immunoassays and LC-MS can provide misleading results. Results: Recombinant ApoA-1 internal standard was prepared using stable isotope labeling by amino acids in cell culture with [13C6,15N2]-lysine and [13C9,15N1]-tyrosine in human cells. A stable isotope dilution LC-MS method for serum ApoA-1 was validated and levels analyzed for 50 nonsmokers and 50 smokers. Conclusion: The concentration of ApoA-1 in nonsmokers was 169.4 mg/dl, with an 18.4% reduction to 138.2 mg/dl in smokers. The validated assay will have clinical utility for assessing the effects of smoking cessation and of therapeutic or dietary interventions in high-risk populations. PMID:26394123
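Stable isotope dilution quantifies the analyte from the light/heavy peak-area ratio and the known spiked internal-standard concentration. A minimal sketch; the peak areas and spiked concentration are hypothetical, chosen only so the toy result lands on the reported nonsmoker mean:

```python
def sid_concentration(area_light, area_heavy, heavy_conc, response_factor=1.0):
    """Concentration from the light/heavy peak-area ratio times the known
    internal-standard concentration (response factor assumed 1 here)."""
    return (area_light / area_heavy) * heavy_conc / response_factor

# Hypothetical areas and IS concentration (mg/dl), not the paper's raw data.
nonsmoker = sid_concentration(2.0, 1.0, 84.7)
reduction_pct = (nonsmoker - 138.2) / nonsmoker * 100  # vs. reported smoker mean
```

Because the heavy-labeled recombinant ApoA-1 co-elutes and co-ionizes with the endogenous protein, the ratio cancels matrix and recovery effects that bias immunoassays.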
Van Berkel, Megan A; MacDermott, Jennifer; Dungan, Kathleen M; Cook, Charles H; Murphy, Claire V
2017-12-01
Although studies demonstrate techniques to limit hypoglycaemia in critically ill patients, there are limited data supporting methods to improve management of existing hypoglycaemia. To assess the impact and sustainability of a computerised, three-tiered, nurse-driven protocol for hypoglycaemia treatment. Retrospective pre- and post-protocol study. Neurosciences and surgical intensive care units at a tertiary academic medical centre. Patients with a hypoglycaemic episode were included during a pre-protocol or post-protocol implementation period. An additional six-month cohort was evaluated to assess sustainability. Fifty-four patients were included for evaluation (35 pre- and 19 post-protocol); 122 patients were included in the sustainability cohort. Hypoglycaemia treatment significantly improved in the post-protocol cohort (20% vs. 52.6%, p=0.014), with additional improvement to 79.5% in the sustainability cohort. Time to follow-up blood glucose after treatment decreased from 122 [Q1-Q3: 46-242] minutes pre-protocol to 25 [Q1-Q3: 9-48] minutes post-protocol (p<0.0001). This reduction was maintained in the sustainability cohort [median of 29 min (Q1-Q3: 20-51)]. Implementation of a nurse-driven, three-tiered protocol for treatment of hypoglycaemia significantly improved treatment rates and reduced time to recheck blood glucose measurement. These benefits were sustained during a six-month period after protocol implementation. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.
2017-12-01
The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for the model to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method to screen out insensitive parameters, followed by MARS-based Sobol' sensitivity indices to quantify each parameter's contribution to the response variance through its first-order and higher-order effects. Pareto-optimal sets of the influential parameters were then found by an adaptive surrogate-based multi-objective optimization procedure, using a MARS model to approximate the parameter-response relationship and the SCE-UA algorithm to search for the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against daily streamflow simulations of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which limits the dimensionality of the calibration problem and enhances the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. 
The validation exercise indicated a large improvement in model performance with about 40-85% reduction in 1-NSE, and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, the results of which provide useful information that helps to understand the model behaviors and improve the model simulations.
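The screening stage of such a framework can be illustrated with a minimal one-at-a-time (OAT) sensitivity sketch. This is a simplification of the LH-OAT method (no Latin hypercube averaging over multiple base points), and the toy model, perturbation size, and threshold are assumptions for illustration only.

```python
# Minimal one-at-a-time (OAT) parameter screening sketch. A real LH-OAT
# analysis would average these OAT effects over many Latin-hypercube base
# points; the toy model and threshold here are illustrative assumptions.
def oat_screening(model, base, rel_delta, threshold):
    """Rank parameters by relative OAT effect; keep those above threshold."""
    y0 = model(base)
    effects = {}
    for name in base:
        perturbed = dict(base)
        perturbed[name] = base[name] * (1 + rel_delta)
        # relative output change per relative input change (elasticity-like)
        effects[name] = abs(model(perturbed) - y0) / (abs(y0) * rel_delta)
    sensitive = [p for p, e in effects.items() if e >= threshold]
    return effects, sensitive

def toy_model(p):
    # hypothetical response dominated by k1, nearly insensitive to k3
    return p["k1"] ** 2 + 0.5 * p["k2"] + 0.001 * p["k3"]

effects, keep = oat_screening(
    toy_model, {"k1": 2.0, "k2": 1.0, "k3": 1.0}, rel_delta=0.1, threshold=0.05
)
# k3's effect is negligible, so it would be fixed rather than calibrated
```

Fixing the screened-out parameters is what shrinks the calibration problem, as in the twelve-to-seven reduction reported above.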
Control of size and aspect ratio in hydroquinone-based synthesis of gold nanorods
NASA Astrophysics Data System (ADS)
Morasso, Carlo; Picciolini, Silvia; Schiumarini, Domitilla; Mehn, Dora; Ojea-Jiménez, Isaac; Zanchetta, Giuliano; Vanna, Renzo; Bedoni, Marzia; Prosperi, Davide; Gramatica, Furio
2015-08-01
In this article, we describe how the size and aspect ratio of gold nanorods obtained with a highly efficient protocol that uses hydroquinone as the reducing agent can be tuned by varying the amounts of CTAB and silver ions present in the "seed-growth" solution. Our approach not only increases the Au3+ reduction yield fourfold compared with the commonly used protocol based on ascorbic acid, but also allows a remarkable 50-60 % reduction in the amount of CTAB needed. In fact, according to our findings, the concentration of CTAB present in the seed-growth solution does not linearly influence the final aspect ratio of the obtained nanorods, and an optimal concentration range of 30-50 mM was identified as the one able to generate particles with more elongated shapes. For the optimized protocol, the effect of the concentration of Ag+ ions in the seed-growth solution and the stability of the obtained particles were also investigated.
Learning and Information Approaches for Inference in Dynamic Data-Driven Geophysical Applications
NASA Astrophysics Data System (ADS)
Ravela, S.
2015-12-01
Many geophysical inference problems are characterized by nonlinear processes, high-dimensional models, and complex uncertainties. A dynamic coupling between models, estimation, and sampling is typically sought to efficiently characterize and reduce uncertainty. This process is, however, fraught with several difficulties, chief among them the ability to deal with model errors and the efficacy of uncertainty quantification and data assimilation. In this presentation, we take three key ideas from learning and intelligent systems theory and apply them to two geophysical applications. The first idea is the use of ensemble learning to compensate for model error, the second is the development of tractable information-theoretic learning to deal with non-Gaussianity in inference, and the third is a manifold resampling technique for effective uncertainty quantification. We apply these methods first to the development of a cooperative autonomous observing system using sUAS for studying coherent structures. Second, we apply them to the problem of quantifying risk from hurricanes and storm surges in a changing climate. Results indicate that learning approaches can enable new effectiveness in cases where standard approaches to model reduction, uncertainty quantification, and data assimilation fail.
Schultealbert, Caroline; Baur, Tobias; Schütze, Andreas; Sauerwald, Tilman
2018-03-01
Dedicated methods for quantification and identification of reducing gases based on model-based temperature-cycled operation (TCO) using a single commercial MOS gas sensor are presented. During high temperature phases the sensor surface is highly oxidized, yielding a significant sensitivity increase after switching to lower temperatures (differential surface reduction, DSR). For low concentrations, the slope of the logarithmic conductance during this low-temperature phase is evaluated and can directly be used for quantification. For higher concentrations, the time constant for reaching a stable conductance during the same low-temperature phase is evaluated. Both signals represent the reaction rate of the reducing gas on the strongly oxidized surface at this low temperature and provide a linear calibration curve, which is exceptional for MOS sensors. By determining these reaction rates on different low-temperature plateaus and applying pattern recognition, the resulting footprint can be used for identification of different gases. All methods are tested over a wide concentration range from 10 ppb to 100 ppm (4 orders of magnitude) for four different reducing gases (CO, H₂, ammonia and benzene) using randomized gas exposures.
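The low-concentration signal described above, the slope of the logarithmic conductance during the low-temperature phase, amounts to a least-squares fit on ln(G). The sketch below illustrates that evaluation; the exponential conductance trace and the rate value are synthetic assumptions, not sensor data.

```python
import math

# Sketch: quantify a reducing gas from the slope of ln(G) during the
# low-temperature phase (DSR signal). The conductance trace and the rate
# constant below are synthetic, not measured sensor data.
def dsr_signal(times, conductances):
    """Ordinary least-squares slope of ln(G) versus time."""
    logs = [math.log(g) for g in conductances]
    n = len(times)
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    num = sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

rate = 0.04  # hypothetical surface-reduction rate for some gas concentration
times = [0.1 * i for i in range(50)]                     # seconds
conductances = [1e-6 * math.exp(rate * t) for t in times]  # siemens
slope = dsr_signal(times, conductances)
# with the linear calibration curve, concentration = slope / sensitivity
```

Because the recovered slope tracks the reaction rate on the oxidized surface, it maps linearly onto concentration, which is the exceptional property noted in the abstract.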
2015-02-06
PROTOCOL#: FDG20140008A. DATE: 6 February 2015. PROTOCOL TITLE: A Pilot Study of Common Bile Duct Reconstruction with... An adverse event of obstruction or bile peritonitis was reported to the IACUC chair. Reduction, refinement, or replacement of animal use; benefit to the DoD/USAF: we developed a porcine model of common bile duct injury and interposition grafting and gained experience managing these patients.
Mossetti, Riccardo; Saggiorato, Dèsirèe; Tron, Gian Cesare
2011-12-16
We describe a simple and novel protocol for the synthesis of tetrahydro-1,4-benzodiazepin-2-ones with three points of diversity, exploiting the acylating properties of the recently rediscovered Ugi-imide. The final compounds can be easily prepared in three synthetic steps using a multicomponent reaction, a Staudinger reduction, and an acylative protocol, with good to excellent yields for each synthetic step.
Formation of N-alkylpyrroles via intermolecular redox amination.
Pahadi, Nirmal K; Paley, Miranda; Jana, Ranjan; Waetzig, Shelli R; Tunge, Jon A
2009-11-25
A wide variety of aldehydes, ketones, and lactols undergo redox amination when allowed to react with 3-pyrroline in the presence of a mild Brønsted acid catalyst. This reaction utilizes the inherent reducing power of 3-pyrroline to perform the equivalent of a reductive amination to form alkyl pyrroles. In doing so, the reaction avoids stoichiometric reducing agents that are typically associated with reductive aminations. Moreover, the redox amination protocol allows access to alkyl pyrroles that cannot be made via standard reductive amination.
Carbon Offsets in California: What Role for Earth Scientists in the Policy Process? (Invited)
NASA Astrophysics Data System (ADS)
Cullenward, D.; Strong, A. L.
2013-12-01
This talk addresses the policy structure in California for developing and approving carbon offset protocols, which rely on findings from the environmental and earth sciences communities. In addition to providing an overview of the legal requirements of carbon offsets, we describe a series of case studies of how scientists can engage with policymakers. Based on those experiences, we suggest ways for the earth sciences community to become more involved in climate policy development. California's climate law, known as AB 32, requires that major sectors of the state's economy reduce their emissions to 1990 levels by 2020. As part of AB 32, the California Air Resources Board created a cap-and-trade market to ensure compliance with the statutory target. Under this system, regulated companies have to acquire tradable emissions permits (called 'compliance instruments') for the greenhouse gas emissions they release. The State allocates a certain number of allowances to regulated entities through a mixture of auctions and free transfers, with the total number equal to the overall emissions target; these allowances, along with approved offsets credits, are the compliance instruments that regulated entities are required to obtain by law. One of the key policy design issues in California's cap-and-trade market concerns the use of carbon offsets. Under AB 32, the Air Resources Board can issue offset credits to project developers who reduce emissions outside of the capped sectors (electricity, industry, and transportation)--or even outside of California--pursuant to approved offset protocols. Project developers then sell the credits to regulated companies in California. Essentially, offsets allow regulated entities in California to earn credit for emissions reductions that take place outside the scope of AB 32. Many regulated entities and economists are in favor of offsets because they view them as a source of low-cost compliance instruments. 
On the other hand, critics argue that some offset protocols award credits for activities that would have occurred anyway; by replacing a company's need to acquire an allowance in the carbon market, critics believe that poorly designed offset protocols increase greenhouse gas emissions. Thus, the effectiveness of the policy approach depends on the scientific integrity of the offset protocols. To date, California has approved offset protocols for emissions reductions in four applications: (1) forestry, (2) urban forestry, (3) livestock, and (4) destruction of ozone-depleting substances. In addition, the State is currently considering protocols that would address (5) methane emissions from mining and (6) greenhouse gas reductions from improved rice cultivation practices. These protocols rely heavily on findings from the environmental and earth sciences communities, especially when the protocol subject involves land use or land use change. Yet, due to budget constraints, the Air Resources Board is relying primarily on third-party protocol developers to design and propose the detailed structures under which offset credits will be issued. Despite the fact that any member of the public may participate in the governance regime that leads to protocol approvals, few scientists or scientific organizations provide input into the policy process. We use case studies from several of the California protocols to illustrate ways scientists can apply their skills to a crucial stage of climate policy development.
Soriano, Brian D; Tam, Lei-Ting T; Lu, Hsieng S; Valladares, Violeta G
2012-01-01
Recombinant proteins expressed in Escherichia coli are often produced as unfolded, inactive forms accumulated in inclusion bodies. Redox-coupled thiols are typically employed in the refolding process in order to catalyze the formation of correct disulfide bonds at maximal folding efficiency. These thiols and the recombinant proteins can form mixed disulfide bonds to generate thiol-protein adducts. In this work, we apply a fluorescent-based assay for the quantification of cysteine and cysteamine adducts as observed in E. coli-derived proteins. The thiols are released by reduction of the adducted protein, collected and labeled with a fluorescent reagent, 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate. The derivatized thiols are separated by reversed-phase HPLC and can be accurately quantified after method optimization. The estimated thiol content represents total amount of adducted forms present in the analyzed samples. The limit of quantification (LOQ) was established; specifically, the lowest amount of quantifiable cysteine adduction is 30 picograms and the lowest amount of quantifiable cysteamine adduction is 60 picograms. The assay is useful for quantification of adducts in final purified products as well as in-process samples from various purification steps. The assay indicates that the purification process accomplishes a decrease in cysteine adduction from 0.19 nmol adduct/nmol protein to 0.03 nmol adduct/nmol protein as well as a decrease in cysteamine adduction from 0.24 nmol adduct/nmol protein to 0.14 nmol adduct/nmol protein. Copyright © 2011. Published by Elsevier B.V.
Design of a stateless low-latency router architecture for green software-defined networking
NASA Astrophysics Data System (ADS)
Saldaña Cercós, Silvia; Ramos, Ramon M.; Ewald Eller, Ana C.; Martinello, Magnos; Ribeiro, Moisés. R. N.; Manolova Fagertun, Anna; Tafur Monroy, Idelfonso
2015-01-01
Expanding software-defined networking (SDN) to transport networks requires new strategies to deal with the large number of flows that future core networks will have to face. New southbound protocols within SDN have been proposed to benefit from having the control plane detached from the data plane, offering a cost- and energy-efficient forwarding engine. This paper presents an overview of a new approach named KeyFlow to simultaneously reduce latency, jitter, and power consumption in core network nodes. Results on an emulation platform indicate that round-trip time (RTT) can be reduced by more than 50% compared to the reference protocol OpenFlow, especially when flow tables are densely populated. Jitter reduction has been demonstrated experimentally on a NetFPGA-based platform, and a 57.3% power consumption reduction has been achieved.
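The KeyFlow literature describes stateless forwarding based on residue arithmetic: a route is encoded as a number whose residues modulo pairwise-coprime node keys give the output port at each hop, so core nodes need no per-flow table lookup. The following is a minimal sketch of that idea; the keys and ports are invented, and this is not the published implementation.

```python
from functools import reduce

# Hypothetical sketch of key-based stateless forwarding in the spirit of
# KeyFlow: a route label is built with the Chinese Remainder Theorem so that
# label % node_key yields that node's output port. Keys/ports are invented.
def route_label(ports, keys):
    """CRT construction: returns label with label % key == port for each hop."""
    M = reduce(lambda a, b: a * b, keys)
    label = 0
    for port, key in zip(ports, keys):
        Mi = M // key
        label += port * Mi * pow(Mi, -1, key)  # modular inverse (Python 3.8+)
    return label % M

keys = [5, 7, 11]   # pairwise-coprime node keys along the path
ports = [2, 4, 6]   # output port chosen at each node (must be < its key)
label = route_label(ports, keys)
# each core node forwards with a single modulo operation, no state:
recovered = [label % k for k in keys]
```

Replacing table lookups with one modulo per hop is what removes per-flow state from the core, the source of the latency and power savings reported above.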
ERIC Educational Resources Information Center
Penteado, Jose C.; Angnes, Lucio; Masini, Jorge C.; Oliveira, Paulo C. C.
2005-01-01
This article describes the reaction between nitrite and safranine O. This sensitive reaction is based on the disappearance of color of the reddish-orange azo dye, allowing the determination of nitrite at the mg mL-1 level. A factorial optimization of parameters was carried out and the method was applied for the quantification of nitrite in…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oldenburg, Curtis M.; Birkholzer, Jens T.
The Cap-and-Trade and Low Carbon Fuel Standard (LCFS) programs being administered by the California Air Resources Board (CARB) include Carbon Dioxide Capture and Storage (CCS) as a potential means to reduce greenhouse gas (GHG) emissions. However, there is currently no universal standard approach that quantifies GHG emissions reductions for CCS and that is suitable for the quantitative needs of the Cap-and-Trade and LCFS programs. CCS involves emissions related to the capture (e.g., arising from increased energy needed to separate carbon dioxide (CO2) from a flue gas and compress it for transport), transport (e.g., by pipeline), and storage of CO2 (e.g., due to leakage to the atmosphere from geologic CO2 storage sites). In this project, we reviewed and compared monitoring, verification, and accounting (MVA) protocols for CCS from around the world by focusing on protocols specific to the geologic storage part of CCS. In addition to presenting the review of these protocols, we highlight in this report those storage-related MVA protocols that we believe are particularly appropriate for CCS in California. We find that none of the existing protocols is completely appropriate for California, but various elements of all of them could be adopted and/or augmented to develop a rigorous, defensible, and practical surface leakage MVA protocol for California. The key features of a suitable surface leakage MVA plan for California are that it: (1) informs and validates the leakage risk assessment, (2) specifies use of the most effective monitoring strategies while still being flexible enough to accommodate special or site-specific conditions, (3) quantifies stored CO2, and (4) offers defensible estimates of uncertainty in monitored properties. 
California's surface leakage MVA protocol needs to be applicable to the main CO2 storage opportunities (in California and in other states with entities participating in California's Cap-and-Trade or LCFS programs), specifically CO2-enhanced oil recovery (CO2-EOR), CO2 injection into depleted gas reservoirs (with or without CO2-enhanced gas recovery (CO2-EGR)), as well as deep saline storage. Regarding the elements of an effective surface leakage MVA protocol, our recommendations for California are that: (1) both CO2 and methane (CH4) surface leakage should be monitored, especially for enhanced recovery scenarios, (2) emissions from all sources not directly related to injection and geologic storage (e.g., from capture, or pipeline transport) should be monitored and reported under a plan separate from the surface leakage MVA plan that is included as another component of the quantification methodology (QM), (3) the primary objective of the surface leakage MVA plan should be to quantify surface leakage of CO2 and CH4 and its uncertainty, with consideration of best-practices and state-of-the-art approaches to monitoring including attribution assessment, (4) effort should be made to monitor CO2 storage and migration in the subsurface to anticipate future surface leakage monitoring needs, (5) detailed descriptions of specific monitoring technologies and approaches should be provided in the MVA plan, (6) the main purpose of the CO2 injection project (CO2-EOR, CO2-EGR, or pure geologic carbon sequestration (GCS)) needs to be stated up front, (7) approaches to dealing with missing data and quantifying uncertainty need to be described, and (8) post-injection monitoring should go on for a period consistent with or longer than that prescribed by the U.S. EPA.
Bouden, Sarra; Chaussé, Annie; Dorbes, Stephane; El Tall, Omar; Bellakhal, Nizar; Dachraoui, Mohamed; Vautrin-Ul, Christine
2013-03-15
This paper describes the use of 4-carboxyphenyl-grafted screen-printed carbon electrodes (4-CP-SPEs) for trace lead analysis. These novel, simple-to-use electrodes were easily prepared by electrochemical reduction of the corresponding diazonium salt. Pb detection was then performed by a three-step method designed to avoid oxygen interference: (i) immersion of the grafted screen-printed electrode (SPE) in the sample and adsorption of Pb(II), (ii) reduction of the adsorbed Pb(II) by chronoamperometry (CA), and (iii) oxidation of Pb by anodic square-wave voltammetry (SWV). The reoxidation response was exploited for lead detection and quantification. To optimize the analytical response, the influence of the adsorption-medium pH and the adsorption time was investigated. Moreover, an interference study carried out with Cu(II), Hg(II), Al(III), Mn(II), Zn(II), and Cd(II) showed that no major interference is expected in the quantification of Pb(II). The described method provided a limit of detection of 1.2 × 10(-9) M and a limit of quantification of 4.1 × 10(-9) M. These performances indicate that the 4-CP-SPE can be considered an efficient tool for environmental analysis. Copyright © 2013 Elsevier B.V. All rights reserved.
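Detection and quantification limits like those reported are conventionally derived from the blank noise and the calibration slope. The sketch below uses the common 3σ/10σ convention, which is an assumption here (the abstract does not state which convention was applied), and the blank noise and slope values are hypothetical.

```python
# Conventional 3-sigma / 10-sigma detection limits from blank noise and
# calibration slope (an assumption; the paper does not state its convention;
# the numbers below are hypothetical, not the paper's data).
def detection_limits(sd_blank, slope):
    """Return (LOD, LOQ) = (3*sd_blank/slope, 10*sd_blank/slope)."""
    return 3 * sd_blank / slope, 10 * sd_blank / slope

# hypothetical SWV calibration: peak current (A) vs Pb(II) concentration (M)
sd_blank = 2.0e-8   # standard deviation of blank peak current, A
slope = 50.0        # calibration sensitivity, A per mol/L
lod, loq = detection_limits(sd_blank, slope)  # both in mol/L
```

Under this convention LOQ/LOD is fixed at 10/3 ≈ 3.3, which is close to the ratio of the reported limits.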
Aboal, J R; Boquete, M T; Carballeira, A; Casanova, A; Debén, S; Fernández, J A
2017-05-01
In this study we examined 6080 data points gathered by our research group during more than 20 years of research on the moss biomonitoring technique, in order to quantify the variability generated by different aspects of the protocol and to calculate the overall measurement uncertainty associated with the technique. The median variance of the concentrations of different pollutants measured in moss tissues attributed to the different methodological aspects was high, reaching values of 2851 (ng·g⁻¹)² for Cd (sample treatment), 35.1 (μg·g⁻¹)² for Cu (sample treatment), and 861.7 (ng·g⁻¹)² for Hg (material selection). These variances correspond to standard deviations that constitute 67%, 126%, and 59% of the regional background levels of these elements in the study region. The overall measurement uncertainty associated with the worst experimental protocol (5 subsamples, refrigerated, washed, 5 × 5 m sampling area, and once-a-year sampling) was between 2 and 6 times higher than that associated with the optimal protocol (30 subsamples, dried, unwashed, 20 × 20 m sampling area, and once-a-week sampling), and between 1.5 and 7 times higher than that associated with the standardized protocol (30 subsamples and once-a-year sampling). The overall measurement uncertainty associated with the standardized protocol could generate variations of between 14 and 47% in the regional background levels of Cd, Cu, Hg, Pb and Zn in the study area, and much higher levels of variation at polluted sampling sites. We demonstrated that although the overall measurement uncertainty of the technique is still high, it can be reduced by using already well-defined aspects of the protocol. 
Further standardization of the protocol together with application of the information on the overall measurement uncertainty would improve the reliability and comparability of the results of different biomonitoring studies, thus extending use of the technique beyond the context of scientific research. Copyright © 2017 Elsevier Ltd. All rights reserved.
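One common way to turn variance components like those reported into an overall measurement uncertainty is to add them in quadrature, which assumes the error sources are independent. The sketch below illustrates that combination; the component values and background level are hypothetical, not the study's data.

```python
import math

# Combining independent methodological variance components into an overall
# measurement standard deviation (quadrature sum; assumes independent error
# sources). Component values and background level are hypothetical.
def overall_sd(variances):
    return math.sqrt(sum(variances))

components = {                       # variances in (ng/g)^2, illustrative
    "material selection": 900.0,
    "sample treatment": 2851.0,
    "sampling area size": 400.0,
}
sd = overall_sd(components.values())            # ng/g
background = 100.0                              # hypothetical background, ng/g
relative_uncertainty = 100.0 * sd / background  # percent of background
```

Expressing the combined standard deviation as a percentage of the regional background is how figures such as "67% of background" can be interpreted.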
Molinié, Roland; Kwiecień, Renata A; Silvestre, Virginie; Robins, Richard J
2009-12-01
N-Demethylation of tropine is an important step in the degradation of this compound and related metabolites. With the purpose of understanding the reaction mechanism(s) involved, it is desirable to measure the 15N kinetic isotope effects (KIEs), which can be accessed through the 15N isotope shift (Δδ15N) during the reaction. Measuring the isotope fractionation in 15N during tropine degradation necessitates the extraction of the residual substrate from dilute aqueous solution without introducing artefactual isotope fractionation. Three protocols have been compared for the extraction and measurement of the 15N/14N ratio of tropine from aqueous medium, involving liquid-liquid phase partitioning or silica-C18 solid-phase extraction. Quantification was by gas chromatography (GC) on the recovered organic phase, and δ15N values were obtained by isotope ratio measurement mass spectrometry (irm-MS). Although all the protocols used can provide satisfactory data and both irm-EA-MS and irm-GC-MS can be used to obtain the δ15N values, the most convenient method is liquid-liquid extraction from a reduced aqueous volume combined with irm-GC-MS. The protocols were applied to the measurement of 15N isotope shifts during growth of a Pseudomonas strain that uses tropane alkaloids as its sole source of carbon and nitrogen. The accuracy of the determination of the 15N/14N ratio is sufficient for the determination of 15N-KIEs. Copyright 2009 John Wiley & Sons, Ltd.
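Isotope shifts of a residual substrate are commonly related to the KIE through the Rayleigh fractionation model. The sketch below is that standard treatment, assumed here for illustration; the enrichment factor and residual fraction are invented values, not measurements from the study.

```python
import math

# Standard Rayleigh fractionation treatment (assumed for illustration, not
# taken verbatim from the paper): for residual-substrate fraction f and
# isotope enrichment factor epsilon (per mil),
#   delta_residual = delta0 + epsilon * ln(f)
# and the KIE is approximately 1 / (1 + epsilon/1000).
def delta_residual(delta0, epsilon, f):
    return delta0 + epsilon * math.log(f)

def kie_from_epsilon(epsilon):
    return 1.0 / (1.0 + epsilon / 1000.0)

# hypothetical numbers: a normal KIE (epsilon < 0) enriches the residue in 15N
d_half = delta_residual(delta0=0.0, epsilon=-20.0, f=0.5)  # per mil at f=0.5
kie = kie_from_epsilon(-20.0)
```

Fitting measured δ15N values of the residual tropine against ln(f) yields ε, and hence the 15N-KIE, which is why artefact-free extraction of the residual substrate matters so much.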
Gómez, Lina Andrea; Escobar, Magally; Peñuela, Oscar
2015-01-01
To develop a protocol for obtaining autologous platelet-rich plasma (PRP) in healthy individuals and to determine the concentration of five major growth factors before platelet activation. This protocol could be integrated into guidelines for good clinical practice and research in regenerative medicine. Platelet-rich plasma was isolated by centrifugation from 38 healthy men and 42 women ranging from 18 to 59 years old. The platelet count and quantification of growth factors were analyzed in eighty samples, stratified by age and gender of the donor. Analyses were performed using the parametric t-test or Pearson's correlation analysis; p < 0.05 was considered statistically significant. Our centrifugation protocol allowed us to concentrate basal platelet counts by a factor of 1.6 to 4.9 (mean = 2.8). There was no correlation between platelet concentration and the levels of the following growth factors: VEGF-D (r = 0.009, p = 0.4105), VEGF-A (r = 0.0068, p = 0.953), PDGF subunit AA (p = 0.3618; r = 0.1047), and PDGF-BB (p = 0.5936; r = 0.6095). Likewise, there was no correlation between donor gender and growth factor concentrations. Only TGF-β concentration was correlated with platelet concentration (r = 0.3163, p = 0.0175). The procedure used allowed us to make preparations that were rich in platelets, low in leukocytes and red blood cells, and sterile. Our results showed biological variation in the growth factor content of PRP. The factors influencing these results should be studied further.
Nagayama, Yasunori; Nakaura, Takeshi; Tsuji, Akinori; Urata, Joji; Furusawa, Mitsuhiro; Yuki, Hideaki; Hirarta, Kenichiro; Oda, Seitaro; Kidoh, Masafumi; Utsunomiya, Daisuke; Yamashita, Yasuyuki
2017-02-01
The purpose of this study was to evaluate the feasibility of a contrast medium (CM), radiation dose reduction protocol for cerebral bone-subtraction CT angiography (BSCTA) using 80-kVp and sinogram-affirmed iterative reconstruction (SAFIRE). Seventy-five patients who had undergone BSCTA under the 120- (n = 37) or the 80-kVp protocol (n = 38) were included. CM was 370 mgI/kg for the 120-kVp and 296 mgI/kg for the 80-kVp protocol; the 120- and the 80-kVp images were reconstructed with filtered back-projection (FBP) and SAFIRE, respectively. We compared effective dose (ED), CT attenuation, image noise, and contrast-to-noise ratio (CNR) of two protocols. We also scored arterial contrast, sharpness, depiction of small arteries, visibility near skull base/clip, and overall image quality on a four-point scale. ED was 62% lower at 80- than 120-kVp (0.59 ± 0.06 vs 1.56 ± 0.13 mSv, p < 0.01). CT attenuation of the internal carotid artery (ICA) and middle cerebral artery (MCA) was significantly higher on 80- than 120-kVp (ICA: 557.4 ± 105.7 vs 370.0 ± 59.3 Hounsfield units (HU), p < 0.01; MCA: 551.9 ± 107.9 vs 364.6 ± 62.2 HU, p < 0.01). The CNR was also significantly higher on 80- than 120-kVp (ICA: 46.2 ± 10.2 vs 36.9 ± 7.6, p < 0.01; MCA: 45.7 ± 10.0 vs 35.7 ± 9.0, p < 0.01). Visibility near skull base and clip was not significantly different (p = 0.45). The other subjective scores were higher with the 80- than the 120-kVp protocol (p < 0.05). The 80-kVp acquisition with SAFIRE yields better image quality for BSCTA and substantial reduction in the radiation and CM dose compared to the 120-kVp with FBP protocol.
Brehmer, Jess L; Husband, Jeffrey B
2014-10-01
There are relatively few studies in the literature that specifically evaluate accelerated rehabilitation protocols for distal radial fractures treated with open reduction and internal fixation (ORIF). The purpose of this study was to compare the early postoperative outcomes (at zero to twelve weeks postoperatively) of patients enrolled in an accelerated rehabilitation protocol with those of patients enrolled in a standard rehabilitation protocol following ORIF for a distal radial fracture. We hypothesized that patients with accelerated rehabilitation after volar ORIF for a distal radial fracture would have an earlier return to function compared with patients who followed a standard protocol. From November 2007 to November 2010, eighty-one patients with an unstable distal radial fracture were prospectively randomized to follow either an accelerated or a standard rehabilitation protocol after undergoing ORIF with a volar plate for a distal radial fracture. Both groups began with gentle active range of motion at three to five days postoperatively. At two weeks, the accelerated group initiated wrist/forearm passive range of motion and strengthening exercises, whereas the standard group initiated passive range of motion and strengthening at six weeks postoperatively. Patients were assessed at three to five days, two weeks, three weeks, four weeks, six weeks, eight weeks, twelve weeks, and six months postoperatively. Outcomes included Disabilities of the Arm, Shoulder and Hand (DASH) scores (primary outcome) and measurements of wrist flexion/extension, supination, pronation, grip strength, and palmar pinch. The patients in the accelerated group had better mobility, strength, and DASH scores at the early postoperative time points (zero to eight weeks postoperatively) compared with the patients in the standard rehabilitation group. The difference between the groups was both clinically relevant and statistically significant. 
Patients who follow an accelerated rehabilitation protocol that emphasizes motion immediately postoperatively and initiates strengthening at two weeks after volar ORIF of a distal radial fracture have an earlier return to function than patients who follow a more standard rehabilitation protocol. Therapeutic Level I. See Instructions for Authors for a complete description of levels of evidence. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipnharski, I; Carranza, C; Quails, N
Purpose: To optimize the adult head CT protocol by reducing dose to an appropriate level while providing CT images of diagnostic quality. Methods: Five cadavers were scanned from the skull base to the vertex using a routine adult head CT protocol (120 kVp, 270 mA, 0.75 s rotation, 0.5 mm × 32 detectors, 70.8 mGy CTDIvol) followed by seven reduced-dose protocols with varying combinations of reduced tube current, reduced rotation time, and increased detectors, with CTDIvol ranging from 38.2 to 65.6 mGy. Organ doses were directly measured with 21 OSL dosimeters placed on the surface and implanted in the head by a neurosurgeon. Two neuroradiologists assessed grey-white matter differentiation, fluid space, ventricular size, midline shift, brain mass, edema, ischemia, and skull fractures on a three-point scale: (1) Unacceptable, (2) Borderline Acceptable, and (3) Acceptable. Results: For the standard scan, doses to the skin, lens of the eye, salivary glands, thyroid, and brain were 37.55 mGy, 49.65 mGy, 40.67 mGy, 4.63 mGy, and 27.33 mGy, respectively. Two cadavers had cerebral edema due to postmortem changes, causing the grey-white matter differentiation to appear less distinct. Two cadavers with preserved grey-white matter received acceptable scores for all image quality features for the protocol with a CTDIvol of 57.3 mGy, allowing organ dose savings ranging from 34% to 45%. One cadaver allowed for greater dose reduction, with a protocol CTDIvol of 42 mGy. Conclusion: Efforts to optimize scan protocols should consider both dose and clinical image quality. This is made possible with postmortem subjects, whose brains are similar to those of patients, allowing for an investigation of ideal scan parameters. Radiologists at our institution accepted scan protocols acquired with lower scan parameters, with CTDIvol values closer to the American College of Radiology's (ACR) Achievable Dose level of 57 mGy.
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2007-01-01
This report presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV) [SMV]. The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space. Also, additional innovative state space reduction techniques are introduced that can be used in future verification efforts applied to this and other protocols.
Ultrasound image filtering using the multiplicative model
NASA Astrophysics Data System (ADS)
Navarrete, Hugo; Frery, Alejandro C.; Sanchez, Fermin; Anto, Joan
2002-04-01
Ultrasound images, as a special case of coherent images, are normally corrupted by multiplicative noise, i.e., speckle. Speckle reduction is a difficult task due to the multiplicative nature of the noise, but good statistical models of speckle formation are useful for designing adaptive speckle reduction filters. In this article a new statistical model, emerging from the Multiplicative Model framework, is presented and compared to previous models (Rayleigh, Rice, and K laws). It is shown that the proposed model gives the best performance when modeling the statistics of ultrasound images. Finally, the parameters of the model can be used to quantify the extent of speckle formation; this quantification is applied to adaptive speckle reduction filter design. The effectiveness of the filter is demonstrated on typical in-vivo log-compressed B-scan images obtained with a clinical ultrasound system.
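The multiplicative model the abstract builds on can be sketched numerically. This is an illustrative simulation, not the paper's estimator: the observed intensity is Z = X·Y, where X is the underlying backscatter and Y is unit-mean speckle; for fully developed speckle the echo amplitude is Rayleigh distributed, so the intensity speckle is exponential with mean 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Multiplicative model: observed intensity Z = X * Y, with X the
# backscatter and Y unit-mean exponential (intensity) speckle.
x = 100.0                               # homogeneous backscatter level
y = rng.exponential(1.0, size=200_000)  # unit-mean intensity speckle
z = x * y                               # observed, speckled intensities

# Because the noise is multiplicative, its strength scales with the
# signal: the coefficient of variation (std/mean) stays near 1
# regardless of the backscatter level.
print(float(z.mean()))
print(float(z.std() / z.mean()))
```

This signal-dependent noise strength is exactly why additive-noise filters perform poorly on speckle and why model-based adaptive filters are needed.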
Lee, I-Te; Wang, Jun-Sing; Fu, Chia-Po; Lin, Shih-Yi; Sheu, Wayne Huey-Herng
2016-10-01
Brain-derived neurotrophic factor (BDNF) plays a role in energy homeostasis. However, the postprandial BDNF change has not been well investigated. We hypothesized that the BDNF increment after an oral glucose challenge is associated with body weight. To address this possibility, male adults with obesity in conjunction with metabolic syndrome were compared with normal-weight controls at baseline in the initial cross-sectional protocol. The obese subjects then underwent a 12-week program for body-weight reduction in the prospective protocol. The area under the curve (AUC) of serum BDNF was recorded during a 75 g oral glucose tolerance test, and the BDNF AUC index was defined as [(AUC of BDNF) − (fasting BDNF × 2 hours)]/(fasting BDNF × 2 hours). A total of 25 controls and 36 obese subjects completed the study assessments. In the cross-sectional protocol, the BDNF AUC index was significantly higher in the obese subjects than in the controls (9.0 ± 16.5% vs. −8.0 ± 22.5%, P = 0.001). After weight reduction (from 97.0 ± 12.5 kg to 88.6 ± 12.9 kg, P < 0.001), the percentage change in body weight was significantly associated with the BDNF AUC index after the study (95% CI 0.21 to 1.82, P = 0.015). Using 6% weight reduction as a cut-off value, a larger weight reduction reliably predicted a negative BDNF AUC index. In conclusion, a high BDNF AUC index was observed for obese men in this study, whereas the index value significantly decreased after body-weight reduction. These findings suggest that the postprandial BDNF increment may be associated with obesity.
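The BDNF AUC index defined above can be computed with a trapezoidal AUC over the 2-hour oral glucose tolerance test. The sampling times and BDNF values below are hypothetical, for illustration only:

```python
import numpy as np

# index = [AUC(BDNF) - fasting BDNF x 2 h] / (fasting BDNF x 2 h),
# as defined in the abstract; all numbers below are hypothetical.

def bdnf_auc_index(times_h, bdnf):
    # Trapezoidal AUC of the BDNF curve over the sampling window.
    auc = float(np.sum((bdnf[1:] + bdnf[:-1]) / 2.0 * np.diff(times_h)))
    # Rectangle swept by the fasting level over the same window.
    baseline = float(bdnf[0] * (times_h[-1] - times_h[0]))
    return (auc - baseline) / baseline

times = np.array([0.0, 0.5, 1.0, 2.0])     # hours after the glucose load
bdnf = np.array([20.0, 24.0, 26.0, 22.0])  # ng/mL, hypothetical values
print(round(bdnf_auc_index(times, bdnf) * 100, 2))  # index as a percentage
```

A curve that rises above the fasting level gives a positive index; a flat or declining postprandial curve gives an index at or below zero, which is the pattern the study associates with larger weight reduction.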
Noël, Peter B; Engels, Stephan; Köhler, Thomas; Muenzel, Daniela; Franz, Daniela; Rasper, Michael; Rummeny, Ernst J; Dobritz, Martin; Fingerle, Alexander A
2018-01-01
Background: The explosive growth of computed tomography (CT) has led to growing public health concern about patient and population radiation dose. A recently introduced technique for dose reduction, which can be combined with tube-current modulation, over-beam reduction, and organ-specific dose reduction, is iterative reconstruction (IR). Purpose: To evaluate image quality, at different radiation dose levels, of three reconstruction algorithms for the diagnosis of patients with proven liver metastases under tumor follow-up. Material and Methods: A total of 40 thorax-abdomen-pelvis CT examinations acquired from 20 patients in a tumor follow-up were included. All patients were imaged using a standard-dose and a specific low-dose CT protocol. Reconstructed slices were generated with three different reconstruction algorithms: classical filtered back projection (FBP); a first-generation iterative noise-reduction algorithm (iDose4); and a next-generation model-based IR algorithm (IMR). Results: The overall detection of liver lesions tended to be higher with the IMR algorithm than with FBP or iDose4. The IMR dataset at standard dose yielded the highest overall detectability, while the low-dose FBP dataset showed the lowest. For the low-dose protocols, detectability of liver lesions was significantly improved with IMR compared with FBP or iDose4 (P = 0.01). The radiation dose decreased by an approximate factor of 5 between the standard-dose and the low-dose protocol. Conclusion: The latest generation of IR algorithms significantly improved diagnostic image quality and provided virtually noise-free images for ultra-low-dose CT imaging.
Sizoo, Bram B; Kuiper, Erik
2017-05-01
Anxiety and depression co-occur in 50-70% of adults with autism spectrum disorder (ASD), but treatment methods for these comorbid problems have not been systematically studied. Recently, two ASD-tailored protocols were published: mindfulness-based stress reduction (MBSR) and cognitive behavioural therapy (CBT). We wanted to investigate whether both methods are equally effective in reducing anxiety and depression symptoms among adults with ASD. 59 adults with ASD and anxiety or depression scores above 7 on the Hospital Anxiety and Depression Scale gave informed consent to participate; 27 followed the CBT protocol and 32 the MBSR protocol. Anxiety and depression scores, autism symptoms, rumination, and global mood were registered at the start, at the end of the 13-week treatment period, and at 3-month follow-up. Irrational beliefs and mindful attention awareness were used as process measures during treatment and at follow-up. Results indicate that both MBSR and CBT are associated with a reduction in anxiety and depressive symptoms among adults with ASD, with a sustained effect at follow-up, but without a main effect for treatment group. A similar pattern was seen for the reduction of autistic symptoms and rumination and the improvement in global mood. There are some indications that MBSR may be preferred over CBT with respect to the treatment effect on anxiety when baseline scores on measures of irrational beliefs or positive global mood are high. Mindfulness-based and cognitive behavioural therapies are both promising treatment methods for reducing comorbid anxiety and depression in adults with ASD. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fahmi, Rachid; Eck, Brendan L.; Vembar, Mani; Bezerra, Hiram G.; Wilson, David L.
2014-03-01
We investigated the use of an advanced hybrid iterative reconstruction (IR) technique (iDose4, Philips Healthcare) for low-dose dynamic myocardial CT perfusion (CTP) imaging. A porcine model was created to mimic coronary stenosis through partial occlusion of the left anterior descending (LAD) artery with a balloon catheter. The severity of the LAD occlusion was adjusted with FFR measurements. Dynamic CT images were acquired at end-systole (45% R-R) using a multi-detector CT (MDCT) scanner. Various corrections were applied to the acquired scans to reduce motion and imaging artifacts. Absolute myocardial blood flow (MBF) was computed with a deconvolution-based approach using singular value decomposition (SVD). We compared a high- and a low-dose radiation protocol corresponding to two different tube-voltage/tube-current combinations (80 kVp/100 mAs and 120 kVp/150 mAs). The corresponding radiation doses for these protocols are 7.8 mSv and 34.3 mSv, respectively. The images were reconstructed using conventional FBP and three noise-reduction strengths of the IR method, iDose4. Flow contrast-to-noise ratio (CNRf), as obtained from the MBF maps, was used to quantitatively evaluate the effect of reconstruction on contrast between normal and ischemic myocardial tissue. Preliminary results showed that using iDose4 to reconstruct low-dose images provides better or comparable CNRf to that of high-dose images reconstructed with FBP, suggesting significant dose savings. CNRf improved with all three iDose4 strengths compared to FBP for both protocols. When using the entire 4D dynamic sequence for MBF computation, a 77% dose reduction was achieved, while considering only half the scans (i.e., every other heart cycle) allowed even further dose reduction while maintaining a relatively higher CNRf.
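The deconvolution step named above (SVD-based MBF estimation) can be sketched as follows. The tissue curve c(t) is the arterial input function (AIF) convolved with the flow-scaled residue function r(t), and MBF is recovered as max(r); the curve shapes and truncation threshold here are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def conv_matrix(aif, dt):
    """Lower-triangular Toeplitz matrix A so that A @ r = (aif * r)(t)."""
    n = len(aif)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1]
    return A * dt

def mbf_svd(aif, tissue, dt, rel_threshold=1e-6):
    U, s, Vt = np.linalg.svd(conv_matrix(aif, dt))
    # Zero out small singular values to regularize the inversion; noisy
    # clinical data would need a much larger relative threshold.
    s_inv = np.where(s > rel_threshold * s.max(), 1.0 / s, 0.0)
    r = Vt.T @ (s_inv * (U.T @ tissue))  # pseudo-inverse applied to c(t)
    return float(r.max())                # MBF = peak of the residue

# Noise-free synthetic check with a known flow value.
dt, n, flow = 1.0, 40, 0.8
t = np.arange(n) * dt
aif = np.exp(-t)                    # simplified mono-exponential AIF
r_true = flow * np.exp(-t / 6.0)    # flow-scaled residue function
tissue = conv_matrix(aif, dt) @ r_true
print(round(mbf_svd(aif, tissue, dt), 2))  # recovers the simulated flow, 0.8
```

In the noise-free case the inversion recovers the simulated flow exactly; with noisy dynamic data, raising the truncation threshold trades bias in the MBF estimate for stability, which is where reconstruction noise (FBP vs. iDose4) directly affects the flow maps.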
Ricci, R Daniel; Cerullo, James; Blanc, Robert O; McMahon, Patrick J; Buoncritiani, Anthony M; Stone, David A; Fu, Freddie H
2008-01-01
Objective: To present the case of a talocrural dislocation with a Weber type C fibular fracture in a National Collegiate Athletic Association Division I football athlete. Background: The athlete, while attempting to make a tackle during a game, collided with an opponent, who in turn stepped on the lateral aspect of the athlete's ankle, resulting in forced ankle eversion and external rotation. On-field evaluation showed a laterally displaced talocrural dislocation. Immediate reduction was performed in the athletic training room to maintain skin integrity. Post-reduction radiographs revealed a Weber type C fibular fracture and increased medial joint clear space. A below-knee, fiberglass splint was applied to stabilize the ankle joint complex. Differential Diagnosis: Subtalar dislocation, Maisonneuve fracture, malleolar fracture, deltoid ligament rupture, syndesmosis disruption. Treatment: The sports medicine staff immediately splinted and transported the athlete to the athletic training room to reduce the dislocation. The athlete then underwent an open reduction and internal fixation procedure to stabilize the injury: 2 syndesmosis screws and a fibular plate were placed to keep the ankle joint in an anatomically reduced position. With the guidance of the athletic training staff, the athlete underwent an accelerated physical rehabilitation protocol in an effort to return to sport as quickly and safely as possible. Uniqueness: Most talocrural dislocations and associated Weber type C fibular fractures are due to motor vehicle accidents or falls. We are the first to describe this injury in a Division I football player and to present a general rehabilitation protocol for a high-level athlete. Conclusions: Sports medicine practitioners must recognize that this injury can occur in the athletic environment. Prompt reduction, early surgical intervention, sufficient resources, and an accelerated rehabilitation protocol all contributed to a successful outcome in the patient. 
PMID:18523569