A quantitative reconstruction software suite for SPECT imaging
NASA Astrophysics Data System (ADS)
Namías, Mauro; Jeraj, Robert
2017-11-01
Quantitative Single Photon Emission Computed Tomography (SPECT) imaging allows for measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT-based attenuation correction and scatter correction from hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom, and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at an organ level. This software suite helps increase the quantitative accuracy of SPECT scanners.
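The corrections described above enter the OSEM iteration as a forward model plus an additive scatter term. Below is a minimal, hypothetical sketch of such an update in Python/NumPy; the system matrix A is assumed to already fold in attenuation and collimator-response modelling, and this is an illustration of the standard algorithm, not the suite's actual implementation.

```python
import numpy as np

def osem(y, A, subsets, n_iter=4, scatter=None):
    """Minimal OSEM reconstruction sketch.

    y       : measured projections (n_bins,)
    A       : system matrix (n_bins, n_voxels), assumed to include
              attenuation and collimator-response effects
    subsets : list of index arrays partitioning the projection bins
    scatter : optional additive scatter estimate per bin
    """
    x = np.ones(A.shape[1])                      # uniform initial image
    s = np.zeros_like(y) if scatter is None else scatter
    for _ in range(n_iter):
        for sub in subsets:
            As, ys, ss = A[sub], y[sub], s[sub]
            proj = As @ x + ss                   # forward projection + scatter
            ratio = ys / np.maximum(proj, 1e-12)
            # multiplicative update, normalized by subset sensitivity
            x *= (As.T @ ratio) / np.maximum(As.T @ np.ones_like(ys), 1e-12)
    return x
```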
Costanzi, Stefano; Skorski, Matthew; Deplano, Alessandro; Habermehl, Brett; Mendoza, Mary; Wang, Keyun; Biederman, Michelle; Dawson, Jessica; Gao, Jia
2016-11-01
With the present work we quantitatively studied the modellability of the inactive state of Class A G protein-coupled receptors (GPCRs). Specifically, we constructed models of one of the Class A GPCRs for which structures solved in the inactive state are available, namely the β2AR, using as templates each of the other class members for which structures solved in the inactive state are also available. Our results showed a detectable linear correlation between model accuracy and model/template sequence identity. This suggests that the likely accuracy of the homology models that can be built for a given receptor can generally be forecasted on the basis of the available templates. We also probed whether sequence alignments that allow for the presence of gaps within the transmembrane domains to account for structural irregularities afford better models than the classical alignment procedures that do not allow for the presence of gaps within such domains. As our results indicated, although the overall differences are very subtle, the inclusion of internal gaps within the transmembrane domains has a noticeable beneficial effect on the local structural accuracy of the domain in question. Copyright © 2016 Elsevier Inc. All rights reserved.
Stress Corrosion Cracking Facet Crystallography of Ti-8Al-1Mo-1V (Preprint)
2011-05-01
Stress corrosion cracking facets of Ti-8Al-1Mo-1V have been characterized using quantitative fractography and electron backscatter diffraction (EBSD). The results indicate that most facets are formed nearly perpendicular to the loading direction. EBSD and quantitative tilt fractography [27;29] allow for determination of the crystallographic fracture plane to an accuracy between 1° [29] and...
Schlippenbach, Trixi von; Oefner, Peter J; Gronwald, Wolfram
2018-03-09
Non-uniform sampling (NUS) allows the accelerated acquisition of multidimensional NMR spectra. The aim of this contribution was the systematic evaluation of the impact of various quantitative NUS parameters on the accuracy and precision of 2D NMR measurements of urinary metabolites. Urine aliquots spiked with varying concentrations (15.6-500.0 µM) of tryptophan, tyrosine, glutamine, glutamic acid, lactic acid, and threonine, which can only be resolved fully by 2D NMR, were used to assess the influence of the sampling scheme, reconstruction algorithm, amount of omitted data points, and seed value on the quantitative performance of NUS in 1H,1H-TOCSY and 1H,1H-COSY45 NMR spectroscopy. Sinusoidal Poisson-gap sampling and a compressed sensing approach employing the iterative re-weighted least squares method for spectral reconstruction allowed a 50% reduction in measurement time while maintaining sufficient quantitative accuracy and precision for both types of homonuclear 2D NMR spectroscopy. Together with other advances in instrument design, such as state-of-the-art cryogenic probes, use of 2D NMR spectroscopy in large biomedical cohort studies seems feasible.
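For context, a simplified sketch of sinusoidal Poisson-gap sampling (in the spirit of Hyberts and Wagner) is shown below in Python; the retuning loop and increment counts are illustrative assumptions, not the schedule generator used in the study.

```python
import numpy as np

def sinusoidal_poisson_gap(n_total, n_keep, seed=0):
    """Pick n_keep of n_total indirect-dimension increments with
    sinusoidally weighted Poisson gaps (small gaps early in the FID,
    where the signal is strongest)."""
    rng = np.random.default_rng(seed)
    adj = 2.0 * (n_total / n_keep - 1.0)       # initial mean-gap scale
    for _ in range(1000):                      # re-tune until count matches
        picks, i = [], 0
        while i < n_total:
            picks.append(i)
            i += 1
            lam = adj * np.sin(0.5 * np.pi * i / n_total)
            i += rng.poisson(lam)              # skip a Poisson-sized gap
        if len(picks) == n_keep:
            return np.array(picks)
        adj *= len(picks) / n_keep             # too many picks -> larger gaps
    raise RuntimeError("sampling schedule did not converge")

schedule = sinusoidal_poisson_gap(256, 128)    # 50% NUS, as in the study
```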
Krummen, David E; Patel, Mitul; Nguyen, Hong; Ho, Gordon; Kazi, Dhruv S; Clopton, Paul; Holland, Marian C; Greenberg, Scott L; Feld, Gregory K; Faddis, Mitchell N; Narayan, Sanjiv M
2010-11-01
Quantitative ECG Analysis. Optimal atrial tachyarrhythmia management is facilitated by accurate electrocardiogram interpretation, yet typical atrial flutter (AFl) may present without sawtooth F-waves or RR regularity, and atrial fibrillation (AF) may be difficult to separate from atypical AFl or rapid focal atrial tachycardia (AT). We analyzed whether improved diagnostic accuracy using a validated analysis tool significantly impacts costs and patient care. We performed a prospective, blinded, multicenter study using a novel quantitative computerized algorithm to identify atrial tachyarrhythmia mechanism from the surface ECG in patients referred for electrophysiology study (EPS). In 122 consecutive patients (age 60 ± 12 years) referred for EPS, 91 sustained atrial tachyarrhythmias were studied. ECGs were also interpreted by 9 physicians from 3 specialties for comparison and to allow healthcare system modeling. Diagnostic accuracy was compared to the diagnosis at EPS. A Markov model was used to estimate the impact of improved arrhythmia diagnosis. We found 13% of typical AFl ECGs had neither sawtooth flutter waves nor RR regularity, and were misdiagnosed by the majority of clinicians (0/6 correctly diagnosed by consensus visual interpretation) but correctly by quantitative analysis in 83% (5/6, P = 0.03). AF diagnosis was also improved through use of the algorithm (92%) versus visual interpretation (primary care: 76%, P < 0.01). Economically, we found that these improvements in diagnostic accuracy resulted in an average cost-savings of $1,303 and 0.007 quality-adjusted-life-years per patient. Typical AFl and AF are frequently misdiagnosed using visual criteria. Quantitative analysis improves diagnostic accuracy and results in improved healthcare costs and patient outcomes. © 2010 Wiley Periodicals, Inc.
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
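A minimal sketch of D-optimal time point selection is shown below, assuming a bi-exponential FLIM-FRET decay with the quenched fraction among the parameters of interest; the greedy search, nominal parameter values, and finite-difference sensitivities are illustrative simplifications of the paper's sensitivity-analysis approach.

```python
import numpy as np

def decay(t, f, tau_q, tau_u):
    """Bi-exponential FLIM-FRET decay; f is the quenched fraction."""
    return f * np.exp(-t / tau_q) + (1 - f) * np.exp(-t / tau_u)

def jacobian(t, p, h=1e-6):
    """Finite-difference sensitivities of the decay w.r.t. each parameter."""
    J = np.empty((t.size, p.size))
    for k in range(p.size):
        dp = p.copy(); dp[k] += h
        J[:, k] = (decay(t, *dp) - decay(t, *p)) / h
    return J

def d_optimal_subset(t_full, p_nominal, n_pts):
    """Greedily pick time points maximizing log det(J^T J) (D-optimality)."""
    J = jacobian(t_full, p_nominal)
    chosen = []
    for _ in range(n_pts):
        best, best_score = None, -np.inf
        for i in range(t_full.size):
            if i in chosen:
                continue
            Js = J[chosen + [i], :]
            score = np.linalg.slogdet(Js.T @ Js + 1e-12 * np.eye(J.shape[1]))[1]
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return np.sort(t_full[chosen])

t = np.linspace(0, 9, 90)             # 90 gate positions (ns), as in the full set
p = np.array([0.4, 0.6, 2.5])         # nominal f, tau_q, tau_u (illustrative)
print(d_optimal_subset(t, p, 10))     # reduced 10-point schedule
```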
Fluorescence-based Western blotting for quantitation of protein biomarkers in clinical samples.
Zellner, Maria; Babeluk, Rita; Diestinger, Michael; Pirchegger, Petra; Skeledzic, Senada; Oehler, Rudolf
2008-09-01
Since most high-throughput techniques used in biomarker discovery are very time- and cost-intensive, highly specific and quantitative alternative analytical methods are needed for routine analysis. Conventional Western blotting allows detection of specific proteins down to the level of single isotypes, while its quantitative accuracy is rather limited. We report a novel and improved quantitative Western blotting method. The use of fluorescently labelled secondary antibodies strongly extends the dynamic range of the quantitation and improves the correlation with the protein amount (r=0.997). By an additional fluorescent staining of all proteins immediately after their transfer to the blot membrane, it is possible to visualise simultaneously the antibody binding and the total protein profile. This allows for an accurate correction for protein load. Applying this normalisation, it could be demonstrated that fluorescence-based Western blotting is able to reproduce a quantitative analysis of two specific proteins in blood platelet samples from 44 subjects with different diseases as initially conducted by 2D-DIGE. These results show that the proposed fluorescence-based Western blotting is an adequate technique for biomarker quantitation and suggest application possibilities that go far beyond biomarker quantitation.
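The load correction amounts to dividing each band signal by its lane's relative total-protein stain; a toy calculation with hypothetical densitometry values:

```python
import numpy as np

# Hypothetical per-lane readouts from the same membrane: antibody-channel
# band intensity and total-protein stain intensity.
band  = np.array([1520.0, 980.0, 2210.0, 1310.0])
total = np.array([8.1e4, 5.3e4, 9.7e4, 7.9e4])

# Correct each band for its lane's protein load, scaled so a lane with
# average load keeps its original magnitude.
normalized = band / (total / total.mean())
```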
Quantitative hard x-ray phase contrast imaging of micropipes in SiC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kohn, V. G.; Argunova, T. S.; Je, J. H., E-mail: jhje@postech.ac.kr
2013-12-15
Peculiarities of quantitative hard x-ray phase contrast imaging of micropipes in SiC are discussed. The micropipe is assumed to be a hollow cylinder with an elliptical cross section. The major and minor diameters can be restored using a least-squares fitting procedure by comparing the experimental data, i.e. the profile across the micropipe axis, with profiles calculated from phase contrast theory. It is shown that one projection image gives information that does not allow a complete determination of the elliptical cross section if the orientation of the micropipe is not known. Another problem is the limited accuracy in estimating the diameters, partly because of the use of pink synchrotron radiation, which is necessary because a monochromatic beam intensity is not sufficient to reveal the weak contrast from a very small object. The general problems of accuracy in estimating the two diameters using the least-squares procedure are discussed. Two experimental examples are considered to demonstrate small as well as modest accuracies in estimating the diameters.
This research utilizes the quantitative polymerase chain reaction (qPCR) to determine ribosomal copy number of fungal organisms found in unhealthy indoor environments. Knowing specific copy numbers will allow for greater accuracy in quantification when utilizing current qPCR tec...
Nemec, Ursula; Nemec, Stefan F; Novotny, Clemens; Weber, Michael; Czerny, Christian; Krestan, Christian R
2012-06-01
To investigate the diagnostic accuracy, through quantitative analysis, of contrast-enhanced ultrasound (CEUS), using a microbubble contrast agent, in the differentiation of thyroid nodules. This prospective study enrolled 46 patients with solitary, scintigraphically non-functional thyroid nodules. These patients were scheduled for surgery and underwent preoperative CEUS with pulse-inversion harmonic imaging after intravenous microbubble contrast medium administration. Using histology as a standard of reference, time-intensity curves of benign and malignant nodules were compared by means of peak enhancement and wash-out enhancement relative to the baseline intensity using a mixed model ANOVA. ROC analysis was performed to assess the diagnostic accuracy in the differentiation of benign and malignant nodules on CEUS. The complete CEUS data of 42 patients (31/42 [73.8%] benign and 11/42 [26.2%] malignant nodules) revealed a significant difference (P < 0.001) in enhancement between benign and malignant nodules. Furthermore, based on ROC analysis, CEUS demonstrated sensitivity of 76.9%, specificity of 84.8% and accuracy of 82.6%. Quantitative analysis of CEUS using a microbubble contrast agent allows the differentiation of benign and malignant thyroid nodules and may potentially serve, in addition to grey-scale and Doppler ultrasound, as an adjunctive tool in the assessment of patients with thyroid nodules. • Contrast-enhanced ultrasound (CEUS) helps differentiate between benign and malignant thyroid nodules. • Quantitative CEUS analysis yields sensitivity of 76.9% and specificity of 84.8%. • CEUS may be a potentially useful adjunct in assessing thyroid nodules.
Application of linear regression analysis in accuracy assessment of rolling force calculations
NASA Astrophysics Data System (ADS)
Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.
1998-10-01
Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool that allows systematic and random prediction errors to be separated from those related to measurements. A quantitative characteristic of the model predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application. However, the outlined approach can be used to assess the performance of any computational model.
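A minimal illustration of the idea with synthetic force data: regressing measurements on predictions, a slope near 1 and an intercept near 0 indicate no systematic error, while the residual scatter quantifies the random error. This sketch uses SciPy and is not the authors' exact statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
predicted = rng.uniform(8, 20, 60)                           # model forces (synthetic)
measured = 1.03 * predicted - 0.2 + rng.normal(0, 0.4, 60)   # "mill" measurements

res = stats.linregress(predicted, measured)
# slope != 1 or intercept != 0 flags a systematic prediction error;
# the residual standard deviation estimates the random error.
resid = measured - (res.slope * predicted + res.intercept)
print(f"slope={res.slope:.3f}  intercept={res.intercept:.3f}  "
      f"r^2={res.rvalue**2:.3f}  s_resid={resid.std(ddof=2):.3f}")
```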
Living specimen tomography by digital holographic microscopy: morphometry of testate amoeba
NASA Astrophysics Data System (ADS)
Charrière, Florian; Pavillon, Nicolas; Colomb, Tristan; Depeursinge, Christian; Heger, Thierry J.; Mitchell, Edward A. D.; Marquet, Pierre; Rappaz, Benjamin
2006-08-01
This paper presents an optical diffraction tomography technique based on digital holographic microscopy. Quantitative 2-dimensional phase images are acquired for regularly-spaced angular positions of the specimen covering a total angle of π, allowing 3-dimensional quantitative refractive index distributions to be built by an inverse Radon transform. A 20x magnification allows a resolution better than 3 μm in all three dimensions, with an accuracy better than 0.01 for the refractive index measurements. This technique is, for the first time to our knowledge, applied to a living specimen (a testate amoeba, Protista). Morphometric measurements are extracted from the tomographic reconstructions, showing that the commonly used method for testate amoeba biovolume evaluation leads to systematic underestimation by about 50%.
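The reconstruction step is a standard inverse Radon transform applied slice by slice to the stack of phase projections. A sketch with scikit-image, using a stand-in phantom rather than real phase data (in older scikit-image versions the keyword is filter= instead of filter_name=):

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.25)       # stand-in for one specimen slice
theta = np.linspace(0.0, 180.0, 90, endpoint=False)  # angular positions covering pi
sinogram = radon(phantom, theta=theta)               # one 1-D projection per angle
recon = iradon(sinogram, theta=theta, filter_name="ramp")  # filtered back-projection
```

In the microscope, each projection is a quantitative phase image, so repeating this reconstruction for every slice yields the 3-dimensional refractive index map.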
Fully automatic and precise data analysis developed for time-of-flight mass spectrometry.
Meyer, Stefan; Riedo, Andreas; Neuland, Maike B; Tulej, Marek; Wurz, Peter
2017-09-01
Scientific objectives of current and future space missions are focused on the investigation of the origin and evolution of the solar system, with particular emphasis on habitability and signatures of past and present life. For in situ measurements of the chemical composition of solid samples on planetary surfaces, the neutral atmospheric gas and the thermal plasma of planetary atmospheres, mass spectrometers making use of time-of-flight mass analysers are a widely used technique. However, such investigations imply measurements with good statistics and, thus, a large amount of data to be analysed. Therefore, faster and especially robust automated data analysis with enhanced accuracy is required. In this contribution, an automatic data analysis software, which allows fast and precise quantitative analysis of time-of-flight mass spectrometric data, is presented and discussed in detail. A crucial part of this software is a robust and fast peak finding algorithm with a consecutive numerical integration method allowing precise data analysis. We tested our analysis software with data from different time-of-flight mass spectrometers and different measurement campaigns thereof. The quantitative analysis of isotopes, using automatic data analysis, yields results with an accuracy of isotope ratios up to 100 ppm for a signal-to-noise ratio (SNR) of 10^4. We show that the accuracy of isotope ratios is in fact proportional to SNR^-1. Furthermore, we observe that the accuracy of isotope ratios is inversely proportional to the mass resolution. Additionally, we show that the accuracy of isotope ratios depends on the sample width T_s as T_s^0.5. Copyright © 2017 John Wiley & Sons, Ltd.
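A toy version of the peak-finding-plus-integration step in Python/SciPy; the prominence threshold and the use of peak bases as integration bounds are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.signal import find_peaks

def isotope_ratio(time, signal, prominence=50.0):
    """Locate TOF peaks and integrate each numerically; return the
    ratio of the two largest peak areas (smaller / larger)."""
    peaks, props = find_peaks(signal, prominence=prominence)
    left, right = props["left_bases"], props["right_bases"]
    areas = np.array([np.trapz(signal[l:r + 1], time[l:r + 1])
                      for l, r in zip(left, right)])
    a, b = np.sort(areas)[-2:]
    return a / b
```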
NASA Astrophysics Data System (ADS)
Brahmi, Djamel; Cassoux, Nathalie; Serruys, Camille; Giron, Alain; Lehoang, Phuc; Fertil, Bernard
1999-05-01
To support ophthalmologists in their daily routine and enable the quantitative assessment of progression of Cytomegalovirus infection as observed on series of retinal angiograms, a methodology allowing an accurate comparison of retinal borders has been developed. In order to evaluate the accuracy of borders, ophthalmologists were asked to repeatedly outline boundaries between infected and noninfected areas. As a matter of fact, accuracy of drawing relies on local features such as contrast, quality of image, background..., all factors which make the boundaries more or less perceptible from one part of an image to another. In order to estimate the accuracy of retinal borders directly from image analysis, an artificial neural network (a succession of unsupervised and supervised neural networks) has been designed to correlate accuracy of drawing (as calculated from ophthalmologists' hand-outlines) with local features of the underlying image. Our method has been applied to the quantification of CMV retinitis. It is shown that border accuracy is properly predicted and characterized by a confidence envelope that allows, after a registration phase based on fixed landmarks such as vessel forks, accurate assessment of the evolution of CMV infection.
Validation of a Spectral Method for Quantitative Measurement of Color in Protein Drug Solutions.
Yin, Jian; Swartz, Trevor E; Zhang, Jian; Patapoff, Thomas W; Chen, Bartolo; Marhoul, Joseph; Shih, Norman; Kabakoff, Bruce; Rahimi, Kimia
2016-01-01
A quantitative spectral method has been developed to precisely measure the color of protein solutions. In this method, a spectrophotometer is utilized for capturing the visible absorption spectrum of a protein solution, which can then be converted to color values (L*a*b*) that represent human perception of color in a quantitative three-dimensional space. These quantitative values (L*a*b*) allow for calculating the best match of a sample's color to a European Pharmacopoeia reference color solution. In order to qualify this instrument and assay for use in clinical quality control, a technical assessment was conducted to evaluate the assay suitability and precision. Setting acceptance criteria for this study required development and implementation of a unique statistical method for assessing precision in 3-dimensional space. Different instruments, cuvettes, protein solutions, and analysts were compared in this study. The instrument accuracy, repeatability, and assay precision were determined. The instrument and assay are found suitable for use in assessing color of drug substances and drug products and is comparable to the current European Pharmacopoeia visual assessment method. In the biotechnology industry, a visual assessment is the most commonly used method for color characterization, batch release, and stability testing of liquid protein drug solutions. Using this method, an analyst visually determines the color of the sample by choosing the closest match to a standard color series. This visual method can be subjective because it requires an analyst to make a judgment of the best match of color of the sample to the standard color series, and it does not capture data on hue and chroma that would allow for improved product characterization and the ability to detect subtle differences between samples. To overcome these challenges, we developed a quantitative spectral method for color determination that greatly reduces the variability in measuring color and allows for a more precise understanding of color differences. In this study, we established a statistical method for assessing precision in 3-dimensional space and demonstrated that the quantitative spectral method is comparable with respect to precision and accuracy to the current European Pharmacopoeia visual assessment method. © PDA, Inc. 2016.
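The spectral conversion follows the standard CIE pipeline: the transmittance (T = 10^-A from the measured absorbance) is weighted by an illuminant and the colour-matching functions to obtain tristimulus XYZ, which is then mapped to L*a*b*. A sketch using the standard formulas; the CMF and illuminant tables are assumed to be supplied on a common wavelength grid.

```python
import numpy as np

def lab_from_transmission(T, cmf, illuminant):
    """Convert a visible transmission spectrum to CIE L*a*b*.

    T          : transmittance sampled on the CMF wavelength grid
    cmf        : (n, 3) CIE x-bar, y-bar, z-bar colour-matching functions
    illuminant : illuminant spectral power on the same grid (e.g. D65)
    """
    k = 100.0 / np.sum(illuminant * cmf[:, 1])       # normalize Y_white to 100
    XYZ = k * (T * illuminant) @ cmf                 # tristimulus values
    white = k * illuminant @ cmf                     # reference white
    def f(t):                                        # standard CIE Lab nonlinearity
        d = 6.0 / 29.0
        return np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)
    fx, fy, fz = f(XYZ / white)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)  # L*, a*, b*
```

The resulting L*a*b* coordinates can then be compared by Euclidean distance against those of the European Pharmacopoeia reference solutions to report the best match.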
A custom-built PET phantom design for quantitative imaging of printed distributions.
Markiewicz, P J; Angelis, G I; Kotasidis, F; Green, M; Lionheart, W R; Reader, A J; Matthews, J C
2011-11-07
This note presents a practical approach to a custom-made design of PET phantoms enabling the use of digital radioactive distributions with high quantitative accuracy and spatial resolution. The phantom design allows planar sources of any radioactivity distribution to be imaged in transaxial and axial (sagittal or coronal) planes. Although the design presented here is specially adapted to the high-resolution research tomograph (HRRT), the presented methods can be adapted to almost any PET scanner. Although the presented phantom design has many advantages, a number of practical issues had to be overcome such as positioning of the printed source, calibration, uniformity and reproducibility of printing. A well counter (WC) was used in the calibration procedure to find the nonlinear relationship between digital voxel intensities and the actual measured radioactive concentrations. Repeated printing together with WC measurements and computed radiography (CR) using phosphor imaging plates (IP) were used to evaluate the reproducibility and uniformity of such printing. Results show satisfactory printing uniformity and reproducibility; however, calibration is dependent on the printing mode and the physical state of the cartridge. As a demonstration of the utility of using printed phantoms, the image resolution and quantitative accuracy of reconstructed HRRT images are assessed. There is very good quantitative agreement in the calibration procedure between HRRT, CR and WC measurements. However, the high resolution of CR and its quantitative accuracy supported by WC measurements made it possible to show the degraded resolution of HRRT brain images caused by the partial-volume effect and the limits of iterative image reconstruction.
Automatic Gleason grading of prostate cancer using quantitative phase imaging and machine learning
NASA Astrophysics Data System (ADS)
Nguyen, Tan H.; Sridharan, Shamira; Macias, Virgilia; Kajdacsy-Balla, Andre; Melamed, Jonathan; Do, Minh N.; Popescu, Gabriel
2017-03-01
We present an approach for automatic diagnosis of tissue biopsies. Our methodology consists of a quantitative phase imaging tissue scanner and machine learning algorithms to process these data. We illustrate the performance by automatic Gleason grading of prostate specimens. The imaging system operates on the principle of interferometry and, as a result, reports on the nanoscale architecture of the unlabeled specimen. We use these data to train a random forest classifier to learn textural behaviors of prostate samples and classify each pixel in the image into different classes. Automatic diagnosis results were computed from the segmented regions. By combining morphological features with quantitative information from the glands and stroma, logistic regression was used to discriminate regions with Gleason grade 3 versus grade 4 cancer in prostatectomy tissue. The overall accuracy of this classification derived from a receiver operating characteristic curve was 82%, which is in the range of human error when interobserver variability is considered. We anticipate that our approach will provide a clinically objective and quantitative metric for Gleason grading, allowing us to corroborate results across instruments and laboratories and feed the computer algorithms for improved accuracy.
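A schematic of the two-stage classification with scikit-learn, using random stand-in features, since the phase-imaging feature extraction itself is the bulk of the real pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-ins: per-pixel texture features from the phase image, and
# per-region morphology/phase features from the segmented glands.
pixel_features = rng.normal(size=(5000, 8))
pixel_labels = rng.integers(0, 3, 5000)          # e.g. gland / stroma / lumen
region_features = rng.normal(size=(200, 5))
region_grade = rng.integers(0, 2, 200)           # 0 = grade 3, 1 = grade 4

# Stage 1: random forest segments the image pixel by pixel.
seg = RandomForestClassifier(n_estimators=100).fit(pixel_features, pixel_labels)
pixel_classes = seg.predict(pixel_features)

# Stage 2: logistic regression discriminates grade 3 vs grade 4 regions.
grader = LogisticRegression().fit(region_features, region_grade)
p_grade4 = grader.predict_proba(region_features)[:, 1]
```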
Shinkins, Bethany; Yang, Yaling; Abel, Lucy; Fanshawe, Thomas R
2017-04-14
Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health-economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised, 2) which methods were used to synthesise test accuracy evidence and how did the results inform the economic model, 3) how/whether threshold effects were explored, 4) how the potential dependency between multiple tests in a pathway was accounted for, and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling was obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters but most of those that used multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care but the majority found limited evidence on test accuracy in primary care settings. The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests and the impact of multiple diagnostic tests.
Complementary techniques: validation of gene expression data by quantitative real time PCR.
Provenzano, Maurizio; Mocellin, Simone
2007-01-01
Microarray technology can be considered the most powerful tool for screening gene expression profiles of biological samples. After data mining, results need to be validated with highly reliable biotechniques allowing for precise quantitation of transcriptional abundance of identified genes. Quantitative real time PCR (qrt-PCR) technology has recently reached a level of sensitivity, accuracy and practical ease that supports its use as routine bioinstrumentation for gene level measurement. Currently, qrt-PCR is considered by most experts the most appropriate method to confirm or refute microarray-generated data. The knowledge of the biochemical principles underlying qrt-PCR, as well as some related technical issues, must be borne in mind when using this biotechnology.
Usefulness of quantitative susceptibility mapping for the diagnosis of Parkinson disease.
Murakami, Y; Kakeda, S; Watanabe, K; Ueda, I; Ogasawara, A; Moriya, J; Ide, S; Futatsuya, K; Sato, T; Okada, K; Uozumi, T; Tsuji, S; Liu, T; Wang, Y; Korogi, Y
2015-06-01
Quantitative susceptibility mapping overcomes several nonlocal restrictions of susceptibility-weighted and phase imaging and enables quantification of magnetic susceptibility. We compared the diagnostic accuracy of quantitative susceptibility mapping and R2* (1/T2*) mapping to discriminate between patients with Parkinson disease and controls. For 21 patients with Parkinson disease and 21 age- and sex-matched controls, 2 radiologists measured the quantitative susceptibility mapping values and R2* values in 6 brain structures (the thalamus, putamen, caudate nucleus, pallidum, substantia nigra, and red nucleus). The quantitative susceptibility mapping values and R2* values of the substantia nigra were significantly higher in patients with Parkinson disease (P < .01); measurements in other brain regions did not differ significantly between patients and controls. For the discrimination of patients with Parkinson disease from controls, receiver operating characteristic analysis suggested that the optimal cutoff values for the substantia nigra, based on the Youden Index, were >0.210 for quantitative susceptibility mapping and >28.8 for R2*. The sensitivity, specificity, and accuracy of quantitative susceptibility mapping were 90% (19 of 21), 86% (18 of 21), and 88% (37 of 42), respectively; for R2* mapping, they were 81% (17 of 21), 52% (11 of 21), and 67% (28 of 42). Pair-wise comparisons showed that the areas under the receiver operating characteristic curves were significantly larger for quantitative susceptibility mapping than for R2* mapping (0.91 versus 0.69, P < .05). Quantitative susceptibility mapping showed higher diagnostic performance than R2* mapping for the discrimination between patients with Parkinson disease and controls. © 2015 by American Journal of Neuroradiology.
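The cutoff selection described above maximizes the Youden index J = sensitivity + specificity − 1 along the ROC curve; a short sketch with scikit-learn (variable names are hypothetical):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def youden_cutoff(y, score):
    """y: 1 = Parkinson disease, 0 = control; score: e.g. substantia
    nigra susceptibility. Returns the Youden-optimal threshold plus
    the sensitivity, specificity, and AUC at that cutoff."""
    fpr, tpr, thresholds = roc_curve(y, score)
    j = tpr - fpr                       # Youden index at each threshold
    best = np.argmax(j)
    return thresholds[best], tpr[best], 1 - fpr[best], roc_auc_score(y, score)

# cutoff, sensitivity, specificity, auc = youden_cutoff(y, qsm_values)
```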
ERIC Educational Resources Information Center
Mayhew, Matthew J.; Simonoff, Jeffrey S.
2015-01-01
The purpose of this paper is to describe effect coding as an alternative quantitative practice for analyzing and interpreting categorical, multi-raced independent variables in higher education research. Not only may effect coding enable researchers to get closer to respondents' original intentions, it allows for more accurate analyses of all race…
Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.
2011-01-01
Purpose We assessed the quantitation accuracy of small animal pinhole single photon emission computed tomography (SPECT) under the current preclinical settings, where image compensations are not routinely applied. Procedures The effects of several common image-degrading factors and imaging parameters on quantitation accuracy were evaluated using Monte-Carlo simulation methods. Typical preclinical imaging configurations were modeled, and quantitative analyses were performed based on image reconstructions without compensating for attenuation, scatter, and limited system resolution. Results Using mouse-sized phantom studies as examples, attenuation effects alone degraded quantitation accuracy by up to −18% (Tc-99m or In-111) or −41% (I-125). The inclusion of scatter effects changed the above numbers to −12% (Tc-99m or In-111) and −21% (I-125), respectively, indicating the significance of scatter in quantitative I-125 imaging. Region-of-interest (ROI) definitions have greater impacts on regional quantitation accuracy for small sphere sources as compared to attenuation and scatter effects. For the same ROI, SPECT acquisitions using pinhole apertures of different sizes could significantly affect the outcome, whereas the use of different radii-of-rotation yielded negligible differences in quantitation accuracy for the imaging configurations simulated. Conclusions We have systematically quantified the influence of several factors affecting the quantitation accuracy of small animal pinhole SPECT. In order to consistently achieve accurate quantitation within 5% of the truth, comprehensive image compensation methods are needed. PMID:19048346
Canela, Andrés; Vera, Elsa; Klatt, Peter; Blasco, María A
2007-03-27
A major limitation of studies of the relevance of telomere length to cancer and age-related diseases in human populations and to the development of telomere-based therapies has been the lack of suitable high-throughput (HT) assays to measure telomere length. We have developed an automated HT quantitative telomere FISH platform, HT quantitative FISH (Q-FISH), which allows the quantification of telomere length as well as percentage of short telomeres in large human sample sets. We show here that this technique provides the accuracy and sensitivity to uncover associations between telomere length and human disease.
McGarry, Bryony L; Rogers, Harriet J; Knight, Michael J; Jokivarsi, Kimmo T; Sierra, Alejandra; Gröhn, Olli Hj; Kauppinen, Risto A
2016-08-01
Quantitative T2 relaxation magnetic resonance imaging allows estimation of stroke onset time. We aimed to examine the accuracy of quantitative T1 and quantitative T2 relaxation times alone and in combination to provide estimates of stroke onset time in a rat model of permanent focal cerebral ischemia and map the spatial distribution of elevated quantitative T1 and quantitative T2 to assess tissue status. Permanent middle cerebral artery occlusion was induced in Wistar rats. Animals were scanned at 9.4T for quantitative T1, quantitative T2, and Trace of Diffusion Tensor (Dav) up to 4 h post-middle cerebral artery occlusion. Time courses of differentials of quantitative T1 and quantitative T2 in ischemic and non-ischemic contralateral brain tissue (ΔT1, ΔT2) and volumes of tissue with elevated T1 and T2 relaxation times (f1, f2) were determined. TTC staining was used to highlight permanent ischemic damage. ΔT1, ΔT2, f1, f2, and the volume of tissue with both elevated quantitative T1 and quantitative T2 (V(Overlap)) increased with time post-middle cerebral artery occlusion allowing stroke onset time to be estimated. V(Overlap) provided the most accurate estimate with an uncertainty of ±25 min. At all times-points regions with elevated relaxation times were smaller than areas with Dav defined ischemia. Stroke onset time can be determined by quantitative T1 and quantitative T2 relaxation times and tissue volumes. Combining quantitative T1 and quantitative T2 provides the most accurate estimate and potentially identifies irreversibly damaged brain tissue. © 2016 World Stroke Organization.
Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.
2015-01-01
Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices. PMID:25466541
Electronic field emission models beyond the Fowler-Nordheim one
NASA Astrophysics Data System (ADS)
Lepetit, Bruno
2017-12-01
We propose several quantum mechanical models to describe electronic field emission from first principles. These models allow us to correlate quantitatively the electronic emission current with the electrode surface details at the atomic scale. They all rely on electronic potential energy surfaces obtained from three-dimensional density functional theory calculations. They differ in the quantum mechanical methods (exact or perturbative, time dependent or time independent) used to describe tunneling through the electronic potential energy barrier. Comparison of these models with one another and with the standard Fowler-Nordheim one in the context of one-dimensional tunneling allows us to assess the impact of the approximations made in each model on the accuracy of the computed current. Among these methods, the time dependent perturbative one provides a well-balanced trade-off between accuracy and computational cost.
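For reference, the elementary Fowler-Nordheim expression that such models are benchmarked against relates the emission current density J to the local surface field F and the work function φ (constants quoted for the simplest form, without image-charge corrections):

$$
J(F) \;=\; \frac{A\,F^{2}}{\phi}\,\exp\!\left(-\frac{B\,\phi^{3/2}}{F}\right),
\qquad A \approx 1.54\times10^{-6}\ \mathrm{A\,eV\,V^{-2}},
\quad B \approx 6.83\times10^{9}\ \mathrm{eV^{-3/2}\,V\,m^{-1}}.
$$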
Stern, K I; Malkova, T L
The objective of the present study was the development and validation of a method for the determination of the demethylated derivatives of sibutramine, desmethyl sibutramine and didesmethyl sibutramine. Gas-liquid chromatography with flame ionization detection was used for the quantitative determination of the above substances in dietary supplements. Conditions were proposed for the chromatographic determination of the analytes in the presence of the reference standard, methyl stearate, allowing efficient separation to be achieved. The method has the necessary sensitivity, specificity, linearity, accuracy, and precision (on an intra-day and inter-day basis), which indicates its good validation characteristics. The proposed method can be employed in analytical laboratories for the quantitative determination of sibutramine derivatives in biologically active dietary supplements.
NASA Astrophysics Data System (ADS)
Favicchio, Rosy; Psycharakis, Stylianos; Schönig, Kai; Bartsch, Dusan; Mamalaki, Clio; Papamatheakis, Joseph; Ripoll, Jorge; Zacharakis, Giannis
2016-02-01
Fluorescent proteins and dyes are routine tools for biological research to describe the behavior of genes, proteins, and cells, as well as more complex physiological dynamics such as vessel permeability and pharmacokinetics. The use of these probes in whole body in vivo imaging would allow extending the range and scope of current biomedical applications and would be of great interest. In order to comply with a wide variety of application demands, in vivo imaging platform requirements span from wide spectral coverage to precise quantification capabilities. Fluorescence molecular tomography (FMT) detects and reconstructs in three dimensions the distribution of a fluorophore in vivo. Noncontact FMT allows fast scanning of an excitation source and noninvasive measurement of emitted fluorescent light using a virtual array detector operating in free space. Here, a rigorous process is defined that fully characterizes the performance of a custom-built horizontal noncontact FMT setup. Dynamic range, sensitivity, and quantitative accuracy across the visible spectrum were evaluated using fluorophores with emissions between 520 and 660 nm. These results demonstrate that high-performance quantitative three-dimensional visible light FMT allowed the detection of challenging mesenteric lymph nodes in vivo and the comparison of spectrally distinct fluorescent reporters in cell culture.
Klimek-Turek, A; Sikora, M; Rybicki, M; Dzido, T H
2016-03-04
A new concept of using thin-layer chromatography for sample preparation prior to the quantitative determination of solute(s) by instrumental techniques is presented. Thin-layer chromatography (TLC) is used to completely separate acetaminophen and its internal standard from other components (matrix) and to form a single spot/zone containing them at the solvent front position (after the final stage of thin-layer chromatogram development). The location of the analytes and internal standard in the solvent front zone allows their easy extraction followed by quantitation by HPLC. The extraction of the solute(s) and internal standard can proceed from the whole solvent front zone or a part of it without loss of accuracy in the quantitative analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
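Quantitation against the co-migrating internal standard reduces to a response-factor calculation; a toy example with hypothetical peak areas:

```python
# Hypothetical HPLC peak areas after extracting the TLC solvent-front zone.
area_analyte, area_is = 15240.0, 30110.0   # sample: acetaminophen / internal standard
area_std, area_is_std = 20350.0, 29980.0   # calibration standard of known strength
conc_std = 50.0                            # standard concentration, ug/mL

rf = (area_std / area_is_std) / conc_std     # response factor per unit concentration
conc_sample = (area_analyte / area_is) / rf  # ~37.3 ug/mL in this toy example
```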
Prentice, Boone M; Chumbley, Chad W; Hachey, Brian C; Norris, Jeremy L; Caprioli, Richard M
2016-10-04
Quantitative matrix-assisted laser desorption/ionization time-of-flight (MALDI TOF) approaches have historically suffered from poor accuracy and precision mainly due to the nonuniform distribution of matrix and analyte across the target surface, matrix interferences, and ionization suppression. Tandem mass spectrometry (MS/MS) can be used to ensure chemical specificity as well as improve signal-to-noise ratios by eliminating interferences from chemical noise, alleviating some concerns about dynamic range. However, conventional MALDI TOF/TOF modalities typically only scan for a single MS/MS event per laser shot, and multiplex assays require sequential analyses. We describe here new methodology that allows for multiple TOF/TOF fragmentation events to be performed in a single laser shot. This technology allows the reference of analyte intensity to that of the internal standard in each laser shot, even when the analyte and internal standard are quite disparate in m/z, thereby improving quantification while maintaining chemical specificity and duty cycle. In the quantitative analysis of the drug enalapril in pooled human plasma with ramipril as an internal standard, a greater than 4-fold improvement in relative standard deviation (<10%) was observed as well as improved coefficients of determination (R^2) and accuracy (>85% quality controls). Using this approach we have also performed simultaneous quantitative analysis of three drugs (promethazine, enalapril, and verapamil) using deuterated analogues of these drugs as internal standards.
Ramanathan, Ragu; Ghosal, Anima; Ramanathan, Lakshmi; Comstock, Kate; Shen, Helen; Ramanathan, Dil
2018-05-01
Evaluation of HPLC-high-resolution mass spectrometry (HPLC-HRMS) full scan with polarity switching for increasing the throughput of a human in vitro cocktail drug-drug interaction assay. Microsomal incubates were analyzed using a high-resolution, high-mass-accuracy Q-Exactive mass spectrometer to collect integrated qualitative and quantitative (qual/quant) data. Within the assay, a positive-to-negative polarity switching HPLC-HRMS method allowed quantification of eight and two probe compounds in the positive and negative ionization modes, respectively, while monitoring for LOR and its metabolites. LOR inhibited CYP2C19 and showed higher activity for CYP2D6, CYP2E1 and CYP3A4. Overall, LC-HRMS-based nontargeted full scan quantitation allowed the throughput of the in vitro cocktail drug-drug interaction assay to be improved.
Pepin, K.M.; Spackman, E.; Brown, J.D.; Pabilonia, K.L.; Garber, L.P.; Weaver, J.T.; Kennedy, D.A.; Patyk, K.A.; Huyvaert, K.P.; Miller, R.S.; Franklin, A.B.; Pedersen, K.; Bogich, T.L.; Rohani, P.; Shriner, S.A.; Webb, C.T.; Riley, S.
2014-01-01
Wild birds are the primary source of genetic diversity for influenza A viruses that eventually emerge in poultry and humans. Much progress has been made in the descriptive ecology of avian influenza viruses (AIVs), but contributions are less evident from quantitative studies (e.g., those including disease dynamic models). Transmission between host species, individuals and flocks has not been measured with sufficient accuracy to allow robust quantitative evaluation of alternate control protocols. We focused on the United States of America (USA) as a case study for determining the state of our quantitative knowledge of potential AIV emergence processes from wild hosts to poultry. We identified priorities for quantitative research that would build on existing tools for responding to AIV in poultry and concluded that the following knowledge gaps can be addressed with current empirical data: (1) quantification of the spatio-temporal relationships between AIV prevalence in wild hosts and poultry populations, (2) understanding how the structure of different poultry sectors impacts within-flock transmission, (3) determining mechanisms and rates of between-farm spread, and (4) validating current policy-decision tools with data. The modeling studies we recommend will improve our mechanistic understanding of potential AIV transmission patterns in USA poultry, leading to improved measures of accuracy and reduced uncertainty when evaluating alternative control strategies. PMID:24462191
NASA Technical Reports Server (NTRS)
Kierein-Young, K. S.; Kruse, F. A.; Lefkoff, A. B.
1992-01-01
The Jet Propulsion Laboratory Airborne Synthetic Aperture Radar (JPL-AIRSAR) is used to collect full polarimetric measurements at P-, L-, and C-bands. These data are analyzed using the radar analysis and visualization environment (RAVEN). The AIRSAR data are calibrated using in-scene corner reflectors to allow for quantitative analysis of the radar backscatter. RAVEN is used to extract surface characteristics. Inversion models are used to calculate quantitative surface roughness values and fractal dimensions. These values are used to generate synthetic surface plots that represent the small-scale surface structure of areas in Death Valley. These procedures are applied to a playa, smooth salt-pan, and alluvial fan surfaces in Death Valley. Field measurements of surface roughness are used to verify the accuracy.
Colagiorgio, P; Romano, F; Sardi, F; Moraschini, M; Sozzi, A; Bejor, M; Ricevuti, G; Buizza, A; Ramat, S
2014-01-01
The problem of correct fall risk assessment is becoming more and more critical with the ageing of the population. In spite of the available approaches allowing a quantitative analysis of the human movement control system's performance, the clinical assessment and diagnostic approach to fall risk assessment still relies mostly on non-quantitative exams, such as clinical scales. This work documents our current effort to develop a novel method to assess balance control abilities through a system implementing an automatic evaluation of exercises drawn from balance assessment scales. Our aim is to overcome the classical limits characterizing these scales, i.e., limited granularity and inter-/intra-examiner reliability, to obtain objective scores and more detailed information allowing prediction of fall risk. We used Microsoft Kinect to record subjects' movements while performing challenging exercises drawn from clinical balance scales. We then computed a set of parameters quantifying the execution of the exercises and fed them to a supervised classifier to perform a classification based on the clinical score. We obtained a good accuracy (~82%) and especially a high sensitivity (~83%).
PET guidance for liver radiofrequency ablation: an evaluation
NASA Astrophysics Data System (ADS)
Lei, Peng; Dandekar, Omkar; Mahmoud, Faaiza; Widlus, David; Malloy, Patrick; Shekhar, Raj
2007-03-01
Radiofrequency ablation (RFA) is emerging as the primary mode of treatment of unresectable malignant liver tumors. With current intraoperative imaging modalities, quick, precise, and complete localization of lesions remains a challenge for liver RFA. Fusion of intraoperative CT and preoperative PET images, which relies on PET and CT registration, can produce a new image with complementary metabolic and anatomic data and thus greatly improve the targeting accuracy. Unlike neurological images, alignment of abdominal images by a combined PET/CT scanner is prone to errors as a result of large nonrigid misalignment in abdominal images. Our use of a normalized mutual information-based 3D nonrigid registration technique has proven powerful for whole-body PET and CT registration. We demonstrate here that this technique is capable of acceptable abdominal PET and CT registration as well. In five clinical cases, both qualitative and quantitative validation showed that the registration is robust and accurate. Quantitative accuracy was evaluated by comparing the results of the algorithm with assessments by clinical experts. The registration error is much smaller than the allowable margin in liver RFA. Study findings show the technique's potential to enable the augmentation of intraoperative CT with preoperative PET to reduce procedure time, avoid repeated procedures, provide clinicians with complementary functional/anatomic maps, avoid omitting dispersed small lesions, and improve the accuracy of tumor targeting in liver RFA.
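The similarity measure driving the registration is normalized mutual information, computable from the joint intensity histogram of the two images; a compact sketch is shown below (the full method optimizes this measure over nonrigid transform parameters, which is not shown):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI = (H(A) + H(B)) / H(A,B), from the joint histogram of
    intensities in two images of identical shape."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab = joint / joint.sum()                  # joint probability
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)  # marginals
    def H(p):                                  # Shannon entropy, ignoring zeros
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (H(pa) + H(pb)) / H(pab.ravel())
```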
NASA Astrophysics Data System (ADS)
Boes, Kelsey S.; Roberts, Michael S.; Vinueza, Nelson R.
2018-03-01
Complex mixture analysis is a costly and time-consuming task facing researchers with foci as varied as food science and fuel analysis. When faced with the task of quantifying oxygen-rich bio-oil molecules in a complex diesel mixture, we asked whether complex mixtures could be qualitatively and quantitatively analyzed on a single mass spectrometer with mid-range resolving power without the use of lengthy separations. To answer this question, we developed and evaluated a quantitation method that eliminated chromatography steps and expanded the use of quadrupole-time-of-flight mass spectrometry from primarily qualitative to quantitative as well. To account for mixture complexity, the method employed an ionization dopant, targeted tandem mass spectrometry, and an internal standard. This combination of three techniques achieved reliable quantitation of oxygen-rich eugenol in diesel from 300 to 2500 ng/mL with sufficient linearity (R2 = 0.97 ± 0.01) and excellent accuracy (percent error = 0% ± 5). To understand the limitations of the method, it was compared to quantitation attained on a triple quadrupole mass spectrometer, the gold standard for quantitation. The triple quadrupole quantified eugenol from 50 to 2500 ng/mL with stronger linearity (R2 = 0.996 ± 0.003) than the quadrupole-time-of-flight and comparable accuracy (percent error = 4% ± 5). This demonstrates that a quadrupole-time-of-flight can be used for not only qualitative analysis but also targeted quantitation of oxygen-rich lignin molecules in complex mixtures without extensive sample preparation. The rapid and cost-effective method presented here offers new possibilities for bio-oil research, including: (1) allowing for bio-oil studies that demand repetitive analysis as process parameters are changed and (2) making this research accessible to more laboratories.
A low-cost tracked C-arm (TC-arm) upgrade system for versatile quantitative intraoperative imaging.
Amiri, Shahram; Wilson, David R; Masri, Bassam A; Anglin, Carolyn
2014-07-01
C-arm fluoroscopy is frequently used in clinical applications as a low-cost and mobile real-time qualitative assessment tool. C-arms, however, are not widely accepted for applications involving quantitative assessments, mainly due to the lack of reliable and low-cost position tracking methods, as well as adequate calibration and registration techniques. The solution suggested in this work is a tracked C-arm (TC-arm) which employs a low-cost sensor tracking module that can be retrofitted to any conventional C-arm for tracking the individual joints of the device. Registration and offline calibration methods were developed that allow accurate tracking of the gantry and determination of the exact intrinsic and extrinsic parameters of the imaging system for any acquired fluoroscopic image. The performance of the system was evaluated in comparison to an Optotrak motion tracking system and by a series of experiments on accurately built ball-bearing phantoms. Accuracies of the system were determined for 2D-3D registration, three-dimensional landmark localization, and for generating panoramic stitched views in simulated intraoperative applications. The system was able to track the center point of the gantry with an accuracy of [Formula: see text] mm or better. Accuracies of 2D-3D registrations were [Formula: see text] mm and [Formula: see text]. Three-dimensional landmark localization had an accuracy of [Formula: see text] of the length (or [Formula: see text] mm) on average, depending on whether the landmarks were located along, above, or across the table. The overall accuracies of the two-dimensional measurements conducted on stitched panoramic images of the femur and lumbar spine were 2.5 [Formula: see text] 2.0 % [Formula: see text] and [Formula: see text], respectively. The TC-arm system has the potential to achieve sophisticated quantitative fluoroscopy assessment capabilities using an existing C-arm imaging system. This technology may be useful to improve the quality of orthopedic surgery and interventional radiology.
How to Combine ChIP with qPCR.
Asp, Patrik
2018-01-01
Chromatin immunoprecipitation (ChIP) coupled with quantitative PCR (qPCR) has in the last 15 years become a basic mainstream tool in genomic research. Numerous commercially available ChIP kits, qPCR kits, and real-time PCR systems allow for quick and easy analysis of virtually anything chromatin-related as long as there is an available antibody. However, the highly accurate quantitative dimension added by using qPCR to analyze ChIP samples significantly raises the bar in terms of experimental accuracy, appropriate controls, data analysis, and data presentation. This chapter will address these potential pitfalls by providing protocols and procedures that address the difficulties inherent in ChIP-qPCR assays.
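For concreteness, a common way to turn ChIP-qPCR Ct values into a quantitative readout is the percent-input method. The sketch below is a generic illustration, not this chapter's protocol, and assumes roughly 100% primer efficiency (perfect doubling per cycle).

import math

def percent_input(ct_ip: float, ct_input: float, input_fraction: float = 0.01) -> float:
    """Percent-input quantification of a ChIP-qPCR signal.

    ct_ip: Ct of the immunoprecipitated sample
    ct_input: Ct of the input aliquot (a fraction of the starting chromatin)
    input_fraction: fraction of chromatin saved as input (e.g. 0.01 for 1%)
    """
    # Adjust the input Ct to represent 100% of the chromatin
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Example: 1% input at Ct 28.1, IP at Ct 30.4 -> ~0.2% of input recovered
print(f"{percent_input(30.4, 28.1):.2f}% of input")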
23 CFR 1200.22 - State traffic safety information system improvements grants.
Code of Federal Regulations, 2013 CFR
2013-04-01
... measures to be used to demonstrate quantitative progress in the accuracy, completeness, timeliness... to implement, provides an explanation. (d) Requirement for quantitative improvement. A State shall demonstrate quantitative improvement in the data attributes of accuracy, completeness, timeliness, uniformity...
23 CFR 1200.22 - State traffic safety information system improvements grants.
Code of Federal Regulations, 2014 CFR
2014-04-01
... measures to be used to demonstrate quantitative progress in the accuracy, completeness, timeliness... to implement, provides an explanation. (d) Requirement for quantitative improvement. A State shall demonstrate quantitative improvement in the data attributes of accuracy, completeness, timeliness, uniformity...
Stochastic approach to error estimation for image-guided robotic systems.
Haidegger, Tamas; Győri, Sándor; Benyo, Balazs; Benyó, Zoltán
2010-01-01
Image-guided surgical systems and surgical robots are primarily developed to provide patient safety through increased precision and minimal invasiveness. Moreover, robotic devices should allow for refined treatments that are not possible by other means. It is crucial to determine the accuracy of a system in order to define the expected overall task execution error. A major step toward this aim is to quantitatively analyze the effect of registration and tracking, i.e., a series of multiplications of erroneous homogeneous transformations. First, the currently used models and algorithms are introduced along with their limitations, and a new, probability-distribution-based method is described. The new approach has several advantages, as demonstrated in our simulations. Primarily, it determines the full 6-degree-of-freedom accuracy of the point of interest, allowing for the more accurate use of advanced application-oriented concepts, such as Virtual Fixtures. On the other hand, it becomes feasible to consider different surgical scenarios with varying weighting factors.
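The effect of a chain of erroneous homogeneous transformations can be illustrated with a small Monte Carlo simulation. Note that the paper derives the error distribution analytically; the sampling approach, chain layout, and error magnitudes below are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)

def noisy_transform(T, rot_sigma_rad=1e-3, trans_sigma_mm=0.1):
    """Apply small random rotation/translation errors to a 4x4 transform."""
    ax, ay, az = rng.normal(0.0, rot_sigma_rad, 3)     # small-angle rotations
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    E = np.eye(4)
    E[:3, :3] = Rz @ Ry @ Rx
    E[:3, 3] = rng.normal(0.0, trans_sigma_mm, 3)
    return E @ T

# Nominal chain, e.g. tracker -> robot base -> tool (identity placeholders)
chain = [np.eye(4), np.eye(4), np.eye(4)]
p = np.array([0.0, 0.0, 100.0, 1.0])      # point of interest, 100 mm from tool

samples = []
for _ in range(10000):
    T = np.eye(4)
    for link in chain:
        T = T @ noisy_transform(link)      # each link carries its own error
    samples.append((T @ p)[:3])
samples = np.asarray(samples)
print("RMS point error [mm]:", np.sqrt(samples.var(axis=0).sum()))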
Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.
2016-01-01
Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969
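The first study's simulation logic can be sketched as follows: generate binary code profiles for two latent participant groups, cluster with two of the tested methods, and score assignment accuracy. This is a hedged re-creation with assumed parameters, not the authors' code.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_per_group, n_codes = 25, 20           # sample of 50, 20 qualitative codes
p_a = rng.uniform(0.1, 0.9, n_codes)    # code endorsement probabilities
p_b = np.clip(p_a + rng.choice([-0.3, 0.3], n_codes), 0.05, 0.95)

X = np.vstack([rng.random((n_per_group, n_codes)) < p_a,
               rng.random((n_per_group, n_codes)) < p_b]).astype(float)
truth = np.repeat([0, 1], n_per_group)

def accuracy(labels, truth):
    """Best accuracy over the two possible label permutations."""
    acc = (labels == truth).mean()
    return max(acc, 1 - acc)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
hc = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust") - 1
print("K-means accuracy:", accuracy(km.labels_, truth))
print("hierarchical accuracy:", accuracy(hc, truth))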
Improved quality control of [18F]fluoromethylcholine.
Nader, Michael; Reindl, Dietmar; Eichinger, Reinhard; Beheshti, Mohsen; Langsteger, Werner
2011-11-01
With respect to the broad application of [(18)F-methyl]fluorocholine (FCH), there is a need for a safe, but also efficient and convenient, way of performing routine quality control of FCH. Therefore, a GC method was developed and validated which allows the simultaneous quantitation of all chemical impurities and residual solvents such as acetonitrile, ethanol, dibromomethane and N,N-dimethylaminoethanol. Analytical GC was performed with a GC capillary column Optima 1701 (50 m×0.32 mm), and a deactivated phenyl-Sil pre-column (10 m×0.32 mm) in line with a flame ionization detector (FID) was used. The validation included the following tests: specificity, range, accuracy, linearity, precision, limit of detection (LOD) and limit of quantitation (LOQ) for all listed substances. The described GC method has been successfully used for the quantitation of the listed chemical impurities. The specificity of the GC separation has been proven by demonstrating that the appearing peaks are completely separated from each other and that a resolution R≥1.5 for the separation of the peaks could be achieved. The specified range confirmed that the analytical procedure provides an acceptable degree of linearity, accuracy and precision. For each substance, a range from 2% to 120% of the specification limit could be demonstrated. The corresponding LOD values were determined and were much lower than the specification limits. An efficient and convenient GC method for the quality control of FCH has been developed and validated which meets all acceptance criteria in terms of linearity, specificity, precision, accuracy, LOD and LOQ. Copyright © 2011 Elsevier Inc. All rights reserved.
Edwards, Stefan M.; Sørensen, Izel F.; Sarup, Pernille; Mackay, Trudy F. C.; Sørensen, Peter
2016-01-01
Predicting individual quantitative trait phenotypes from high-resolution genomic polymorphism data is important for personalized medicine in humans, plant and animal breeding, and adaptive evolution. However, this is difficult for populations of unrelated individuals when the number of causal variants is low relative to the total number of polymorphisms and causal variants individually have small effects on the traits. We hypothesized that mapping molecular polymorphisms to genomic features such as genes and their gene ontology categories could increase the accuracy of genomic prediction models. We developed a genomic feature best linear unbiased prediction (GFBLUP) model that implements this strategy and applied it to three quantitative traits (startle response, starvation resistance, and chill coma recovery) in the unrelated, sequenced inbred lines of the Drosophila melanogaster Genetic Reference Panel. Our results indicate that subsetting markers based on genomic features increases the predictive ability relative to the standard genomic best linear unbiased prediction (GBLUP) model. Both models use all markers, but GFBLUP allows differential weighting of the individual genetic marker relationships, whereas GBLUP weighs the genetic marker relationships equally. Simulation studies show that it is possible to further increase the accuracy of genomic prediction for complex traits using this model, provided the genomic features are enriched for causal variants. Our GFBLUP model using prior information on genomic features enriched for causal variants can increase the accuracy of genomic predictions in populations of unrelated individuals and provides a formal statistical framework for leveraging and evaluating information across multiple experimental studies to provide novel insights into the genetic architecture of complex traits. PMID:27235308
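A minimal sketch of the kinship construction underlying the two models (a VanRaden-style genomic relationship matrix; names, dimensions, and the random feature set are illustrative assumptions): GBLUP uses one relationship matrix built from all markers, whereas GFBLUP fits separate random effects for markers inside and outside a genomic feature, which effectively weights marker relationships differentially.

import numpy as np

def grm(M):
    """VanRaden-style genomic relationship matrix from a 0/1/2 genotype matrix."""
    p = M.mean(axis=0) / 2.0                  # allele frequencies
    Z = M - 2.0 * p                           # center genotypes by 2p
    denom = 2.0 * np.sum(p * (1.0 - p))
    return (Z @ Z.T) / denom

rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(200, 5000)).astype(float)   # 200 lines, 5000 SNPs
feature = rng.choice(5000, size=400, replace=False)      # e.g. SNPs in a GO term
rest = np.setdiff1d(np.arange(5000), feature)

G_all = grm(M)                                 # GBLUP: one kinship, equal weights
G_f, G_r = grm(M[:, feature]), grm(M[:, rest])  # GFBLUP: two variance components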
Borgese, L; Salmistraro, M; Gianoncelli, A; Zacco, A; Lucchini, R; Zimmerman, N; Pisani, L; Siviero, G; Depero, L E; Bontempi, E
2012-01-30
This work is presented as an improvement of a recently introduced method for airborne particulate matter (PM) filter analysis [1]. X-ray standing wave (XSW) and total reflection X-ray fluorescence (TXRF) measurements were performed with new dedicated laboratory instrumentation. The main advantage of performing both XSW and TXRF is the possibility of distinguishing the nature of the sample: whether it is a small-droplet dry residue, a thin-film-like sample or a bulk sample. Another advantage is the possibility of selecting the angle of total reflection for the TXRF measurements. Finally, the possibility of switching the X-ray source (with a change of X-ray anode, for example from Mo to Cu) allows lighter and heavier elements to be measured with more accuracy. The aim of the present study is to lay the theoretical foundation of the newly proposed method for the quantitative analysis of airborne PM filters, improving the accuracy and efficiency of quantification by means of an external standard. The theoretical model presented and discussed demonstrates that airborne PM filters can be considered as thin layers. A set of reference samples was prepared in the laboratory and used to obtain a calibration curve. Our results demonstrate that the proposed method for quantitative analysis of airborne PM filters is affordable and reliable without the necessity of digesting filters to obtain quantitative chemical analysis, and that the use of XSW improves the accuracy of TXRF analysis. Copyright © 2011 Elsevier B.V. All rights reserved.
Schmidt, Mark E; Chiao, Ping; Klein, Gregory; Matthews, Dawn; Thurfjell, Lennart; Cole, Patricia E; Margolin, Richard; Landau, Susan; Foster, Norman L; Mason, N Scott; De Santi, Susan; Suhy, Joyce; Koeppe, Robert A; Jagust, William
2015-09-01
In vivo imaging of amyloid burden with positron emission tomography (PET) provides a means for studying the pathophysiology of Alzheimer's and related diseases. Measurement of subtle changes in amyloid burden requires quantitative analysis of image data. Reliable quantitative analysis of amyloid PET scans acquired at multiple sites and over time requires rigorous standardization of acquisition protocols, subject management, tracer administration, image quality control, and image processing and analysis methods. We review critical points in the acquisition and analysis of amyloid PET, identify ways in which technical factors can contribute to measurement variability, and suggest methods for mitigating these sources of noise. Improved quantitative accuracy could reduce the sample size necessary to detect intervention effects when amyloid PET is used as a treatment end point and allow more reliable interpretation of change in amyloid burden and its relationship to clinical course. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Monitoring of protease catalyzed reactions by quantitative MALDI MS using metal labeling.
Gregorius, Barbara; Jakoby, Thomas; Schaumlöffel, Dirk; Tholey, Andreas
2013-05-21
Quantitative mass spectrometry is a powerful tool for the determination of enzyme activities as it does not require labeled substrates and simultaneously allows for the identification of reaction products. However, major restrictions are the limited number of samples which can be measured in parallel due to the need for isotope labeled internal standards. Here we describe the use of metal labeling of peptides for the setup of multiplexed enzyme activity assays. After proteolytic reaction, using the protease trypsin, remaining substrates and peptide products formed in the reaction were labeled with metal chelators complexing rare earth metal ions. Labeled peptides were quantified with high accuracy and over a wide dynamic range (at least 2 orders of magnitude) using MALDI MS in case of simple peptide mixtures or by LC-MALDI MS for complex substrate mixtures and used for the monitoring of time-dependent product formation and substrate consumption. Due to multiplexing capabilities and accuracy, the presented approach will be useful for the determination of enzyme activities with a wide range of biochemical and biotechnological applications.
Global, quantitative and dynamic mapping of protein subcellular localization.
Itzhak, Daniel N; Tyanova, Stefka; Cox, Jürgen; Borner, Georg Hh
2016-06-09
Subcellular localization critically influences protein function, and cells control protein localization to regulate biological processes. We have developed and applied Dynamic Organellar Maps, a proteomic method that allows global mapping of protein translocation events. We initially used maps statically to generate a database with localization and absolute copy number information for over 8700 proteins from HeLa cells, approaching comprehensive coverage. All major organelles were resolved, with exceptional prediction accuracy (estimated at >92%). Combining spatial and abundance information yielded an unprecedented quantitative view of HeLa cell anatomy and organellar composition, at the protein level. We subsequently demonstrated the dynamic capabilities of the approach by capturing translocation events following EGF stimulation, which we integrated into a quantitative model. Dynamic Organellar Maps enable the proteome-wide analysis of physiological protein movements, without requiring any reagents specific to the investigated process, and will thus be widely applicable in cell biology.
Imaging Performance of Quantitative Transmission Ultrasound
Lenox, Mark W.; Wiskin, James; Lewis, Matthew A.; Darrouzet, Stephen; Borup, David; Hsieh, Scott
2015-01-01
Quantitative Transmission Ultrasound (QTUS) is a tomographic transmission ultrasound modality that is capable of generating 3D speed-of-sound maps of objects in the field of view. It performs this measurement by propagating a plane wave through the medium from a transmitter on one side of a water tank to a high resolution receiver on the opposite side. This information is then used via inverse scattering to compute a speed map. In addition, the presence of reflection transducers allows the creation of a high resolution, spatially compounded reflection map that is natively coregistered to the speed map. A prototype QTUS system was evaluated for measurement and geometric accuracy as well as for the ability to correctly determine speed of sound. PMID:26604918
Probing sub-alveolar length scales with hyperpolarized-gas diffusion NMR
NASA Astrophysics Data System (ADS)
Miller, Wilson; Carl, Michael; Mooney, Karen; Mugler, John; Cates, Gordon
2009-05-01
Diffusion MRI of the lung is a promising technique for detecting alterations of normal lung microstructure in diseases such as emphysema. The length scale being probed using this technique is related to the time scale over which the helium-3 or xenon-129 diffusion is observed. We have developed new MR pulse sequence methods for making diffusivity measurements at sub-millisecond diffusion times, allowing one to probe smaller length scales than previously possible in vivo, and opening the possibility of making quantitative measurements of the ratio of surface area to volume (S/V) in the lung airspaces. The quantitative accuracy of simulated and experimental measurements in microstructure phantoms will be discussed, and preliminary in vivo results will be presented.
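One standard way to connect short-time diffusivity to airspace geometry is the Mitra short-time expansion, D(t)/D0 ~ 1 - (4/(9*sqrt(pi))) * sqrt(D0*t) * (S/V). The abstract does not state which model the authors used, so the inversion below is a sketch under that assumption, with illustrative numbers.

import math

def surface_to_volume(D_t, D0, t):
    """Estimate S/V (1/cm) from an apparent diffusivity D_t (cm^2/s) measured
    at a short diffusion time t (s), given the free diffusivity D0 (cm^2/s)."""
    return (1.0 - D_t / D0) * 9.0 * math.sqrt(math.pi) / (4.0 * math.sqrt(D0 * t))

D0 = 0.88          # cm^2/s, roughly free helium-3 diffusivity in lung gas mix
t = 0.5e-3         # 0.5 ms diffusion time
D_t = 0.75         # measured apparent diffusivity (illustrative)
print(f"S/V ~ {surface_to_volume(D_t, D0, t):.0f} cm^-1")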
Yao, Yingyi; Guo, Weisheng; Zhang, Jian; Wu, Yudong; Fu, Weihua; Liu, Tingting; Wu, Xiaoli; Wang, Hanjie; Gong, Xiaoqun; Liang, Xing-Jie; Chang, Jin
2016-09-07
Ultrasensitive and quantitative fast screening of cancer biomarkers by immunochromatography test strip (ICTS) is still challenging in the clinic. Gold nanoparticle (NP) based ICTS with colorimetric readout enables quick spectrum screening but suffers from nonquantitative performance; although ICTS with fluorescence readout (FICTS) allows quantitative detection, its sensitivity still deserves more effort and attention. In this work, by taking advantage of both colorimetric ICTS and FICTS, we describe a reverse fluorescence enhancement ICTS (rFICTS) with bimodal signal readout for ultrasensitive and quantitative fast screening of carcinoembryonic antigen (CEA). In the presence of the target, gold NP aggregation in the T line induced a colorimetric readout, allowing on-the-spot spectrum screening in 10 min by the naked eye. Meanwhile, the reverse fluorescence enhancement signal enabled more accurate quantitative detection with better sensitivity (5.89 pg/mL for CEA), which is more than 2 orders of magnitude lower than that of conventional FICTS. The accuracy and stability of the rFICTS were investigated with more than 100 clinical serum samples for large-scale screening. Furthermore, the rFICTS also realized postoperative monitoring by detecting CEA in a patient with colon cancer and comparing with CT imaging diagnosis. These results indicate that the rFICTS is particularly suitable for point-of-care (POC) diagnostics in both resource-rich and resource-limited settings.
Barnes, Anna; Alonzi, Roberto; Blackledge, Matthew; Charles-Edwards, Geoff; Collins, David J; Cook, Gary; Coutts, Glynn; Goh, Vicky; Graves, Martin; Kelly, Charles; Koh, Dow-Mu; McCallum, Hazel; Miquel, Marc E; O'Connor, James; Padhani, Anwar; Pearson, Rachel; Priest, Andrew; Rockall, Andrea; Stirling, James; Taylor, Stuart; Tunariu, Nina; van der Meulen, Jan; Walls, Darren; Winfield, Jessica; Punwani, Shonit
2018-01-01
Applications of whole body diffusion-weighted MRI (WB-DWI) in oncology are rapidly increasing within both research and routine clinical domains. However, WB-DWI as a quantitative imaging biomarker (QIB) has seen significantly slower adoption. To date, challenges relating to accuracy and reproducibility, essential criteria for a good QIB, have limited widespread clinical translation. In recognition, a UK workgroup was established in 2016 to provide technical consensus guidelines (to maximise accuracy and reproducibility of WB-MRI QIBs) and accelerate the clinical translation of quantitative WB-DWI applications for oncology. A panel of experts was convened from cancer centres around the UK with subspecialty expertise in quantitative imaging and/or the use of WB-MRI with DWI. A formal consensus method was used to obtain agreement regarding best practice. Questions were asked about the appropriateness or otherwise of scanner hardware and software, sequence optimisation, acquisition protocols, reporting, and ongoing quality control programmes to monitor precision and accuracy. The panel was able to reach consensus on 73% (255/351) of items and, based on the consensus areas, made recommendations to maximise the accuracy and reproducibility of quantitative WB-DWI studies performed at 1.5T. The panel was unable to reach consensus on the majority of items related to quantitative WB-DWI performed at 3T. This UK Quantitative WB-DWI Technical Workgroup consensus provides guidance on maximising the accuracy and reproducibility of quantitative WB-DWI for oncology. The consensus guidance can be used by researchers and clinicians to harmonise WB-DWI protocols, which will accelerate the clinical translation of WB-DWI-derived QIBs.
RapidEye constellation relative radiometric accuracy measurement using lunar images
NASA Astrophysics Data System (ADS)
Steyn, Joe; Tyc, George; Beckett, Keith; Hashida, Yoshi
2009-09-01
The RapidEye constellation includes five identical satellites in Low Earth Orbit (LEO). Each satellite has a 5-band (blue, green, red, red-edge and near infrared (NIR)) multispectral imager at 6.5 m GSD. A three-axis attitude control system allows pointing the imager of each satellite at the Moon during lunations. It is therefore possible to image the Moon from near-identical viewing geometry within a span of 80 minutes with each one of the imagers. Comparing the radiometrically corrected images obtained from each band and each satellite allows a near-instantaneous relative radiometric accuracy measurement and determination of relative gain changes between the five imagers. A more traditional terrestrial vicarious radiometric calibration program has also been completed by MDA on RapidEye. The two components of this program provide for spatial radiometric calibration, ensuring that detector-to-detector response remains flat, while a temporal radiometric calibration approach has accumulated images of specific dry desert calibration sites. These images are used to measure the constellation relative radiometric response and make on-ground gain and offset adjustments in order to maintain the relative accuracy of the constellation within +/-2.5%. A quantitative comparison between the gain changes measured by the lunar method and the terrestrial temporal radiometric calibration method is performed and will be presented.
Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Porth, Ilga; Chen, Charles; El-Kassaby, Yousry A.
2016-01-01
The open-pollinated (OP) family testing combines the simplest known progeny evaluation and quantitative genetics analyses as candidates’ offspring are assumed to represent independent half-sib families. The accuracy of genetic parameter estimates is often questioned as the assumption of “half-sibling” in OP families may often be violated. We compared the pedigree- vs. marker-based genetic models by analysing 22-yr height and 30-yr wood density for 214 white spruce [Picea glauca (Moench) Voss] OP families represented by 1694 individuals growing on one site in Quebec, Canada. Assuming half-sibling, the pedigree-based model was limited to estimating the additive genetic variances which, in turn, were grossly overestimated as they were confounded by very minor dominance and major additive-by-additive epistatic genetic variances. In contrast, the implemented genomic pairwise realized relationship models allowed the disentanglement of additive from all nonadditive factors through genetic variance decomposition. The marker-based models produced more realistic narrow-sense heritability estimates and, for the first time, allowed estimating the dominance and epistatic genetic variances from OP testing. In addition, the genomic models showed better prediction accuracies compared to pedigree models and were able to predict individual breeding values for new individuals from untested families, which was not possible using the pedigree-based model. Clearly, the use of marker-based relationship approach is effective in estimating the quantitative genetic parameters of complex traits even under simple and shallow pedigree structure. PMID:26801647
Chaturvedi, Palak; Doerfler, Hannes; Jegadeesan, Sridharan; Ghatak, Arindam; Pressman, Etan; Castillejo, Maria Angeles; Wienkoop, Stefanie; Egelhofer, Volker; Firon, Nurit; Weckwerth, Wolfram
2015-11-06
Recently, we have developed a quantitative shotgun proteomics strategy called mass accuracy precursor alignment (MAPA). The MAPA algorithm uses high mass accuracy to bin mass-to-charge (m/z) ratios of precursor ions from LC-MS analyses, determines their intensities, and extracts a quantitative sample versus m/z ratio data alignment matrix from a multitude of samples. Here, we introduce a novel feature of this algorithm that allows the extraction and alignment of proteotypic peptide precursor ions or any other target peptide from complex shotgun proteomics data for accurate quantification of unique proteins. This strategy circumvents the problem of confusing the quantification of proteins due to indistinguishable protein isoforms by a typical shotgun proteomics approach. We applied this strategy to a comparison of control and heat-treated tomato pollen grains at two developmental stages, post-meiotic and mature. Pollen is a temperature-sensitive tissue involved in the reproductive cycle of plants and plays a major role in fruit setting and yield. By LC-MS-based shotgun proteomics, we identified more than 2000 proteins in total for all different tissues. By applying the targeted MAPA data-processing strategy, 51 unique proteins were identified as heat-treatment-responsive protein candidates. The potential function of the identified candidates in a specific developmental stage is discussed.
Wu, C; de Jong, J R; Gratama van Andel, H A; van der Have, F; Vastenhouw, B; Laverman, P; Boerman, O C; Dierckx, R A J O; Beekman, F J
2011-09-21
Attenuation of photon flux on trajectories between the source and pinhole apertures affects the quantitative accuracy of reconstructed single-photon emission computed tomography (SPECT) images. We propose a Chang-based non-uniform attenuation correction (NUA-CT) for small-animal SPECT/CT with focusing pinhole collimation, and compare the quantitative accuracy with uniform Chang correction based on (i) body outlines extracted from x-ray CT (UA-CT) and (ii) on hand drawn body contours on the images obtained with three integrated optical cameras (UA-BC). Measurements in phantoms and rats containing known activities of isotopes were conducted for evaluation. In (125)I, (201)Tl, (99m)Tc and (111)In phantom experiments, average relative errors comparing to the gold standards measured in a dose calibrator were reduced to 5.5%, 6.8%, 4.9% and 2.8%, respectively, with NUA-CT. In animal studies, these errors were 2.1%, 3.3%, 2.0% and 2.0%, respectively. Differences in accuracy on average between results of NUA-CT, UA-CT and UA-BC were less than 2.3% in phantom studies and 3.1% in animal studies except for (125)I (3.6% and 5.1%, respectively). All methods tested provide reasonable attenuation correction and result in high quantitative accuracy. NUA-CT shows superior accuracy except for (125)I, where other factors may have more impact on the quantitative accuracy than the selected attenuation correction.
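The Chang-type correction underlying all three variants can be sketched as follows: for each voxel, average the photon survival probability exp(-integral of mu dl) over directions, and divide the reconstructed value by that average. The 2D ray-marching toy below uses illustrative parameters and omits the focusing pinhole geometry; it is not the authors' implementation.

import numpy as np

def chang_correction(mu, n_angles=32, step=0.5):
    """Per-pixel first-order Chang correction factors for attenuation map mu
    (linear attenuation coefficients in 1/pixel units)."""
    ny, nx = mu.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    corr = np.ones_like(mu, dtype=float)
    for iy in range(ny):
        for ix in range(nx):
            survival = 0.0
            for a in angles:
                dx, dy = np.cos(a) * step, np.sin(a) * step
                x, y, path = float(ix), float(iy), 0.0
                while 0.0 <= x < nx and 0.0 <= y < ny:   # march ray to boundary
                    path += mu[int(y), int(x)] * step
                    x, y = x + dx, y + dy
                survival += np.exp(-path)                # survival along this ray
            corr[iy, ix] = n_angles / survival           # 1 / mean survival
    return corr

mu = np.full((32, 32), 0.015)      # uniform, water-like toy attenuation map
corr = chang_correction(mu)        # multiply a reconstructed slice by corr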
[Quantitative surface analysis of Pt-Co, Cu-Au and Cu-Ag alloy films by XPS and AES].
Li, Lian-Zhong; Zhuo, Shang-Jun; Shen, Ru-Xiang; Qian, Rong; Gao, Jie
2013-11-01
In order to improve the accuracy of quantitative analysis by AES, we combined XPS with AES and studied a method to reduce the error of AES quantitative analysis. Pt-Co, Cu-Au and Cu-Ag binary alloy thin films were selected as the samples, and XPS was used to correct the AES quantitative analysis results by adjusting the Auger relative sensitivity factors so that the quantitative results of the two methods agreed more closely. We then verified the accuracy of AES quantitative analysis with the revised sensitivity factors on other samples with different composition ratios; the results showed that the corrected relative sensitivity factors reduce the error of AES quantitative analysis to less than 10%. Peak definition is difficult in integral-spectrum AES analysis, since the choice of the starting and ending points for determining the characteristic Auger peak intensity area carries great uncertainty. To make the analysis easier, we also processed the data in differential-spectrum form, performed quantitative analysis on the basis of peak-to-peak height instead of peak area, corrected the relative sensitivity factors, and verified the accuracy of the quantitative analysis on the other samples with different composition ratios. The result showed that the error of AES quantitative analysis was reduced to less than 9%. This shows that the accuracy of AES quantitative analysis can be greatly improved by combining XPS with AES to correct the Auger sensitivity factors, since the matrix effects are taken into account. Good consistency was obtained, proving the feasibility of the method.
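The relative-sensitivity-factor arithmetic being corrected here is standard: the atomic fraction of element i is (I_i/S_i) / sum_j(I_j/S_j), so adjusting S_i directly rescales the AES result toward the XPS one. A toy illustration (the numbers are assumptions, not the paper's measurements):

def atomic_fractions(intensities, sensitivity):
    """Atomic fractions from peak intensities and relative sensitivity factors.

    intensities, sensitivity: dicts keyed by element symbol.
    """
    weighted = {el: intensities[el] / sensitivity[el] for el in intensities}
    total = sum(weighted.values())
    return {el: w / total for el, w in weighted.items()}

# Illustrative Cu-Au example with hypothetical corrected sensitivity factors
print(atomic_fractions({"Cu": 1.8e4, "Au": 2.6e4}, {"Cu": 0.76, "Au": 1.25}))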
NASA Astrophysics Data System (ADS)
Costanzi, Stefano; Tikhonova, Irina G.; Harden, T. Kendall; Jacobson, Kenneth A.
2009-11-01
Accurate in silico models for the quantitative prediction of the activity of G protein-coupled receptor (GPCR) ligands would greatly facilitate the process of drug discovery and development. Several methodologies have been developed based on the properties of the ligands, the direct study of the receptor-ligand interactions, or a combination of both approaches. Ligand-based three-dimensional quantitative structure-activity relationships (3D-QSAR) techniques, not requiring knowledge of the receptor structure, have been historically the first to be applied to the prediction of the activity of GPCR ligands. They are generally endowed with robustness and good ranking ability; however they are highly dependent on training sets. Structure-based techniques generally do not provide the level of accuracy necessary to yield meaningful rankings when applied to GPCR homology models. However, they are essentially independent from training sets and have a sufficient level of accuracy to allow an effective discrimination between binders and nonbinders, thus qualifying as viable lead discovery tools. The combination of ligand and structure-based methodologies in the form of receptor-based 3D-QSAR and ligand and structure-based consensus models results in robust and accurate quantitative predictions. The contribution of the structure-based component to these combined approaches is expected to become more substantial and effective in the future, as more sophisticated scoring functions are developed and more detailed structural information on GPCRs is gathered.
Ultrasensitive, self-calibrated cavity ring-down spectrometer for quantitative trace gas analysis.
Chen, Bing; Sun, Yu R; Zhou, Ze-Yi; Chen, Jian; Liu, An-Wen; Hu, Shui-Ming
2014-11-10
A cavity ring-down spectrometer is built for trace gas detection using telecom distributed feedback (DFB) diode lasers. The longitudinal modes of the ring-down cavity are used as frequency markers without active locking of either the laser or the high-finesse cavity. A control scheme is applied to scan the DFB laser frequency, matching the cavity modes one by one in sequence and resulting in a correct index at each recorded spectral data point, which allows us to calibrate the spectrum with a relative frequency precision of 0.06 MHz. Besides the frequency precision of the spectrometer, a sensitivity (noise-equivalent absorption) of 4×10^-11 cm^-1 Hz^-1/2 has also been demonstrated. A minimum detectable absorption coefficient of 5×10^-12 cm^-1 has been obtained by averaging about 100 spectra recorded in 2 h. The quantitative accuracy is tested by measuring the CO2 concentrations in N2 samples prepared by the gravimetric method, and the relative deviation is less than 0.3%. The trace detection capability is demonstrated by detecting CO2 at ppbv-level concentrations in a high-purity nitrogen gas sample. Simple structure, high sensitivity, and good accuracy make the instrument very suitable for quantitative trace gas analysis.
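For reference, the absorption coefficient in cavity ring-down spectroscopy follows from the decay times with and without the absorber, alpha = (1/c)(1/tau - 1/tau0). A small helper with illustrative numbers (not the paper's data):

C_LIGHT = 2.99792458e10          # speed of light, cm/s

def absorption_coefficient(tau_s, tau0_s):
    """Absorption coefficient (1/cm) from ring-down times (s)."""
    return (1.0 / tau_s - 1.0 / tau0_s) / C_LIGHT

alpha = absorption_coefficient(24.95e-6, 25.00e-6)   # a 50 ns decay shortening
print(f"alpha = {alpha:.2e} cm^-1")                  # ~2.7e-9 cm^-1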
Lau, Darryl; Hervey-Jumper, Shawn L; Han, Seunggu J; Berger, Mitchel S
2018-05-01
OBJECTIVE There is ample evidence that extent of resection (EOR) is associated with improved outcomes for glioma surgery. However, it is often difficult to accurately estimate EOR intraoperatively, and surgeon accuracy has yet to be reviewed. In this study, the authors quantitatively assessed the accuracy of intraoperative perception of EOR during awake craniotomy for tumor resection. METHODS A single-surgeon experience of performing awake craniotomies for tumor resection over a 17-year period was examined. Operative reports were retrospectively reviewed for quantitative estimates of EOR. Definitive EOR was based on postoperative MRI. Accuracy of EOR estimation was examined both as a general outcome (gross-total resection [GTR] or subtotal resection [STR]) and quantitatively (within 5% of the EOR on postoperative MRI). Patient demographics, tumor characteristics, and surgeon experience were examined. The effects of accuracy on motor and language outcomes were assessed. RESULTS A total of 451 patients were included in the study. Overall accuracy of intraoperative perception of whether GTR or STR was achieved was 79.6%, and overall accuracy of quantitative perception of resection (within 5% of postoperative MRI) was 81.4%. There was a significant difference (p = 0.049) in accuracy for gross perception over the 17-year period, with improvement over the later years: 1997-2000 (72.6%), 2001-2004 (78.5%), 2005-2008 (80.7%), and 2009-2013 (84.4%). Similarly, there was a significant improvement (p = 0.015) in accuracy of quantitative perception of EOR over the 17-year period: 1997-2000 (72.2%), 2001-2004 (69.8%), 2005-2008 (84.8%), and 2009-2013 (93.4%). This improvement in accuracy is demonstrated by the significantly higher odds of correctly estimating quantitative EOR in the later years of the series on multivariate logistic regression. Insular tumors were associated with the highest accuracy of gross perception (89.3%; p = 0.034), but the lowest accuracy of quantitative perception (61.1% correct; p < 0.001) compared with tumors in other locations. Even after adjusting for surgeon experience, this particular trend for insular tumors remained true. The absence of 1p19q co-deletion was associated with higher quantitative perception accuracy (96.9% vs 81.5%; p = 0.051). Tumor grade, recurrence, diagnosis, and isocitrate dehydrogenase-1 (IDH-1) status were not associated with accurate perception of EOR. Overall, new neurological deficits occurred in 8.4% of cases, and 42.1% of those new neurological deficits persisted after the 3-month follow-up. Correct quantitative perception was associated with lower postoperative motor deficits (2.4%) compared with incorrect perceptions (8.0%; p = 0.029). There were no detectable differences in language outcomes based on perception of EOR. CONCLUSIONS The findings from this study suggest that there is a learning curve associated with the ability to accurately assess intraoperative EOR during glioma surgery, and it may take more than a decade to be truly proficient. Understanding the factors associated with this ability to accurately assess EOR will provide safer surgeries while maximizing tumor resection.
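The two accuracy outcomes can be made concrete with a short sketch (hypothetical helper names; the thresholds follow the abstract: GTR-vs-STR agreement, and quantitative agreement within 5% of the MRI-derived EOR):

def eor(pre_vol, post_vol):
    """Extent of resection as a percentage of preoperative tumor volume."""
    return 100.0 * (pre_vol - post_vol) / pre_vol

def perception_correct(estimated_eor, mri_eor, tol=5.0):
    gross_match = (estimated_eor == 100.0) == (mri_eor == 100.0)  # GTR vs STR
    quantitative_match = abs(estimated_eor - mri_eor) <= tol
    return gross_match, quantitative_match

# Surgeon estimates 95% resection; MRI shows 30.0 cm^3 -> 2.4 cm^3 (92% EOR)
print(perception_correct(estimated_eor=95.0, mri_eor=eor(30.0, 2.4)))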
Machine learning of accurate energy-conserving molecular force fields.
Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel E; Poltavsky, Igor; Schütt, Kristof T; Müller, Klaus-Robert
2017-05-01
Using conservation of energy, a fundamental property of closed classical and quantum mechanical systems, we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol^-1 for energies and 1 kcal mol^-1 Å^-1 for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods.
Heart energy signature spectrogram for cardiovascular diagnosis
Kudriavtsev, Vladimir; Polyshchuk, Vladimir; Roy, Douglas L
2007-01-01
A new method and application is proposed to characterize intensity and pitch of human heart sounds and murmurs. Using recorded heart sounds from the library of one of the authors, a visual map of heart sound energy was established. Both normal and abnormal heart sound recordings were studied. Representation is based on Wigner-Ville joint time-frequency transformations. The proposed methodology separates acoustic contributions of cardiac events simultaneously in pitch, time and energy. The resolution accuracy is superior to any other existing spectrogram method. The characteristic energy signature of the innocent heart murmur in a child with the S3 sound is presented. It allows clear detection of S1, S2 and S3 sounds, S2 split, systolic murmur, and intensity of these components. The original signal, heart sound power change with time, time-averaged frequency, energy density spectra and instantaneous variations of power and frequency/pitch with time, are presented. These data allow full quantitative characterization of heart sounds and murmurs. High accuracy in both time and pitch resolution is demonstrated. Resulting visual images have self-referencing quality, whereby individual features and their changes become immediately obvious. PMID:17480232
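The underlying transform is the Wigner-Ville distribution, commonly discretized as WVD(t, f) = FFT over the lag tau of x(t + tau) * conj(x(t - tau)). The sketch below is a generic bare-bones implementation applied to a toy chirp, not the authors' software; the analytic signal is used to reduce aliasing of cross-terms.

import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    x = hilbert(np.asarray(x, dtype=float))   # analytic signal
    n = len(x)
    W = np.zeros((n, n))
    for t in range(n):
        tau_max = min(t, n - 1 - t)           # lags that stay inside the signal
        taus = np.arange(-tau_max, tau_max + 1)
        kernel = np.zeros(n, dtype=complex)
        kernel[taus % n] = x[t + taus] * np.conj(x[t - taus])
        W[:, t] = np.real(np.fft.fft(kernel))
    return W    # rows: frequency bins, columns: time samples

fs = 4000.0
t = np.arange(0, 0.1, 1 / fs)
chirp = np.cos(2 * np.pi * (50 + 300 * t) * t)   # toy transient, rising pitch
tfr = wigner_ville(chirp)                        # joint time-frequency energy map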
Dependence of quantitative accuracy of CT perfusion imaging on system parameters
NASA Astrophysics Data System (ADS)
Li, Ke; Chen, Guang-Hong
2017-03-01
Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of deconvolution-based CTP imaging systems and of how their quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to the CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
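As a concrete anchor for the deconvolution step discussed above, here is a truncated-SVD sketch of the classic indicator-dilution model, C_tissue(t) = CBF * (AIF convolved with R)(t); the truncation threshold plays the role of the regularization strength, and all signals are synthetic assumptions.

import numpy as np

def tsvd_deconvolve(aif, tissue, dt, lam=0.1):
    """Return the CBF-scaled residue function k(t) = CBF * R(t) via truncated SVD."""
    n = len(aif)
    # Lower-triangular Toeplitz convolution matrix built from the AIF
    A = dt * np.array([[aif[i - j] if i >= j else 0.0
                        for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > lam * s[0], 1.0 / s, 0.0)   # truncate small modes
    return Vt.T @ (s_inv * (U.T @ tissue))

dt = 1.0                                      # s per frame
t = np.arange(60.0)
aif = np.exp(-((t - 12) ** 2) / 18.0)         # toy arterial input function
residue = np.exp(-t / 8.0)                    # toy exponential residue, MTT 8 s
tissue = dt * np.convolve(aif, 0.02 * residue)[:60]   # true CBF = 0.02 (1/s)
cbf_est = tsvd_deconvolve(aif, tissue, dt).max()      # CBF = peak of k(t)
print(f"estimated CBF ~ {cbf_est:.3f} 1/s")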
Winfree, Seth; Dagher, Pierre C; Dunn, Kenneth W; Eadon, Michael T; Ferkowicz, Michael; Barwinska, Daria; Kelly, Katherine J; Sutton, Timothy A; El-Achkar, Tarek M
2018-06-05
Kidney biopsy remains the gold standard for uncovering the pathogenesis of acute and chronic kidney diseases. However, the ability to perform high resolution, quantitative, molecular and cellular interrogation of this precious tissue is still at a developing stage compared to other fields such as oncology. Here, we discuss recent advances in performing large-scale, three-dimensional (3D), multi-fluorescence imaging of kidney biopsies and quantitative analysis referred to as 3D tissue cytometry. This approach allows the accurate measurement of specific cell types and their spatial distribution in a thick section spanning the entire length of the biopsy. By uncovering specific disease signatures, including rare occurrences, and linking them to the biology in situ, this approach will enhance our understanding of disease pathogenesis. Furthermore, by providing accurate quantitation of cellular events, 3D cytometry may improve the accuracy of prognosticating the clinical course and response to therapy. Therefore, large-scale 3D imaging and cytometry of kidney biopsy is poised to become a bridge towards personalized medicine for patients with kidney disease. © 2018 S. Karger AG, Basel.
Quantitative Monitoring of Microbial Species during Bioleaching of a Copper Concentrate.
Hedrich, Sabrina; Guézennec, Anne-Gwenaëlle; Charron, Mickaël; Schippers, Axel; Joulian, Catherine
2016-01-01
Monitoring of the microbial community in bioleaching processes is essential in order to control process parameters and enhance the leaching efficiency. Suitable methods are, however, limited as they are usually not adapted to bioleaching samples and often no taxon-specific assays are available in the literature for these types of consortia. Therefore, our study focused on the development of novel quantitative real-time PCR (qPCR) assays for the quantification of Acidithiobacillus caldus, Leptospirillum ferriphilum, Sulfobacillus thermosulfidooxidans , and Sulfobacillus benefaciens and comparison of the results with data from other common molecular monitoring methods in order to evaluate their accuracy and specificity. Stirred tank bioreactors for the leaching of copper concentrate, housing a consortium of acidophilic, moderately thermophilic bacteria, relevant in several bioleaching operations, served as a model system. The microbial community analysis via qPCR allowed a precise monitoring of the evolution of total biomass as well as abundance of specific species. Data achieved by the standard fingerprinting methods, terminal restriction fragment length polymorphism (T-RFLP) and capillary electrophoresis single strand conformation polymorphism (CE-SSCP) on the same samples followed the same trend as qPCR data. The main added value of qPCR was, however, to provide quantitative data for each species whereas only relative abundance could be deduced from T-RFLP and CE-SSCP profiles. Additional value was obtained by applying two further quantitative methods which do not require nucleic acid extraction, total cell counting after SYBR Green staining and metal sulfide oxidation activity measurements via microcalorimetry. Overall, these complementary methods allow for an efficient quantitative microbial community monitoring in various bioleaching operations.
Sun, Wanxin; Chang, Shi; Tai, Dean C S; Tan, Nancy; Xiao, Guangfa; Tang, Huihuan; Yu, Hanry
2008-01-01
Liver fibrosis is associated with an abnormal increase in extracellular matrix in chronic liver diseases. Quantitative characterization of fibrillar collagen in intact tissue is essential for both fibrosis studies and clinical applications. Commonly used methods, histological staining followed by either semiquantitative or computerized image analysis, have limited sensitivity and accuracy, and suffer from operator-dependent variation. The fibrillar collagen in the sinusoids of normal livers can be observed through second-harmonic generation (SHG) microscopy. The two-photon excited fluorescence (TPEF) images, recorded simultaneously with SHG, clearly reveal the hepatocyte morphology. We have systematically optimized the parameters for quantitative SHG/TPEF imaging of liver tissue and developed fully automated image analysis algorithms to extract information on collagen changes and cell necrosis. Subtle changes in the distribution and amount of collagen and in cell morphology are quantitatively characterized in SHG/TPEF images. Compared to traditional staining, such as Masson's trichrome and Sirius red, SHG/TPEF is a sensitive quantitative tool for automated collagen characterization in liver tissue. Our system allows for enhanced detection and quantification of sinusoidal collagen fibers in fibrosis research and clinical diagnostics.
Rizzo, Gaia; Raffeiner, Bernd; Coran, Alessandro; Ciprian, Luca; Fiocco, Ugo; Botsios, Costantino; Stramare, Roberto; Grisan, Enrico
2015-07-01
Inflammatory rheumatic diseases are the leading causes of disability and constitute a frequent medical disorder, leading to inability to work, high comorbidity, and increased mortality. The standard for diagnosing and differentiating arthritis is based on clinical examination, laboratory exams, and imaging findings, such as synovitis, bone edema, or joint erosions. Contrast-enhanced ultrasound (CEUS) examination of the small joints is emerging as a sensitive tool for assessing vascularization and disease activity. Quantitative assessment is mostly performed at the region of interest level, where the mean intensity curve is fitted with an exponential function. We showed that using a more physiologically motivated perfusion curve, and by estimating the kinetic parameters separately pixel by pixel, the quantitative information gathered is able to more effectively characterize the different perfusion patterns. In particular, we demonstrated that a random forest classifier based on pixelwise quantification of the kinetic contrast agent perfusion features can discriminate rheumatoid arthritis from different arthritis forms (psoriatic arthritis, spondyloarthritis, and arthritis in connective tissue disease) with an average accuracy of 97%. On the contrary, clinical evaluation (DAS28), semiquantitative CEUS assessment, serological markers, or region-based parameters do not allow such a high diagnostic accuracy.
Montanini, R; Freni, F; Rossi, G L
2012-09-01
This paper reports one of the first experimental results on the application of ultrasound-activated lock-in vibrothermography for the quantitative assessment of buried flaws in complex cast parts. The use of amplitude-modulated ultrasonic heat generation allowed a selective response of defective areas within the part, as the defect itself is turned into a local thermal wave emitter. Quantitative evaluation of hidden damage was accomplished by independently estimating both the area and the depth extension of the buried flaws, while x-ray 3D computed tomography was used as the reference for assessing sizing accuracy. To retrieve the flaw area, a simple yet effective histogram-based phase image segmentation algorithm with automatic pixel classification has been developed. A clear correlation was found between the thermal (phase) signature measured by the infrared camera on the target surface and the actual mean cross-section area of the flaw. Due to the very fast cycle time (<30 s/part), the method could potentially be applied for 100% quality control of cast components.
Govyadinov, Alexander A; Amenabar, Iban; Huth, Florian; Carney, P Scott; Hillenbrand, Rainer
2013-05-02
Scattering-type scanning near-field optical microscopy (s-SNOM) and Fourier transform infrared nanospectroscopy (nano-FTIR) are emerging tools for nanoscale chemical material identification. Here, we push s-SNOM and nano-FTIR one important step further by enabling them to quantitatively measure local dielectric constants and infrared absorption. Our technique is based on an analytical model, which allows for a simple inversion of the near-field scattering problem. It yields the dielectric permittivity and absorption of samples with 2 orders of magnitude improved spatial resolution compared to far-field measurements and is applicable to a large class of samples including polymers and biological matter. We verify the capabilities by determining the local dielectric permittivity of a PMMA film from nano-FTIR measurements, which is in excellent agreement with far-field ellipsometric data. We further obtain local infrared absorption spectra with unprecedented accuracy in peak position and shape, which is the key to quantitative chemometrics on the nanometer scale.
Asynchronous adaptive time step in quantitative cellular automata modeling
Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan
2004-01-01
Background The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on that, how to address the heavy time consumption of simulation. Results Building on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing asynchronous adaptive time steps in simulation that considerably improves efficiency without a significant sacrifice of accuracy. An average speedup rate of 4–5 is achieved in the given example. Conclusions Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. A distributed, adaptive time step is a practical solution in a cellular automata environment. PMID:15222901
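A minimal sketch of the asynchronous adaptive time step idea: each cell integrates its own ODE with a locally adapted step (here chosen by a step-doubling error estimate), and all cells synchronize at the coarse cellular-automaton update time. The decay ODE, tolerances, and step policy are illustrative assumptions, not the paper's scheme.

```python
# Each cell advances with its own adaptive step and syncs at a common time.
import numpy as np

def rhs(y):
    return -y  # placeholder cell dynamics dy/dt = f(y)

def euler(y, h):
    return y + h * rhs(y)

def advance_cell(y, t_end, h=0.1, tol=1e-4):
    t = 0.0
    while t < t_end:
        h = min(h, t_end - t)
        full = euler(y, h)                     # one full step
        half = euler(euler(y, h / 2), h / 2)   # two half steps for error estimate
        if abs(half - full) <= tol:            # accept; try a larger step next
            y, t, h = half, t + h, min(2 * h, 1.0)
        else:                                  # reject; retry with a smaller step
            h *= 0.5
    return y

cells = np.random.rand(100, 100)
sync_interval = 1.0                            # coarse CA update time
cells = np.vectorize(lambda y: advance_cell(y, sync_interval))(cells)
```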
Global, quantitative and dynamic mapping of protein subcellular localization
Itzhak, Daniel N; Tyanova, Stefka; Cox, Jürgen; Borner, Georg HH
2016-01-01
Subcellular localization critically influences protein function, and cells control protein localization to regulate biological processes. We have developed and applied Dynamic Organellar Maps, a proteomic method that allows global mapping of protein translocation events. We initially used maps statically to generate a database with localization and absolute copy number information for over 8700 proteins from HeLa cells, approaching comprehensive coverage. All major organelles were resolved, with exceptional prediction accuracy (estimated at >92%). Combining spatial and abundance information yielded an unprecedented quantitative view of HeLa cell anatomy and organellar composition, at the protein level. We subsequently demonstrated the dynamic capabilities of the approach by capturing translocation events following EGF stimulation, which we integrated into a quantitative model. Dynamic Organellar Maps enable the proteome-wide analysis of physiological protein movements, without requiring any reagents specific to the investigated process, and will thus be widely applicable in cell biology. DOI: http://dx.doi.org/10.7554/eLife.16950.001 PMID:27278775
3D quantitative analysis of early decomposition changes of the human face.
Caplova, Zuzana; Gibelli, Daniele Maria; Poppa, Pasquale; Cummaudo, Marco; Obertova, Zuzana; Sforza, Chiarella; Cattaneo, Cristina
2018-03-01
Decomposition of the human body and human face is influenced, among other things, by environmental conditions. The early decomposition changes that modify the appearance of the face may hamper the recognition and identification of the deceased. Quantitative assessment of those changes may provide important information for forensic identification. This report presents a pilot 3D quantitative approach to tracking early decomposition changes of a single cadaver under controlled environmental conditions, summarizing the changes with weekly morphological descriptions. The root mean square (RMS) value was used to evaluate the changes of the face after death. The results showed a high correlation (r = 0.863) between the measured RMS and the time since death. RMS values of each scan are presented, as well as the average weekly RMS values. The quantification of decomposition changes could improve the accuracy of antemortem facial approximation and potentially could allow direct comparisons of antemortem and postmortem 3D scans.
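A short sketch of the RMS metric and its correlation with postmortem time. It assumes the 3D scans are already co-registered so per-vertex distances are meaningful; the distance arrays and day values are placeholders, not the study's data.

```python
# RMS surface change between registered 3D facial scans vs. time since death.
import numpy as np

def rms_change(dist):
    """dist: per-vertex distances between baseline and follow-up scan (mm)."""
    return np.sqrt(np.mean(dist**2))

days = np.array([7, 14, 21, 28, 35])
rms = np.array([rms_change(np.random.rand(5000) * d / 10) for d in days])
r = np.corrcoef(days, rms)[0, 1]   # the paper reports r = 0.863 on real data
print(f"Pearson r between RMS and time since death: {r:.3f}")
```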
Morales-Navarrete, Hernán; Segovia-Miranda, Fabián; Klukowski, Piotr; Meyer, Kirstin; Nonaka, Hidenori; Marsico, Giovanni; Chernykh, Mikhail; Kalaidzidis, Alexander; Zerial, Marino; Kalaidzidis, Yannis
2015-01-01
A prerequisite for the systems biology analysis of tissues is an accurate digital three-dimensional reconstruction of tissue structure based on images of markers covering multiple scales. Here, we designed a flexible pipeline for the multi-scale reconstruction and quantitative morphological analysis of tissue architecture from microscopy images. Our pipeline includes newly developed algorithms that address specific challenges of thick dense tissue reconstruction. Our implementation allows for a flexible workflow, scalable to high-throughput analysis and applicable to various mammalian tissues. We applied it to the analysis of liver tissue and extracted quantitative parameters of sinusoids, bile canaliculi and cell shapes, recognizing different liver cell types with high accuracy. Using our platform, we uncovered an unexpected zonation pattern of hepatocytes with different size, nuclei and DNA content, thus revealing new features of liver tissue organization. The pipeline also proved effective to analyse lung and kidney tissue, demonstrating its generality and robustness. DOI: http://dx.doi.org/10.7554/eLife.11214.001 PMID:26673893
Umeda, Takuro; Miwa, Kenta; Murata, Taisuke; Miyaji, Noriaki; Wagatsuma, Kei; Motegi, Kazuki; Terauchi, Takashi; Koizumi, Mitsuru
2017-12-01
The present study aimed to qualitatively and quantitatively evaluate PET images as a function of acquisition time for various leg sizes, and to optimize a shorter variable-acquisition time protocol for legs to achieve better qualitative and quantitative accuracy of true whole-body PET/CT images. The diameters of legs to be modeled as phantoms were defined based on data derived from 53 patients. This study analyzed PET images of a NEMA phantom and three plastic bottle phantoms (diameter, 5.68, 8.54 and 10.7 cm) that simulated the human body and legs, respectively. The phantoms comprised two spheres (diameters, 10 and 17 mm) containing fluorine-18 fluorodeoxyglucose solution with sphere-to-background ratios of 4 at a background radioactivity level of 2.65 kBq/mL. All PET data were reconstructed with acquisition times ranging from 10 to 180 s, as well as 1200 s. We visually evaluated image quality and determined the coefficient of variance (CV) of the background, the contrast and the quantitative %error of the hot spheres, and then determined two shorter variable-acquisition protocols for legs. Lesion detectability and quantitative accuracy determined based on maximum standardized uptake values (SUVmax) in PET images of a patient using the proposed protocols were also evaluated. A larger phantom and a shorter acquisition time resulted in increased background noise on images and decreased contrast in the hot spheres. A visual score of ≥ 1.5 was obtained when the acquisition time was ≥ 30 s for the three leg phantoms, and ≥ 120 s for the NEMA phantom. The quantitative %errors of the 10- and 17-mm spheres in the leg phantoms were ± 15 and ± 10%, respectively, in PET images with a high CV (scan < 30 s). The mean SUVmax of three lesions using the current fixed-acquisition and the two proposed variable-acquisition time protocols in the clinical study were 3.1, 3.1 and 3.2, respectively, which did not significantly differ. A leg acquisition time per bed position of even 30-90 s allows axial equalization, uniform image noise and a maximum ± 15% quantitative accuracy for the smallest lesion. The overall acquisition time was reduced by 23-42% using the proposed shorter variable-acquisition time protocols rather than the current fixed-acquisition time for imaging legs, indicating that this is a useful and practical protocol for routine qualitative and quantitative PET/CT assessment in the clinical setting.
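For concreteness, the two image-quality metrics named above reduce to simple ratios; the ROI samples and the true activity value below are illustrative placeholders.

```python
# Background coefficient of variance (CV) and quantitative %error of a hot sphere.
import numpy as np

background_roi = np.random.normal(2.65, 0.4, 1000)   # kBq/mL voxel samples (placeholder)
sphere_mean, true_activity = 9.8, 10.6               # measured vs. true kBq/mL (assumed)

cv = 100 * background_roi.std() / background_roi.mean()
pct_error = 100 * (sphere_mean - true_activity) / true_activity
print(f"background CV: {cv:.1f}%  quantitative error: {pct_error:+.1f}%")
```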
An approach for quantitative image quality analysis for CT
NASA Astrophysics Data System (ADS)
Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe
2016-03-01
An objective and standardized approach to assess the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis toolkit to analyze CT generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that, compared to standard principal component analysis (PCA), generates components with sparse loadings; it is used in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.
Kuhlenbeck, Debbie L; Eichold, Thomas H; Hoke, Steven H; Baker, Timothy R; Mensen, Robert; Wehmeyer, Kenneth R
2005-01-01
An on-line liquid chromatography/tandem mass spectrometry (LC-MS/MS) procedure, using the Prospekt-2 system, was developed and used for the determination of the levels of the active ingredients of cough/cold medications in human plasma matrix. The experimental configuration allows direct plasma injection by performing on-line solid phase extraction (SPE) on small cartridge columns prior to elution of the analyte(s) onto the analytical column and subsequent MS/MS detection. The quantitative analysis of three analytes with differing polarities, dextromethorphan (DEX), dextrorphan (DET) and guaifenesin (GG) in human plasma presented a significant challenge. Using stable-isotope-labeled internal standards for each analyte, the Prospekt-2 on-line methodology was evaluated for sensitivity, suppression, accuracy, precision, linearity, analyst time, analysis time, cost, carryover and ease of use. The lower limit of quantitation for the on-line SPE procedure for DEX, DET and GG was 0.05, 0.05 and 5.0 ng mL(-1), respectively, using a 0.1 mL sample volume. The linear range for DEX and DET was 0.05-50 ng mL(-1) and was 5-5,000 ng mL(-1) for GG. Accuracy and precision data for five different levels of QC samples were collected over three separate days. Accuracy ranged from 90% to 112% for all three analytes, while the precision, as measured by the %RSD, ranged from 1.5% to 16.0%.
Multistrip western blotting to increase quantitative data output.
Kiyatkin, Anatoly; Aksamitiene, Edita
2009-01-01
The qualitative and quantitative measurements of protein abundance and modification states are essential in understanding their functions in diverse cellular processes. Typical western blotting, though sensitive, is prone to produce substantial errors and is not readily adapted to high-throughput technologies. Multistrip western blotting is a modified immunoblotting procedure based on simultaneous electrophoretic transfer of proteins from multiple strips of polyacrylamide gels to a single membrane sheet. In comparison with the conventional technique, Multistrip western blotting increases the data output per single blotting cycle up to tenfold, allows concurrent monitoring of up to nine different proteins from the same loading of the sample, and substantially improves the data accuracy by reducing immunoblotting-derived signal errors. This approach enables statistically reliable comparison of different or repeated sets of data, and is therefore well suited to biomedical diagnostics, systems biology, and cell signaling research.
Zhou, Yangbo; Fox, Daniel S; Maguire, Pierce; O’Connell, Robert; Masters, Robert; Rodenburg, Cornelia; Wu, Hanchun; Dapor, Maurizio; Chen, Ying; Zhang, Hongzhou
2016-01-01
Two-dimensional (2D) materials usually have a layer-dependent work function, which requires fast and accurate detection for the evaluation of their device performance. A detection technique with high throughput and high spatial resolution has not yet been explored. Using a scanning electron microscope, we have developed and implemented a quantitative analytical technique which allows effective extraction of the work function of graphene. This technique uses the secondary electron contrast and has nanometre-resolved layer information. The measurement of few-layer graphene flakes shows the variation of work function between graphene layers with a precision of less than 10 meV. It is expected that this technique will prove extremely useful for researchers in a broad range of fields due to its revolutionary throughput and accuracy. PMID:26878907
Mordini, Federico E; Haddad, Tariq; Hsu, Li-Yueh; Kellman, Peter; Lowrey, Tracy B; Aletras, Anthony H; Bandettini, W Patricia; Arai, Andrew E
2014-01-01
This study's primary objective was to determine the sensitivity, specificity, and accuracy of fully quantitative stress perfusion cardiac magnetic resonance (CMR) versus a reference standard of quantitative coronary angiography. We hypothesized that fully quantitative analysis of stress perfusion CMR would have high diagnostic accuracy for identifying significant coronary artery stenosis and exceed the accuracy of semiquantitative measures of perfusion and qualitative interpretation. Relatively few studies apply fully quantitative CMR perfusion measures to patients with coronary disease and comparisons to semiquantitative and qualitative methods are limited. Dual bolus dipyridamole stress perfusion CMR exams were performed in 67 patients with clinical indications for assessment of myocardial ischemia. Stress perfusion images alone were analyzed with a fully quantitative perfusion (QP) method and 3 semiquantitative methods including contrast enhancement ratio, upslope index, and upslope integral. Comprehensive exams (cine imaging, stress/rest perfusion, late gadolinium enhancement) were analyzed qualitatively with 2 methods including the Duke algorithm and standard clinical interpretation. A 70% or greater stenosis by quantitative coronary angiography was considered abnormal. The optimum diagnostic threshold for QP determined by receiver-operating characteristic curve occurred when endocardial flow decreased to <50% of mean epicardial flow, which yielded a sensitivity of 87% and specificity of 93%. The area under the curve for QP was 92%, which was superior to semiquantitative methods: contrast enhancement ratio: 78%; upslope index: 82%; and upslope integral: 75% (p = 0.011, p = 0.019, p = 0.004 vs. QP, respectively). Area under the curve for QP was also superior to qualitative methods: Duke algorithm: 70%; and clinical interpretation: 78% (p < 0.001 and p < 0.001 vs. QP, respectively). Fully quantitative stress perfusion CMR has high diagnostic accuracy for detecting obstructive coronary artery disease. QP outperforms semiquantitative measures of perfusion and qualitative methods that incorporate a combination of cine, perfusion, and late gadolinium enhancement imaging. These findings suggest a potential clinical role for quantitative stress perfusion CMR. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
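A small sketch of the ROC step described above: the diagnostic feature is the endocardial-to-epicardial flow ratio, and the optimal cutoff is picked by Youden's J statistic. The simulated ratios and labels are placeholders, not the study's measurements.

```python
# Choosing the optimal QP diagnostic threshold from the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
ratio = np.r_[rng.normal(0.45, 0.1, 40),    # diseased: reduced endo/epi flow ratio
              rng.normal(0.75, 0.1, 60)]    # normal
disease = np.r_[np.ones(40), np.zeros(60)]

# Lower ratios indicate ischemia, so use score = -ratio for a standard ROC.
fpr, tpr, thr = roc_curve(disease, -ratio)
auc = roc_auc_score(disease, -ratio)
best = np.argmax(tpr - fpr)                 # Youden's J statistic
print(f"AUC = {auc:.2f}, optimal ratio cutoff ~ {-thr[best]:.2f}")
```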
Lam, Van K; Nguyen, Thanh C; Chung, Byung M; Nehmetallah, George; Raub, Christopher B
2018-03-01
The noninvasive, fast acquisition of quantitative phase maps using digital holographic microscopy (DHM) allows tracking of rapid cellular motility on transparent substrates. On two-dimensional surfaces in vitro, MDA-MB-231 cancer cells assume several morphologies related to the mode of migration and substrate stiffness, relevant to mechanisms of cancer invasiveness in vivo. The quantitative phase information from DHM may accurately classify adhesive cancer cell subpopulations with clinical relevance. To test this, cells from the invasive breast cancer MDA-MB-231 cell line were cultured on glass, tissue-culture treated polystyrene, and collagen hydrogels, and imaged with DHM followed by epifluorescence microscopy after staining F-actin and nuclei. Trends in cell phase parameters were tracked on the different substrates, during cell division, and during matrix adhesion, relating them to F-actin features. Support vector machine learning algorithms were trained and tested using parameters from holographic phase reconstructions and cell geometric features from conventional phase images, and used to distinguish between elongated and rounded cell morphologies. DHM was able to distinguish between elongated and rounded morphologies of MDA-MB-231 cells with 94% accuracy, compared to 83% accuracy using cell geometric features from conventional brightfield microscopy. This finding indicates the potential of DHM to detect and monitor cancer cell morphologies relevant to cell cycle phase status, substrate adhesion, and motility. © 2017 International Society for Advancement of Cytometry.
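A minimal sketch of the SVM classification step. The feature names (mean phase, phase volume, projected area) and the simulated data are assumptions for illustration; the study's exact feature set is not reproduced here.

```python
# SVM classification of cell morphology from quantitative phase features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# columns: mean phase (rad), phase volume, projected area (assumed features)
X = np.vstack([rng.normal([1.2, 40, 300], [0.2, 8, 60], (50, 3)),    # rounded
               rng.normal([0.8, 55, 700], [0.2, 8, 60], (50, 3))])   # elongated
y = np.r_[np.zeros(50), np.ones(50)]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")   # the paper reports 94% on real data
```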
Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin
2017-08-15
Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an 18O-labeled reference sample created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility in protein identification results across samples. In the present study, a method combining the 18O-reference strategy with a strategy that decouples quantitation from identification was investigated with proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy has greater accuracy and reliability than previously used comparison methods based on transferred-comparison or label-free strategies. In the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated according to retention time and accurate mass to identify differentially expressed proteins. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave good reproducibility in protein identification across multiple samples, and allowed peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.
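A sketch of the decoupling step: quantified LC-MS features are matched to LC-MS/MS identifications by retention time and accurate mass. The tolerance values and array names are illustrative assumptions.

```python
# Match LC-MS quantification features to LC-MS/MS identifications.
import numpy as np

def match_features(quant_mz, quant_rt, id_mz, id_rt,
                   ppm_tol=10.0, rt_tol=0.5):
    """Return, for each quantified feature, the index of a matching ID or -1."""
    matches = np.full(len(quant_mz), -1)
    for k, (mz, rt) in enumerate(zip(quant_mz, quant_rt)):
        dppm = 1e6 * np.abs(id_mz - mz) / mz      # mass error in ppm
        drt = np.abs(id_rt - rt)                  # retention-time error in min
        ok = np.where((dppm < ppm_tol) & (drt < rt_tol))[0]
        if ok.size:
            matches[k] = ok[np.argmin(dppm[ok])]  # closest mass wins
    return matches
```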
Hamamura, Kensuke; Yanagida, Mitsuaki; Ishikawa, Hitoshi; Banzai, Michio; Yoshitake, Hiroshi; Nonaka, Daisuke; Tanaka, Kenji; Sakuraba, Mayumi; Miyakuni, Yasuka; Takamori, Kenji; Nojima, Michio; Yoshida, Koyo; Fujiwara, Hiroshi; Takeda, Satoru; Araki, Yoshihiko
2018-03-01
Purpose We previously attempted to develop quantitative enzyme-linked immunosorbent assay (ELISA) systems for the PDA039/044/071 peptides, potential serum disease biomarkers (DBMs) of pregnancy-induced hypertension (PIH), primarily identified by a peptidomic approach (BLOTCHIP®-mass spectrometry (MS)). However, our methodology did not extend to PDA071 (cysteinyl α2-HS-glycoprotein341-367), owing to the difficulty of producing a specific antibody against the peptide. The aim of the present study was to establish an alternative PDA071 quantitation system using liquid chromatography-multiple reaction monitoring (LC-MRM)/MS, to explore the potential utility of PDA071 as a DBM for PIH. Methods We tested heat/acid denaturation methods in efforts to purify serum PDA071 and developed an LC-MRM/MS method allowing for its specific quantitation. We measured serum PDA071 concentrations, and these results were validated, including by three-dimensional (3D) plotting against PDA039 (kininogen-1439-456)/044 (kininogen-1438-456) concentrations, followed by discriminant analysis. Results PDA071 was successfully extracted from serum using a heat denaturation method. Optimum conditions for quantitation via LC-MRM/MS were developed; the assayed serum PDA071 correlated well with the BLOTCHIP® assay values. Although PDA071 alone did not significantly differ between patients and controls, 3D plotting of PDA039/044/071 peptide concentrations and construction of a Jackknife classification matrix were satisfactory in terms of PIH diagnostic precision. Conclusions Combination analysis using both PDA071 and PDA039/044 concentrations achieved adequate PIH diagnostic accuracy, and our method will be valuable in future pathophysiological studies of hypertensive disorders of pregnancy.
Quantitative optical metrology with CMOS cameras
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Kolenovic, Ervin; Ferguson, Curtis F.
2004-08-01
Recent advances in laser technology, optical sensing, and computer processing of data have led to the development of advanced quantitative optical metrology techniques for high accuracy measurements of absolute shapes and deformations of objects. These techniques provide noninvasive, remote, and full field of view information about the objects of interest. The information obtained relates to changes in shape and/or size of the objects, characterizes anomalies, and provides tools to enhance fabrication processes. Factors that influence the selection and applicability of an optical technique include the required sensitivity, accuracy, and precision for a particular application. In this paper, sensitivity, accuracy, and precision characteristics in quantitative optical metrology techniques, and specifically in optoelectronic holography (OEH) based on CMOS cameras, are discussed. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gauges, demonstrating the applicability of CMOS cameras in quantitative optical metrology techniques. It is shown that the advanced nature of CMOS technology can be applied to challenging engineering applications, including the study of rapidly evolving phenomena occurring in MEMS and micromechatronics.
NASA Astrophysics Data System (ADS)
Jézéquel, Tangi; Silvestre, Virginie; Dinis, Katy; Giraudeau, Patrick; Akoka, Serge
2018-04-01
Isotope ratio monitoring by 13C NMR spectrometry (irm-13C NMR) provides the complete 13C intramolecular position-specific composition at natural abundance. It represents a powerful tool to track the (bio)chemical pathway which has led to the synthesis of targeted molecules, since it allows Position-specific Isotope Analysis (PSIA). Due to the very small composition range (the range of variation of the isotopic composition of a given nucleus) of 13C natural abundance values (50‰), irm-13C NMR requires 1‰ accuracy and thus highly quantitative analysis by 13C NMR. Until now, the conventional strategy to determine the position-specific abundance xi has relied on the combination of irm-MS (isotope ratio monitoring Mass Spectrometry) and 13C quantitative NMR. However, this approach presents a serious drawback since it relies on two different techniques and requires measuring separately the signal of all the carbons of the analyzed compound, which is not always possible. To circumvent this constraint, we recently proposed a new methodology to perform 13C isotopic analysis using an internal reference method and relying on NMR only. The method combines a highly quantitative 1H NMR pulse sequence (named DWET) with a 13C isotopic NMR measurement. However, the recently published DWET sequence is unsuited for samples with short T1, which poses a serious limitation for irm-13C NMR experiments where a relaxing agent is added. In this context, we suggest two variants of the DWET, called Multi-WET and Profiled-WET, developed and optimized to reach the same accuracy of 1‰ with better immunity to T1 variations. Their performance is evaluated on the determination of the 13C isotopic profile of vanillin. Both pulse sequences show 1‰ accuracy with increased robustness to pulse miscalibrations compared to the initial DWET method. This constitutes a major advance in the context of irm-13C NMR since it is now possible to perform isotopic analysis with high relaxing agent concentrations, leading to a strong reduction of the overall experiment time.
Ho, Junming; Zwicker, Vincent E; Yuen, Karen K Y; Jolliffe, Katrina A
2017-10-06
Robust quantum chemical methods are employed to predict the pKa values of several families of dual hydrogen-bonding organocatalysts/anion receptors, including deltamides and croconamides as well as their thio derivatives. The average accuracy of these predictions is ∼1 pKa unit and allows for a comparison of the acidity between classes of receptors and for quantitative studies of substituent effects. These computational insights further explain the relationship between pKa and chloride anion affinity of these receptors that will be important for designing future anion receptors and organocatalysts.
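For orientation, first-principles pKa predictions of this kind generally rest on the standard thermodynamic-cycle identity below, relating the computed aqueous deprotonation free energy to pKa. This is a textbook relation, not a statement of the paper's specific protocol.

```latex
\mathrm{p}K_a \;=\; \frac{\Delta G^{*}_{\mathrm{aq}}}{RT\ln 10},
\qquad
\Delta G^{*}_{\mathrm{aq}} \;=\;
G^{*}_{\mathrm{aq}}(\mathrm{A}^{-}) + G^{*}_{\mathrm{aq}}(\mathrm{H}^{+})
- G^{*}_{\mathrm{aq}}(\mathrm{HA})
```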
Coupled-mode propagation in multicore fibers characterized by optical low-coherence reflectometry.
Salathé, R P; Gilgen, H; Bodmer, G
1996-07-01
A fiber-optical low-coherence reflectometer has been used to probe a multicore fiber locally at a wavelength of 1.3 μm. This technique allows one to determine the group index of refraction of the modes in the multicore fiber with high accuracy. Light propagation that is due to noncoherent coupling of energy from one fiber core to adjacent cores through cladding modes can be distinguished quantitatively from light propagating in coherently coupled modes. Intercore coupling constants in the range of 0.6-2 mm(-1) have been evaluated for the coupled modes.
An arbitrary-order staggered time integrator for the linear acoustic wave equation
NASA Astrophysics Data System (ADS)
Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo
2018-02-01
We suggest a staggered time integrator whose order of accuracy can be arbitrarily extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed, based on an error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost severalfold, but also allows us to set the modelling parameters, such as the time step length, grid interval and P-wave speed, flexibly. It is demonstrated that the proposed method can almost eliminate temporal dispersive errors during long-term simulations regardless of the heterogeneity of the media and the time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in practical usage for imaging algorithms or inverse problems.
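For context, below is the classic second-order staggered (leapfrog) update for the 1-D linear acoustic system p_t = -K v_x, v_t = -(1/ρ) p_x, which is the base scheme such integrators extend to arbitrary order; the grid sizes and material constants are illustrative, and the paper's higher-order extension is not reproduced.

```python
# Second-order staggered-grid leapfrog for the 1-D linear acoustic equations.
import numpy as np

nx, nt = 400, 1000
dx, rho, K = 1.0, 1.0, 1.0
c = np.sqrt(K / rho)
dt = 0.5 * dx / c                                   # CFL-stable time step

p = np.exp(-0.01 * (np.arange(nx) - nx / 2) ** 2)   # initial pressure pulse
v = np.zeros(nx + 1)                                # velocity on the staggered grid

for _ in range(nt):
    v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])   # v lives at half-integer nodes
    p -= dt * K / dx * (v[1:] - v[:-1])             # p lives at integer nodes
```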
Magnetic resonance imaging of the preterm infant brain.
Doria, Valentina; Arichi, Tomoki; Edwards, David A
2014-01-01
Despite improvements in neonatal care, survivors of preterm birth are still at a significantly increased risk of developing life-long neurological difficulties including cerebral palsy and cognitive difficulties. Cranial ultrasound is routinely used in neonatal practice, but has a low sensitivity for identifying later neurodevelopmental difficulties. Magnetic Resonance Imaging (MRI) can be used to identify intracranial abnormalities with greater diagnostic accuracy in preterm infants, and theoretically might improve the planning and targeting of long-term neurodevelopmental care, reduce parental stress and unplanned healthcare utilisation, and ultimately improve healthcare cost-effectiveness. Furthermore, MR imaging offers the advantage of allowing quantitative assessment of the integrity, growth and function of intracranial structures, thereby providing the means to develop sensitive biomarkers which may be predictive of later neurological impairment. However, further work is needed to define the accuracy and value of diagnosis by MR and the technique's precise role in care pathways for preterm infants.
Trans-dimensional MCMC methods for fully automatic motion analysis in tagged MRI.
Smal, Ihor; Carranza-Herrezuelo, Noemí; Klein, Stefan; Niessen, Wiro; Meijering, Erik
2011-01-01
Tagged magnetic resonance imaging (tMRI) is a well-known noninvasive method allowing quantitative analysis of regional heart dynamics. Its clinical use has so far been limited, in part due to the lack of robustness and accuracy of existing tag tracking algorithms in dealing with low (and intrinsically time-varying) image quality. In this paper, we propose a novel probabilistic method for tag tracking, implemented by means of Bayesian particle filtering and a trans-dimensional Markov chain Monte Carlo (MCMC) approach, which efficiently combines information about the imaging process and tag appearance with prior knowledge about the heart dynamics obtained by means of non-rigid image registration. Experiments using synthetic image data (with ground truth) and real data (with expert manual annotation) from preclinical (small animal) and clinical (human) studies confirm that the proposed method yields higher consistency, accuracy, and intrinsic tag reliability assessment in comparison with other frequently used tag tracking methods.
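A minimal bootstrap particle filter, to illustrate the Bayesian filtering component of the tag-tracking approach described above. The scalar random-walk dynamics, Gaussian likelihood, and all parameters are illustrative assumptions, not the paper's trans-dimensional MCMC model.

```python
# Minimal bootstrap particle filter tracking a single tag coordinate.
import numpy as np

rng = np.random.default_rng(2)
n, sigma_dyn, sigma_obs = 500, 1.0, 2.0
particles = rng.normal(0.0, 5.0, n)        # initial tag-position hypotheses
weights = np.full(n, 1.0 / n)

def pf_step(z):
    """One predict-update-resample cycle given measurement z."""
    global particles, weights
    particles = particles + rng.normal(0.0, sigma_dyn, n)        # predict
    weights = weights * np.exp(-0.5 * ((z - particles) / sigma_obs) ** 2)  # update
    weights /= weights.sum()
    idx = rng.choice(n, n, p=weights)                            # resample
    particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles.mean()                                      # posterior estimate

for z in [0.5, 1.1, 2.0, 2.8]:             # simulated tag observations
    print(f"estimated tag position: {pf_step(z):.2f}")
```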
Biglands, John D; Ibraheem, Montasir; Magee, Derek R; Radjenovic, Aleksandra; Plein, Sven; Greenwood, John P
2018-05-01
This study sought to compare the diagnostic accuracy of visual and quantitative analyses of myocardial perfusion cardiovascular magnetic resonance against a reference standard of quantitative coronary angiography. Visual analysis of perfusion cardiovascular magnetic resonance studies for assessing myocardial perfusion has been shown to have high diagnostic accuracy for coronary artery disease. However, only a few small studies have assessed the diagnostic accuracy of quantitative myocardial perfusion. This retrospective study included 128 patients randomly selected from the CE-MARC (Clinical Evaluation of Magnetic Resonance Imaging in Coronary Heart Disease) study population such that the distribution of risk factors and disease status was proportionate to the full population. Visual analysis results of cardiovascular magnetic resonance perfusion images, by consensus of 2 expert readers, were taken from the original study reports. Quantitative myocardial blood flow estimates were obtained using Fermi-constrained deconvolution. The reference standard for myocardial ischemia was a quantitative coronary x-ray angiogram stenosis severity of ≥70% diameter in any coronary artery of >2 mm diameter, or ≥50% in the left main stem. Diagnostic performance was calculated using receiver-operating characteristic curve analysis. The area under the curve for visual analysis was 0.88 (95% confidence interval: 0.81 to 0.95) with a sensitivity of 81.0% (95% confidence interval: 69.1% to 92.8%) and specificity of 86.0% (95% confidence interval: 78.7% to 93.4%). For quantitative stress myocardial blood flow the area under the curve was 0.89 (95% confidence interval: 0.83 to 0.96) with a sensitivity of 87.5% (95% confidence interval: 77.3% to 97.7%) and specificity of 84.5% (95% confidence interval: 76.8% to 92.3%). There was no statistically significant difference between the diagnostic performance of quantitative and visual analyses (p = 0.72). Incorporating rest myocardial blood flow values to generate a myocardial perfusion reserve did not significantly increase the quantitative analysis area under the curve (p = 0.79). Quantitative perfusion has a high diagnostic accuracy for detecting coronary artery disease but is not superior to visual analysis. The incorporation of rest perfusion imaging does not improve diagnostic accuracy in quantitative perfusion analysis. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lee, Seungwan; Kang, Sooncheol; Eom, Jisoo
2017-03-01
Contrast-enhanced mammography has been used to demonstrate functional information about a breast tumor by injecting contrast agents. However, a conventional technique with a single exposure degrades the efficiency of tumor detection due to structure overlapping. Dual-energy techniques with energy-integrating detectors (EIDs) also cause an increase of radiation dose and an inaccuracy of material decomposition due to the limitations of EIDs. On the other hand, spectral mammography with photon-counting detectors (PCDs) is able to resolve the issues induced by the conventional technique and EIDs using their energy-discrimination capabilities. In this study, contrast-enhanced spectral mammography based on a PCD was implemented by using a polychromatic dual-energy model, and the proposed technique was compared with the dual-energy technique with an EID in terms of quantitative accuracy and radiation dose. The results showed that the proposed technique improved quantitative accuracy and reduced radiation dose compared with the dual-energy technique with an EID. The quantitative accuracy of contrast-enhanced spectral mammography based on a PCD improved slightly as a function of radiation dose. Therefore, contrast-enhanced spectral mammography based on a PCD is able to provide useful information for detecting breast tumors and improving diagnostic accuracy.
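A sketch of the core of two-bin material decomposition in its idealized monoenergetic (linearized) form; the paper uses a full polychromatic model, which this does not reproduce. The attenuation coefficients and measurements are illustrative placeholders.

```python
# Two-material decomposition from two photon-counting energy bins: solve A x = b,
# where A holds effective mass attenuation coefficients (cm^2/g) per bin.
import numpy as np

# rows: low/high energy bin; columns: soft tissue, iodine contrast (placeholders)
A = np.array([[0.25, 6.0],
              [0.20, 2.0]])
b = np.array([1.10, 0.55])          # measured -ln(I/I0) per bin (placeholders)

areal_density = np.linalg.solve(A, b)   # g/cm^2 of each basis material
print(f"tissue: {areal_density[0]:.3f} g/cm^2, iodine: {areal_density[1]:.4f} g/cm^2")
```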
Computer aided manual validation of mass spectrometry-based proteomic data.
Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M
2013-06-15
Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm the accuracy of database identifications, but is extremely time-intensive. To mitigate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.
Hierarchical image segmentation via recursive superpixel with adaptive regularity
NASA Astrophysics Data System (ADS)
Nakamura, Kensuke; Hong, Byung-Woo
2017-11-01
A fast and accurate hierarchical segmentation algorithm based on a recursive superpixel technique is presented. We propose a superpixel energy formulation in which the trade-off between data fidelity and regularization is dynamically determined based on the local residual in the energy optimization procedure. We also present an energy optimization algorithm that allows a pixel to be shared by multiple regions, improving the accuracy and yielding an appropriate number of segments. The qualitative and quantitative evaluations demonstrate that our algorithm, combining the proposed energy and optimization, outperforms the conventional k-means algorithm by up to 29.10% in F-measure. We also perform comparative analysis with state-of-the-art algorithms in hierarchical segmentation. Our algorithm yields smooth regions throughout the hierarchy, as opposed to the others, which include insignificant details. Our algorithm also outperforms the other algorithms in terms of the balance between accuracy and computational time. Specifically, our method runs 36.48% faster than the region-merging approach, which is the fastest of the compared algorithms, while achieving comparable accuracy.
The active learning hypothesis of the job-demand-control model: an experimental examination.
Häusser, Jan Alexander; Schulz-Hardt, Stefan; Mojzisch, Andreas
2014-01-01
The active learning hypothesis of the job-demand-control model [Karasek, R. A. 1979. "Job Demands, Job Decision Latitude, and Mental Strain: Implications for Job Redesign." Administrative Science Quarterly 24: 285-307] proposes positive effects of high job demands and high job control on performance. We conducted a 2 (demands: high vs. low) × 2 (control: high vs. low) experimental office workplace simulation to examine this hypothesis. Since performance during a work simulation is confounded by the boundaries of the demands and control manipulations (e.g. time limits), we used a post-test, in which participants continued working at their task, but without any manipulation of demands and control. This post-test allowed for examining active learning (transfer) effects in an unconfounded fashion. Our results revealed that high demands had a positive effect on quantitative performance, without affecting task accuracy. In contrast, high control resulted in a speed-accuracy tradeoff, that is, participants in the high control conditions worked slower but with greater accuracy than participants in the low control conditions.
Machine Learning of Accurate Energy-Conserving Molecular Force Fields
NASA Astrophysics Data System (ADS)
Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel; Poltavsky, Igor; Schütt, Kristof; Müller, Klaus-Robert; GDML Collaboration
Efficient and accurate access to the Born-Oppenheimer potential energy surface (PES) is essential for long time scale molecular dynamics (MD) simulations. Using conservation of energy - a fundamental property of closed classical and quantum mechanical systems - we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio MD trajectories (AIMD). The GDML implementation is able to reproduce global potential-energy surfaces of intermediate-size molecules with an accuracy of 0.3 kcal/mol for energies and 1 kcal/mol/Å for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, malonaldehyde, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative MD simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods.
NASA Astrophysics Data System (ADS)
Ahn, Sangtae; Ross, Steven G.; Asma, Evren; Miao, Jun; Jin, Xiao; Cheng, Lishui; Wollenweber, Scott D.; Manjeshwar, Ravindra M.
2015-08-01
Ordered subset expectation maximization (OSEM) is the most widely used algorithm for clinical PET image reconstruction. OSEM is usually stopped early and post-filtered to control image noise and does not necessarily achieve optimal quantitation accuracy. As an alternative to OSEM, we have recently implemented a penalized likelihood (PL) image reconstruction algorithm for clinical PET using the relative difference penalty, with the aim of improving quantitation accuracy without compromising visual image quality. Preliminary clinical studies have demonstrated that visual image quality, including lesion conspicuity, in images reconstructed by the PL algorithm is better than or at least as good as that in OSEM images. In this paper we evaluate the lesion quantitation accuracy of the PL algorithm with the relative difference penalty compared to OSEM by using various data sets, including phantom data acquired with an anthropomorphic torso phantom, an extended oval phantom and the NEMA image quality phantom; clinical data; and hybrid clinical data generated by adding simulated lesion data to clinical data. We focus on mean standardized uptake values and compare them for PL and OSEM using both time-of-flight (TOF) and non-TOF data. The results demonstrate improvements of PL in lesion quantitation accuracy compared to OSEM, with a particular improvement in cold background regions such as the lungs.
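For reference, the relative difference penalty named above is commonly published in the form below, where the neighborhood weights w_jk and the edge-preservation parameter γ control smoothing; this is the form from the penalized-likelihood literature, and the exact weighting used in this implementation is not stated in the abstract.

```latex
R(\mathbf{f}) \;=\; \beta \sum_{j}\sum_{k \in N_j} w_{jk}\,
\frac{(f_j - f_k)^2}{f_j + f_k + \gamma\,\lvert f_j - f_k\rvert}
```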
Masci, Ilaria; Vannozzi, Giuseppe; Bergamini, Elena; Pesce, Caterina; Getchell, Nancy; Cappozzo, Aurelio
2013-04-01
Objective quantitative evaluation of motor skill development is of increasing importance to carefully drive physical exercise programs in childhood. Running is a fundamental motor skill humans adopt to accomplish locomotion, which is linked to physical activity levels, although the assessment is traditionally carried out using qualitative evaluation tests. The present study aimed at investigating the feasibility of using inertial sensors to quantify developmental differences in the running pattern of young children. Qualitative and quantitative assessment tools were adopted to identify a skill-sensitive set of biomechanical parameters for running and to further our understanding of the factors that determine progression to skilled running performance. Running performances of 54 children between the ages of 2 and 12 years were submitted to both qualitative and quantitative analysis, the former using sequences of developmental level, the latter estimating temporal and kinematic parameters from inertial sensor measurements. Discriminant analysis with running developmental level as dependent variable allowed to identify a set of temporal and kinematic parameters, within those obtained with the sensor, that best classified children into the qualitative developmental levels (accuracy higher than 67%). Multivariate analysis of variance with the quantitative parameters as dependent variables allowed to identify whether and which specific parameters or parameter subsets were differentially sensitive to specific transitions between contiguous developmental levels. The findings showed that different sets of temporal and kinematic parameters are able to tap all steps of the transitional process in running skill described through qualitative observation and can be prospectively used for applied diagnostic and sport training purposes. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Yokum, Jeffrey S.; Pryputniewicz, Ryszard J.
2002-06-01
Sensitivity, accuracy, and precision characteristics in quantitative optical metrology techniques, and specifically in optoelectronic holography based on fiber optics and high-spatial and high-digital resolution cameras, are discussed in this paper. It is shown that sensitivity, accuracy, and precision dependent on both, the effective determination of optical phase and the effective characterization of the illumination-observation conditions. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gages, demonstrating the applicability of quantitative optical metrology techniques to satisfy constantly increasing needs for the study and development of emerging technologies.
Determining the Thickness and the Sub-Structure Details of the Magnetopause from MMS Data
NASA Astrophysics Data System (ADS)
Manuzzo, R.; Belmont, G.; Rezeau, L.
2017-12-01
The magnetopause thickness, like its mean location, is a notion that can have different meanings depending on which parameters are considered (magnetic field or plasma properties). In any case, all determinations have been done, up to now, by considering the magnetopause boundary as a structure that is strictly stationary and 1D (or with a simple curvature). These determinations have been shown to be very sensitive to the accuracy of the normal direction, because it affects the projection of the quantities of interest when studying geometrically sensitive phenomena such as magnetic reconnection. Furthermore, the 1D stationary assumptions are likely to be rarely verified at the real magnetopause. The high quality measurements of MMS and their high time resolution now allow investigation of the magnetopause structure in its more delicate features and with unequalled spatio-temporal accuracy. We make use here of the MDD tool developed by [Shi et al., 2005], which gives the dimensionality of the gradients from the four-point measurements of MMS and allows estimating the direction of the local normal when defined. Extending this method to various quantities, we can draw their profiles as functions of a physical abscissa (length instead of time) along a sensible normal. This procedure allows us to answer quantitatively the questions concerning the locations and thicknesses of the different sub-structures encountered inside the "global magnetopause" [Rezeau, 2017, paper submitted to JGR-Space Physics].
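A sketch of the eigen-analysis at the heart of the MDD method of Shi et al. (2005): given the 3×3 spatial gradient G of a field (estimated from the four MMS spacecraft, a step not reproduced here), the eigenvalues of L = G Gᵀ rank the structure's dimensionality and the leading eigenvector estimates the local normal. The gradient matrix below is a placeholder.

```python
# MDD-style dimensionality and normal estimation from a field gradient.
import numpy as np

G = np.array([[2.0, 0.1, 0.0],     # strong variation along x: quasi-1D layer
              [0.2, 0.1, 0.0],
              [0.0, 0.0, 0.05]])

L = G @ G.T
eigval, eigvec = np.linalg.eigh(L)          # eigh returns ascending eigenvalues
order = np.argsort(eigval)[::-1]
lam = eigval[order]
normal = eigvec[:, order[0]]                # direction of steepest gradient

# lambda1 >> lambda2, lambda3 indicates a locally 1-D structure.
print(f"eigenvalues: {lam}, estimated normal: {normal}")
```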
Analysing magnetism using scanning SQUID microscopy.
Reith, P; Renshaw Wang, X; Hilgenkamp, H
2017-12-01
Scanning superconducting quantum interference device microscopy (SSM) is a scanning probe technique that images local magnetic flux, which allows for mapping of magnetic fields with high field and spatial accuracy. Many studies involving SSM have been published in the last few decades, using SSM to make qualitative statements about magnetism. However, quantitative analysis using SSM has received less attention. In this work, we discuss several aspects of interpreting SSM images and methods to improve quantitative analysis. First, we analyse the spatial resolution and how it depends on several factors. Second, we discuss the analysis of SSM scans and the information obtained from the SSM data. Using simulations, we show how signals evolve as a function of changing scan height, SQUID loop size, magnetization strength, and orientation. We also investigated 2-dimensional autocorrelation analysis to extract information about the size, shape, and symmetry of magnetic features. Finally, we provide an outlook on possible future applications and improvements.
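A short sketch of the 2-dimensional autocorrelation analysis mentioned above, computed via the Wiener-Khinchin theorem; the input flux map is a placeholder array, and the FFT-based estimate is circular (zero-padding would give a linear estimate).

```python
# 2-D autocorrelation of an SSM flux map for feature size/symmetry analysis.
import numpy as np

img = np.random.rand(256, 256)              # stand-in for a scanned flux map
z = img - img.mean()
power = np.abs(np.fft.fft2(z)) ** 2         # power spectrum
acorr = np.fft.fftshift(np.fft.ifft2(power).real)
acorr /= acorr.max()                        # normalized; peak at zero lag
```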
Multistrip Western blotting: a tool for comparative quantitative analysis of multiple proteins.
Aksamitiene, Edita; Hoek, Jan B; Kiyatkin, Anatoly
2015-01-01
The qualitative and quantitative measurements of protein abundance and modification states are essential in understanding their functions in diverse cellular processes. Typical Western blotting, though sensitive, is prone to produce substantial errors and is not readily adapted to high-throughput technologies. Multistrip Western blotting is a modified immunoblotting procedure based on simultaneous electrophoretic transfer of proteins from multiple strips of polyacrylamide gels to a single membrane sheet. In comparison with the conventional technique, Multistrip Western blotting increases data output per single blotting cycle up to tenfold; allows concurrent measurement of the expression of up to nine different total and/or posttranslationally modified proteins from the same sample loading; and substantially improves the data accuracy by reducing immunoblotting-derived signal errors. This approach enables statistically reliable comparison of different or repeated sets of data and is therefore advantageous in biomedical diagnostics, systems biology, and cell signaling research.
Developing a database for pedestrians' earthquake emergency evacuation in indoor scenarios.
Zhou, Junxue; Li, Sha; Nie, Gaozhong; Fan, Xiwei; Tan, Jinxian; Li, Huayue; Pang, Xiaoke
2018-01-01
With the booming development of evacuation simulation software, developing an extensive database of pedestrian behavior in indoor scenarios for evacuation models is imperative. In this paper, we conduct a qualitative and quantitative analysis of the collected videotapes and aim to provide a complete and unitary database of pedestrians' earthquake emergency response behaviors in indoor scenarios, including human-environment interactions. Using the qualitative analysis method, we extract keyword groups and keywords that code the response modes of pedestrians and construct a general decision flowchart using chronological organization. Using the quantitative analysis method, we analyze data on the delay time, evacuation speed, evacuation route and emergency exit choices. Furthermore, we study the effect of classroom layout on emergency evacuation. The database for indoor scenarios provides reliable input parameters and allows the construction of realistic and effective constraints for use in software and mathematical models. The database can also be used to validate the accuracy of evacuation models.
Udompaisarn, Somsiri; Arthan, Dumrongkiet; Somana, Jamorn
2017-04-19
An enzymatic method for specific determination of stevioside content was established. Recombinant β-glucosidase BT_3567 (rBT_3567) from Bacteroides thetaiotaomicron HB-13 exhibited selective hydrolysis of stevioside at the β-1,2-glycosidic bond to yield rubusoside and glucose. Coupling of this enzyme with glucose oxidase and peroxidase allowed for quantitation of stevioside content in Stevia samples by using a colorimetric-based approach. The series of reactions for stevioside determination can be completed within 1 h at 37 °C. Stevioside determination using the enzymatic assay strongly correlated with results obtained from HPLC quantitation (r2 = 0.9629, n = 16). The coefficients of variation (CV) of within-day (n = 12) and between-day (n = 12) assays were lower than 5%, and accuracies ranged from 95 to 105%. This analysis demonstrates that the enzymatic method developed in this study is specific, easy to perform, accurate, and yields reproducible results.
Objective breast tissue image classification using Quantitative Transmission ultrasound tomography
NASA Astrophysics Data System (ADS)
Malik, Bilal; Klock, John; Wiskin, James; Lenox, Mark
2016-12-01
Quantitative Transmission Ultrasound (QT) is a powerful and emerging imaging paradigm which has the potential to perform true three-dimensional image reconstruction of biological tissue. Breast imaging is an important application of QT and allows non-invasive, non-ionizing imaging of whole breasts in vivo. Here, we report the first demonstration of breast tissue image classification in QT imaging. We systematically assess the ability of three QT image features to differentiate between normal breast tissue types. The three QT features were used in Support Vector Machine (SVM) classifiers, and classification of breast tissue as either skin, fat, glands, ducts or connective tissue was demonstrated with an overall accuracy of greater than 90%. Finally, the classifier was validated on whole breast image volumes to provide a color-coded breast tissue volume. This study serves as a first step towards a computer-aided detection/diagnosis platform for QT.
IsoCor: correcting MS data in isotope labeling experiments.
Millard, Pierre; Letisse, Fabien; Sokol, Serguei; Portais, Jean-Charles
2012-05-01
Mass spectrometry (MS) is widely used for isotopic labeling studies of metabolism and other biological processes. Quantitative applications (e.g. metabolic flux analysis) require tools to correct the raw MS data for the contribution of all naturally abundant isotopes. IsoCor is a software that allows such correction to be applied to any chemical species. Hence it can be used to exploit any isotopic tracer, from well-known ((13)C, (15)N, (18)O, etc) to unusual ((57)Fe, (77)Se, etc) isotopes. It also provides new features (e.g. correction for the isotopic purity of the tracer) to improve the accuracy of quantitative isotopic studies, and implements an efficient algorithm to process large datasets. Its user-friendly interface makes isotope labeling experiments more accessible to a wider biological community. IsoCor is distributed under OpenSource license at http://metasys.insa-toulouse.fr/software/isocor/
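A sketch of the correction idea behind tools like IsoCor: build a correction matrix from natural isotope abundance and invert it to recover tracer-derived labeling. Only carbon is corrected here, whereas real tools handle all elements and tracer purity; the molecule size and measured fractions are illustrative.

```python
# Natural-abundance correction of a 13C isotopologue distribution.
import numpy as np
from scipy.stats import binom
from scipy.optimize import nnls

def correction_matrix(n_carbons, p13c=0.0107):
    """Column j: measured mass-shift distribution for the M+j isotopologue,
    where the n_carbons - j unlabeled carbons follow natural 13C abundance."""
    M = np.zeros((n_carbons + 1, n_carbons + 1))
    for j in range(n_carbons + 1):
        k = np.arange(n_carbons - j + 1)
        M[j + k, j] = binom.pmf(k, n_carbons - j, p13c)
    return M

measured = np.array([0.62, 0.05, 0.30, 0.02, 0.01])   # raw M+0..M+4 fractions
corrected, _ = nnls(correction_matrix(4), measured)   # non-negative least squares
corrected /= corrected.sum()
print(corrected)                                      # corrected labeling fractions
```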
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
2002-06-01
Effective suppression of speckle noise content in interferometric data images can help improve the accuracy and resolution of the results obtained with interferometric optical metrology techniques. In this paper, novel speckle noise reduction algorithms based on the discrete wavelet transformation are presented. The algorithms proceed by: (a) estimating the noise level contained in the interferograms of interest, (b) selecting wavelet families, (c) applying the wavelet transformation using the selected families, (d) wavelet thresholding, and (e) applying the inverse wavelet transformation, producing denoised interferograms. The algorithms are applied to the different stages of the processing procedures utilized for generation of quantitative speckle correlation interferometry data in fiber-optic based opto-electronic holography (FOBOEH) techniques, allowing identification of optimal processing conditions. It is shown that wavelet algorithms are effective for speckle noise reduction while preserving image features that are otherwise lost with other algorithms.
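A minimal sketch of the denoising pipeline (a)-(e) described above, using PyWavelets. The wavelet family, decomposition level, and universal threshold are illustrative choices, not the paper's tuned settings.

```python
# Wavelet-thresholding denoising of an interferogram-like image.
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # (a) noise estimate from the finest diagonal subband (robust MAD)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(img.size))     # universal threshold
    # (d) soft-threshold every detail subband, keep the approximation
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="soft") for c in triple)
        for triple in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)            # (e) inverse transform

noisy = np.random.rand(256, 256)                       # placeholder interferogram
clean = wavelet_denoise(noisy)
```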
One registration multi-atlas-based pseudo-CT generation for attenuation correction in PET/MRI.
Arabi, Hossein; Zaidi, Habib
2016-10-01
The outcome of a detailed assessment of various strategies for atlas-based whole-body bone segmentation from magnetic resonance imaging (MRI) was exploited to select the optimal parameters and setting, with the aim of proposing a novel one-registration multi-atlas (ORMA) pseudo-CT generation approach. The proposed approach consists of only one online registration between the target and reference images, regardless of the number of atlas images (N), while for the remaining atlas images, the pre-computed transformation matrices to the reference image are used to align them to the target image. The performance characteristics of the proposed method were evaluated and compared with conventional atlas-based attenuation map generation strategies (direct registration of the entire atlas images followed by voxel-wise weighting (VWW) and arithmetic averaging atlas fusion). To this end, four different positron emission tomography (PET) attenuation maps were generated via arithmetic averaging and the VWW scheme using both direct registration and ORMA approaches, as well as the 3-class attenuation map obtained from the Philips Ingenuity TF PET/MRI scanner commonly used in the clinical setting. The evaluation was performed based on the accuracy of whole-body bone extracted by the different attenuation maps and by quantitative analysis of the resulting PET images compared to CT-based attenuation-corrected PET images serving as reference. The comparison of validation metrics regarding the accuracy of extracted bone using the different techniques demonstrated the superiority of the VWW atlas fusion algorithm, achieving a Dice similarity measure of 0.82 ± 0.04 compared to arithmetic averaging atlas fusion (0.60 ± 0.02), which uses conventional direct registration. Application of the ORMA approach modestly compromised the accuracy, yielding a Dice similarity measure of 0.76 ± 0.05 for ORMA-VWW and 0.55 ± 0.03 for ORMA-averaging. The results of quantitative PET analysis followed the same trend with less significant differences in terms of SUV bias, whereas massive improvements were observed compared to PET images corrected for attenuation using the 3-class attenuation map. The maximum absolute bias achieved by the VWW and ORMA-VWW methods was 6.4 ± 5.5 in the lung and 7.9 ± 4.8 in the bone, respectively. The proposed algorithm is capable of generating decent attenuation maps. The quantitative analysis revealed a good correlation between PET images corrected for attenuation using the proposed pseudo-CT generation approach and the corresponding CT images. The computational time is reduced to roughly 1/N of that required for direct registration, at the expense of a modest decrease in quantitative accuracy, thus allowing us to achieve a reasonable compromise between computing time and quantitative performance.
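A sketch of the ORMA composition step: the single online registration (reference to target) is composed with the N stored atlas-to-reference transforms, so no atlas is registered to the target directly. Affine 4×4 matrices are used for illustration only; the actual registrations may be deformable.

```python
# Compose one online registration with precomputed atlas->reference transforms.
import numpy as np

def compose(ref_to_target, atlas_to_ref):
    """Map atlas space to target space via the reference (homogeneous 4x4)."""
    return ref_to_target @ atlas_to_ref

T_target_to_ref = np.eye(4)
T_target_to_ref[0, 3] = 5.0                      # the single online registration (placeholder)
T_ref_to_target = np.linalg.inv(T_target_to_ref)

precomputed = [np.eye(4) for _ in range(10)]     # stored atlas->reference transforms
atlas_to_target = [compose(T_ref_to_target, T) for T in precomputed]
```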
Blattmann, Peter; Heusel, Moritz; Aebersold, Ruedi
2016-01-01
SWATH-MS is an acquisition and analysis technique of targeted proteomics that enables measuring several thousand proteins with high reproducibility and accuracy across many samples. OpenSWATH is popular open-source software for peptide identification and quantification from SWATH-MS data. Different tools exist for downstream statistical and quantitative analysis, such as MSstats, mapDIA and aLFQ. However, the transfer of data from OpenSWATH to the downstream statistical tools is currently technically challenging. Here we introduce the R/Bioconductor package SWATH2stats, which allows convenient processing of the data into a format directly readable by the downstream analysis tools. In addition, SWATH2stats allows annotation, analysis of the variation and reproducibility of the measurements, FDR estimation, and advanced filtering before submitting the processed data to downstream tools. These functionalities are important for quickly assessing the quality of the SWATH-MS data. Hence, SWATH2stats is a new open-source tool that brings together several practical functionalities for analyzing, processing, and converting SWATH-MS data and thus facilitates the efficient analysis of large-scale SWATH/DIA datasets.
van Dijk, R; van Assen, M; Vliegenthart, R; de Bock, G H; van der Harst, P; Oudkerk, M
2017-11-27
Stress cardiovascular magnetic resonance (CMR) perfusion imaging is a promising modality for the evaluation of coronary artery disease (CAD) due to high spatial resolution and absence of radiation. Semi-quantitative and quantitative analysis of CMR perfusion are based on signal-intensity curves produced during the first-pass of gadolinium contrast. Multiple semi-quantitative and quantitative parameters have been introduced. Diagnostic performance of these parameters varies extensively among studies and standardized protocols are lacking. This study aims to determine the diagnostic accuracy of semi-quantitative and quantitative CMR perfusion parameters, compared to multiple reference standards. Pubmed, WebOfScience, and Embase were systematically searched using predefined criteria (3272 articles). A check for duplicates was performed (1967 articles). Eligibility and relevance of the articles were determined by two reviewers using pre-defined criteria. The primary data extraction was performed independently by two researchers with the use of a predefined template. Differences in extracted data were resolved by discussion between the two researchers. The quality of the included studies was assessed using the 'Quality Assessment of Diagnostic Accuracy Studies Tool' (QUADAS-2). True positives, false positives, true negatives, and false negatives were extracted or calculated from the articles. The principal summary measures used to assess diagnostic accuracy were sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Data were pooled according to analysis territory, reference standard and perfusion parameter. Twenty-two articles were eligible based on the predefined study eligibility criteria. The pooled diagnostic accuracy for segment-, territory- and patient-based analyses showed good diagnostic performance with sensitivity of 0.88, 0.82, and 0.83, specificity of 0.72, 0.83, and 0.76, and AUC of 0.90, 0.84, and 0.87, respectively. In per-territory analysis our results show similar diagnostic accuracy comparing anatomical (AUC 0.86 (0.83-0.89)) and functional reference standards (AUC 0.88 (0.84-0.90)). Only the per-territory analysis sensitivity did not show significant heterogeneity. None of the groups showed signs of publication bias. The clinical value of semi-quantitative and quantitative CMR perfusion analysis remains uncertain due to extensive inter-study heterogeneity and large differences in CMR perfusion acquisition protocols, reference standards, and methods of assessment of myocardial perfusion parameters. For widespread implementation, standardization of CMR perfusion techniques is essential. CRD42016040176.
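For readers reproducing such a pooling, the summary measures come straight from the extracted 2x2 tables. The sketch below shows the arithmetic with toy counts; note that the study's actual pooling model (typically a bivariate random-effects meta-analysis) is more elaborate than this naive aggregation.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity and specificity from one extracted 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Toy 2x2 counts (TP, FP, TN, FN) for three hypothetical studies in one territory.
studies = [(45, 10, 80, 8), (30, 12, 60, 5), (50, 9, 70, 10)]
tp, fp, tn, fn = (sum(s[i] for s in studies) for i in range(4))
sens, spec = diagnostic_metrics(tp, fp, tn, fn)
print(f"pooled (naive) sensitivity={sens:.2f}, specificity={spec:.2f}")
```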
NASA Astrophysics Data System (ADS)
Qi, Pan; Shao, Wenbin; Liao, Shusheng
2016-02-01
For research on quantitative defect detection in heat transfer tubes of nuclear power plants (NPP), two lines of work are carried out, with cracks as the main research object. (1) Production optimization of calibration tubes. First, ASME, RSEM and homemade crack calibration tubes are applied to quantify the depth of defects on purpose-designed crack test tubes, and the calibration tube that yields the more accurate quantitative results is identified. Based on that, a weighting analysis of the factors influencing crack depth quantification, such as crack orientation, length and volume, can be undertaken, which in turn guides optimization of the manufacturing technology for calibration tubes. (2) Quantitative optimization of crack depth. A neural network model with multiple calibration curves, adopted to refine the measured depth of natural cracks in in-service tubes, shows preliminary ability to improve quantitative accuracy.
Arlt, Janine; Homeyer, André; Sänger, Constanze; Dahmen, Uta; Dirsch, Olaf
2016-01-01
Quantitative analysis of histologic slides is of importance for pathology and also to address surgical questions. Recently, a novel application was developed for the automated quantification of whole-slide images. The aim of this study was to test and validate the underlying image analysis algorithm with respect to user friendliness, accuracy, and transferability to different histologic scenarios. The algorithm splits the images into tiles of a predetermined size and identifies the tissue class of each tile. In the training procedure, the user specifies example tiles of the different tissue classes. In the subsequent analysis procedure, the algorithm classifies each tile into the previously specified classes. User friendliness was evaluated by recording training time and testing reproducibility of the training procedure for users with different backgrounds. Accuracy was determined with respect to single and batch analysis. Transferability was demonstrated by analyzing tissue of different organs (rat liver, kidney, small bowel, and spleen) and with different stainings (glutamine synthetase and hematoxylin-eosin). Users of different educational backgrounds could apply the program efficiently after a short introduction. When analyzing images with similar properties, accuracy of >90% was reached in single images as well as in batch mode. We demonstrated that the novel application is user friendly and very accurate. With the "training" procedure the application can be adapted to novel image characteristics simply by giving examples of relevant tissue structures. Therefore, it is suitable for the fast and efficient analysis of high numbers of fully digitalized histologic sections, potentially allowing "high-throughput" quantitative "histomic" analysis.
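The train-then-classify tile workflow described above can be sketched in a few lines. The per-tile features (mean and standard deviation) and the random-forest classifier below are placeholders: the abstract does not specify the application's internal feature set or classifier, and all data here are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

TILE = 32  # predetermined tile size in pixels

def tile_features(img, tile=TILE):
    """Cut a grayscale slide image into tiles; mean/std per tile as toy features."""
    h, w = img.shape[0] // tile, img.shape[1] // tile
    tiles = img[: h * tile, : w * tile].reshape(h, tile, w, tile).swapaxes(1, 2)
    flat = tiles.reshape(h * w, -1)
    return np.c_[flat.mean(axis=1), flat.std(axis=1)]

rng = np.random.default_rng(1)
slide = rng.random((256, 256))                    # stand-in for a whole-slide image
feats = tile_features(slide)

# "Training" procedure: the user supplies example tiles with tissue-class labels.
example_idx = np.arange(0, len(feats), 4)
labels = (feats[example_idx, 0] > 0.5).astype(int)  # two toy classes by brightness
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(feats[example_idx], labels)

# "Analysis" procedure: every tile of the slide is assigned one of the classes.
tile_classes = clf.predict(feats)
```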
León-Reina, L; García-Maté, M; Álvarez-Pinazo, G; Santacruz, I; Vallcorba, O; De la Torre, A G; Aranda, M A G
2016-06-01
This study reports 78 Rietveld quantitative phase analyses using Cu Kα1, Mo Kα1 and synchrotron radiations. Synchrotron powder diffraction has been used to validate the most challenging analyses. From the results for three series with increasing contents of an analyte (an inorganic crystalline phase, an organic crystalline phase and a glass), it is inferred that Rietveld analyses from high-energy Mo Kα1 radiation have slightly better accuracies than those obtained from Cu Kα1 radiation. This behaviour has been established from the results of the calibration graphs obtained through the spiking method and also from Kullback-Leibler distance statistic studies. This outcome is explained, in spite of the lower diffraction power of Mo radiation compared to Cu radiation, as arising because of the larger volume tested with Mo and also because the higher energy allows one to record patterns with fewer systematic errors. The limit of detection (LoD) and limit of quantification (LoQ) have also been established for the studied series. For similar recording times, the LoDs in Cu patterns, ∼0.2 wt%, are slightly lower than those derived from Mo patterns, ∼0.3 wt%. The LoQ for a well crystallized inorganic phase using laboratory powder diffraction was established to be close to 0.10 wt% in stable fits with good precision. However, the accuracy of these analyses was poor with relative errors near 100%. Only contents higher than 1.0 wt% yielded analyses with relative errors lower than 20%.
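The limit definitions behind spiking-method calibration graphs are conventionally LoD ≈ 3.3·s/b and LoQ ≈ 10·s/b, with b the calibration slope and s the residual standard deviation; whether the authors used exactly these conventions is not stated in the abstract, so the sketch below is generic and the numbers are invented.

```python
import numpy as np

# Spiked analyte contents (wt%) and Rietveld-analysed contents for one series.
added    = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
measured = np.array([0.1, 0.6, 1.1, 2.1, 3.8, 8.2])

slope, intercept = np.polyfit(added, measured, 1)
residual_sd = np.std(measured - (slope * added + intercept), ddof=2)

lod = 3.3 * residual_sd / slope   # limit of detection (common ICH-style rule)
loq = 10.0 * residual_sd / slope  # limit of quantification
print(f"slope={slope:.3f}  LoD={lod:.2f} wt%  LoQ={loq:.2f} wt%")
```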
Curtis, Tyler E; Roeder, Ryan K
2017-10-01
Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.
Arsenyev, P A; Trezvov, V V; Saratovskaya, N V
1997-01-01
This work presents a method that allows the phase composition of calcium hydroxylapatite to be determined from its infrared spectrum. The method uses factor analysis of the spectral data of a calibration set of samples to determine the minimal number of factors required to reproduce the spectra within experimental error. Multiple linear regression is applied to establish a correlation between the factor scores of the calibration standards and their properties. The regression equations can then be used to predict the property value of an unknown sample. A regression model was built for the determination of beta-tricalcium phosphate content in hydroxylapatite, and the quality of the model was estimated statistically. Application of factor analysis to the spectral data increases the accuracy of beta-tricalcium phosphate determination and extends the determination range toward lower concentrations, while reproducibility of the results is retained.
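The described combination of factor analysis and multiple linear regression is essentially principal component regression. In the sketch below, PCA stands in for the factor analysis step, a fixed number of retained factors replaces the paper's error-based criterion, and all spectra are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
# Toy calibration set: 20 IR spectra (500 wavenumbers) with known beta-TCP wt%.
spectra = rng.random((20, 500))
beta_tcp = rng.uniform(0.0, 10.0, size=20)

# Factor analysis step: keep only the factors needed to reproduce the spectra
# within experimental error (here simply a fixed number, for illustration).
pca = PCA(n_components=4).fit(spectra)
scores = pca.transform(spectra)

# Regression of factor scores against the property of interest.
model = LinearRegression().fit(scores, beta_tcp)

unknown = rng.random((1, 500))                     # spectrum of an unknown sample
predicted = model.predict(pca.transform(unknown))  # predicted beta-TCP content
```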
NASA Astrophysics Data System (ADS)
Narita, Y.; Iida, H.; Ebert, S.; Nakamura, T.
1997-12-01
Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for three numerical phantoms for 201Tl. Data were reconstructed with the ordered-subset EM algorithm, including attenuation correction based on noise-less transmission data. The accuracy of the TDCS and TEW scatter corrections was assessed by comparison with the simulated true primary data. The uniform cylindrical phantom simulation demonstrated better quantitative accuracy with TDCS than with TEW (-2.0% vs. 16.7%) and better S/N (6.48 vs. 5.05). A uniform ring myocardial phantom simulation demonstrated better homogeneity with TDCS than TEW in the myocardium; i.e., anterior-to-posterior wall count ratios were 0.99 and 0.76 with TDCS and TEW, respectively. For the MCAT phantom, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
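For reference, the TEW scatter estimate is conventionally formed by trapezoidal interpolation between two narrow windows flanking the photopeak; the window widths in the sketch below are illustrative, not those used in this study.

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_main):
    """Triple-energy-window scatter estimate for the main-window counts:
    trapezoidal interpolation between the two narrow side windows."""
    return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

# Toy numbers: counts in 3-keV side windows around a 20-keV-wide photopeak.
scatter = tew_scatter_estimate(c_lower=120, c_upper=40,
                               w_lower=3.0, w_upper=3.0, w_main=20.0)
primary = 1000 - scatter  # scatter-corrected photopeak counts
print(f"estimated scatter counts: {scatter:.0f}, corrected primary: {primary:.0f}")
```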
Guo, L B; Hao, Z Q; Shen, M; Xiong, W; He, X N; Xie, Z Q; Gao, M; Li, X Y; Zeng, X Y; Lu, Y F
2013-07-29
To improve the accuracy of quantitative analysis in laser-induced breakdown spectroscopy, the plasma produced by a Nd:YAG laser from steel targets was confined by a cavity. A number of elements with low concentrations, such as vanadium (V), chromium (Cr), and manganese (Mn), in the steel samples were investigated. After the optimization of the cavity dimension and laser fluence, significant enhancement factors of 4.2, 3.1, and 2.87 in the emission intensity of V, Cr, and Mn lines, respectively, were achieved at a laser fluence of 42.9 J/cm(2) using a hemispherical cavity (diameter: 5 mm). More importantly, the correlation coefficient of the V I 440.85/Fe I 438.35 nm was increased from 0.946 (without the cavity) to 0.981 (with the cavity); and similar results for Cr I 425.43/Fe I 425.08 nm and Mn I 476.64/Fe I 492.05 nm were also obtained. Therefore, it was demonstrated that the accuracy of quantitative analysis with low concentration elements in steel samples was improved, because the plasma became uniform with spatial confinement. The results of this study provide a new pathway for improving the accuracy of quantitative analysis of LIBS.
NASA Technical Reports Server (NTRS)
Korb, C. L.; Gentry, Bruce M.
1995-01-01
The goal of the Army Research Office (ARO) Geosciences Program is to measure the three dimensional wind field in the planetary boundary layer (PBL) over a measurement volume with a 50 meter spatial resolution and with measurement accuracies of the order of 20 cm/sec. The objective of this work is to develop and evaluate a high vertical resolution lidar experiment using the edge technique for high accuracy measurement of the atmospheric wind field to meet the ARO requirements. This experiment allows the powerful capabilities of the edge technique to be quantitatively evaluated. In the edge technique, a laser is located on the steep slope of a high resolution spectral filter. This produces large changes in measured signal for small Doppler shifts. A differential frequency technique renders the Doppler shift measurement insensitive to both laser and filter frequency jitter and drift. The measurement is also relatively insensitive to the laser spectral width for widths less than the width of the edge filter. Thus, the goal is to develop a system which will yield a substantial improvement in the state of the art of wind profile measurement in terms of both vertical resolution and accuracy and which will provide a unique capability for atmospheric wind studies.
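Schematically, the measurement rests on two textbook relations: the Doppler shift of the backscattered light and the amplification of that shift by the slope of the edge filter. The exact filter design and calibration procedure are not given in the abstract, so the following is shown only for orientation.

```latex
% Line-of-sight wind speed from the Doppler shift of the backscattered light:
\Delta\nu = \frac{2\,v_{\mathrm{LOS}}}{\lambda}
% On the steep edge of a filter with transmission T(\nu), a small shift gives a
% large fractional signal change, set by the edge sensitivity \Theta:
\frac{\Delta I}{I} \simeq \Theta\,\Delta\nu,
\qquad
\Theta = \frac{1}{T(\nu_0)} \left. \frac{dT}{d\nu} \right|_{\nu_0}
```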
Increased genomic prediction accuracy in wheat breeding using a large Australian panel.
Norman, Adam; Taylor, Julian; Tanaka, Emi; Telfer, Paul; Edwards, James; Martinant, Jean-Pierre; Kuchel, Haydn
2017-12-01
Genomic prediction accuracy within a large panel was found to be substantially higher than that previously observed in smaller populations, and also higher than QTL-based prediction. In recent years, genomic selection for wheat breeding has been widely studied, but this has typically been restricted to population sizes under 1000 individuals. To assess its efficacy in germplasm representative of commercial breeding programmes, we used a panel of 10,375 Australian wheat breeding lines to investigate the accuracy of genomic prediction for grain yield, physical grain quality and other physiological traits. To achieve this, the complete panel was phenotyped in a dedicated field trial and genotyped using a custom Axiom™ Affymetrix SNP array. A high-quality consensus map was also constructed, allowing the linkage disequilibrium present in the germplasm to be investigated. Using the complete SNP array, genomic prediction accuracies were found to be substantially higher than those previously observed in smaller populations, and higher than those of prediction approaches using a finite number of selected quantitative trait loci. Multi-trait genetic correlations were also assessed at an additive and residual genetic level, identifying a negative genetic correlation between grain yield and protein as well as a positive genetic correlation between grain size and test weight.
Will it Blend? Visualization and Accuracy Evaluation of High-Resolution Fuzzy Vegetation Maps
NASA Astrophysics Data System (ADS)
Zlinszky, A.; Kania, A.
2016-06-01
Instead of assigning every map pixel to a single class, fuzzy classification includes information on the class assigned to each pixel but also the certainty of this class and the alternative possible classes based on fuzzy set theory. The advantages of fuzzy classification for vegetation mapping are well recognized, but the accuracy and uncertainty of fuzzy maps cannot be directly quantified with indices developed for hard-boundary categorizations. The rich information in such a map is impossible to convey with a single map product or accuracy figure. Here we introduce a suite of evaluation indices and visualization products for fuzzy maps generated with ensemble classifiers. We also propose a way of evaluating classwise prediction certainty with "dominance profiles" visualizing the number of pixels in bins according to the probability of the dominant class, also showing the probability of all the other classes. Together, these data products allow a quantitative understanding of the rich information in a fuzzy raster map both for individual classes and in terms of variability in space, and also establish the connection between spatially explicit class certainty and traditional accuracy metrics. These map products are directly comparable to widely used hard boundary evaluation procedures, support active learning-based iterative classification and can be applied for operational use.
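A dominance profile can be computed directly from the per-pixel class probabilities produced by an ensemble classifier. The sketch below uses synthetic Dirichlet-distributed probabilities, and the binning choices are illustrative rather than those of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
# Ensemble output for a fuzzy map: per-pixel class probabilities (10k px, 5 cls).
probs = rng.dirichlet(alpha=np.ones(5), size=10_000)

dominant = probs.argmax(axis=1)   # the class assigned to each pixel
certainty = probs.max(axis=1)     # probability of that dominant class

# "Dominance profile" for class k: pixel counts over dominant-probability bins,
# which can be drawn together with the mean probabilities of the other classes.
k = 2
sel = dominant == k
counts, edges = np.histogram(certainty[sel], bins=np.linspace(0.2, 1.0, 9))
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"P(dominant) = {lo:.2f}-{hi:.2f}: {n} pixels")
```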
NASA Astrophysics Data System (ADS)
Traganos, D.; Cerra, D.; Reinartz, P.
2017-05-01
Seagrasses are one of the most productive and widespread yet threatened coastal ecosystems on Earth. Despite their importance, they are declining due to various threats, which are mainly anthropogenic. Lack of data on their distribution hinders any effort to rectify this decline through effective detection, mapping and monitoring. Remote sensing can mitigate this data gap by allowing retrospective quantitative assessment of seagrass beds over large and remote areas. In this paper, we evaluate the quantitative application of Planet high resolution imagery for the detection of seagrasses in the Thermaikos Gulf, NW Aegean Sea, Greece. The low Signal-to-Noise Ratio (SNR), which characterizes spectral bands at shorter wavelengths, prompts the application of Unmixing-based denoising (UBD) as a pre-processing step for seagrass detection. A total of 15 spectral-temporal patterns are extracted from a Planet image time series to restore the corrupted blue and green bands in the processed Planet image. Subsequently, we implement Lyzenga's empirical water column correction and Support Vector Machines (SVM) to evaluate the quantitative benefits of denoising. Denoising aids detection of the Posidonia oceanica seagrass species by increasing its producer and user accuracy by 31.7% and 10.4%, respectively, with a respective increase in its Kappa value from 0.3 to 0.48. In the near future, our objective is to improve accuracies in seagrass detection by applying more sophisticated, analytical water column correction algorithms to Planet imagery, developing time- and cost-effective monitoring of seagrass distribution that will enable in turn the effective management and conservation of these highly valuable and productive ecosystems.
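Lyzenga's correction referred to above is usually written as a depth-invariant index per band pair. This is the textbook form, with deep-water radiances and attenuation-coefficient ratios estimated from the imagery itself; the exact variant used in the paper may differ in detail.

```latex
% Depth-invariant index for band pair (i, j): after subtracting the deep-water
% radiance L_{s}, the log-ratio removes the exponential depth dependence.
X_{ij} = \ln\!\left(L_i - L_{s,i}\right) - \frac{k_i}{k_j}\,\ln\!\left(L_j - L_{s,j}\right)
% k_i / k_j is the ratio of effective attenuation coefficients, conventionally
% estimated from the covariance of log radiances over a uniform sandy bottom.
```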
Lasnon, Charline; Quak, Elske; Briand, Mélanie; Gu, Zheng; Louis, Marie-Hélène; Aide, Nicolas
2013-01-17
The use of iodinated contrast media in small-animal positron emission tomography (PET)/computed tomography (CT) could improve anatomic referencing and tumor delineation but may introduce inaccuracies in the attenuation correction of the PET images. This study evaluated the diagnostic performance and accuracy of quantitative values in contrast-enhanced small-animal PET/CT (CEPET/CT) as compared to unenhanced small animal PET/CT (UEPET/CT). Firstly, a NEMA NU 4-2008 phantom (filled with 18F-FDG or 18F-FDG plus contrast media) and a homemade phantom, mimicking an abdominal tumor surrounded by water or contrast media, were used to evaluate the impact of iodinated contrast media on the image quality parameters and accuracy of quantitative values for a pertinent-sized target. Secondly, two studies in 22 abdominal tumor-bearing mice and rats were performed. The first animal experiment studied the impact of a dual-contrast media protocol, comprising the intravenous injection of a long-lasting contrast agent mixed with 18F-FDG and the intraperitoneal injection of contrast media, on tumor delineation and the accuracy of quantitative values. The second animal experiment compared the diagnostic performance and quantitative values of CEPET/CT versus UEPET/CT by sacrificing the animals after the tracer uptake period and imaging them before and after intraperitoneal injection of contrast media. There was minimal impact on IQ parameters (%SDunif and spillover ratios in air and water) when the NEMA NU 4-2008 phantom was filled with 18F-FDG plus contrast media. In the homemade phantom, measured activity was similar to true activity (-0.02%) and overestimated by 10.30% when vials were surrounded by water or by an iodine solution, respectively. The first animal experiment showed excellent tumor delineation and a good correlation between small-animal (SA)-PET and ex vivo quantification (r2 = 0.87, P < 0.0001). The second animal experiment showed a good correlation between CEPET/CT and UEPET/CT quantitative values (r2 = 0.99, P < 0.0001). Receiver operating characteristic analysis demonstrated better diagnostic accuracy of CEPET/CT versus UEPET/CT (senior researcher, area under the curve (AUC) 0.96 versus 0.77, P = 0.004; junior researcher, AUC 0.78 versus 0.58, P = 0.004). The use of iodinated contrast media for small-animal PET imaging significantly improves tumor delineation and diagnostic performance, without significant alteration of SA-PET quantitative accuracy and NEMA NU 4-2008 IQ parameters.
Medvedovici, Andrei; Albu, Florin; Farca, Alexandru; David, Victor
2004-01-27
A new method for the determination of 2-[(dimethylamino)methyl]cyclohexanone (DAMC) in Tramadol (as active substance or active ingredient in pharmaceutical formulations) is described. The method is based on the derivatisation of 2-[(dimethylamino)methyl]cyclohexanone with 2,4-dinitrophenylhydrazine (2,4-DNPH) in acidic conditions followed by a reversed-phase liquid chromatographic separation with UV detection. The method is simple, selective, quantitative and allows the determination of 2-[(dimethylamino)methyl]cyclohexanone at the low ppm level. The proposed method was validated with respect to selectivity, precision, linearity, accuracy and robustness.
[Radiocarbon dating of pollen and spores in wedge ice from Iamal and Kolyma].
Vasil'chuk, A K
2004-01-01
Radiocarbon dating of pollen concentrate from late Pleistocene syngenetic wedge ice was carried out using acceleration mass spectrometry (AMS) in Seyakha and Bizon sections. Comparison of the obtained dating with palynological analysis and AMS radiocarbon dating previously obtained for other organic fractions of the same samples allowed us to evaluate accuracy of dating of different fractions. Quantitative tests for data evaluation were considered in terms of possible autochthonous or allochthonous accumulation of the material on the basis of pre-Pleistocene pollen content in these samples. Paleoecological information content of pollen spectra from late Pleistocene syngenetic wedge ice was evaluated.
Ketkov, Sergey Yu; Markin, Gennady V; Tzeng, Sheng Y; Tzeng, Wen B
2016-03-24
Mass-analyzed threshold ionization spectra of jet-cooled [(η(6)-PhMe)(η(6)-PhH)Cr] and [(η(6)-Ph2)(η(6)-PhH)Cr] reveal with unprecedented accuracy the effects of methyl and phenyl groups on the electronic structure of bis(η(6)-benzene)chromium. These "pure" substituent effects allow quantitative experimental determination of the ionization energy changes caused by the mutual substituent influence in bisarene systems. Two types of such influence have been revealed for the first time in bis(η(6)-toluene)chromium. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Zhai, Juping; Ding, Mengyuan; Yang, Tianjie; Zuo, Bin; Weng, Zhen; Zhao, Yunxiao; He, Jun; Wu, Qingyu; Ruan, Changgeng; He, Yang
2017-10-23
Platelet autoantibody detection is critical for immune thrombocytopenia (ITP) diagnosis and prognosis. Therefore, we aimed to establish a quantitative flow cytometric immunobead assay (FCIA) for ITP platelet autoantibody evaluation. Capture microbeads coupled with anti-GPIX, -GPIb, -GPIIb, -GPIIIa and P-selectin antibodies were used to bind the platelet-bound autoantibodies complex generated from plasma samples of 250 ITP patients, 163 non-ITP patients and 243 healthy controls, a fluorescein isothiocyanate (FITC)-conjugated secondary antibody was the detector reagent and mean fluorescence intensity (MFI) signals were recorded by flow cytometry. Intra- and inter-assay variations of the quantitative FCIA assay were assessed. Comparisons of the specificity, sensitivity and accuracy between the quantitative and qualitative FCIA or monoclonal antibody immobilization of platelet antigen (MAIPA) assays were performed. Finally, the treatment process was monitored by our quantitative FCIA in 8 newly diagnosed ITP patients. The coefficients of variation (CV) of the quantitative FCIA assay were respectively 9.4, 3.8, 5.4, 5.1 and 5.8% for anti-GPIX, -GPIb, -GPIIIa, -GPIIb and -P-selectin autoantibodies. Elevated levels of autoantibodies against platelet glycoproteins GPIX, GPIb, GPIIIa, GPIIb and P-selectin were detected by our quantitative FCIA in ITP patients compared to non-ITP patients or healthy controls. The sensitivity, specificity and accuracy of our quantitative assay were respectively 73.13, 81.98 and 78.65% when combining all 5 autoantibodies, while the sensitivity, specificity and accuracy of the MAIPA assay were respectively 41.46, 90.41 and 72.81%. A quantitative FCIA assay was established. Reduced levels of platelet autoantibodies could be confirmed by our quantitative FCIA in ITP patients after corticosteroid treatment. Our quantitative assay is not only good for ITP diagnosis but also for ITP treatment monitoring.
Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong
2017-12-01
Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial component. The raw spectrum consists of signals from the sample and scattering and other random disturbances that can critically influence the quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
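One way to realize DE-based wavelength selection is to let each continuous DE gene act as an on/off gate for a wavelength and score candidate subsets by cross-validated regression error. The sketch below does this with scipy's `differential_evolution` on synthetic data; the encoding, regression model, and scoring are assumptions, since the abstract does not spell out the objective function.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_samples, n_wavelengths = 40, 30
spectra = rng.random((n_samples, n_wavelengths))   # toy THz absorbance matrix
conc = (spectra[:, 5] + 0.5 * spectra[:, 12]
        + 0.05 * rng.standard_normal(n_samples))   # toy component concentration

def objective(w):
    """Decode a continuous DE vector into a wavelength mask; score by CV error."""
    mask = w > 0.5
    if mask.sum() < 2:
        return 1e3  # penalise empty or degenerate selections
    score = cross_val_score(LinearRegression(), spectra[:, mask], conc,
                            scoring="neg_root_mean_squared_error", cv=5)
    return -score.mean()

result = differential_evolution(objective, bounds=[(0, 1)] * n_wavelengths,
                                maxiter=20, popsize=8, seed=0, polish=False)
selected = np.flatnonzero(result.x > 0.5)
print("selected wavelength indices:", selected)
```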
Jones, Barry R; Schultz, Gary A; Eckstein, James A; Ackermann, Bradley L
2012-10-01
Quantitation of biomarkers by LC-MS/MS is complicated by the presence of endogenous analytes. This challenge is most commonly overcome by calibration using an authentic standard spiked into a surrogate matrix devoid of the target analyte. A second approach involves use of a stable-isotope-labeled standard as a surrogate analyte to allow calibration in the actual biological matrix. For both methods, parallelism between calibration standards and the target analyte in biological matrix must be demonstrated in order to ensure accurate quantitation. In this communication, the surrogate matrix and surrogate analyte approaches are compared for the analysis of five amino acids in human plasma: alanine, valine, methionine, leucine and isoleucine. In addition, methodology based on standard addition is introduced, which enables a robust examination of parallelism in both surrogate analyte and surrogate matrix methods prior to formal validation. Results from additional assays are presented to introduce the standard-addition methodology and to highlight the strengths and weaknesses of each approach. For the analysis of amino acids in human plasma, comparable precision and accuracy were obtained by the surrogate matrix and surrogate analyte methods. Both assays were well within tolerances prescribed by regulatory guidance for validation of xenobiotic assays. When stable-isotope-labeled standards are readily available, the surrogate analyte approach allows for facile method development. By comparison, the surrogate matrix method requires greater up-front method development; however, this deficit is offset by the long-term advantage of simplified sample analysis.
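The standard-addition arithmetic is a linear fit followed by an extrapolation; the sketch below shows it with toy numbers. The parallelism check mentioned above amounts to comparing slopes, as noted in the final comment.

```python
import numpy as np

# Responses measured after spiking known amounts of standard into the sample.
added = np.array([0.0, 5.0, 10.0, 20.0])        # added concentration, uM
response = np.array([1.20, 1.95, 2.71, 4.18])   # instrument response

slope, intercept = np.polyfit(added, response, 1)
# Extrapolating the line back to zero response, the x-intercept magnitude
# (intercept/slope) gives the endogenous concentration in the sample.
endogenous = intercept / slope
print(f"endogenous concentration = {endogenous:.2f} uM")
# Parallelism check: this slope should agree, within tolerance, with the slope
# of the surrogate-matrix (or surrogate-analyte) calibration line.
```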
Zhan, Xue-yan; Zhao, Na; Lin, Zhao-zhou; Wu, Zhi-sheng; Yuan, Rui-juan; Qiao, Yan-jiang
2014-12-01
The appropriate algorithm for calibration set selection is one of the key technologies for a good NIR quantitative model. Different algorithms for calibration set selection exist, such as the Random Sampling (RS) algorithm, the Conventional Selection (CS) algorithm, the Kennard-Stone (KS) algorithm and the Sample set Partitioning based on joint x-y distances (SPXY) algorithm. However, systematic comparisons between these algorithms are lacking. NIR quantitative models to determine the asiaticoside content in Centella total glucosides were established in the present paper, for which 7 indexes were classified and selected, and the effects of the CS, KS and SPXY algorithms for calibration set selection on the accuracy and robustness of the NIR quantitative models were investigated. The accuracy indexes of NIR quantitative models with the calibration set selected by the SPXY algorithm were significantly different from those with the calibration set selected by the CS or KS algorithm, while the robustness indexes, such as RMSECV and |RMSEP-RMSEC|, were not significantly different. Therefore, the SPXY algorithm for calibration set selection could improve the predictive accuracy of NIR quantitative models to determine the asiaticoside content in Centella total glucosides, and has no significant effect on the robustness of the models, which provides a reference for determining the appropriate algorithm for calibration set selection when NIR quantitative models are established for solid systems of traditional Chinese medicine.
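For concreteness, a minimal Kennard-Stone implementation is sketched below; SPXY follows the same greedy scheme but augments the x-space distance with a normalized y-space term. The data are synthetic.

```python
import numpy as np

def kennard_stone(X, n_select):
    """Classic Kennard-Stone: start from the two most distant samples, then
    repeatedly add the sample farthest from its nearest already-selected one."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    selected = list(np.unravel_index(d.argmax(), d.shape))
    while len(selected) < n_select:
        remaining = [i for i in range(len(X)) if i not in selected]
        nearest = d[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(nearest.argmax())])
    return np.array(selected)

X = np.random.default_rng(5).random((50, 6))   # toy NIR feature matrix
calibration_idx = kennard_stone(X, 12)          # indices of the calibration set
print(calibration_idx)
```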
Occupational exposure decisions: can limited data interpretation training help improve accuracy?
Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul
2009-06-01
Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment and other programs designed to protect workers. A desktop study was performed using videos, task information and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered to participants where they were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants were presented with an exposure data interpretation or rule of thumb training which included a simple set of rules for estimating 95th percentiles for small data sets from a log-normal population. DIT was given to each participant before and after the rule of thumb training. Results of each DIT and qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments for all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DITs and quantitative judgments were significantly better than random chance and much improved by the rule of thumb training. In addition, the rule of thumb training reduced the amount of bias in the DITs and quantitative judgments. The mean DIT % correct scores increased from 47 to 64% after the rule of thumb training (P < 0.001). The accuracy for quantitative desktop judgments increased from 43 to 63% correct after the rule of thumb training (P < 0.001). The rule of thumb training did not significantly impact accuracy for qualitative desktop judgments. The finding that even some simple statistical rules of thumb improve judgment accuracy significantly suggests that hygienists need to routinely use statistical tools while making exposure judgments using monitoring data.
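A plausible reading of such a rule of thumb is the standard parametric estimate of the 95th percentile of a log-normal exposure distribution, exp(mean + 1.645 × sd) of the log-transformed data; the study's exact rule is not reproduced in the abstract, so the snippet below is only the estimate such rules approximate.

```python
import numpy as np

def lognormal_x95(samples):
    """Parametric 95th-percentile estimate for a log-normal exposure
    distribution from a small sample: exp(mean + 1.645*sd) of the log data."""
    logs = np.log(samples)
    return float(np.exp(logs.mean() + 1.645 * logs.std(ddof=1)))

shift_measurements = np.array([0.12, 0.35, 0.22, 0.08])  # mg/m3, n = 4
print(f"estimated X95 = {lognormal_x95(shift_measurements):.2f} mg/m3")
```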
Development of quantitative screen for 1550 chemicals with GC-MS.
Bergmann, Alan J; Points, Gary L; Scott, Richard P; Wilson, Glenn; Anderson, Kim A
2018-05-01
With hundreds of thousands of chemicals in the environment, effective monitoring requires high-throughput analytical techniques. This paper presents a quantitative screening method for 1550 chemicals based on statistical modeling of responses with identification and integration performed using deconvolution reporting software. The method was evaluated with representative environmental samples. We tested biological extracts, low-density polyethylene, and silicone passive sampling devices spiked with known concentrations of 196 representative chemicals. A multiple linear regression (R² = 0.80) was developed with molecular weight, logP, polar surface area, and fractional ion abundance to predict chemical responses within a factor of 2.5. Linearity beyond the calibration had R² > 0.97 for three orders of magnitude. Median limits of quantitation were estimated to be 201 pg/μL (1.9× standard deviation). The number of detected chemicals and the accuracy of quantitation were similar for environmental samples and standard solutions. To our knowledge, this is the most precise method for the largest number of semi-volatile organic chemicals lacking authentic standards. Accessible instrumentation and software make this method cost effective in quantifying a large, customizable list of chemicals. When paired with silicone wristband passive samplers, this quantitative screen will be very useful for epidemiology where binning of concentrations is common. Graphical abstract A multiple linear regression of chemical responses measured with GC-MS allowed quantitation of 1550 chemicals in samples such as silicone wristbands.
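The response model described above amounts to a four-descriptor linear regression. The sketch below reproduces that structure on synthetic data; the coefficients, the log-scale response, and the descriptor ranges are invented for illustration and are not the paper's fitted values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
# Toy descriptor table for the calibrated chemicals:
# [molecular weight, logP, polar surface area, fractional ion abundance]
X = np.c_[rng.uniform(100, 500, 200), rng.uniform(-1, 8, 200),
          rng.uniform(0, 150, 200), rng.uniform(0.05, 1.0, 200)]
log_response = (0.004 * X[:, 0] + 0.3 * X[:, 1]
                - 0.002 * X[:, 2] + 1.2 * X[:, 3]
                + rng.normal(0, 0.3, 200))

model = LinearRegression().fit(X, log_response)

# Predicted response factor for a chemical lacking an authentic standard:
novel = np.array([[320.0, 4.2, 60.0, 0.4]])
print(model.predict(novel))
```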
Ostovaneh, Mohammad R; Vavere, Andrea L; Mehra, Vishal C; Kofoed, Klaus F; Matheson, Matthew B; Arbab-Zadeh, Armin; Fujisawa, Yasuko; Schuijf, Joanne D; Rochitte, Carlos E; Scholte, Arthur J; Kitagawa, Kakuya; Dewey, Marc; Cox, Christopher; DiCarli, Marcelo F; George, Richard T; Lima, Joao A C
To determine the diagnostic accuracy of semi-automatic quantitative metrics compared to expert reading for interpretation of computed tomography perfusion (CTP) imaging. The CORE320 multicenter diagnostic accuracy clinical study enrolled patients between 45 and 85 years of age who were clinically referred for invasive coronary angiography (ICA). Computed tomography angiography (CTA), CTP, single photon emission computed tomography (SPECT), and ICA images were interpreted manually in blinded core laboratories by two experienced readers. Additionally, eight quantitative CTP metrics as continuous values were computed semi-automatically from myocardial and blood attenuation and were combined using logistic regression to derive a final quantitative CTP metric score. For the reference standard, hemodynamically significant coronary artery disease (CAD) was defined as a quantitative ICA stenosis of 50% or greater and a corresponding perfusion defect by SPECT. Diagnostic accuracy was determined by area under the receiver operating characteristic curve (AUC). Of the total 377 included patients, 66% were male, median age was 62 (IQR: 56, 68) years, and 27% had prior myocardial infarction. In patient-based analysis, the AUC (95% CI) for combined CTA-CTP expert reading and combined CTA-CTP semi-automatic quantitative metrics was 0.87 (0.84-0.91) and 0.86 (0.83-0.9), respectively. In vessel-based analyses the AUCs were 0.85 (0.82-0.88) and 0.84 (0.81-0.87), respectively. No significant difference in AUC was found between combined CTA-CTP expert reading and CTA-CTP semi-automatic quantitative metrics in patient-based or vessel-based analyses (p > 0.05 for all). Combined CTA-CTP semi-automatic quantitative metrics are as accurate as CTA-CTP expert reading to detect hemodynamically significant CAD. Copyright © 2018 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
Akgul Kalkan, Esin; Sahiner, Mehtap; Ulker Cakir, Dilek; Alpaslan, Duygu; Yilmaz, Selehattin
2016-01-01
Using high-performance liquid chromatography (HPLC) and 2,4-dinitrophenylhydrazine (2,4-DNPH) as a derivatizing reagent, an analytical method was developed for the quantitative determination of acetone in human blood. The determination was carried out at 365 nm using an ultraviolet-visible (UV-Vis) diode array detector (DAD). For acetone as its 2,4-dinitrophenylhydrazone derivative, a good separation was achieved with a ThermoAcclaim C18 column (15 cm × 4.6 mm × 3 μm) at a retention time (tR) of 12.10 min and a flow rate of 1 mL min−1 using a (methanol/acetonitrile) water elution gradient. The methodology is simple, rapid, sensitive, and of low cost, exhibits good reproducibility, and allows the analysis of acetone in biological fluids. A calibration curve was obtained for acetone using its standard solutions in acetonitrile. Quantitative analysis of acetone in human blood was successfully carried out using this calibration graph. The applied method was validated in terms of linearity, limit of detection and quantification, accuracy, and precision. We also present acetone as a useful tool for the HPLC-based metabolomic investigation of endogenous metabolism and quantitative clinical diagnostic analysis.
QUESP and QUEST revisited - fast and accurate quantitative CEST experiments.
Zaiss, Moritz; Angelovski, Goran; Demetriou, Eleni; McMahon, Michael T; Golay, Xavier; Scheffler, Klaus
2018-03-01
Chemical exchange saturation transfer (CEST) NMR or MRI experiments allow detection of low concentrated molecules with enhanced sensitivity via their proton exchange with the abundant water pool. Be it endogenous metabolites or exogenous contrast agents, an exact quantification of the actual exchange rate is required to design optimal pulse sequences and/or specific sensitive agents. Refined analytical expressions allow deeper insight and improvement of accuracy for common quantification techniques. The accuracy of standard quantification methodologies, such as quantification of exchange rate using varying saturation power or varying saturation time, is improved especially for the case of nonequilibrium initial conditions and weak labeling conditions, meaning the saturation amplitude is smaller than the exchange rate (γB 1 < k). The improved analytical 'quantification of exchange rate using varying saturation power/time' (QUESP/QUEST) equations allow for more accurate exchange rate determination, and provide clear insights on the general principles to execute the experiments and to perform numerical evaluation. The proposed methodology was evaluated on the large-shift regime of paramagnetic chemical-exchange-saturation-transfer agents using simulated data and data of the paramagnetic Eu(III) complex of DOTA-tetraglycineamide. The refined formulas yield improved exchange rate estimation. General convergence intervals of the methods that would apply for smaller shift agents are also discussed. Magn Reson Med 79:1708-1721, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
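For orientation, a commonly quoted simplified steady-state form of the QUESP relation is shown below; the paper's refined expressions correct formulas of this kind for non-equilibrium initial conditions and weak labeling, so treat this only as the baseline being improved upon.

```latex
% Simplified two-pool steady-state QUESP form: the proton-transfer ratio as a
% function of saturation amplitude \omega_1 = \gamma B_1, exchange rate k,
% relative labile-pool fraction f_b, and water relaxation rate R_{1w}:
\mathrm{PTR}_{ss}(\omega_1) \approx
\frac{f_b\,k}{R_{1w} + f_b\,k}\cdot\frac{\omega_1^{2}}{\omega_1^{2} + k^{2}}
% In the weak-labeling regime \gamma B_1 < k the second factor is small, which
% is exactly where refined QUESP/QUEST expressions improve the fitted k.
```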
Devolatilization Analysis in a Twin Screw Extruder by using the Flow Analysis Network (FAN) Method
NASA Astrophysics Data System (ADS)
Tomiyama, Hideki; Takamoto, Seiji; Shintani, Hiroaki; Inoue, Shigeki
We derived theoretical formulas for three mechanisms of devolatilization in a twin screw extruder: flash, surface refreshment and forced expansion. The model for flash devolatilization is based on the equation of equilibrium concentration, which shows that volatiles break off from the polymer when it is relieved from a high pressure condition. For surface refreshment devolatilization, we applied Latinen's model to allow estimation of polymer behavior in the unfilled screw conveying condition. Forced expansion devolatilization is based on the expansion theory in which foams are generated under reduced pressure and volatiles diffuse through the exposed surface layer after mixing with the injected devolatilization agent. Based on these models, we developed twin-screw extrusion simulation software using the FAN method, which allows us to quantitatively estimate volatile concentration and polymer temperature with high accuracy in an actual multi-vent extrusion process for LDPE + n-hexane.
Modeling of solid-state and excimer laser processes for 3D micromachining
NASA Astrophysics Data System (ADS)
Holmes, Andrew S.; Onischenko, Alexander I.; George, David S.; Pedder, James E.
2005-04-01
An efficient simulation method has recently been developed for multi-pulse ablation processes. This is based on pulse-by-pulse propagation of the machined surface according to one of several phenomenological models for the laser-material interaction. The technique allows quantitative predictions to be made about the surface shapes of complex machined parts, given only a minimal set of input data for parameter calibration. In the case of direct-write machining of polymers or glasses with ns-duration pulses, this data set can typically be limited to the surface profiles of a small number of standard test patterns. The use of phenomenological models for the laser-material interaction, calibrated by experimental feedback, allows fast simulation, and can achieve a high degree of accuracy for certain combinations of material, laser and geometry. In this paper, the capabilities and limitations of the approach are discussed, and recent results are presented for structures machined in SU8 photoresist.
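A minimal version of such pulse-by-pulse surface propagation, under an assumed Beer-Lambert phenomenological model with a Gaussian fluence profile, looks as follows. All parameter values are illustrative, and real simulators advance the surface along its local normal and recompute the fluence from the evolving geometry.

```python
import numpy as np

# Phenomenological pulse-by-pulse surface propagation (one common model form):
# per-pulse etch depth follows a Beer-Lambert law above the ablation threshold.
alpha = 5.0   # effective absorption coefficient, 1/um   (illustrative)
F_th = 0.5    # ablation threshold fluence, J/cm^2       (illustrative)

x = np.linspace(-50, 50, 501)                  # um, across the machined feature
fluence = 2.0 * np.exp(-(x / 20.0) ** 2)       # Gaussian beam fluence profile

depth = np.zeros_like(x)
for _ in range(100):                           # 100 pulses at a fixed site
    etch = np.where(fluence > F_th, np.log(fluence / F_th) / alpha, 0.0)
    depth += etch                              # propagate the surface pulse by pulse

print(f"max depth after 100 pulses: {depth.max():.1f} um")
```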
Towards quantitative classification of folded proteins in terms of elementary functions.
Hu, Shuangwei; Krokhotin, Andrei; Niemi, Antti J; Peng, Xubiao
2011-04-01
A comparative classification scheme provides a good basis for several approaches to understand proteins, including prediction of relations between their structure and biological function. But it remains a challenge to combine a classification scheme that describes a protein starting from its well-organized secondary structures and often involves direct human involvement, with an atomary-level physics-based approach where a protein is fundamentally nothing more than an ensemble of mutually interacting carbon, hydrogen, oxygen, and nitrogen atoms. In order to bridge these two complementary approaches to proteins, conceptually novel tools need to be introduced. Here we explain how an approach toward geometric characterization of entire folded proteins can be based on a single explicit elementary function that is familiar from nonlinear physical systems where it is known as the kink soliton. Our approach enables the conversion of hierarchical structural information into a quantitative form that allows for a folded protein to be characterized in terms of a small number of global parameters that are in principle computable from atomary-level considerations. As an example we describe in detail how the native fold of the myoglobin 1M6C emerges from a combination of kink solitons with a very high atomary-level accuracy. We also verify that our approach describes longer loops and loops connecting α helices with β strands, with the same overall accuracy. ©2011 American Physical Society
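The elementary function in question is the familiar topological kink profile; written for a scalar order parameter it reads as below. This is the prototypical φ⁴-model form, shown for orientation rather than the paper's exact discretized expression.

```latex
% The prototypical kink soliton of the \phi^4 model:
u(x) = m \tanh\!\big(m\,(x - x_0)\big)
% In the protein application, the kink plays the role of a localized backbone
% (curvature) profile; a folded protein is assembled by joining several such
% kinks, each contributing a small set of global parameters (m, x_0, ...).
```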
Jézéquel, Tangi; Silvestre, Virginie; Dinis, Katy; Giraudeau, Patrick; Akoka, Serge
2018-04-01
Isotope ratio monitoring by 13C NMR spectrometry (irm-13C NMR) provides the complete 13C intramolecular position-specific composition at natural abundance. It represents a powerful tool to track the (bio)chemical pathway which has led to the synthesis of targeted molecules, since it allows Position-specific Isotope Analysis (PSIA). Due to the very small range of variation of 13C natural abundance values (50‰), irm-13C NMR requires a 1‰ accuracy and thus highly quantitative analysis by 13C NMR. Until now, the conventional strategy to determine the position-specific abundance xi relies on the combination of irm-MS (isotope ratio monitoring Mass Spectrometry) and 13C quantitative NMR. However, this approach presents a serious drawback since it relies on two different techniques and requires measuring separately the signal of all the carbons of the analyzed compound, which is not always possible. To circumvent this constraint, we recently proposed a new methodology to perform 13C isotopic analysis using an internal reference method and relying on NMR only. The method combines a highly quantitative 1H NMR pulse sequence (named DWET) with a 13C isotopic NMR measurement. However, the recently published DWET sequence is unsuited for samples with short T1, which forms a serious limitation for irm-13C NMR experiments where a relaxing agent is added. In this context, we suggest two variants of the DWET called Multi-WET and Profiled-WET, developed and optimized to reach the same accuracy of 1‰ with a better immunity towards T1 variations. Their performance is evaluated on the determination of the 13C isotopic profile of vanillin. Both pulse sequences show a 1‰ accuracy with an increased robustness to pulse miscalibrations compared to the initial DWET method. This constitutes a major advance in the context of irm-13C NMR since it is now possible to perform isotopic analysis with high relaxing agent concentrations, leading to a strong reduction of the overall experiment time. Copyright © 2018 Elsevier Inc. All rights reserved.
Nijran, Kuldip S; Houston, Alex S; Fleming, John S; Jarritt, Peter H; Heikkinen, Jari O; Skrypniuk, John V
2014-07-01
In this second UK audit of quantitative parameters obtained from renography, phantom simulations were used in cases in which the 'true' values could be estimated, allowing the accuracy of the parameters measured to be assessed. A renal physical phantom was used to generate a set of three phantom simulations (six kidney functions) acquired on three different gamma camera systems. A total of nine phantom simulations and three real patient studies were distributed to UK hospitals participating in the audit. Centres were asked to provide results for the following parameters: relative function and time-to-peak (whole kidney and cortical region). As with previous audits, a questionnaire collated information on methodology. Errors were assessed as the root mean square deviation from the true value. Sixty-one centres responded to the audit, with some hospitals providing multiple sets of results. Twenty-one centres provided a complete set of parameter measurements. Relative function and time-to-peak showed a reasonable degree of accuracy and precision in most UK centres. The overall average root mean squared deviation of the results for (i) the time-to-peak measurement for the whole kidney and (ii) the relative function measurement from the true value was 7.7 and 4.5%, respectively. These results showed a measure of consistency in the relative function and time-to-peak that was similar to the results reported in a previous renogram audit by our group. Analysis of audit data suggests a reasonable degree of accuracy in the quantification of renography function using relative function and time-to-peak measurements. However, it is reasonable to conclude that the objectives of the audit could not be fully realized because of the limitations of the mechanical phantom in providing true values for renal parameters.
Quantitative phase imaging to improve the diagnostic accuracy of urine cytology.
Pham, Hoa V; Pantanowitz, Liron; Liu, Yang
2016-09-01
A definitive diagnosis of urothelial carcinoma in urine cytology is often challenging and subjective. Many urine cytology samples receive an indeterminate diagnosis. Ancillary techniques such as fluorescence in situ hybridization (FISH) have been used to improve the diagnostic sensitivity, but FISH is not approved as a routine screening test, and the complex fluorescent staining protocol also limits its widespread clinical use. Quantitative phase imaging (QPI) is an emerging technology allowing accurate measurements of the single-cell dry mass. This study was undertaken to explore the ability of QPI to improve the diagnostic accuracy of urine cytology for malignancy. QPI was performed on unstained, ThinPrep-prepared urine cytology slides from 28 patients with 4 categories of cytological diagnoses (negative, atypical, suspicious, and positive for malignancy). The nuclear/cell dry mass, the entropy, and the nucleus-to-cell mass ratio were calculated for several hundred cells for each patient, and they were then correlated with the follow-up diagnoses. The nuclear mass and nuclear mass entropy of urothelial cells showed significant differences between negative and positive groups. These data showed a progressive increase from patients with negative diagnosis, to patients with atypical/suspicious and positive cytologic diagnosis. Most importantly, among the patients in the atypical and suspicious diagnosis, the nuclear mass and its entropy were significantly higher for those patients with a follow-up diagnosis of malignancy than those patients without a subsequent follow-up diagnosis of malignancy. QPI shows potential for improving the diagnostic accuracy of urine cytology, especially for indeterminate cases, and should be further evaluated as an ancillary test for urine cytology. Cancer Cytopathol 2016;124:641-50. © 2016 American Cancer Society. © 2016 American Cancer Society.
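Single-cell dry mass is obtained from the integrated phase via the standard refractive-increment relation m = λ/(2πγ) ∫φ dA. The sketch below uses the textbook value γ ≈ 0.2 mL/g for protein, which may differ from the constant used in the study; the phase map is a random stand-in.

```python
import numpy as np

def dry_mass_pg(phase_map, pixel_area_um2, wavelength_um=0.55, gamma_ml_per_g=0.2):
    """Cell dry mass from a quantitative phase image:
    m = lambda / (2*pi*gamma) * integral(phi dA).
    With lambda in um, area in um^2, and gamma in mL/g (numerically equal to
    um^3/pg), the result comes out in picograms."""
    phi_integral = phase_map.sum() * pixel_area_um2          # rad * um^2
    return wavelength_um / (2 * np.pi * gamma_ml_per_g) * phi_integral

phase = np.random.default_rng(7).random((64, 64))  # stand-in for one cell's phase map
print(f"dry mass ~ {dry_mass_pg(phase, pixel_area_um2=0.1):.1f} pg")
```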
NASA Astrophysics Data System (ADS)
Chen, Alvin U.; Basaran, Osman A.
2000-11-01
Drop formation from a capillary --- dripping mode --- or an ink jet nozzle --- drop-on-demand (DOD) mode --- falls into a class of scientifically challenging yet practically useful free surface flows that exhibit a finite time singularity, i.e. the breakup of an initially single liquid mass into two or more fragments. While computational tools to model such problems have been developed recently, they lack the accuracy needed to quantitatively predict all the dynamics observed in experiments. Here we present a new finite element method (FEM) based on a robust algorithm for elliptic mesh generation and remeshing to handle extremely large interface deformations. The new algorithm allows continuation of computations beyond the first singularity to track fates of both primary and any satellite drops. The accuracy of the computations is demonstrated by comparison of simulations with experimental measurements made possible with an ultra high-speed digital imager capable of recording 100 million frames per second.
NASA Astrophysics Data System (ADS)
Sadler, Karen L.
2009-04-01
The purpose of this study was to quantitatively examine the impact of third-party support service providers on the quality of science information available to deaf students in regular science classrooms. Three different videotapes that were developed by NASA for high school science classrooms were selected for the study, allowing for different concepts and vocabulary to be examined. The focus was on the accuracy of translation as measured by the number of key science words included in the transcripts (captions) or videos (interpreted). Data were collected via transcripts completed by CART (computer assisted real-time captionists) or through videos of sign language interpreters. All participants were required to listen to and translate these NASA educational videos with no prior experience with this information so as not to influence their delivery. CART personnel using captions were found to be significantly more accurate in the delivery of science words as compared to the sign language interpreters in this study.
A GATE evaluation of the sources of error in quantitative 90Y PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strydhorst, Jared
Purpose: Accurate reconstruction of the dose delivered by 90Y microspheres using a postembolization PET scan would permit the establishment of more accurate dose–response relationships for treatment of hepatocellular carcinoma with 90Y. However, the quality of the PET data obtained is compromised by several factors, including poor count statistics and a very high random fraction. This work uses Monte Carlo simulations to investigate what impact factors other than low count statistics have on the quantification of 90Y PET. Methods: PET acquisitions of two phantoms, a NEMA PET phantom and the NEMA IEC PET body phantom, containing either 90Y or 18F were simulated using GATE. Simulated projections were created with subsets of the simulation data allowing the contributions of random, scatter, and LSO background to be independently evaluated. The simulated projections were reconstructed using the commercial software for the simulated scanner, and the quantitative accuracy of the reconstruction and the contrast recovery of the reconstructed images were evaluated. Results: The quantitative accuracy of the 90Y reconstructions was not strongly influenced by the high random fraction present in the projection data, and the activity concentration was recovered to within 5% of the known value. The contrast recovery measured for simulated 90Y data was slightly poorer than that for simulated 18F data with similar count statistics. However, the degradation was not strongly linked to any particular factor. Using a more restricted energy range to reduce the random fraction in the projections had no significant effect. Conclusions: Simulations of 90Y PET confirm that quantitative 90Y is achievable with the same approach as that used for 18F, and that there is likely very little margin for improvement by attempting to model aspects unique to 90Y, such as the much higher random fraction or the presence of bremsstrahlung in the singles data.
Jansen, Sanne M; de Bruin, Daniel M; van Berge Henegouwen, Mark I; Strackee, Simon D; Veelo, Denise P; van Leeuwen, Ton G; Gisbertz, Suzanne S
2017-01-01
Compromised perfusion as a result of surgical intervention causes a reduction of oxygen and nutrients in tissue and therefore decreased tissue vitality. Quantitative imaging of tissue perfusion during reconstructive surgery may therefore reduce the incidence of complications. Non-invasive optical techniques allow real-time tissue imaging with high resolution and high contrast. The objectives of this study are, first, to assess the feasibility and accuracy of optical coherence tomography (OCT), sidestream darkfield microscopy (SDF), laser speckle contrast imaging (LSCI), and fluorescence imaging (FI) for quantitative perfusion imaging and, second, to identify criteria that enable risk prediction of necrosis during gastric tube and free flap reconstruction. This prospective, multicenter, observational in vivo pilot study will assess tissue perfusion using four optical technologies: OCT, SDF, LSCI, and FI in 40 patients: 20 patients who will undergo gastric tube reconstruction after esophagectomy and 20 patients who will undergo free flap surgery. Intra-operative images of gastric perfusion will be obtained directly after reconstruction at four perfusion areas. Feasibility of perfusion imaging will be analyzed per technique. Quantitative parameters directly related to perfusion will be scored per perfusion area, and differences between biologically good versus reduced perfusion will be tested statistically. Patient outcome will be correlated to images and perfusion parameters. Differences in perfusion parameters before and after a bolus of ephedrine will be tested for significance. This study will identify quantitative perfusion-related parameters for an objective assessment of tissue perfusion during surgery. This will likely allow early risk stratification of necrosis development, which will aid in achieving a reduction of complications in gastric tube reconstruction and free flap transplantation. Clinicaltrials.gov registration number NCT02902549. Dutch Central Committee on Research Involving Human Subjects registration number NL52377.018.15.
The SCHEIE Visual Field Grading System
Sankar, Prithvi S.; O’Keefe, Laura; Choi, Daniel; Salowe, Rebecca; Miller-Ellis, Eydie; Lehman, Amanda; Addis, Victoria; Ramakrishnan, Meera; Natesh, Vikas; Whitehead, Gideon; Khachatryan, Naira; O’Brien, Joan
2017-01-01
Objective No method of grading visual field (VF) defects has been widely accepted throughout the glaucoma community. The SCHEIE (Systematic Classification of Humphrey visual fields-Easy Interpretation and Evaluation) grading system for glaucomatous visual fields was created to convey qualitative and quantitative information regarding visual field defects in an objective, reproducible, and easily applicable manner for research purposes. Methods The SCHEIE grading system is composed of a qualitative and a quantitative score. The qualitative score consists of designation in one or more of the following categories: normal, central scotoma, paracentral scotoma, paracentral crescent, temporal quadrant, nasal quadrant, peripheral arcuate defect, expansive arcuate, or altitudinal defect. The quantitative component incorporates the Humphrey visual field index (VFI), the location of visual defects in the superior and inferior hemifields, and blind spot involvement. Accuracy and speed of grading using the qualitative and quantitative components were calculated for non-physician graders. Results Graders had a median accuracy of 96.67% for their qualitative scores and a median accuracy of 98.75% for their quantitative scores. Graders took a mean of 56 seconds per visual field to assign a qualitative score and 20 seconds per visual field to assign a quantitative score. Conclusion The SCHEIE grading system is a reproducible tool that combines qualitative and quantitative measurements to grade glaucomatous visual field defects. The system aims to standardize clinical staging and to make specific visual field defects more easily identifiable. Specific patterns of visual field loss may also be associated with genetic variants in future genetic analysis. PMID:28932621
Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris
2011-09-01
Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography, leading to underestimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multiresolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high-resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low-resolution PET image using wavelet decompositions. Although this method allows the creation of PVE-corrected images, it is based on a 2D global correlation model, which may introduce artifacts in regions where no significant correlation exists between anatomical and functional details. A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artifacts and significantly improving the quality of the corrected images and their quantitative accuracy. A new 3D local model was thus proposed for voxel-wise PVE correction based on the original mutual multiresolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information.
Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases
Zhang, Hongpo
2018-01-01
Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and predictions from shallow neural networks show high variance. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined autonomously, and unsupervised training and supervised optimization are combined, ensuring prediction accuracy while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets from the UCI repository. Experimental results showed mean prediction accuracies of 91.26% and 89.78%, with variances of prediction accuracy of 5.78 and 4.46, respectively. PMID:29854369
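As a rough, hypothetical illustration of the depth-selection idea above (not the authors' implementation), the sketch below stacks restricted Boltzmann machines and stops when the layer-wise reconstruction error stops improving by more than a tolerance; the layer width, tolerance, and stand-in data are all assumptions.

    # Sketch: choose DBN depth by monitoring layer-wise reconstruction error.
    # Assumptions (not from the paper): BernoulliRBM layers, a fixed tolerance,
    # and mean-squared reconstruction error from one Gibbs step.
    import numpy as np
    from sklearn.neural_network import BernoulliRBM

    def grow_dbn(X, max_layers=5, n_hidden=64, tol=1e-3, seed=0):
        """Stack RBMs until reconstruction error stops improving by > tol."""
        layers, inp, prev_err = [], X, np.inf
        for _ in range(max_layers):
            rbm = BernoulliRBM(n_components=n_hidden, n_iter=20, random_state=seed)
            rbm.fit(inp)
            err = np.mean((inp - rbm.gibbs(inp)) ** 2)  # one-step reconstruction
            if prev_err - err <= tol:                   # no useful improvement
                break
            layers.append(rbm)
            prev_err, inp = err, rbm.transform(inp)     # feed hidden activations up
        return layers

    X = np.random.rand(200, 30)   # stand-in for heart-disease features scaled to [0, 1]
    print(len(grow_dbn(X)), "layers selected")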
Mohammed, Yassene; Percy, Andrew J; Chambers, Andrew G; Borchers, Christoph H
2015-02-06
Multiplexed targeted quantitative proteomics typically utilizes multiple reaction monitoring and allows the optimized quantification of a large number of proteins. One challenge, however, is the large amount of data that needs to be reviewed, analyzed, and interpreted. Different vendors provide software for their instruments, which determines the recorded responses of the heavy and endogenous peptides and performs the response-curve integration. Bringing multiplexed data together and generating standard curves is often an off-line step accomplished, for example, with spreadsheet software. This can be laborious, as it requires determining the concentration levels that meet the required accuracy and precision criteria in an iterative process. We present here a computer program, Qualis-SIS, which generates standard curves from multiplexed MRM experiments and determines analyte concentrations in biological samples. Multiple level-removal algorithms and acceptance criteria for concentration levels are implemented. When used to apply the standard curve to new samples, the software flags each measurement according to its quality. From the user's perspective, the data processing is instantaneous due to the reactivity paradigm used, and the user can download the results of the stepwise calculations for further processing, if necessary. This allows for more consistent data analysis and can dramatically accelerate the downstream data analysis.
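The iterative level-removal step described above can be sketched as follows; this is a minimal illustration, not the Qualis-SIS source, and the 80-120% back-calculated accuracy window and the example data are assumptions.

    # Sketch: fit a line through (concentration, response) levels, drop levels
    # whose back-calculated accuracy misses the criterion, refit until all pass.
    import numpy as np

    def fit_curve(levels, responses, accuracy_window=(0.8, 1.2)):
        levels, responses = np.asarray(levels, float), np.asarray(responses, float)
        keep = np.ones(len(levels), bool)
        while keep.sum() >= 3:
            slope, intercept = np.polyfit(levels[keep], responses[keep], 1)
            back_calc = (responses - intercept) / slope   # concentration estimate
            accuracy = back_calc / levels                 # ratio to nominal
            bad = keep & ((accuracy < accuracy_window[0]) | (accuracy > accuracy_window[1]))
            if not bad.any():
                return slope, intercept, keep
            keep[np.argmax(bad)] = False                  # remove one failing level
        raise ValueError("too few acceptable levels for a standard curve")

    conc = [0.5, 1, 5, 10, 50, 100]                       # fmol, hypothetical
    resp = [0.02, 0.11, 0.52, 1.05, 5.2, 9.6]             # light/heavy ratio
    slope, intercept, used = fit_curve(conc, resp)
    print(f"slope={slope:.4f}, intercept={intercept:.4f}, levels kept={used.sum()}")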
Abuali, M M; Katariwala, R; LaBombardi, V J
2012-05-01
The agar proportion method (APM) for determining Mycobacterium tuberculosis susceptibilities is a qualitative method that requires 21 days to produce results. The Sensititre method allows for a quantitative assessment. Our objective was to compare the accuracy, time to results, and ease of use of the Sensititre method to the APM. 7H10 plates in the APM and 96-well microtiter dry MYCOTB panels containing 12 antibiotics at full dilution ranges in the Sensititre method were inoculated with M. tuberculosis and read for colony growth. Thirty-seven clinical isolates were tested using both methods, and 26 challenge strains of blinded susceptibilities were tested using the Sensititre method only. The Sensititre method displayed 99.3% concordance with the APM. The APM provided reliable results on day 21, whereas the Sensititre method displayed consistent results by day 10. The Sensititre method provides a more rapid, quantitative, and efficient method of testing both first- and second-line drugs when compared to the gold standard. It will give clinicians a sense of the degree of susceptibility, thus guiding the therapeutic decision-making process. Furthermore, the microwell plate format, which requires no instrumentation, will allow its use in resource-poor settings.
Temporal lobe epilepsy: quantitative MR volumetry in detection of hippocampal atrophy.
Farid, Nikdokht; Girard, Holly M; Kemmotsu, Nobuko; Smith, Michael E; Magda, Sebastian W; Lim, Wei Y; Lee, Roland R; McDonald, Carrie R
2012-08-01
To determine the ability of fully automated volumetric magnetic resonance (MR) imaging to depict hippocampal atrophy (HA) and to help correctly lateralize the seizure focus in patients with temporal lobe epilepsy (TLE). This study was conducted with institutional review board approval and in compliance with HIPAA regulations. Volumetric MR imaging data were analyzed for 34 patients with TLE and 116 control subjects. Structural volumes were calculated by using U.S. Food and Drug Administration-cleared software for automated quantitative MR imaging analysis (NeuroQuant). Results of quantitative MR imaging were compared with visual detection of atrophy, and, when available, with histologic specimens. Receiver operating characteristic analyses were performed to determine the optimal sensitivity and specificity of quantitative MR imaging for detecting HA and asymmetry. A linear classifier with cross validation was used to estimate the ability of quantitative MR imaging to help lateralize the seizure focus. Quantitative MR imaging-derived hippocampal asymmetries discriminated patients with TLE from control subjects with high sensitivity (86.7%-89.5%) and specificity (92.2%-94.1%). When a linear classifier was used to discriminate left versus right TLE, hippocampal asymmetry achieved 94% classification accuracy. Volumetric asymmetries of other subcortical structures did not improve classification. Compared with invasive video electroencephalographic recordings, lateralization accuracy was 88% with quantitative MR imaging and 85% with visual inspection of volumetric MR imaging studies but only 76% with visual inspection of clinical MR imaging studies. Quantitative MR imaging can depict the presence and laterality of HA in TLE with accuracy rates that may exceed those achieved with visual inspection of clinical MR imaging studies. Thus, quantitative MR imaging may enhance standard visual analysis, providing a useful and viable means for translating volumetric analysis into clinical practice.
Heijtel, D F R; Mutsaerts, H J M M; Bakker, E; Schober, P; Stevens, M F; Petersen, E T; van Berckel, B N M; Majoie, C B L M; Booij, J; van Osch, M J P; Vanbavel, E; Boellaard, R; Lammertsma, A A; Nederveen, A J
2014-05-15
Measurements of the cerebral blood flow (CBF) and cerebrovascular reactivity (CVR) provide useful information about cerebrovascular condition and regional metabolism. Pseudo-continuous arterial spin labeling (pCASL) is a promising non-invasive MRI technique to quantitatively measure the CBF, whereas additional hypercapnic pCASL measurements are currently showing great promise to quantitatively assess the CVR. However, the introduction of pCASL at a larger scale awaits further evaluation of the exact accuracy and precision compared to the gold standard. (15)O H₂O positron emission tomography (PET) is currently regarded as the most accurate and precise method to quantitatively measure both CBF and CVR, though it is one of the more invasive methods as well. In this study we therefore assessed the accuracy and precision of quantitative pCASL-based CBF and CVR measurements by performing a head-to-head comparison with (15)O H₂O PET, based on quantitative CBF measurements during baseline and hypercapnia. We demonstrate that pCASL CBF imaging is accurate during both baseline and hypercapnia with respect to (15)O H₂O PET with a comparable precision. These results pave the way for quantitative usage of pCASL MRI in both clinical and research settings. Copyright © 2014 Elsevier Inc. All rights reserved.
42 CFR 493.933 - Endocrinology.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) To determine the accuracy of a laboratory's response for qualitative and quantitative endocrinology... determined under paragraph (c)(2) or (c)(3) of this section. (2) For quantitative endocrinology tests or...
Naik, P K; Singh, T; Singh, H
2009-07-01
Quantitative structure-activity relationship (QSAR) analyses were performed independently on data sets belonging to two groups of insecticides, namely the organophosphates and carbamates. Several types of descriptors, including topological, spatial, thermodynamic, information content, lead likeness and E-state indices, were used to derive quantitative relationships between insecticide activities and structural properties of chemicals. A systematic search approach based on missing value, zero value, simple correlation and multi-collinearity tests, as well as the use of a genetic algorithm, allowed the optimal selection of the descriptors used to generate the models. The QSAR models developed for both the organophosphate and carbamate groups revealed good predictability, with r(2) values of 0.949 and 0.838 and cross-validated q(2) values of 0.890 and 0.765, respectively. In addition, a linear correlation was observed between the predicted and experimental LD(50) values for the test set data, with r(2) of 0.871 and 0.788 for the organophosphate and carbamate groups, respectively, indicating that the prediction accuracy of the QSAR models was acceptable. The models were also tested successfully against external validation criteria. The QSAR models developed in this study should help the further design of novel potent insecticides.
The quality of our drinking water: aluminium determination with an acoustic wave sensor.
Veríssimo, Marta I S; Gomes, M Teresa S R
2008-06-09
A new methodology based on an inexpensive aluminium acoustic wave sensor is presented. Although the aluminium sensor has already been reported and the composition of the selective membrane is known, the low detection limits required for the analysis of drinking water demanded the inclusion of a preconcentration stage, as well as an optimization of the sensor. The necessary coating amount was established, as well as the best preconcentration protocol in terms of oxidation of organic matter and aluminium elution from the Chelex-100 resin. The methodology developed with the acoustic wave sensor allowed aluminium quantitation above 0.07 mg L(-1). Several water samples from Portugal were analysed using the acoustic wave sensor, as well as by UV-vis spectrophotometry. Results obtained with the two methodologies were not statistically different (alpha=0.05), in terms of both accuracy and precision. This new methodology proved to be adequate for aluminium quantitation in drinking water and was faster and less reagent-consuming than the UV spectrophotometric methodology.
Fuzzy method of recognition of high molecular substances in evidence-based biology
NASA Astrophysics Data System (ADS)
Olevskyi, V. I.; Smetanin, V. T.; Olevska, Yu. B.
2017-10-01
Modern requirements for reliable, high-quality results put mathematical methods of data analysis at the forefront. For this reason, evidence-based methods of processing experimental data have become increasingly popular in the biological sciences and medicine. Their basis is meta-analysis, the quantitative synthesis of a large number of randomized trials addressing the same problem, which are often contradictory and performed by different authors. It allows identification of the most important trends and quantitative indicators in the data, verification of proposed hypotheses, and discovery of new effects in the population genotype. The existing methods of recognizing high-molecular substances by gel electrophoresis of proteins under denaturing conditions are based on approximate comparison of the contrast of electrophoregrams with a standard solution of known substances. We propose a fuzzy method for modeling experimental data to increase the accuracy and validity of findings in the detection of new proteins.
Single-exposure quantitative phase imaging in color-coded LED microscopy.
Lee, Wonchan; Jung, Daeseong; Ryu, Suho; Joo, Chulmin
2017-04-03
We demonstrate single-shot quantitative phase imaging (QPI) in a platform of color-coded LED microscopy (cLEDscope). The light source in a conventional microscope is replaced by a circular LED pattern that is trisected into subregions with equal area, assigned to red, green, and blue colors. Image acquisition with a color image sensor and subsequent computation based on weak object transfer functions allow for the QPI of a transparent specimen. We also provide a correction method for color-leakage, which may be encountered in implementing our method with consumer-grade LEDs and image sensors. Most commercially available LEDs and image sensors do not provide spectrally isolated emissions and pixel responses, generating significant error in phase estimation in our method. We describe the correction scheme for this color-leakage issue, and demonstrate improved phase measurement accuracy. The computational model and single-exposure QPI capability of our method are presented by showing images of calibrated phase samples and cellular specimens.
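One standard way to realize the color-leakage correction described above is linear unmixing with a pre-calibrated 3x3 mixing matrix; the sketch below assumes such a matrix (values illustrative, not from the paper) measured beforehand by lighting one LED color at a time.

    # Minimal sketch of linear color-leakage correction, assuming cross-talk is
    # well modeled by a fixed 3x3 mixing matrix M with measured = M @ true.
    import numpy as np

    M = np.array([[0.92, 0.06, 0.02],   # red pixel response to R, G, B LEDs
                  [0.08, 0.85, 0.07],   # green pixel response
                  [0.03, 0.10, 0.87]])  # blue pixel response
    M_inv = np.linalg.inv(M)

    def unmix(rgb_image):
        """rgb_image: (H, W, 3) raw intensities -> leakage-corrected channels."""
        return np.einsum('ij,hwj->hwi', M_inv, rgb_image)

    raw = np.random.rand(4, 4, 3)       # stand-in for a camera frame
    corrected = unmix(raw)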
Wang, Xianghong; Jiang, Daiming; Yang, Daichang
2015-01-01
The selection of homozygous lines is a crucial step in the characterization of newly generated transgenic plants, and is particularly time- and labor-consuming when transgene stacking is required. Here, we report a fast and accurate method based on quantitative real-time PCR, with the rice gene RBE4 as a reference gene, for the selection of homozygous lines in multiple-transgene stacking in rice. This method allowed the stacking of up to three transgenes to be determined within four generations. Selection accuracy reached 100% for a single locus and 92.3% for two loci. The method confers distinct advantages over current transgenic research methodologies, as it is more accurate, rapid, and reliable. Therefore, this protocol could be used to efficiently select homozygous plants and to expedite the time- and labor-consuming processes normally required for stacking multiple transgenes. The protocol was standardized for determination of multiple gene stacking in molecular breeding via marker-assisted selection.
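A minimal sketch of the underlying zygosity call, assuming the comparative Ct method with near-100% amplification efficiency; the Ct values, the decision cutoff, and the hemizygous calibrator are hypothetical, and the paper's exact calculation may differ.

    # Sketch of copy-number calling by the comparative Ct method, with the
    # reference gene (e.g. RBE4) normalizing template input.
    def relative_copy_number(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
        ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
        return 2 ** (-ddct)             # ratio to the hemizygous calibrator

    # Calibrator: a known hemizygous (single-copy) line; values hypothetical.
    ratio = relative_copy_number(ct_target=24.1, ct_ref=22.0,
                                 ct_target_cal=25.2, ct_ref_cal=22.1)
    genotype = "homozygous" if ratio > 1.5 else "hemizygous"
    print(f"copy ratio = {ratio:.2f} -> {genotype}")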
Aknoun, Sherazade; Savatier, Julien; Bon, Pierre; Galland, Frédéric; Abdeladim, Lamiae; Wattellier, Benoit; Monneret, Serge
2015-01-01
Single-cell dry mass measurement is used in biology to follow cell cycle, to address effects of drugs, or to investigate cell metabolism. Quantitative phase imaging technique with quadriwave lateral shearing interferometry (QWLSI) allows measuring cell dry mass. The technique is very simple to set up, as it is integrated in a camera-like instrument. It simply plugs onto a standard microscope and uses a white light illumination source. Its working principle is first explained, from image acquisition to automated segmentation algorithm and dry mass quantification. Metrology of the whole process, including its sensitivity, repeatability, reliability, sources of error, over different kinds of samples and under different experimental conditions, is developed. We show that there is no influence of magnification or spatial light coherence on dry mass measurement; effect of defocus is more critical but can be calibrated. As a consequence, QWLSI is a well-suited technique for fast, simple, and reliable cell dry mass study, especially for live cells.
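The dry-mass computation that QWLSI enables can be sketched as follows, using the standard relation between the integrated optical path difference (OPD) and dry mass with a literature specific refraction increment of roughly 0.2 µm³/pg; the phase map, pixel size, and segmentation mask below are stand-ins.

    # Sketch: dry mass = (sum of OPD over the cell) * pixel area / alpha.
    import numpy as np

    def dry_mass_pg(opd_um, mask, pixel_area_um2, alpha=0.2):
        """opd_um: (H, W) OPD map in micrometers; mask: boolean cell segmentation."""
        return opd_um[mask].sum() * pixel_area_um2 / alpha

    opd = np.full((100, 100), 0.05)     # 50 nm of OPD everywhere, stand-in data
    mask = np.zeros((100, 100), bool)
    mask[40:60, 40:60] = True           # a 20x20-pixel "cell"
    print(f"{dry_mass_pg(opd, mask, pixel_area_um2=0.25):.1f} pg")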
An HPLC method for determination of azadirachtin residues in bovine muscle.
Gai, María Nella; Álvarez, Christian; Venegas, Raúl; Morales, Javier
2011-04-01
A high-performance liquid chromatography (HPLC) method for the determination of azadirachtin (A and B) residues in bovine muscle has been developed. Azadirachtin is a neutral triterpene and chemotherapeutic agent effective in controlling some pest flies, such as horse, stable, horn and fruit flies. The present HPLC method uses isocratic elution and UV detection. Liquid-liquid extraction and solid-phase purification were used for the clean-up of the biological matrix. The chromatographic determination of these components is achieved using a C18 analytical column with a water-acetonitrile mixture (27.5:72.5, v/v) as mobile phase, a 1 mL/min flow rate, a 45 °C column temperature and UV detection at 215 nm. The azadirachtin peaks are well resolved and free of interference from matrix components. The extraction and analytical method developed in this work allows the quantitation of azadirachtin with precision and accuracy, establishing a lower limit of quantitation for azadirachtin extracted from the biological matrix.
Quantitative analysis of single-molecule force spectroscopy on folded chromatin fibers
Meng, He; Andresen, Kurt; van Noort, John
2015-01-01
Single-molecule techniques allow for picoNewton manipulation and nanometer accuracy measurements of single chromatin fibers. However, the complexity of the data, the heterogeneity of the composition of individual fibers and the relatively large fluctuations in extension of the fibers complicate a structural interpretation of such force-extension curves. Here we introduce a statistical mechanics model that quantitatively describes the extension of individual fibers in response to force on a per nucleosome basis. Four nucleosome conformations can be distinguished when pulling a chromatin fiber apart. A novel, transient conformation is introduced that coexists with single wrapped nucleosomes between 3 and 7 pN. Comparison of force-extension curves between single nucleosomes and chromatin fibers shows that embedding nucleosomes in a fiber stabilizes the nucleosome by 10 kBT. Chromatin fibers with 20- and 50-bp linker DNA follow a different unfolding pathway. These results have implications for accessibility of DNA in fully folded and partially unwrapped chromatin fibers and are vital for understanding force unfolding experiments on nucleosome arrays. PMID:25779043
Xu, Yi-Fan; Lu, Wenyun; Rabinowitz, Joshua D.
2015-01-15
Liquid chromatography–mass spectrometry (LC-MS) technology allows rapid quantitation of cellular metabolites, with metabolites identified by mass spectrometry and chromatographic retention time. Recently, with the development of rapid-scanning, high-resolution, high-accuracy mass spectrometers and the desire for high-throughput screening, minimal or no chromatographic separation has become increasingly popular. When analyzing complex cellular extracts, however, the lack of chromatographic separation can result in misannotation of structurally related metabolites. Here, we show that, even using electrospray ionization, a soft ionization method, in-source fragmentation generates unwanted byproducts of identical mass to common metabolites. For example, nucleotide triphosphates generate nucleotide diphosphates, and hexose phosphates generate triose phosphates. We also evaluated yeast intracellular metabolite extracts and found more than 20 cases of in-source fragments that mimic common metabolites. Accordingly, chromatographic separation is required for accurate quantitation of many common cellular metabolites.
Zhang, Cheng-Cheng; Li, Ru; Jiang, Honghui; Lin, Shujun; Rogalski, Jason C; Liu, Kate; Kast, Juergen
2015-02-06
Small GTPases are a family of key signaling molecules that are ubiquitously expressed in various types of cells. Their activity is often analyzed by western blot, which is limited by its multiplexing capability, the quality of isoform-specific antibodies, and the accuracy of quantification. To overcome these issues, a quantitative multiplexed small GTPase activity assay has been developed. Using four different binding domains, this assay allows the binding of up to 12 active small GTPase isoforms simultaneously in a single experiment. To accurately quantify the closely related small GTPase isoforms, a targeted proteomic approach, i.e., selected/multiple reaction monitoring, was developed, and its functionality and reproducibility were validated. This assay was successfully applied to human platelets and revealed time-resolved coactivation of multiple small GTPase isoforms in response to agonists and differential activation of these isoforms in response to inhibitor treatment. This widely applicable approach can be used for signaling pathway studies and inhibitor screening in many cellular systems.
Quantitative evaluation of software packages for single-molecule localization microscopy.
Sage, Daniel; Kirshner, Hagai; Pengo, Thomas; Stuurman, Nico; Min, Junhong; Manley, Suliana; Unser, Michael
2015-08-01
The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montanini, R.; Freni, F.; Rossi, G. L.
This paper reports one of the first experimental results on the application of ultrasound-activated lock-in vibrothermography for quantitative assessment of buried flaws in complex cast parts. The use of amplitude-modulated ultrasonic heat generation allowed selective response of defective areas within the part, as the defect itself is turned into a local thermal wave emitter. Quantitative evaluation of hidden damage was accomplished by estimating independently both the area and the depth extension of the buried flaws, while x-ray 3D computed tomography was used as the reference for sizing accuracy assessment. To retrieve the flaw area, a simple yet effective histogram-based phase image segmentation algorithm with automatic pixel classification was developed. A clear correlation was found between the thermal (phase) signature measured by the infrared camera on the target surface and the actual mean cross-section area of the flaw. Due to the very fast cycle time (<30 s/part), the method could potentially be applied for 100% quality control of casting components.
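The histogram-based segmentation with automatic pixel classification mentioned above can be realized, for example, with Otsu's between-class-variance threshold; the sketch below is that substitution, not necessarily the authors' exact algorithm, applied to a synthetic phase image.

    # Sketch: Otsu thresholding on a phase image to separate "defect" pixels.
    import numpy as np

    def otsu_threshold(img, bins=256):
        hist, edges = np.histogram(img, bins=bins)
        centers = (edges[:-1] + edges[1:]) / 2
        w0 = np.cumsum(hist)                      # pixels below each candidate
        w1 = w0[-1] - w0                          # pixels above
        m0 = np.cumsum(hist * centers)
        mu0 = np.divide(m0, w0, out=np.zeros_like(m0), where=w0 > 0)
        mu1 = np.divide(m0[-1] - m0, w1, out=np.zeros_like(m0), where=w1 > 0)
        between_var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        return centers[np.argmax(between_var)]

    phase = np.concatenate([np.random.normal(0, 1, 9000),   # sound material
                            np.random.normal(8, 1, 1000)])  # "defect" pixels
    t = otsu_threshold(phase)
    print(f"threshold={t:.2f}, defect fraction={(phase > t).mean():.3f}")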
Liu, Jian; Gao, Yun-Hua; Li, Ding-Dong; Gao, Yan-Chun; Hou, Ling-Mi; Xie, Ting
2014-01-01
To compare the value of qualitative and quantitative contrast-enhanced ultrasound (CEUS) analysis in the identification of breast tumor lumps, qualitative and quantitative CEUS indicators for 73 cases of breast tumor lumps were retrospectively analyzed by univariate and multivariate approaches. Logistic regression was applied and ROC curves were drawn for evaluation and comparison. The regression equation generated from the qualitative CEUS indicators contained three indicators, namely enhanced homogeneity, diameter line expansion and peak intensity grading, and demonstrated a prediction accuracy for benign and malignant breast tumor lumps of 91.8%; the regression equation generated from the quantitative indicators contained only one indicator, namely the relative peak intensity, and its prediction accuracy was 61.5%. The corresponding areas under the ROC curve for the qualitative and quantitative analyses were 91.3% and 75.7%, respectively, a statistically significant difference by the Z test (P<0.05). The ability of qualitative CEUS analysis to identify breast tumor lumps is thus better than that of quantitative analysis.
Alaska national hydrography dataset positional accuracy assessment study
Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy
2013-01-01
Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive, as it requires the collection of independent, well-defined test points; quantitative analysis of relative positional error, however, is feasible.
NASA Astrophysics Data System (ADS)
Pratt, Jon R.; Kramar, John A.; Newell, David B.; Smith, Douglas T.
2005-05-01
If nanomechanical testing is to evolve into a tool for process and quality control in semiconductor fabrication, great advances in throughput, repeatability, and accuracy of the associated instruments and measurements will be required. A recent grant awarded by the NIST Advanced Technology Program seeks to address the throughput issue by developing a high-speed AFM-based platform for quantitative nanomechanical measurements. The following paper speaks to the issue of quantitative accuracy by presenting an overview of various standards and techniques under development at NIST and other national metrology institutes (NMIs) that can provide a metrological basis for nanomechanical testing. The infrastructure we describe places firm emphasis on traceability to the International System of Units, paving the way for truly quantitative, rather than qualitative, physical property testing.
NASA Technical Reports Server (NTRS)
Wallace, William T.; Limero, Thomas F.; Gazda, Daniel B.; Minton, John M.; Macatangay, Ariel V.; Dwivedi, Prabha; Fernandez, Facundo M.
2014-01-01
Real-time environmental monitoring on the ISS is necessary to provide data in a timely fashion and to help ensure astronaut health. Current real-time water total organic carbon (TOC) monitoring provides high-quality trending information, but compound-specific data are needed. The combination of ETV with the AQM showed that compounds of interest could be liberated from water and analyzed in the same manner as in air sampling. Calibration of the AQM using water samples allowed for the quantitative analysis of ISS archival samples. Some calibration issues remain, but the excellent accuracy for DMSD indicates that ETV holds promise as a sample introduction method for water analysis in spaceflight.
Garson, Christopher D; Li, Bing; Acton, Scott T; Hossack, John A
2008-06-01
The active surface technique using gradient vector flow allows semi-automated segmentation of ventricular borders. The accuracy of the algorithm depends on the optimal selection of several key parameters. We investigated the use of conservation of myocardial volume for quantitative assessment of each of these parameters using synthetic and in vivo data. We predicted that for a given set of model parameters, strong conservation of volume would correlate with accurate segmentation. The metric was most useful when applied to the gradient vector field weighting and temporal step-size parameters, but less effective in guiding an optimal choice of the active surface tension and rigidity parameters.
Quantitative data standardization of X-ray based densitometry methods
NASA Astrophysics Data System (ADS)
Sergunova, K. A.; Petraikin, A. V.; Petrjajkin, F. A.; Akhmad, K. S.; Semenov, D. S.; Potrakhov, N. N.
2018-02-01
In the present work, we propose the design of a special liquid phantom for assessing the accuracy of quantitative densitometric data. We also present the dependencies between measured bone mineral density (BMD) values and the nominal values for different X-ray based densitometry techniques. The linear relationships obtained make it possible to introduce correction factors that increase the accuracy of BMD measurement by the QCT, DXA and DECT methods, and to use them for standardization and comparison of measurements.
NASA Astrophysics Data System (ADS)
Gupta, Arun; Kim, Kyeong Yun; Hwang, Donghwi; Lee, Min Sun; Lee, Dong Soo; Lee, Jae Sung
2018-06-01
SPECT plays an important role in peptide receptor targeted radionuclide therapy using theranostic radionuclides such as Lu-177 for the treatment of various cancers. However, SPECT studies must be quantitatively accurate, because reliable assessment of tumor uptake and tumor-to-normal tissue ratios can only be performed on quantitatively accurate images. Hence, it is important to evaluate the performance parameters and quantitative accuracy of preclinical SPECT systems for therapeutic radioisotopes before conducting pre- and post-therapy SPECT imaging or dosimetry studies. In this study, we evaluated the system performance and quantitative accuracy of the NanoSPECT/CT scanner for Lu-177 imaging using point source and uniform phantom studies. We measured the recovery coefficient, uniformity, spatial resolution, system sensitivity and calibration factor for the mouse whole-body standard aperture. We also performed the experiments using Tc-99m to compare the results with those for Lu-177. We found a recovery coefficient of more than 70% for Lu-177 at the optimum noise level when nine iterations were used. The spatial resolution of Lu-177 with and without a uniform background was comparable to that of Tc-99m in the axial, radial and tangential directions. The system sensitivity measured for Lu-177 was almost three times lower than that of Tc-99m.
Confidence estimation for quantitative photoacoustic imaging
NASA Astrophysics Data System (ADS)
Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena
2018-02-01
Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
Analytical robustness of quantitative NIR chemical imaging for Islamic paper characterization
NASA Astrophysics Data System (ADS)
Mahgoub, Hend; Gilchrist, John R.; Fearn, Thomas; Strlič, Matija
2017-07-01
Recently, spectral imaging techniques such as multispectral (MSI) and hyperspectral imaging (HSI) have gained importance in the field of heritage conservation. This paper explores the analytical robustness of quantitative chemical imaging for Islamic paper characterization by focusing on the effect of different measurement and processing parameters, i.e. acquisition conditions and calibration, on the accuracy of the collected spectral data. This will provide a better understanding of a technique that can provide a measure of change in collections through imaging. For the quantitative model, a special calibration target was devised using 105 samples from a well-characterized reference Islamic paper collection. Two material properties were of interest: starch sizing and cellulose degree of polymerization (DP). Multivariate data analysis methods were used to develop discrimination and regression models, which served as an evaluation methodology for the metrology of quantitative NIR chemical imaging. Spectral data were collected using a pushbroom HSI scanner (Gilden Photonics Ltd) in the 1000-2500 nm range with a spectral resolution of 6.3 nm, using a mirror scanning setup and halogen illumination. Data were acquired under different measurement conditions and acquisition parameters. Preliminary results indicated that measurement parameters such as the use of different lenses and different scanning backgrounds may not have a great influence on the quantitative results. Moreover, the evaluation methodology allowed the selection of the best pre-treatment method to apply to the data.
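A minimal sketch of the kind of multivariate regression model used here, assuming partial least squares on NIR spectra; the synthetic spectra, DP values, and component count are placeholders standing in for the 105-sample reference collection.

    # Sketch: PLS regression linking NIR spectra to degree of polymerization (DP),
    # with a train/validation split to probe robustness. Data are synthetic.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(105, 240))                       # 240 hypothetical wavelength bins
    dp = 400 + X[:, 50] * 120 + rng.normal(0, 20, 105)    # synthetic DP values

    X_tr, X_va, y_tr, y_va = train_test_split(X, dp, test_size=0.3, random_state=0)
    pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
    print(f"validation R^2 = {pls.score(X_va, y_va):.2f}")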
Photogrammetry and Its Potential Application in Medical Science on the Basis of Selected Literature.
Ey-Chmielewska, Halina; Chruściel-Nogalska, Małgorzata; Frączak, Bogumiła
2015-01-01
Photogrammetry is a science and technology that allows quantitative traits to be determined, i.e. the reproduction of object shapes, sizes and positions on the basis of their photographs. Images can be recorded in a wide range of wavelengths of electromagnetic radiation. The most common is the visible range, but near- and medium-infrared, thermal infrared, microwaves and X-rays are also used. The importance of photogrammetry has increased with the development of computer software. Digital image processing and real-time measurement have allowed the automation of many complex manufacturing processes. Photogrammetry has been widely used in many areas, especially in geodesy and cartography. In medicine, the method is used for measurement of the human body in the broad sense, for the planning and monitoring of therapeutic treatment and its results. Digital images obtained from optical-electronic sensors combined with computer technology offer the potential of objective measurement, thanks to the remote, non-contact nature of the data acquisition and its high accuracy. Photogrammetry also allows the adoption of common standards for archiving and processing patient data.
Phillips, Melissa M
2015-04-01
Vitamins are essential for improving and maintaining human health, and the main source of vitamins is the diet. Measurement of the quantities of water-soluble vitamins in common food materials is important to understand the impact of vitamin intake on human health, and also to provide necessary information for regulators to determine adequate intakes. Liquid chromatography (LC) and mass spectrometry (MS) based methods for water-soluble vitamin analysis are abundant in the literature, but most focus on only fortified foods or dietary supplements or allow determination of only a single vitamin. In this work, a method based on LC/MS and LC/MS/MS has been developed to allow simultaneous quantitation of eight water-soluble vitamins, including multiple forms of vitamins B3 and B6, in a variety of fortified and unfortified food-matrix Standard Reference Materials (SRMs). Optimization of extraction of unbound vitamin forms and confirmation using data from external laboratories ensured accuracy in the assigned values, and addition of stable isotope labeled internal standards for each of the vitamins allowed for increased precision.
Aslan, Kerim; Gunbey, Hediye Pinar; Tomak, Leman; Ozmen, Zafer; Incesu, Lutfi
The aim of this study was to investigate whether combining quantitative metrics (mamillopontine distance [MPD], pontomesencephalic angle, and mesencephalon anterior-posterior/medial-lateral diameter ratio) with qualitative signs (dural enhancement, subdural collections/hematoma, venous engorgement, pituitary gland enlargement, and tonsillar herniation) provides a more accurate diagnosis of intracranial hypotension (IH). The quantitative metrics and qualitative signs of 34 patients and 34 control subjects were assessed by 2 independent observers. Receiver operating characteristic (ROC) analysis was used to evaluate the diagnostic performance of the quantitative metrics and qualitative signs, and optimum cutoff values of the quantitative metrics for the diagnosis of IH were determined. Combined ROC curves were computed for combinations of quantitative metrics and qualitative signs; sensitivity, specificity, and positive and negative predictive values were calculated, and the best combined model was identified. Whereas MPD and pontomesencephalic angle were significantly lower in patients with IH than in the control group (P < 0.001), the mesencephalon anterior-posterior/medial-lateral diameter ratio was significantly higher (P < 0.001). Among qualitative signs, the highest individual discriminative power was dural enhancement, with an area under the ROC curve (AUC) of 0.838. Among quantitative metrics, the highest individual discriminative power was MPD, with an AUC of 0.947. The best accuracy in the diagnosis of IH was obtained by the combination of dural enhancement, venous engorgement, and MPD, with an AUC of 1.00. This study showed that the combined use of dural enhancement, venous engorgement, and MPD had a diagnostic accuracy of 100% for the diagnosis of IH. Therefore, a more accurate IH diagnosis can be achieved by combining quantitative metrics with qualitative signs.
Glauser, Gaétan; Grund, Baptiste; Gassner, Anne-Laure; Menin, Laure; Henry, Hugues; Bromirski, Maciej; Schütz, Frédéric; McMullen, Justin; Rochat, Bertrand
2016-03-15
A paradigm shift is underway in the field of quantitative liquid chromatography-mass spectrometry (LC-MS) analysis thanks to the arrival of recent high-resolution mass spectrometers (HRMS). The capability of HRMS to perform sensitive and reliable quantifications of a large variety of analytes in HR-full scan mode is showing that it is now realistic to perform quantitative and qualitative analysis with the same instrument. Moreover, HR-full scan acquisition offers a global view of sample extracts and allows retrospective investigations as virtually all ionized compounds are detected with a high sensitivity. In time, the versatility of HRMS together with the increasing need for relative quantification of hundreds of endogenous metabolites should promote a shift from triple-quadrupole MS to HRMS. However, a current "pitfall" in quantitative LC-HRMS analysis is the lack of HRMS-specific guidance for validated quantitative analyses. Indeed, false positive and false negative HRMS detections are rare, albeit possible, if inadequate parameters are used. Here, we investigated two key parameters for the validation of LC-HRMS quantitative analyses: the mass accuracy (MA) and the mass-extraction-window (MEW) that is used to construct the extracted-ion-chromatograms. We propose MA-parameters, graphs, and equations to calculate rational MEW width for the validation of quantitative LC-HRMS methods. MA measurements were performed on four different LC-HRMS platforms. Experimentally determined MEW values ranged between 5.6 and 16.5 ppm and depended on the HRMS platform, its working environment, the calibration procedure, and the analyte considered. The proposed procedure provides a fit-for-purpose MEW determination and prevents false detections.
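Constructing the extracted-ion chromatogram from a ppm-based MEW reduces to a simple symmetric window around the theoretical m/z, as sketched below; the analyte mass and the 10 ppm value are placeholders chosen within the 5.6-16.5 ppm range reported.

    # Sketch: m/z bounds for an extracted-ion chromatogram given a ppm window.
    def mew_bounds(mz, ppm):
        half_width = mz * ppm / 1e6
        return mz - half_width, mz + half_width

    lo, hi = mew_bounds(mz=180.0634, ppm=10)   # hypothetical analyte ion
    print(f"extract m/z {lo:.4f} - {hi:.4f}")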
[A new method of processing quantitative PCR data].
Ke, Bing-Shen; Li, Guang-Yun; Chen, Shi-Min; Huang, Xiang-Yan; Chen, Ying-Jian; Xu, Jun
2003-05-01
Standard PCR can no longer satisfy the needs of biotechnology development and clinical research. After extensive kinetic studies, PE Corporation found a linear relation between the initial template number and the cycle number at which the accumulating fluorescent product becomes detectable, and on this basis developed a quantitative PCR technique for the PE7700 and PE5700 instruments. However, the error of this technique is too large for the needs of biotechnology development and clinical research, so a better quantitative PCR technique is needed. The mathematical model submitted here draws on the achievements of related fields and is based on the PCR principle and a careful analysis of the molecular relationships among the main components of the PCR reaction system. The model describes the functional relation between product quantity (or fluorescence intensity) and the initial template number and other reaction conditions, and accurately reflects the accumulation of PCR product. Accurate quantitative PCR analysis can be performed using this relation: accumulated product quantity can be obtained from the initial template number. When this model is used for quantitative PCR analysis, the resulting error is related only to the accuracy of the fluorescence intensity measurement, i.e., to the instrument used. For example, when the fluorescence intensity is accurate to six digits and the template size is between 100 and 1,000,000, the quantitative accuracy exceeds 99%. The error differs markedly when the same conditions and instrument are used with different analysis methods; processing the data with the proposed model yields results up to 80 times more accurate than the threshold cycle (Ct) method.
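The threshold-cycle relation that both the PE approach and the proposed model build on can be sketched as follows: with amplification efficiency E, product grows as N = N0(1+E)^n, so the cycle at which a fixed fluorescence threshold is crossed is linear in log(N0). The threshold and copy numbers below are illustrative.

    # Sketch: cycles needed for n0 starting copies to reach a detection threshold.
    import numpy as np

    def ct_from_template(n0, threshold=1e10, efficiency=1.0):
        return np.log(threshold / n0) / np.log(1 + efficiency)

    for n0 in (1e2, 1e4, 1e6):
        print(f"N0={n0:.0e} -> Ct={ct_from_template(n0):.2f}")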
NASA Astrophysics Data System (ADS)
Diesing, Markus; Green, Sophie L.; Stephens, David; Lark, R. Murray; Stewart, Heather A.; Dove, Dayton
2014-08-01
Marine spatial planning and conservation need underpinning with sufficiently detailed and accurate seabed substrate and habitat maps. Although multibeam echosounders enable us to map the seabed with high resolution and spatial accuracy, there is still a lack of fit-for-purpose seabed maps. This is due to the high costs involved in carrying out systematic seabed mapping programmes and the fact that the development of validated, repeatable, quantitative and objective methods of swath acoustic data interpretation is still in its infancy. We compared a wide spectrum of approaches including manual interpretation, geostatistics, object-based image analysis and machine-learning to gain further insights into the accuracy and comparability of acoustic data interpretation approaches based on multibeam echosounder data (bathymetry, backscatter and derivatives) and seabed samples with the aim to derive seabed substrate maps. Sample data were split into a training and validation data set to allow us to carry out an accuracy assessment. Overall thematic classification accuracy ranged from 67% to 76% and Cohen's kappa varied between 0.34 and 0.52. However, these differences were not statistically significant at the 5% level. Misclassifications were mainly associated with uncommon classes, which were rarely sampled. Map outputs were between 68% and 87% identical. To improve classification accuracy in seabed mapping, we suggest that more studies on the effects of factors affecting the classification performance as well as comparative studies testing the performance of different approaches need to be carried out with a view to developing guidelines for selecting an appropriate method for a given dataset. In the meantime, classification accuracy might be improved by combining different techniques to hybrid approaches and multi-method ensembles.
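The accuracy assessment described above, holding out validation samples and reporting overall accuracy and Cohen's kappa, can be sketched as follows; the substrate classes and labels are illustrative.

    # Sketch: cross-tabulate predicted vs observed substrate classes on a
    # held-out validation set and report the two headline metrics.
    from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

    observed  = ["mud", "sand", "sand", "gravel", "mud", "sand", "gravel", "mud"]
    predicted = ["mud", "sand", "mud",  "gravel", "mud", "sand", "sand",   "mud"]

    print(confusion_matrix(observed, predicted, labels=["mud", "sand", "gravel"]))
    print(f"overall accuracy = {accuracy_score(observed, predicted):.2f}")
    print(f"Cohen's kappa    = {cohen_kappa_score(observed, predicted):.2f}")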
Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis
Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.
2015-01-01
Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Kang, S; Eom, J
Purpose: Photon-counting detectors (PCDs) allow multi-energy X-ray imaging without additional exposures and spectral overlap. This capability improves the accuracy of material decomposition in dual-energy X-ray imaging and reduces radiation dose. In this study, PCD-based contrast-enhanced dual-energy mammography (CEDM) was compared with conventional CEDM in terms of radiation dose, image quality and accuracy of material decomposition. Methods: A dual-energy model was designed by using the Beer-Lambert law and a rational inverse fitting function for decomposing materials from a polychromatic X-ray source. A cadmium zinc telluride (CZT)-based PCD, which has five energy thresholds, and iodine solutions included in a 3D half-cylindrical phantom composed of 50% glandular and 50% adipose tissues were simulated by using a Monte Carlo simulation tool. The low- and high-energy images were obtained in accordance with the clinical exposure conditions for conventional CEDM. Energy bins of 20-33 and 34-50 keV were defined from X-ray energy spectra simulated at 50 kVp with different dose levels for implementing the PCD-based CEDM. The dual-energy mammographic techniques were compared by means of absorbed dose, noise properties and normalized root-mean-square error (NRMSE). Results: Compared with conventional CEDM, the iodine solutions were clearly decomposed by the PCD-based CEDM. Although the radiation dose for the PCD-based CEDM was lower than that for conventional CEDM, the PCD-based CEDM improved the noise properties and the accuracy of the decomposition images. Conclusion: This study demonstrates that PCD-based CEDM allows quantitative material decomposition and reduces radiation dose in comparison with conventional CEDM. Therefore, PCD-based CEDM is able to provide useful information for detecting breast tumors and enhancing diagnostic accuracy in mammography.
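The core of any dual-energy decomposition is a two-material Beer-Lambert system: log-attenuations measured in the low- and high-energy bins give two linear equations in the two basis-material thicknesses. The sketch below uses illustrative attenuation coefficients, not the study's calibration or its rational inverse fitting function.

    # Sketch: two-material decomposition from two energy bins via Beer-Lambert.
    import numpy as np

    A = np.array([[0.80, 5.00],   # mu at low energy:  [tissue, iodine] (1/cm)
                  [0.45, 2.00]])  # mu at high energy
    log_att = np.array([1.30, 0.65])          # -ln(I/I0) in the two energy bins
    thickness = np.linalg.solve(A, log_att)   # [tissue_cm, iodine_cm]
    print(f"tissue = {thickness[0]:.2f} cm, iodine-equivalent = {thickness[1]:.4f} cm")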
Quantitative characterisation of sedimentary grains
NASA Astrophysics Data System (ADS)
Tunwal, Mohit; Mulchrone, Kieran F.; Meere, Patrick A.
2016-04-01
Analysis of sedimentary texture helps in determining the formation, transportation and deposition processes of sedimentary rocks. Grain size analysis is traditionally quantitative, whereas grain shape analysis is largely qualitative. A semi-automated approach to quantitatively analyse the shape and size of sand-sized sedimentary grains is presented. Grain boundaries are manually traced from thin section microphotographs in the case of lithified samples and are automatically identified in the case of loose sediments. Shape and size parameters can then be estimated using a software package written on the Mathematica platform. While automated methodology already exists for loose sediment analysis, the available techniques for lithified samples are limited to high-definition thin section microphotographs showing clear contrast between framework grains and matrix. Along with grain size, shape parameters such as roundness, angularity, circularity, irregularity and fractal dimension are measured. A new grain shape parameter based on Fourier descriptors has also been developed. To test this new approach, theoretical examples were analysed, producing high-quality results that support the accuracy of the algorithm. Furthermore, sandstone samples from known aeolian and fluvial environments in the Dingle Basin, County Kerry, Ireland were collected and analysed. Modern loose sediments from glacial till from County Cork, Ireland and aeolian sediments from Rajasthan, India have also been collected and analysed. A graphical summary of the data is presented and allows quantitative distinction between samples extracted from different sedimentary environments.
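A grain-shape measure based on Fourier descriptors, as mentioned above, can be sketched as follows; the normalization choices and the circle-versus-ellipse demonstration are illustrative, since the paper's specific descriptor-based parameter is not reproduced here.

    # Sketch: treat the grain boundary as a complex signal, take its FFT, and
    # normalize out position, scale, and rotation to get shape descriptors.
    import numpy as np

    def fourier_descriptors(x, y, n_keep=4):
        z = x + 1j * y                            # boundary as complex samples
        F = np.fft.fft(z)
        F[0] = 0                                  # drop DC term -> translation invariance
        F = F / np.abs(F[1])                      # scale invariance
        idx = list(range(1, n_keep + 1)) + list(range(-n_keep, 0))
        return np.abs(F[idx])                     # magnitudes -> rotation invariance

    t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    circle  = fourier_descriptors(np.cos(t), np.sin(t))
    ellipse = fourier_descriptors(1.5 * np.cos(t), 0.7 * np.sin(t))
    print("circle :", np.round(circle, 3))        # all energy in the first harmonic
    print("ellipse:", np.round(ellipse, 3))       # extra energy in the -1 harmonic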
Quantitative analysis of SMEX'02 AIRSAR data for soil moisture inversion
NASA Technical Reports Server (NTRS)
Zyl, J. J. van; Njoku, E.; Jackson, T.
2003-01-01
This paper discusses in detail the characteristics of the AIRSAR data acquired, and provides an initial quantitative assessment of the accuracy of the radar inversion algorithms under these vegetated conditions.
Larger core size has superior technical and analytical accuracy in bladder tissue microarray.
Eskaros, Adel Rh; Egloff, Shanna A Arnold; Boyd, Kelli L; Richardson, Joyce E; Hyndman, M Eric; Zijlstra, Andries
2017-03-01
The construction of tissue microarrays (TMAs) with cores from a large number of paraffin-embedded tissues (donors) placed into a single paraffin block (recipient) is an effective method of analyzing samples from many patient specimens simultaneously. For the TMA to be successful, the cores within it must capture the correct histologic areas from the donor blocks (technical accuracy) and maintain concordance with the tissue of origin (analytical accuracy). This can be particularly challenging for tissues with small histological features such as small islands of carcinoma in situ (CIS), thin layers of normal urothelial lining of the bladder, or cancers that exhibit intratumor heterogeneity. In an effort to create a comprehensive TMA of a bladder cancer patient cohort that accurately represents the tumor heterogeneity and captures the small features of normal tissue and CIS, we determined how core size (0.6 vs 1.0 mm) impacted the technical and analytical accuracy of the TMA. The larger 1.0 mm core exhibited better technical accuracy for all tissue types at 80.9% (normal), 94.2% (tumor), and 71.4% (CIS), compared with 58.6%, 85.9%, and 63.8% for 0.6 mm cores. Although the 1.0 mm core provided better tissue capture, increasing the number of replicates from two to three with the 0.6 mm core compensated for its reduced technical accuracy. However, quantitative image analysis of proliferation using both Ki67+ immunofluorescence counts and manual mitotic counts demonstrated that the 1.0 mm core size also exhibited significantly greater analytical accuracy (P=0.004 and 0.035; r²=0.979 and 0.669, respectively). Ultimately, our findings demonstrate that capturing two or more 1.0 mm cores for TMA construction provides superior technical and analytical accuracy over the smaller 0.6 mm cores, especially for tissues harboring small histological features or substantial heterogeneity.
Performance of 12 DIR algorithms in low-contrast regions for mass and density conserving deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeo, U. J.; Supple, J. R.; Franich, R. D.
2013-10-15
Purpose: Deformable image registration (DIR) has become a key tool for adaptive radiotherapy to account for inter- and intrafraction organ deformation. Of contemporary interest, the application to deformable dose accumulation requires accurate deformation even in low-contrast regions where dose gradients may exist within near-uniform tissues. One expects high-contrast features to generally be deformed more accurately by DIR algorithms. The authors systematically assess the accuracy of 12 DIR algorithms and quantitatively examine, in particular, low-contrast regions, where accuracy has not previously been established. Methods: This work investigates DIR algorithms in three dimensions using deformable gel (DEFGEL) [U. J. Yeo, M. L. Taylor, L. Dunn, R. L. Smith, T. Kron, and R. D. Franich, "A novel methodology for 3D deformable dosimetry," Med. Phys. 39, 2203–2213 (2012)], for application to mass- and density-conserving deformations. CT images of DEFGEL phantoms with 16 fiducial markers (FMs) implanted were acquired in deformed and undeformed states for three different representative deformation geometries. Nonrigid image registration was performed using 12 common algorithms in the public domain. The optimum parameter setup was identified for each algorithm, and each was tested for deformation accuracy in three scenarios: (I) original images of the DEFGEL with 16 FMs; (II) images with eight of the FMs mathematically erased; and (III) images with all FMs mathematically erased. The deformation vector fields obtained for scenarios II and III were then applied to the original images containing all 16 FMs. The locations of the FMs estimated by the algorithms were compared to actual locations determined by CT imaging. The accuracy of the algorithms was assessed by evaluation of three-dimensional vectors between true marker locations and predicted marker locations. Results: The mean magnitude of 16 error vectors per sample ranged from 0.3 to 3.7, 1.0 to 6.3, and 1.3 to 7.5 mm across algorithms for scenarios I to III, respectively. The greatest accuracy was exhibited by the original Horn and Schunck optical flow algorithm. In this case, for scenario III (erased FMs not contributing to driving the DIR calculation), the mean error was half that of the modified demons algorithm (which exhibited the greatest error), across all deformations. Some algorithms failed to reproduce the geometry at all, while others accurately deformed high-contrast features but not low-contrast regions, indicating poor interpolation between landmarks. Conclusions: The accuracy of DIR algorithms was quantitatively evaluated using a tissue-equivalent, mass- and density-conserving DEFGEL phantom. For the model studied, optical flow algorithms performed better than demons algorithms, with the original Horn and Schunck performing best. The degree of error is influenced more by the magnitude of displacement than the geometric complexity of the deformation. As might be expected, deformation is estimated less accurately for low-contrast regions than for high-contrast features, and the method presented here allows quantitative analysis of the differences. The evaluation of registration accuracy through observation of the same high-contrast features that drive the DIR calculation is shown to be circular and hence misleading.
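The accuracy metric described in this abstract, 3D error vectors between true and DIR-predicted fiducial locations, reduces to a few lines; the sketch below uses hypothetical coordinates, not the study's data.

```python
# Minimal sketch of the fiducial-marker error metric: magnitudes of 3D
# vectors between true and predicted locations, with the per-sample mean
# used to rank algorithms. All coordinates are hypothetical.
import numpy as np

true_pos = np.random.default_rng(0).uniform(0, 50, size=(16, 3))      # mm
pred_pos = true_pos + np.random.default_rng(1).normal(0, 1.5, (16, 3))

errors = np.linalg.norm(pred_pos - true_pos, axis=1)   # 16 error magnitudes
print(f"mean error {errors.mean():.2f} mm, max {errors.max():.2f} mm")
```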
Quantitative study of FORC diagrams in thermally corrected Stoner-Wohlfarth nanoparticle systems
NASA Astrophysics Data System (ADS)
De Biasi, E.; Curiale, J.; Zysler, R. D.
2016-12-01
The use of FORC diagrams is becoming increasingly popular among researchers devoted to magnetism and magnetic materials. However, a thorough interpretation of this kind of diagram, in order to extract quantitative information, requires an appropriate model of the studied system; for that reason most FORC studies are limited to qualitative analysis. In magnetic systems, thermal fluctuations "blur" the signatures of the anisotropy, volume and particle-interaction distributions, so thermal effects in nanoparticle systems conspire against a proper interpretation and analysis of these diagrams. Motivated by this fact, we have quantitatively studied the degree of accuracy of the information extracted from FORC diagrams for the special case of single-domain, thermally corrected Stoner-Wohlfarth (easy axes along the external field orientation) nanoparticle systems. In this work, the starting point is an analytical model that describes the behavior of a magnetic nanoparticle system as a function of field, anisotropy, temperature and measurement time. In order to study the quantitative degree of accuracy of our model, we built FORC diagrams for different archetypal cases of magnetic nanoparticles. Our results show that, from the quantitative information obtained from the diagrams under the hypotheses of the proposed model, it is possible to recover the features of the original system with an accuracy above 95%. This accuracy improves at low temperatures, and the anisotropy distribution can be accessed directly from the FORC coercive-field profile. Indeed, our simulations predict that the volume distribution plays a secondary role, with its mean value and deviation being the only important parameters. It is therefore possible to obtain an accurate result for the inversion and interaction fields regardless of the details of the volume distribution.
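For reference, FORC diagrams plot the standard FORC distribution, the mixed second derivative of the magnetization measured on first-order reversal curves (notation assumed here; H_r is the reversal field and H the applied field):

```latex
\rho(H_r, H) \;=\; -\frac{1}{2}\,\frac{\partial^{2} M(H_r, H)}{\partial H_r\,\partial H}
```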
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minelli, Annalisa, E-mail: Annalisa.Minelli@univ-brest.fr; Marchesini, Ivan, E-mail: Ivan.Marchesini@irpi.cnr.it; Taylor, Faith E., E-mail: Faith.Taylor@kcl.ac.uk
Although there are clear economic and environmental incentives for producing energy from solar and wind power, there can be local opposition to their installation due to their impact upon the landscape. To date, no international guidelines exist to guide quantitative visual impact assessment of these facilities, making the planning process somewhat subjective. In this paper we demonstrate the development of a method and an Open Source GIS tool to quantitatively assess the visual impact of these facilities using line-of-sight techniques. The methods here build upon previous studies by (i) more accurately representing the shape of energy-producing facilities, (ii) taking into account the distortion of the perceived shape and size of facilities caused by the location of the observer, (iii) calculating the possible obscuring of facilities caused by terrain morphology and (iv) allowing the combination of various facilities to more accurately represent the landscape. The tool has been applied to real and synthetic case studies and compared to recently published results from other models, and demonstrates an improvement in accuracy of the calculated visual impact of facilities. The tool is named r.wind.sun and is freely available from GRASS GIS AddOns. - Highlights: • We develop a tool to quantify wind turbine and photovoltaic panel visual impact. • The tool is freely available to download and edit as a module of GRASS GIS. • The tool takes into account visual distortion of the shape and size of objects. • The accuracy of calculation of visual impact is improved over previous methods.
NASA Astrophysics Data System (ADS)
Gao, Liang; Hammoudi, Ahmad A.; Li, Fuhai; Thrall, Michael J.; Cagle, Philip T.; Chen, Yuanxin; Yang, Jian; Xia, Xiaofeng; Fan, Yubo; Massoud, Yehia; Wang, Zhiyong; Wong, Stephen T. C.
2012-06-01
The advent of molecularly targeted therapies requires effective identification of the various cell types of non-small cell lung carcinomas (NSCLC). Currently, cell type diagnosis is performed using small biopsies or cytology specimens that are often insufficient for molecular testing after morphologic analysis. Thus, the ability to rapidly recognize different cancer cell types, with minimal tissue consumption, would accelerate diagnosis and preserve tissue samples for subsequent molecular testing in targeted therapy. We report a label-free molecular vibrational imaging framework enabling three-dimensional (3-D) image acquisition and quantitative analysis of cellular structures for identification of NSCLC cell types. This diagnostic imaging system employs superpixel-based 3-D nuclear segmentation for extracting such disease-related features as nuclear shape, volume, and cell-cell distance. These features are used to characterize cancer cell types using machine learning. Using fresh unstained tissue samples derived from cell lines grown in a mouse model, the platform showed greater than 97% accuracy for diagnosis of NSCLC cell types within a few minutes. As an adjunct to subsequent histology tests, our novel system would allow fast delineation of cancer cell types with minimum tissue consumption, potentially facilitating on-the-spot diagnosis, while preserving specimens for additional tests. Furthermore, 3-D measurements of cellular structure permit evaluation closer to the native state of cells, creating an alternative to traditional 2-D histology specimen evaluation, potentially increasing accuracy in diagnosing cell type of lung carcinomas.
Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz
2015-10-06
In this work, the performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for the development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. A matrix with 423 descriptors was used as input, and QSRR models based on selected descriptors were built using partial least squares (PLS), with the root mean square error of prediction (RMSEP) used as the fitness function for descriptor selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, with GA superior owing to its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external testing set of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All the sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.
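A minimal sketch of the GA-with-PLS-fitness scheme described above, using synthetic data in place of the 423-descriptor peptide matrix; the population size, selection, crossover and mutation settings are illustrative assumptions, not the paper's.

```python
# Sketch of GA-based descriptor selection with a PLS model and RMSEP
# fitness, on synthetic data standing in for the peptide descriptors.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(83, 60))                                  # descriptors
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=83)  # retention
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

def rmsep(mask):
    """Fitness: root mean square error of prediction for one subset."""
    if mask.sum() < 2:
        return np.inf
    pls = PLSRegression(n_components=2).fit(Xtr[:, mask], ytr)
    return float(np.sqrt(np.mean((pls.predict(Xte[:, mask]).ravel() - yte) ** 2)))

pop = rng.random((30, X.shape[1])) < 0.2          # random initial subsets
for gen in range(40):
    fit = np.array([rmsep(m) for m in pop])
    parents = pop[np.argsort(fit)[:10]]           # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)   # crossover
        child ^= rng.random(X.shape[1]) < 0.01                 # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([rmsep(m) for m in pop])]
print("selected descriptors:", np.flatnonzero(best), "RMSEP:", rmsep(best))
```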
NASA Astrophysics Data System (ADS)
Guha, Daipayan; Jakubovic, Raphael; Gupta, Shaurya; Yang, Victor X. D.
2017-02-01
Computer-assisted navigation (CAN) may guide spinal surgeries, reliably reducing screw breach rates. Definitions of screw breach, if reported, vary widely across studies. Absolute quantitative error is theoretically a more precise and generalizable metric of navigation accuracy, but it has been computed variably and reported in fewer than 25% of clinical studies of CAN-guided pedicle screw accuracy. We reviewed a prospectively collected series of 209 pedicle screws placed with CAN guidance to characterize the correlation between clinical pedicle screw accuracy, based on postoperative imaging, and absolute quantitative navigation accuracy. We found that acceptable screw accuracy was achieved for significantly fewer screws based on the 2 mm grade than on the Heary grade, particularly in the lumbar spine. Inter-rater agreement was good for the Heary classification and moderate for the 2 mm grade, and significantly greater among radiologist than surgeon raters. Mean absolute translational/angular accuracies were 1.75 mm/3.13° and 1.20 mm/3.64° in the axial and sagittal planes, respectively. There was no correlation between clinical and absolute navigation accuracy, in part because surgeons appear to compensate for perceived translational navigation error by adjusting the screw medialization angle. Future studies of navigation accuracy should therefore report absolute translational and angular errors. Clinical screw grades based on postoperative imaging, if reported, may be more reliable if assigned by multiple radiologist raters.
Discovering the Quantity of Quality: Scoring "Regional Identity" for Quantitative Research
ERIC Educational Resources Information Center
Miller, Daniel A.
2008-01-01
The variationist paradigm in sociolinguistics is at a disadvantage when dealing with variables that are traditionally treated qualitatively, e.g., "identity". This study seeks to match the accuracy and descriptive value of qualitative research in a quantitative setting by rendering such a variable quantitatively accessible. To this end,…
Pfammatter, Sibylle; Bonneil, Eric; Thibault, Pierre
2016-12-02
Quantitative proteomics using isobaric reagent tandem mass tags (TMT) or isobaric tags for relative and absolute quantitation (iTRAQ) provides a convenient approach to compare changes in protein abundance across multiple samples. However, the analysis of complex protein digests by isobaric labeling can be undermined by the relatively large proportion of co-selected peptide ions that lead to distorted reporter ion ratios and affect the accuracy and precision of quantitative measurements. Here, we investigated the use of high-field asymmetric waveform ion mobility spectrometry (FAIMS) in proteomic experiments to reduce sample complexity and improve protein quantification using TMT isobaric labeling. LC-FAIMS-MS/MS analyses of human and yeast protein digests led to significant reductions in interfering ions, which increased the number of quantifiable peptides by up to 68% while significantly improving the accuracy of abundance measurements compared with conventional LC-MS/MS. The improvement in quantitative measurements using FAIMS is further demonstrated for the temporal profiling of protein abundance in HEK293 cells following heat shock treatment.
Blancett, Candace D; Fetterer, David P; Koistinen, Keith A; Morazzani, Elaine M; Monninger, Mitchell K; Piper, Ashley E; Kuehl, Kathleen A; Kearney, Brian J; Norris, Sarah L; Rossi, Cynthia A; Glass, Pamela J; Sun, Mei G
2017-10-01
A method for accurate quantitation of virus particles has long been sought, but a perfect method still eludes the scientific community. Electron Microscopy (EM) quantitation is a valuable technique because it provides direct morphology information and counts of all viral particles, whether or not they are infectious. In the past, EM negative stain quantitation methods have been cited as inaccurate, non-reproducible, and with detection limits that were too high to be useful. To improve accuracy and reproducibility, we have developed a method termed Scanning Transmission Electron Microscopy - Virus Quantitation (STEM-VQ), which simplifies sample preparation and uses a high throughput STEM detector in a Scanning Electron Microscope (SEM) coupled with commercially available software. In this paper, we demonstrate STEM-VQ with an alphavirus stock preparation to present the method's accuracy and reproducibility, including a comparison of STEM-VQ to viral plaque assay and the ViroCyt Virus Counter. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana
2014-02-01
To develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Both assays provided good linearity, accuracy, reproducibility and selectivity for determination of γ-oryzanol. The TLC-densitometric and TLC-image analysis methods provided a similar reproducibility, accuracy and selectivity for the quantitative determination of γ-oryzanol in cold pressed rice bran oil. A statistical comparison of the quantitative determinations of γ-oryzanol in samples did not show any statistically significant difference between TLC-densitometric and TLC-image analysis methods. As both methods were found to be equal, they therefore can be used for the determination of γ-oryzanol in cold pressed rice bran oil.
Rigour in quantitative research.
Claydon, Leica Sarah
2015-07-22
This article which forms part of the research series addresses scientific rigour in quantitative research. It explores the basis and use of quantitative research and the nature of scientific rigour. It examines how the reader may determine whether quantitative research results are accurate, the questions that should be asked to determine accuracy and the checklists that may be used in this process. Quantitative research has advantages in nursing, since it can provide numerical data to help answer questions encountered in everyday practice.
Temporal Lobe Epilepsy: Quantitative MR Volumetry in Detection of Hippocampal Atrophy
Farid, Nikdokht; Girard, Holly M.; Kemmotsu, Nobuko; Smith, Michael E.; Magda, Sebastian W.; Lim, Wei Y.; Lee, Roland R.
2012-01-01
Purpose: To determine the ability of fully automated volumetric magnetic resonance (MR) imaging to depict hippocampal atrophy (HA) and to help correctly lateralize the seizure focus in patients with temporal lobe epilepsy (TLE). Materials and Methods: This study was conducted with institutional review board approval and in compliance with HIPAA regulations. Volumetric MR imaging data were analyzed for 34 patients with TLE and 116 control subjects. Structural volumes were calculated by using U.S. Food and Drug Administration–cleared software for automated quantitative MR imaging analysis (NeuroQuant). Results of quantitative MR imaging were compared with visual detection of atrophy, and, when available, with histologic specimens. Receiver operating characteristic analyses were performed to determine the optimal sensitivity and specificity of quantitative MR imaging for detecting HA and asymmetry. A linear classifier with cross validation was used to estimate the ability of quantitative MR imaging to help lateralize the seizure focus. Results: Quantitative MR imaging–derived hippocampal asymmetries discriminated patients with TLE from control subjects with high sensitivity (86.7%–89.5%) and specificity (92.2%–94.1%). When a linear classifier was used to discriminate left versus right TLE, hippocampal asymmetry achieved 94% classification accuracy. Volumetric asymmetries of other subcortical structures did not improve classification. Compared with invasive video electroencephalographic recordings, lateralization accuracy was 88% with quantitative MR imaging and 85% with visual inspection of volumetric MR imaging studies but only 76% with visual inspection of clinical MR imaging studies. Conclusion: Quantitative MR imaging can depict the presence and laterality of HA in TLE with accuracy rates that may exceed those achieved with visual inspection of clinical MR imaging studies. Thus, quantitative MR imaging may enhance standard visual analysis, providing a useful and viable means for translating volumetric analysis into clinical practice. © RSNA, 2012 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12112638/-/DC1 PMID:22723496
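A hippocampal asymmetry index of the kind used above to lateralize TLE can be illustrated in a few lines; the exact formula used by the NeuroQuant software is not given in the abstract, so the normalized difference below, and the volumes, are assumptions.

```python
# Minimal sketch of a volumetric asymmetry index for lateralization;
# volumes are hypothetical, and the index form is an assumption.
left_hc, right_hc = 2.61, 3.42   # hippocampal volumes in cm^3

asymmetry = (left_hc - right_hc) / ((left_hc + right_hc) / 2.0)
print(f"asymmetry index {asymmetry:+.2f}")   # negative: left smaller,
                                             # suggesting a left seizure focus
```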
Hasse, Katelyn; Neylon, John; Sheng, Ke; Santhanam, Anand P
2016-03-01
Breast elastography is a critical tool for improving the targeted radiotherapy treatment of breast tumors. Current breast radiotherapy imaging protocols involve only prone and supine CT scans. There is a lack of knowledge on the quantitative accuracy with which breast elasticity can be systematically measured using only prone and supine CT datasets. The purpose of this paper is to describe a quantitative elasticity estimation technique for breast anatomy using only these supine/prone patient postures. Using biomechanical, high-resolution breast geometry obtained from CT scans, a systematic assessment was performed to determine the feasibility of this methodology for clinically relevant elasticity distributions. A model-guided inverse analysis approach is presented. A graphics processing unit (GPU)-based linear elastic biomechanical model was employed as the forward model for the inverse analysis, with the breast geometry in a prone position. The elasticity estimation was performed using a gradient-based iterative optimization scheme and a fast simulated annealing (FSA) algorithm. Numerical studies were conducted to systematically analyze the feasibility of elasticity estimation. For simulating gravity-induced breast deformation, the breast geometry was anchored at its base, resembling the chest-wall/breast-tissue interface. Ground-truth elasticity distributions were assigned to the model, representing tumor presence within breast tissue. Model geometry resolution was varied to estimate its influence on convergence of the system. A priori information was approximated and utilized to record its effect on the time and accuracy of convergence. The role of the FSA process was also recorded. A novel error metric that combined elasticity and displacement error was used to quantify this systematic feasibility study. For the authors' purposes, convergence was defined as each voxel of tissue lying within 1 mm of the ground-truth deformation. The authors' analyses showed that ∼97% model convergence was systematically observed with no a priori information. Varying the model geometry resolution showed no significant accuracy improvements. The GPU-based forward model enabled the inverse analysis to be completed within 10-70 min. Using a priori information about the underlying anatomy, the computation time decreased by as much as 50%, while accuracy improved from 96.81% to 98.26%. The use of FSA allowed the iterative estimation methodology to converge more precisely. By utilizing a forward iterative approach to solve the inverse elasticity problem, this work indicates the feasibility and potential of fast reconstruction of breast tissue elasticity using supine/prone patient postures.
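A minimal sketch of the model-guided inverse approach, with a toy 1D chain of elastic elements under gravity standing in for the GPU-based 3D breast model; fast simulated annealing perturbs the elasticity map until the forward displacements match the observations. All constants and the constitutive relation are illustrative assumptions.

```python
# Toy inverse-elasticity problem solved by fast simulated annealing (FSA),
# standing in for the paper's GPU-based 3D biomechanical model.
import numpy as np

rng = np.random.default_rng(0)
n, g = 20, 9.81

def forward(E):
    """Toy forward model: displacement of a hanging 1D chain of elements;
    stiffer elements strain less under gravity."""
    return np.cumsum(g / E)

E_true = np.where(np.arange(n) // 5 == 2, 50.0, 10.0)   # stiff 'tumor' block
u_obs = forward(E_true)

E = np.full(n, 20.0)                       # initial elasticity guess
cost = np.mean((forward(E) - u_obs) ** 2)
T = 1.0
for it in range(20000):
    cand = E.copy()
    cand[rng.integers(n)] *= np.exp(0.2 * rng.normal())   # perturb one element
    c_new = np.mean((forward(cand) - u_obs) ** 2)
    if c_new < cost or rng.random() < np.exp((cost - c_new) / T):
        E, cost = cand, c_new
    T = 1.0 / (1 + it)                     # fast-annealing cooling schedule
print("recovered elasticity map:", np.round(E, 1))
```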
First-principles calculations of mobility
NASA Astrophysics Data System (ADS)
Krishnaswamy, Karthik
First-principles calculations can be a powerful predictive tool for studying, modeling and understanding the fundamental scattering mechanisms impacting carrier transport in materials. In the past, calculations have provided important qualitative insights, but numerical accuracy has been limited due to computational challenges. In this talk, we will discuss some of the challenges involved in calculating electron-phonon scattering and carrier mobility, and outline approaches to overcome them. Topics will include the limitations of models for electron-phonon interaction, the importance of grid sampling, and the use of Gaussian smearing to replace energy-conserving delta functions. Using prototypical examples of oxides of technological importance (SrTiO3, BaSnO3, Ga2O3, and WO3), we will demonstrate computational approaches to overcome these challenges and improve the accuracy. One approach that leads to a distinct improvement in the accuracy is the use of analytic functions for the band dispersion, which allows for an exact solution of the energy-conserving delta function. For select cases, we also discuss direct quantitative comparisons with experimental results. The computational approaches and methodologies discussed in the talk are general and applicable to other materials, and greatly improve the numerical accuracy of the calculated transport properties, such as carrier mobility, conductivity and Seebeck coefficient. This work was performed in collaboration with B. Himmetoglu, Y. Kang, W. Wang, A. Janotti and C. G. Van de Walle, and supported by the LEAST Center, the ONR EXEDE MURI, and NSF.
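The Gaussian-smearing step mentioned above replaces the energy-conserving delta function in the electron-phonon scattering rate with a normalized Gaussian of width σ (notation assumed; ε is the energy mismatch of the phonon emission/absorption process):

```latex
\delta(\varepsilon) \;\approx\; \frac{1}{\sigma\sqrt{2\pi}}
\exp\!\left(-\frac{\varepsilon^{2}}{2\sigma^{2}}\right),
\qquad
\varepsilon = E_{\mathbf{k}+\mathbf{q}} - E_{\mathbf{k}} \mp \hbar\omega_{\mathbf{q}}
```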
Petrillo, Antonella; Fusco, Roberta; Petrillo, Mario; Granata, Vincenza; Delrio, Paolo; Bianco, Francesco; Pecori, Biagio; Botti, Gerardo; Tatangelo, Fabiana; Caracò, Corradina; Aloj, Luigi; Avallone, Antonio; Lastoria, Secondo
2017-01-01
Purpose To investigate dynamic contrast enhanced-MRI (DCE-MRI) in the preoperative chemo-radiotherapy (CRT) assessment of locally advanced rectal cancer (LARC) compared to 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT). Methods 75 consecutive patients with LARC were enrolled in a prospective study. DCE-MRI analysis was performed by measuring SIS, a linear combination of the percentage change (Δ) of the maximum signal difference (MSD) and the wash-out slope (WOS). 18F-FDG PET/CT analysis was performed using the maximum SUV (SUVmax). Tumor regression grades (TRG) were estimated after surgery. Non-parametric tests and receiver operating characteristic analyses were performed. Results 55 patients (TRG1-2) were classified as responders, while 20 subjects were non-responders. ΔSIS reached a sensitivity of 93%, specificity of 80% and accuracy of 89% (cut-off 6%) for differentiating responders from non-responders, and a sensitivity of 93%, specificity of 69% and accuracy of 79% (cut-off 30%) for identifying pathological complete response (pCR). Therapy assessment via ΔSUVmax reached a sensitivity of 67%, specificity of 75% and accuracy of 70% (cut-off 60%) for differentiating responders from non-responders, and a sensitivity of 80%, specificity of 31% and accuracy of 51% (cut-off 44%) for identifying pCR. Conclusions CRT response assessment by DCE-MRI analysis shows a higher predictive ability than 18F-FDG PET/CT in LARC patients, allowing better discrimination of significant response and pCR. PMID:28042958
Meta-analysis of diagnostic accuracy studies in mental health
Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J
2015-01-01
Objectives To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. Methods We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. Results The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies, such as the bivariate or hierarchical summary receiver operating characteristic (HSROC) model, jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between-study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold, while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Conclusions Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. PMID:26446042
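For reference, the bivariate model mentioned above places a joint normal distribution on the logit-transformed study-level sensitivities and specificities (standard notation, assumed here rather than taken from the review):

```latex
\begin{pmatrix} \operatorname{logit}(\mathrm{Se}_i) \\ \operatorname{logit}(\mathrm{Sp}_i) \end{pmatrix}
\sim \mathcal{N}\!\left(
\begin{pmatrix} \mu_{\mathrm{Se}} \\ \mu_{\mathrm{Sp}} \end{pmatrix},
\begin{pmatrix} \sigma_{\mathrm{Se}}^{2} & \sigma_{\mathrm{SeSp}} \\ \sigma_{\mathrm{SeSp}} & \sigma_{\mathrm{Sp}}^{2} \end{pmatrix}
\right)
```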
A Quantitative Gas Chromatographic Ethanol Determination.
ERIC Educational Resources Information Center
Leary, James J.
1983-01-01
Describes a gas chromatographic experiment for the quantitative determination of volume percent ethanol in water ethanol solutions. Background information, procedures, and typical results are included. Accuracy and precision of results are both on the order of two percent. (JN)
Gigerenzer, Gerd
2009-01-01
In their comment on Marewski et al. (good judgments do not require complex cognition, 2009) Evans and Over (heuristic thinking and human intelligence: a commentary on Marewski, Gaissmaier and Gigerenzer, 2009) conjectured that heuristics can often lead to biases and are not error free. This is a most surprising critique. The computational models of heuristics we have tested allow for quantitative predictions of how many errors a given heuristic will make, and we and others have measured the amount of error by analysis, computer simulation, and experiment. This is clear progress over simply giving heuristics labels, such as availability, that do not allow for quantitative comparisons of errors. Evans and Over argue that the reason people rely on heuristics is the accuracy-effort trade-off. However, the comparison between heuristics and more effortful strategies, such as multiple regression, has shown that there are many situations in which a heuristic is more accurate with less effort. Finally, we do not see how the fast and frugal heuristics program could benefit from a dual-process framework unless the dual-process framework is made more precise. Instead, the dual-process framework could benefit if its two “black boxes” (Type 1 and Type 2 processes) were substituted by computational models of both heuristics and other processes. PMID:19784854
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnum-Johnson, Kristin E.; Nie, Song; Casey, Cameron P.
Current proteomics approaches comprise both broad discovery measurements and more quantitative targeted measurements. These two measurement types are used to initially identify potentially important proteins (e.g., candidate biomarkers) and then enable improved quantification for a limited number of selected proteins. However, both approaches suffer from limitations, particularly the lower sensitivity, accuracy, and quantitation precision of discovery approaches compared to targeted approaches, and the limited proteome coverage provided by targeted approaches. Herein, we describe a new proteomics approach that allows both discovery and targeted monitoring (DTM) in a single analysis using liquid chromatography, ion mobility spectrometry and mass spectrometry (LC-IMS-MS). In DTM, heavy labeled peptides for target ions are spiked into tryptic digests and both the labeled and unlabeled peptides are broadly detected using LC-IMS-MS instrumentation, allowing the benefits of discovery and targeted approaches. To understand the possible improvement of the DTM approach, it was compared to LC-MS broad measurements using an accurate mass and time tag database and to selected reaction monitoring (SRM) targeted measurements. The DTM results yielded greater peptide/protein coverage and a significant improvement in the detection of lower abundance species compared to LC-MS discovery measurements. DTM was also observed to have detection limits similar to SRM for the targeted measurements, indicating its potential for combining the discovery and targeted approaches.
Moldoveanu, Serban; Scott, Wayne; Zhu, Jeff
2015-11-01
Bioactive botanicals contain natural compounds with specific biological activity, such as antibacterial, antioxidant, immune-stimulating, and taste-improving properties. A full characterization of the chemical composition of these botanicals is frequently necessary. A study of small carbohydrates from the plant materials of 18 bioactive botanicals is described. The study presents the identification of carbohydrates using a gas chromatographic-mass spectrometric analysis that allows detection of molecules as large as maltotetraose after converting them into trimethylsilyl derivatives. A number of carbohydrates in the plants (fructose, glucose, mannose, sucrose, maltose, xylose, sorbitol, and myo-, chiro-, and scyllo-inositols) were quantitated using a novel liquid chromatography with tandem mass spectrometric technique. Both techniques involved new method development. The gas chromatography with mass spectrometric analysis involved derivatization and separation on a Rxi(®)-5Sil MS column with H2 as the carrier gas. The liquid chromatographic separation was obtained using a hydrophilic interaction type column, YMC-PAC Polyamine II. The tandem mass spectrometer used an electrospray ionization source in multiple reaction monitoring positive ion mode with detection of the adducts of the carbohydrates with Cs(+) ions. The validated quantitative procedure showed excellent precision and accuracy, allowing analysis over a wide range of analyte concentrations. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Kirby, Michelle R.
2002-01-01
The TIES method is a forecasting environment whereby the decision-maker can easily assess and trade off the impact of various technologies without sophisticated and time-consuming mathematical formulations. TIES provides a methodical approach in which technically feasible alternatives can be identified with accuracy and speed to reduce design cycle time and, subsequently, life cycle costs; this is achieved through the use of various probabilistic methods, such as Response Surface Methodology and Monte Carlo Simulations. Furthermore, structured and systematic techniques from other fields are utilized to identify possible concepts and evaluation criteria by which comparisons can be made; this objective is achieved by employing Morphological Matrices and Multi-Attribute Decision Making techniques. Through the execution of each step, a family of design alternatives for a given set of customer requirements can be identified and assessed subjectively or objectively. This methodology allows more information (knowledge) to be brought into the earlier phases of the design process and has direct implications for the affordability of the system. The increased knowledge allows for optimum allocation of company resources and quantitative justification for program decisions. Finally, the TIES method provided novel results and quantitative justification to facilitate decision making in the early stages of design so as to produce affordable and quality products.
Quantitative optical diagnostics in pathology recognition and monitoring of tissue reaction to PDT
NASA Astrophysics Data System (ADS)
Kirillin, Mikhail; Shakhova, Maria; Meller, Alina; Sapunov, Dmitry; Agrba, Pavel; Khilov, Alexander; Pasukhin, Mikhail; Kondratieva, Olga; Chikalova, Ksenia; Motovilova, Tatiana; Sergeeva, Ekaterina; Turchin, Ilya; Shakhova, Natalia
2017-07-01
Optical coherence tomography (OCT) is currently being actively introduced into clinical practice. Besides diagnostics, it can be efficiently employed for treatment monitoring, allowing for timely correction of the treatment procedure. In monitoring of photodynamic therapy (PDT), the traditionally employed fluorescence imaging (FI) can benefit from the complementary use of OCT. Additional diagnostic efficiency can be derived from numerical processing of optical diagnostics data, which provides more information than visual evaluation. In this paper we report on the application of OCT together with numerical processing for clinical diagnostics in gynecology and otolaryngology, for monitoring of PDT in otolaryngology, and on OCT and FI applications in clinical and aesthetic dermatology. Numerical image processing and quantification increase diagnostic accuracy. Keywords: optical coherence tomography, fluorescence imaging, photodynamic therapy
Salehi, Reza; Tsoi, Stephen C M; Colazo, Marcos G; Ambrose, Divakar J; Robert, Claude; Dyck, Michael K
2017-01-30
Early embryonic loss is a large contributor to infertility in cattle. Moreover, the bovine is an interesting model for studying human preimplantation embryo development owing to the similarity of the two species' developmental processes. Although genetic factors are known to affect early embryonic development, the discovery of such factors has been a serious challenge. Microarray technology allows quantitative measurement and gene expression profiling of transcript levels on a genome-wide basis. One of the main decisions that has to be made when planning a microarray experiment is whether to use a one- or two-color approach. A two-color design increases technical replication, minimizes variability, improves sensitivity and accuracy, and allows loop designs in which common reference samples are defined. Although the microarray is a powerful biological tool, there are potential pitfalls that can attenuate its power. Hence, in this technical paper we demonstrate an optimized protocol for RNA extraction, amplification, labeling, hybridization of the labeled amplified RNA to the array, array scanning and data analysis using the two-color analysis strategy.
NASA Astrophysics Data System (ADS)
Toe, David; Mentani, Alessio; Govoni, Laura; Bourrier, Franck; Gottardi, Guido; Lambert, Stéphane
2018-04-01
The paper presents a new approach to assess the effectiveness of rockfall protection barriers, accounting for the wide variety of impact conditions observed on natural sites. This approach makes use of meta-models developed from FE simulation results for a widely used rockfall barrier type. Six input parameters relevant to the block impact conditions were considered. Two meta-models were developed, describing the barrier's capability either to stop the block or to reduce its kinetic energy. The influence of the parameter ranges on meta-model accuracy was also investigated. The results of the study reveal that the meta-models are effective in accurately reproducing the response of the barrier to arbitrary impact conditions, providing a powerful tool to support the design of these structures. Furthermore, because it accommodates the effect of the impact conditions on the predicted block-barrier interaction, the approach can be used in combination with rockfall trajectory simulation tools to improve quantitative rockfall hazard assessment and optimise rockfall mitigation strategies.
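A minimal sketch of a meta-model of the first kind described above: a surrogate classifier predicting whether the barrier stops the block, trained on simulated outcomes. The feature set, the toy labeling rule and the gradient-boosting surrogate are assumptions; the paper's FE results and meta-model form are not reproduced.

```python
# Sketch of a surrogate ("meta-model") for barrier stopping capability,
# trained on synthetic simulation outcomes instead of FE results.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
# six hypothetical impact-condition inputs:
# velocity (m/s), mass (kg), height (m), angle (deg), offset (m), spin (rad/s)
X = rng.uniform([5, 100, 0.5, -30, -5, 0],
                [30, 5000, 4.0, 30, 5, 50], (n, 6))
energy = 0.5 * X[:, 1] * X[:, 0] ** 2 / 1000.0           # kinetic energy, kJ
stopped = (energy + 50 * rng.normal(size=n) < 500).astype(int)  # toy label

model = GradientBoostingClassifier().fit(X, stopped)
p = model.predict_proba([[20, 2000, 2.0, 10, 0, 5]])[0, 1]
print(f"P(block stopped) = {p:.2f}")
```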
Automation of fluorescent differential display with digital readout.
Meade, Jonathan D; Cho, Yong-Jig; Fisher, Jeffrey S; Walden, Jamie C; Guo, Zhen; Liang, Peng
2006-01-01
Since its invention in 1992, differential display (DD) has become the most commonly used technique for identifying differentially expressed genes because of its many advantages over competing technologies such as DNA microarray, serial analysis of gene expression (SAGE), and subtractive hybridization. Despite the great impact of the method on biomedical research, there has been a lack of automation of DD technology to increase its throughput and accuracy for systematic gene expression analysis. Most previous DD work has taken a "shotgun" approach of identifying one gene at a time, with a limited number of polymerase chain reaction (PCR) reactions set up manually, giving DD a low-tech and low-throughput image. We have optimized the DD process with a new platform that incorporates fluorescent digital readout, automated liquid handling, and large-format gels capable of running entire 96-well plates. The resulting streamlined fluorescent DD (FDD) technology offers unprecedented accuracy, sensitivity, and throughput in comprehensive and quantitative analysis of gene expression. These major improvements will allow researchers to find differentially expressed genes of interest, both known and novel, quickly and easily.
NASA Astrophysics Data System (ADS)
Fernández-Ruiz, Ramón; Friedrich K., E. Josue; Redrejo, M. J.
2018-02-01
The main goal of this work was to investigate, in a systematic way, the influence of the controlled modulation of the particle size distribution of a representative solid sample on the most relevant analytical parameters of the Direct Solid Analysis (DSA) by Total-reflection X-Ray Fluorescence (TXRF) quantitative method. In particular, accuracy, uncertainty, linearity and detection limits were correlated with the main parameters of the size distributions for the following elements: Al, Si, P, S, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Rb, Sr, Ba and Pb. In all cases, strong correlations were found. The main conclusion of this work can be summarized as follows: modulating the particle shape toward smaller average sizes, together with minimizing the width of the particle size distribution, strongly increases the accuracy and reduces the uncertainties and limits of detection of the DSA-TXRF methodology. These achievements allow the future use of the DSA-TXRF analytical methodology for the development of ISO norms and standardized protocols for the direct analysis of solids by means of TXRF.
Uncertainty characterization of particle location from refocused plenoptic images.
Hall, Elise M; Guildenbecher, Daniel R; Thurow, Brian S
2017-09-04
Plenoptic imaging is a 3D imaging technique that has been applied for quantification of 3D particle locations and sizes. This work experimentally evaluates the accuracy and precision of such measurements by investigating a static particle field translated to known displacements. Measured 3D displacement values are determined from sharpness metrics applied to volumetric representations of the particle field created using refocused plenoptic images, corrected using a recently developed calibration technique. Comparison of measured and known displacements for many thousands of particles allows for evaluation of measurement uncertainty. Mean displacement error, as a measure of accuracy, is shown to agree with predicted spatial resolution over the entire measurement domain, indicating robustness of the calibration methods. On the other hand, variation in the error, as a measure of precision, fluctuates as a function of particle depth in the optical direction. Error shows the smallest variation within the predicted depth of field of the plenoptic camera, with a gradual increase outside this range. The quantitative uncertainty values provided here can guide future measurement optimization and will serve as useful metrics for design of improved processing algorithms.
Measuring depth profiles of residual stress with Raman spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enloe, W.S.; Sparks, R.G.; Paesler, M.A.
Knowledge of the variation of residual stress is a very important factor in understanding the properties of machined surfaces. The nature of the residual stress can determine a part's susceptibility to wear, deformation, and cracking. Raman spectroscopy is known to be a very useful technique for measuring residual stress in many materials. These measurements are routinely made with a lateral resolution of 1 µm and an accuracy of 0.1 kbar. The variation of stress with depth, however, has not received much attention in the past. A novel technique has been developed that allows quantitative measurement of the variation of the residual stress with depth with an accuracy of 10 nm in the z direction. Qualitative techniques for determining whether the stress is varying with depth are presented. It is also demonstrated that when the stress is changing over the volume sampled, errors can be introduced if the variation of the stress with depth is ignored. Computer-aided data analysis is used to determine the depth dependence of the residual stress.
D'Ambrosio, Michele
2013-06-15
γ-Oryzanol is an important phytochemical used in pharmaceutical, alimentary and cosmetic preparations. The present article, for the first time, discloses the performance of NP-HPLC in separating γ-oryzanol components and develops a validated method for its routine quantification. The analysis is performed on a cyanopropyl bonded column using hexane/MTBE gradient elution and UV detection at 325 nm. The method allows: the separation of steryl ferulate, p-coumarate and caffeate esters; the separation of cis- from trans-ferulate isomers; and the splitting of steroid moieties into those saturated and unsaturated at the side chain. The optimised method provides excellent linear response (R²=0.99997), high precision (RSD<1.0%) and satisfactory accuracy (R*=70-86%). In conclusion, the established method presents the details of the procedure and the experimental conditions needed to achieve the required precision and instrumental accuracy. The method is fast and sensitive and could be a suitable tool for quality assurance and determination of origin. Copyright © 2012 Elsevier Ltd. All rights reserved.
Interval timing in genetically modified mice: a simple paradigm
Balci, F.; Papachristos, E. B.; Gallistel, C. R.; Brunner, D.; Gibson, J.; Shumyatsky, G. P.
2009-01-01
We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts however show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we conduct the pharmacological validation of our behavioral screen using D-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory and argue that our paradigm allows this to be done simply and efficiently. PMID:17696995
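The two timing measures defined above reduce to simple order statistics of the switch-latency distribution; the sketch below uses synthetic latencies, not data from the study.

```python
# Minimal sketch of the timing measures: the median of the switch-latency
# distribution as accuracy and its interquartile interval as precision.
import numpy as np

latencies = np.random.default_rng(0).lognormal(mean=1.1, sigma=0.25, size=200)

q25, median, q75 = np.percentile(latencies, [25, 50, 75])
print(f"timing accuracy (median): {median:.2f} s")
print(f"timing precision (IQR):   {q75 - q25:.2f} s")
```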
Pitkänen, Leena; Montoro Bustos, Antonio R; Murphy, Karen E; Winchester, Michael R; Striegel, André M
2017-08-18
The physicochemical characterization of nanoparticles (NPs) is of paramount importance for tailoring and optimizing the properties of these materials as well as for evaluating the environmental fate and impact of the NPs. Characterizing the size and chemical identity of disperse NP sample populations can be accomplished by coupling size-based separation methods to physical and chemical detection methods. Informed decisions regarding the NPs can only be made, however, if the separations themselves are quantitative, i.e., if all or most of the analyte elutes from the column within the course of the experiment. We undertake here the size-exclusion chromatographic characterization of Au NPs spanning a six-fold range in mean size. The main problem which has plagued the size-exclusion chromatography (SEC) analysis of Au NPs, namely lack of quantitation accountability due to generally poor NP recovery from the columns, is overcome by carefully matching eluent formulation with the appropriate stationary phase chemistry, and by the use of on-line inductively coupled plasma mass spectrometry (ICP-MS) detection. Here, for the first time, we demonstrate the quantitative analysis of Au NPs by SEC/ICP-MS, including the analysis of a ternary NP blend. The SEC separations are contrasted to HDC/ICP-MS (HDC: hydrodynamic chromatography) separations employing the same stationary phase chemistry. Additionally, analysis of Au NPs by HDC with on-line quasi-elastic light scattering (QELS) allowed for continuous determination of NP size across the chromatographic profiles, circumventing issues related to the shedding of fines from the SEC columns. The use of chemically homogeneous reference materials with well-defined size range allowed for better assessment of the accuracy and precision of the analyses, and for a more direct interpretation of results, than would be possible employing less rigorously characterized analytes. Published by Elsevier B.V.
Technical and financial evaluation of assays for progesterone in canine practice in the UK.
Moxon, R; Copley, D; England, G C W
2010-10-02
The concentration of progesterone was measured in 60 plasma samples from bitches at various stages of the oestrous cycle, using commercially available quantitative and semi-quantitative ELISA test kits, as well as by two commercial laboratories undertaking radioimmunoassay (RIA). The RIA, which was assumed to be the 'gold standard' in terms of reliability and accuracy, was the most expensive method when analysing more than one sample per week, and had the longest delay in obtaining results, but had minimal requirements for practice staff time. When compared with the RIA, the quantitative ELISA had a strong positive correlation (r=0.97, P<0.05) and a sensitivity and specificity of 70.6 per cent and 100.0 per cent, respectively, and positive and negative predictive values of 100.0 per cent and 71.0 per cent, respectively, with an overall accuracy of 90.0 per cent. This method was the least expensive when analysing five or more samples per week, but had longer turnaround times than that of the semi-quantitative ELISA and required more staff time. When compared with the RIA, the semi-quantitative ELISA had a sensitivity and specificity of 100.0 per cent and 95.5 per cent, respectively, and positive and negative predictive values of 73.9 per cent and 77.8 per cent, respectively, with an overall accuracy of 89.2 per cent. This method was more expensive than the quantitative ELISA when analysing five or more samples per week, but had the shortest turnaround time and low requirements in terms of staff time.
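The validity measures quoted above all derive from a 2x2 table of test calls against the RIA 'gold standard'; the counts in the sketch below are hypothetical, not those of the study.

```python
# Minimal sketch of diagnostic validity measures from a 2x2 confusion
# table of ELISA calls versus RIA; counts are hypothetical.
tp, fp, fn, tn = 24, 1, 10, 25

sensitivity = tp / (tp + fn)                 # true positive rate
specificity = tn / (tn + fp)                 # true negative rate
ppv = tp / (tp + fp)                         # positive predictive value
npv = tn / (tn + fn)                         # negative predictive value
accuracy = (tp + tn) / (tp + fp + fn + tn)   # overall agreement
print(sensitivity, specificity, ppv, npv, accuracy)
```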
Quantity discrimination in canids: Dogs (Canis familiaris) and wolves (Canis lupus) compared.
Miletto Petrazzini, Maria Elena; Wynne, Clive D L
2017-11-01
Accumulating evidence indicates that animals are able to discriminate between quantities. Recent studies have shown that dogs' and coyotes' ability to discriminate between quantities of food items decreases with increasing numerical ratio. Conversely, wolves' performance is not affected by numerical ratio. Cross-species comparisons are difficult because of differences in the methodologies employed, and hence it is still unclear whether domestication altered quantitative abilities in canids. Here we used the same procedure to compare pet dogs and wolves in a spontaneous food choice task. Subjects were presented with two quantities of food items and allowed to choose only one option. Four numerical contrasts of increasing difficulty (range 1-4) were used to assess the influence of numerical ratio on the performance of the two species. Dogs' accuracy was affected by numerical ratio, while no ratio effect was observed in wolves. These results align with previous findings and reinforce the idea of different quantitative competences in dogs and wolves. Although we cannot exclude that other variables played a role in shaping quantitative abilities in these two species, our results suggest that the interspecific differences reported here may have arisen as a result of domestication. Copyright © 2017 Elsevier B.V. All rights reserved.
Large-scale label-free quantitative proteomics of the pea aphid-Buchnera symbiosis.
Poliakov, Anton; Russell, Calum W; Ponnala, Lalit; Hoops, Harold J; Sun, Qi; Douglas, Angela E; van Wijk, Klaas J
2011-06-01
Many insects are nutritionally dependent on symbiotic microorganisms that have tiny genomes and are housed in specialized host cells called bacteriocytes. The obligate symbiosis between the pea aphid Acyrthosiphon pisum and the γ-proteobacterium Buchnera aphidicola (only 584 predicted proteins) is particularly amenable to molecular analysis because the genomes of both partners have been sequenced. To better define the symbiotic relationship between this aphid and Buchnera, we used large-scale, high-accuracy tandem mass spectrometry (nanoLC-LTQ-Orbitrap) to identify aphid and Buchnera proteins in the whole aphid body, purified bacteriocytes, isolated Buchnera cells and the residual bacteriocyte fraction. More than 1900 aphid and 400 Buchnera proteins were identified. All enzymes in amino acid metabolism annotated in the Buchnera genome were detected, reflecting the high (68%) coverage of the proteome and supporting the core function of Buchnera in the aphid symbiosis. Transporters mediating the transport of predicted metabolites were present in the bacteriocyte. Label-free spectral counting combined with hierarchical clustering allowed us to define the quantitative distribution of a subset of these proteins across both symbiotic partners, yielding no evidence for the selective transfer of protein between the partners in either direction. This is the first quantitative proteome analysis of bacteriocyte symbiosis, providing a wealth of information about the molecular function of both the host cell and the bacterial symbiont.
Anderson, William J; Nowinska, Kamila; Hutter, Tanya; Mahajan, Sumeet; Fischlechner, Martin
2018-04-19
Surface-enhanced Raman spectroscopy (SERS) is well known for its high sensitivity, which emerges from the plasmonic enhancement of electric fields, typically on gold and silver nanostructures. However, difficulties associated with the preparation of nanostructured substrates with uniform and reproducible features limit reliability and quantitation in SERS measurements. In this work we use layer-by-layer (LbL) self-assembly to incorporate multiple functional building blocks, in the form of collaborative assemblies of nanoparticles, onto colloidal spheres to fabricate SERS sensors. Gold nanoparticles (AuNPs) are packaged in discrete layers, effectively 'freezing nano-gaps', on spherical colloidal cores to achieve multifunctionality and reproducible sensing. Coupling between layers tunes the plasmon resonance for optimum SERS signal generation to achieve a 10 nM limit of detection. Significantly, using the layer-by-layer construction, SERS-active AuNP layers are spaced out and thus optically isolated. This uniquely allows the creation of an internal standard within each colloidal sensor to enable highly reproducible self-calibrated sensing. By using 4-mercaptobenzoic acid (4-MBA) as the internal standard, adenine concentrations are quantified to an accuracy of 92.6-99.5%. Our versatile approach paves the way for rationally designed yet quantitative colloidal SERS sensors and their use in a variety of sensing applications.
Quantitative fluorescence angiography for neurosurgical interventions.
Weichelt, Claudia; Duscha, Philipp; Steinmeier, Ralf; Meyer, Tobias; Kuß, Julia; Cimalla, Peter; Kirsch, Matthias; Sobottka, Stephan B; Koch, Edmund; Schackert, Gabriele; Morgenstern, Ute
2013-06-01
Present methods for quantitative measurement of cerebral perfusion during neurosurgical operations require additional technology for measurement, data acquisition, and processing. This study used conventional fluorescence video angiography, an established method to visualize blood flow in brain vessels, enhanced by a quantifying perfusion software tool. For these purposes, the fluorescence dye indocyanine green is given intravenously, and after activation by a near-infrared light source the fluorescence signal is recorded. Video data are analyzed by software algorithms to allow quantification of the blood flow. Additionally, perfusion is measured intraoperatively by a reference system. Furthermore, comparative reference measurements using a flow phantom were performed to verify the quantitative blood flow results of the software and to validate the software algorithm. Analysis of intraoperative video data provides characteristic biological parameters. These parameters were implemented in the special flow phantom for experimental validation of the developed software algorithms. Furthermore, various factors that influence the determination of perfusion parameters were analyzed by means of mathematical simulation. Comparing patient measurement, phantom experiment, and computer simulation under certain conditions (variable frame rate, vessel diameter, etc.), the results of the software algorithms are within the range of parameter accuracy of the reference methods. Therefore, the software algorithm for calculating cortical perfusion parameters from video data presents a helpful intraoperative tool without complex additional measurement technology.
Quantitative Evaluation of a Planetary Renderer for Terrain Relative Navigation
NASA Astrophysics Data System (ADS)
Amoroso, E.; Jones, H.; Otten, N.; Wettergreen, D.; Whittaker, W.
2016-11-01
A ray-tracing computer renderer tool is presented based on LOLA and LROC elevation models and is quantitatively compared to LRO WAC and NAC images for photometric accuracy. We investigated using rendered images for terrain relative navigation.
Clark, Samuel A; Hickey, John M; Daetwyler, Hans D; van der Werf, Julius H J
2012-02-09
The theory of genomic selection is based on the prediction of the effects of genetic markers in linkage disequilibrium with quantitative trait loci. However, genomic selection also relies on relationships between individuals to accurately predict genetic value. This study aimed to examine the importance of information on relatives versus that of unrelated or more distantly related individuals on the estimation of genomic breeding values. Simulated and real data were used to examine the effects of various degrees of relationship on the accuracy of genomic selection. Genomic Best Linear Unbiased Prediction (gBLUP) was compared to two pedigree-based BLUP methods, one with a shallow one-generation pedigree and the other with a deep ten-generation pedigree. The accuracy of estimated breeding values for different groups of selection candidates that had varying degrees of relationship to a reference data set of 1750 animals was investigated. The gBLUP method predicted breeding values more accurately than BLUP. The most accurate breeding values were estimated using gBLUP for closely related animals. Similarly, the pedigree-based BLUP methods were also accurate for closely related animals; however, when the pedigree-based BLUP methods were used to predict unrelated animals, the accuracy was close to zero. In contrast, gBLUP breeding values for animals that had no pedigree relationship with animals in the reference data set retained substantial accuracy. An animal's relationship to the reference data set is an important factor for the accuracy of genomic predictions. Animals that share a close relationship to the reference data set had the highest accuracy from genomic predictions. However, a baseline accuracy, driven by the size of the reference data set and the effective population size, enables gBLUP to estimate a breeding value for unrelated animals within a population (breed), using information previously ignored by pedigree-based BLUP methods.
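As a schematic of how genomic relationships drive these predictions, a minimal gBLUP sketch with a VanRaden-style genomic relationship matrix (toy dimensions, simulated data, and only an overall mean as fixed effect; a simplification of the models compared in the study):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 200, 1000                         # animals, SNP markers (toy sizes)
    M = rng.integers(0, 3, size=(n, m))      # genotypes coded 0/1/2
    p = M.mean(axis=0) / 2                   # allele frequencies
    Z = M - 2 * p                            # centred genotype matrix
    G = Z @ Z.T / (2 * np.sum(p * (1 - p)))  # VanRaden genomic relationships

    y = rng.normal(size=n)                   # phenotypes (toy)
    h2 = 0.3                                 # assumed heritability
    lam = (1 - h2) / h2
    # gBLUP breeding values: u = G (G + lambda*I)^(-1) (y - mean)
    u_hat = G @ np.linalg.solve(G + lam * np.eye(n), y - y.mean())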
Vandermolen, Brooke I; Hezelgrave, Natasha L; Smout, Elizabeth M; Abbott, Danielle S; Seed, Paul T; Shennan, Andrew H
2016-10-01
Quantitative fetal fibronectin testing has demonstrated accuracy for prediction of spontaneous preterm birth in asymptomatic women with a history of preterm birth. Predictive accuracy in women with previous cervical surgery (a potentially different risk mechanism) is not known. We sought to compare the predictive accuracy of cervicovaginal fluid quantitative fetal fibronectin and cervical length testing in asymptomatic women with previous cervical surgery to that in women with 1 previous preterm birth. We conducted a prospective blinded secondary analysis of a larger observational study of cervicovaginal fluid quantitative fetal fibronectin concentration in asymptomatic women measured with a Hologic 10Q system (Hologic, Marlborough, MA). Prediction of spontaneous preterm birth (<30, <34, and <37 weeks) with cervicovaginal fluid quantitative fetal fibronectin concentration in primiparous women who had undergone at least 1 invasive cervical procedure (n = 473) was compared with prediction in women who had previous spontaneous preterm birth, preterm prelabor rupture of membranes, or late miscarriage (n = 821). The relationship with cervical length was explored. The rate of spontaneous preterm birth <34 weeks in the cervical surgery group was 3% compared with 9% in the previous spontaneous preterm birth group. Receiver operating characteristic curves comparing quantitative fetal fibronectin for prediction at all 3 gestational end points were comparable between the cervical surgery and previous spontaneous preterm birth groups (34 weeks: area under the curve, 0.78 [95% confidence interval 0.64-0.93] vs 0.71 [95% confidence interval 0.64-0.78]; P = .39). Prediction of spontaneous preterm birth <34 weeks of gestation using cervical length was similar to that using quantitative fetal fibronectin (area under the curve, 0.88 [95% confidence interval 0.79-0.96] vs 0.77 [95% confidence interval 0.62-0.92], P = .12 in the cervical surgery group; and 0.77 [95% confidence interval 0.70-0.84] vs 0.74 [95% confidence interval 0.67-0.81], P = .32 in the previous spontaneous preterm birth group). Prediction of spontaneous preterm birth using cervicovaginal fluid quantitative fetal fibronectin in asymptomatic women with cervical surgery is valid, and has accuracy comparable to that in women with a history of spontaneous preterm birth. Copyright © 2016 Elsevier Inc. All rights reserved.
Coltharp, Carla; Kessler, Rene P.; Xiao, Jie
2012-01-01
Localization-based superresolution microscopy techniques such as Photoactivated Localization Microscopy (PALM) and Stochastic Optical Reconstruction Microscopy (STORM) have allowed investigations of cellular structures with unprecedented optical resolutions. One major obstacle to interpreting superresolution images, however, is the overcounting of molecule numbers caused by fluorophore photoblinking. Using both experimental and simulated images, we determined the effects of photoblinking on the accurate reconstruction of superresolution images and on quantitative measurements of structural dimension and molecule density made from those images. We found that structural dimension and relative density measurements can be made reliably from images that contain photoblinking-related overcounting, but accurate absolute density measurements, and consequently faithful representations of molecule counts and positions in cellular structures, require the application of a clustering algorithm to group localizations that originate from the same molecule. We analyzed how applying a simple algorithm with different clustering thresholds (tThresh and dThresh) affects the accuracy of reconstructed images, and developed an easy method to select optimal thresholds. We also identified an empirical criterion to evaluate whether an imaging condition is appropriate for accurate superresolution image reconstruction with the clustering algorithm. Both the threshold selection method and imaging condition criterion are easy to implement within existing PALM clustering algorithms and experimental conditions. The main advantage of our method is that it generates a superresolution image and molecule position list that faithfully represents molecule counts and positions within a cellular structure, rather than only summarizing structural properties into ensemble parameters. This feature makes it particularly useful for cellular structures of heterogeneous densities and irregular geometries, and allows a variety of quantitative measurements tailored to specific needs of different biological systems. PMID:23251611
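A minimal sketch of this kind of spatiotemporal grouping (the data layout, the centroid rule and the default thresholds are illustrative assumptions; the paper's actual algorithm may differ in detail):

    import numpy as np

    def cluster_localizations(locs, d_thresh=30.0, t_thresh=5):
        """Group photoblinking-related localizations into single molecules.
        locs: (N, 3) array of (x, y, frame), sorted by frame; d_thresh in nm.
        A localization joins a cluster if it lies within d_thresh of the
        cluster centroid and within t_thresh frames of the cluster's last
        event; otherwise it seeds a new cluster (i.e. a new molecule)."""
        clusters = []
        for x, y, f in locs:
            for c in clusters:
                cx, cy = np.mean(c["xy"], axis=0)
                if f - c["last"] <= t_thresh and np.hypot(x - cx, y - cy) <= d_thresh:
                    c["xy"].append((x, y))
                    c["last"] = f
                    break
            else:
                clusters.append({"xy": [(x, y)], "last": f})
        # one estimated molecule position per cluster
        return np.array([np.mean(c["xy"], axis=0) for c in clusters])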
NASA Astrophysics Data System (ADS)
Eagle, R.; Howes, E.; Lischka, S.; Rudolph, R.; Büdenbender, J.; Bijma, J.; Gattuso, J. P.; Riebesell, U.
2014-12-01
Understanding and quantifying the response of marine organisms to present and future ocean acidification remains a major challenge encompassing observations on single species in culture and scaling up to the ecosystem and global scale. Understanding calcification changes in culture experiments designed to simulate present and future ocean conditions under potential CO2 emissions scenarios, and especially detecting the likely more subtle changes that may occur prior to the onset of more extreme ocean acidification, depends on the tools available. Here we explore the utility of high-resolution computed tomography (nano-CT) to provide quantitative biometric data on field-collected and cultured marine pteropods, using the General Electric Company Phoenix Nanotom S instrument. The technique is capable of quantitating the whole shell of the organism, allowing shell dimensions to be determined as well as parameters such as average shell thickness, the variation in thickness across the whole shell and in localized areas, and total shell volume and surface area; when combined with weight measurements, shell density can also be calculated. The potential power of the technique is the ability to derive these parameters even for very small organisms, less than 1 millimeter in size. Tuning the X-ray strength of the instrument allows organic material to be excluded from the analysis. Through replicate analysis of standards, we assess the reproducibility of data, and by comparison with dimension measurements derived from light microscopy we assess the accuracy of dimension determinations. We present results from historical and modern pteropod populations from the Mediterranean and cultured polar pteropods, resolving statistically significant differences in shell biometrics in both cases that may represent responses to ocean acidification.
Remans, Tony; Keunen, Els; Bex, Geert Jan; Smeets, Karen; Vangronsveld, Jaco; Cuypers, Ann
2014-10-01
Reverse transcription-quantitative PCR (RT-qPCR) has been widely adopted to measure differences in mRNA levels; however, biological and technical variation strongly affects the accuracy of the reported differences. RT-qPCR specialists have warned that, unless researchers minimize this variability, they may report inaccurate differences and draw incorrect biological conclusions. The Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines describe procedures for conducting and reporting RT-qPCR experiments. The MIQE guidelines enable others to judge the reliability of reported results; however, a recent literature survey found low adherence to these guidelines. Additionally, even experiments that use appropriate procedures remain subject to individual variation that statistical methods cannot correct. For example, since ideal reference genes do not exist, the widely used method of normalizing RT-qPCR data to reference genes generates background noise that affects the accuracy of measured changes in mRNA levels. However, current RT-qPCR data reporting styles ignore this source of variation. In this commentary, we direct researchers to appropriate procedures, outline a method to present the remaining uncertainty in data accuracy, and propose an intuitive way to select reference genes to minimize uncertainty. Reporting the uncertainty in data accuracy also serves for quality assessment, enabling researchers and peer reviewers to confidently evaluate the reliability of gene expression data. © 2014 American Society of Plant Biologists. All rights reserved.
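For orientation, the reference-gene normalization discussed here is typically computed as below; any noise in the reference Cq values propagates one-for-one (on the log scale) into the reported relative quantities. A minimal sketch assuming 100% amplification efficiency (E = 2):

    import numpy as np

    def relative_quantity(cq_target, cq_refs, efficiency=2.0):
        """Target signal normalized to the geometric mean of the
        reference-gene signals (all inputs on the Cq scale)."""
        q_target = efficiency ** (-float(cq_target))
        q_refs = efficiency ** (-np.asarray(cq_refs, float))
        norm_factor = np.exp(np.log(q_refs).mean())   # geometric mean
        return q_target / norm_factor

    # shifting any single reference Cq by 1 cycle shifts the result measurably
    print(relative_quantity(24.0, [18.2, 19.1, 18.7]))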
Lim, Issel Anne L; Faria, Andreia V; Li, Xu; Hsu, Johnny T C; Airan, Raag D; Mori, Susumu; van Zijl, Peter C M
2013-11-15
The purpose of this paper is to extend the single-subject Eve atlas from Johns Hopkins University, which currently contains diffusion tensor and T1-weighted anatomical maps, by including contrast based on quantitative susceptibility mapping. The new atlas combines a "deep gray matter parcellation map" (DGMPM) derived from a single-subject quantitative susceptibility map with the previously established "white matter parcellation map" (WMPM) from the same subject's T1-weighted and diffusion tensor imaging data into an MNI coordinate map named the "Everything Parcellation Map in Eve Space," also known as the "EvePM." It allows automated segmentation of gray matter and white matter structures. Quantitative susceptibility maps from five healthy male volunteers (30 to 33 years of age) were coregistered to the Eve Atlas with AIR and Large Deformation Diffeomorphic Metric Mapping (LDDMM), and the transformation matrices were applied to the EvePM to produce automated parcellation in subject space. Parcellation accuracy was measured with a kappa analysis for the left and right structures of six deep gray matter regions. For multi-orientation QSM images, the Kappa statistic was 0.85 between automated and manual segmentation, with the inter-rater reproducibility Kappa being 0.89 for the human raters, suggesting "almost perfect" agreement between all segmentation methods. Segmentation seemed slightly more difficult for human raters on single-orientation QSM images, with the Kappa statistic being 0.88 between automated and manual segmentation, and 0.85 and 0.86 between human raters. Overall, this atlas provides a time-efficient tool for automated coregistration and segmentation of quantitative susceptibility data to analyze many regions of interest. These data were used to establish a baseline for normal magnetic susceptibility measurements for over 60 brain structures of 30- to 33-year-old males. Correlating the average susceptibility with age-based iron concentrations in gray matter structures measured by Hallgren and Sourander (1958) allowed interpolation of the average iron concentration of several deep gray matter regions delineated in the EvePM. Copyright © 2013 Elsevier Inc. All rights reserved.
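The kappa agreement statistic used for the parcellation comparisons can be computed directly; a minimal implementation for two equal-length label vectors (assumed layout: one label per voxel or region):

    import numpy as np

    def cohens_kappa(a, b):
        """Cohen's kappa between two label assignments of equal length."""
        a, b = np.asarray(a), np.asarray(b)
        labels = np.union1d(a, b)
        p_obs = np.mean(a == b)                               # observed agreement
        p_exp = sum(np.mean(a == l) * np.mean(b == l) for l in labels)
        return (p_obs - p_exp) / (1 - p_exp)

    print(cohens_kappa([1, 1, 2, 3, 3], [1, 2, 2, 3, 3]))     # ~0.71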
Popelka, Stanislav; Stachoň, Zdeněk; Šašinka, Čeněk; Doležalová, Jitka
2016-01-01
The mixed research design is a progressive methodological discourse that combines the advantages of quantitative and qualitative methods. Its possibilities of application are, however, dependent on the efficiency with which the particular research techniques are used and combined. The aim of the paper is to introduce the possible combination of Hypothesis with the EyeTribe tracker. Hypothesis is intended for quantitative data acquisition and the EyeTribe is intended for qualitative (eye-tracking) data recording. In the first part of the paper, the Hypothesis software is described. The Hypothesis platform provides an environment for web-based computerized experiment design and mass data collection. Then, an evaluation of the accuracy of data recorded by the EyeTribe tracker was performed using concurrent recording with the SMI RED 250 eye-tracker. Both qualitative and quantitative results showed that the data accuracy is sufficient for cartographic research. In the third part of the paper, a system for connecting the EyeTribe tracker and the Hypothesis software is presented. The interconnection was performed with the help of the developed web application HypOgama. The created system uses the open-source software OGAMA for recording the eye movements of participants together with quantitative data from Hypothesis. The final part of the paper describes the integrated research system combining Hypothesis and EyeTribe.
Fung, Janice Wing Mei; Lim, Sandra Bee Lay; Zheng, Huili; Ho, William Ying Tat; Lee, Bee Guat; Chow, Khuan Yew; Lee, Hin Peng
2016-08-01
To provide a comprehensive evaluation of the quality of the data at the Singapore Cancer Registry (SCR). Quantitative and semi-quantitative methods were used to assess the comparability, completeness, accuracy and timeliness of the data for the period of 1968-2013, with a focus on the period 2008-2012. The SCR coding and classification systems follow international standards. The overall completeness was estimated at 98.1% using the flow method and 97.5% using the capture-recapture method for the period of 2008-2012. For the same period, 91.9% of the cases were morphologically verified (site-specific range: 40.4-100%) with 1.1% DCO cases. The under-reporting in 2011 and 2012 attributable to timely publication was estimated at 0.03% and 0.51%, respectively. This review shows that the processes in place at the SCR yield data that are internationally comparable, relatively complete, valid and timely, allowing for greater confidence in the use of quality data in the areas of cancer prevention, treatment and control. Copyright © 2016 Elsevier Ltd. All rights reserved.
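The capture-recapture completeness estimate cited above can be illustrated with a two-source (Chapman) estimator; every count below is an invented placeholder, not an SCR figure:

    # Two-source capture-recapture completeness, illustrative counts only.
    n1 = 9500    # cases notified by source 1 (e.g. pathology reports)
    n2 = 9200    # cases notified by source 2 (e.g. discharge records)
    m = 9000     # cases notified by both sources

    n_est = (n1 + 1) * (n2 + 1) / (m + 1) - 1   # estimated true case count
    registered = n1 + n2 - m                     # distinct cases in the registry
    print(f"estimated completeness: {registered / n_est:.1%}")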
Vimercati, S L; Galli, M; Stella, G; Caiazzo, G; Ancillao, A; Albertini, G
2015-03-01
Drawing tests are commonly used for the clinical evaluation of cognitive capabilities in children with learning disabilities. We analysed quantitatively the drawings of children with Down Syndrome (DS) and of healthy, mental age-matched controls to characterise the features of fine motor skills in DS during a drawing task, with particular attention to clumsiness, a well-known feature of DS gross movements. Twenty-three children with DS and 13 controls hand-copied the figures of a circle, a cross and a square on a sheet. An optoelectronic system allowed the acquisition of the three-dimensional track of the drawing. The participants' posture and upper limb movements were analysed as well. Results showed that the participants with DS tended to draw faster but with less accuracy than controls. While clumsiness in gross movements manifests mainly as slow, less efficient movements, it manifests as high velocity and inaccurate movements in fine motor tasks such as drawing. © 2014 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
Focussed ion beam thin sample microanalysis using a field emission gun electron probe microanalyser
NASA Astrophysics Data System (ADS)
Kubo, Y.
2018-01-01
Field emission gun electron probe microanalysis (FEG-EPMA) in conjunction with wavelength-dispersive X-ray spectrometry using a low acceleration voltage (V_acc) allows elemental analysis with sub-micrometre lateral spatial resolution (SR). However, this degree of SR does not necessarily meet the requirements associated with increasingly miniaturised devices. Another challenge related to performing FEG-EPMA with a low V_acc is that the accuracy of quantitative analyses is adversely affected, primarily because low-energy X-ray lines such as the L- and M-lines must be employed, with the attendant potential for line interference. One promising means of obtaining high SR with FEG-EPMA is to use thin samples together with high V_acc values. This mini-review covers the basic principles of thin-sample FEG-EPMA and describes an application of this technique to the analysis of optical fibres. Outstanding issues related to this technique that must be addressed are also discussed, including the potential for electron beam damage during analysis of insulating materials and the development of methods to use thin samples for quantitative analysis.
Semi-quantitative MALDI-TOF for antimicrobial susceptibility testing in Staphylococcus aureus.
Maxson, Tucker; Taylor-Howell, Cheryl L; Minogue, Timothy D
2017-01-01
Antibiotic resistant bacterial infections are a significant problem in the healthcare setting, in many cases requiring the rapid administration of appropriate and effective antibiotic therapy. Diagnostic assays capable of quickly and accurately determining the pathogen resistance profile are therefore crucial to initiate or modify care. Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry (MS) is a standard method for species identification in many clinical microbiology laboratories and is well positioned to be applied towards antimicrobial susceptibility testing. One recently reported approach utilizes semi-quantitative MALDI-TOF MS for growth rate analysis to provide a resistance profile independent of resistance mechanism. This method was previously successfully applied to Gram-negative pathogens and mycobacteria; here, we evaluated this method with the Gram-positive pathogen Staphylococcus aureus. Specifically, we used 35 strains of S. aureus and four antibiotics to optimize and test the assay, resulting in an overall accuracy rate of 95%. Application of the optimized assay also successfully determined susceptibility from mock blood cultures, allowing both species identification and resistance determination for all four antibiotics within 3 hours of blood culture positivity.
Bernal, Freddy A; Orduz-Diaz, Luisa L; Coy-Barrera, Ericsson
2015-10-15
Anthocyanins are natural pigments known for their color and antioxidant activity. These properties allow their use in various fields, including food and pharmaceutical ones. Quantitative determination of anthocyanins has typically been performed by non-specific methods that limit the accuracy and reliability of the results. Therefore, a novel, simple spectrophotometric method for anthocyanin quantification, based on the formation of blue-colored complexes in the known reaction between catechol- and pyrogallol-containing anthocyanins and aluminum(III), is presented. The method proved reproducible and repeatable (RSD < 1.5%) and highly sensitive to ortho-dihydroxylated anthocyanins (LOD = 0.186 μg/mL). Compliance with Beer's law was also evident over a range of concentrations (2-16 μg/mL for cyanidin 3-O-glucoside). Good recoveries (98.8-103.3%) were obtained using anthocyanin-rich plant samples. The described method correlated directly with pH differential method results for several common anthocyanin-containing fruits, indicating its great analytical potential. The presented method was successfully validated. Copyright © 2015 Elsevier Ltd. All rights reserved.
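The linear-range and detection-limit figures reported here come from a standard calibration workflow; a minimal sketch using the common 3.3*sigma/slope convention (the absorbance values are illustrative, and the authors' exact validation procedure may differ):

    import numpy as np

    conc = np.array([2.0, 4.0, 8.0, 12.0, 16.0])       # µg/mL cyanidin 3-O-glucoside
    absorb = np.array([0.11, 0.22, 0.45, 0.66, 0.89])  # illustrative absorbances

    slope, intercept = np.polyfit(conc, absorb, 1)     # Beer's law: A = k*c + b
    resid_sd = np.std(absorb - (slope * conc + intercept), ddof=2)
    lod = 3.3 * resid_sd / slope                       # limit of detection
    print(f"slope = {slope:.4f} mL/µg, LOD = {lod:.3f} µg/mL")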
Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L; Bakhtina, Marina M; Becker, Donald F; Bedwell, Gregory J; Bekdemir, Ahmet; Besong, Tabot M D; Birck, Catherine; Brautigam, Chad A; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B; Chaton, Catherine T; Cölfen, Helmut; Connaghan, Keith D; Crowley, Kimberly A; Curth, Ute; Daviter, Tina; Dean, William L; Díez, Ana I; Ebel, Christine; Eckert, Debra M; Eisele, Leslie E; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A; Fairman, Robert; Finn, Ron M; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E; Cifre, José G Hernández; Herr, Andrew B; Howell, Elizabeth E; Isaac, Richard S; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A; Kwon, Hyewon; Larson, Adam; Laue, Thomas M; Le Roy, Aline; Leech, Andrew P; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R; Ma, Jia; May, Carrie A; Maynard, Ernest L; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K; Park, Jin-Ku; Pawelek, Peter D; Perdue, Erby E; Perkins, Stephen J; Perugini, Matthew A; Peterson, Craig L; Peverelli, Martin G; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E; Raynal, Bertrand D E; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E; Rosenberg, Rose; Rowe, Arthur J; Rufer, Arne C; Scott, David J; Seravalli, Javier G; Solovyova, Alexandra S; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M; Streicher, Werner W; Sumida, John P; Swygert, Sarah G; Szczepanowski, Roman H; Tessmer, Ingrid; Toth, Ronald T; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F W; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M; Schuck, Peter
2015-01-01
Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.
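The combined application of calibration corrections described above is multiplicative on the raw s-value; a schematic of the bookkeeping (all factor values are invented placeholders, not study results):

    # Correct a raw sedimentation coefficient with external calibration factors.
    s_raw = 4.10       # Svedberg, as reported by one instrument (placeholder)

    f_time = 1.010     # corrects systematic error in reported scan times
    f_temp = 1.015     # corrects the temperature calibration
    f_radial = 0.995   # corrects the radial magnification

    s_corr = s_raw * f_time * f_temp * f_radial
    print(f"s(corrected) = {s_corr:.3f} S")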
Partovi, Sasan; Yuh, Roger; Pirozzi, Sara; Lu, Ziang; Couturier, Spencer; Grosse, Ulrich; Schluchter, Mark D; Nelson, Aaron; Jones, Robert; O’Donnell, James K; Faulhaber, Peter
2017-01-01
The objective of this study was to assess the ability of a quantitative software-aided approach to improve the diagnostic accuracy of 18F FDG PET for Alzheimer’s dementia over visual analysis alone. Twenty normal subjects (M:F-12:8; mean age 80.6 years) and twenty mild AD subjects (M:F-12:8; mean age 70.6 years) with 18F FDG PET scans were obtained from the ADNI database. Three blinded readers interpreted these PET images first using a visual qualitative approach and then using a quantitative software-aided approach. Images were classified on two five-point scales based on normal/abnormal (1-definitely normal; 5-definitely abnormal) and presence of AD (1-definitely not AD; 5-definitely AD). Diagnostic sensitivity, specificity, and accuracy for both approaches were compared based on the aforementioned scales. The sensitivity, specificity, and accuracy for the normal vs. abnormal readings of all readers combined were higher when comparing the software-aided vs. visual approach (sensitivity 0.93 vs. 0.83 P = 0.0466; specificity 0.85 vs. 0.60 P = 0.0005; accuracy 0.89 vs. 0.72 P<0.0001). The specificity and accuracy for absence vs. presence of AD of all readers combined were higher when comparing the software-aided vs. visual approach (specificity 0.90 vs. 0.70 P = 0.0008; accuracy 0.81 vs. 0.72 P = 0.0356). Sensitivities of the software-aided and visual approaches did not differ significantly (0.72 vs. 0.73 P = 0.74). The quantitative software-aided approach appears to improve the performance of 18F FDG PET for the diagnosis of mild AD. It may be helpful for experienced 18F FDG PET readers analyzing challenging cases. PMID:28123864
White, Robin R; McGill, Tyler; Garnett, Rebecca; Patterson, Robert J; Hanigan, Mark D
2017-04-01
The objective of this work was to evaluate the precision and accuracy of the milk yield predictions made by the PREP10 model in comparison to those from the National Research Council (NRC) Nutrient Requirements of Dairy Cattle. The PREP10 model is a ration-balancing system that allows protein use efficiency to vary with production level. The model also has advanced AA supply and requirement calculations that enable estimation of AA-allowable milk (Milk_AA) based on 10 essential AA. A literature data set of 374 treatment means was collected and used to quantitatively evaluate the estimates of protein-allowable milk (Milk_MP) and energy-allowable milk yields from the NRC and PREP10 models. The PREP10 Milk_AA prediction was also evaluated, as were both models' estimates of milk based on the most-limiting nutrient or the mean of the estimated milk yields. For most milk estimates compared, the PREP10 model had reduced root mean squared prediction error (RMSPE), improved concordance correlation coefficient, and reduced mean and slope bias in comparison to the NRC model. In particular, utilizing the variable protein use efficiency for milk production notably improved the estimate of Milk_MP when compared with NRC. The PREP10 Milk_MP estimate had an RMSPE of 18.2% (NRC = 25.7%), a concordance correlation coefficient of 0.82 (NRC = 0.64), a slope bias of -0.14 kg/kg of predicted milk (NRC = -0.34 kg/kg), and a mean bias of -0.63 kg (NRC = -2.85 kg). The PREP10 estimate of Milk_AA had slightly elevated RMSPE and mean and slope bias when compared with Milk_MP. The PREP10 estimate of Milk_AA was not advantageous when compared with Milk_MP, likely because AA use efficiency for milk was constant whereas MP use efficiency was variable. Future work evaluating variable AA use efficiencies for milk production is likely to improve the accuracy and precision of models of allowable milk. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
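The evaluation statistics used here (RMSPE, concordance correlation coefficient, mean bias and slope bias) can be computed from paired observed/predicted yields as follows; a generic sketch, not the authors' code:

    import numpy as np

    def evaluate(obs, pred):
        """RMSPE (% of observed mean), mean bias, slope bias and CCC."""
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        err = pred - obs
        rmspe = np.sqrt(np.mean(err ** 2)) / obs.mean() * 100
        mean_bias = err.mean()
        slope_bias = np.polyfit(pred, err, 1)[0]   # trend of error with prediction
        ccc = (2 * np.cov(obs, pred, bias=True)[0, 1]
               / (obs.var() + pred.var() + (obs.mean() - pred.mean()) ** 2))
        return rmspe, mean_bias, slope_bias, ccc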
Air traffic control surveillance accuracy and update rate study
NASA Technical Reports Server (NTRS)
Craigie, J. H.; Morrison, D. D.; Zipper, I.
1973-01-01
The results of an air traffic control surveillance accuracy and update rate study are presented. The objective of the study was to establish quantitative relationships between the surveillance accuracies, update rates, and the communication load associated with the tactical control of aircraft for conflict resolution. The relationships are established for typical types of aircraft, phases of flight, and types of airspace. Specific cases are analyzed to determine the surveillance accuracies and update rates required to prevent two aircraft from approaching each other too closely.
Gu, Z.; Sam, S. S.; Sun, Y.; Tang, L.; Pounds, S.; Caliendo, A. M.
2016-01-01
A potential benefit of digital PCR is a reduction in result variability across assays and platforms. Three sets of PCR reagents were tested on two digital PCR systems (Bio-Rad and RainDance) for quantitation of cytomegalovirus (CMV). Both commercial quantitative viral standards and patient samples (n = 16) were tested. Quantitative accuracy (compared to nominal values) and variability were determined based on viral standard testing results. Quantitative correlation and variability were assessed with pairwise comparisons across all reagent-platform combinations for clinical plasma sample results. The three reagent sets, when used to assay quantitative standards on the Bio-Rad system, all showed a high degree of accuracy, low variability, and close agreement with one another. When used on the RainDance system, one of the three reagent sets showed a much better correlation to nominal values than did the other two. Quantitative results for patient samples showed good correlation in most pairwise comparisons, with poorer correlations when testing samples with low viral loads. Digital PCR is a robust method for measuring CMV viral load. Some degree of result variation may be seen, depending on the platform and reagents used; this variation appears to be greater in samples with low viral load values. PMID:27535685
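For context, digital PCR concentrations follow from Poisson statistics over the reaction partitions; a worked sketch of the core calculation (droplet counts and volume are illustrative):

    import numpy as np

    positives, partitions = 4211, 20000   # illustrative droplet counts
    v_partition_ul = 0.00085              # ~0.85 nL per droplet (illustrative)

    p = positives / partitions
    lam = -np.log(1 - p)                  # mean copies per partition (Poisson)
    copies_per_ul = lam / v_partition_ul
    print(f"{copies_per_ul:,.0f} copies/µL of reaction")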
Assessing genomic selection prediction accuracy in a dynamic barley breeding
USDA-ARS?s Scientific Manuscript database
Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...
SVM-Based Synthetic Fingerprint Discrimination Algorithm and Quantitative Optimization Strategy
Chen, Suhang; Chang, Sheng; Huang, Qijun; He, Jin; Wang, Hao; Huang, Qiangui
2014-01-01
Synthetic fingerprints are a potential threat to automatic fingerprint identification systems (AFISs). In this paper, we propose an algorithm to discriminate synthetic fingerprints from real ones. First, four typical characteristic factors—the ridge distance features, global gray features, frequency feature and Harris Corner feature—are extracted. Then, a support vector machine (SVM) is used to distinguish synthetic fingerprints from real fingerprints. The experiments demonstrate that this method can achieve a recognition accuracy rate of over 98% for two discrete synthetic fingerprint databases as well as a mixed database. Furthermore, a performance factor that can evaluate the SVM's accuracy and efficiency is presented, and a quantitative optimization strategy is established for the first time. After the optimization of our synthetic fingerprint discrimination task, the polynomial kernel with a training sample proportion of 5% is the optimized value when the minimum accuracy requirement is 95%. The radial basis function (RBF) kernel with a training sample proportion of 15% is a more suitable choice when the minimum accuracy requirement is 98%. PMID:25347063
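A minimal scikit-learn sketch of the classification stage (the four feature types are assumed to be precomputed into a feature matrix; the .npy file names are hypothetical):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X = np.load("fingerprint_features.npy")   # hypothetical precomputed features
    y = np.load("labels.npy")                 # 1 = real, 0 = synthetic

    # e.g. a 15% training proportion, matching the RBF operating point above
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.15, random_state=0)

    clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)   # or kernel="poly"
    print("accuracy:", clf.score(X_te, y_te))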
Quantitative measurement of MLC leaf displacements using an electronic portal image device
NASA Astrophysics Data System (ADS)
Yang, Yong; Xing, Lei
2004-04-01
The success of an IMRT treatment relies on the positioning accuracy of the MLC (multileaf collimator) leaves for both step-and-shoot and dynamic deliveries. In practice, however, there exists no effective and quantitative means for routine MLC QA and this has become one of the bottleneck problems in IMRT implementation. In this work we present an electronic portal image device (EPID) based method for fast and accurate measurement of MLC leaf positions at arbitrary locations within the 40 cm × 40 cm radiation field. The new technique utilizes the fact that the integral signal in a small region of interest (ROI) is a sensitive and reliable indicator of the leaf displacement. In this approach, the integral signal at a ROI was expressed as a weighted sum of the contributions from the displacements of the leaf above the point and the adjacent leaves. The weighting factors or linear coefficients of the system equations were determined by fitting the integral signal data for a group of pre-designed MLC leaf sequences to the known leaf displacements that were intentionally introduced during the creation of the leaf sequences. Once the calibration is done, the system can be used for routine MLC leaf positioning QA to detect possible leaf errors. A series of tests was carried out to examine the functionality and accuracy of the technique. Our results show that the proposed technique is potentially superior to the conventional edge-detecting approach in two aspects: (i) it deals with the problem in a systematic approach and allows us to take into account the influence of the adjacent MLC leaves effectively; and (ii) it may improve the signal-to-noise ratio and is thus capable of quantitatively measuring extremely small leaf positional displacements. Our results indicate that the technique can detect a leaf positional error as small as 0.1 mm at an arbitrary point within the field in the absence of EPID set-up error and 0.3 mm when the uncertainty is considered. Given its simplicity, efficiency and accuracy, we believe that the technique is ideally suitable for routine MLC leaf positioning QA. This work was presented at the 45th Annual Meeting of American Society of Therapeutic Radiology and Oncology (ASTRO), Salt Lake City, UT, 2003. A US Patent is pending (application no. 10/197,232).
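The calibration step described, fitting the weights that map nearby leaf displacements to an ROI's integral signal, is an ordinary linear least-squares problem; a schematic sketch (file names, array shapes and the three-leaf neighbourhood are assumptions):

    import numpy as np

    # K pre-designed leaf sequences: known displacements (mm) of the
    # [central, left, right] leaves, and the measured integral ROI signal.
    D = np.loadtxt("known_displacements.txt")   # shape (K, 3), hypothetical file
    s = np.loadtxt("roi_signals.txt")           # shape (K,),  hypothetical file

    w, *_ = np.linalg.lstsq(D, s, rcond=None)   # fitted weighting factors

    def estimate_displacement(signal, w, neighbours=(0.0, 0.0)):
        """Invert the calibrated model for the central leaf, given the
        (known) displacements of its two neighbours."""
        return (signal - w[1] * neighbours[0] - w[2] * neighbours[1]) / w[0]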
A new algorithm using cross-assignment for label-free quantitation with LC-LTQ-FT MS.
Andreev, Victor P; Li, Lingyun; Cao, Lei; Gu, Ye; Rejtar, Tomas; Wu, Shiaw-Lin; Karger, Barry L
2007-06-01
A new algorithm is described for label-free quantitation of relative protein abundances across multiple complex proteomic samples. Q-MEND is based on the denoising and peak picking algorithm, MEND, previously developed in our laboratory. Q-MEND takes advantage of the high resolution and mass accuracy of the hybrid LTQ-FT MS mass spectrometer (or other high-resolution mass spectrometers, such as a Q-TOF MS). The strategy, termed "cross-assignment", is introduced to increase substantially the number of quantitated proteins. In this approach, all MS/MS identifications for the set of analyzed samples are combined into a master ID list, and then each LC-MS run is searched for the features that can be assigned to a specific identification from that master list. The reliability of quantitation is enhanced by quantitating separately all peptide charge states, along with a scoring procedure to filter out less reliable peptide abundance measurements. The effectiveness of Q-MEND is illustrated in the relative quantitative analysis of Escherichia coli samples spiked with known amounts of non-E. coli protein digests. A mean quantitation accuracy of 7% and mean precision of 15% is demonstrated. Q-MEND can perform relative quantitation of a set of LC-MS data sets without manual intervention and can generate files compatible with the Guidelines for Proteomic Data Publication.
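The "cross-assignment" strategy amounts to searching each run for features that match entries of the pooled master ID list within m/z and retention-time tolerances; a simplified sketch (tolerances and data layout are assumptions):

    def cross_assign(features, master_ids, mz_tol=0.01, rt_tol=0.5):
        """Match one run's (mz, rt, intensity) features against a master
        list of (peptide, mz, rt) identifications; keeps the most intense
        candidate per peptide. Tolerances in Th and minutes (assumed)."""
        assignments = {}
        for pep, mz_id, rt_id in master_ids:
            best = None
            for mz, rt, inten in features:
                if abs(mz - mz_id) <= mz_tol and abs(rt - rt_id) <= rt_tol:
                    if best is None or inten > best:
                        best = inten
            if best is not None:
                assignments[pep] = best
        return assignments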
Quantitative analysis of peel-off degree for printed electronics
NASA Astrophysics Data System (ADS)
Park, Janghoon; Lee, Jongsu; Sung, Ki-Hak; Shin, Kee-Hyun; Kang, Hyunkyoo
2018-02-01
We suggest a facile image-processing methodology for evaluating the degree of peel-off in printed electronics. The quantification of peeled and printed areas was performed using open source programs. To verify the accuracy of the method, we manually removed areas from a measured printed circuit; the method quantified these areas with 96.3% accuracy. The printed area of the sintered patterns tended to decrease as the energy density of the infrared lamp increased, and the peel-off degree increased correspondingly; a comparison between both results is presented. Finally, the correlation between performance characteristics was determined by quantitative analysis.
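A minimal version of the image-based quantification (a simple global threshold standing in for the open-source tools used by the authors; file names and the threshold value are placeholders):

    import numpy as np
    from PIL import Image

    def ink_mask(path, threshold=128):
        """Binary mask of printed (dark) pixels; threshold is a placeholder."""
        gray = np.asarray(Image.open(path).convert("L"))
        return gray < threshold

    before = ink_mask("pattern_as_printed.png")   # hypothetical file names
    after = ink_mask("pattern_after_peel.png")

    peel_off_degree = 1.0 - after.sum() / before.sum()
    print(f"peel-off degree: {peel_off_degree:.1%}")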
Rusz, Jan; Bonnet, Cecilia; Klempíř, Jiří; Tykalová, Tereza; Baborová, Eva; Novotný, Michal; Rulseh, Aaron; Růžička, Evžen
2015-01-01
Although speech disorder is frequently an early and prominent clinical feature of Parkinson's disease (PD) as well as atypical parkinsonian syndromes (APS) such as progressive supranuclear palsy (PSP) and multiple system atrophy (MSA), there is a lack of objective and quantitative evidence to verify whether any specific speech characteristics allow differentiation between PD, PSP and MSA. Speech samples were acquired from 77 subjects including 15 PD, 12 PSP, 13 MSA and 37 healthy controls. The accurate differential diagnosis of dysarthria subtypes was based on the quantitative acoustic analysis of 16 speech dimensions. Dysarthria was uniformly present in all parkinsonian patients but was more severe in PSP and MSA than in PD. Whilst PD speakers manifested pure hypokinetic dysarthria, ataxic components were more affected in MSA, whereas PSP subjects demonstrated severe deficits in hypokinetic and spastic elements of dysarthria. Dysarthria in PSP was dominated by increased dysfluency, slowed speaking rate, inappropriate silences, deficits in vowel articulation and harsh voice quality, whereas that in MSA was dominated by pitch fluctuations, excess intensity variations, prolonged phonemes, vocal tremor and strained-strangled voice quality. Objective speech measurements were able to discriminate between APS and PD with 95% accuracy and between PSP and MSA with 75% accuracy. Dysarthria severity in APS was related to overall disease severity (r = 0.54, p = 0.006). Dysarthria with various combinations of hypokinetic, spastic and ataxic components reflects differing pathophysiology in PD, PSP and MSA. Thus, motor speech examination may provide useful information in the evaluation of these diseases with similar manifestations.
NASA Astrophysics Data System (ADS)
Kostencka, Julianna; Kozacki, Tomasz; Hennelly, Bryan; Sheridan, John T.
2017-06-01
Holographic tomography (HT) allows noninvasive, quantitative, 3D imaging of transparent microobjects, such as living biological cells and fiber optic elements. The technique is based on the acquisition of multiple scattered fields for various sample perspectives using digital holographic microscopy. The captured data are then processed with one of the tomographic reconstruction algorithms, which enables 3D reconstruction of the refractive index distribution. In our recent work we addressed the issue of the spatially variant accuracy of HT reconstructions, which results from the insufficient model of diffraction applied in the widely used tomographic reconstruction algorithms based on the Rytov approximation. In the present study, we continue investigating the spatially variant properties of HT imaging; however, we now focus on the limited spatial size of holograms as a source of this problem. Using the Wigner distribution representation and the Ewald sphere approach, we show that the limited size of the holograms results in decreased quality of tomographic imaging in off-center regions of the HT reconstructions. This is because the finite detector extent becomes a limiting aperture that prohibits acquisition of full information about diffracted fields coming from the out-of-focus structures of a sample. The incompleteness of the data results in an effective truncation of the tomographic transfer function for the off-center regions of the tomographic image. In this paper, the described effect is quantitatively characterized for three types of tomographic systems: the configuration with 1) object rotation, 2) scanning of the illumination direction, and 3) the hybrid HT solution combining both previous approaches.
Geith, Tobias; Schmidt, Gerwin; Biffar, Andreas; Dietrich, Olaf; Dürr, Hans Roland; Reiser, Maximilian; Baur-Melnyk, Andrea
2012-11-01
The objective of our study was to compare the diagnostic value of qualitative diffusion-weighted imaging (DWI), quantitative DWI, and chemical-shift imaging in a single prospective cohort of patients with acute osteoporotic and malignant vertebral fractures. The study group was composed of patients with 26 osteoporotic vertebral fractures (18 women, eight men; mean age, 69 years; age range, 31 years 6 months to 86 years 2 months) and 20 malignant vertebral fractures (nine women, 11 men; mean age, 63.4 years; age range, 24 years 8 months to 86 years 4 months). T1-weighted, STIR, and T2-weighted sequences were acquired at 1.5 T. A DW reverse fast imaging with steady-state free precession (PSIF) sequence at different delta values was evaluated qualitatively. A DW echo-planar imaging (EPI) sequence and a DW single-shot turbo spin-echo (TSE) sequence at different b values were evaluated qualitatively and quantitatively using the apparent diffusion coefficient. Opposed-phase sequences were used to assess signal intensity qualitatively. The signal loss between in- and opposed-phase images was determined quantitatively. Two-tailed Fisher exact test, Mann-Whitney test, and receiver operating characteristic analysis were performed. Sensitivities, specificities, and accuracies were determined. Qualitative DW-PSIF imaging (delta = 3 ms) showed the best performance for distinguishing between benign and malignant fractures (sensitivity, 100%; specificity, 88.5%; accuracy, 93.5%). Qualitative DW-EPI (b = 50 s/mm² [p = 1.00]; b = 250 s/mm² [p = 0.50]) and DW single-shot TSE imaging (b = 100 s/mm² [p = 1.00]; b = 250 s/mm² [p = 0.18]; b = 400 s/mm² [p = 0.18]; b = 600 s/mm² [p = 0.39]) did not indicate significant differences between benign and malignant fractures. DW-EPI using a b value of 500 s/mm² (p = 0.01) indicated significant differences between benign and malignant vertebral fractures. Quantitative DW-EPI (p = 0.09) and qualitative opposed-phase imaging (p = 0.06) did not exhibit significant differences, whereas quantitative DW single-shot TSE imaging (p = 0.002) and quantitative chemical-shift imaging (p = 0.01) showed significant differences between benign and malignant fractures. The DW-PSIF sequence (delta = 3 ms) had the highest accuracy in differentiating benign from malignant vertebral fractures. Quantitative chemical-shift imaging and quantitative DW single-shot TSE imaging had a lower accuracy than DW-PSIF imaging because of a large overlap. Qualitative assessment of opposed-phase, DW-EPI, and DW single-shot TSE sequences and quantitative assessment of the DW-EPI sequence were not suitable for distinguishing between benign and malignant vertebral fractures.
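For the quantitative DWI evaluation, the apparent diffusion coefficient is conventionally derived from the monoexponential signal model S(b) = S0 · exp(−b · ADC). A minimal sketch with invented signal values:

```python
# Minimal sketch of apparent diffusion coefficient (ADC) estimation from two
# diffusion weightings, assuming the standard monoexponential model
# S(b) = S0 * exp(-b * ADC). Signal values here are synthetic placeholders.
import numpy as np

b_low, b_high = 50.0, 500.0          # b values in s/mm^2
S_low = np.array([820.0, 640.0])     # signal at b = 50 (e.g., two lesions)
S_high = np.array([510.0, 210.0])    # signal at b = 500

adc = np.log(S_low / S_high) / (b_high - b_low)   # in mm^2/s
print(adc)  # benign (edematous) fractures typically show higher ADC than malignant
```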
Differential porosimetry and permeametry for random porous media.
Hilfer, R; Lemmer, A
2015-07-01
Accurate determination of geometrical and physical properties of natural porous materials is notoriously difficult. Continuum multiscale modeling has provided carefully calibrated realistic microstructure models of reservoir rocks with floating point accuracy. Previous measurements using synthetic microcomputed tomography (μ-CT) were based on extrapolation of resolution-dependent properties for discrete digitized approximations of the continuum microstructure. This paper reports continuum measurements of volume and specific surface with full floating point precision. It also corrects an incomplete description of rotations in earlier publications. More importantly, the methods of differential permeametry and differential porosimetry are introduced as precision tools. The continuum microstructure chosen to exemplify the methods is a homogeneous, carefully calibrated and characterized model for Fontainebleau sandstone. The sample has been publicly available since 2010 on the worldwide web as a benchmark for methodical studies of correlated random media. High-precision porosimetry gives the volume and internal surface area of the sample with floating point accuracy. Continuum results with floating point precision are compared to discrete approximations. Differential porosities and differential surface area densities allow geometrical fluctuations to be discriminated from discretization effects and numerical noise. Differential porosimetry and Fourier analysis reveal subtle periodic correlations. The findings uncover small oscillatory correlations with a period of roughly 850 μm, thus implying that the sample is not strictly stationary. The correlations are attributed to the deposition algorithm that was used to ensure the grain overlap constraint. Differential permeabilities are introduced and studied. Differential porosities and permeabilities provide scale-dependent information on geometry fluctuations, thereby allowing quantitative error estimates.
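A conceptual sketch of the differential porosimetry idea, run on a synthetic binary voxel image rather than the calibrated Fontainebleau model: slab-wise porosity along one axis, followed by a Fourier analysis of its fluctuations to expose periodic correlations.

```python
# Conceptual sketch of differential porosimetry on a binary (pore = 1) voxel
# image: slab-wise porosity along one axis, then a Fourier analysis to look
# for periodic correlations. The sample array here is a random placeholder.
import numpy as np

sample = (np.random.default_rng(1).random((256, 128, 128)) < 0.15)  # ~15% porosity
voxel_size_um = 10.0

# Differential porosity: pore fraction of each slab perpendicular to axis 0.
phi = sample.mean(axis=(1, 2))

# Fluctuations about the mean reveal non-stationarity; FFT finds periodicity.
spectrum = np.abs(np.fft.rfft(phi - phi.mean()))
freqs = np.fft.rfftfreq(phi.size, d=voxel_size_um)   # cycles per micrometre
peak = freqs[1:][spectrum[1:].argmax()]
print(f"dominant correlation period ~ {1.0 / peak:.0f} um")
```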
Accuracy of the NDI Wave Speech Research System
ERIC Educational Resources Information Center
Berry, Jeffrey J.
2011-01-01
Purpose: This work provides a quantitative assessment of the positional tracking accuracy of the NDI Wave Speech Research System. Method: Three experiments were completed: (a) static rigid-body tracking across different locations in the electromagnetic field volume, (b) dynamic rigid-body tracking across different locations within the…
[Quantitative Analysis of Heavy Metals in Water with LIBS Based on Signal-to-Background Ratio].
Hu, Li; Zhao, Nan-jing; Liu, Wen-qing; Fang, Li; Zhang, Da-hai; Wang, Yin; Meng, De Shuo; Yu, Yang; Ma, Ming-jun
2015-07-01
Many factors influence the precision and accuracy of quantitative analysis with LIBS technology. In-depth analysis showed that the background spectrum and the characteristic spectral lines follow approximately the same trend as the temperature changes; therefore, measuring the signal-to-background ratio (S/B) and performing regression analysis can compensate for the spectral line intensity changes caused by system parameters such as laser power and receiving spectral efficiency. Because the measurement data were limited and nonlinear, support vector machine (SVM) regression was used. The experimental results showed that the method could improve the stability and the accuracy of quantitative analysis with LIBS; the relative standard deviation and average relative error of the test set were 4.7% and 9.5%, respectively. The data fitting method based on the signal-to-background ratio (S/B) is less susceptible to matrix elements, background spectrum effects, etc., and provides a data processing reference for real-time, online quantitative LIBS analysis.
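A minimal sketch of the described calibration approach, with invented calibration points: concentration is regressed on the signal-to-background ratio using SVM regression.

```python
# Hedged sketch of the signal-to-background-ratio calibration idea: regress
# element concentration on the S/B ratio with support vector regression.
# The calibration points below are invented for illustration.
import numpy as np
from sklearn.svm import SVR

sb_ratio = np.array([[0.8], [1.5], [2.9], [4.1], [5.6], [7.2]])   # S/B of an emission line
conc_ppm = np.array([5.0, 10.0, 25.0, 40.0, 55.0, 75.0])          # known concentrations

model = SVR(kernel="rbf", C=100.0, epsilon=0.5).fit(sb_ratio, conc_ppm)
print(model.predict([[3.5]]))   # estimate the concentration of an unknown sample
```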
NASA Astrophysics Data System (ADS)
Suzuki, Makoto; Kameda, Toshimasa; Doi, Ayumi; Borisov, Sergey; Babin, Sergey
2018-03-01
The interpretation of scanning electron microscopy (SEM) images of the latest semiconductor devices is not intuitive and requires comparison with computed images based on theoretical modeling and simulations. For quantitative image prediction and geometrical reconstruction of the specimen structure, the accuracy of the physical model is essential. In this paper, we review the current models of electron-solid interaction and discuss their accuracy. We compare simulated results with our experiments on SEM overlay of under-layers, grain imaging of copper interconnects, and hole-bottom visualization with angular-selective detectors, and show that our model reproduces the experimental results well. Remaining issues for quantitative simulation are also discussed, including the accuracy of the charge dynamics, the treatment of beam skirt, and the explosive increase in computing time.
The subtle business of model reduction for stochastic chemical kinetics
NASA Astrophysics Data System (ADS)
Gillespie, Dan T.; Cao, Yang; Sanft, Kevin R.; Petzold, Linda R.
2009-02-01
This paper addresses the problem of simplifying chemical reaction networks by adroitly reducing the number of reaction channels and chemical species. The analysis adopts a discrete-stochastic point of view and focuses on the model reaction set S1⇌S2→S3, whose simplicity allows all the mathematics to be done exactly. The advantages and disadvantages of replacing this reaction set with a single S3-producing reaction are analyzed quantitatively using novel criteria for measuring simulation accuracy and simulation efficiency. It is shown that in all cases in which such a model reduction can be accomplished accurately and with a significant gain in simulation efficiency, a procedure called the slow-scale stochastic simulation algorithm provides a robust and theoretically transparent way of implementing the reduction.
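A minimal stochastic simulation algorithm (SSA) for the model reaction set S1⇌S2→S3 discussed above, with illustrative rate constants; this is the exact simulation that a slow-scale reduction would approximate.

```python
# Minimal SSA (Gillespie direct method) for S1 <=> S2 -> S3.
# Rate constants and initial populations are illustrative only.
import numpy as np

def ssa(x1, x2, x3, c1, c2, c3, t_end, rng=np.random.default_rng(2)):
    t = 0.0
    while t < t_end:
        a = np.array([c1 * x1, c2 * x2, c3 * x2])   # propensities of the 3 channels
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)               # time to the next reaction
        r = rng.choice(3, p=a / a0)                  # which reaction fires
        if r == 0:   x1 -= 1; x2 += 1                # S1 -> S2
        elif r == 1: x1 += 1; x2 -= 1                # S2 -> S1
        else:        x2 -= 1; x3 += 1                # S2 -> S3
    return x1, x2, x3

print(ssa(1000, 0, 0, c1=1.0, c2=10.0, c3=0.1, t_end=50.0))
```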
Universal fragment descriptors for predicting properties of inorganic crystals
NASA Astrophysics Data System (ADS)
Isayev, Olexandr; Oses, Corey; Toher, Cormac; Gossett, Eric; Curtarolo, Stefano; Tropsha, Alexander
2017-06-01
Although historically materials discovery has been driven by a laborious trial-and-error process, knowledge-driven materials design can now be enabled by the rational combination of Machine Learning methods and materials databases. Here, data from the AFLOW repository for ab initio calculations is combined with Quantitative Materials Structure-Property Relationship models to predict important properties: metal/insulator classification, band gap energy, bulk/shear moduli, Debye temperature and heat capacities. The prediction's accuracy compares well with the quality of the training data for virtually any stoichiometric inorganic crystalline material, reciprocating the available thermomechanical experimental data. The universality of the approach is attributed to the construction of the descriptors: Property-Labelled Materials Fragments. The representations require only minimal structural input allowing straightforward implementations of simple heuristic design rules.
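Purely as an illustration of a fragment-descriptor regression workflow (not the authors' Property-Labelled Materials Fragments pipeline), one can fit gradient-boosted trees to a descriptor matrix; the descriptors and target below are random placeholders.

```python
# Illustrative sketch only: a descriptor-based property regression in the
# spirit of the QMSPR models described above, on random placeholder data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.random((500, 40))                            # 40 fragment-descriptor columns
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.1, 500)   # mock band gap target (eV)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.2f}")
```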
NASA Astrophysics Data System (ADS)
Zhukovskiy, Yu L.; Korolev, N. A.; Babanova, I. S.; Boikov, A. V.
2017-10-01
This article is devoted to prediction of residual life based on an estimate of the technical state of an induction motor. The proposed system increases the accuracy and completeness of diagnostics by using an artificial neural network (ANN), and also identifies and predicts faulty states of electrical equipment over time. The outputs of the proposed system for estimating the technical condition are probabilistic technical state diagrams and a quantitative evaluation of the residual life, taking into account electrical, vibrational and indirect parameters as well as detected defects. Based on the evaluation of the technical condition and the prediction of the residual life, a decision is made on changes to the operating and maintenance modes of the electric motors.
Doshi, Urmi; Hamelberg, Donald
2012-11-13
In enhanced sampling techniques, the precision of the reweighted ensemble properties is often decreased due to large variation in statistical weights and reduction in the effective sampling size. To abate this reweighting problem, here, we propose a general accelerated molecular dynamics (aMD) approach in which only the rotatable dihedrals are subjected to aMD (RaMD), unlike the typical implementation wherein all dihedrals are boosted (all-aMD). Nonrotatable and improper dihedrals are marginally important to conformational changes or the different rotameric states. Not accelerating them avoids the sharp increases in the potential energies due to small deviations from their minimum energy conformations and leads to improvement in the precision of RaMD. We present benchmark studies on two model dipeptides, Ace-Ala-Nme and Ace-Trp-Nme, simulated with normal MD, all-aMD, and RaMD. We carry out a systematic comparison between the performances of both forms of aMD using a theory that allows quantitative estimation of the effective number of sampled points and the associated uncertainty. Our results indicate that, for the same level of acceleration and simulation length, as used in all-aMD, RaMD results in significantly less loss in the effective sample size and, hence, increased accuracy in the sampling of φ-ψ space. RaMD yields an accuracy comparable to that of all-aMD, from simulation lengths 5 to 1000 times shorter, depending on the peptide and the acceleration level. Such improvement in speed and accuracy over all-aMD is highly remarkable, suggesting RaMD as a promising method for sampling larger biomolecules.
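For context, the boost commonly used in aMD adds ΔV(r) = (E − V(r))² / (α + E − V(r)) whenever the dihedral energy V falls below a threshold E, and ensemble averages are recovered by reweighting with exp(ΔV/kT); the spread of these weights is exactly what erodes the effective sample size discussed above. A sketch with illustrative parameters:

```python
# Sketch of the standard aMD boost potential and the reweighting factors
# whose variance drives the precision loss discussed above. The threshold E
# and smoothing parameter alpha are illustrative values, not the study's.
import numpy as np

kT = 0.596                      # kcal/mol at ~300 K
E, alpha = 50.0, 10.0           # boost threshold and acceleration parameter

def boost(V):
    """Boost energy dV added when the dihedral energy V lies below E."""
    return np.where(V < E, (E - V) ** 2 / (alpha + E - V), 0.0)

V = np.array([30.0, 45.0, 55.0])          # instantaneous dihedral energies
dV = boost(V)
weights = np.exp(dV / kT)                 # reweighting factors exp(dV/kT)
print(dV, weights / weights.sum())        # large weight spread -> fewer effective samples
```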
Song, Na; Du, Yong; He, Bin; Frey, Eric C.
2011-01-01
Purpose: The radionuclide 131I has found widespread use in targeted radionuclide therapy (TRT), partly due to the fact that it emits photons that can be imaged to perform treatment planning or posttherapy dose verification as well as beta rays that are suitable for therapy. In both the treatment planning and dose verification applications, it is necessary to estimate the activity distribution in organs or tumors at several time points. In vivo estimates of the 131I activity distribution at each time point can be obtained from quantitative single-photon emission computed tomography (QSPECT) images and organ activity estimates can be obtained either from QSPECT images or quantification of planar projection data. However, in addition to the photon used for imaging, 131I decay results in emission of a number of other higher-energy photons with significant abundances. These higher-energy photons can scatter in the body, collimator, or detector and be counted in the 364 keV photopeak energy window, resulting in reduced image contrast and degraded quantitative accuracy; these photons are referred to as downscatter. The goal of this study was to develop and evaluate a model-based downscatter compensation method specifically designed for the compensation of high-energy photons emitted by 131I and detected in the imaging energy window. Methods: In the evaluation study, we used a Monte Carlo simulation (MCS) code that had previously been validated for other radionuclides. Thus, in preparation for the evaluation study, we first validated the code for 131I imaging simulation by comparison with experimental data. Next, we assessed the accuracy of the downscatter model by comparing downscatter estimates with MCS results. Finally, we combined the downscatter model with iterative reconstruction-based compensation for attenuation (A) and scatter (S) and the full (D) collimator-detector response of the 364 keV photons to form a comprehensive compensation method. We evaluated this combined method in terms of quantitative accuracy using the realistic 3D NCAT phantom and an activity distribution obtained from patient studies. We compared the accuracy of organ activity estimates in images reconstructed with and without addition of downscatter compensation from projections with and without downscatter contamination. Results: We observed that the proposed method provided substantial improvements in accuracy compared to no downscatter compensation and had accuracies comparable to reconstructions from projections without downscatter contamination. Conclusions: The results demonstrate that the proposed model-based downscatter compensation method is effective and may have a role in quantitative 131I imaging. PMID:21815394
Yang, Lixia; Mu, Yuming; Quaglia, Luiz Augusto; Tang, Qi; Guan, Lina; Wang, Chunmei; Shih, Ming Chi
2012-01-01
The study aim was to compare two different stress echocardiography interpretation techniques based on the correlation with thrombolysis in myocardial infarction (TIMI) flow grading from acute coronary syndrome (ACS) patients. Forty-one patients with suspected ACS were studied before diagnostic coronary angiography with myocardial contrast echocardiography (MCE) at rest and at stress. The correlation between visual interpretation of MCE and TIMI flow grade was significant, as was the correlation between the quantitative analysis (myocardial perfusion parameters: A, β, and A × β) and TIMI flow grade. MCE visual interpretation and TIMI flow grade had a high degree of agreement in diagnosing myocardial perfusion abnormality. If one considers TIMI flow grade <3 as abnormal, MCE visual interpretation at rest had 73.1% accuracy with 58.2% sensitivity and 84.2% specificity, and at stress had 80.4% accuracy with 76.6% sensitivity and 83.3% specificity. The MCE quantitative analysis had better accuracy, with 100% agreement across the different levels of TIMI flow grading. MCE quantitative analysis at stress showed a direct correlation with TIMI flow grade, more significant than that of the visual interpretation technique. Further studies could measure the clinical relevance of this more objective approach to managing acute coronary syndrome patients before percutaneous coronary intervention (PCI). PMID:22778555
CSE database: extended annotations and new recommendations for ECG software testing.
Smíšek, Radovan; Maršánová, Lucie; Němcová, Andrea; Vítek, Martin; Kozumplík, Jiří; Nováková, Marie
2017-08-01
Nowadays, cardiovascular diseases represent the most common cause of death in western countries. Among various examination techniques, electrocardiography (ECG) is still a highly valuable tool used for the diagnosis of many cardiovascular disorders. In order to diagnose a person based on ECG, cardiologists can use automatic diagnostic algorithms. Research in this area is still necessary. In order to compare various algorithms correctly, it is necessary to test them on standard annotated databases, such as the Common Standards for Quantitative Electrocardiography (CSE) database. According to Scopus, the CSE database is the second most cited standard database. There were two main objectives in this work. First, new diagnoses were added to the CSE database, which extended its original annotations. Second, new recommendations for diagnostic software quality estimation were established. The ECG recordings were diagnosed by five new cardiologists independently, and in total, 59 different diagnoses were found. Such a large number of diagnoses is unique, even in terms of standard databases. Based on the cardiologists' diagnoses, a four-round consensus (4R consensus) was established. Such a 4R consensus means a correct final diagnosis, which should ideally be the output of any tested classification software. The accuracy of the cardiologists' diagnoses compared with the 4R consensus was the basis for the establishment of accuracy recommendations. The accuracy was determined in terms of sensitivity = 79.20-86.81%, positive predictive value = 79.10-87.11%, and the Jaccard coefficient = 72.21-81.14%, respectively. Within these ranges, the accuracy of the software is comparable with the accuracy of cardiologists. Such quantification of correct-classification accuracy is unique. Diagnostic software developers can objectively evaluate the success of their algorithm and promote its further development. The annotations and recommendations proposed in this work will allow for faster development and testing of classification software. As a result, this might facilitate cardiologists' work and lead to faster diagnoses and earlier treatment.
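A toy computation of the agreement measures used above, treating one recording's diagnoses from a cardiologist and from the 4R consensus as sets; the diagnosis labels are invented.

```python
# Illustrative computation of sensitivity, positive predictive value, and the
# Jaccard coefficient over per-recording diagnosis sets (labels are invented).
consensus = {"sinus rhythm", "LVH", "old inferior MI"}
proposed  = {"sinus rhythm", "LVH", "LBBB"}

tp = len(consensus & proposed)
sensitivity = tp / len(consensus)        # found / all true diagnoses
ppv = tp / len(proposed)                 # found / all proposed diagnoses
jaccard = tp / len(consensus | proposed) # overlap / union
print(sensitivity, ppv, jaccard)         # 0.67, 0.67, 0.5
```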
Towards the automatization of the Foucault knife-edge quantitative test
NASA Astrophysics Data System (ADS)
Rodríguez, G.; Villa, J.; Martínez, G.; de la Rosa, I.; Ivanov, R.
2017-08-01
Given the increasing need for simple, economical and reliable methods and instruments for performing quality tests of optical surfaces such as mirrors and lenses, in recent years we resumed the study of the long-forgotten Foucault knife-edge test from the point of view of physical optics, ultimately achieving a closed mathematical expression that directly relates the knife-edge position along the paraxial displacement axis with the observable irradiance pattern. This expression later allowed us to propose a quantitative methodology for estimating the wavefront error of an aspherical mirror with precision akin to interferometry. In this work, we present a further improved digital image processing algorithm in which the sigmoidal cost function for calculating the transient slope point of each associated intensity-illumination profile is replaced by a simplified version of it, making the whole process of estimating the wavefront gradient markedly more stable and efficient. At the same time, the Fourier-based algorithm employed for gradient integration has been replaced by a regularized quadratic cost function that allows a considerably easier introduction of the region of interest (ROI) and, solved by means of a conjugate gradient method, largely increases the overall accuracy and efficiency of the algorithm. This revised approach to our methodology can be easily implemented and handled by most single-board microcontrollers on the market, enabling the implementation of a fully integrated automated test apparatus and opening a realistic path toward a stand-alone optical mirror analyzer prototype.
Use of magnetic resonance imaging to assess articular cartilage
Wang, Yuanyuan; Wluka, Anita E.; Jones, Graeme; Ding, Changhai
2012-01-01
Magnetic resonance imaging (MRI) enables a noninvasive, three-dimensional assessment of the entire joint, simultaneously allowing the direct visualization of articular cartilage. Thus, MRI has become the imaging modality of choice in both clinical and research settings of musculoskeletal diseases, particularly for osteoarthritis (OA). Although radiography, the current gold standard for the assessment of OA, has had recent significant technical advances, radiographic methods have significant limitations when used to measure disease progression. MRI allows accurate and reliable assessment of articular cartilage which is sensitive to change, providing the opportunity to better examine and understand preclinical and very subtle early abnormalities in articular cartilage, prior to the onset of radiographic disease. MRI enables quantitative (cartilage volume and thickness) and semiquantitative assessment of articular cartilage morphology, and quantitative assessment of cartilage matrix composition. Cartilage volume and defect measures have demonstrated adequate validity, accuracy, reliability and sensitivity to change. They correlate with radiographic changes and clinical outcomes such as pain and joint replacement. Measures of cartilage matrix composition show promise as they seem to relate to cartilage morphology and symptoms. MRI-derived cartilage measurements provide a useful tool for exploring the effect of modifiable factors on articular cartilage prior to clinical disease and identifying potential preventive strategies. MRI represents a useful approach to monitoring the natural history of OA and evaluating the effect of therapeutic agents. MRI assessment of articular cartilage has tremendous potential for large-scale epidemiological studies of OA progression, and for clinical trials of treatment response to disease-modifying OA drugs. PMID:22870497
Pahn, Gregor; Skornitzke, Stephan; Schlemmer, Hans-Peter; Kauczor, Hans-Ulrich; Stiller, Wolfram
2016-01-01
Based on the guidelines from "Report 87: Radiation Dose and Image-quality Assessment in Computed Tomography" of the International Commission on Radiation Units and Measurements (ICRU), a software framework for automated quantitative image quality analysis was developed and its usability for a variety of scientific questions demonstrated. The extendable framework currently implements the calculation of the recommended Fourier image quality (IQ) metrics modulation transfer function (MTF) and noise-power spectrum (NPS), and additional IQ quantities such as noise magnitude, CT number accuracy, uniformity across the field-of-view, contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) of simulated lesions for a commercially available cone-beam phantom. Sample image data were acquired with different scan and reconstruction settings on CT systems from different manufacturers. Spatial resolution is analyzed in terms of edge-spread function, line-spread-function, and MTF. 3D NPS is calculated according to ICRU Report 87, and condensed to 2D and radially averaged 1D representations. Noise magnitude, CT numbers, and uniformity of these quantities are assessed on large samples of ROIs. Low-contrast resolution (CNR, SNR) is quantitatively evaluated as a function of lesion contrast and diameter. Simultaneous automated processing of several image datasets allows for straightforward comparative assessment. The presented framework enables systematic, reproducible, automated and time-efficient quantitative IQ analysis. Consistent application of the ICRU guidelines facilitates standardization of quantitative assessment not only for routine quality assurance, but for a number of research questions, e.g. the comparison of different scanner models or acquisition protocols, and the evaluation of new technology or reconstruction methods. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
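A simplified sketch of one of the implemented metrics, a radially averaged noise-power spectrum estimated from uniform-phantom noise ROIs; the normalization is abbreviated relative to ICRU Report 87 and the ROI stack here is synthetic white noise.

```python
# Simplified sketch of a radially averaged 2D noise-power spectrum (NPS)
# from uniform-phantom noise ROIs, in the spirit of ICRU Report 87.
import numpy as np

pixel_mm = 0.5
rois = np.random.default_rng(3).normal(0.0, 10.0, (64, 64, 64))   # 64 ROIs of 64x64 px

# 2D NPS: ensemble average of |FFT|^2 of mean-subtracted ROIs.
dft = np.fft.fftshift(np.fft.fft2(rois - rois.mean(axis=(1, 2), keepdims=True)),
                      axes=(-2, -1))
nps2d = (np.abs(dft) ** 2).mean(axis=0) * (pixel_mm ** 2) / (64 * 64)

# Radial average: bin by distance from the zero-frequency center.
y, x = np.indices(nps2d.shape)
r = np.hypot(x - 32, y - 32).astype(int)
nps1d = np.bincount(r.ravel(), weights=nps2d.ravel()) / np.bincount(r.ravel())
print(nps1d[:5])
```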
Principles of Quantitative MR Imaging with Illustrated Review of Applicable Modular Pulse Diagrams.
Mills, Andrew F; Sakai, Osamu; Anderson, Stephan W; Jara, Hernan
2017-01-01
Continued improvements in diagnostic accuracy using magnetic resonance (MR) imaging will require development of methods for tissue analysis that complement traditional qualitative MR imaging studies. Quantitative MR imaging is based on measurement and interpretation of tissue-specific parameters independent of experimental design, compared with qualitative MR imaging, which relies on interpretation of tissue contrast that results from experimental pulse sequence parameters. Quantitative MR imaging represents a natural next step in the evolution of MR imaging practice, since quantitative MR imaging data can be acquired using currently available qualitative imaging pulse sequences without modifications to imaging equipment. The article presents a review of the basic physical concepts used in MR imaging and how quantitative MR imaging is distinct from qualitative MR imaging. Subsequently, the article reviews the hierarchical organization of major applicable pulse sequences used in this article, with the sequences organized into conventional, hybrid, and multispectral sequences capable of calculating the main tissue parameters of T1, T2, and proton density. While this new concept offers the potential for improved diagnostic accuracy and workflow, awareness of this extension to qualitative imaging is generally low. This article reviews the basic physical concepts in MR imaging, describes commonly measured tissue parameters in quantitative MR imaging, and presents the major available pulse sequences used for quantitative MR imaging, with a focus on the hierarchical organization of these sequences. © RSNA, 2017.
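As a concrete example of the parameter calculation step, T2 can be estimated from a multi-echo series under the monoexponential model S(TE) = S0 · exp(−TE/T2); a minimal single-voxel sketch with synthetic signals:

```python
# Minimal sketch of quantitative T2 mapping from a multi-echo acquisition,
# assuming monoexponential decay S(TE) = S0 * exp(-TE / T2). Signals are
# synthetic for a single voxel.
import numpy as np

te_ms = np.array([10.0, 30.0, 50.0, 70.0, 90.0])
true_t2, s0 = 80.0, 1000.0
signal = s0 * np.exp(-te_ms / true_t2)

# Log-linear least-squares fit: ln S = ln S0 - TE / T2.
slope, intercept = np.polyfit(te_ms, np.log(signal), 1)
print(f"T2 = {-1.0 / slope:.1f} ms, S0 = {np.exp(intercept):.0f}")
```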
The adverse outcome pathway (AOP) framework can be used to support the use of mechanistic toxicology data as a basis for risk assessment. For certain risk contexts this includes defining, quantitative linkages between the molecular initiating event (MIE) and subsequent key events...
DMS-prefiltered mass spectrometry for the detection of biomarkers
NASA Astrophysics Data System (ADS)
Coy, Stephen L.; Krylov, Evgeny V.; Nazarov, Erkinjon G.
2008-04-01
Technologies based on Differential Mobility Spectrometry (DMS) are ideally matched to rapid, sensitive, and selective detection of chemicals like biomarkers. Biomarkers linked to exposure to radiation, exposure to CWAs, exposure to toxic materials (TICs and TIMs) and to specific diseases are being examined in a number of laboratories. Screening for these types of exposure can be improved in accuracy and greatly sped up by using DMS-MS instead of slower techniques like LC-MS and GC-MS. We have performed an extensive series of tests with nanospray-DMS-mass spectrometry and standalone nanospray-DMS, obtaining detailed information on chemistry and detectivity. DMS-MS systems implemented with low-resolution, low-cost, portable mass-spectrometry systems are very promising. Low-resolution mass spectrometry alone would be inadequate for the task but, with DMS pre-filtration to suppress interferences, can be quite effective, even for quantitative measurement. Bio-fluids and digests are well suited to ionization by electrospray and detection by mass spectrometry, but signals from critical markers are overwhelmed by chemical noise from unrelated species, making essential quantitative analysis impossible. Sionex and collaborators have presented data using DMS to suppress chemical noise, allowing detection of cancer biomarkers in 10,000-fold excess of normal products [1,2]. In addition, a linear dynamic range of approximately 2,000 has been demonstrated with accurate quantitation [3]. We will review the range of possible applications and present new data on DMS-MS biomarker detection.
Danish, Shabbar F; Baltuch, Gordon H; Jaggi, Jurg L; Wong, Stephen
2008-04-01
Microelectrode recording during deep brain stimulation surgery is a useful adjunct for subthalamic nucleus (STN) localization. We hypothesize that information in the nonspike background activity can help identify STN boundaries. We present results from a novel quantitative analysis that accomplishes this goal. Thirteen consecutive microelectrode recordings were retrospectively analyzed. Spikes were removed from the recordings with an automated algorithm. The remaining "despiked" signals were converted via root mean square amplitude and curve length calculations into "feature profile" time series. Subthalamic nucleus boundaries determined by inspection, based on sustained deviations from baseline for each feature profile, were compared against those determined intraoperatively by the clinical neurophysiologist. Feature profile activity within STN exhibited a sustained rise in 10 of 13 tracks (77%). The sensitivity of STN entry was 60% and 90% for curve length and root mean square amplitude, respectively, when agreement within 0.5 mm of the neurophysiologist's prediction was used. Sensitivities were 70% and 100% for 1 mm accuracy. Exit point sensitivities were 80% and 90% for both features within 0.5 mm and 1.0 mm, respectively. Reproducible activity patterns in deep brain stimulation microelectrode recordings can allow accurate identification of STN boundaries. Quantitative analyses of this type may provide useful adjunctive information for electrode placement in deep brain stimulation surgery.
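A sketch of the two feature profiles named above, computed over sliding windows of a (synthetic) despiked signal; the sampling rate and window length are assumptions, not the study's settings.

```python
# Sketch of the two background-activity features described above, computed
# over sliding windows of a despiked microelectrode signal (synthetic here).
import numpy as np

fs = 24_000                                        # sampling rate (assumed)
sig = np.random.default_rng(4).normal(0, 1, 10 * fs)
win = fs // 2                                      # 0.5 s windows

rms, clen = [], []
for start in range(0, sig.size - win, win):
    w = sig[start:start + win]
    rms.append(np.sqrt(np.mean(w ** 2)))           # root mean square amplitude
    clen.append(np.abs(np.diff(w)).sum())          # curve length of the window
# Sustained rises of either profile above baseline mark putative STN entry.
print(rms[:3], clen[:3])
```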
Aronoff, Justin M; Yoon, Yang-soo; Soli, Sigfrid D
2010-06-01
Stratified sampling plans can increase the accuracy and facilitate the interpretation of a dataset characterizing a large population. However, such sampling plans have found minimal use in hearing aid (HA) research, in part because of a paucity of quantitative data on the characteristics of HA users. The goal of this study was to devise a quantitatively derived stratified sampling plan for HA research, so that such studies will be more representative and generalizable, and the results obtained using this method are more easily reinterpreted as the population changes. Pure-tone average (PTA) and age information were collected for 84,200 HAs acquired in 2006 and 2007. The distribution of PTA and age was quantified for each HA type and for a composite of all HA users. Based on their respective distributions, PTA and age were each divided into three groups, the combination of which defined the stratification plan. The most populous PTA and age group was also subdivided, allowing greater homogeneity within strata. Finally, the percentage of users in each stratum was calculated. This article provides a stratified sampling plan for HA research, based on a quantitative analysis of the distribution of PTA and age for HA users. Adopting such a sampling plan will make HA research results more representative and generalizable. In addition, data acquired using such plans can be reinterpreted as the HA population changes.
Mobility Spectrometer Studies on Hydrazine and Ammonia Detection
NASA Technical Reports Server (NTRS)
Niu, William; Eiceman, Gary; Szumlas, Andrew; Lewis, John
2011-01-01
An airborne vapor analyzer for detecting sub- to low-parts-per-million (ppm) hydrazine in the presence of higher concentration levels of ammonia has been under development for the Orion program. The detector is based on ambient pressure ionization and ion mobility characterization. The detector encompasses: 1) a membrane inlet to exclude particulates and aerosols from the analyzer inlet; 2) a method to separate hydrazine from ammonia, which would otherwise lead to loss of calibration and quantitative accuracy for the hydrazine determination; and 3) response and quantitative determinations for both hydrazine and ammonia. Laboratory studies were made to explore some of these features, including mobility measurements mindful of power, size, and weight issues. The study recommended the use of a mobility spectrometer of traditional design with a reagent gas and equipped with an inlet transfer line of bonded-phase fused silica tube. The inlet transfer line provided gas phase separation of neutrals of ammonia from hydrazine at 50 °C, simplifying significantly the ionization chemistry that underlies response in a mobility spectrometer. Performance of the analyzer was acceptable between 30 and 80 °C for both the pre-fractionation column and the drift tube. An inlet comprised of a combined membrane with valve-less injector allowed high-speed quantitative determination of ammonia and hydrazine without cross-reactivity from common metabolites such as alcohols, esters, and aldehydes. Preliminary test results and some of the design features are discussed.
FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts.
Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto
2010-01-01
Manufacturing processes are of great relevance nowadays, when there is constant demand for better productivity with high quality at low cost. The contribution of this work is the development of a fused smart-sensor, based on an FPGA, to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier, and a 3-axis accelerometer. Results from experimentation show that the fusion of both parameters makes it possible to obtain three times better accuracy than that obtained from the current and vibration signals used individually.
NASA Astrophysics Data System (ADS)
Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J. E.; Su, Zhangjie; Dehghani, Hamid
2015-07-01
The subject of superficial contamination and signal origins remains a widely debated topic in the field of Near Infrared Spectroscopy (NIRS), yet the concept of using the technology to monitor an injured brain, in a clinical setting, poses additional challenges concerning the quantitative accuracy of recovered parameters. Using high density diffuse optical tomography probes, quantitatively accurate parameters from different layers (skin, bone and brain) can be recovered from subject specific reconstruction models. This study assesses the use of registered atlas models for situations where subject specific models are not available. Data simulated from subject specific models were reconstructed using the 8 registered atlas models implementing a regional (layered) parameter recovery in NIRFAST. A 3-region recovery based on the atlas model yielded recovered brain saturation values which were accurate to within 4.6% (percentage error) of the simulated values, validating the technique. The recovered saturations in the superficial regions were not quantitatively accurate. These findings highlight differences in superficial (skin and bone) layer thickness between the subject and atlas models. This layer thickness mismatch was propagated through the reconstruction process decreasing the parameter accuracy.
Quantification Bias Caused by Plasmid DNA Conformation in Quantitative Real-Time PCR Assay
Lin, Chih-Hui; Chen, Yu-Chieh; Pan, Tzu-Ming
2011-01-01
Quantitative real-time PCR (qPCR) is the gold standard for the quantification of specific nucleic acid sequences. However, a serious concern has been revealed in a recent report: supercoiled plasmid standards cause significant over-estimation in qPCR quantification. In this study, we investigated the effect of plasmid DNA conformation on the quantification of DNA and the efficiency of qPCR. Our results suggest that plasmid DNA conformation has significant impact on the accuracy of absolute quantification by qPCR. DNA standard curves shifted significantly among plasmid standards with different DNA conformations. Moreover, the choice of DNA measurement method and plasmid DNA conformation may also contribute to the measurement error of DNA standard curves. Due to the multiple effects of plasmid DNA conformation on the accuracy of qPCR, efforts should be made to assure the highest consistency of plasmid standards for qPCR. Thus, we suggest that the conformation, preparation, quantification, purification, handling, and storage of standard plasmid DNA should be described and defined in the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) to assure the reproducibility and accuracy of qPCR absolute quantification. PMID:22194997
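For reference, the copy number assigned to a plasmid standard follows from its measured mass and length; this mass measurement is precisely where conformation-dependent bias propagates into the standard curve. A minimal sketch of the standard calculation:

```python
# Standard copy-number calculation for a plasmid qPCR standard; the mass
# reading is where conformation-dependent quantification bias enters.
AVOGADRO = 6.022e23
BP_WEIGHT = 650.0            # average g/mol per double-stranded base pair

def plasmid_copies(mass_ng: float, plasmid_bp: int) -> float:
    grams = mass_ng * 1e-9
    return grams * AVOGADRO / (plasmid_bp * BP_WEIGHT)

print(f"{plasmid_copies(1.0, 4000):.2e} copies")   # 1 ng of a 4 kb plasmid
```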
Tu, Chengjian; Shen, Shichen; Sheng, Quanhu; Shyr, Yu; Qu, Jun
2017-01-30
Reliable quantification of low-abundance proteins in complex proteomes is challenging largely owing to the limited number of spectra/peptides identified. In this study we developed a straightforward method to improve the quantitative accuracy and precision of proteins by strategically retrieving the less confident peptides that were previously filtered out using the standard target-decoy search strategy. The filtered-out MS/MS spectra matched to confidently-identified proteins were recovered, and the peptide-spectrum-match FDR were re-calculated and controlled at a confident level of FDR≤1%, while protein FDR maintained at ~1%. We evaluated the performance of this strategy in both spectral count- and ion current-based methods. >60% increase of total quantified spectra/peptides was respectively achieved for analyzing a spike-in sample set and a public dataset from CPTAC. Incorporating the peptide retrieval strategy significantly improved the quantitative accuracy and precision, especially for low-abundance proteins (e.g. one-hit proteins). Moreover, the capacity of confidently discovering significantly-altered proteins was also enhanced substantially, as demonstrated with two spike-in datasets. In summary, improved quantitative performance was achieved by this peptide recovery strategy without compromising confidence of protein identification, which can be readily implemented in a broad range of quantitative proteomics techniques including label-free or labeling approaches. We hypothesize that more quantifiable spectra and peptides in a protein, even including less confident peptides, could help reduce variations and improve protein quantification. Hence the peptide retrieval strategy was developed and evaluated in two spike-in sample sets with different LC-MS/MS variations using both MS1- and MS2-based quantitative approach. The list of confidently identified proteins using the standard target-decoy search strategy was fixed and more spectra/peptides with less confidence matched to confident proteins were retrieved. However, the total peptide-spectrum-match false discovery rate (PSM FDR) after retrieval analysis was still controlled at a confident level of FDR≤1%. As expected, the penalty for occasionally incorporating incorrect peptide identifications is negligible by comparison with the improvements in quantitative performance. More quantifiable peptides, lower missing value rate, better quantitative accuracy and precision were significantly achieved for the same protein identifications by this simple strategy. This strategy is theoretically applicable for any quantitative approaches in proteomics and thereby provides more quantitative information, especially on low-abundance proteins. Published by Elsevier B.V.
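A conceptual sketch of the retrieval step on toy PSM records: previously filtered-out spectra matching confidently identified proteins are recovered, and the decoy-based PSM FDR is re-checked against the 1% threshold. The record fields and protein IDs are hypothetical, not the authors' data structures.

```python
# Conceptual sketch of retrieving filtered-out PSMs that match confident
# proteins while keeping the decoy-estimated PSM FDR at or below 1%.
def psm_fdr(psms):
    decoys = sum(p["is_decoy"] for p in psms)
    targets = len(psms) - decoys
    return decoys / max(targets, 1)

all_psms = [  # toy PSM records: protein, decoy flag, passed the standard filter?
    {"protein": "P12345", "is_decoy": False, "passed_standard_filter": True},
    {"protein": "P12345", "is_decoy": False, "passed_standard_filter": False},
    {"protein": "Q67890", "is_decoy": False, "passed_standard_filter": True},
    {"protein": "XXX_decoy", "is_decoy": True, "passed_standard_filter": False},
]
confident_proteins = {"P12345", "Q67890"}          # fixed protein list (placeholder IDs)

retained = [p for p in all_psms if p["passed_standard_filter"]]
recovered = [p for p in all_psms
             if not p["passed_standard_filter"] and p["protein"] in confident_proteins]

combined = retained + recovered
quantify = combined if psm_fdr(combined) <= 0.01 else retained
print(len(quantify), "PSMs available for quantification")
```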
An ensemble model of QSAR tools for regulatory risk assessment.
Pradeep, Prachi; Povinelli, Richard J; White, Shannon; Merrill, Stephen J
2016-01-01
Quantitative structure activity relationships (QSARs) are theoretical models that relate a quantitative measure of chemical structure to a physical property or a biological effect. QSAR predictions can be used for chemical risk assessment for protection of human and environmental health, which makes them interesting to regulators, especially in the absence of experimental data. For compatibility with regulatory use, QSAR models should be transparent, reproducible and optimized to minimize the number of false negatives. In silico QSAR tools are gaining wide acceptance as a faster alternative to otherwise time-consuming clinical and animal testing methods. However, different QSAR tools often make conflicting predictions for a given chemical and may also vary in their predictive performance across different chemical datasets. In a regulatory context, conflicting predictions raise interpretation, validation and adequacy concerns. To address these concerns, ensemble learning techniques in the machine learning paradigm can be used to integrate predictions from multiple tools. By leveraging various underlying QSAR algorithms and training datasets, the resulting consensus prediction should yield better overall predictive ability. We present a novel ensemble QSAR model using Bayesian classification. The model allows for varying a cut-off parameter that allows for a selection in the desirable trade-off between model sensitivity and specificity. The predictive performance of the ensemble model is compared with four in silico tools (Toxtree, Lazar, OECD Toolbox, and Danish QSAR) to predict carcinogenicity for a dataset of air toxins (332 chemicals) and a subset of the gold carcinogenic potency database (480 chemicals). Leave-one-out cross validation results show that the ensemble model achieves the best trade-off between sensitivity and specificity (accuracy: 83.8 % and 80.4 %, and balanced accuracy: 80.6 % and 80.8 %) and highest inter-rater agreement [kappa ( κ ): 0.63 and 0.62] for both the datasets. The ROC curves demonstrate the utility of the cut-off feature in the predictive ability of the ensemble model. This feature provides an additional control to the regulators in grading a chemical based on the severity of the toxic endpoint under study.
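A hedged sketch of the consensus idea (not the authors' Bayesian implementation): binary carcinogenicity calls from several QSAR tools are combined into a weighted score and graded against a tunable cut-off, trading sensitivity against specificity. The weights are invented.

```python
# Hedged sketch of an ensemble consensus over QSAR tool predictions with a
# tunable cut-off; reliability weights below are invented for illustration.
predictions = {"Toxtree": 1, "Lazar": 0, "OECD Toolbox": 1, "Danish QSAR": 1}
reliability = {"Toxtree": 0.70, "Lazar": 0.65, "OECD Toolbox": 0.80, "Danish QSAR": 0.75}

score = sum(reliability[t] * p for t, p in predictions.items()) / sum(reliability.values())

cutoff = 0.5          # lower it to favour sensitivity (fewer false negatives)
print("carcinogen" if score >= cutoff else "non-carcinogen", round(score, 2))
```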
Wang, Shunhai; Bobst, Cedric E.; Kaltashov, Igor A.
2018-01-01
Transferrin (Tf) is an 80 kDa iron-binding protein which is viewed as a promising drug carrier to target the central nervous system due to its ability to penetrate the blood-brain barrier (BBB). Among the many challenges during the development of Tf-based therapeutics, sensitive and accurate quantitation of the administered Tf in cerebrospinal fluid (CSF) remains particularly difficult due to the presence of abundant endogenous Tf. Herein, we describe the development of a new LC-MS based method for sensitive and accurate quantitation of exogenous recombinant human Tf in rat CSF. By taking advantage of a His-tag present in recombinant Tf and applying Ni affinity purification, the exogenous hTf can be greatly enriched from rat CSF, despite the presence of the abundant endogenous protein. Additionally, we applied a newly developed (18)O-labeling technique that can generate internal standards at the protein level, which greatly improved the accuracy and robustness of quantitation. The developed method was investigated for linearity, accuracy, precision and lower limit of quantitation, all of which met the commonly accepted criteria for bioanalytical method validation. PMID:26307718
Gilliaux, Maxime; Lejeune, Thierry M; Sapin, Julien; Dehez, Bruno; Stoquart, Gaëtan; Detrembleur, Christine
2016-04-01
Kinematics is recommended for the quantitative assessment of upper limb movements. The aims of this study were to determine the age effects on upper limb kinematics and establish normative values in healthy subjects. Three hundred and seventy healthy subjects, aged 3-93 years, participated in the study. They performed two unidirectional and two geometrical tasks ten consecutive times with the REAplan, a distal effector robotic device that allows upper limb displacements in the horizontal plane. Twenty-six kinematic indices were computed for the four tasks. For the four tasks, nineteen of the computed kinematic indices showed an age effect. Seventeen indices (the accuracy, speed and smoothness indices and the reproducibility of the accuracy, speed and smoothness) improved in young subjects aged 3-30 years, showed stabilization in adults aged 30-60 years and declined in elderly subjects aged 60-93 years. Additionally, for both geometrical tasks, the speed index exhibited a decrease throughout life. Finally, a principal component analysis provided the relations between the kinematic indices, tasks and subjects' age. This study is the first to assess age effects on upper limb kinematics and establish normative values in subjects aged 3-93 years.
Detection of bio-signature by microscopy and mass spectrometry
NASA Astrophysics Data System (ADS)
Tulej, M.; Wiesendanger, R.; Neuland, M. B.; Meyer, S.; Wurz, P.; Neubeck, A.; Ivarsson, M.; Riedo, V.; Moreno-Garcia, P.; Riedo, A.; Knopp, G.
2017-09-01
We demonstrate detection of micro-sized fossilized bacteria by means of microscopy and mass spectrometry. The characteristic structures of lifelike forms are visualized with micrometre spatial resolution, and mass spectrometric analyses deliver the elemental and isotopic composition of the host and fossilized materials. Our studies show that high selectivity in isolating fossilized material from the host phase can be achieved by applying microscope visualization (location), a laser ablation ion source with a sufficiently small laser spot size, and a depth profiling method. The fossilized features can thus be well isolated from the host phase, and the mass spectrometric measurements can be conducted with sufficiently high accuracy and precision, yielding the quantitative elemental and isotopic composition of micro-sized objects. The current performance of the instrument allows measurement of isotope fractionation at the per mill level and an unambiguous determination of the origin of the investigated species by combining optical visualization of the samples (morphology and texture) with chemical characterization of the host and of the micro-sized structures embedded in it. Our isotope analyses involved the bio-relevant B, C, S, and Ni isotopes, which could be measured with sufficient accuracy to draw conclusions about the nature of the micro-sized objects.
Predicting bone strength with ultrasonic guided waves
Bochud, Nicolas; Vallet, Quentin; Minonzio, Jean-Gabriel; Laugier, Pascal
2017-01-01
Recent bone quantitative ultrasound approaches exploit the multimode waveguide response of long bones for assessing properties such as cortical thickness and stiffness. Clinical applications remain, however, challenging, as the impact of soft tissue on guided waves characteristics is not fully understood yet. In particular, it must be clarified whether soft tissue must be incorporated in waveguide models needed to infer reliable cortical bone properties. We hypothesize that an inverse procedure using a free plate model can be applied to retrieve the thickness and stiffness of cortical bone from experimental data. This approach is first validated on a series of laboratory-controlled measurements performed on assemblies of bone- and soft tissue mimicking phantoms and then on in vivo measurements. The accuracy of the estimates is evaluated by comparison with reference values. To further support our hypothesis, these estimates are subsequently inserted into a bilayer model to test its accuracy. Our results show that the free plate model allows retrieving reliable waveguide properties, despite the presence of soft tissue. They also suggest that the more sophisticated bilayer model, although it is more precise to predict experimental data in the forward problem, could turn out to be hardly manageable for solving the inverse problem. PMID:28256568
Reevaluation of pollen quantitation by an automatic pollen counter.
Muradil, Mutarifu; Okamoto, Yoshitaka; Yonekura, Syuji; Chazono, Hideaki; Hisamitsu, Minako; Horiguchi, Shigetoshi; Hanazawa, Toyoyuki; Takahashi, Yukie; Yokota, Kunihiko; Okumura, Satoshi
2010-01-01
Accurate and detailed pollen monitoring is useful for selection of medication and for allergen avoidance in patients with allergic rhinitis. Burkard and Durham pollen samplers are commonly used, but are labor and time intensive. In contrast, automatic pollen counters allow simple real-time pollen counting; however, these instruments have difficulty in distinguishing pollen from small nonpollen airborne particles. Misidentification and underestimation rates for an automatic pollen counter were examined to improve the accuracy of the pollen count. The characteristics of the automatic pollen counter were determined in a chamber study with exposure to cedar pollens or soil grains. The cedar pollen counts were monitored in 2006 and 2007, and compared with those from a Durham sampler. The pollen counts from the automatic counter showed a good correlation (r > 0.7) with those from the Durham sampler when pollen dispersal was high, but a poor correlation (r < 0.5) when pollen dispersal was low. The new correction method, which took into account the misidentification and underestimation, improved this correlation to r > 0.7 during the pollen season. The accuracy of automatic pollen counting can be improved using a correction to include rates of underestimation and misidentification in a particular geographical area.
Chen, Qianqian; Xie, Qian; Zhao, Min; Chen, Bin; Gao, Shi; Zhang, Haishan; Xing, Hua; Ma, Qingjie
2015-01-01
To compare the diagnostic value of visual and semi-quantitative analysis of technetium-99m-polyethylene glycol, 4-arginine-glycine-aspartic acid ((99m)Tc-3PRGD2) scintimammography (SMG) for better differentiation of benign from malignant breast masses, and also to investigate the incremental role of the semi-quantitative index of SMG. A total of 72 patients with breast lesions were included in the study. Technetium-99m-3PRGD2 SMG was performed with single photon emission computed tomography (SPET) at 60 min after intravenous injection of 749 ± 86 MBq of the radiotracer. Images were evaluated by visual interpretation and by the semi-quantitative index of tumor to non-tumor (T/N) ratio, which were compared with pathology results. Receiver operating characteristics (ROC) curve analyses were performed to determine the optimal visual grade, to calculate cut-off values of semi-quantitative indices, and to compare visual and semi-quantitative diagnostic values. Among the 72 patients, 89 lesions were confirmed by histopathology after fine needle aspiration biopsy or surgery: 48 malignant and 41 benign lesions. The mean T/N ratio of (99m)Tc-3PRGD2 SMG in malignant lesions was significantly higher than that in benign lesions (P<0.05). When visual grade 2 was used as the cut-off value for the detection of primary breast cancer, the sensitivity, specificity and accuracy were 81.3%, 70.7%, and 76.4%, respectively. When a T/N ratio of 2.01 was used as the cut-off value, the sensitivity, specificity and accuracy were 79.2%, 75.6%, and 77.5%, respectively. According to the ROC analysis, the area under the curve for semi-quantitative analysis was higher than that for visual analysis, but the difference was not statistically significant (P=0.372). Compared with visual analysis or semi-quantitative analysis alone, the sensitivity, specificity and accuracy of visual analysis combined with semi-quantitative analysis in diagnosing primary breast cancer were higher: 87.5%, 82.9%, and 85.4%, respectively. The area under the curve was 0.891. Results of the present study suggest that semi-quantitative and visual analyses showed statistically similar results. The semi-quantitative analysis provided incremental value additive to visual analysis of (99m)Tc-3PRGD2 SMG for the detection of breast cancer. It seems from our results that, when the tumor was located in the medial part of the breast, the semi-quantitative analysis gave better diagnostic results.
USDA-ARS?s Scientific Manuscript database
Quantitative real-time polymerase chain reaction (qRT-PCR) is the most important tool in measuring levels of gene expression due to its accuracy, specificity, and sensitivity. However, the accuracy of qRT-PCR analysis strongly depends on transcript normalization using stably expressed reference gene...
Regulation of Memory Accuracy with Multiple Answers: The Plurality Option
ERIC Educational Resources Information Center
Luna, Karlos; Higham, Philip A.; Martin-Luengo, Beatriz
2011-01-01
We report two experiments that investigated the regulation of memory accuracy with a new regulatory mechanism: the plurality option. This mechanism is closely related to the grain-size option but involves control over the number of alternatives contained in an answer rather than the quantitative boundaries of a single answer. Participants were…
Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require Monte-Carlo simulations, which makes quantitative fitting of the model to human data computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that reduce simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
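The accuracy side of such comparisons reduces to standard signal-detection integrals. A minimal sketch of the textbook M-alternative localization accuracy (a generic SDT formula, not the paper's exact analytic expressions):

```python
# Sketch: signal-detection-theory accuracy for M-alternative target
# localization. P(correct) is the chance that the target location's noisy
# response exceeds all M-1 distractor responses (standard SDT result).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def p_correct(d_prime, m):
    """P(correct) = integral of pdf(x - d') * cdf(x)**(m-1) over x."""
    integrand = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (m - 1)
    val, _ = quad(integrand, -np.inf, np.inf)
    return val

for d in (0.5, 1.0, 2.0):                 # assumed detectability levels
    print(d, round(p_correct(d, m=8), 3))  # 8-location search display
```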
Lee, Won-Joon; Wilkinson, Caroline M; Hwang, Hyeon-Shik; Lee, Sang-Mi
2015-05-01
Accuracy is the most important factor supporting the reliability of forensic facial reconstruction (FFR) when compared to the corresponding actual face. A number of methods have been employed to evaluate the objective accuracy of FFR. Recently, attempts have been made to measure the degree of resemblance between a computer-generated FFR and the actual face by geometric surface comparison. In this study, three FFRs were produced employing live adult Korean subjects and three-dimensional computerized modeling software. The deviations of the facial surfaces between each FFR and the head scan CT of the corresponding subject were analyzed in reverse modeling software. The results were compared with those from a previous study that applied the same methodology as this study except for the average facial soft tissue depth dataset. The three FFRs of this study, which applied the updated dataset, demonstrated smaller deviation errors between the facial surfaces of the FFR and the corresponding subject than those from the previous study. The results suggest that appropriate average tissue depth data are important for increasing the quantitative accuracy of FFR. © 2015 American Academy of Forensic Sciences.
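A minimal sketch of the underlying surface-deviation computation (nearest-neighbour distances between two point clouds, with synthetic arrays standing in for the FFR and CT surfaces):

```python
# Sketch: quantifying FFR-to-subject surface deviation as nearest-neighbour
# distances between two 3D point clouds, a stand-in for the reverse-modeling
# software comparison (synthetic vertex arrays).
import numpy as np
from scipy.spatial import cKDTree

ffr_vertices = np.random.rand(5000, 3)   # placeholder for FFR surface points
ct_vertices = np.random.rand(8000, 3)    # placeholder for head-scan CT surface

tree = cKDTree(ct_vertices)
deviations, _ = tree.query(ffr_vertices)  # per-vertex closest distance
print(f"mean deviation: {deviations.mean():.3f}, "
      f"95th percentile: {np.percentile(deviations, 95):.3f}")
```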
Quantifying the effect of colorization enhancement on mammogram images
NASA Astrophysics Data System (ADS)
Wojnicki, Paul J.; Uyeda, Elizabeth; Micheli-Tzanakou, Evangelia
2002-04-01
Current radiological display methods provide only grayscale images of mammograms. The limitation of the image space to grayscale provides only luminance differences and textures as cues for object recognition within the image. However, color can be an important and significant cue in the detection of shapes and objects. Increased detection ability allows the radiologist to interpret the images in more detail, improving object recognition and diagnostic accuracy. Color detection experiments using our stimulus system have demonstrated that an observer can only detect an average of 140 levels of grayscale. An optimally colorized image can allow a user to distinguish 250-1000 different levels, hence increasing potential image feature detection by 2-7 times. By implementing a colorization map that follows the luminance map of the original grayscale images, the luminance profile is preserved and color is isolated as the enhancement mechanism. The effect of this enhancement mechanism on the shape, frequency composition and statistical characteristics of the Visual Evoked Potential (VEP) is analyzed and presented. Thus, the effectiveness of the image colorization is measured quantitatively using the VEP.
Wronkiewicz, Mark; Larson, Eric; Lee, Adrian Kc
2016-10-01
Brain-computer interface (BCI) technology allows users to generate actions based solely on their brain signals. However, current non-invasive BCIs generally classify brain activity recorded from surface electroencephalography (EEG) electrodes, which can hinder the application of findings from modern neuroscience research. In this study, we use source imaging, a neuroimaging technique that projects EEG signals onto the surface of the brain, in a BCI classification framework. This allowed us to incorporate prior research from functional neuroimaging to target activity from a cortical region involved in auditory attention. Classifiers trained to detect attention switches performed better with source imaging projections than with EEG sensor signals. Within source imaging, including subject-specific anatomical MRI information (instead of using a generic head model) further improved classification performance. This source-based strategy also reduced accuracy variability across three dimensionality reduction techniques, a major design choice in most BCIs. Our work shows that source imaging provides clear quantitative and qualitative advantages to BCIs and highlights the value of incorporating modern neuroscience knowledge and methods into BCI systems.
Wu, Lucia R.; Chen, Sherry X.; Wu, Yalei; Patel, Abhijit A.; Zhang, David Yu
2018-01-01
Rare DNA-sequence variants hold important clinical and biological information, but existing detection techniques are expensive, complex, allele-specific, or do not allow for significant multiplexing. Here, we report a temperature-robust polymerase-chain-reaction method, which we term blocker displacement amplification (BDA), that selectively amplifies all sequence variants, including single-nucleotide variants (SNVs), within a roughly 20-nucleotide window by 1,000-fold over wild-type sequences. This allows for easy detection and quantitation of hundreds of potential variants originally at ≤0.1% in allele frequency. BDA is compatible with inexpensive thermocycler instrumentation and employs a rationally designed competitive hybridization reaction to achieve comparable enrichment performance across annealing temperatures ranging from 56 °C to 64 °C. To show the sequence generality of BDA, we demonstrate enrichment of 156 SNVs and the reliable detection of single-digit copies. We also show that the BDA detection of rare driver mutations in cell-free DNA samples extracted from the blood plasma of lung-cancer patients is highly consistent with deep sequencing using molecular lineage tags, with a receiver operator characteristic accuracy of 95%. PMID:29805844
Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal
2018-04-01
To propose an efficient algorithm to perform dual input compartment modeling for generating perfusion maps in the liver. We implemented whole field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
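A minimal numpy sketch of the whole-field LLS idea under simplifying assumptions (no delay compensation, synthetic input and tissue curves): integrating the model equation dC/dt = ka*Ca + kp*Cp - k2*C makes it linear in the parameters, and batched 3x3 normal equations then fit every voxel in one vectorized pass:

```python
# Sketch: whole-field linear least squares for a dual-input
# single-compartment model. Two design columns (integrals of the arterial
# and portal inputs) are shared by all voxels; the third (integral of the
# tissue curve) is voxel-specific, so we solve stacked 3x3 normal equations.
import numpy as np

T, V, dt = 240, 5000, 0.25           # frames (4 fps), voxels, frame duration (s)
t = np.arange(T) * dt
ca = t * np.exp(-t / 8.0)            # assumed arterial input function
cp = t * np.exp(-t / 15.0)           # assumed portal-venous input function
C = np.random.rand(T, V)             # placeholder tissue curves, frames x voxels

cumint = lambda y: np.cumsum(y, axis=0) * dt      # simple running integral
X1, X2, X3 = cumint(ca), cumint(cp), -cumint(C)   # X3 is per-voxel

M = np.empty((V, 3, 3))
b = np.empty((V, 3))
M[:, 0, 0] = X1 @ X1
M[:, 0, 1] = M[:, 1, 0] = X1 @ X2
M[:, 1, 1] = X2 @ X2
M[:, 0, 2] = M[:, 2, 0] = X1 @ X3                 # shape (V,)
M[:, 1, 2] = M[:, 2, 1] = X2 @ X3
M[:, 2, 2] = np.einsum('tv,tv->v', X3, X3)
b[:, 0], b[:, 1] = X1 @ C, X2 @ C
b[:, 2] = np.einsum('tv,tv->v', X3, C)

ka, kp, k2 = np.linalg.solve(M, b).T              # kinetic maps in one pass
```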
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narasimha S.
2012-01-01
In this paper, a modeling method based on data reduction is investigated, which incorporates pre-analyzed MERRA atmospheric fields for quantitative estimates of the uncertainties introduced in integrated path differential absorption methods for the sensing of various molecules, including CO2. This approach extends our previously developed lidar modeling framework and allows effective on- and offline wavelength optimization and weighting function analysis to minimize interference effects such as those due to temperature sensitivity and water vapor absorption. The new simulation methodology differs from the previous implementation in that it allows analysis of atmospheric effects over annual spans and the entire Earth coverage, which was achieved owing to the data reduction methods employed. The effectiveness of the proposed simulation approach is demonstrated with application to the mixing ratio retrievals for the future ASCENDS mission. Independent analysis of multiple accuracy-limiting factors, including temperature and water vapor interferences and selected system parameters, is further used to identify favorable spectral regions as well as wavelength combinations facilitating the reduction of total errors in the retrieved XCO2 values.
[Real time 3D echocardiography]
NASA Technical Reports Server (NTRS)
Bauer, F.; Shiota, T.; Thomas, J. D.
2001-01-01
Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical use, some improvements are still necessary to increase its user-friendliness. Real time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.
Evaluation of in vivo quantification accuracy of the Ingenuity-TF PET/MR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maus, Jens, E-mail: j.maus@hzdr.de; Schramm, Georg; Hofheinz, Frank
2015-10-15
Purpose: The quantitative accuracy of standardized uptake values (SUVs) and tracer kinetic uptake parameters in patient investigations strongly depends on accurate determination of regional activity concentrations in positron emission tomography (PET) data. This determination rests on the assumption that the given scanner calibration is valid in vivo. In a previous study, we introduced a method to test this assumption. This method allows identification of discrepancies in quantitative accuracy in vivo by comparing activity concentrations of urine samples measured in a well-counter with activity concentrations extracted from PET images of the bladder. In the present study, we have applied this method to the Philips Ingenuity-TF PET/MR since, at the present stage, the absolute quantitative accuracy of combined PET/MR systems is still under investigation. Methods: Twenty-one clinical whole-body F18-FDG scans were included in this study. The bladder region was imaged as the last bed position and urine samples were collected afterward. PET images were reconstructed including MR-based attenuation correction with and without truncation compensation, and 3D regions-of-interest (ROIs) of the bladder were delineated by three observers. To exclude partial volume effects, ROIs were concentrically shrunk by 8-10 mm. Then, activity concentrations were determined in the PET images for the bladder and for the urine by measuring the samples in a calibrated well-counter. In addition, linearity measurements of SUV vs singles rate and measurements of the stability of the coincidence rate of "true" events of the PET/MR system were performed over a period of 4 months. Results: The measured in vivo activity concentrations were significantly lower in PET/MR than in the well-counter, with a ratio of the former to the latter of 0.756 ± 0.060 (mean ± std. dev.), a range of 0.604-0.858, and a P value of 3.9 ⋅ 10⁻¹⁴. While the stability measurements of the coincidence rate of "true" events showed no relevant deviation over time, the linearity scans revealed a systematic error of 8%-11% (avg. 9%) for the range of singles rates present in the bladder scans. After correcting for this systematic bias caused by shortcomings of the manufacturer's calibration procedure, the PET to well-counter ratio increased to 0.832 ± 0.064 (0.668-0.941), P = 1.1 ⋅ 10⁻¹⁰. After compensating for truncation of the upper extremities in the MR-based attenuation maps, the ratio further improved to 0.871 ± 0.069 (0.693-0.992), P = 3.9 ⋅ 10⁻⁸. Conclusions: Our results show that the Philips PET/MR underestimates activity concentrations in the bladder by 17%, which is 7 percentage points (pp.) larger than in the previously investigated PET and PET/CT systems. We attribute this increased underestimation to remaining limitations of the MR-based attenuation correction. Our results suggest that only a 2 pp. larger underestimation of activity concentrations compared to PET/CT can be observed if compensation of attenuation truncation of the upper extremities is applied. Thus, quantification accuracy of the Philips Ingenuity-TF PET/MR can be considered acceptable for clinical purposes given the ±10% error margin in the EANM guidelines. The comparison of PET images from the bladder region with urine samples has proven a useful method. It might be interesting for evaluation and comparison of the in vivo quantitative accuracy of PET, PET/CT, and especially PET/MR systems from different manufacturers or in multicenter trials.
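A minimal sketch of the core comparison, paired PET-to-well-counter ratios with a one-sample test against unity (values are synthetic, not the study's data):

```python
# Sketch: in vivo cross-calibration check, bladder-ROI PET concentrations
# vs. well-counter urine samples (synthetic paired values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
well = rng.uniform(5, 50, 21)             # urine activity conc. (kBq/ml)
pet = well * rng.normal(0.76, 0.06, 21)   # PET reading low by ~24% on average

ratio = pet / well
tstat, p = stats.ttest_1samp(ratio, 1.0)  # H0: PET matches the well-counter
print(f"ratio = {ratio.mean():.3f} +/- {ratio.std(ddof=1):.3f}, p = {p:.2e}")
```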
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor
2013-04-01
Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) the SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied on the residual values, and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted, the residual value distribution being also normal, but in the tests on the real data the residual value distribution is often mixed or unknown. The residual values are found to depend mainly on two input parameters (standard deviation and maximum point-plane distance, both defining distance thresholds for assigning points to a segment), and the curvature of the surface affected the distributions most. The results of the analysis helped to decide which parameter set is best for further modelling and provides the highest accuracy. With these results in mind, quasi-automatic modelling of planar (for example plateau-like) features became more successful and often more accurate. These studies were carried out partly in the framework of the TMIS.ascrea project (Nr. 2001978), financed by the Austrian Research Promotion Agency (FFG); the contribution of ZsK was partly funded by the Campus Hungary Internship TÁMOP-424B1.
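A minimal sketch of one building block of such an evaluation, a least-squares plane fit via SVD followed by a Kolmogorov-Smirnov normality test on the residuals (synthetic point cloud):

```python
# Sketch: fit a plane to a 3D point-cloud segment via SVD and test whether
# the point-plane residuals are normally distributed (synthetic points with
# Gaussian noise, mimicking the simulated-DTM case).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, (5000, 3))
pts[:, 2] = 0.2 * pts[:, 0] + 0.1 * pts[:, 1] + rng.normal(0, 0.1, 5000)

centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
normal = vt[-1]                              # smallest singular vector = plane normal
residuals = (pts - centroid) @ normal        # signed point-plane distances

z = (residuals - residuals.mean()) / residuals.std(ddof=1)
ks_stat, p = stats.kstest(z, 'norm')         # Kolmogorov-Smirnov test
print(f"KS statistic = {ks_stat:.4f}, p = {p:.3f}")
```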
Influence of neighbourhood information on 'Local Climate Zone' mapping in heterogeneous cities
NASA Astrophysics Data System (ADS)
Verdonck, Marie-Leen; Okujeni, Akpona; van der Linden, Sebastian; Demuzere, Matthias; De Wulf, Robert; Van Coillie, Frieke
2017-10-01
Local climate zone (LCZ) mapping is an emerging field in urban climate research. LCZs potentially provide an objective framework to assess urban form and function worldwide. The scheme is currently being used to globally map LCZs as part of the World Urban Database and Access Portal Tools (WUDAPT) initiative. So far, most LCZ maps lack proper quantitative assessment, challenging the generic character of the WUDAPT workflow. With the standard method introduced by the WUDAPT community, difficulties arose concerning the built zones due to their high levels of heterogeneity. To overcome this problem, a contextual classifier is adopted in the mapping process. This paper quantitatively assesses the influence of neighbourhood information on the LCZ mapping result of three cities in Belgium: Antwerp, Brussels and Ghent. Overall accuracies for the maps obtained with the standard method were respectively 85.7 ± 0.5, 79.6 ± 0.9 and 90.2 ± 0.4%. The approach presented here results in overall accuracies of 93.6 ± 0.2, 92.6 ± 0.3 and 95.6 ± 0.3% for Antwerp, Brussels and Ghent. The results thus indicate a positive influence of neighbourhood information for all study areas, with an increase in overall accuracy of 7.9, 13.0 and 5.4 percentage points. This paper reaches two main conclusions. Firstly, evidence was introduced on the relevance of a quantitative accuracy assessment in LCZ mapping, showing that the accuracies reported in previous papers are not easily achieved. Secondly, the method presented in this paper proves to be highly effective in Belgian cities and, given its open character, shows promise for application in other heterogeneous cities worldwide.
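A minimal sketch of one way to inject neighbourhood information, appending local-mean context features before training a conventional classifier (an illustrative stand-in for the paper's contextual classifier; data are synthetic):

```python
# Sketch: pixel-based LCZ classification augmented with neighbourhood
# context via per-band local means (synthetic image and labels).
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
image = rng.random((200, 200, 6))                  # placeholder multispectral stack
context = np.stack([uniform_filter(image[..., b], size=9)   # 9x9 neighbourhood mean
                    for b in range(image.shape[-1])], axis=-1)
features = np.concatenate([image, context], axis=-1).reshape(-1, 12)

labels = rng.integers(0, 17, 200 * 200)            # placeholder LCZ labels
clf = RandomForestClassifier(n_estimators=100).fit(features[::50], labels[::50])
lcz_map = clf.predict(features).reshape(200, 200)  # contextual LCZ map
```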
Quantitative assessment of emphysema from whole lung CT scans: comparison with visual grading
NASA Astrophysics Data System (ADS)
Keller, Brad M.; Reeves, Anthony P.; Apanosovich, Tatiyana V.; Wang, Jianwei; Yankelevitz, David F.; Henschke, Claudia I.
2009-02-01
Emphysema is a disease of the lungs that destroys the alveolar air sacs and induces long-term respiratory dysfunction. CT scans allow for imaging of the anatomical basis of emphysema and for visual assessment by radiologists of the extent of disease present in the lungs. Several measures have been introduced for quantifying the extent of disease directly from CT data in order to complement the qualitative assessments made by radiologists. In this paper we compare emphysema index, mean lung density, histogram percentiles, and fractal dimension against visual grade in order to evaluate how well radiologist visual scoring of emphysema from low-dose CT scans can be predicted from quantitative scores, and to determine which measures can be useful as surrogates for visual assessment. All measures were computed over nine divisions of the lung field (whole lung, individual lungs, and upper/middle/lower thirds of each lung) for each of 148 low-dose, whole-lung scans. In addition, a visual grade for each section was given by an expert radiologist. One-way ANOVA and multinomial logistic regression were used to determine the ability of the measures to predict visual grade from quantitative score. We found that all measures were able to distinguish between normal and severe grades (p<0.01), and between mild/moderate and all other grades (p<0.05). However, no measure was able to distinguish between mild and moderate cases. Approximately 65% prediction accuracy was achieved when using quantitative score to predict visual grade, rising to 73% if mild and moderate cases are considered as a single class.
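A minimal sketch of three of the densitometric measures compared above, computed from an array of lung-voxel HU values (synthetic data; the -950 HU threshold is a common emphysema-index choice, not necessarily the one used in the paper):

```python
# Sketch: emphysema index, mean lung density, and a histogram percentile
# from a segmented lung-voxel HU array (synthetic values).
import numpy as np

lung_hu = np.random.normal(-820, 90, 500_000)   # placeholder lung voxels (HU)

emphysema_index = 100.0 * np.mean(lung_hu < -950)   # % voxels below threshold
mean_lung_density = lung_hu.mean()
perc15 = np.percentile(lung_hu, 15)                 # 15th-percentile density

print(f"EI = {emphysema_index:.1f}%  MLD = {mean_lung_density:.0f} HU  "
      f"P15 = {perc15:.0f} HU")
```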
Portable smartphone based quantitative phase microscope
NASA Astrophysics Data System (ADS)
Meng, Xin; Tian, Xiaolin; Yu, Wei; Kong, Yan; Jiang, Zhilong; Liu, Fei; Xue, Liang; Liu, Cheng; Wang, Shouyu
2018-01-01
To realize a portable device with high-contrast imaging capability, we designed a quantitative phase microscope based on a smartphone using the transport of intensity equation method. The whole system employs an objective and an eyepiece as the imaging system and a cost-effective LED as the illumination source. A 3D-printed cradle is used to align these components. Images of different focal planes are captured by manual focusing, followed by calculation of the sample phase via a self-developed Android application. To validate its accuracy, we first tested the device by measuring a random phase plate with known phases; red blood cell smears, Pap smears, broad bean epidermis sections and monocot roots were then measured to show its performance. Owing to its accuracy, high contrast, cost-effectiveness and portability, the portable smartphone-based quantitative phase microscope is a promising tool that could in future be adopted in remote healthcare and medical diagnosis.
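A minimal sketch of a uniform-intensity transport-of-intensity reconstruction via an FFT-based Poisson solver (simplified relative to the authors' implementation; wavelength, defocus step and pixel size are assumed values):

```python
# Sketch: uniform-intensity TIE phase recovery. The TIE reduces to a
# Poisson equation, laplacian(phi) = -(k/I0) * dI/dz, solved here in
# Fourier space from a synthetic three-plane focal stack.
import numpy as np

wavelength, dz, pix = 530e-9, 2e-6, 1.1e-6        # assumed system parameters
k = 2 * np.pi / wavelength
I_minus, I_0, I_plus = np.random.rand(3, 512, 512) + 1.0  # defocused stack

dIdz = (I_plus - I_minus) / (2 * dz)              # axial intensity derivative
rhs = -k * dIdz / I_0.mean()                      # uniform-intensity TIE rhs

fx = np.fft.fftfreq(512, d=pix)
kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
lap = -(kx**2 + ky**2)
lap[0, 0] = 1.0                                   # avoid divide-by-zero at DC
phi = np.real(np.fft.ifft2(np.fft.fft2(rhs) / lap))
phi -= phi.mean()                                 # phase defined up to a constant
```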
Quantitative fluorescence tomography using a trimodality system: in vivo validation
Lin, Yuting; Barber, William C.; Iwanczyk, Jan S.; Roeck, Werner W.; Nalcioglu, Orhan; Gulsen, Gultekin
2010-01-01
A fully integrated trimodality fluorescence, diffuse optical, and x-ray computed tomography (FT/DOT/XCT) system for small animal imaging is reported in this work. The main purpose of this system is to obtain quantitatively accurate fluorescence concentration images using a multimodality approach. XCT offers anatomical information, while DOT provides the necessary background optical property map to improve FT image accuracy. The quantitative accuracy of this trimodality system is demonstrated in vivo. In particular, we show that a 2-mm-diam fluorescence inclusion located 8 mm deep in a nude mouse can only be localized when functional a priori information from DOT is available. However, the error in the recovered fluorophore concentration is nearly 87%. On the other hand, the fluorophore concentration can be accurately recovered within 2% error when both DOT functional and XCT structural a priori information are utilized together to guide and constrain the FT reconstruction algorithm. PMID:20799770
The Accuracy and Precision of Flow Measurements Using Phase Contrast Techniques
NASA Astrophysics Data System (ADS)
Tang, Chao
Quantitative volume flow rate measurements using the magnetic resonance imaging technique are studied in this dissertation because volume flow rates are of special interest in the blood supply of the human body. The method of quantitative volume flow rate measurement is based on the phase contrast technique, which assumes a linear relationship between the phase and the flow velocity of spins. By measuring the phase shift of nuclear spins and integrating velocity across the lumen of the vessel, we can determine the volume flow rate. The accuracy and precision of volume flow rate measurements obtained using the phase contrast technique are studied by computer simulations and experiments. The various factors studied include (1) the partial volume effect due to voxel dimensions and slice thickness relative to the vessel dimensions; (2) vessel angulation relative to the imaging plane; (3) intravoxel phase dispersion; and (4) flow velocity relative to the magnitude of the flow encoding gradient. The partial volume effect is demonstrated to be the major obstacle to obtaining accurate flow measurements for both laminar and plug flow. Laminar flow can be measured more accurately than plug flow under the same conditions. Both the experimental and simulation results for laminar flow show that, to keep volume flow rate measurements accurate to within 10%, at least 16 voxels are needed to cover the vessel lumen. The accuracy of flow measurements depends strongly on the relative intensity of the signal from stationary tissues. A correction method is proposed to compensate for the partial volume effect; it is based on a small phase shift approximation. After the correction, the errors due to the partial volume effect are compensated, allowing more accurate results to be obtained. An automatic program based on the correction method was developed and implemented on a Sun workstation. The correction method is applied to the simulation and experimental results, which show that the correction significantly reduces the errors due to the partial volume effect. We also apply the correction method to data from in vivo studies. Because the true blood flow is not known, the corrected results are tested against common physiological knowledge (such as cardiac output) and conservation of flow; for example, the volume of blood flowing to the brain should equal the volume of blood flowing from the brain. Our measurement results are very convincing.
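A minimal sketch of the basic measurement principle, velocity from phase via the velocity-encoding value and flow from integration over the lumen (synthetic phase ROI; the partial volume correction discussed above is omitted):

```python
# Sketch: converting a phase-contrast ROI to a volume flow rate, using the
# linear phase-velocity relationship and summation over the vessel lumen.
import numpy as np

venc = 100.0                        # cm/s, assumed velocity-encoding value
pixel_area = 0.1 * 0.1              # cm^2 per voxel (in-plane)
phase = np.random.uniform(-0.5, 2.5, (16, 16))   # radians, ROI covering the lumen

velocity = venc * phase / np.pi     # cm/s, linear in the measured phase
flow_ml_per_s = velocity.sum() * pixel_area      # cm^3/s integrated over the ROI
print(f"volume flow rate: {flow_ml_per_s:.1f} ml/s")
```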
Improvements in Diagnostic Accuracy with Quantitative Dynamic Contrast-Enhanced MRI
2011-12-01
Magnetic Resonance Imaging during the Menstrual Cycle: Perfusion Imaging, Signal Enhancement, and Influence of...acquisition of quantitative images displaying the concentration of contrast media as well as MRI-detectable proton density. To date 21 patients have...truly quantitative images of a dynamic contrast-enhanced (DCE) MRI of the
On aerodynamic wake analysis and its relation to total aerodynamic drag in a wind tunnel environment
NASA Astrophysics Data System (ADS)
Guterres, Rui M.
The present work was developed with the goal of advancing the state of the art in the application of three-dimensional wake data analysis to the quantification of aerodynamic drag on a body in a low speed wind tunnel environment. An analysis of the existing tools, their strengths, and their limitations is presented. Improvements to the existing analysis approaches were made, and software tools were developed to integrate the analysis into a practical tool. A comprehensive derivation of the equations needed for drag computations based on three-dimensional separated wake data is developed, presenting a complete set of steps ranging from the basic mathematical concept to the applicable engineering equations. An extensive experimental study was conducted. Three representative body types were studied in varying ground effect conditions. A detailed qualitative wake analysis using wake imaging and two- and three-dimensional flow visualization was performed. Several significant features of the flow were identified and their relation to the total aerodynamic drag established. A comprehensive wake study of this type is shown to be in itself a powerful tool for the analysis of wake aerodynamics and its relation to body drag. Quantitative wake analysis techniques were developed, together with substantial post-processing and data-conditioning tools and a precision analysis. The quality of the data is shown to correlate directly with the accuracy of the computed aerodynamic drag. Steps are taken to identify the sources of uncertainty; these are quantified when possible, and the accuracy of the computed results is seen to improve significantly. When post-processing alone does not resolve issues related to precision and accuracy, solutions are proposed. The improved quantitative wake analysis is applied to the wake data obtained, and guidelines are established that will lead to more successful implementation of these tools in future research programs. Close attention is paid to implementation issues that are crucial for the accuracy of the results and that are not detailed in the literature. The impact of ground effect on the flows at hand is studied qualitatively and quantitatively. Its impact on the accuracy of the computations, as well as the incompatibility of wall drag with the theoretical model followed, is discussed. The newly developed quantitative analysis provides significantly increased accuracy: the aerodynamic drag coefficient is computed to within one percent of the balance-measured value for the best cases.
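For reference, the classical momentum-deficit core of wake-integral drag is D = rho * integral of u(U_inf - u) over the wake plane, which the dissertation's full derivation extends with pressure and crossflow terms. A minimal sketch on a synthetic wake survey:

```python
# Sketch: momentum-deficit wake-integral drag on a synthetic survey grid
# (the simplest form; pressure and crossflow terms are omitted).
import numpy as np

rho, U_inf = 1.225, 40.0                     # air density (kg/m^3), tunnel speed (m/s)
y = np.linspace(-0.5, 0.5, 101)              # wake-plane survey grid (m)
z = np.linspace(-0.3, 0.3, 61)
Y, Z = np.meshgrid(y, z)
u = U_inf * (1 - 0.25 * np.exp(-(Y**2 + Z**2) / 0.02))  # modeled wake deficit

integrand = rho * u * (U_inf - u)
drag = np.trapz(np.trapz(integrand, y, axis=1), z)      # N
cd = drag / (0.5 * rho * U_inf**2 * 0.05)               # assumed ref. area 0.05 m^2
print(f"D = {drag:.2f} N, CD = {cd:.3f}")
```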
NASA Astrophysics Data System (ADS)
Qiu, Yuchen; Tan, Maxine; McMeekin, Scott; Thai, Theresa; Moore, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin
2015-03-01
The purpose of this study is to identify and apply quantitative image biomarkers for early prediction of tumor response to chemotherapy among ovarian cancer patients participating in clinical trials testing new drugs. In the experiment, we retrospectively selected 30 cases from patients who participated in Phase I clinical trials of new drugs or drug agents for ovarian cancer treatment. Each case is composed of two sets of CT images acquired pre- and post-treatment (4-6 weeks after starting treatment). A computer-aided detection (CAD) scheme was developed to extract and analyze the quantitative image features of the metastatic tumors previously tracked by the radiologists using the standard Response Evaluation Criteria in Solid Tumors (RECIST) guideline. The CAD scheme first segmented 3-D tumor volumes from the background using a hybrid tumor segmentation scheme. Then, for each segmented tumor, CAD computed three quantitative image features: the changes in tumor volume, tumor CT number (density) and density variance. The feature changes were calculated between the matched tumors tracked on the CT images acquired pre- and post-treatment. Finally, CAD predicted each patient's 6-month progression-free survival (PFS) using a decision-tree based classifier. The performance of the CAD scheme was compared with the RECIST category. The results show that the CAD scheme achieved a prediction accuracy of 76.7% (23/30 cases) with a Kappa coefficient of 0.493, which is significantly higher than the performance of RECIST prediction, with a prediction accuracy and Kappa coefficient of 60% (17/30) and 0.062, respectively. This study demonstrates the feasibility of analyzing quantitative image features to improve the accuracy of early prediction of tumor response to new drugs or therapeutic methods for ovarian cancer patients.
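A minimal sketch of the final prediction step, a decision-tree classifier on the three feature changes (synthetic cohort; the CAD segmentation and the exact feature definitions are not reproduced):

```python
# Sketch: decision-tree prediction of 6-month PFS from pre/post-treatment
# changes in tumor volume, CT number, and density variance (synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = np.column_stack([rng.normal(-10, 20, 30),   # % change in tumor volume
                     rng.normal(-5, 10, 30),    # change in mean CT number (HU)
                     rng.normal(0, 5, 30)])     # change in density variance
y = rng.integers(0, 2, 30)                      # 6-month PFS outcome (0/1)

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # estimated prediction accuracy
```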
Liquid Chromatography-Mass Spectrometry-based Quantitative Proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Fang; Liu, Tao; Qian, Weijun
2011-07-22
Liquid chromatography-mass spectrometry (LC-MS)-based quantitative proteomics has become increasingly applied for a broad range of biological applications due to growing capabilities for broad proteome coverage and good accuracy in quantification. Herein, we review the current LC-MS-based quantification methods with respect to their advantages and limitations, and highlight their potential applications.
Dong, Fei; Zeng, Qiang; Jiang, Biao; Yu, Xinfeng; Wang, Weiwei; Xu, Jingjing; Yu, Jinna; Li, Qian; Zhang, Minming
2018-05-01
To study whether some of the quantitative enhancement and necrosis features in preoperative conventional MRI (cMRI) have predictive value for epidermal growth factor receptor (EGFR) gene amplification status in glioblastoma multiforme (GBM). Fifty-five patients with pathologically determined GBMs who underwent cMRI were retrospectively reviewed. The following cMRI features were quantitatively measured and recorded: long and short diameters of the enhanced portion (LDE and SDE), maximum and minimum thickness of the enhanced portion (MaxTE and MinTE), and long and short diameters of the necrotic portion (LDN and SDN). Univariate analysis of each feature and a decision tree model fed with all the features were performed. Area under the receiver operating characteristic (ROC) curve (AUC) was used to assess the performance of features, and predictive accuracy was used to assess the performance of the model. For a single feature, MinTE showed the best performance in differentiating EGFR gene amplification negative (wild-type) (nEGFR) GBM from EGFR gene amplification positive (pEGFR) GBM, achieving an AUC of 0.68 with a cut-off value of 2.6 mm. The decision tree model included two features, MinTE and SDN, and achieved an accuracy of 0.83 in the validation dataset. Our results suggest that quantitative measurement of the features MinTE and SDN in preoperative cMRI had a high accuracy for predicting EGFR gene amplification status in GBM.
Impact of high 131I-activities on quantitative 124I-PET
NASA Astrophysics Data System (ADS)
Braad, P. E. N.; Hansen, S. B.; Høilund-Carlsen, P. F.
2015-07-01
Peri-therapeutic 124I-PET/CT is of interest as guidance for radioiodine therapy. Unfortunately, image quality is complicated by dead time effects and increased random coincidence rates from high 131I-activities. A series of phantom experiments with clinically relevant 124I/131I-activities were performed on a clinical PET/CT system. Noise equivalent count rate (NECR) curves and quantitation accuracy were determined from repeated scans performed over several weeks on a decaying NEMA NU-2 1994 cylinder phantom initially filled with 25 MBq 124I and 1250 MBq 131I. Six spherical inserts with diameters of 10-37 mm were filled with 124I (0.45 MBq/ml) and 131I (22 MBq/ml) and placed inside the background of the NEMA/IEC torso phantom. Contrast recovery, background variability and the accuracy of scatter and attenuation corrections were assessed at sphere-to-background activity ratios of 20, 10 and 5. Results were compared to pure 124I-acquisitions. The quality of 124I-PET images in the presence of high 131I-activities was good and image quantification unaffected except at very high count rates. Quantitation accuracy and contrast recovery were uninfluenced at 131I-activities below 1000 MBq, whereas image noise was slightly increased. The NECR peaked at 550 MBq of 131I, where it was 2.8 times lower than without 131I in the phantom. Quantitative peri-therapeutic 124I-PET is feasible.
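For context, the standard noise-equivalent count rate used in such phantom studies is NECR = T^2 / (T + S + k*R). A minimal sketch with synthetic rates (k = 1 here; k = 2 applies when delayed-window randoms subtraction is used):

```python
# Sketch: NECR curve from trues (T), scatter (S) and randoms (R) rates
# measured at successive activity levels (synthetic values).
import numpy as np

trues = np.array([120, 180, 210, 220, 205, 170], float)    # kcps vs. activity
scatters = 0.35 * trues                                    # assumed scatter fraction
randoms = np.array([20, 60, 130, 260, 420, 600], float)    # grow with activity

necr = trues**2 / (trues + scatters + randoms)
print("peak NECR at frame", int(np.argmax(necr)), "->", round(necr.max(), 1), "kcps")
```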
NASA Astrophysics Data System (ADS)
Wu, Jay; Shih, Cheng-Ting; Chang, Shu-Jun; Huang, Tzung-Chi; Chen, Chuan-Lin; Wu, Tung Hsin
2011-08-01
The quantitative ability of PET/CT allows its widespread use in clinical research and cancer staging. However, metal artifacts induced by high-density metal objects degrade the quality of CT images. These artifacts also propagate to the corresponding PET image and cause a false increase of 18F-FDG uptake near metal implants when CT-based attenuation correction (AC) is performed. In this study, we applied a model-based metal artifact reduction (MAR) algorithm to reduce the dark and bright streaks in the CT image and compared the differences between PET images with the general CT-based AC (G-AC) and the MAR-corrected-CT AC (MAR-AC). Results showed that the MAR algorithm effectively reduced the metal artifacts in the CT images of the ACR flangeless phantom and two clinical cases. The MAR-AC also removed the false-positive hot spot near the metal implants in the PET images. We conclude that MAR-AC could be applied in clinical practice to improve the quantitative accuracy of PET images. Additionally, further use of PET/CT fusion images with metal artifact correction could be more valuable for diagnosis.
Baume, M; Garrelly, L; Facon, J P; Bouton, S; Fraisse, P O; Yardin, C; Reyrolle, M; Jarraud, S
2013-06-01
We report the characterization and certification of a Legionella DNA quantitative reference material as a primary measurement standard for Legionella qPCR. Twelve laboratories participated in a collaborative certification campaign. A candidate reference DNA material was analysed through PCR-based limiting dilution assays (LDAs). The validated data were used to statistically assign both a reference value and an associated uncertainty to the reference material. This LDA method allowed for the direct quantification of the amount of Legionella DNA per tube in genomic units (GU) and the determination of the associated uncertainties. This method could be used for the certification of all types of microbiological standards for qPCR. The use of this primary standard will improve the accuracy of Legionella qPCR measurements and the overall consistency of these measurements among different laboratories. The extensive use of this certified reference material (CRM) has been integrated in the French standard NF T90-471 (April 2010) and in ISO Technical Specification 12869 (2012) for validating qPCR methods and ensuring their reliability. © 2013 The Society for Applied Microbiology.
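A minimal sketch of the single-hit Poisson statistics that underlie a limiting dilution assay, where the mean number of genomic units per tube follows lambda = -ln(fraction of PCR-negative replicates) at each dilution (synthetic plate counts, not the certification campaign's data):

```python
# Sketch: limiting-dilution quantification. Under single-hit Poisson
# statistics, the fraction of negative replicates p0 gives the mean number
# of amplifiable genome units per tube as lambda = -ln(p0).
import numpy as np

dilution_factors = np.array([2e5, 5e5, 1e6])   # assumed serial dilutions
negatives = np.array([19, 22, 23])             # PCR-negative wells out of 24
replicates = 24

lam = -np.log(negatives / replicates)          # GU per tube at each dilution
gu_undiluted = lam * dilution_factors          # back-calculated GU per tube
print(gu_undiluted)                            # estimates agree within counting error
```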
Synchronous precipitation reduction in the American Tropics associated with Heinrich 2.
Medina-Elizalde, Martín; Burns, Stephen J; Polanco-Martinez, Josué; Lases-Hernández, Fernanda; Bradley, Raymond; Wang, Hao-Cheng; Shen, Chuan-Chou
2017-09-11
During the last ice age, temperatures in the North Atlantic oscillated in cycles known as Dansgaard-Oeschger (D-O) events. The magnitude of Caribbean hydroclimate change associated with D-O variability, and particularly with stadial intervals, remains poorly constrained by paleoclimate records. We present a 3.3 thousand-year-long stalagmite δ18O record from the Yucatan Peninsula (YP) that spans the interval between 26.5 and 23.2 thousand years before present. We estimate quantitative precipitation variability, and the high resolution and dating accuracy of this record allow us to investigate how rainfall in the region responded to D-O events. Quantitative precipitation estimates are based on observed regional amount-effect variability, last glacial paleotemperature records, and estimates of the last glacial oxygen isotopic composition of precipitation based on global circulation models (GCMs). The new precipitation record suggests significant low-latitude hydrological responses to internal modes of climate variability and supports a role of Caribbean hydroclimate in helping Atlantic Meridional Overturning Circulation recovery during D-O events. Significant in-phase precipitation reduction across the equator in the tropical Americas associated with Heinrich event 2 is also suggested by available speleothem oxygen isotope records.
Julian, Timothy R; Bustos, Carla; Kwong, Laura H; Badilla, Alejandro D; Lee, Julia; Bischel, Heather N; Canales, Robert A
2018-05-08
Quantitative data on human-environment interactions are needed to fully understand infectious disease transmission processes and conduct accurate risk assessments. Interaction events occur during an individual's movement through, and contact with, the environment, and can be quantified using diverse methodologies. Methods that utilize videography, coupled with specialized software, can provide a permanent record of events, collect detailed interactions in high resolution, be reviewed for accuracy, capture events difficult to observe in real time, and gather multiple concurrent phenomena. In the accompanying video, the use of specialized software to capture human-environment interactions for human exposure and disease transmission is highlighted. Use of videography, combined with specialized software, allows for the collection of accurate quantitative representations of human-environment interactions in high resolution. Two specialized programs are the Virtual Timing Device for the Personal Computer, which collects sequential microlevel activity time series of contact events and interactions, and LiveTrak, which is optimized to facilitate annotation of events in real time. Opportunities to annotate behaviors at high resolution using these tools are promising, permitting detailed records that can be summarized to gain information on infectious disease transmission and incorporated into more complex models of human exposure and risk.
Peng, G W; Sood, V K; Rykert, U M
1985-03-01
Bromadoline and its two N-demethylated metabolites were extracted into ether:butyl chloride after the addition of internal standard and basification of the various biological fluids (blood, plasma, serum, and urine). These compounds were then extracted into dilute phosphoric acid from the organic phase and separated on a reversed-phase chromatographic system using a mobile phase containing acetonitrile and a buffer of 1,4-dimethylpiperazine and perchloric acid. The overall absolute extraction recoveries of these compounds were approximately 50-80%. Background interferences from the biological fluids were negligible and allowed quantitative determination of bromadoline and the metabolites at levels as low as 2-5 ng/mL. At a mobile phase flow rate of 1 mL/min, the sample components and the internal standard eluted at retention times of approximately 7-12 min. The drug- and metabolite-to-internal standard peak height ratios showed excellent linear relationships with their corresponding concentrations. The analytical method showed satisfactory within- and between-run assay precision and accuracy, and has been used for the simultaneous determination of bromadoline and its two N-demethylated metabolites in biological fluids collected from humans and from dogs after administration of bromadoline maleate.
Yu, Kebing; Salomon, Arthur R
2009-12-01
Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through MS/MS. Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to various experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our high throughput autonomous proteomic pipeline used in the automated acquisition and post-acquisition analysis of proteomic data.
Sić, Siniša; Maier, Norbert M; Rizzi, Andreas M
2016-09-07
The potential and benefits of isotope-coded labeling in the context of MS-based glycan profiling are evaluated, focusing on the analysis of O-glycans. For this purpose, a derivatization strategy using d0/d5-1-phenyl-3-methyl-5-pyrazolone (PMP) is employed, allowing O-glycan release and derivatization to be achieved in one single step. The paper demonstrates that this release and derivatization reaction can also be carried out in-gel, with only marginal loss in sensitivity compared to in-solution derivatization. Such an effective in-gel reaction allows this release/labeling method to be extended to glycoprotein/glycoform samples pre-separated by gel electrophoresis without the need to extract the proteins/digested peptides from the gel. With highly O-glycosylated proteins (e.g. mucins), LODs in the range of 0.4 μg glycoprotein (100 fmol) loaded onto the electrophoresis gel can be attained; with minor glycosylated proteins (like IgAs, FVII, FIX) the LODs were in the range of 80-100 μg (250 pmol-1.5 nmol) glycoprotein loaded onto the gel. As a second aspect, the potential of isotope-coded labeling as an internal standardization strategy for the reliable determination of quantitative glycan profiles via MALDI-MS is investigated. Towards this goal, a number of established and emerging MALDI matrices were tested for PMP-glycan quantitation, and their performance is compared with that of ESI-based measurements. The crystalline matrix 2,6-dihydroxyacetophenone (DHAP) and the ionic liquid matrix N,N-diisopropyl-ethyl-ammonium 2,4,6-trihydroxyacetophenone (DIEA-THAP) showed potential for MALDI-based quantitation of PMP-labeled O-glycans. We also provide a comprehensive overview of the performance of MS-based glycan quantitation approaches by comparing sensitivity, LOD, accuracy and repeatability data obtained with RP-HPLC-ESI-MS, stand-alone nano-ESI-MS with a spray-nozzle chip, and MALDI-MS. Finally, the suitability of the isotope-coded PMP labeling strategy for O-glycan profiling of biologically important proteins is demonstrated by comparative analysis of IgA immunoglobulins and two coagulation factors. Copyright © 2016 Elsevier B.V. All rights reserved.
Somatic coliphages as surrogates for enteroviruses in sludge hygienization treatments.
Martín-Díaz, Julia; Casas-Mangas, Raquel; García-Aljaro, Cristina; Blanch, Anicet R; Lucena, Francisco
2016-01-01
Conventional bacterial indicators present serious drawbacks in providing information about the persistence of viral pathogens during sludge hygienization treatments. This calls for the search for alternative viral indicators. The ability of somatic coliphages (SOMCPH) to act as surrogates for enteroviruses was assessed in 47 sludge samples subjected to novel treatment processes. SOMCPH, infectious enteroviruses and genome copies of enteroviruses were monitored. Only one of these groups, the bacteriophages, was present in the sludge at concentrations that allowed the evaluation of treatment performance. An indicator/pathogen relationship of 4 log10 (PFU/g dw) was found between SOMCPH and infective enteroviruses, and their detection accuracy was assessed. The results obtained and the existence of rapid and standardized methods encourage the inclusion of SOMCPH quantification in future sludge directives. In addition, an existing real-time quantitative polymerase chain reaction (RT-qPCR) for enteroviruses was adapted and applied.
Jiang, Yutao; Hascall, Daniel; Li, Delia; Pease, Joseph H
2015-09-11
In this paper, we introduce a high throughput LCMS/UV/CAD/CLND system that improves upon previously reported systems by increasing both the quantitation accuracy and the range of compounds amenable to testing, in particular, low molecular weight "fragment" compounds. This system consists of a charged aerosol detector (CAD) and chemiluminescent nitrogen detector (CLND) added to a LCMS/UV system. Our results show that the addition of CAD and CLND to LCMS/UV is more reliable for concentration determination for a wider range of compounds than either detector alone. Our setup also allows for the parallel analysis of each sample by all four detectors and so does not significantly increase run time per sample. Copyright © 2015 Elsevier B.V. All rights reserved.
Myocardial Tissue Characterization by Magnetic Resonance Imaging
Ferreira, Vanessa M.; Piechnik, Stefan K.; Robson, Matthew D.; Neubauer, Stefan
2014-01-01
Cardiac magnetic resonance (CMR) imaging is a well-established noninvasive imaging modality in clinical cardiology. Its unsurpassed accuracy in defining cardiac morphology and function and its ability to provide tissue characterization make it well suited for the study of patients with cardiac diseases. Late gadolinium enhancement was a major advancement in the development of tissue characterization techniques, allowing the unique ability of CMR to differentiate ischemic heart disease from nonischemic cardiomyopathies. Using T2-weighted techniques, areas of edema and inflammation can be identified in the myocardium. A new generation of myocardial mapping techniques are emerging, enabling direct quantitative assessment of myocardial tissue properties in absolute terms. This review will summarize recent developments involving T1-mapping and T2-mapping techniques and focus on the clinical applications and future potential of these evolving CMR methodologies. PMID:24576837
Radiation measurements from polar and geosynchronous satellites
NASA Technical Reports Server (NTRS)
Vonderhaar, T. H.
1973-01-01
During the 1960s, radiation budget measurements from satellites allowed quantitative study of the global energetics of our atmosphere-ocean system. A continuing program is planned, including independent measurement of the solar constant. Thus far, the measurements returned from two basically different types of satellite experiments are in agreement on the long-term global scales where they are most comparable. This fact, together with independent estimates of the accuracy of measurement from each system, shows that the energy exchange between earth and space is now measured better than it can be calculated. Examples of the application of the radiation budget data are shown. They can be related to the age-old problem of climate change, to the basic question of the thermal forcing of our circulation systems, and to the contemporary problems of local area energetics and computer modeling of the atmosphere.
Weak Value Amplification is Suboptimal for Estimation and Detection
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-01-01
We show by using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of single parameter estimation and signal detection. Specifically, we prove that postselection, a necessary ingredient for weak value amplification, decreases estimation accuracy and, moreover, arranging for anomalously large weak values is a suboptimal strategy. In doing so, we explicitly provide the optimal estimator, which in turn allows us to identify the optimal experimental arrangement to be the one in which all outcomes have equal weak values (all as small as possible) and the initial state of the meter is the maximal eigenvalue of the square of the system observable. Finally, we give precise quantitative conditions for when weak measurement (measurements without postselection or anomalously large weak values) can mitigate the effect of uncharacterized technical noise in estimation.
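For reference, the textbook definitions at play (standard notation, not specific to this paper) are the weak value and the postselection success probability:

```latex
A_w \;=\; \frac{\langle \psi_f | \hat{A} | \psi_i \rangle}{\langle \psi_f | \psi_i \rangle},
\qquad
p_{\mathrm{post}} \;=\; \left| \langle \psi_f | \psi_i \rangle \right|^{2}.
```

Large |A_w| requires nearly orthogonal pre- and postselected states, so p_post shrinks accordingly; the trials discarded by postselection are what make the anomalously amplified strategy statistically suboptimal, consistent with the result stated above.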
NASA Astrophysics Data System (ADS)
Mosquera Lopez, Clara; Agaian, Sos
2013-02-01
Prostate cancer detection and staging is an important step towards patient treatment selection. Advancements in digital pathology allow the application of new quantitative image analysis algorithms for computer-assisted diagnosis (CAD) on digitized histopathology images. In this paper, we introduce a new set of features to automatically grade pathological images using the well-known Gleason grading system. The goal of this study is to classify biopsy images belonging to Gleason patterns 3, 4, and 5 by using a combination of wavelet and fractal features. For image classification we use pairwise coupling Support Vector Machine (SVM) classifiers. The accuracy of the system, which is close to 97%, is estimated through three different cross-validation schemes. The proposed system offers the potential for automating classification of histological images and supporting prostate cancer diagnosis.
NASA Technical Reports Server (NTRS)
Nebenfuhr, A.; Lomax, T. L.
1998-01-01
We have developed an improved method for determination of gene expression levels with RT-PCR. The procedure is rapid and does not require extensive optimization or densitometric analysis. Since the detection of individual transcripts is PCR-based, small amounts of tissue samples are sufficient for the analysis of expression patterns in large gene families. Using this method, we were able to rapidly screen nine members of the Aux/IAA family of auxin-responsive genes and identify those genes which vary in message abundance in a tissue- and light-specific manner. While not offering the accuracy of conventional semi-quantitative or competitive RT-PCR, our method allows quick screening of large numbers of genes in a wide range of RNA samples with just a thermal cycler and standard gel analysis equipment.
2013-01-01
Background Genetic linkage maps are important tools in breeding programmes and quantitative trait analyses. Traditional molecular markers used for genotyping are limited in throughput and efficiency. The advent of next-generation sequencing technologies has facilitated progeny genotyping and genetic linkage map construction in the major grains. However, the applicability of the approach remains untested in the fungal system. Findings Shiitake mushroom, Lentinula edodes, is a basidiomycetous fungus that represents one of the most popular cultivated edible mushrooms. Here, we developed a rapid genotyping method based on low-coverage (~0.5 to 1.5-fold) whole-genome resequencing. We used the approach to genotype 20 single-spore isolates derived from L. edodes strain L54 and constructed the first high-density sequence-based genetic linkage map of L. edodes. The accuracy of the proposed genotyping method was verified experimentally with results from mating compatibility tests and PCR-single-strand conformation polymorphism on a few known genes. The linkage map spanned a total genetic distance of 637.1 cM and contained 13 linkage groups. Two hundred sequence-based markers were placed on the map, with an average marker spacing of 3.4 cM. The accuracy of the map was confirmed by comparing with previous maps the locations of known genes such as matA and matB. Conclusions We used the shiitake mushroom as an example to provide a proof-of-principle that low-coverage resequencing could allow rapid genotyping of basidiospore-derived progenies, which could in turn facilitate the construction of high-density genetic linkage maps of basidiomycetous fungi for quantitative trait analyses and improvement of genome assembly. PMID:23915543
Martinez-Enriquez, Eduardo; Sun, Mengchan; Velasco-Ocana, Miriam; Birkenfeld, Judith; Pérez-Merino, Pablo; Marcos, Susana
2016-07-01
Measurement of crystalline lens geometry in vivo is critical to optimize performance of state-of-the-art cataract surgery. We used custom-developed quantitative anterior segment optical coherence tomography (OCT) and developed dedicated algorithms to estimate lens volume (VOL), equatorial diameter (DIA), and equatorial plane position (EPP). The method was validated ex vivo in 27 human donor (19-71 years of age) lenses, which were imaged in three-dimensions by OCT. In vivo conditions were simulated assuming that only the information within a given pupil size (PS) was available. A parametric model was used to estimate the whole lens shape from PS-limited data. The accuracy of the estimated lens VOL, DIA, and EPP was evaluated by comparing estimates from the whole lens data and PS-limited data ex vivo. The method was demonstrated in vivo using 2 young eyes during accommodation and 2 cataract eyes. Crystalline lens VOL was estimated within 96% accuracy (average estimation error across lenses ± standard deviation: 9.30 ± 7.49 mm3). Average estimation errors in EPP were below 40 ± 32 μm, and below 0.26 ± 0.22 mm in DIA. Changes in lens VOL with accommodation were not statistically significant (2-way ANOVA, P = 0.35). In young eyes, DIA decreased and EPP increased statistically significantly with accommodation (P < 0.001) by 0.14 mm and 0.13 mm, respectively, on average across subjects. In cataract eyes, VOL = 205.5 mm3, DIA = 9.57 mm, and EPP = 2.15 mm on average. Quantitative OCT with dedicated image processing algorithms allows estimation of human crystalline lens volume, diameter, and equatorial lens position, as validated from ex vivo measurements, where entire lens images are available.
Lin, Ying-Chung; Li, Wei; Sun, Ying-Hsuan; Kumari, Sapna; Wei, Hairong; Li, Quanzi; Tunlaya-Anukit, Sermsawat; Sederoff, Ronald R.; Chiang, Vincent L.
2013-01-01
Wood is an essential renewable raw material for industrial products and energy. However, knowledge of the genetic regulation of wood formation is limited. We developed a genome-wide high-throughput system for the discovery and validation of specific transcription factor (TF)–directed hierarchical gene regulatory networks (hGRNs) in wood formation. This system depends on a new robust procedure for isolation and transfection of Populus trichocarpa stem differentiating xylem protoplasts. We overexpressed Secondary Wall-Associated NAC Domain 1s (Ptr-SND1-B1), a TF gene affecting wood formation, in these protoplasts and identified differentially expressed genes by RNA sequencing. Direct Ptr-SND1-B1–DNA interactions were then inferred by integration of time-course RNA sequencing data and top-down Graphical Gaussian Modeling–based algorithms. These Ptr-SND1-B1-DNA interactions were verified to function in differentiating xylem by anti-PtrSND1-B1 antibody-based chromatin immunoprecipitation (97% accuracy) and in stable transgenic P. trichocarpa (90% accuracy). In this way, we established a Ptr-SND1-B1–directed quantitative hGRN involving 76 direct targets, including eight TF and 61 enzyme-coding genes previously unidentified as targets. The network can be extended to the third layer from the second-layer TFs by computation or by overexpression of a second-layer TF to identify a new group of direct targets (third layer). This approach would allow the sequential establishment, one two-layered hGRN at a time, of all layers involved in a more comprehensive hGRN. Our approach may be particularly useful to study hGRNs in complex processes in plant species resistant to stable genetic transformation and where mutants are unavailable. PMID:24280390
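The network-inference step above rests on Graphical Gaussian Modeling; a minimal sketch of its core computation, scoring gene-gene edges by partial correlation from the inverse covariance (precision) matrix, is given below. The expression matrix is synthetic, and the published pipeline (top-down, TF-anchored, integrated with perturbation data) is far more elaborate.

```python
# Sketch of GGM edge scoring: partial correlation from the precision matrix.
import numpy as np

rng = np.random.default_rng(0)
expr = rng.normal(size=(12, 5))                      # 12 time points x 5 genes

omega = np.linalg.pinv(np.cov(expr, rowvar=False))   # precision matrix
d = np.sqrt(np.diag(omega))
partial_corr = -omega / np.outer(d, d)               # rho_ij = -w_ij / sqrt(w_ii w_jj)
np.fill_diagonal(partial_corr, 1.0)

# Candidate regulatory edges (if any): pairs with strong partial correlation.
print(np.argwhere(np.abs(np.triu(partial_corr, k=1)) > 0.5))
```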
Calibration Issues and Operating System Requirements for Electron-Probe Microanalysis
NASA Technical Reports Server (NTRS)
Carpenter, P.
2006-01-01
Instrument purchase requirements and dialogue with manufacturers have established hardware parameters for alignment, stability, and reproducibility, which have helped improve the precision and accuracy of electron microprobe analysis (EPMA). The development of correction algorithms and the accurate solution to quantitative analysis problems requires the minimization of systematic errors and relies on internally consistent data sets. Improved hardware and computer systems have resulted in better automation of vacuum systems, stage and wavelength-dispersive spectrometer (WDS) mechanisms, and x-ray detector systems which have improved instrument stability and precision. Improved software now allows extended automated runs involving diverse setups and better integrates digital imaging and quantitative analysis. However, instrumental performance is not regularly maintained, as WDS are aligned and calibrated during installation but few laboratories appear to check and maintain this calibration. In particular, detector deadtime (DT) data is typically assumed rather than measured, due primarily to the difficulty and inconvenience of the measurement process. This is a source of fundamental systematic error in many microprobe laboratories and is unknown to the analyst, as the magnitude of DT correction is not listed in output by microprobe operating systems. The analyst must remain vigilant to deviations in instrumental alignment and calibration, and microprobe system software must conveniently verify the necessary parameters. Microanalysis of mission critical materials requires an ongoing demonstration of instrumental calibration. Possible approaches to improvements in instrument calibration, quality control, and accuracy will be discussed. Development of a set of core requirements based on discussions with users, researchers, and manufacturers can yield documents that improve and unify the methods by which instruments can be calibrated. These results can be used to continue improvements of EPMA.
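Since the abstract singles out detector deadtime as an often unmeasured systematic error, a minimal sketch of the standard nonparalyzable deadtime correction is shown below; the deadtime value is an assumed illustration, and real WDS deadtimes must be measured per detector.

```python
# Sketch of a nonparalyzable dead-time correction for WDS count rates.

def deadtime_correct(observed_cps: float, tau: float = 1.5e-6) -> float:
    """Return the true count rate; tau is the detector dead time in seconds."""
    busy_fraction = observed_cps * tau      # fraction of time the detector is dead
    if busy_fraction >= 1.0:
        raise ValueError("observed rate outside the model's validity range")
    return observed_cps / (1.0 - busy_fraction)

# At 50,000 cps and tau = 1.5 microseconds the correction is about 8%:
print(deadtime_correct(50_000.0))           # ~54,054 cps
```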
Channual, Stephanie; Tan, Nelly; Siripongsakun, Surachate; Lassman, Charles; Lu, David S; Raman, Steven S
2015-09-01
The objective of our study was to determine quantitative differences to differentiate low-grade from high-grade dysplastic nodules (DNs) and low-grade from high-grade hepatocellular carcinomas (HCCs) using gadoxetate disodium-enhanced MRI. A retrospective study of 149 hepatic nodules in 127 consecutive patients who underwent gadoxetic acid-enhanced MRI was performed. MRI signal intensities (SIs) of the representative lesion ROI and of ROIs in liver parenchyma adjacent to the lesion were measured on unenhanced T1-weighted imaging and on dynamic contrast-enhanced MRI in the arterial, portal venous, delayed, and hepatobiliary phases. The relative SI of the lesion was calculated for each phase as the relative intensity ratio as follows: [mass SI / liver SI]. Of the 149 liver lesions, nine (6.0%) were low-grade DNs, 21 (14.1%) were high-grade DNs, 83 (55.7%) were low-grade HCCs, and 36 (24.2%) were high-grade HCCs. The optimal cutoffs for differentiating low-grade DNs from high-grade DNs and HCCs were an unenhanced to arterial SI of ≥ 0 or a relative SI on T2-weighted imaging of ≤ 1.5, with a positive predictive value (PPV) of 99.2% and accuracy of 88.6%. The optimal cutoffs for differentiating low-grade HCCs from high-grade HCCs were a relative hepatobiliary SI of ≤ 0.5 or a relative T2 SI of ≥ 1.5, with a PPV of 81.0% and an accuracy of 60.5%. Gadoxetate disodium-enhanced MRI allows quantitative differentiation of low-grade DNs from high-grade DNs and HCCs, but significant overlap was seen between low-grade HCCs and high-grade HCCs.
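A minimal sketch of the relative signal-intensity (SI) ratio defined above ([mass SI / liver SI]) and one of the reported cutoffs is given below; the ROI values are hypothetical, and the cutoff is simply the one quoted in the abstract.

```python
# Sketch: relative SI ratio per phase and a reported T2-based cutoff.

def relative_si(lesion_si: float, liver_si: float) -> float:
    return lesion_si / liver_si

def favors_low_grade_dn(relative_t2_si: float) -> bool:
    # One reported criterion: relative T2 SI <= 1.5 favors low-grade DN.
    return relative_t2_si <= 1.5

print(relative_si(210.0, 180.0))    # e.g. ~1.17 for one phase's ROI pair
print(favors_low_grade_dn(1.2))     # True
```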
Metzger, Gregory J; Kalavagunta, Chaitanya; Spilseth, Benjamin; Bolan, Patrick J; Li, Xiufeng; Hutter, Diane; Nam, Jung W; Johnson, Andrew D; Henriksen, Jonathan C; Moench, Laura; Konety, Badrinath; Warlick, Christopher A; Schmechel, Stephen C; Koopmeiners, Joseph S
2016-06-01
Purpose To develop multiparametric magnetic resonance (MR) imaging models to generate a quantitative, user-independent, voxel-wise composite biomarker score (CBS) for detection of prostate cancer by using coregistered correlative histopathologic results, and to compare performance of CBS-based detection with that of single quantitative MR imaging parameters. Materials and Methods Institutional review board approval and informed consent were obtained. Patients with a diagnosis of prostate cancer underwent multiparametric MR imaging before surgery for treatment. All MR imaging voxels in the prostate were classified as cancer or noncancer on the basis of coregistered histopathologic data. Predictive models were developed by using more than one quantitative MR imaging parameter to generate CBS maps. Model development and evaluation of quantitative MR imaging parameters and CBS were performed separately for the peripheral zone and the whole gland. Model accuracy was evaluated by using the area under the receiver operating characteristic curve (AUC), and confidence intervals were calculated with the bootstrap procedure. The improvement in classification accuracy was evaluated by comparing the AUC for the multiparametric model and the single best-performing quantitative MR imaging parameter at the individual level and in aggregate. Results Quantitative T2, apparent diffusion coefficient (ADC), volume transfer constant (K(trans)), reflux rate constant (kep), and area under the gadolinium concentration curve at 90 seconds (AUGC90) were significantly different between cancer and noncancer voxels (P < .001), with ADC showing the best accuracy (peripheral zone AUC, 0.82; whole gland AUC, 0.74). Four-parameter models demonstrated the best performance in both the peripheral zone (AUC, 0.85; P = .010 vs ADC alone) and whole gland (AUC, 0.77; P = .043 vs ADC alone). Individual-level analysis showed statistically significant improvement in AUC in 82% (23 of 28) and 71% (24 of 34) of patients with peripheral-zone and whole-gland models, respectively, compared with ADC alone. Model-based CBS maps for cancer detection showed improved visualization of cancer location and extent. Conclusion Quantitative multiparametric MR imaging models developed by using coregistered correlative histopathologic data yielded a voxel-wise CBS that outperformed single quantitative MR imaging parameters for detection of prostate cancer, especially when the models were assessed at the individual level. (©) RSNA, 2016 Online supplemental material is available for this article.
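The abstract does not state the model family behind the composite biomarker score, so the sketch below assumes a logistic regression over four quantitative MR parameters evaluated with AUC, with entirely synthetic voxel data; it illustrates the workflow, not the authors' exact model.

```python
# Sketch: voxel-wise composite biomarker score (CBS) from several MR parameters.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))     # voxels x [T2, ADC, Ktrans, kep] (synthetic)
y = (X @ np.array([-0.3, -1.2, 0.8, 0.6]) + rng.normal(size=5000) > 0).astype(int)

model = LogisticRegression().fit(X, y)
cbs = model.predict_proba(X)[:, 1]          # composite score per voxel
print("AUC:", round(roc_auc_score(y, cbs), 3))
```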
Engelberg, Jesse A; Retallack, Hanna; Balassanian, Ronald; Dowsett, Mitchell; Zabaglo, Lila; Ram, Arishneel A; Apple, Sophia K; Bishop, John W; Borowsky, Alexander D; Carpenter, Philip M; Chen, Yunn-Yi; Datnow, Brian; Elson, Sarah; Hasteh, Farnaz; Lin, Fritz; Moatamed, Neda A; Zhang, Yanhong; Cardiff, Robert D
2015-11-01
Hormone receptor status is an integral component of decision-making in breast cancer management. IHC4 score is an algorithm that combines hormone receptor, HER2, and Ki-67 status to provide a semiquantitative prognostic score for breast cancer. High accuracy and low interobserver variance are important to ensure the score is accurately calculated; however, few previous efforts have been made to measure or decrease interobserver variance. We developed a Web-based training tool, called "Score the Core" (STC) using tissue microarrays to train pathologists to visually score estrogen receptor (using the 300-point H score), progesterone receptor (percent positive), and Ki-67 (percent positive). STC used a reference score calculated from a reproducible manual counting method. Pathologists in the Athena Breast Health Network and pathology residents at associated institutions completed the exercise. By using STC, pathologists improved their estrogen receptor H score and progesterone receptor and Ki-67 proportion assessment and demonstrated a good correlation between pathologist and reference scores. In addition, we collected information about pathologist performance that allowed us to compare individual pathologists and measures of agreement. Pathologists' assessment of the proportion of positive cells was closer to the reference than their assessment of the relative intensity of positive cells. Careful training and assessment should be used to ensure the accuracy of breast biomarkers. This is particularly important as breast cancer diagnostics become increasingly quantitative and reproducible. Our training tool is a novel approach for pathologist training that can serve as an important component of ongoing quality assessment and can improve the accuracy of breast cancer prognostic biomarkers. Copyright © 2015 Elsevier Inc. All rights reserved.
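For readers unfamiliar with the 300-point H score mentioned above, a minimal sketch follows: the score sums, over staining-intensity bins 0-3, the bin intensity times the percentage of cells in that bin, giving a 0-300 range. The percentages are hypothetical.

```python
# Sketch of the 300-point H score for estrogen receptor staining.

def h_score(percent_by_intensity: dict) -> float:
    """Map staining-intensity bins (0..3) to percent of cells (summing to 100)."""
    assert abs(sum(percent_by_intensity.values()) - 100.0) < 1e-6
    return sum(i * pct for i, pct in percent_by_intensity.items())

print(h_score({0: 40.0, 1: 30.0, 2: 20.0, 3: 10.0}))   # 1*30 + 2*20 + 3*10 = 100
```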
Setford, Steven; Grady, Mike; Mackintosh, Stephen; Donald, Robert; Levy, Brian
2018-05-01
MARD (mean absolute relative difference) is increasingly used to describe the performance of glucose monitoring systems, providing a single-value quantitative measure of accuracy and allowing comparisons between different monitoring systems. This study reports MARDs for the OneTouch Verio® glucose meter clinical data set of 80 258 data points (671 individual batches) gathered as part of a 7.5-year self-surveillance program. Methods: Test strips were routinely sampled from randomly selected manufacturer's production batches and sent to one of 3 clinic sites for clinical accuracy assessment using fresh capillary blood from patients with diabetes, using both the meter system and a standard laboratory reference instrument. Evaluation of the distribution of strip batch MARD yielded a mean value of 5.05% (range: 3.68-6.43% at ±1.96 standard deviations from the mean). The overall MARD for all clinic data points (N = 80 258) was also 5.05%, while a mean bias of 1.28 was recorded. MARD by glucose level was found to be consistent, yielding a maximum value of 4.81% at higher glucose (≥100 mg/dL) and a mean absolute difference (MAD) of 5.60 mg/dL at low glucose (<100 mg/dL). MARD by year of manufacture varied from 4.67-5.42%, indicating consistent accuracy performance over the surveillance period. This 7.5-year surveillance program showed that this meter system exhibits consistently low MARD by batch, glucose level, and year, indicating close agreement with established reference methods whilst exhibiting lower MARD values than continuous glucose monitoring (CGM) systems and providing users with confidence in its performance when transitioning to each new strip batch.
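The MARD and low-glucose MAD figures above follow directly from paired meter and reference readings; a minimal sketch of both computations, on hypothetical mg/dL values, is:

```python
# Sketch: MARD (%) overall and MAD (mg/dL) in the low-glucose band.
import numpy as np

meter     = np.array([102.0, 55.0, 250.0, 180.0, 88.0])
reference = np.array([100.0, 60.0, 240.0, 175.0, 92.0])

mard = np.mean(np.abs(meter - reference) / reference) * 100.0
low = reference < 100.0
mad_low = np.mean(np.abs(meter - reference)[low])

print(f"MARD = {mard:.2f}%, MAD (<100 mg/dL) = {mad_low:.2f} mg/dL")
```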
Benz, Dominik C; Fuchs, Tobias A; Gräni, Christoph; Studer Bruengger, Annina A; Clerc, Olivier F; Mikulicic, Fran; Messerli, Michael; Stehli, Julia; Possner, Mathias; Pazhenkottil, Aju P; Gaemperli, Oliver; Kaufmann, Philipp A; Buechel, Ronny R
2018-02-01
Iterative reconstruction (IR) algorithms allow for a significant reduction in the radiation dose of coronary computed tomography angiography (CCTA). We performed a head-to-head comparison of adaptive statistical IR (ASiR) and model-based IR (MBIR) algorithms to assess their impact on quantitative image parameters and diagnostic accuracy for submillisievert CCTA. CCTA datasets of 91 patients were reconstructed using filtered back projection (FBP), increasing contributions of ASiR (20, 40, 60, 80, and 100%), and MBIR. Signal and noise were measured in the aortic root to calculate the signal-to-noise ratio (SNR). In a subgroup of 36 patients, the diagnostic accuracy of ASiR 40%, ASiR 100%, and MBIR for diagnosis of coronary artery disease (CAD) was compared with invasive coronary angiography. Median radiation dose was 0.21 mSv for CCTA. While increasing levels of ASiR gradually reduced image noise compared with FBP (up to -48%, P < 0.001), MBIR provided the largest noise reduction (-79% compared with FBP), outperforming ASiR (-59% compared with ASiR 100%; P < 0.001). Increased noise and lower SNR with ASiR 40% and ASiR 100% resulted in substantially lower diagnostic accuracy to detect CAD as diagnosed by invasive coronary angiography compared with MBIR: sensitivity and specificity were 100 and 37%, 100 and 57%, and 100 and 74% for ASiR 40%, ASiR 100%, and MBIR, respectively. MBIR offers substantial noise reduction with increased SNR, paving the way for implementation of submillisievert CCTA protocols in clinical routine. In contrast, inferior noise reduction by ASiR negatively affects the diagnostic accuracy of submillisievert CCTA for CAD detection. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2017. For permissions, please email: journals.permissions@oup.com.
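The SNR measurement described above is a simple ROI statistic; a minimal sketch, with a synthetic aortic-root ROI standing in for real CT voxels, is:

```python
# Sketch: SNR = mean HU / HU standard deviation within an aortic-root ROI.
import numpy as np

roi_hu = np.random.default_rng(1).normal(loc=400.0, scale=25.0, size=500)

signal = roi_hu.mean()
noise = roi_hu.std(ddof=1)
print(f"SNR = {signal / noise:.1f}")
```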
ERIC Educational Resources Information Center
Han, Chao; Riazi, Mehdi
2018-01-01
The accuracy of self-assessment has long been examined empirically in higher education research, producing a substantial body of literature that casts light on numerous potential moderators. However, despite the growing popularity of self-assessment in interpreter training and education, very limited evidence-based research has been initiated to…
ERIC Educational Resources Information Center
Wang, Xin
2017-01-01
Scholars debate whether corrective feedback contributes to improving L2 learners' grammatical accuracy in writing performance. Some researchers take a stance on the ineffectiveness of corrective feedback based on the impracticality of providing detailed corrective feedback for all L2 learners and detached grammar instruction in language…
Experimental validation of numerical simulations on a cerebral aneurysm phantom model
Seshadhri, Santhosh; Janiga, Gábor; Skalej, Martin; Thévenin, Dominique
2012-01-01
The treatment of cerebral aneurysms, found in roughly 5% of the population and associated, in case of rupture, with a high mortality rate, is a major challenge for neurosurgery and neuroradiology due to the complexity of the intervention and the resulting high hazard ratio. Improvements are possible but require a better understanding of the associated, unsteady blood flow patterns in complex 3D geometries. It would be very useful to carry out such studies using suitable numerical models, if it is proven that they reproduce the real conditions accurately enough. This validation step is classically based on comparisons with measured data. Since in vivo measurements are extremely difficult and therefore of limited accuracy, complementary model-based investigations considering realistic configurations are essential. In the present study, simulations based on computational fluid dynamics (CFD) have been compared with in situ laser-Doppler velocimetry (LDV) measurements in a phantom model of a cerebral aneurysm. The employed 1:1 model is made from transparent silicone. A liquid mixture composed of water, glycerin, xanthan gum and sodium chloride was specifically adapted for the present investigation. It shows physical flow properties similar to real blood and has a refractive index perfectly matched to that of the silicone model, allowing accurate optical measurements of the flow velocity. For both experiments and simulations, complex pulsatile flow waveforms and flow rates were accounted for. This finally allows a direct, quantitative comparison between measurements and simulations. In this manner, the accuracy of the employed computational model can be checked. PMID:24265876
Tu, Chengjian; Li, Jun; Sheng, Quanhu; Zhang, Ming; Qu, Jun
2014-04-04
Survey-scan-based label-free methods have shown no compelling benefit over fragment ion (MS2)-based approaches when low-resolution mass spectrometry (MS) was used; however, the growing prevalence of high-resolution analyzers may have changed the game. This necessitates an updated, comparative investigation of these approaches for data acquired by high-resolution MS. Here, we compared survey scan-based (ion current, IC) and MS2-based abundance features, including spectral count (SpC) and MS2 total ion current (MS2-TIC), for quantitative analysis using various high-resolution LC/MS data sets. Key discoveries include: (i) a study with seven different biological data sets revealed that only IC achieved high reproducibility for lower-abundance proteins; (ii) evaluation with 5-replicate analyses of a yeast sample showed IC provided much higher quantitative precision and less missing data; (iii) IC, SpC, and MS2-TIC all showed good quantitative linearity (R(2) > 0.99) over a >1000-fold concentration range; (iv) both MS2-TIC and IC showed a good linear response to various protein loading amounts, but SpC did not; (v) quantification using a well-characterized CPTAC data set showed that IC exhibited markedly higher quantitative accuracy, higher sensitivity, and lower false-positives/false-negatives than both SpC and MS2-TIC. Therefore, IC achieved an overall superior performance over the MS2-based strategies in terms of reproducibility, missing data, quantitative dynamic range, quantitative accuracy, and biomarker discovery.
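To clarify the three abundance features being compared, a minimal sketch computing spectral count (SpC), MS2-TIC, and a survey-scan ion-current (IC) feature from a hypothetical table of peptide-spectrum matches follows; real pipelines extract these from raw spectra rather than a prebuilt table.

```python
# Sketch: SpC, MS2-TIC and IC features aggregated per protein.
import pandas as pd

psms = pd.DataFrame({
    "protein":  ["P1", "P1", "P2", "P2", "P2"],
    "ms2_tic":  [1.2e6, 8.0e5, 3.1e6, 2.2e6, 1.9e6],   # summed MS2 intensity
    "ms1_area": [5.4e7, 3.3e7, 9.8e7, 7.5e7, 6.1e7],   # survey-scan XIC peak area
})

summary = psms.groupby("protein").agg(
    spc=("ms2_tic", "size"),      # spectral count: number of PSMs
    ms2_tic=("ms2_tic", "sum"),   # MS2-based intensity feature
    ic=("ms1_area", "sum"),       # survey-scan ion-current feature
)
print(summary)
```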
FPGA-Based Fused Smart-Sensor for Tool-Wear Area Quantitative Estimation in CNC Machine Inserts
Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto
2010-01-01
Manufacturing processes are of great relevance nowadays, as there is constant demand for better productivity with high quality at low cost. The contribution of this work is the development of a fused smart-sensor, based on an FPGA, to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier, and a 3-axis accelerometer. Results from experimentation show that the fusion of both parameters makes it possible to obtain three times better accuracy than that obtained from the current and vibration signals used individually. PMID:22319304
Frampton, G K; Kalita, N; Payne, L; Colquitt, J L; Loveman, E; Downes, S M; Lotery, A J
2017-07-01
We conducted a systematic review of the accuracy of fundus autofluorescence (FAF) imaging for diagnosing and monitoring retinal conditions. Searches in November 2014 identified English language references. Sources included MEDLINE, EMBASE, the Cochrane Library, Web of Science, and MEDION databases; reference lists of retrieved studies; and internet pages of relevant organisations, meetings, and trial registries. For inclusion, studies had to report FAF imaging accuracy quantitatively. Studies were critically appraised using QUADAS risk of bias criteria. Two reviewers conducted all review steps. From 2240 unique references identified, eight primary research studies met the inclusion criteria. These investigated diagnostic accuracy of FAF imaging for choroidal neovascularisation (one study), reticular pseudodrusen (three studies), cystoid macular oedema (two studies), and diabetic macular oedema (two studies). Diagnostic sensitivity of FAF imaging ranged from 32 to 100% and specificity from 34 to 100%. However, owing to methodological limitations, including high and/or unclear risks of bias, none of these studies provides conclusive evidence of the diagnostic accuracy of FAF imaging. Study heterogeneity precluded meta-analysis. In most studies, the patient spectrum was not reflective of those who would present in clinical practice and no studies adequately reported whether FAF images were interpreted consistently. No studies of monitoring accuracy were identified. An update in October 2016, based on MEDLINE and internet searches, identified four new studies but did not alter our conclusions. Robust quantitative evidence on the accuracy of FAF imaging and how FAF images are interpreted is lacking. We provide recommendations to address this.
Wang, Chao-Qun; Jia, Xiu-Hong; Zhu, Shu; Komatsu, Katsuko; Wang, Xuan; Cai, Shao-Qing
2015-03-01
A new quantitative analysis of multi-component with single marker (QAMS) method for 11 saponins (ginsenosides Rg1, Rb1, Rg2, Rh1, Rf, Re and Rd; notoginsenosides R1, R4, Fa and K) in notoginseng was established, in which 6 of these saponins were individually used as internal reference substances to investigate the influences of chemical structure, concentrations of the quantitative components, and purities of the standard substances on the accuracy of the QAMS method. The results showed that the concentration of the analyte in the sample solution was the major influencing parameter, whereas the other parameters had minimal influence on the accuracy of the QAMS method. A new method for calculating the relative correction factors by linear regression was established (linear regression method), which was demonstrated to decrease the standard method differences of the QAMS method from 1.20%±0.02% - 23.29%±3.23% to 0.10%±0.09% - 8.84%±2.85% in comparison with the previous method. The differences between the external standard method and the QAMS method using relative correction factors calculated by the linear regression method were below 5% in the quantitative determination of Rg1, Re, R1, Rd and Fa in 24 notoginseng samples and of Rb1 in 21 notoginseng samples, and mostly below 10% in the quantitative determination of Rf, Rg2, R4 and N-K (the differences for these 4 constituents were bigger because their contents were lower) in all 24 notoginseng samples. The results indicated that the contents assayed by the new QAMS method could be considered as accurate as those assayed by the external standard method. In addition, a method for determining the applicable concentration ranges of the quantitative components assayed by the QAMS method was established for the first time, which could ensure its high accuracy and could be applied to QAMS methods for other TCMs. The present study demonstrated the practicability of the QAMS method for the quantitative analysis of multiple components and the quality control of TCMs and TCM prescriptions. Copyright © 2014 Elsevier B.V. All rights reserved.
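One plausible reading of the linear-regression route to relative correction factors is sketched below: fit calibration slopes for the marker and for an analyte, take their ratio as the relative correction factor (RCF), then quantify the analyte through the marker's calibration alone. All numbers are hypothetical.

```python
# Sketch: QAMS quantitation via a slope-ratio relative correction factor.
import numpy as np

conc = np.array([10.0, 20.0, 40.0, 80.0])              # standards (ug/mL)
area_marker  = np.array([105.0, 210.0, 418.0, 835.0])
area_analyte = np.array([52.0, 101.0, 205.0, 409.0])

slope_marker  = np.polyfit(conc, area_marker, 1)[0]
slope_analyte = np.polyfit(conc, area_analyte, 1)[0]
rcf = slope_analyte / slope_marker                     # relative correction factor

sample_area = 150.0                                    # analyte peak in a sample
conc_analyte = sample_area / (rcf * slope_marker)
print(f"RCF = {rcf:.3f}, analyte = {conc_analyte:.1f} ug/mL")
```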
Wang, Shunhai; Bobst, Cedric E; Kaltashov, Igor A
2015-01-01
Transferrin (Tf) is an 80 kDa iron-binding protein that is viewed as a promising drug carrier to target the central nervous system as a result of its ability to penetrate the blood-brain barrier. Among the many challenges during the development of Tf-based therapeutics, the sensitive and accurate quantitation of the administered Tf in cerebrospinal fluid (CSF) remains particularly difficult because of the presence of abundant endogenous Tf. Herein, we describe the development of a new liquid chromatography-mass spectrometry-based method for the sensitive and accurate quantitation of exogenous recombinant human Tf in rat CSF. By taking advantage of a His-tag present in recombinant Tf and applying Ni affinity purification, the exogenous human serum Tf can be greatly enriched from rat CSF, despite the presence of the abundant endogenous protein. Additionally, we applied a newly developed (18)O-labeling technique that can generate internal standards at the protein level, which greatly improved the accuracy and robustness of quantitation. The developed method was investigated for linearity, accuracy, precision, and lower limit of quantitation, all of which met the commonly accepted criteria for bioanalytical method validation.
Identification of susceptibility genes and genetic modifiers of human diseases
NASA Astrophysics Data System (ADS)
Abel, Kenneth; Kammerer, Stefan; Hoyal, Carolyn; Reneland, Rikard; Marnellos, George; Nelson, Matthew R.; Braun, Andreas
2005-03-01
The completion of the human genome sequence enables the discovery of genes involved in common human disorders. The successful identification of these genes is dependent on the availability of informative sample sets, validated marker panels, a high-throughput scoring technology, and a strategy for combining these resources. We have developed a universal platform technology based on mass spectrometry (MassARRAY) for analyzing nucleic acids with high precision and accuracy. To fuel this technology, we generated more than 100,000 validated assays for single nucleotide polymorphisms (SNPs) covering virtually all known and predicted human genes. We also established a large DNA sample bank comprising more than 50,000 consented healthy and diseased individuals. This combination of reagents and technology allows the execution of large-scale genome-wide association studies. Taking advantage of MassARRAY's capability for quantitative analysis of nucleic acids, allele frequencies are estimated in sample pools containing large numbers of individual DNAs. Comparing pools as a first-pass "filtering" step is a tremendous advantage in throughput and cost over individual genotyping. We employed this approach in numerous genome-wide, hypothesis-free searches to identify genes associated with common complex diseases, such as breast cancer, osteoporosis, and osteoarthritis, and genes involved in quantitative traits like high-density lipoprotein cholesterol (HDL-c) levels and central fat. Access to additional well-characterized patient samples through collaborations allows us to conduct replication studies that validate true disease genes. These discoveries will expand our understanding of genetic disease predisposition, and our ability for early diagnosis and determination of specific disease subtype or progression stage.
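The pooled allele-frequency estimation mentioned above can be sketched very simply: the frequency of one allele is its quantitative signal divided by the summed signals of both alleles in the pool. Peak values are hypothetical, and a real assay would also correct for allele-specific signal skew.

```python
# Sketch: allele frequency from pooled quantitative peak signals.

def pooled_allele_frequency(peak_a: float, peak_b: float) -> float:
    return peak_a / (peak_a + peak_b)

case_freq    = pooled_allele_frequency(6200.0, 3800.0)   # case pool
control_freq = pooled_allele_frequency(5100.0, 4900.0)   # control pool
print(f"case {case_freq:.2f} vs control {control_freq:.2f}")
```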
FlowerMorphology: fully automatic flower morphometry software.
Rozov, Sergey M; Deineko, Elena V; Deyneko, Igor V
2018-05-01
The software FlowerMorphology is designed for automatic morphometry of actinomorphic flowers. The novel complex parameters of flowers calculated by FlowerMorphology allowed us to quantitatively characterize a polyploid series of tobacco. Morphological differences of plants representing closely related lineages or mutants are mostly quantitative. Very often, there are only very fine variations in plant morphology. Therefore, accurate and high-throughput methods are needed for their quantification. In addition, new characteristics are necessary for reliable detection of subtle changes in morphology. FlowerMorphology is an all-in-one software package to automatically image and analyze five-petal actinomorphic flowers of dicotyledonous plants. Sixteen directly measured parameters and ten calculated complex parameters of a flower allow us to characterize variations with high accuracy. The program was developed for the automatic characterization of Nicotiana tabacum flowers, but it is applicable to many other plants with five-petal actinomorphic flowers and can be adapted for flowers of other merosity. A genetically similar polyploid series of N. tabacum plants was used to investigate differences in flower morphology. For the first time, we could quantify the dependence between ploidy and the size and form of the tobacco flowers. We found that the radius of inner petal incisions shows a persistent positive correlation with the chromosome number. In contrast, a commonly used parameter, the radius of the outer corolla, does not discriminate 2n and 4n plants. Other parameters show that polyploidy leads to significant aberrations in flower symmetry that are also positively correlated with chromosome number. Executables of FlowerMorphology, source code, documentation, and examples are available at the program website: https://github.com/Deyneko/FlowerMorphology .
Reasoning strategies with rational numbers revealed by eye tracking.
Plummer, Patrick; DeWolf, Melissa; Bassok, Miriam; Gordon, Peter C; Holyoak, Keith J
2017-07-01
Recent research has begun to investigate the impact of different formats for rational numbers on the processes by which people make relational judgments about quantitative relations. DeWolf, Bassok, and Holyoak (Journal of Experimental Psychology: General, 144(1), 127-150, 2015) found that accuracy on a relation identification task was highest when fractions were presented with countable sets, whereas accuracy was relatively low for all conditions where decimals were presented. However, it is unclear what processing strategies underlie these disparities in accuracy. We report an experiment that used eye-tracking methods to externalize the strategies that are evoked by different types of rational numbers for different types of quantities (discrete vs. continuous). Results showed that eye-movement behavior during the task was jointly determined by image and number format. Discrete images elicited a counting strategy for both fractions and decimals, but this strategy led to higher accuracy only for fractions. Continuous images encouraged magnitude estimation and comparison, but to a greater degree for decimals than fractions. This strategy led to decreased accuracy for both number formats. By analyzing participants' eye movements when they viewed a relational context and made decisions, we were able to obtain an externalized representation of the strategic choices evoked by different ontological types of entities and different types of rational numbers. Our findings using eye-tracking measures enable us to go beyond previous studies based on accuracy data alone, demonstrating that quantitative properties of images and the different formats for rational numbers jointly influence strategies that generate eye-movement behavior.
Morgante, Fabio; Huang, Wen; Maltecca, Christian; Mackay, Trudy F C
2018-06-01
Predicting complex phenotypes from genomic data is a fundamental aim of animal and plant breeding, where we wish to predict genetic merits of selection candidates; and of human genetics, where we wish to predict disease risk. While genomic prediction models work well with populations of related individuals and high linkage disequilibrium (LD) (e.g., livestock), comparable models perform poorly for populations of unrelated individuals and low LD (e.g., humans). We hypothesized that low prediction accuracies in the latter situation may occur when the genetics architecture of the trait departs from the infinitesimal and additive architecture assumed by most prediction models. We used simulated data for 10,000 lines based on sequence data from a population of unrelated, inbred Drosophila melanogaster lines to evaluate this hypothesis. We show that, even in very simplified scenarios meant as a stress test of the commonly used Genomic Best Linear Unbiased Predictor (G-BLUP) method, using all common variants yields low prediction accuracy regardless of the trait genetic architecture. However, prediction accuracy increases when predictions are informed by the genetic architecture inferred from mapping the top variants affecting main effects and interactions in the training data, provided there is sufficient power for mapping. When the true genetic architecture is largely or partially due to epistatic interactions, the additive model may not perform well, while models that account explicitly for interactions generally increase prediction accuracy. Our results indicate that accounting for genetic architecture can improve prediction accuracy for quantitative traits.
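As a reference point for the G-BLUP baseline stressed above, here is a minimal sketch of genomic prediction as kernel ridge regression on a VanRaden-style genomic relationship matrix; genotypes, phenotypes, and the variance-ratio shrinkage parameter are all synthetic placeholders.

```python
# Sketch: G-BLUP via a VanRaden GRM and the mixed-model ridge solution.
import numpy as np

rng = np.random.default_rng(2)
geno = rng.integers(0, 3, size=(100, 500)).astype(float)  # individuals x SNPs (0/1/2)

p = geno.mean(axis=0) / 2.0                               # allele frequencies
z = geno - 2.0 * p                                        # centered genotypes
grm = z @ z.T / (2.0 * np.sum(p * (1.0 - p)))             # VanRaden GRM

y = rng.normal(size=100)                                  # centered phenotypes
lam = 1.0                                                 # assumed variance ratio
u_hat = grm @ np.linalg.solve(grm + lam * np.eye(100), y) # predicted genetic values
print(u_hat[:5])
```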
The Impact of Pushed Output on Accuracy and Fluency of Iranian EFL Learners' Speaking
ERIC Educational Resources Information Center
Sadeghi Beniss, Aram Reza; Edalati Bazzaz, Vahid
2014-01-01
The current study attempted to establish baseline quantitative data on the impacts of pushed output on two components of speaking (i.e., accuracy and fluency). To achieve this purpose, 30 female EFL learners were selected from a whole population pool of 50 based on the standard test of IELTS interview and were randomly assigned into an…
Method comparison for forest soil carbon and nitrogen estimates in the Delaware River basin
B. Xu; Yude Pan; A.H. Johnson; A.F. Plante
2016-01-01
The accuracy of forest soil C and N estimates is hampered by forest soils that are rocky, inaccessible, and spatially heterogeneous. A composite coring technique is the standard method used in Forest Inventory and Analysis, but its accuracy has been questioned. Quantitative soil pits provide direct measurement of rock content and soil mass from a larger, more...
Increased Accuracy of Ligand Sensing by Receptor Internalization and Lateral Receptor Diffusion
NASA Astrophysics Data System (ADS)
Aquino, Gerardo; Endres, Robert
2010-03-01
Many types of cells can sense external ligand concentrations with cell-surface receptors at extremely high accuracy. Interestingly, ligand-bound receptors are often internalized, a process also known as receptor-mediated endocytosis. While internalization is involved in a vast number of functions important for the life of a cell, it was recently also suggested to increase the accuracy of sensing ligand, as overcounting of the same ligand molecules is reduced. A similar role may be played by receptor diffusion on the cell membrane. Fast, lateral receptor diffusion is known to be relevant in neurotransmission initiated by the release of the neurotransmitter glutamate in the synaptic cleft between neurons. By binding ligand and then being removed by diffusion from the region of neurotransmitter release, diffusing receptors can reasonably be expected to reduce the local overcounting of the same ligand molecules in the region of signaling. By extending simple ligand-receptor models to out-of-equilibrium thermodynamics, we show that both receptor internalization and lateral diffusion increase the accuracy with which cells can measure ligand concentrations in the external environment. We confirm this with our model and give quantitative predictions for experimental parameter values, which compare favorably to experimental data for real receptors.
Bell, M R; Britson, P J; Chu, A; Holmes, D R; Bresnahan, J F; Schwartz, R S
1997-01-01
We describe a method of validation of computerized quantitative coronary arteriography and report the results of a new UNIX-based quantitative coronary arteriography software program developed for rapid on-line (digital) and off-line (digital or cinefilm) analysis. The UNIX operating system is widely available in computer systems using very fast processors and has excellent graphics capabilities. The system is potentially compatible with any cardiac digital x-ray system for on-line analysis and has been designed to incorporate an integrated database, have on-line and immediate recall capabilities, and provide digital access to all data. The accuracy (mean signed difference of the observed minus the true dimensions) and precision (pooled standard deviation of the measurements) of the program were determined using x-ray vessel phantoms. Intra- and interobserver variabilities were assessed from in vivo studies during routine clinical coronary arteriography. Precision from the x-ray phantom studies (6-in. field of view) was 0.066 mm for digital images and 0.060 mm for digitized cine images. Accuracy was 0.076 mm (overestimation) for digital images compared to 0.008 mm for digitized cine images. Diagnostic coronary catheters were also used for calibration; accuracy varied according to the size of the catheter and whether or not it was filled with iodinated contrast. Intra- and interobserver variabilities were excellent and indicated that coronary lesion measurements were relatively user-independent. Thus, this easy-to-use and very fast UNIX-based program appears to be robust, with optimal accuracy and precision for clinical and research applications.
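The accuracy and precision definitions used above reduce to simple statistics on phantom measurements; a minimal sketch (a single measurement group is shown here, whereas the study pools standard deviations across phantoms) is:

```python
# Sketch: accuracy = mean signed difference, precision = standard deviation.
import numpy as np

observed = np.array([3.05, 2.98, 3.10, 2.96, 3.07])   # measured diameters (mm)
true_diameter = 3.00                                  # phantom ground truth (mm)

accuracy = np.mean(observed - true_diameter)          # signed bias
precision = np.std(observed, ddof=1)                  # spread of measurements
print(f"accuracy = {accuracy:+.3f} mm, precision = {precision:.3f} mm")
```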
Motion correction for improving the accuracy of dual-energy myocardial perfusion CT imaging
NASA Astrophysics Data System (ADS)
Pack, Jed D.; Yin, Zhye; Xiong, Guanglei; Mittal, Priya; Dunham, Simon; Elmore, Kimberly; Edic, Peter M.; Min, James K.
2016-03-01
Coronary Artery Disease (CAD) is the leading cause of death globally [1]. Modern cardiac computed tomography angiography (CCTA) is highly effective at identifying and assessing coronary blockages associated with CAD. The diagnostic value of this anatomical information can be substantially increased in combination with a non-invasive, low-dose, correlative, quantitative measure of blood supply to the myocardium. While CT perfusion has shown promise of providing such indications of ischemia, artifacts due to motion, beam hardening, and other factors confound clinical findings and can limit quantitative accuracy. In this paper, we investigate the impact of applying a novel motion correction algorithm to correct for motion in the myocardium. This motion compensation algorithm (originally designed to correct for the motion of the coronary arteries in order to improve CCTA images) has been shown to provide substantial improvements in both overall image quality and diagnostic accuracy of CCTA. We have adapted this technique for application beyond the coronary arteries and present an assessment of its impact on image quality and quantitative accuracy within the context of dual-energy CT perfusion imaging. We conclude that motion correction is a promising technique that can help foster the routine clinical use of dual-energy CT perfusion. When combined, the anatomical information of CCTA and the hemodynamic information from dual-energy CT perfusion should facilitate better clinical decisions about which patients would benefit from treatments such as stent placement, drug therapy, or surgery and help other patients avoid the risks and costs associated with unnecessary, invasive, diagnostic coronary angiography procedures.
NASA Astrophysics Data System (ADS)
Ying, Jia-ju; Yin, Jian-ling; Wu, Dong-sheng; Liu, Jie; Chen, Yu-dan
2017-11-01
Low-light level night vision devices and thermal infrared imaging binocular photoelectric instruments are widely used. Misalignment of the parallelism of the ocular axes of a binocular instrument causes the observer symptoms such as dizziness and nausea during prolonged use. A digital calibration instrument for binocular photoelectric equipment was developed to detect ocular axis parallelism, allowing the optical axis deviation to be measured quantitatively. As a testing instrument, its precision must be much higher than that of the instrument under test. This paper analyzes the factors that influence detection accuracy. Such factors exist at each link of the testing process and can be divided into two categories: those that directly affect the position of the reticle image, and those that affect the calculation of the center of the reticle image. The synthesized error is calculated, and the errors are then distributed reasonably to ensure the accuracy of the calibration instrument.
NASA Astrophysics Data System (ADS)
Mok, Aaron T. Y.; Lee, Kelvin C. M.; Wong, Kenneth K. Y.; Tsia, Kevin K.
2018-02-01
Biophysical properties of cells could complement and correlate with biochemical markers to characterize a multitude of cellular states. Changes in cell size, dry mass and subcellular morphology, for instance, are relevant to cell-cycle progression, which is prevalently evaluated by DNA-targeted fluorescence measurements. Quantitative-phase microscopy (QPM) is among the effective biophysical phenotyping tools that can quantify cell sizes and sub-cellular dry mass density distribution of single cells at high spatial resolution. However, the limited camera frame rate, and thus imaging throughput, makes QPM incompatible with high-throughput flow cytometry - a gold standard in multiparametric cell-based assay. Here we present a high-throughput approach for label-free analysis of the cell cycle based on quantitative-phase time-stretch imaging flow cytometry at a throughput of > 10,000 cells/s. Our time-stretch QPM system enables sub-cellular resolution even at high speed, allowing us to extract a multitude (at least 24) of single-cell biophysical phenotypes (from both amplitude and phase images). Those phenotypes can be combined to track cell-cycle progression based on a t-distributed stochastic neighbor embedding (t-SNE) algorithm. Using multivariate analysis of variance (MANOVA) discriminant analysis, cell-cycle phases can also be predicted label-free with high accuracy: >90% in G1 and G2 phase, and >80% in S phase. We anticipate that high-throughput label-free cell cycle characterization could open new approaches for large-scale single-cell analysis, bringing new mechanistic insights into complex biological processes including disease pathogenesis.
Liu, Dan; Li, Xingrui; Zhou, Junkai; Liu, Shibo; Tian, Tian; Song, Yanling; Zhu, Zhi; Zhou, Leiji; Ji, Tianhai; Yang, Chaoyong
2017-10-15
Enzyme-linked immunosorbent assay (ELISA) is a popular laboratory technique for detection of disease-specific protein biomarkers with high specificity and sensitivity. However, ELISA requires labor-intensive and time-consuming procedures with skilled operators and spectroscopic instrumentation. Simplification of the procedures and miniaturization of the devices are crucial for ELISA-based point-of-care (POC) testing in resource-limited settings. Here, we present a fully integrated, instrument-free, low-cost and portable POC platform which integrates the process of ELISA and the distance readout into a single microfluidic chip. Based on manipulation using a permanent magnet, the process is initiated by moving magnetic beads with capture antibody through different aqueous phases containing ELISA reagents to form a bead/antibody/antigen/antibody sandwich structure, and finally converts the molecular recognition signal into a highly sensitive distance readout for visual quantitative bioanalysis. Without additional equipment and complicated operations, our integrated ELISA-Chip with distance readout allows ultrasensitive quantitation of disease biomarkers within 2 h. The ELISA-Chip method also showed high specificity, good precision and great accuracy. Furthermore, the ELISA-Chip system is highly applicable as a sandwich-based platform for the detection of a variety of protein biomarkers. With the advantages of visual analysis, easy operation, high sensitivity, and low cost, the integrated sample-in-answer-out ELISA-Chip with distance readout shows great potential for quantitative POC testing in resource-limited settings. Copyright © 2017. Published by Elsevier B.V.
Assis, Sofia; Costa, Pedro; Rosas, Maria Jose; Vaz, Rui; Silva Cunha, Joao Paulo
2016-08-01
Intraoperative evaluation of the efficacy of Deep Brain Stimulation (DBS) includes evaluation of its effect on rigidity. A subjective semi-quantitative scale is used, dependent on the examiner's perception and experience. A system was proposed previously to tackle this subjectivity, using quantitative data and providing real-time feedback of the computed rigidity reduction, hence supporting the physician's decision. This system comprised a gyroscope-based motion sensor in a textile band, placed on the patient's hand, which communicated its measurements to a laptop. The latter computed a signal descriptor from the angular velocity of the hand during wrist flexion in DBS surgery. The first approach relied on a general rigidity reduction model, regardless of the initial severity of the symptom. Thus, to enhance the performance of the previously presented system, we aimed to develop separate models for high and low baseline rigidity, according to the examiner's assessment before any stimulation. This allows a more patient-oriented approach. Additionally, usability was improved by performing the processing in situ on a smartphone instead of a computer. The system has been shown to be reliable, presenting an accuracy of 82.0% and a mean error of 3.4%. Relative to previous results, the performance was similar, further supporting the importance of considering cogwheel rigidity to better infer the reduction in rigidity. Overall, we present a simple, wearable, mobile system, suitable for intraoperative conditions during DBS, supporting the physician in decision-making when setting stimulation parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierson, Theodore C.; Sanchez, Melissa D.; Puffer, Bridget A.
2006-03-01
West Nile virus (WNV) is a neurotropic flavivirus within the Japanese encephalitis antigenic complex that is responsible for causing West Nile encephalitis in humans. The surface of WNV virions is covered by a highly ordered icosahedral array of envelope proteins that is responsible for mediating attachment and fusion with target cells. These envelope proteins are also primary targets for the generation of neutralizing antibodies in vivo. In this study, we describe a novel approach for measuring antibody-mediated neutralization of WNV infection using virus-like particles that measure infection as a function of reporter gene expression. These reporter virus particles (RVPs) are produced by complementation of a sub-genomic replicon with WNV structural proteins provided in trans using conventional DNA expression vectors. The precision and accuracy of this approach stem from an ability to measure the outcome of the interaction between antibody and viral antigens under conditions that satisfy the assumptions of the law of mass action as applied to virus neutralization. In addition to its quantitative strengths, this approach allows the production of WNV RVPs bearing the prM-E proteins of different WNV strains and mutants, offering considerable flexibility for the study of the humoral immune response to WNV in vitro. WNV RVPs are capable of only a single round of infection, can be used under BSL-2 conditions, and offer a rapid and quantitative approach for detecting virus entry and its inhibition by neutralizing antibody.
NASA Technical Reports Server (NTRS)
Duhon, D. D.
1975-01-01
The shuttle orbital maneuvering system (OMS) pressure-volume-temperature (P-V-T) propellant gaging module computes the quantity of usable OMS propellant remaining based on the real gas P-V-T relationship for the propellant tank pressurant, helium. The OMS P-V-T propellant quantity gaging error was determined for four sets of instrumentation configurations and accuracies with the propellant tank operating in the normal constant pressure mode and in the blowdown mode. The instrumentation inaccuracy allowance for propellant leak detection was also computed for these same four sets of instrumentation. These gaging errors and leak detection allowances are presented in tables designed to permit a direct comparison of the effectiveness of the four instrumentation sets. The results show the magnitudes of the improvements in propellant quantity gaging accuracies and propellant leak detection allowances which can be achieved by employing more accurate pressure and temperature instrumentation.
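The gaging principle above can be sketched from the real-gas law: with P V = z n R T for the helium pressurant, measured pressure and temperature give the ullage volume, and the propellant volume is the tank volume minus the ullage. The tank constants and compressibility factor below are hypothetical.

```python
# Sketch: P-V-T gaging of remaining propellant from helium pressure/temperature.

R = 8.314  # J/(mol K)

def propellant_volume(tank_volume_m3: float, helium_moles: float,
                      pressure_pa: float, temp_k: float, z: float = 1.05) -> float:
    ullage_volume = z * helium_moles * R * temp_k / pressure_pa  # gas volume
    return tank_volume_m3 - ullage_volume

# Example: 1.0 m3 tank holding 400 mol of helium at 1.6 MPa and 290 K.
print(f"{propellant_volume(1.0, 400.0, 1.6e6, 290.0):.3f} m3 of propellant")
```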
NASA Astrophysics Data System (ADS)
Margheri, Luca; Sagaut, Pierre
2016-11-01
To significantly increase the contribution of numerical computational fluid dynamics (CFD) simulation for risk assessment and decision making, it is important to quantitatively measure the impact of uncertainties to assess the reliability and robustness of the results. As unsteady high-fidelity CFD simulations are becoming the standard for industrial applications, reducing the number of required samples to perform sensitivity (SA) and uncertainty quantification (UQ) analysis is an actual engineering challenge. The novel approach presented in this paper is based on an efficient hybridization between the anchored-ANOVA and the POD/Kriging methods, which have already been used in CFD-UQ realistic applications, and the definition of best practices to achieve global accuracy. The anchored-ANOVA method is used to efficiently reduce the UQ dimension space, while the POD/Kriging is used to smooth and interpolate each anchored-ANOVA term. The main advantages of the proposed method are illustrated through four applications with increasing complexity, most of them based on Large-Eddy Simulation as a high-fidelity CFD tool: the turbulent channel flow, the flow around an isolated bluff-body, a pedestrian wind comfort study in a full scale urban area and an application to toxic gas dispersion in a full scale city area. The proposed c-APK method (anchored-ANOVA-POD/Kriging) inherits the advantages of each key element: interpolation through POD/Kriging precludes the use of quadrature schemes therefore allowing for a more flexible sampling strategy while the ANOVA decomposition allows for a better domain exploration. A comparison of the three methods is given for each application. In addition, the importance of adding flexibility to the control parameters and the choice of the quantity of interest (QoI) are discussed. As a result, global accuracy can be achieved with a reasonable number of samples allowing computationally expensive CFD-UQ analysis.
Vaccarono, Mattia; Bechini, Renzo; Chandrasekar, Chandra V.; ...
2016-11-08
The stability of weather radar calibration is a mandatory aspect for quantitative applications, such as rainfall estimation, short-term weather prediction and initialization of numerical atmospheric and hydrological models. Over the years, calibration monitoring techniques based on external sources have been developed, specifically calibration using the Sun and calibration based on ground clutter returns. In this paper, these two techniques are integrated and complemented with a self-consistency procedure and an intercalibration technique. The aim of the integrated approach is to implement a robust method for online monitoring, able to detect significant changes in the radar calibration. The physical consistency of polarimetric radar observables is exploited using the self-consistency approach, based on the expected correspondence between dual-polarization power and phase measurements in rain. This technique allows a reference absolute value to be provided for the radar calibration, from which eventual deviations may be detected using the other procedures. In particular, the ground clutter calibration is implemented on both polarization channels (horizontal and vertical) for each radar scan, allowing the polarimetric variables to be monitored and hardware failures to promptly be recognized. The Sun calibration allows monitoring the calibration and sensitivity of the radar receiver, in addition to the antenna pointing accuracy. It is applied using observations collected during the standard operational scans but requires long integration times (several days) in order to accumulate a sufficient amount of useful data. Finally, an intercalibration technique is developed and performed to compare colocated measurements collected in rain by two radars in overlapping regions. The integrated approach is performed on the C-band weather radar network in northwestern Italy, during July–October 2014. The set of methods considered appears suitable to establish an online tool to monitor the stability of the radar calibration with an accuracy of about 2 dB. In conclusion, this is considered adequate to automatically detect any unexpected change in the radar system requiring further data analysis or on-site measurements.
Liao, Hsiao-Wei; Chen, Guan-Yuan; Wu, Ming-Shiang; Liao, Wei-Chih; Lin, Ching-Hung; Kuo, Ching-Hua
2017-02-03
Quantitative metabolomics has become much more important in clinical research in recent years. Individual differences in matrix effects (MEs) and the injection order effect are two major factors that reduce quantification accuracy in liquid chromatography-electrospray ionization-mass spectrometry (LC-ESI-MS)-based metabolomics studies. This study proposed a postcolumn-infused internal standard (PCI-IS) combined with a matrix normalization factor (MNF) strategy to improve the analytical accuracy of quantitative metabolomics. The PCI-IS combined with the MNF method was applied to a targeted metabolomics study of amino acids (AAs). D8-Phenylalanine was used as the PCI-IS and was postcolumn-infused into the ESI interface for calibration purposes. The MNF was used to bridge the AA response in a standard solution with that in the plasma samples. Signal changes caused by MEs were corrected by dividing the AA signal intensities by the PCI-IS intensities after adjustment with the MNF. After method validation, we evaluated the method's applicability to breast cancer research using 100 plasma samples. The quantification results revealed that the 11 tested AAs exhibit an accuracy between 88.2 and 110.7%. The principal component analysis score plot revealed that the injection order effect can be successfully removed, and most of the within-group variation of the tested AAs decreased after the PCI-IS correction. Finally, targeted metabolomics studies of the AAs showed that tryptophan was more abundant in malignant patients than in the benign group. We anticipate that a similar approach can be applied to other endogenous metabolites to facilitate quantitative metabolomics studies.
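The correction step described above reduces to simple arithmetic once the MNF is known; a minimal sketch (variable names and numbers are illustrative assumptions, not the authors' data or implementation) might look like:

```python
import numpy as np

# Analyte peak intensities and co-eluting PCI-IS intensities, one per sample.
aa_signal = np.array([1.8e5, 2.1e5, 1.5e5])
pci_is_signal = np.array([9.0e4, 1.2e5, 7.5e4])

# Matrix normalization factor: bridges the PCI-IS response in a neat
# standard solution to its response in the plasma matrix (assumed value).
mnf = 1.15

# Dividing the analyte signal by the matrix-adjusted PCI-IS signal cancels
# per-sample matrix effects and injection-order drift.
corrected = aa_signal / (pci_is_signal * mnf)
print(corrected)
```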
Pardo, Scott; Simmons, David A
2016-09-01
The relationship between International Organization for Standardization (ISO) accuracy criteria and mean absolute relative difference (MARD), 2 methods for assessing the accuracy of blood glucose meters, is complex. While lower MARD values are generally better than higher MARD values, it is not possible to define a particular MARD value that ensures a blood glucose meter will satisfy the ISO accuracy criteria. The MARD value that ensures passing the ISO accuracy test can be described only as a probabilistic range. In this work, a Bayesian model is presented to represent the relationship between ISO accuracy criteria and MARD. Under the assumptions made in this work, there is nearly a 100% chance of satisfying ISO 15197:2013 accuracy requirements if the MARD value is between 3.25% and 5.25%. © 2016 Diabetes Technology Society.
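As a rough illustration of the two metrics being compared, the following sketch computes MARD and the fraction of readings inside the ISO 15197:2013 system-accuracy limits. The acceptance criteria are paraphrased here as we understand them (within ±15 mg/dL of the reference below 100 mg/dL, within ±15% at or above it, for at least 95% of readings); the data are invented:

```python
import numpy as np

def mard(meter, reference):
    """Mean absolute relative difference (%) between meter and reference."""
    meter, reference = np.asarray(meter, float), np.asarray(reference, float)
    return 100.0 * np.mean(np.abs(meter - reference) / reference)

def iso_15197_2013_fraction(meter, reference):
    """Fraction of readings inside the ISO 15197:2013 accuracy limits
    (criteria paraphrased; consult the standard for the full protocol)."""
    meter, reference = np.asarray(meter, float), np.asarray(reference, float)
    limit = np.where(reference < 100.0, 15.0, 0.15 * reference)
    return np.mean(np.abs(meter - reference) <= limit)  # pass needs >= 0.95

ref = np.array([70, 95, 120, 180, 250], float)   # reference glucose, mg/dL
mtr = np.array([74, 90, 126, 171, 262], float)   # meter readings, mg/dL
print(mard(mtr, ref), iso_15197_2013_fraction(mtr, ref))
```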
Studying learning in the healthcare setting: the potential of quantitative diary methods.
Ciere, Yvette; Jaarsma, Debbie; Visser, Annemieke; Sanderman, Robbert; Snippe, Evelien; Fleer, Joke
2015-08-01
Quantitative diary methods are longitudinal approaches that involve the repeated measurement of aspects of people's experience of daily life. In this article, we outline the main characteristics and applications of quantitative diary methods and discuss how their use may further research in the field of medical education. Quantitative diary methods offer several methodological advantages, such as measuring aspects of learning in great detail and with accuracy and authenticity. Moreover, they enable researchers to study how and under which conditions learning in the health care setting occurs and how learning can be promoted. Hence, quantitative diary methods may contribute to theory development and the optimization of teaching methods in medical education.
NASA Astrophysics Data System (ADS)
Voloshin, A. E.; Prostomolotov, A. I.; Verezub, N. A.
2016-11-01
The paper deals with the analysis of the accuracy of some one-dimensional (1D) analytical models of the axial distribution of impurities in a crystal grown from a melt. The models proposed by Burton-Prim-Slichter, Ostrogorsky-Muller and Garandet with co-authors are considered and compared to the results of a two-dimensional (2D) numerical simulation. Stationary solutions as well as solutions for the initial transient regime obtained using these models are considered. The sources of errors are analyzed, and a conclusion is made about the applicability of 1D analytical models for quantitative estimates of impurity incorporation into the crystal sample as well as for the solution of inverse problems.
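For orientation, the Burton-Prim-Slichter model referenced above relates the effective distribution coefficient to the growth velocity V, the solute boundary-layer thickness δ, and the melt diffusivity D; combined with a normal-freezing (Scheil-type) profile it yields the axial impurity distribution. These are the standard textbook forms, quoted from memory rather than from the paper:

```latex
k_{\mathrm{eff}} = \frac{k_0}{k_0 + (1 - k_0)\, e^{-V\delta/D}},
\qquad
C_s(x) = k_{\mathrm{eff}}\, C_0\, (1 - x)^{\,k_{\mathrm{eff}} - 1}
```

Here k_0 is the equilibrium distribution coefficient, C_0 the initial melt concentration, and x the solidified fraction of the charge.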
Lin, Yu-Zi; Huang, Kuang-Yuh; Luo, Yuan
2018-06-15
Half-circle illumination-based differential phase contrast (DPC) microscopy has been utilized to recover phase images from pairs of images taken along multiple axes. Recently, half-circle-based DPC using 12-axis measurements was shown to provide a circularly symmetric phase transfer function, improving the accuracy and stability of phase recovery. Instead of using half-circle-based DPC, we propose a new DPC scheme under radially asymmetric illumination that achieves a circularly symmetric phase transfer function and enhances the accuracy of phase recovery in a more stable and efficient fashion. We present the design, implementation, and experimental image data demonstrating the ability of our method to obtain quantitative phase images of microspheres, as well as live fibroblast cell samples.
NASA Astrophysics Data System (ADS)
Galmed, A. H.; Elshemey, Wael M.
2017-08-01
Differentiating between normal, benign and malignant excised breast tissues is a major worldwide challenge that needs a quantitative, fast and reliable technique in order to avoid personal errors in diagnosis. Laser induced fluorescence (LIF) is a promising technique that has been applied to the characterization of biological tissues, including breast tissue. Unfortunately, only a few studies have adopted a quantitative approach that can be directly applied to breast tissue characterization. This work provides a quantitative means for such characterization via the introduction of several LIF characterization parameters and the determination of the diagnostic accuracy of each parameter in differentiating between normal, benign and malignant excised breast tissues. Extensive analysis of 41 lyophilized breast samples using scatter diagrams, cut-off values, diagnostic indices and receiver operating characteristic (ROC) curves shows that some spectral parameters (peak height and area under the peak) are superior for the characterization of normal, benign and malignant breast tissues, with high sensitivity (up to 0.91), specificity (up to 0.91) and a "highly accurate" accuracy ranking.
García-Florentino, Cristina; Maguregui, Maite; Romera-Fernández, Miriam; Queralt, Ignasi; Margui, Eva; Madariaga, Juan Manuel
2018-05-01
Wavelength dispersive X-ray fluorescence (WD-XRF) spectrometry has been widely used for the elemental quantification of mortars and cements. In this kind of instrument, samples are usually prepared as pellets or fused beads and the whole volume of sample is measured at once. In this work, the usefulness of a dual energy dispersive X-ray fluorescence (ED-XRF) spectrometer, working at two lateral resolutions (1 mm and 25 μm) for macro- and microanalysis respectively, to develop quantitative methods for the elemental characterization of mortars and concretes is demonstrated. A crucial step before developing any quantitative method with this kind of spectrometer is to verify the homogeneity of the standards at these two lateral resolutions. This new ED-XRF quantitative method also demonstrated the importance of matrix effects on the accuracy of the results, making it necessary to use Certified Reference Materials as standards. The results obtained with the ED-XRF quantitative method were compared with those obtained with two WD-XRF quantitative methods employing two different sample preparation strategies (pellets and fused beads). The selected ED-XRF and both WD-XRF quantitative methods were applied to the analysis of real mortars. The accuracy of the ED-XRF results turned out to be similar to that achieved by WD-XRF, except for the lightest elements (Na and Mg). The results described in this work prove that μ-ED-XRF spectrometers can be used not only for acquiring high resolution elemental map distributions, but also for performing accurate quantitative studies, avoiding the use of more sophisticated WD-XRF systems or the acid extraction/alkaline fusion required as destructive pretreatment in inductively coupled plasma mass spectrometry based procedures.
Evaluation of airway protection: Quantitative timing measures versus penetration/aspiration score.
Kendall, Katherine A
2017-10-01
Quantitative measures of swallowing function may improve the reliability and accuracy of modified barium swallow (MBS) study interpretation. Quantitative study analysis has not been widely instituted, however, secondary to concerns about the time required to make measures and a lack of research demonstrating impact on MBS interpretation. This study compares the accuracy of the penetration/aspiration (PEN/ASP) scale (an observational visual-perceptual assessment tool) to quantitative measures of airway closure timing relative to the arrival of the bolus at the upper esophageal sphincter in identifying a failure of airway protection during deglutition. Retrospective review of clinical swallowing data from a university-based outpatient clinic. Swallowing data from 426 patients were reviewed. Patients with normal PEN/ASP scores were identified, and the results of quantitative airway closure timing measures for three liquid bolus sizes were evaluated. The incidence of significant airway closure delay with and without a normal PEN/ASP score was determined. Inter-rater reliability for the quantitative measures was calculated. In patients with a normal PEN/ASP score, 33% demonstrated a delay in airway closure on at least one swallow during the MBS study. There was no correlation between PEN/ASP score and airway closure delay. Inter-rater reliability for the quantitative measure of airway closure timing was nearly perfect (intraclass correlation coefficient = 0.973). The use of quantitative measures of swallowing function, in conjunction with traditional visual perceptual methods of MBS study interpretation, improves the identification of airway closure delay, and hence potential aspiration risk, even when no penetration or aspiration is apparent on the MBS study. Level of Evidence: 4. Laryngoscope, 127:2314-2318, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
Quantitative endoscopy: initial accuracy measurements.
Truitt, T O; Adelman, R A; Kelly, D H; Willging, J P
2000-02-01
The geometric optics of an endoscope can be used to determine the absolute size of an object in an endoscopic field without knowing the actual distance from the object. This study explores the accuracy of a technique that estimates absolute object size from endoscopic images. Quantitative endoscopy involves calibrating a rigid endoscope to produce size estimates from 2 images taken with a known traveled distance between the images. The heights of 12 samples, ranging in size from 0.78 to 11.80 mm, were estimated with this calibrated endoscope. Backup distances of 5 mm and 10 mm were used for comparison. The mean percent error for all estimated measurements when compared with the actual object sizes was 1.12%. The mean errors for 5-mm and 10-mm backup distances were 0.76% and 1.65%, respectively. The mean errors for objects <2 mm and ≥2 mm were 0.94% and 1.18%, respectively. Quantitative endoscopy estimates endoscopic image size to within 5% of the actual object size. This method remains promising for quantitatively evaluating object size from endoscopic images. It does not require knowledge of the absolute distance of the endoscope from the object, only the distance traveled by the endoscope between images.
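The underlying geometry can be sketched under a thin-lens approximation (an assumption of this sketch, not necessarily the authors' calibration model): if the image height scales as h_i = c·H/D_i, the two measurements and the known backup distance determine H without knowing D. The calibration constant c and the numbers below are hypothetical:

```python
def object_height(h1, h2, backup_mm, c):
    """Estimate absolute object height H from two endoscopic images.

    h1, h2    : image heights (pixels) before and after backing up
    backup_mm : known distance traveled between the two images (mm)
    c         : per-endoscope calibration constant from targets of known size

    With h_i = c*H/D_i, the backup distance is
    D2 - D1 = c*H*(1/h2 - 1/h1), hence H = backup / (c*(1/h2 - 1/h1)).
    """
    return backup_mm / (c * (1.0 / h2 - 1.0 / h1))

# Example: a target imaged at 240 px, then 160 px after a 10 mm backup,
# with a hypothetical calibration constant c = 480.
print(object_height(240.0, 160.0, 10.0, 480.0))  # -> 10.0 mm
```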
Quantification of EEG reactivity in comatose patients
Hermans, Mathilde C.; Westover, M. Brandon; van Putten, Michel J.A.M.; Hirsch, Lawrence J.; Gaspard, Nicolas
2016-01-01
Objective: EEG reactivity is an important predictor of outcome in comatose patients. However, visual analysis of reactivity is prone to subjectivity and may benefit from quantitative approaches. Methods: In EEG segments recorded during reactivity testing in 59 comatose patients, 13 quantitative EEG parameters were used to compare the spectral characteristics of 1-minute segments before and after the onset of stimulation (spectral temporal symmetry). Reactivity was quantified with probability values estimated using combinations of these parameters. The accuracy of probability values as a reactivity classifier was evaluated against the consensus assessment of three expert clinical electroencephalographers using visual analysis. Results: The binary classifier assessing spectral temporal symmetry in four frequency bands (delta, theta, alpha and beta) showed the best accuracy (median AUC: 0.95) and was accompanied by substantial agreement with the individual opinion of experts (Gwet's AC1: 65–70%), at least as good as inter-expert agreement (AC1: 55%). Probability values also reflected the degree of reactivity, as measured by the inter-expert agreement regarding reactivity for each individual case. Conclusion: Automated quantitative EEG approaches based on a probabilistic description of spectral temporal symmetry reliably quantify EEG reactivity. Significance: Quantitative EEG may be useful for evaluating reactivity in comatose patients, offering increased objectivity. PMID:26183757
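The spectral-temporal-symmetry idea (comparing band powers before and after stimulation) can be illustrated with a crude sketch; this is not the paper's calibrated probabilistic classifier, and the band edges, sampling rate, and data are assumptions:

```python
import numpy as np
from scipy.signal import welch

def band_powers(x, fs, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    """Power in delta/theta/alpha/beta bands of a 1-minute EEG segment."""
    f, pxx = welch(x, fs=fs, nperseg=4 * fs)
    return np.array([pxx[(f >= lo) & (f < hi)].sum() for lo, hi in bands])

def reactivity_index(pre, post, fs=250):
    """Log-ratio of band powers after vs. before stimulation (0 ~ no change)."""
    return np.log(band_powers(post, fs) / band_powers(pre, fs))

rng = np.random.default_rng(0)
pre = rng.standard_normal(60 * 250)          # simulated pre-stimulus minute
post = 1.5 * rng.standard_normal(60 * 250)   # simulated broadband power change
print(reactivity_index(pre, post))
```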
Calus, M P L; de Haas, Y; Veerkamp, R F
2013-10-01
Genomic selection holds the promise to be particularly beneficial for traits that are difficult or expensive to measure, such that access to phenotypes on large daughter groups of bulls is limited. Instead, cow reference populations can be generated, potentially supplemented with existing information from the same or (highly) correlated traits available on bull reference populations. The objective of this study, therefore, was to develop a model to perform genomic predictions and genome-wide association studies based on a combined cow and bull reference data set, with the accuracy of the phenotypes differing between the cow and bull genomic selection reference populations. The developed bivariate Bayesian stochastic search variable selection model allowed for an unbalanced design by imputing residuals in the residual updating scheme for all missing records. The performance of this model is demonstrated on a real data example, where the analyzed trait, milk fat or protein yield, was either measured only on a cow or a bull reference population, or recorded on both. The developed bivariate model was able to analyze 2 traits even though animals had measurements on only 1 of the 2 traits. The Bayesian stochastic search variable selection model yielded consistently higher accuracy for fat yield compared with a model without variable selection, both for the univariate and bivariate analyses, whereas the accuracy of both models was very similar for protein yield. The bivariate model identified several additional quantitative trait loci peaks compared with the single-trait models on either trait. In addition, the bivariate models showed a marginal increase in accuracy of genomic predictions for the cow traits (0.01-0.05), although a greater increase in accuracy is expected as the size of the bull population increases. Our results emphasize that the chosen prior values in Bayesian genomic prediction models are especially important in small data sets. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Adabi, Saba; Conforto, Silvia; Hosseinzadeh, Matin; Noe, Shahryar; Daveluy, Steven; Mehregan, Darius; Nasiriavanaki, Mohammadreza
2017-02-01
Optical Coherence Tomography (OCT) offers real-time high-resolution three-dimensional images of tissue microstructures. In this study, we used OCT skin images acquired from ten volunteers, none of whom had any skin conditions at the imaged anatomic locations. OCT segmented images are analyzed based on their optical properties (attenuation coefficient) and textural image features, e.g., contrast, correlation, homogeneity, energy and entropy. Utilizing this information and referring to clinical insight, we aim to build a comprehensive computational model of healthy skin. The derived parameters represent the OCT microstructural morphology and might provide biological information for generating an atlas of normal skin from different anatomic sites of human skin, and may allow for the identification of cellular microstructural changes in cancer patients. We then compared the parameters of healthy samples with those of abnormal skin and classified them using a linear Support Vector Machine (SVM) with 82% accuracy.
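The texture features named above are typically computed from a gray-level co-occurrence matrix (GLCM); a minimal sketch of that pipeline feeding a linear SVM follows. The patch data and labels are synthetic placeholders, not the study's images:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def texture_features(img_u8):
    """GLCM features (contrast, correlation, homogeneity, energy) plus
    entropy for one 8-bit image patch."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    feats = [graycoprops(glcm, p)[0, 0]
             for p in ("contrast", "correlation", "homogeneity", "energy")]
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # not in graycoprops
    return np.array(feats + [entropy])

# Hypothetical patches and 0/1 labels (healthy vs. abnormal).
rng = np.random.default_rng(1)
patches = rng.integers(0, 256, size=(20, 64, 64), dtype=np.uint8)
X = np.array([texture_features(p) for p in patches])
y = rng.integers(0, 2, size=20)
clf = SVC(kernel="linear").fit(X, y)   # linear SVM, as named in the abstract
print(clf.score(X, y))
```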
Determination and discrimination of biodiesel fuels by gas chromatographic and chemometric methods
NASA Astrophysics Data System (ADS)
Milina, R.; Mustafa, Z.; Bojilov, D.; Dagnon, S.; Moskovkina, M.
2016-03-01
A pattern recognition method (PRM) was applied to gas chromatographic (GC) data on the fatty acid methyl ester (FAME) composition of commercial and laboratory-synthesized biodiesel fuels from vegetable oils, including sunflower, rapeseed, corn and palm oils. Two GC quantitative methods for calculating individual FAMEs were compared: area % and internal standard. Both methods were applied to the analysis of two certified reference materials. The statistical processing of the obtained results demonstrates the accuracy and precision of the two methods and allows them to be compared. For further chemometric investigations of biodiesel fuels by their FAME profiles, either of these methods can be used. PRM results on the FAME profiles of samples from different vegetable oils show successful recognition of biodiesels according to the feedstock. The information obtained can be used for the selection of feedstock to produce biodiesels with certain properties, for assessing their interchangeability, and for fuel spillage and remedial actions in the environment.
Automated solid-phase extraction workstations combined with quantitative bioanalytical LC/MS.
Huang, N H; Kagel, J R; Rossi, D T
1999-03-01
An automated solid-phase extraction workstation was used to develop, characterize and validate an LC/MS/MS method for quantifying a novel lipid-regulating drug in dog plasma. Method development was facilitated by workstation functions that allowed wash solvents of varying organic composition to be mixed and tested automatically. Precision estimates for this approach were within 9.8% relative standard deviation (RSD) across the calibration range. Accuracy for replicate determinations of quality controls was between -7.2 and +6.2% relative error (RE) over 5-1,000 ng/mL. Recoveries were evaluated for a wide variety of wash solvents, elution solvents and sorbents. Optimized recoveries were generally >95%. A sample throughput benchmark for the method was approximately 8 min per sample. Because of parallel sample processing, 100 samples were extracted in less than 120 min. The approach has proven useful with LC/MS/MS using a multiple reaction monitoring (MRM) approach.
[Determination of glutamic acid in biological material by capillary electrophoresis].
Narezhnaya, E; Krukier, I; Avrutskaya, V; Degtyareva, A; Igumnova, E A
2015-01-01
The conditions for the identification and determination of glutamic acid by capillary zone electrophoresis without preliminary derivatization have been optimized. The effects of buffer electrolyte concentration and pH on the determination of glutamic acid have been investigated. It is shown that a 5 mM borate buffer concentration and a pH of 9.15 are optimal. Quantitative determination of glutamic acid has been carried out using a linear dependence between the concentration of the analyte and the area of the peak. The accuracy and reproducibility of the determination are confirmed by the "introduced-found" (spike recovery) method. Glutamic acid has been determined in placenta homogenate. The duration of the analysis does not exceed 30 minutes. The results showed a decrease in the level of glutamic acid in pregnancies complicated by placental insufficiency compared with physiological pregnancies, and this fact allows the level of glutamic acid to be considered a possible marker of complicated pregnancy.
Quantitative mass imaging of single biological macromolecules.
Young, Gavin; Hundt, Nikolas; Cole, Daniel; Fineberg, Adam; Andrecka, Joanna; Tyler, Andrew; Olerinyova, Anna; Ansari, Ayla; Marklund, Erik G; Collier, Miranda P; Chandler, Shane A; Tkachenko, Olga; Allen, Joel; Crispin, Max; Billington, Neil; Takagi, Yasuharu; Sellers, James R; Eichmann, Cédric; Selenko, Philipp; Frey, Lukas; Riek, Roland; Galpin, Martin R; Struwe, Weston B; Benesch, Justin L P; Kukura, Philipp
2018-04-27
The cellular processes underpinning life are orchestrated by proteins and their interactions. The associated structural and dynamic heterogeneity, despite being key to function, poses a fundamental challenge to existing analytical and structural methodologies. We used interferometric scattering microscopy to quantify the mass of single biomolecules in solution with 2% sequence mass accuracy, up to 19-kilodalton resolution, and 1-kilodalton precision. We resolved oligomeric distributions at high dynamic range, detected small-molecule binding, and mass-imaged proteins with associated lipids and sugars. These capabilities enabled us to characterize the molecular dynamics of processes as diverse as glycoprotein cross-linking, amyloidogenic protein aggregation, and actin polymerization. Interferometric scattering mass spectrometry allows spatiotemporally resolved measurement of a broad range of biomolecular interactions, one molecule at a time. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Lapek, John D; Greninger, Patricia; Morris, Robert; Amzallag, Arnaud; Pruteanu-Malinici, Iulian; Benes, Cyril H; Haas, Wilhelm
2017-10-01
The formation of protein complexes and the co-regulation of the cellular concentrations of proteins are essential mechanisms for cellular signaling and for maintaining homeostasis. Here we use isobaric-labeling multiplexed proteomics to analyze protein co-regulation and show that this allows the identification of protein-protein associations with high accuracy. We apply this 'interactome mapping by high-throughput quantitative proteome analysis' (IMAHP) method to a panel of 41 breast cancer cell lines and show that deviations of the observed protein co-regulations in specific cell lines from the consensus network affects cellular fitness. Furthermore, these aberrant interactions serve as biomarkers that predict the drug sensitivity of cell lines in screens across 195 drugs. We expect that IMAHP can be broadly used to gain insight into how changing landscapes of protein-protein associations affect the phenotype of biological systems.
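As a toy illustration of the co-regulation idea (not the authors' IMAHP scoring, which is more involved), proteins whose abundance profiles correlate strongly across the cell-line panel can be flagged as putative associations; all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
n_proteins, n_lines = 50, 41           # 41 cell lines, as in the panel above
abundance = rng.standard_normal((n_proteins, n_lines))
abundance[1] = abundance[0] + 0.1 * rng.standard_normal(n_lines)  # a co-regulated pair

corr = np.corrcoef(abundance)          # protein-by-protein Pearson matrix
mask = np.triu(np.abs(corr) > 0.8, k=1)  # strong co-regulation, upper triangle
pairs = np.argwhere(mask)
print(pairs)                           # expected to include the pair (0, 1)
```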
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutland, Christopher J.
2009-04-26
The Terascale High-Fidelity Simulations of Turbulent Combustion (TSTC) project is a multi-university collaborative effort to develop a high-fidelity turbulent reacting flow simulation capability utilizing terascale, massively parallel computer technology. The main paradigm of the approach is direct numerical simulation (DNS) featuring the highest temporal and spatial accuracy, allowing quantitative observations of the fine-scale physics found in turbulent reacting flows as well as providing a useful tool for the development of sub-models needed in device-level simulations. Under this component of the TSTC program, the simulation code named S3D, developed and shared with coworkers at Sandia National Laboratories, has been enhanced with new numerical algorithms and physical models to provide predictive capabilities for turbulent liquid fuel spray dynamics. Major accomplishments include improved fundamental understanding of mixing and auto-ignition in multi-phase turbulent reactant mixtures and turbulent fuel injection spray jets.
NASA Astrophysics Data System (ADS)
Poluyan, L. V.; Syutkina, E. V.; Guryev, E. S.
2017-11-01
The comparative analysis of key features of the software systems TOXI+Risk and ALOHA is presented. The authors compared the domestic (TOXI+Risk) and foreign (ALOHA) software systems, both of which allow quantitative assessment of impact areas (pressure, thermal, toxic) for hypothetical emergencies at potentially hazardous facilities of the oil, gas, chemical, petrochemical and oil-processing industries. The two software systems use different mathematical models for assessing the release rate of a chemically hazardous substance from a storage tank and its evaporation. A comparison of the impact areas computed by both software systems on verification examples shows good convergence of the two products. The analysis results showed that the ALOHA software can be actively used for forecasting and immediate assessment of emergency situations and for assessing damage from emergencies on the territories of municipalities.
Data mining for materials design: A computational study of single molecule magnet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dam, Hieu Chi; Faculty of Physics, Vietnam National University, 334 Nguyen Trai, Hanoi; Pham, Tien Lam
2014-01-28
We develop a method that combines data mining and first principles calculation to guide the design of distorted cubane Mn⁴⁺Mn³⁺₃ single molecule magnets. The essential idea of the method is a process consisting of sparse regressions and cross-validation for analyzing calculated data on the materials. The method allows us to demonstrate that the exchange coupling between Mn⁴⁺ and Mn³⁺ ions can be predicted from the electronegativities of the constituent ligands and the structural features of the molecule by a linear regression model with high accuracy. The relations between the structural features and magnetic properties of the materials are quantitatively and consistently evaluated and presented by a graph. We also discuss the properties of the materials and guide the material design based on the obtained results.
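The generic sparse-regression-plus-cross-validation workflow the abstract describes can be sketched with scikit-learn's LassoCV; the descriptors and coupling values below are synthetic placeholders, not the paper's data:

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Predict an exchange coupling J from candidate descriptors such as ligand
# electronegativities and structural features (8 synthetic columns here).
rng = np.random.default_rng(3)
X = rng.standard_normal((40, 8))
true_w = np.array([1.5, 0.0, 0.0, -0.8, 0.0, 0.0, 0.0, 0.0])  # sparse ground truth
J = X @ true_w + 0.05 * rng.standard_normal(40)

model = LassoCV(cv=5).fit(X, J)   # cross-validation selects the sparsity level
print(model.coef_)                # irrelevant descriptors shrink to zero
```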
Triangulation-based edge measurement using polyview optics
NASA Astrophysics Data System (ADS)
Li, Yinan; Kästner, Markus; Reithmeier, Eduard
2018-04-01
Laser triangulation sensors are non-contact measurement devices widely used in industry and research for profile measurements and quantitative inspections. Some technical applications, e.g. edge measurements, usually require a configuration with a single sensor and a translation stage, or a configuration with multiple sensors, so that a measurement range beyond the scope of a single sensor can be covered. However, the cost of both configurations is high due to the additional rotational axis or additional sensor. This paper presents a special measurement system for the measurement of large curved surfaces based on a single-sensor configuration. Utilizing self-designed polyview optics and a calibration process, the proposed measurement system achieves an over-180° FOV (field of view) with high measurement accuracy as well as the advantage of low cost. The detailed capability of this measurement system is discussed in this paper on the basis of experimental data.
Girard, Romuald; Zeineddine, Hussein A; Orsbon, Courtney; Tan, Huan; Moore, Thomas; Hobson, Nick; Shenkar, Robert; Lightle, Rhonda; Shi, Changbin; Fam, Maged D; Cao, Ying; Shen, Le; Neander, April I; Rorrer, Autumn; Gallione, Carol; Tang, Alan T; Kahn, Mark L; Marchuk, Douglas A; Luo, Zhe-Xi; Awad, Issam A
2016-09-15
Cerebral cavernous malformations (CCMs) are hemorrhagic brain lesions for which murine models have allowed major mechanistic discoveries, enabling genetic manipulations and preclinical assessment of therapies. Histology for lesion counting and morphometry is essential yet tedious and time consuming. We herein describe the application and validation of X-ray micro-computed tomography (micro-CT), a non-destructive technique allowing three-dimensional CCM lesion counts and volumetric measurements, in transgenic murine brains. We describe a new contrast soaking technique not previously applied to murine models of CCM disease. The volumetric segmentation and image processing paradigm allowed for histologic correlations and quantitative validations not previously reported with the micro-CT technique in brain vascular disease. Twenty-two hyper-dense areas on micro-CT images, identified as CCM lesions, were matched by histology. The inter-rater reliability analysis showed strong consistency in CCM lesion identification and staging (K=0.89, p<0.0001) between the two techniques. Micro-CT revealed a 29% greater CCM lesion detection efficiency and an 80% improvement in time efficiency. Serial integrated lesional area by histology showed a strong positive correlation with the micro-CT estimated volume (r²=0.84, p<0.0001). Micro-CT allows high throughput assessment of lesion count and volume in pre-clinical murine models of CCM. This approach complements histology with improved accuracy and efficiency, and can be applied for lesion burden assessment in other brain diseases. Copyright © 2016 Elsevier B.V. All rights reserved.
Bedoya, Cesar; Cardona, Andrés; Galeano, July; Cortés-Mancera, Fabián; Sandoz, Patrick; Zarzycki, Artur
2017-12-01
The wound healing assay is widely used for the quantitative analysis of highly regulated cellular events. In this assay, a wound is deliberately produced on a confluent cell monolayer, and the rate of wound reduction (WR) is then characterized by processing images of the same regions of interest (ROIs) recorded at different time intervals. In this method, sharp-image ROI recovery is indispensable to compensate for displacements of the cell cultures due either to the exploration of multiple sites of the same culture or to transfers from the microscope stage to a cell incubator. ROI recovery is usually done manually and, although a low-magnification microscope objective (10x) is generally used, repositioning imperfections constitute a major source of errors detrimental to WR measurement accuracy. We address this ROI recovery issue by using pseudoperiodic patterns fixed onto the cell culture dishes, allowing easy localization of ROIs and accurate quantification of positioning errors. The method is applied to a tumor-derived cell line, and the WR rates are measured by means of two different image processing software packages. Sharp ROI recovery based on the proposed method is found to significantly improve the accuracy of the WR measurement and of the positioning under the microscope.
Sánek, Lubomír; Pecha, Jiří; Kolomazník, Karel
2013-03-01
The proposed analytical method allows for the simultaneous GC determination of the main reaction components (i.e. glycerol, methyl esters, mono-, di-, and triacylglycerols) in the reaction mixture during biodiesel production, using a programmed temperature vaporization injector and a flame ionization detector. The suggested method is convenient for the rapid and simple evaluation of kinetic data gained during the transesterification reaction; it also partially serves as an indicator of the quality of biodiesel and, mainly, as an indicator of the efficiency of the whole production process (i.e. the conversion of triacylglycerols to biodiesel and its time progress). The optimization of chromatographic conditions (e.g. the oven temperature program, injector settings, amount of derivatization reagent, and derivatization reaction time) was performed. The method has been validated with crude samples of biodiesel made from waste cooking oils in terms of linearity, precision, accuracy, sensitivity, and limits of detection and quantification. The results confirmed a satisfactory degree of accuracy and repeatability (the mean RSDs were usually below 2%) necessary for the reliable quantitative determination of all components over a considerable concentration range (e.g. 10-1100 μg/mL in the case of methyl esters). Compound recoveries ranging from 96 to 104% were obtained. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Tapping into tongue motion to substitute or augment upper limbs
NASA Astrophysics Data System (ADS)
Ghovanloo, Maysam; Sahadat, M. Nazmus; Zhang, Zhenxuan; Kong, Fanpeng; Sebkhi, Nordine
2017-05-01
Assistive technologies (ATs) play an important role in the lives of people with disabilities. Most importantly, they allow individuals with severe physical disabilities to become more independent. The inherent abilities of the human tongue, which originate from its strong representation in the motor cortex, its direct connection to the brain through well-protected cranial nerves, and easy access without surgery, have resulted in the development of a series of tongue-operated ATs that tap into the dexterous, intuitive, rapid, precise, and tireless motion of the tongue. These ATs not only help people with tetraplegia resulting from spinal cord injury or degenerative neurological diseases to access computers/smartphones, drive wheelchairs, and interact with their environments, but also have the potential to enhance rehabilitation paradigms for stroke survivors. In this paper, various types of tongue-operated ATs are discussed based on their working principles and task-based performance. Comparisons are drawn based on widely accepted and standardized quantitative measures, such as throughput, information transfer rate, typing speed/accuracy, tracking error, navigation period, and navigation accuracy, as well as qualitative measures, such as user feedback. Finally, the prospects of using variations of these versatile devices to enhance human performance in environments that limit hand and finger movements, such as space exploration or underwater operations, are discussed.
Plant pathogen nanodiagnostic techniques: forthcoming changes?
Khiyami, Mohammad A.; Almoammar, Hassan; Awad, Yasser M.; Alghuthaymi, Mousa A.; Abd-Elsalam, Kamel A.
2014-01-01
Plant diseases are among the major factors limiting crop productivity. A first step towards managing a plant disease under greenhouse and field conditions is to correctly identify the pathogen. Current technologies, such as quantitative polymerase chain reaction (Q-PCR), require a relatively large amount of target tissue and rely on multiple assays to accurately identify distinct plant pathogens. The common disadvantage of the traditional diagnostic methods is that they are time consuming and lack high sensitivity. Consequently, developing low-cost methods to improve the accuracy and rapidity of plant pathogen diagnosis is needed. Nanotechnology, nanoparticles and quantum dots (QDs) have emerged as essential tools for the fast detection of a particular biological marker with extreme accuracy. Biosensors, QDs, nanostructured platforms, nanoimaging and nanopore DNA sequencing tools have the potential to increase the sensitivity, specificity and speed of pathogen detection, facilitate high-throughput analysis, and be used for high-quality monitoring and crop protection. Furthermore, nanodiagnostic kits can easily and quickly detect potentially serious plant pathogens, allowing experts to help farmers in the prevention of epidemic diseases. The current review deals with the application of nanotechnology for quicker, more cost-effective and more precise diagnostic procedures for plant diseases. Such an accurate technology may help to design a proper integrated disease management system which may modify crop environments to adversely affect crop pathogens. PMID:26740775
Computerized method to compensate for breathing body motion in dynamic chest radiographs
NASA Astrophysics Data System (ADS)
Matsuda, H.; Tanaka, R.; Sanada, S.
2017-03-01
Dynamic chest radiography combined with computer analysis allows quantitative analysis of pulmonary function and rib motion. The accuracy of kinematic analysis is directly linked to diagnostic accuracy, and thus body motion compensation is a major concern. Our purpose in this study was to develop a computerized method to reduce breathing body motion in dynamic chest radiographs. Dynamic chest radiographs of 56 patients were obtained using a dynamic flat-panel detector. The images were divided into 1-cm squares, and the squares on the body contour were used to detect body motion. Velocity vectors were measured on the body contour using a cross-correlation method, and the body motion was then determined on the basis of the summation of the motion vectors. The body motion was then compensated for by shifting the images according to the measured vector. Using our method, body motion was detected to within a few pixels in clinical cases (mean 82.5% in the right and left directions). In addition, our method detected slight body motion that could not be identified by human observation. We confirmed that our method worked effectively in the kinematic analysis of rib motion. The present method should be useful for reducing breathing body motion in dynamic chest radiography.
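The cross-correlation shift-and-compensate idea can be sketched with off-the-shelf phase correlation. This is a simplification of the method above: the study measures velocity vectors on squares along the body contour and sums them, while the sketch estimates a single global shift per frame (synthetic data):

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def compensate_motion(reference, frame):
    """Estimate the inter-frame shift by cross-correlation and undo it."""
    shift, _, _ = phase_cross_correlation(reference, frame)
    return np.roll(frame, (int(round(shift[0])), int(round(shift[1]))),
                   axis=(0, 1))

ref = np.zeros((64, 64)); ref[20:40, 20:40] = 1.0
moved = np.roll(ref, (3, -2), axis=(0, 1))                 # simulated body motion
print(np.abs(compensate_motion(ref, moved) - ref).sum())   # ~0 after correction
```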
Adjedj, Julien; Xaplanteris, Panagiotis; Toth, Gabor; Ferrara, Angela; Pellicano, Mariano; Ciccarelli, Giovanni; Floré, Vincent; Barbato, Emanuele; De Bruyne, Bernard
2017-07-01
The correlation between angiographic assessment of coronary stenoses and fractional flow reserve (FFR) is weak. Whether and how risk factors impact the diagnostic accuracy of angiography is unknown. We sought to evaluate the diagnostic accuracy of angiography by visual estimate and by quantitative coronary angiography when compared with FFR and to evaluate the influence of risk factors (RFs) on this accuracy. In 1382 coronary stenoses (1104 patients), percent diameter stenosis by visual estimation (DSVE) and by quantitative coronary angiography (DSQCA) was compared with FFR. Patients were divided into 4 subgroups according to the presence of RFs, and the relationship between DSVE, DSQCA, and FFR was analyzed. Overall, DSVE was significantly higher than DSQCA (P<0.0001); nonetheless, when examined by strata of DS, DSVE was significantly smaller than DSQCA in mild stenoses, although the reverse held true for severe stenoses. Compared with FFR, a large scatter was observed for both DSVE and DSQCA. When using a dichotomous FFR value of 0.80, the C statistic was significantly higher for DSVE than for DSQCA (0.712 versus 0.640, respectively; P<0.001). C statistics for DSVE decreased progressively as RFs accumulated (0.776 for ≤1 RF, 0.750 for 2 RFs, 0.713 for 3 RFs and 0.627 for ≥4 RFs; P=0.0053). In addition, in diabetics, the relationship between FFR and angiographic indices was particularly weak (C statistics: 0.524 for DSVE and 0.511 for DSQCA). Overall, DSVE has better diagnostic accuracy than DSQCA for predicting the functional significance of coronary stenosis. The predictive accuracy of angiography is moderate in patients with ≤1 RF, but weakens as RFs accumulate, especially in diabetics. © 2017 American Heart Association, Inc.
Digital Architecture for a Trace Gas Sensor Platform
NASA Technical Reports Server (NTRS)
Gonzales, Paula; Casias, Miguel; Vakhtin, Andrei; Pilgrim, Jeffrey
2012-01-01
A digital architecture has been implemented for a trace gas sensor platform, as a companion to standard analog control electronics, which accommodates optical absorption whose fractional absorbance equivalent would result in excess error if assumed to be linear. In cases where the absorption (1-transmission) is not equivalent to the fractional absorbance within a few percent error, it is necessary to accommodate the actual measured absorption while reporting the measured concentration of a target analyte with reasonable accuracy. This requires incorporation of programmable intelligence into the sensor platform so that flexible interpretation of the acquired data may be accomplished. Several different digital component architectures were tested and implemented. Commercial off-the-shelf digital electronics including data acquisition cards (DAQs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), and microcontrollers have been used to achieve the desired outcome. The most completely integrated architecture achieved during the project used the CPLD along with a microcontroller. The CPLD provides the initial digital demodulation of the raw sensor signal, and then communicates over a parallel communications interface with a microcontroller. The microcontroller analyzes the digital signal from the CPLD, and applies a non-linear correction obtained through extensive data analysis at the various relevant EVA operating pressures. The microcontroller then presents the quantitatively accurate carbon dioxide partial pressure regardless of optical density. This technique could extend the linear dynamic range of typical absorption spectrometers, particularly those whose low end noise equivalent absorbance is below one-part-in-100,000. In the EVA application, it allows introduction of a path-length-enhancing architecture whose optical interference effects are well understood and quantified without sacrificing the dynamic range that allows quantitative detection at the higher carbon dioxide partial pressures. The digital components are compact and allow reasonably complete integration with separately developed analog control electronics without sacrificing size, mass, or power draw.
Petraco, Ricardo; Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P
2018-01-01
Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy, as well as to propose a sample-independent methodology to calculate and display the accuracy of diagnostic tests. We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test's performance against a reference gold standard.
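The central point, that the same test yields different "accuracy" under different sample distributions, is easy to reproduce in a few lines; all numbers below are illustrative, and the noise model is an assumption rather than the paper's simulation design:

```python
import numpy as np

def accuracy(true_chol, noise_sd=10.0, cutoff=200.0, seed=0):
    """Categorical agreement of a noisy 'rapid' assay with a gold standard
    at a fixed decision cutoff (values in mg/dL)."""
    rng = np.random.default_rng(seed)
    rapid = true_chol + rng.normal(0.0, noise_sd, true_chol.size)
    return np.mean((rapid > cutoff) == (true_chol > cutoff))

rng = np.random.default_rng(1)
near = rng.normal(200.0, 15.0, 10_000)   # sample concentrated near the cutoff
far = rng.normal(200.0, 60.0, 10_000)    # sample spread far from the cutoff
print(accuracy(near), accuracy(far))     # same assay, different 'accuracy'
```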
Zhang, Haibo; Yang, Litao; Guo, Jinchao; Li, Xiang; Jiang, Lingxi; Zhang, Dabing
2008-07-23
To enforce labeling regulations for genetically modified organisms (GMOs), the application of reference molecules as calibrators is becoming essential for the practical quantification of GMOs. However, the reported reference molecules with multiple targets in tandem have proved unsuitable for duplex PCR analysis. In this study, we developed a unique plasmid molecule based on a pMD-18T vector with three exogenous target DNA fragments of Roundup Ready soybean GTS 40-3-2 (RRS), that is, the CaMV35S, NOS, and RRS event fragments, plus one fragment of the soybean endogenous Lectin gene. This Lectin gene fragment was separated from the three exogenous target DNA fragments of RRS by inserting a 2.6 kb DNA fragment with no relatedness to the RRS detection targets into the resultant plasmid. We then showed that this design allows the quantification of RRS using three duplex real-time PCR assays targeting the CaMV35S, NOS, and RRS events, employing this reference molecule as the calibrator. In these duplex PCR assays, the limits of detection (LOD) and quantification (LOQ) were 10 and 50 copies, respectively. For the quantitative analysis of practical RRS samples, the results for accuracy and precision were similar to those of simplex PCR assays; for instance, at the 1% level the mean bias of the simplex and duplex PCR was 4.0% and 4.6%, respectively, and statistical analysis (t-test) showed that the quantitative data from duplex and simplex PCR had no significant discrepancy for each soybean sample. Duplex PCR analysis clearly has the advantages of saving PCR reaction costs and reducing the experimental errors of simplex PCR testing. The strategy reported in the present study will be helpful for the development of new reference molecules suitable for duplex PCR quantitative assays of GMOs.
Spibey, C A; Jackson, P; Herick, K
2001-03-01
In recent years the use of fluorescent dyes in biological applications has dramatically increased. The continual improvement in the capabilities of these fluorescent dyes demands increasingly sensitive detection systems that provide accurate quantitation over a wide linear dynamic range. In the field of proteomics, the detection, quantitation and identification of very low abundance proteins are of extreme importance in understanding cellular processes. Therefore, the instrumentation used to acquire an image of such samples, for spot picking and identification by mass spectrometry, must be sensitive enough not only to maximize the sensitivity and dynamic range of the staining dyes but, as importantly, to adapt to the ever-changing portfolio of fluorescent dyes as they become available. Just as the available fluorescent probes are improving and evolving, so are the users' application requirements. Therefore, the instrumentation chosen must be flexible enough to address and adapt to those changing needs. As a result, a highly competitive market for the supply and production of such dyes, and for the instrumentation for their detection and quantitation, has emerged. The instrumentation currently available is based on either laser/photomultiplier tube (PMT) scanning or lamp/charge-coupled device (CCD) mechanisms. This review briefly discusses the advantages and disadvantages of both system types for fluorescence imaging, gives a technical overview of CCD technology and describes in detail a unique xenon arc lamp/CCD based instrument from PerkinElmer Life Sciences. The Wallac-1442 ARTHUR is unique in its ability to scan large areas at high resolution and to give accurate, selectable excitation over the whole of the UV/visible range. It operates by filtering both the excitation and emission wavelengths, providing optimal and accurate measurement and quantitation of virtually any available dye, and allows excellent spectral resolution between different fluorophores. This flexibility and excitation accuracy are key to multicolour applications and to future adaptation of the instrument to address new application requirements and newly emerging dyes.
Silva, Eduarda M P; Varandas, Pedro A M M; Melo, Tânia; Barros, Cristina; Alencastre, Inês S; Barreiros, Luísa; Domingues, Pedro; Lamghari, Meriem; Domingues, M Rosário M; Segundo, Marcela A
2018-03-20
Collision-induced dissociation on a triple quadrupole mass spectrometer (CID-QqQ) and high-energy collision dissociation (HCD) on an Orbitrap were compared for four neuropeptide Y Y1 (NPY Y1) receptor antagonists and showed similar qualitative fragmentation and structural information. Orbitrap high-resolution, high-mass-accuracy HCD fragmentation spectra allowed unambiguous identification of product ions in the range 0.04-4.25 ppm. Orbitrap mass spectrometry showed abundant analyte-specific product ions also observed on the CID-QqQ. These results show the suitability of these product ions for use in quantitative analysis in MRM mode. In addition, it was found that all compounds could be determined at levels >1 μg L⁻¹ using the QqQ instrument and that the detection limits for this analyzer ranged from 0.02 to 0.6 μg L⁻¹. Overall, the results obtained from experiments acquired on the QqQ show good agreement with those acquired on the Orbitrap instrument, allowing the use of this relatively inexpensive technique (QqQ) for the accurate quantification of these compounds in clinical and academic applications. Copyright © 2018 Elsevier B.V. All rights reserved.
Ndira, S P; Rosenberger, K D; Wetter, T
2008-01-01
To assess whether electronic health record systems in developing countries can improve the timeliness, availability and accuracy of routine health reports and staff satisfaction after introducing the electronic system, compared to the paper-based alternative. The research was conducted with hospital staff of Tororo District Hospital in Uganda. A comparative intervention study with qualitative and quantitative methods was used to compare the paper-based (pre-test) to the electronic system (post-test), focusing on the accuracy, availability and timeliness of monthly routine reports about mothers visiting the hospital, and staff satisfaction with the electronic system, as outcome measures. Timeliness: pre-test, 13 of 19 monthly reports were delivered to the district on time (delivery dates for six months could not be established); post-test, 100%. Availability: pre-test, 79% of reports were present at the district health office; post-test, 100%. Accuracy: pre-test, 73.2% of selected reports could be independently confirmed as correct; post-test, 71.2%. Difficulties were encountered in finding enough mothers through direct follow-up to verify the accuracy of the information recorded about them. Staff interviews showed that the electronic system is appreciated by the majority of the hospital staff. Remaining obstacles include staff workload, power shortages, network breakdowns and parallel data entry (paper-based and electronic). While timeliness and availability improved, an improvement in accuracy could not be established. Better approaches to ascertaining accuracy have to be devised, e.g. evaluation of intended use. For success, organizational, managerial and social challenges must be addressed beyond technical aspects.
Mezzullo, Marco; Fazzini, Alessia; Gambineri, Alessandra; Di Dalmazi, Guido; Mazza, Roberta; Pelusi, Carla; Vicennati, Valentina; Pasquali, Renato; Pagotto, Uberto; Fanelli, Flaminia
2017-08-28
Salivary androgen testing represents a valuable source of biological information. However, the proper measurement of such low levels is challenging for direct immunoassays, which lack adequate accuracy. In the last few years, many conflicting findings reporting low correlation with the serum counterparts have hampered the clinical application of salivary androgen testing. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) makes it possible to overcome previous analytical limits, providing new insights in endocrinology practice. Salivary testosterone (T), androstenedione (A), dehydroepiandrosterone (DHEA) and 17OH-progesterone (17OHP) were extracted from 500 µL of saliva, separated over a 9.5 min LC gradient and detected by positive electrospray ionization-multiple reaction monitoring. The diurnal variation of salivary and serum androgens was described by a four paired-collection protocol (8 am, 12 am, 4 pm and 8 pm) in 19 healthy subjects. The assay allowed the quantitation of T, A, DHEA and 17OHP down to 3.40, 6.81, 271.0 and 23.7 pmol/L, respectively, with accuracy between 83.0 and 106.1% for all analytes. A parallel diurnal rhythm in saliva and serum was observed for all androgens, with values decreasing from the morning to the evening time points. Salivary androgen levels showed a high linear correlation with their serum counterparts in both sexes (T: R>0.85; A: R>0.90; DHEA: R>0.73 and 17OHP: R>0.89; p<0.0001 for all). Our LC-MS/MS method allowed a sensitive evaluation of salivary androgen levels and represents an optimal technique to explore the relevance of a comprehensive androgen profile, as measured in saliva, for the study of androgen secretion modulation and activity in physiologic and pathologic states.
Exponential time differencing for Hodgkin–Huxley-like ODEs
Börgers, Christoph; Nectow, Alexander R.
2013-01-01
Several authors have proposed the use of exponential time differencing (ETD) for Hodgkin–Huxley-like partial and ordinary differential equations (PDEs and ODEs). For Hodgkin–Huxley-like PDEs, ETD is attractive because it can deal effectively with the stiffness issues that diffusion gives rise to. However, large neuronal networks are often simulated assuming “space-clamped” neurons, i.e., using the Hodgkin–Huxley ODEs, in which there are no diffusion terms. Our goal is to clarify whether ETD is a good idea even in that case. We present a numerical comparison of first- and second-order ETD with standard explicit time-stepping schemes (Euler’s method, the midpoint method, and the classical fourth-order Runge–Kutta method). We find that in the standard schemes, the stable computation of the very rapid rising phase of the action potential often forces time steps of a small fraction of a millisecond. This can result in an expensive calculation yielding greater overall accuracy than needed. Although it is tempting at first to try to address this issue with adaptive or fully implicit time-stepping, we argue that neither is effective here. The main advantage of ETD for Hodgkin–Huxley-like systems of ODEs is that it allows underresolution of the rising phase of the action potential without causing instability, using time steps on the order of one millisecond. When high quantitative accuracy is not necessary and perhaps, because of modeling inaccuracies, not even useful, ETD allows much faster simulations than standard explicit time-stepping schemes. The second-order ETD scheme is found to be substantially more accurate than the first-order one even for large values of Δt. PMID:24058276
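For the Hodgkin–Huxley gating equation dm/dt = (m_inf - m)/tau, the first-order ETD step is the exponential-Euler update, which is exact when m_inf and tau are frozen over the step. A minimal sketch (parameter values are illustrative, not from the paper) contrasting it with forward Euler at a step size far above the explicit stability limit:

```python
import numpy as np

def gate_exp_euler(m, m_inf, tau, dt):
    """First-order ETD (exponential Euler) step for dm/dt = (m_inf - m)/tau;
    exact when m_inf and tau are held constant over dt."""
    return m_inf + (m - m_inf) * np.exp(-dt / tau)

def gate_fwd_euler(m, m_inf, tau, dt):
    """Standard forward Euler step; unstable once dt > 2*tau."""
    return m + dt * (m_inf - m) / tau

# Stiff moment of an action potential: tau ~ 0.05 ms, dt = 0.5 ms.
m_exp = m_fwd = 0.0
for _ in range(10):
    m_exp = gate_exp_euler(m_exp, 1.0, 0.05, 0.5)
    m_fwd = gate_fwd_euler(m_fwd, 1.0, 0.05, 0.5)
print(m_exp, m_fwd)   # ETD stays bounded near 1.0; forward Euler blows up
```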
Analysis of tomographic mineralogical data using YaDiV—Overview and practical case study
NASA Astrophysics Data System (ADS)
Friese, Karl-Ingo; Cichy, Sarah B.; Wolter, Franz-Erich; Botcharnikov, Roman E.
2013-07-01
We introduce the 3D-segmentation and -visualization software YaDiV to the mineralogical application of rock texture analysis. YaDiV was originally designed to process medical DICOM datasets, but due to software advancements and additional plugins, this open-source software can now be easily used for the fast quantitative morphological characterization of geological objects from tomographic datasets. In this paper, we give a summary of YaDiV's features and demonstrate the advantages of 3D stereographic visualization and the accuracy of 3D segmentation for the analysis of geological samples. For this purpose, we present a virtual and a real use case (here: experimentally crystallized and vesiculated magmatic rocks, corresponding to the composition of the 1991-1995 Unzen eruption, Japan). In particular, the spatial representation of structures in YaDiV allows an immediate, intuitive understanding of 3D structures that may not become clear from 2D images alone. We compare our results for object number density calculations with the established classical stereological 3D-correction methods for 2D images and show that substantially higher quality and accuracy could be achieved. The methods described in this paper do not depend on the nature of the object. The fact that YaDiV is open source and that users with programming skills can create new plugins themselves may allow this platform to become applicable to a variety of geological scenarios, from the analysis of textures in tiny rock samples to the interpretation of global geophysical data, as long as the data are provided in tomographic form.
Automated pixel-wise brain tissue segmentation of diffusion-weighted images via machine learning.
Ciritsis, Alexander; Boss, Andreas; Rossi, Cristina
2018-04-26
The diffusion-weighted (DW) MR signal sampled over a wide range of b-values potentially allows for tissue differentiation in terms of cellularity, microstructure, perfusion, and T2 relaxivity. This study aimed to implement a machine learning algorithm for automatic brain tissue segmentation from DW-MRI datasets, and to determine the optimal sub-set of features for accurate segmentation. DWI was performed at 3 T in eight healthy volunteers using 15 b-values and 20 diffusion-encoding directions. The pixel-wise signal attenuation, as well as the trace and fractional anisotropy (FA) of the diffusion tensor, were used as features to train a support vector machine classifier for gray matter, white matter, and cerebrospinal fluid classes. The datasets of two volunteers were used for validation. For each subject, tissue classification was also performed on 3D T1-weighted datasets with a probabilistic framework. Confusion matrices were generated for quantitative assessment of image classification accuracy in comparison with the reference method. DWI-based tissue segmentation resulted in an accuracy of 82.1% on the validation dataset and of 82.2% on the training dataset, ruling out relevant model over-fitting. A mean Dice coefficient (DSC) of 0.79 ± 0.08 was found. About 50% of the classification performance was attributable to five features (i.e. the signal measured at b-values of 5/10/500/1200 s/mm² and the FA). This reduced set of features led to almost identical performance for the validation (82.2%) and training (81.4%) datasets (DSC = 0.79 ± 0.08). Machine learning techniques applied to DWI data allow for accurate brain tissue segmentation based on both morphological and functional information. Copyright © 2018 John Wiley & Sons, Ltd.
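The two evaluation metrics named above are standard and easy to compute from predicted and reference label maps; a small self-contained sketch (the label arrays are toy data, and the class coding is an assumption):

```python
import numpy as np

def dice(pred, truth, label):
    """Dice coefficient DSC = 2|A∩B| / (|A| + |B|) for one tissue class."""
    a, b = pred == label, truth == label
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def confusion(pred, truth, n_classes=3):
    """Confusion matrix: rows = reference class, columns = predicted class."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(truth.ravel(), pred.ravel()):
        m[t, p] += 1
    return m

truth = np.array([0, 0, 1, 1, 2, 2, 2, 1])   # e.g. 0=GM, 1=WM, 2=CSF
pred = np.array([0, 1, 1, 1, 2, 2, 1, 1])
print(confusion(pred, truth))
print([dice(pred, truth, c) for c in range(3)])
```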
A Depolarisation Lidar Based Method for the Determination of Liquid-Cloud Microphysical Properties.
NASA Astrophysics Data System (ADS)
Donovan, D. P.; Klein Baltink, H.; Henzing, J. S.; De Roode, S. R.; Siebesma, P.
2014-12-01
The fact that polarisation lidars measure a multiple-scattering induced depolarisation signal in liquid clouds is well-known. The depolarisation signal depends on the lidar characteristics (e.g. wavelength and field-of-view) as well as the cloud properties (e.g. liquid water content (LWC) and cloud droplet number concentration (CDNC)). Previous efforts seeking to use depolarisation information in a quantitative manner to retrieve cloud properties have been undertaken with, arguably, limited practical success. In this work we present a retrieval procedure applicable to clouds with (quasi-)linear LWC profiles and (quasi-)constant CDNC in the cloud-base region. Limiting the applicability of the procedure in this manner allows us to reduce the cloud variables to two parameters (namely the LWC lapse rate and the CDNC). This simplification, in turn, allows us to employ a robust optimal-estimation inversion using pre-computed look-up tables produced with lidar Monte-Carlo multiple-scattering simulations. Here, we describe the theory behind the inversion procedure and apply it to simulated observations based on large-eddy simulation model output. The inversion procedure is then applied to actual depolarisation lidar data covering a range of cases taken from the Cabauw measurement site in the central Netherlands. The lidar results were then used to predict the corresponding cloud-base region radar reflectivities. In non-drizzling conditions, it was found that the lidar inversion results can be used to predict the observed radar reflectivities with an accuracy within the radar calibration uncertainty (2-3 dBZ). This result strongly supports the accuracy of the lidar inversion results. Results of a comparison between ground-based aerosol number concentration and lidar-derived CDNC are also presented. The results are seen to be consistent with previous studies based on aircraft-based in situ measurements.
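Where the abstract describes an optimal-estimation inversion over a two-parameter look-up table, the core mechanics can be illustrated with a brute-force chi-square search. A minimal Python sketch; all file names, grids, and the noise level are illustrative assumptions, and a real implementation would add prior terms and uncertainty estimates.

import numpy as np

# lut: pre-computed depolarisation profiles from Monte-Carlo multiple-
# scattering simulations, indexed by the two retrieved cloud parameters.
lut = np.load("depol_lut.npy")          # shape (n_lapse, n_cdnc, n_range)
lapse_grid = np.load("lapse_grid.npy")  # LWC lapse rate grid
cdnc_grid = np.load("cdnc_grid.npy")    # CDNC grid (cm^-3)
obs = np.load("observed_depol.npy")     # measured profile, length n_range
sigma = 0.01                            # assumed measurement noise

# Brute-force chi-square over the 2D parameter grid: a simple stand-in
# for the optimal-estimation inversion described above
chi2 = ((lut - obs) ** 2 / sigma**2).sum(axis=-1)
i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
print(f"LWC lapse rate = {lapse_grid[i]}, CDNC = {cdnc_grid[j]}")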
Semantic focusing allows fully automated single-layer slide scanning of cervical cytology slides.
Lahrmann, Bernd; Valous, Nektarios A; Eisenmann, Urs; Wentzensen, Nicolas; Grabe, Niels
2013-01-01
Liquid-based cytology (LBC) in conjunction with Whole-Slide Imaging (WSI) enables the objective, sensitive, and quantitative evaluation of biomarkers in cytology. However, the complex three-dimensional distribution of cells on LBC slides requires manual focusing, long scanning times, and multi-layer scanning. Here, we present a solution that overcomes these limitations in two steps: first, we ensure that focus points are set only on cells; second, we check the total slide focus quality. An initial analysis showed that superficial dust can be separated from the cell layer itself (the thin layer of cells on the glass slide). We then analyzed 2,295 individual focus points from 51 LBC slides stained for p16 and Ki67. Using the number of edges in a focus point image, specific color values, and size-inclusion filters, focus points detecting cells could be distinguished from focus points on artifacts (accuracy 98.6%). Sharpness, as the total focus quality of a virtual LBC slide, is computed from 5 sharpness features. We trained a multi-parameter SVM classifier on 1,600 images. On an independent validation set of 3,232 cell images we achieved an accuracy of 94.8% for classifying images as focused. Our results show that single-layer scanning of LBC slides is possible and how it can be achieved. We assembled focus point analysis and sharpness classification into a fully automatic, iterative workflow, free of user intervention, which performs repetitive slide scanning as necessary. On 400 LBC slides we achieved a scanning time of 13.9±10.1 min with 29.1±15.5 focus points. In summary, the integration of semantic focus information into whole-slide imaging allows automatic high-quality imaging of LBC slides and subsequent biomarker analysis.
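The abstract's five sharpness features are not specified in detail, but a common proxy for the focus quality of a single image patch is the variance of its Laplacian. A minimal Python/SciPy sketch; the threshold and input file are illustrative assumptions, not values from the study.

import numpy as np
from scipy import ndimage

def sharpness_score(image: np.ndarray) -> float:
    """Variance of the Laplacian: a standard focus measure that grows
    with edge content. One plausible stand-in for the paper's sharpness
    features, which are not fully specified in the abstract."""
    return float(ndimage.laplace(image.astype(float)).var())

# Classify a focus-point image as in-focus with a threshold chosen on
# training data (the threshold value here is purely illustrative).
THRESHOLD = 50.0
image = np.load("focus_point.npy")  # hypothetical grayscale patch
print("focused" if sharpness_score(image) > THRESHOLD else "blurred")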
NASA Astrophysics Data System (ADS)
Cawood, A.; Bond, C. E.; Howell, J.; Totake, Y.
2016-12-01
Virtual outcrops derived from techniques such as LiDAR and SfM (digital photogrammetry) provide a viable and potentially powerful addition or alternative to traditional field studies, given the large amounts of raw data that can be acquired rapidly and safely. The use of these digital representations of outcrops as a source of geological data has increased greatly in the past decade, and as such, the accuracy and precision of these new acquisition methods applied to geological problems has been addressed by a number of authors. Little work has been done, however, on integrating virtual outcrops into fundamental structural geology workflows and on systematically studying the fidelity of the data derived from them. Here, we use the classic Stackpole Quay syncline outcrop in South Wales to quantitatively evaluate the accuracy of three virtual outcrop models (LiDAR, aerial and terrestrial digital photogrammetry) compared to data collected directly in the field. Using these structural data, we have built 2D and 3D geological models which make predictions of fold geometries. We examine the fidelity of virtual outcrops generated using different acquisition techniques to outcrop geology and how these affect model building and final outcomes. Finally, we utilize newly acquired data to deterministically test model validity. Based upon these results, we find that acquisition of digital imagery by UAS (unmanned aerial system) yields highly accurate virtual outcrops when compared to terrestrial methods, allowing the construction of robust data-driven predictive models. Careful planning, survey design and choice of a suitable acquisition method are, however, of key importance for best results.
USDA-ARS's Scientific Manuscript database
A technique of using multiple calibration sets in partial least squares regression (PLS) was proposed to improve the quantitative determination of ammonia from open-path Fourier transform infrared spectra. The spectra were measured near animal farms, and the path-integrated concentration of ammonia...
Micro-anatomical quantitative optical imaging: toward automated assessment of breast tissues.
Dobbs, Jessica L; Mueller, Jenna L; Krishnamurthy, Savitri; Shin, Dongsuk; Kuerer, Henry; Yang, Wei; Ramanujam, Nirmala; Richards-Kortum, Rebecca
2015-08-20
Pathologists currently diagnose breast lesions through histologic assessment, which requires fixation and tissue preparation. The diagnostic criteria used to classify breast lesions are qualitative and subjective, and inter-observer discordance has been shown to be a significant challenge in the diagnosis of selected breast lesions, particularly for borderline proliferative lesions. Thus, there is an opportunity to develop tools to rapidly visualize and quantitatively interpret breast tissue morphology for a variety of clinical applications. Toward this end, we acquired images of freshly excised breast tissue specimens from a total of 34 patients using confocal fluorescence microscopy and proflavine as a topical stain. We developed computerized algorithms to segment and quantify nuclear and ductal parameters that characterize breast architectural features. A total of 33 parameters were evaluated and used as input to develop a decision tree model to classify benign and malignant breast tissue. Benign features were classified in tissue specimens acquired from 30 patients and malignant features were classified in specimens from 22 patients. The decision tree model that achieved the highest accuracy for distinguishing between benign and malignant breast features used the following parameters: standard deviation of inter-nuclear distance and number of duct lumens. The model achieved 81% sensitivity and 93% specificity, corresponding to an area under the curve of 0.93 and an overall accuracy of 90%. The model classified IDC and DCIS with 92% and 96% accuracy, respectively. The cross-validated model achieved 75% sensitivity and 93% specificity and an overall accuracy of 88%. These results suggest that proflavine staining and confocal fluorescence microscopy combined with image analysis strategies to segment morphological features could potentially be used to quantitatively diagnose freshly obtained breast tissue at the point of care without the need for tissue preparation.
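As a toy illustration of the two-feature decision tree the study converged on, the following Python/scikit-learn sketch trains a shallow tree on the two reported parameters. Arrays, file names, and the example feature values are placeholders, not study data.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Two features per specimen, as in the best-performing model reported
# above: [std of inter-nuclear distance, number of duct lumens].
X = np.load("features.npy")          # shape (n_specimens, 2)
y = np.load("labels.npy")            # 0 = benign, 1 = malignant

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)
print(tree.predict([[4.2, 1]]))      # hypothetical new specimen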
In vivo estimation of target registration errors during augmented reality laparoscopic surgery.
Thompson, Stephen; Schneider, Crispin; Bosi, Michele; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J
2018-06-01
Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertesz, Vilmos; Weiskittel, Taylor M.; Vavek, Marissa
2016-06-22
Currently, absolute quantitation aspects of droplet-based surface sampling for thin tissue analysis using a fully automated autosampler/HPLC-ESI-MS/MS system are not fully evaluated. Knowledge of extraction efficiency and its reproducibility is required to judge the potential of the method for absolute quantitation of analytes from thin tissue sections. Methods: Adjacent thin tissue sections of propranolol-dosed mouse brain (10-μm-thick), kidney (10-μm-thick) and liver (8-, 10-, 16- and 24-μm-thick) were obtained. The absolute concentration of propranolol was determined in tissue punches from serial sections using standard bulk tissue extraction protocols and subsequent HPLC separations and tandem mass spectrometric analysis. These values were used to determine propranolol extraction efficiency from the tissues with the droplet-based surface sampling approach. Results: Extraction efficiency of propranolol from 10-μm-thick brain, kidney and liver thin tissues using droplet-based surface sampling varied between ~45-63%. Extraction efficiency decreased from ~65% to ~36% with liver thickness increasing from 8 μm to 24 μm. Randomly selecting half of the samples as standards, the precision and accuracy of propranolol concentrations obtained for the other half of the samples as quality control metrics were determined. The resulting precision (±15%) and accuracy (±3%) values were within acceptable limits. In conclusion, comparative quantitation of adjacent mouse thin tissue sections of different organs and of various thicknesses by droplet-based surface sampling and by bulk extraction of tissue punches showed that extraction efficiency was incomplete using the former method, and that it depended on the organ and tissue thickness. However, once the extraction efficiency was determined and applied, the droplet-based approach provided the quantitation accuracy and precision required for assay validations. This means that once the extraction efficiency is calibrated for a given tissue type and drug, the droplet-based approach provides a non-labor-intensive and high-throughput means of acquiring spatially resolved quantitative analysis of multiple samples of the same type.
Selection of reference standard during method development using the analytical hierarchy process.
Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun
2015-03-25
Reference standards are critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria for a reference standard are often not directly measurable. The aim of this paper is to recommend a quantitative approach for the selection of reference standards during method development based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed in the quantitative analysis of six phenolic acids from Salvia miltiorrhiza and its preparations by ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance: feasibility of procurement, abundance in samples, chemical stability, accuracy, precision and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, with rosmarinic acid, at about 79.8% of that priority, the second choice. The determination results successfully verified the evaluation ability of this model. The AHP allowed us to weigh the benefits and risks of the alternatives comprehensively. It is an effective and practical tool for the selection of reference standards during method development. Copyright © 2015 Elsevier B.V. All rights reserved.
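For readers unfamiliar with AHP, the priority calculation reduces to the normalized principal eigenvector of a reciprocal pairwise-comparison matrix, plus a consistency check. A minimal Python sketch under assumed, illustrative judgments; the study's actual comparison matrix is not given in the abstract.

import numpy as np

# Reciprocal pairwise-comparison matrix over the six criteria
# (feasibility, abundance, stability, accuracy, precision, robustness).
# The judgments below are illustrative, not those of the cited study.
A = np.array([
    [1,   3,   1/2, 1,   1,   2],
    [1/3, 1,   1/3, 1/2, 1/2, 1],
    [2,   3,   1,   2,   2,   3],
    [1,   2,   1/2, 1,   1,   2],
    [1,   2,   1/2, 1,   1,   2],
    [1/2, 1,   1/3, 1/2, 1/2, 1],
])

# Standard AHP: priorities are the normalized principal eigenvector
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio against Saaty's random index (~1.24 for n = 6)
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 1.24
print("priorities:", np.round(w, 3), "consistency ratio:", round(cr, 3))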
Malyarenko, Dariya I; Pang, Yuxi; Senegas, Julien; Ivancevic, Marko K; Ross, Brian D; Chenevert, Thomas L
2015-12-01
Spatially non-uniform diffusion weighting bias due to gradient nonlinearity (GNL) causes substantial errors in apparent diffusion coefficient (ADC) maps for anatomical regions imaged distant from magnet isocenter. Our previously described approach allowed effective removal of spatial ADC bias from three orthogonal DWI measurements for mono-exponential media of arbitrary anisotropy. The present work evaluates correction feasibility and performance for quantitative diffusion parameters of the two-component IVIM model for well-perfused and nearly isotropic renal tissue. Sagittal kidney DWI scans of a volunteer were performed on a clinical 3T MRI scanner near isocenter and offset superiorly. Spatially non-uniform diffusion weighting due to GNL resulted in both a shift and a broadening of perfusion-suppressed ADC histograms for off-center DWI relative to unbiased measurements close to isocenter. Direction-average DW-bias correctors were computed based on the known gradient design provided by the vendor. The computed bias maps were empirically confirmed by coronal DWI measurements for an isotropic gel-flood phantom. Both phantom and renal tissue ADC bias for off-center measurements was effectively removed by applying pre-computed 3D correction maps. Comparable ADC accuracy was achieved for corrections of both b-maps and DWI intensities in the presence of IVIM perfusion. No significant bias impact was observed for the IVIM perfusion fraction.
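Applying a pre-computed 3D correction map is, at its simplest, a voxel-wise rescaling. A minimal sketch under the assumption of a multiplicative bias model; the file names and the exact form of the corrector are illustrative, not the authors' implementation.

import numpy as np

# adc_meas: measured ADC map biased by gradient nonlinearity;
# corr: pre-computed direction-average bias map on the same grid
# (local ratio of actual to nominal diffusion weighting).
adc_meas = np.load("adc_measured.npy")
corr = np.load("gnl_bias_map.npy")

# If the measured ADC scales with the local diffusion-weighting error,
# dividing by the bias corrector restores the unbiased estimate voxel-wise.
adc_corrected = adc_meas / corr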
Duan, Xiaotao; Zhong, Dafang; Chen, Xiaoyan
2008-06-01
Houttuynin (decanoyl acetaldehyde), a beta-dicarbonyl compound, is the major antibacterial constituent in the volatile oil of Houttuynia cordata Thunb. In the present work, detection of houttuynin in human plasma based on chemical derivatization with 2,4-dinitrophenylhydrazine (DNPH) coupled with liquid chromatography/tandem mass spectrometry is described. The primary reaction products between the beta-dicarbonyl compound and DNPH in the aqueous phase were identified as heterocyclic structures, whose mass spectrometric ionization and fragmentation behavior were characterized with the aid of high-resolution multistage mass spectral analysis. For quantification, houttuynin and the internal standard (IS, benzophenone) in plasma were first converted to their DNPH derivatives without sample purification, then extracted from human plasma with n-hexane and detected by liquid chromatography/tandem mass spectrometry performed in selected reaction monitoring (SRM) mode. This method allowed a lower limit of quantification (LLOQ) of 1.0 ng/ml using 100 microl of plasma. The validation results showed high accuracy (%bias < 2.1) and precision (%CV < 7.2) over a broad linear dynamic range (1.0-5000 ng/ml). The simple and quantitative derivatization coupled with tandem mass spectrometric analysis provides a sensitive and robust method for the determination of plasma houttuynin in pharmacokinetic studies.
Digital micromirror device-based common-path quantitative phase imaging.
Zheng, Cheng; Zhou, Renjie; Kuang, Cuifang; Zhao, Guangyuan; Yaqoob, Zahid; So, Peter T C
2017-04-01
We propose a novel common-path quantitative phase imaging (QPI) method based on a digital micromirror device (DMD). The DMD is placed in a plane conjugate to the objective back-aperture plane for the purpose of generating two plane waves that illuminate the sample. A pinhole is used in the detection arm to filter one of the beams after the sample to create a reference beam. Additionally, a transmission-type liquid crystal device, placed at the objective back-aperture plane, eliminates the specular reflection noise arising from all the "off"-state DMD micromirrors, which is common to all DMD-based illuminations. We have demonstrated high-sensitivity QPI, with measured spatial and temporal noise of 4.92 nm and 2.16 nm, respectively. Experiments with calibrated polystyrene beads illustrate the desired phase measurement accuracy. In addition, we have measured the dynamic height maps of red blood cell membrane fluctuations, showing the efficacy of the proposed system for live cell imaging. Most importantly, the DMD grants the system convenience in varying the interference fringe period on the camera to easily satisfy the pixel sampling conditions. This feature also alleviates the pinhole alignment complexity. We envision that the proposed DMD-based common-path QPI system will allow for system miniaturization and automation for broader adoption.
Quantitative Absorption and Kinetic Studies of Transient Species Using Gas Phase Optical Calorimetry
NASA Astrophysics Data System (ADS)
Melnik, Dmitry G.
2014-06-01
Quantitative measurements of the absorption cross-sections and reaction rate constants of free radicals by spectroscopic means require knowledge of the absolute concentration of the target species. We have demonstrated earlier that such information can be retrieved from absorption measurements of a well-known "reporter" molecule co-produced in radical synthesis. This method is limited to photochemical protocols that produce "reporters" stoichiometrically with the target species. This limitation can be overcome by use of optical calorimetry (OC), which measures the heat signatures of a photochemical protocol. These heat signatures are directly related to the amount of species produced and to the thermochemical data of the reactants and stable products, whose accuracy is usually substantially higher than that of the absorption data for prospective "reporters". The implementation of the OC method presented in this talk is based on measurements of the frequency shift of the resonances due to the change in the optical density of the reactive sample within a Fabry-Perot cavity, caused by deposition of heat from the absorbed photolysis beam and subsequent chemical reactions. Preliminary results will be presented and future development of this experimental technique will be discussed. D. Melnik, R. Chhantyal-Pun and T. A. Miller, J. Phys. Chem. A, 114, 11583, (2010)
Evaluation of MRI sequences for quantitative T1 brain mapping
NASA Astrophysics Data System (ADS)
Tsialios, P.; Thrippleton, M.; Glatz, A.; Pernet, C.
2017-11-01
T1 mapping is a quantitative MRI technique with significant application in brain imaging. It allows evaluation of contrast uptake, blood perfusion, and volume, providing a more specific biomarker of disease progression than conventional T1-weighted images. While there are many techniques for T1 mapping, reported T1 values in tissues vary widely, raising the issue of protocol reproducibility and standardization. The gold standard for obtaining T1 maps is the IR-SE sequence. Widely used alternative sequences are IR-SE-EPI, VFA (DESPOT), DESPOT-HIFI and MP2RAGE, which speed up scanning and fitting procedures. A custom MRI phantom was used to assess the reproducibility and accuracy of the different methods. All scans were performed using a 3T Siemens Prisma scanner. The acquired data were processed using two different codes. The main difference was observed for VFA (DESPOT), which grossly overestimated the T1 relaxation time by 214 ms [126, 270] compared to the IR-SE sequence. The MP2RAGE and DESPOT-HIFI sequences gave slightly shorter times than IR-SE (~20 to 30 ms) and can be considered alternative, time-efficient methods for acquiring accurate T1 maps of the human brain, while IR-SE-EPI gave identical results at the cost of lower image quality.
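The gold-standard IR-SE approach boils down to fitting an inversion-recovery signal model to the magnitudes measured at several inversion times. A minimal per-voxel sketch in Python/SciPy with synthetic data consistent with a T1 near 800 ms; the three-parameter magnitude model is a common choice, not necessarily the one used in the study.

import numpy as np
from scipy.optimize import curve_fit

def ir_signal(ti, a, b, t1):
    """Three-parameter inversion-recovery model |a + b*exp(-TI/T1)|;
    with perfect inversion, b = -2a."""
    return np.abs(a + b * np.exp(-ti / t1))

# Illustrative inversion times (ms) and synthetic magnitudes for one voxel
ti = np.array([50, 100, 200, 400, 800, 1600, 3200], dtype=float)
sig = np.array([316, 275, 201, 77, 95, 263, 347], dtype=float)

p0 = [sig.max(), -2 * sig.max(), 1000.0]  # initial guess, T1 in ms
(a, b, t1), _ = curve_fit(ir_signal, ti, sig, p0=p0, maxfev=10000)
print(f"fitted T1 = {t1:.0f} ms")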
Yu, Kebing; Salomon, Arthur R.
2010-01-01
Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through tandem mass spectrometry (MS/MS). Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to a variety of experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our High Throughput Autonomous Proteomic Pipeline (HTAPP) used in the automated acquisition and post-acquisition analysis of proteomic data. PMID:19834895
Adaptive sensor-based ultra-high accuracy solar concentrator tracker
NASA Astrophysics Data System (ADS)
Brinkley, Jordyn; Hassanzadeh, Ali
2017-09-01
Conventional solar trackers use information about the sun's position, obtained either by direct sensing or by GPS. Our method instead uses the shading of the receiver. This, coupled with a nonimaging optics design, allows us to achieve ultra-high concentration. Incorporating a sensor-based shadow-tracking method into a two-stage concentrating solar hybrid parabolic trough allows the system to maintain high concentration with acute accuracy.
Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.
Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf
2010-05-25
Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. In this work we present a set-based framework that allows one to discriminate between competing model hypotheses and to provide guaranteed outer estimates of the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is achieved by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by an efficient strategy for balancing solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach can conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation, and shows that the proposed method allows evaluation of the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.
Sentiment Analysis of Health Care Tweets: Review of the Methods Used.
Gohil, Sunir; Vuik, Sabine; Darzi, Ara
2018-04-23
Twitter is a microblogging service where users can send and read short 140-character messages called "tweets." Many unstructured, free-text tweets relating to health care are shared on Twitter, which is becoming a popular platform for health care research. Sentiment is a metric commonly used to investigate the positive or negative opinion within these messages. Exploring the methods used for sentiment analysis in Twitter health care research may allow us to better understand the options available for future research in this growing field. The first objective of this study was to understand which tools are available for sentiment analysis in Twitter health care research, by reviewing existing studies in this area and the methods they used. The second objective was to determine which method would work best in health care settings, by analyzing how the methods were used to answer specific health care questions, how they were produced, and how their accuracy was analyzed. A review of the literature was conducted pertaining to Twitter and health care research that used a quantitative method of sentiment analysis for the free-text messages (tweets). The study compared the types of tools used in each case and examined methods for tool production, tool training, and analysis of accuracy. A total of 12 papers studying the quantitative measurement of sentiment in the health care setting were found. More than half of these studies produced tools specifically for their research, 4 used freely available open source tools, and 2 used commercially available software. Moreover, 4 of the 12 tools were trained using a smaller sample of the study's final data. The sentiment methods were trained against, on average, 0.45% (2816/627,024) of the total sample data. Only one of the 12 papers reported an analysis of the accuracy of the tool used. Multiple methods are used for sentiment analysis of tweets in the health care setting. These range from self-produced basic categorizations to more complex and expensive commercial software. The open source and commercial methods were developed on product reviews and generic social media messages. None of these methods have been extensively tested against a corpus of health care messages to check their accuracy. This study suggests that there is a need for an accurate and tested tool for sentiment analysis of tweets, trained first on a health care setting-specific corpus of manually annotated tweets. ©Sunir Gohil, Sabine Vuik, Ara Darzi. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 23.04.2018.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca; Haider, Masoom A.; Jaffray, David A.
Purpose: A previously proposed method to reduce the radiation dose to the patient in dynamic contrast-enhanced (DCE) CT is enhanced by principal component analysis (PCA) filtering, which improves the signal-to-noise ratio (SNR) of time-concentration curves in the DCE-CT study. The efficacy of the combined method in maintaining the accuracy of kinetic parameter estimates at low temporal resolution is investigated with pixel-by-pixel kinetic analysis of DCE-CT data. Methods: The method is based on DCE-CT scanning performed with low temporal resolution to reduce the radiation dose to the patient. An arterial input function (AIF) with high temporal resolution can be generated from a coarsely sampled AIF through a previously published method of AIF estimation. To increase the SNR of the time-concentration curves (tissue curves), a region-of-interest is first segmented into squares of 3 × 3 pixels. Subsequently, PCA filtering combined with a fraction of residual information criterion is applied to all the segmented squares for further improvement of their SNRs. The proposed method was applied to each DCE-CT data set of a cohort of 14 patients at varying levels of down-sampling. Kinetic analyses using the modified Tofts' model and the singular value decomposition method were then carried out for each of the down-sampling schemes with intervals between 2 and 15 s. The results were compared with analyses done with the measured data at high temporal resolution (i.e., the original scanning frequency) as the reference. Results: The patients' AIFs were estimated to high accuracy based on the 11 orthonormal bases of arterial impulse responses established in the previous paper. In addition, noise in the images was effectively reduced by using five principal components of the tissue curves for filtering. Kinetic analyses using the proposed method showed superior results compared to those with down-sampling alone; they were able to maintain the accuracy of the quantitative histogram parameters of the volume transfer constant [standard deviation (SD), 98th percentile, and range], rate constant (SD), blood volume fraction (mean, SD, 98th percentile, and range), and blood flow (mean, SD, median, 98th percentile, and range) for sampling intervals between 10 and 15 s. Conclusions: The proposed method of PCA filtering combined with the AIF estimation technique allows low-frequency scanning in DCE-CT studies to reduce patient radiation dose. The results indicate that the method is useful in pixel-by-pixel kinetic analysis of DCE-CT data for patients with cervical cancer.
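The PCA filtering step (keeping five principal components of the tissue curves, per the Results above) can be sketched with a plain SVD. A minimal Python example; array names and input files are illustrative, and the paper's residual-information criterion for choosing the component count is not reproduced here.

import numpy as np

# curves: (n_pixels, n_timepoints) tissue time-concentration curves from
# the segmented 3x3 squares; the component count follows the text above.
curves = np.load("tissue_curves.npy")

mean = curves.mean(axis=0)
centered = curves - mean

# SVD-based PCA: keep the five strongest temporal components and
# reconstruct, discarding the noise carried by the remaining ones
u, s, vt = np.linalg.svd(centered, full_matrices=False)
k = 5
denoised = mean + (u[:, :k] * s[:k]) @ vt[:k, :]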
Pharmacometabolomics Informs Quantitative Radiomics for Glioblastoma Diagnostic Innovation.
Katsila, Theodora; Matsoukas, Minos-Timotheos; Patrinos, George P; Kardamakis, Dimitrios
2017-08-01
Applications of omics systems biology technologies have enormous promise for radiology and diagnostics in surgical fields. In this context, the emerging fields of radiomics (a systems scale approach to radiology using a host of technologies, including omics) and pharmacometabolomics (use of metabolomics for patient and disease stratification and guiding precision medicine) offer much synergy for diagnostic innovation in surgery, particularly in neurosurgery. This synthesis of omics fields and applications is timely because diagnostic accuracy in central nervous system tumors still challenges decision-making. Considering the vast heterogeneity in brain tumors, disease phenotypes, and interindividual variability in surgical and chemotherapy outcomes, we believe that diagnostic accuracy can be markedly improved by quantitative radiomics coupled to pharmacometabolomics and related health information technologies while optimizing economic costs of traditional diagnostics. In this expert review, we present an innovation analysis on a systems-level multi-omics approach toward diagnostic accuracy in central nervous system tumors. For this, we suggest that glioblastomas serve as a useful application paradigm. We performed a literature search on PubMed for articles published in English between 2006 and 2016. We used the search terms "radiomics," "glioblastoma," "biomarkers," "pharmacogenomics," "pharmacometabolomics," "pharmacometabonomics/pharmacometabolomics," "collaborative informatics," and "precision medicine." A list of the top 4 insights we derived from this literature analysis is presented in this study. For example, we found that (i) tumor grading needs to be better refined, (ii) diagnostic precision should be improved, (iii) standardization in radiomics is lacking, and (iv) quantitative radiomics needs to prove clinical implementation. We conclude with an interdisciplinary call to the metabolomics, pharmacy/pharmacology, radiology, and surgery communities that pharmacometabolomics coupled to information technologies (chemoinformatics tools, databases, collaborative systems) can inform quantitative radiomics, thus translating Big Data and information growth to knowledge growth, rational drug development and diagnostics innovation for glioblastomas, and possibly in other brain tumors.
Park, Han Sang; Rinehart, Matthew T; Walzer, Katelyn A; Chi, Jen-Tsan Ashley; Wax, Adam
2016-01-01
Malaria detection through microscopic examination of stained blood smears is a diagnostic challenge that relies heavily on the expertise of trained microscopists. This paper presents an automated analysis method for detection and staging of red blood cells infected by the malaria parasite Plasmodium falciparum at the trophozoite or schizont stage. Unlike previous efforts in this area, this study uses quantitative phase images of unstained cells. Erythrocytes are automatically segmented using thresholds of optical phase and refocused to enable quantitative comparison of phase images. Refocused images are analyzed to extract 23 morphological descriptors based on the phase information. While all individual descriptors are highly statistically different between infected and uninfected cells, no single descriptor enables separation of the populations at a level satisfactory for clinical utility. To improve the diagnostic capacity, we applied various machine learning techniques, including linear discriminant classification (LDC), logistic regression (LR), and k-nearest neighbor classification (NNC), to formulate algorithms that combine all of the calculated physical parameters to distinguish cells more effectively. Results show that LDC provides the highest accuracy, up to 99.7%, in detecting schizont-stage infected cells compared to uninfected RBCs. NNC showed slightly better accuracy (99.5%) than either LDC (99.0%) or LR (99.1%) for discriminating late trophozoites from uninfected RBCs. However, for early trophozoites, LDC produced the best accuracy of 98%. Discrimination of infection stage was less accurate, producing high specificity (99.8%) but only 45.0%-66.8% sensitivity, with early trophozoites most often mistaken for the late trophozoite or schizont stage, and the late trophozoite and schizont stages most often confused for each other. Overall, this methodology points to a significant clinical potential of using quantitative phase imaging to detect and stage malaria infection without staining or expert analysis.
NASA Astrophysics Data System (ADS)
Takahashi, Tomoko; Thornton, Blair
2017-12-01
This paper reviews methods to compensate for matrix effects and self-absorption in the quantitative analysis of the composition of solids measured using Laser Induced Breakdown Spectroscopy (LIBS), and their applications to in-situ analysis. Methods to reduce matrix and self-absorption effects on calibration curves are first introduced. The conditions under which calibration curves are applicable to quantification of the composition of solid samples, and their limitations, are discussed. While calibration-free LIBS (CF-LIBS), which corrects matrix effects theoretically based on the Boltzmann distribution law and the Saha equation, has been applied in a number of studies, several requirements must be satisfied for the calculated chemical compositions to be valid. Also, peaks of all elements contained in the target need to be detected, which is a bottleneck for in-situ analysis of unknown materials. Multivariate analysis techniques are gaining momentum in LIBS analysis. Among the available techniques, principal component regression (PCR) and partial least squares (PLS) regression, which can extract composition-related information from all spectral data, are widely established methods and have been applied to various fields, including in-situ applications in air and planetary exploration. Artificial neural networks (ANNs), which can model non-linear effects, have also been investigated as a quantitative method, and their applications are introduced. The ability to make quantitative estimates from LIBS signals is seen as a key element for the technique to gain wider acceptance as an analytical method, especially in in-situ applications. To accelerate this process, it is recommended that accuracy be described using common figures of merit expressing the overall normalised accuracy, such as the normalised root mean square error (NRMSE), when comparing the accuracy obtained from different setups and analytical methods.
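As a concrete reading of that recommendation, the NRMSE figure of merit can be computed in a few lines. A minimal Python sketch with placeholder numbers; note that range normalisation is only one common convention (normalising by the mean is another), so the review's exact definition should be checked before comparing published values.

import numpy as np

def nrmse(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Root mean square error normalised by the reference range: one
    common form of the overall figure of merit recommended above."""
    rmse = np.sqrt(np.mean((predicted - reference) ** 2))
    return rmse / (reference.max() - reference.min())

# Illustrative composition estimates vs. certified values (wt%)
ref = np.array([1.2, 5.0, 9.8, 14.6, 20.1])
est = np.array([1.5, 4.6, 10.3, 14.0, 21.0])
print(f"NRMSE = {nrmse(est, ref):.3f}")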
Lyngby, Janne G; Court, Michael H; Lee, Pamela M
2017-08-01
The clopidogrel active metabolite (CAM) is unstable and challenging to quantitate. The objective was to validate a new method for the stabilization and quantitation of CAM, clopidogrel, and the inactive metabolites clopidogrel carboxylic acid and 2-oxo-clopidogrel in feline plasma. Two healthy cats were administered clopidogrel to demonstrate the assay's in vivo utility. Stabilization of CAM was achieved by adding 2-bromo-3'-methoxyacetophenone to blood tubes to form a derivatized CAM (CAM-D). Method validation included evaluation of calibration curve linearity, accuracy, and precision; within- and between-assay precision and accuracy; and compound stability, using spiked blank feline plasma. Analytes were measured by high-performance liquid chromatography with tandem mass spectrometry. In vivo utility was demonstrated by a pharmacokinetic study of cats given a single oral dose of 18.75 mg clopidogrel. The 2-oxo-clopidogrel metabolite was unstable. Clopidogrel, CAM-D, and clopidogrel carboxylic acid appear stable for 1 week at room temperature and 9 months at -80°C. Standard curves showed linearity for CAM-D, clopidogrel, and clopidogrel carboxylic acid (r > 0.99). Between-assay accuracy and precision were ≤2.6% and ≤7.1% for CAM-D and ≤17.9% and ≤11.3% for clopidogrel and clopidogrel carboxylic acid. Within-assay precision for all three compounds was ≤7%. All three compounds were detected in plasma from healthy cats receiving clopidogrel. This methodology is accurate and precise for the simultaneous quantitation of CAM-D, clopidogrel, and clopidogrel carboxylic acid in feline plasma, but not 2-oxo-clopidogrel. Validation of this assay is the first step toward more fully understanding the use of clopidogrel in cats. Copyright © 2017 Elsevier B.V. All rights reserved.
Chen, Ru-huang; Jin, Gang
2015-08-01
This paper presents an application of mid-infrared (MIR), near-infrared (NIR) and Raman spectroscopy to collecting the spectra of 31 low-density polyethylene/polypropylene (LDPE/PP) samples with different proportions. Different pre-processing methods (multiplicative scatter correction, mean centering and Savitzky-Golay first derivative) and spectral regions were explored to develop partial least-squares (PLS) models for LDPE, and their influence on the accuracy of the PLS model is discussed. The three spectroscopic techniques were compared in terms of quantitative accuracy. The pre-processing methods and spectral region have a great impact on the accuracy of the PLS model, especially for spectra with subtle differences, random noise and baseline variation. After pre-processing and spectral region selection, the calibration models for MIR, NIR and Raman exhibited R2/RMSEC values of 0.9906/2.941, 0.9973/1.561 and 0.9972/1.598 respectively, compared with 0.8876/10.15, 0.8493/11.75 and 0.8757/10.67 before any treatment. The results also suggest that MIR, NIR and Raman are all strong tools for predicting the content of LDPE in LDPE/PP blends. NIR and Raman, however, showed higher accuracy after pre-processing and are more suitable for fast quantitative characterization owing to their high measurement speed.
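The combination of Savitzky-Golay first-derivative pre-processing and PLS calibration described above maps directly onto standard SciPy/scikit-learn components. A minimal Python sketch; the input files, window settings, and component count are assumptions, as the paper's exact parameters are not given in the abstract.

import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

# spectra: (n_samples, n_wavelengths) MIR/NIR/Raman intensities;
# y: LDPE content (%) of each LDPE/PP blend. Placeholder arrays.
spectra = np.load("spectra.npy")
y = np.load("ldpe_content.npy")

# Savitzky-Golay first derivative suppresses baseline variation
X = savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=5)
pls.fit(X, y)
r2 = pls.score(X, y)  # calibration R^2, analogous to the values quoted
print(f"calibration R^2 = {r2:.4f}")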
Mesihää, Samuel; Rasanen, Ilpo; Ojanperä, Ilkka
2018-05-01
Gas chromatography (GC) hyphenated with nitrogen chemiluminescence detection (NCD) and quadrupole time-of-flight mass spectrometry (QTOFMS) was applied for the first time to the quantitative analysis of new psychoactive substances (NPS) in urine, based on the N-equimolar response of NCD. A method was developed and validated to estimate the concentrations of three metabolites of the common stimulant NPS α-pyrrolidinovalerophenone (α-PVP) in spiked urine samples, simulating an analysis with no authentic reference standards for the metabolites and using the parent drug instead for quantitative calibration. The metabolites studied were OH-α-PVP (M1), 2″-oxo-α-PVP (M3), and N,N-bis-dealkyl-PVP (2-amino-1-phenylpentan-1-one; M5). Sample preparation involved liquid-liquid extraction with a mixture of ethyl acetate and butyl chloride at basic pH and subsequent silylation of the sec-hydroxyl and prim-amino groups of M1 and M5, respectively. Simultaneous compound identification was based on the accurate masses of the protonated molecules for each compound by QTOFMS following atmospheric pressure chemical ionization. The accuracy of quantification of the parent-calibrated NCD method was compared with that of the corresponding parent-calibrated QTOFMS method, as well as with a reference QTOFMS method calibrated with the authentic reference standards. The NCD method produced accuracy equal to that of the reference method for α-PVP, M3 and M5, while a higher negative bias (25%) was obtained for M1, best explained by recovery and stability issues. The performance of the parent-calibrated QTOFMS method was inferior to the reference method, with an especially high negative bias (60%) for M1. The NCD method enabled better quantitative precision than the QTOFMS methods. To evaluate the novel approach in casework, twenty post-mortem urine samples previously found positive for α-PVP were analyzed by the parent-calibrated NCD method and the reference QTOFMS method. The largest difference in the quantitative results between the two methods was only 33%, and the NCD method's precision, as the coefficient of variation, was better than 13%. The limit of quantification for the NCD method was approximately 0.25 μg/mL in urine, which generally allowed the analysis of α-PVP and the main metabolite M1. However, the sensitivity was not sufficient for the low concentrations of M3 and M5. Consequently, while having potential for the instant analysis of NPS and metabolites at moderate concentrations without reference standards, the NCD method should be further developed for improved sensitivity to be more generally applicable. Copyright © 2018 Elsevier B.V. All rights reserved.
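The parent-calibration idea rests on NCD's (near-)equimolar response per nitrogen atom: a calibration line built from the parent drug can be applied to a metabolite peak in molar units. A minimal Python sketch with invented calibration numbers; the nitrogen-count scaling is shown explicitly as an assumption, even though α-PVP and the metabolites studied here each carry a single nitrogen.

import numpy as np

# Parent-drug (alpha-PVP) calibration: NCD peak areas vs. spiked molar
# concentrations. All numbers are illustrative placeholders.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # umol/mL
area = np.array([120.0, 255.0, 498.0, 1010.0, 1985.0])

slope, intercept = np.polyfit(conc, area, 1)   # linear calibration line

# Because the NCD response is (near-)equimolar per nitrogen atom, the
# parent calibration can be applied to a metabolite peak in molar units,
# scaled by the nitrogen count if it differs from the parent's.
metabolite_area = 340.0
n_parent, n_metabolite = 1, 1                  # N atoms in parent and metabolite
molar_conc = (metabolite_area - intercept) / slope * (n_parent / n_metabolite)
print(f"estimated metabolite concentration: {molar_conc:.2f} umol/mL")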
Respiratory trace feature analysis for the prediction of respiratory-gated PET quantification.
Wang, Shouyi; Bowen, Stephen R; Chaovalitwongse, W Art; Sandison, George A; Grabowski, Thomas J; Kinahan, Paul E
2014-02-21
The benefits of respiratory gating in quantitative PET/CT vary tremendously between individual patients. Respiratory pattern is among many patient-specific characteristics that are thought to play an important role in gating-induced imaging improvements. However, the quantitative relationship between patient-specific characteristics of respiratory pattern and improvements in quantitative accuracy from respiratory-gated PET/CT has not been well established. If such a relationship could be estimated, then patient-specific respiratory patterns could be used to prospectively select appropriate motion compensation during image acquisition on a per-patient basis. This study was undertaken to develop a novel statistical model that predicts quantitative changes in PET/CT imaging due to respiratory gating. Free-breathing static FDG-PET images without gating and respiratory-gated FDG-PET images were collected from 22 lung and liver cancer patients on a PET/CT scanner. PET imaging quality was quantified with peak standardized uptake value (SUVpeak) over lesions of interest. Relative differences in SUVpeak between static and gated PET images were calculated to indicate quantitative imaging changes due to gating. A comprehensive multidimensional extraction of the morphological and statistical characteristics of respiratory patterns was conducted, resulting in 16 features that characterize representative patterns of a single respiratory trace. The six most informative features were subsequently extracted using a stepwise feature selection approach. The multiple-regression model was trained and tested based on a leave-one-subject-out cross-validation. The predicted quantitative improvements in PET imaging achieved an accuracy higher than 90% using a criterion with a dynamic error-tolerance range for SUVpeak values. The results of this study suggest that our prediction framework could be applied to determine which patients would likely benefit from respiratory motion compensation when clinicians quantitatively assess PET/CT for therapy target definition and response assessment.
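The modeling pipeline described above (16 trace features, stepwise selection of six, multiple regression with leave-one-subject-out validation) can be approximated with standard scikit-learn components. A simplified Python sketch with placeholder arrays; note that, unlike a rigorous pipeline, selecting features on the full dataset before cross-validation leaks information and is shown here only for brevity.

import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# X: (n_patients, 16) respiratory-trace features; y: relative change in
# SUVpeak between static and gated PET. Placeholder arrays; the selector
# below is a generic stand-in for the paper's stepwise procedure.
X = np.load("trace_features.npy")
y = np.load("suv_change.npy")

sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=6)
sfs.fit(X, y)
X_sel = sfs.transform(X)

# Leave-one-subject-out cross-validated predictions of gating benefit
pred = cross_val_predict(LinearRegression(), X_sel, y, cv=LeaveOneOut())
print("predicted vs actual:", np.c_[pred[:3], y[:3]])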
Rota, Cristina; Biondi, Marco; Trenti, Tommaso
2011-09-26
Aution Max AX-4030, a test strip analyzer recently introduced to the market, represents an upgrade of the Aution Max AX-4280 widely employed for urinalysis. This new instrument model can hold two different test strips at the same time. In the present study the two instruments were compared using Uriflet 9UB and the recently introduced Aution Sticks 10PA urine strips, the latter offering an additional test area for the measurement of urinary creatinine. Imprecision and correlation between instruments and strips were evaluated for chemical-physical parameters. Accuracy was evaluated for protein, glucose and creatinine by comparing the semi-quantitative results with those obtained by quantitative methods. The well-known interference of high ascorbic acid levels with urine glucose test strip determination was evaluated; the influence of ascorbic acid on protein and creatinine determination was also assessed. The two instruments demonstrated comparable performance: precision and correlation between instruments and strips, evaluated for chemical-physical parameters, were always good. Furthermore, accuracy was always very good: the semi-quantitative protein and glucose measurements were highly correlated with those obtained by quantitative methods. Moreover, the semi-quantitative measurements of creatinine with Aution Sticks 10PA urine strips were highly comparable with quantitative results. The 10PA urine strips are therefore suitable for urine creatinine determination, making it possible to correct urinalysis results for urinary creatinine concentration whenever necessary and to calculate the protein-to-creatinine ratio. Further studies should be carried out to evaluate the effectiveness and appropriateness of semi-quantitative creatinine analysis.
Respiratory trace feature analysis for the prediction of respiratory-gated PET quantification
NASA Astrophysics Data System (ADS)
Wang, Shouyi; Bowen, Stephen R.; Chaovalitwongse, W. Art; Sandison, George A.; Grabowski, Thomas J.; Kinahan, Paul E.
2014-02-01
The benefits of respiratory gating in quantitative PET/CT vary tremendously between individual patients. Respiratory pattern is among many patient-specific characteristics that are thought to play an important role in gating-induced imaging improvements. However, the quantitative relationship between patient-specific characteristics of respiratory pattern and improvements in quantitative accuracy from respiratory-gated PET/CT has not been well established. If such a relationship could be estimated, then patient-specific respiratory patterns could be used to prospectively select appropriate motion compensation during image acquisition on a per-patient basis. This study was undertaken to develop a novel statistical model that predicts quantitative changes in PET/CT imaging due to respiratory gating. Free-breathing static FDG-PET images without gating and respiratory-gated FDG-PET images were collected from 22 lung and liver cancer patients on a PET/CT scanner. PET imaging quality was quantified with peak standardized uptake value (SUVpeak) over lesions of interest. Relative differences in SUVpeak between static and gated PET images were calculated to indicate quantitative imaging changes due to gating. A comprehensive multidimensional extraction of the morphological and statistical characteristics of respiratory patterns was conducted, resulting in 16 features that characterize representative patterns of a single respiratory trace. The six most informative features were subsequently extracted using a stepwise feature selection approach. The multiple-regression model was trained and tested based on a leave-one-subject-out cross-validation. The predicted quantitative improvements in PET imaging achieved an accuracy higher than 90% using a criterion with a dynamic error-tolerance range for SUVpeak values. The results of this study suggest that our prediction framework could be applied to determine which patients would likely benefit from respiratory motion compensation when clinicians quantitatively assess PET/CT for therapy target definition and response assessment.
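As a rough illustration of the modeling pipeline described above, the sketch below combines greedy forward stepwise feature selection with a multiple linear regression evaluated by leave-one-subject-out cross-validation. All data are synthetic stand-ins; the paper's actual 16 respiratory-trace features, selection criterion, and dynamic error-tolerance scoring are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

# Hypothetical data: 22 patients x 16 respiratory-trace features, and the
# relative SUVpeak change between static and gated PET for each patient.
rng = np.random.default_rng(0)
X = rng.normal(size=(22, 16))   # stand-in for the extracted features
y = rng.normal(size=22)         # stand-in for the % SUVpeak change

# Greedy forward stepwise selection of 6 features (one simple variant of
# the "stepwise feature selection" named in the abstract), scored by
# leave-one-subject-out squared prediction error.
selected = []
for _ in range(6):
    best_feat, best_err = None, np.inf
    for j in range(X.shape[1]):
        if j in selected:
            continue
        cols = selected + [j]
        err = 0.0
        for train, test in LeaveOneOut().split(X):
            model = LinearRegression().fit(X[train][:, cols], y[train])
            err += (model.predict(X[test][:, cols])[0] - y[test][0]) ** 2
        if err < best_err:
            best_feat, best_err = j, err
    selected.append(best_feat)

print("selected feature indices:", selected)
```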
Cued Speech Transliteration: Effects of Speaking Rate and Lag Time on Production Accuracy
Tessler, Morgan P.
2016-01-01
Many deaf and hard-of-hearing children rely on interpreters to access classroom communication. Although the exact level of access provided by interpreters in these settings is unknown, it is likely to depend heavily on interpreter accuracy (the proportion of the message correctly produced by the interpreter) and the factors that govern interpreter accuracy. In this study, the accuracy of 12 Cued Speech (CS) transliterators with varying degrees of experience was examined at three different speaking rates (slow, normal, fast). Accuracy was measured with a high-resolution, objective metric in order to facilitate quantitative analyses of the effect of each factor on accuracy. Results showed that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, whereas the effect of lag time on accuracy, also negative, was quite small and explained just 3% of the variance. Greater experience was generally associated with greater accuracy; however, high levels of experience did not guarantee high levels of accuracy. Finally, the overall accuracy of the 12 transliterators, 54% on average across all conditions, was low enough to raise serious concerns about the quality of CS transliteration services that (at least some) children receive in educational settings. PMID:27221370
Henshall, John M; Dierens, Leanne; Sellars, Melony J
2014-09-02
While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are sufficiently accurate to provide useful information for a breeding program. Treating genotypes as quantitative values is an alternative to perturbing genotypes using an assumed error distribution, but can produce very different results. An understanding of the distribution of the error is required for SNP genotyping platforms.
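To make the distinction between discrete and quantitative genotypes concrete, the sketch below converts hypothetical genotype posterior probabilities into expected B-allele dosages, one common realization of a "quantitative genotype"; the paper's actual matrix likelihood formulation for parentage assignment is not reproduced.

```python
import numpy as np

# Genotype posterior probabilities for one individual at 3 SNPs,
# ordered P(AA), P(AB), P(BB). Values are illustrative only.
probs = np.array([
    [0.995, 0.005, 0.000],
    [0.100, 0.850, 0.050],   # an ambiguous call (max prob < 0.99)
    [0.000, 0.020, 0.980],
])

# Discrete genotypes: take the most likely class (0, 1 or 2 B alleles).
discrete = probs.argmax(axis=1)

# Quantitative genotypes: expected B-allele dosage under the posterior.
dosage = probs @ np.array([0.0, 1.0, 2.0])

print("discrete:", discrete)     # [0 1 2]
print("quantitative:", dosage)   # [0.005 0.95 1.98]
```

Under this representation, the allele frequency of a DNA pool is simply the mean dosage across contributing individuals divided by two, which is one reason the same framework extends naturally to pooled samples.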
Voice reaction times with recognition for Commodore computers
NASA Technical Reports Server (NTRS)
Washburn, David A.; Putney, R. Thompson
1990-01-01
Hardware and software modifications are presented that allow for collection and recognition by a Commodore computer of spoken responses. Responses are timed with millisecond accuracy and automatically analyzed and scored. Accuracy data for this device from several experiments are presented. Potential applications and suggestions for improving recognition accuracy are also discussed.
Amperometric Glucose Sensors: Sources of Error and Potential Benefit of Redundancy
Castle, Jessica R.; Kenneth Ward, W.
2010-01-01
Amperometric glucose sensors have advanced the care of patients with diabetes and are being studied to control insulin delivery in the research setting. However, at times, currently available sensors demonstrate suboptimal accuracy, which can result from calibration error, sensor drift, or lag. Inaccuracy can be particularly problematic in a closed-loop glycemic control system. In such a system, the use of two sensors allows selection of the more accurate sensor as the input to the controller. In our studies in subjects with type 1 diabetes, the accuracy of the better of two sensors significantly exceeded the accuracy of a single, randomly selected sensor. If an array with three or more sensors were available, it would likely allow even better accuracy with the use of voting. PMID:20167187
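A minimal sketch of sensor redundancy, assuming median voting across an array; the abstract does not specify the voting rule, so the median is used here as one robust choice.

```python
import numpy as np

def fuse_redundant_sensors(readings_mg_dl):
    """Median-vote fusion across a redundant glucose sensor array.

    With three or more sensors, the median rejects a single drifting or
    miscalibrated sensor; with two, it reduces to the mean.
    """
    readings = np.asarray(readings_mg_dl, dtype=float)
    return float(np.median(readings))

# One drifting sensor (210) is outvoted by the two that agree.
print(fuse_redundant_sensors([112.0, 118.0, 210.0]))  # -> 118.0
```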
Severgnini, Mara; de Denaro, Mario; Bortul, Marina; Vidali, Cristiana; Beorchia, Aulo
2014-01-08
Unlike conventional external radiotherapy, intraoperative electron radiation therapy (IOERT) cannot usually benefit from CT-based treatment planning software or from common dose verification procedures. For this reason, in vivo film dosimetry (IVFD) proves to be an effective methodology for evaluating the actual radiation dose delivered to the target. A practical method for IVFD during breast IOERT was carried out to improve information on the dose actually delivered to the tumor target and on the alignment of the shielding disk with respect to the electron beam. Two EBT3 GAFCHROMIC films were positioned on the two sides of the shielding disk in order to obtain the dose maps at the target and beyond the disk. Moreover, post-processing analysis of the dose distribution measured on the films provides a quantitative estimate of the misalignment between the collimator and the disk. EBT3 radiochromic films were demonstrated to be suitable dosimeters for IVFD owing to their linear dose-optical density response in a narrow range around the prescribed dose, as well as their capability to be fixed to the shielding disk without distorting the dose distribution. Off-line analysis of the radiochromic film allowed absolute dose measurements, which is indeed a very important verification of the correct exposure of the target organ, as well as an estimate of the dose to the healthy tissue underlying the shielding. These dose maps allow surgeons and radiation oncologists to take advantage of qualitative and quantitative feedback for setting more accurate treatment strategies and further optimized procedures. Proper alignment using elastic bands improved the absolute dose accuracy and the collimator-disk alignment by more than 50%.
NASA Astrophysics Data System (ADS)
Prentice, Boone M.; Chumbley, Chad W.; Caprioli, Richard M.
2017-01-01
Matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI IMS) allows for the visualization of molecular distributions within tissue sections. While providing excellent molecular specificity and spatial information, absolute quantification by MALDI IMS remains challenging. Especially in the low molecular weight region of the spectrum, analysis is complicated by matrix interferences and ionization suppression. Though tandem mass spectrometry (MS/MS) can be used to ensure chemical specificity and improve sensitivity by eliminating chemical noise, typical MALDI MS/MS modalities only scan for a single MS/MS event per laser shot. Herein, we describe TOF/TOF instrumentation that enables multiple fragmentation events to be performed in a single laser shot, allowing the intensity of the analyte to be referenced to the intensity of the internal standard in each laser shot while maintaining the benefits of MS/MS. This approach is illustrated by the quantitative analyses of rifampicin (RIF), an antibiotic used to treat tuberculosis, in pooled human plasma using rifapentine (RPT) as an internal standard. The results show greater than 4-fold improvements in relative standard deviation as well as improved coefficients of determination (R2) and accuracy (>93% quality controls, <9% relative errors). This technology is used as an imaging modality to measure absolute RIF concentrations in liver tissue from an animal dosed in vivo. Each microspot in the quantitative image measures the local RIF concentration in the tissue section, providing absolute pixel-to-pixel quantification from different tissue microenvironments. The average concentration determined by IMS is in agreement with the concentration determined by HPLC-MS/MS, showing a percent difference of 10.6%.
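The per-shot referencing scheme can be sketched in a few lines: each laser shot's analyte fragment signal is divided by the internal-standard signal from the same shot before averaging. The counts below are invented for illustration.

```python
import numpy as np

def normalized_intensity(analyte_counts, istd_counts):
    """Per-laser-shot ratio of analyte to internal-standard MS/MS signal.

    Referencing each shot to its own internal standard (here RIF vs RPT)
    cancels shot-to-shot ionization variability before averaging.
    """
    ratios = np.asarray(analyte_counts) / np.asarray(istd_counts)
    return ratios.mean(), ratios.std(ddof=1) / ratios.mean()  # mean, RSD

# Illustrative fragment-ion counts from five laser shots.
mean_ratio, rsd = normalized_intensity([820, 640, 990, 710, 870],
                                       [400, 310, 505, 340, 430])
print(f"mean RIF/RPT ratio {mean_ratio:.2f}, RSD {rsd:.1%}")
```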
NASA Astrophysics Data System (ADS)
Ueno, Yuichiro; Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi; Fujishima, Yasutake; Yoshida, Akira; Umegaki, Kikuo
2018-06-01
We developed a pinhole-type gamma camera, using a compact detector module based on a pixelated CdTe semiconductor, with sensitivity and quantitative accuracy suited to low dose rate fields. In order to improve the sensitivity of the pinhole-type semiconductor gamma camera, we adopted three methods: a signal processing method that allows a lower discrimination level, a high-sensitivity pinhole collimator, and a smoothing image filter that improves the efficiency of source identification. We tested the basic performance of the developed gamma camera and carefully examined the effects of the three methods. The sensitivity test showed that the effective sensitivity was about 21 times higher than that of the gamma camera we had previously developed for high dose rate fields. We confirmed that the gamma camera had sufficient sensitivity and high quantitative accuracy; for example, a weak hot spot (0.9 μSv/h) around a tree root could be detected within 45 min in a low dose rate field test, and errors of measured dose rates with point sources were less than 7% in a dose rate accuracy test.
Phommasone, Koukeo; Althaus, Thomas; Souvanthong, Phonesavanh; Phakhounthong, Khansoudaphone; Soyvienvong, Laxoy; Malapheth, Phatthaphone; Mayxay, Mayfong; Pavlicek, Rebecca L; Paris, Daniel H; Dance, David; Newton, Paul; Lubell, Yoel
2016-02-04
C-Reactive Protein (CRP) has been shown to be an accurate biomarker for discriminating bacterial from viral infections in febrile patients in Southeast Asia. Here we investigate the accuracy of existing rapid qualitative and semi-quantitative tests compared with a quantitative reference test to assess their potential for use in remote tropical settings. Blood samples were obtained from consecutive patients recruited to a prospective fever study at three sites in rural Laos. At each site, one of three rapid qualitative or semi-quantitative tests was performed, as well as the quantitative NycoCard Reader II as a reference test. We estimated the sensitivity and specificity of the three tests against a threshold of 10 mg/L, and kappa values for the agreement of the two semi-quantitative tests with the results of the reference test. All three tests showed high sensitivity, specificity and kappa values compared with the NycoCard Reader II. With a threshold of 10 mg/L, the sensitivity of the tests ranged from 87-98% and the specificity from 91-98%. The weighted kappa values for the semi-quantitative tests were 0.7 and 0.8. The use of CRP rapid tests could offer an inexpensive and effective approach to improve the targeting of antibiotics in remote settings where health facilities are basic and laboratories are absent. This study demonstrates that accurate CRP rapid tests are commercially available; evaluations of their clinical impact and cost-effectiveness at the point of care are warranted.
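A minimal sketch of the evaluation described above: dichotomize the quantitative reference at 10 mg/L and compute the sensitivity and specificity of a rapid test. The data are invented; for the semi-quantitative tests, a weighted kappa (for example, sklearn.metrics.cohen_kappa_score with weights='linear') would be used instead.

```python
import numpy as np

def binary_metrics(rapid_positive, reference_mg_l, threshold=10.0):
    """Sensitivity/specificity of a rapid CRP test against a quantitative
    reference dichotomized at `threshold` (mg/L)."""
    truth = np.asarray(reference_mg_l) >= threshold
    test = np.asarray(rapid_positive, dtype=bool)
    tp = np.sum(test & truth)
    tn = np.sum(~test & ~truth)
    fn = np.sum(~test & truth)
    fp = np.sum(test & ~truth)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative data only.
sens, spec = binary_metrics([True, True, False, False, True],
                            [35.0, 12.0, 4.0, 8.0, 9.0])
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```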
Development and assessment of a hand assist device: GRIPIT.
Kim, Byungchul; In, Hyunki; Lee, Dae-Young; Cho, Kyu-Jin
2017-02-21
Although various hand assist devices have been commercialized for people with paralysis, they are somewhat limited in terms of tool fixation and device attachment method. Hand exoskeleton robots allow users to grasp a wider range of tools but are heavy, complicated, and bulky owing to their numerous actuators and controllers. The GRIPIT hand assist device overcomes the limitations of both conventional devices and exoskeleton robots by providing improved tool fixation and device attachment in a lightweight and compact device. GRIPIT was designed to assist the tripod grasp in people with spinal cord injury, because this grasp posture is frequently used in school and office settings for activities such as writing and grasping small objects. The main development objective of GRIPIT is to help users grasp tools with their own hand using a lightweight, compact assistive device that is manually operated via a single wire. GRIPIT consists of only a glove, a wire, and a small structure that maintains tendon tension to permit a stable grasp. The tendon routing points are designed to apply force to the thumb, index finger, and middle finger to form a tripod grasp. A tension-maintenance structure sustains the grasp posture with appropriate tension. Following device development, four people with spinal cord injury were recruited to compare the writing performance of GRIPIT with that of a conventional penholder and bare-hand writing. Writing was chosen as the assessment task because it requires a tripod grasp, one of the main performance objectives of GRIPIT. A new assessment comprising six different writing tasks was devised to measure writing ability from various viewpoints using both qualitative and quantitative methods, whereas most conventional assessments rely only on qualitative methods or simple timing measures. Appearance, portability, difficulty of wearing, difficulty of grasping, writing sensation, fatigability, and legibility were measured to assess qualitative performance while writing various words and sentences. Results showed that GRIPIT is relatively complicated to wear and use compared with a conventional assist device but has advantages in writing sensation, fatigability, and legibility because it affords sufficient grasp force during writing. Two quantitative performance factors were assessed: accuracy of writing and solidity of writing. To assess accuracy of writing, subjects were asked to draw various figures under given conditions. To assess solidity of writing, pen tip force and the angle variation of the pen were measured. Quantitative evaluation showed that GRIPIT helps users write accurately, without pen shake, even when high force is applied to the pen. Qualitative and quantitative results were better when subjects used GRIPIT than when they used the conventional penholder, mainly because GRIPIT allowed them to exert a higher grasp force. Grasp force matters because people with paralysis cannot control their fingers and must move the entire arm to write, whereas non-disabled people need only move their fingers. The tension-maintenance structure developed for GRIPIT provides appropriate grasp force and moment balance on the user's hand, whereas the other writing methods either merely fix the pen by friction or require the user's arm to generate the grasp force.
Wykrzykowska, Joanna J.; Arbab-Zadeh, Armin; Godoy, Gustavo; Miller, Julie M.; Lin, Shezhang; Vavere, Andrea; Paul, Narinder; Niinuma, Hiroyuki; Hoe, John; Brinker, Jeffrey; Khosa, Faisal; Sarwar, Sheryar; Lima, Joao; Clouse, Melvin E.
2012-01-01
OBJECTIVE Evaluations of stents by MDCT from studies performed at single centers have yielded variable results with a high proportion of unassessable stents. The purpose of this study was to evaluate the accuracy of 64-MDCT angiography (MDCTA) in identifying in-stent restenosis in a multicenter trial. MATERIALS AND METHODS The Coronary Evaluation Using Multidetector Spiral Computed Tomography Angiography Using 64 Detectors (CORE-64) Multicenter Trial and Registry evaluated the accuracy of 64-MDCTA in assessing 405 patients referred for coronary angiography. A total of 75 stents in 52 patients were assessed: 48 of 75 stents (64%) in 36 of 52 patients (69%) could be evaluated. The prevalence of in-stent restenosis by quantitative coronary angiography (QCA) in this subgroup was 23% (17/75). Eighty percent of the stents were ≤ 3.0 mm in diameter. RESULTS The overall sensitivity, specificity, positive predictive value, and negative predictive value to detect 50% in-stent stenosis visually using MDCT compared with QCA was 33.3%, 91.7%, 57.1%, and 80.5%, respectively, with an overall accuracy of 77.1% for the 48 assessable stents. The ability to evaluate stents on MDCTA varied by stent type: Thick-strut stents such as Bx Velocity were assessable in 50% of the cases; Cypher, 62.5% of the cases; and thinner-strut stents such as Taxus, 75% of the cases. We performed quantitative assessment of in-stent contrast attenuation in Hounsfield units and correlated that value with the quantitative percentage of stenosis by QCA. The correlation coefficient between the average attenuation decrease and ≥ 50% stenosis by QCA was 0.25 (p = 0.073). Quantitative assessment failed to improve the accuracy of MDCT over qualitative assessment. CONCLUSION The results of our study showed that 64-MDCT has poor ability to detect in-stent restenosis in small-diameter stents. Evaluability and negative predictive value were better in large-diameter stents. Thus, 64-MDCT may be appropriate for stent assessment in only selected patients. PMID:20028909
Implications of Polishing Techniques in Quantitative X-Ray Microanalysis
Rémond, Guy; Nockolds, Clive; Phillips, Matthew; Roques-Carmes, Claude
2002-01-01
Specimen preparation using abrasives results in surface and subsurface mechanical (stresses, strains), geometrical (roughness), chemical (contaminants, reaction products) and physical modifications (structure, texture, lattice defects). The mechanisms involved in polishing with abrasives are presented to illustrate the effects of surface topography, surface and subsurface composition and induced lattice defects on the accuracy of quantitative x-ray microanalysis of mineral materials with the electron probe microanalyzer (EPMA). PMID:27446758
Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan
2015-06-01
Spectroscopy is often applied when rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise from non-ideal instrument and sample properties. To improve quantitative analysis of near-infrared spectra, the derivatives of noisy raw spectral data must be estimated with high accuracy. A new spectral estimator based on the singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and a stability analysis of the estimator is given. Theoretical analysis and simulation results confirm that derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated in an analysis of beer spectra. The derivative spectra of the beer and marzipan datasets are used to build calibration models using partial least squares (PLS) modeling. The results show that PLS based on the new estimator achieves better performance than the Savitzky-Golay algorithm and can serve as an alternative for quantitative analytical applications.
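For context, the Savitzky-Golay baseline that the SPSE is compared against can be computed directly with SciPy; the sketch below estimates a noise-suppressed first derivative of a synthetic two-band spectrum. The SPSE itself is not reproduced here.

```python
import numpy as np
from scipy.signal import savgol_filter

# Noisy synthetic "spectrum": two overlapping Gaussian bands.
x = np.linspace(0, 100, 1000)
spectrum = (np.exp(-((x - 40) / 6) ** 2)
            + 0.8 * np.exp(-((x - 55) / 6) ** 2)
            + np.random.default_rng(1).normal(0, 0.01, x.size))

# Savitzky-Golay first derivative: fit a local cubic in a 21-point window
# and differentiate the fit, which limits noise amplification compared
# with naive finite differences.
d1 = savgol_filter(spectrum, window_length=21, polyorder=3,
                   deriv=1, delta=x[1] - x[0])
print(d1[:3])
```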
Quantification of EEG reactivity in comatose patients.
Hermans, Mathilde C; Westover, M Brandon; van Putten, Michel J A M; Hirsch, Lawrence J; Gaspard, Nicolas
2016-01-01
EEG reactivity is an important predictor of outcome in comatose patients. However, visual analysis of reactivity is prone to subjectivity and may benefit from quantitative approaches. In EEG segments recorded during reactivity testing in 59 comatose patients, 13 quantitative EEG parameters were used to compare the spectral characteristics of 1-minute segments before and after the onset of stimulation (spectral temporal symmetry). Reactivity was quantified with probability values estimated using combinations of these parameters. The accuracy of the probability values as a reactivity classifier was evaluated against the consensus assessment of three expert clinical electroencephalographers using visual analysis. The binary classifier assessing spectral temporal symmetry in four frequency bands (delta, theta, alpha and beta) showed the best accuracy (median AUC: 0.95) and substantial agreement with the individual opinions of the experts (Gwet's AC1: 65-70%), at least as good as inter-expert agreement (AC1: 55%). The probability values also reflected the degree of reactivity, as measured by the inter-expert agreement regarding reactivity for each individual case. Automated quantitative EEG approaches based on a probabilistic description of spectral temporal symmetry reliably quantify EEG reactivity. Quantitative EEG may be useful for evaluating reactivity in comatose patients, offering increased objectivity. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
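A minimal sketch of the spectral temporal symmetry idea, assuming 1-minute segments and a Welch power spectral density: per-band log power ratios between pre- and post-stimulation segments quantify the spectral change. The sampling rate, band edges, and data below are assumptions, and the paper's probabilistic combination of 13 parameters is not reproduced.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate (Hz), assumed
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(segment):
    """Mean spectral power of a 1-minute EEG segment in four bands."""
    freqs, psd = welch(segment, fs=FS, nperseg=4 * FS)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

rng = np.random.default_rng(2)
pre = rng.normal(size=60 * FS)    # stand-in pre-stimulation segment
post = rng.normal(size=60 * FS)   # stand-in post-stimulation segment

# Per-band log power ratio: values far from 0 indicate a spectral change
# (temporal asymmetry) after stimulation onset, i.e. a reactive EEG.
pre_p, post_p = band_powers(pre), band_powers(post)
for band in BANDS:
    print(band, np.log(post_p[band] / pre_p[band]))
```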
de Certaines, J D; Henriksen, O; Spisni, A; Cortsen, M; Ring, P B
1993-01-01
Quantitative magnetic resonance imaging may offer unique potential for tissue characterization in vivo. In this connection, texture analysis of quantitative MR images may be of special importance. Because evaluation of texture analysis requires a large body of data, multicenter approaches become mandatory. Within the framework of the BME Concerted Action on Tissue Characterization by MRI and MRS, a pilot multicenter study was launched in order to evaluate the technical problems involved, including the comparability of relaxation time measurements carried out at the individual sites. Human brain, skeletal muscle, and liver were used as models. A total of 218 healthy volunteers were studied. Fifteen MRI scanners with field strengths ranging from 0.08 T to 1.5 T were included. Measurement accuracy was tested on the Eurospin relaxation time test object (TO5) and the resulting calibration curve was used to correct the in vivo data. The results established that, by following a standardized procedure, comparable quantitative measurements can be obtained in vivo from a number of MR sites. The overall coefficient of variation in vivo was of the same order of magnitude as for ex vivo relaxometry. Thus, it is possible to carry out international multicenter studies on quantitative imaging, provided that quality control with respect to measurement accuracy and calibration of the MR equipment is performed.
Calibration methods influence quantitative material decomposition in photon-counting spectral CT
NASA Astrophysics Data System (ADS)
Curtis, Tyler E.; Roeder, Ryan K.
2017-03-01
Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material basis matrix values were determined using multiple linear regression models and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. The overall results demonstrated that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.
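A minimal sketch of the calibration step, assuming invented photon counts for phantom vials of known concentration: a least-squares linear model maps bin-wise counts to concentration and is then applied to a new measurement. The paper's maximum a posteriori decomposition is not reproduced.

```python
import numpy as np

# Hypothetical calibration: normalized counts in three energy bins (cols)
# for five phantom vials (rows) of known contrast-agent concentration.
concentrations = np.array([0.0, 2.0, 4.0, 8.0, 16.0])  # mg/mL
counts = np.array([
    [1.00, 0.90, 0.80],
    [0.95, 0.84, 0.72],
    [0.88, 0.74, 0.61],
    [0.75, 0.58, 0.43],
    [0.52, 0.33, 0.20],
])

# Least-squares fit of counts -> concentration (a linear stand-in for the
# "multiple linear regression" calibration named in the abstract).
A = np.column_stack([counts, np.ones(len(concentrations))])
coef, *_ = np.linalg.lstsq(A, concentrations, rcond=None)

# Decompose a new measurement using the fitted calibration.
new_counts = np.array([0.93, 0.80, 0.68])
estimate = new_counts @ coef[:-1] + coef[-1]
print(f"estimated concentration: {estimate:.2f} mg/mL")
```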
Sample and data processing considerations for the NIST quantitative infrared database
NASA Astrophysics Data System (ADS)
Chu, Pamela M.; Guenther, Franklin R.; Rhoderick, George C.; Lafferty, Walter J.; Phillips, William
1999-02-01
Fourier-transform infrared (FT-IR) spectrometry has become a useful real-time in situ analytical technique for quantitative gas phase measurements. In fact, the U.S. Environmental Protection Agency (EPA) has recently approved open-path FT-IR monitoring for the determination of hazardous air pollutants (HAP) identified in EPA's Clean Air Act of 1990. To support infrared based sensing technologies, the National Institute of Standards and Technology (NIST) is currently developing a standard quantitative spectral database of the HAPs based on gravimetrically prepared standard samples. The procedures developed to ensure the quantitative accuracy of the reference data are discussed, including sample preparation, residual sample contaminants, data processing considerations, and estimates of error.
NASA Astrophysics Data System (ADS)
Xu, Jing; Wang, Yu-Tian; Liu, Xiao-Fei
2015-04-01
Edible blend oil is a mixture of vegetable oils. A qualified blend oil can meet the daily human need for the two essential fatty acids and thereby achieve balanced nutrition. Each vegetable oil has a different composition, so the vegetable oil contents of an edible blend oil determine its nutritional components. A high-precision quantitative method for determining the vegetable oil contents of blend oil is therefore necessary to ensure balanced nutrition. Three-dimensional fluorescence spectroscopy offers high selectivity, high sensitivity, and high efficiency. Efficient extraction and full use of the information in three-dimensional fluorescence spectra will improve measurement accuracy. A novel quantitative analysis based on Quasi-Monte-Carlo integration is proposed to improve measurement sensitivity and reduce random error. The partial least squares method is used to solve the nonlinear equations and avoid the effects of multicollinearity. The recovery rates of blend oils mixed from peanut, soybean, and sunflower oils were calculated to verify the accuracy of the method; they were higher than those of the linear method commonly used for component concentration measurement.
Choi, Soojin; Kim, Dongyoung; Yang, Junho; Yoh, Jack J
2017-04-01
Quantitative Raman analysis was carried out with geologically mixed samples having various matrices. In order to compensate for the matrix effect in the Raman shift, laser-induced breakdown spectroscopy (LIBS) analysis was performed. Raman spectroscopy revealed the geological materials contained in the mixed samples. However, the analysis of a mixture containing different matrices was inaccurate owing to the weak Raman signal, interference, and the strong matrix effect. On the other hand, the LIBS quantitative analysis of atomic carbon and calcium in the mixed samples showed high accuracy. In the case of the calcite and gypsum mixture, the coefficient of determination for atomic carbon using LIBS was 0.99, while that using Raman was less than 0.9. Therefore, the geological composition of the mixed samples is first obtained using Raman, and LIBS-based quantitative analysis is then applied to the Raman outcome in order to construct highly accurate univariate calibration curves. The study also focuses on a method to overcome matrix effects through the two complementary spectroscopic techniques of Raman spectroscopy and LIBS.
Kockmann, Tobias; Trachsel, Christian; Panse, Christian; Wahlander, Asa; Selevsek, Nathalie; Grossmann, Jonas; Wolski, Witold E; Schlapbach, Ralph
2016-08-01
Quantitative mass spectrometry is a rapidly evolving methodology applied in a large number of omics-type research projects. During the past years, new designs of mass spectrometers have been developed and launched as commercial systems while in parallel new data acquisition schemes and data analysis paradigms have been introduced. Core facilities provide access to such technologies, but also actively support the researchers in finding and applying the best-suited analytical approach. In order to implement a solid fundament for this decision making process, core facilities need to constantly compare and benchmark the various approaches. In this article we compare the quantitative accuracy and precision of current state of the art targeted proteomics approaches single reaction monitoring (SRM), parallel reaction monitoring (PRM) and data independent acquisition (DIA) across multiple liquid chromatography mass spectrometry (LC-MS) platforms, using a readily available commercial standard sample. All workflows are able to reproducibly generate accurate quantitative data. However, SRM and PRM workflows show higher accuracy and precision compared to DIA approaches, especially when analyzing low concentrated analytes. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Analogous on-axis interference topographic phase microscopy (AOITPM).
Xiu, P; Liu, Q; Zhou, X; Xu, Y; Kuang, C; Liu, X
2018-05-01
The refractive index (RI) of a sample as an endogenous contrast agent plays an important role in transparent live cell imaging. In tomographic phase microscopy (TPM), 3D quantitative RI maps can be reconstructed based on the measured projections of the RI in multiple directions. The resolution of the RI maps not only depends on the numerical aperture of the employed objective lens, but also is determined by the accuracy of the quantitative phase of the sample measured at multiple scanning illumination angles. This paper reports an analogous on-axis interference TPM, where the interference angle between the sample and reference beams is kept constant for projections in multiple directions to improve the accuracy of the phase maps and the resolution of RI tomograms. The system has been validated with both silica beads and red blood cells. Compared with conventional TPM, the proposed system acquires quantitative RI maps with higher resolution (420 nm @λ = 633 nm) and signal-to-noise ratio that can be beneficial for live cell imaging in biomedical applications. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.
Barousse, Rafael; Socolovsky, Mariano; Luna, Antonio
2017-01-01
Traumatic conditions of peripheral nerves and plexus have been classically evaluated by morphological imaging techniques and electrophysiological tests. New magnetic resonance imaging (MRI) studies based on 3D fat-suppressed techniques are providing high accuracy for peripheral nerve injury evaluation from a qualitative point of view. However, these techniques do not provide quantitative information. Diffusion weighted imaging (DWI) and diffusion tensor imaging (DTI) are functional MRI techniques that are able to evaluate and quantify the movement of water molecules within different biological structures. These techniques have been successfully applied in other anatomical areas, especially in the assessment of central nervous system, and now are being imported, with promising results for peripheral nerve and plexus evaluation. DWI and DTI allow performing a qualitative and quantitative peripheral nerve analysis, providing valuable pathophysiological information about functional integrity of these structures. In the field of trauma and peripheral nerve or plexus injury, several derived parameters from DWI and DTI studies such as apparent diffusion coefficient (ADC) or fractional anisotropy (FA) among others, can be used as potential biomarkers of neural damage providing information about fiber organization, axonal flow or myelin integrity. A proper knowledge of physical basis of these techniques and their limitations is important for an optimal interpretation of the imaging findings and derived data. In this paper, a comprehensive review of the potential applications of DWI and DTI neurographic studies is performed with a focus on traumatic conditions, including main nerve entrapment syndromes in both peripheral nerves and brachial or lumbar plexus. PMID:28932698
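The two quantitative parameters named above have closed-form expressions in terms of the diffusion tensor eigenvalues; the sketch below computes mean diffusivity (the tensor-derived analogue of ADC) and fractional anisotropy from illustrative eigenvalues.

```python
import numpy as np

def dti_scalars(eigenvalues):
    """Mean diffusivity (MD/ADC) and fractional anisotropy (FA) from the
    three eigenvalues of a diffusion tensor (units: mm^2/s)."""
    l1, l2, l3 = eigenvalues
    md = (l1 + l2 + l3) / 3.0
    fa = np.sqrt(0.5 * ((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return md, fa

# Illustrative eigenvalues for a well-organized, strongly anisotropic
# nerve fascicle (values assumed for the example).
md, fa = dti_scalars((1.7e-3, 0.4e-3, 0.3e-3))
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")
```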
LeBlanc, André; Shiao, Tze Chieh; Roy, René; Sleno, Lekha
2014-09-15
Acetaminophen is known to cause hepatotoxicity via the formation of a reactive metabolite, N-acetyl-p-benzoquinone imine (NAPQI), which binds covalently to liver proteins. Serum albumin (SA) is known to be covalently modified by NAPQI and, being present at high concentration in the bloodstream, is a potential biomarker for the level of protein modification by NAPQI. A newly developed method for the absolute quantitation of serum albumin carrying NAPQI covalently bound to its active-site cysteine (Cys34) is described. This optimized assay represents the first absolute quantitation of a modified protein of very low stoichiometric abundance using a protein-level standard combined with isotope dilution. The LC-MS/MS assay is based on a protein standard modified with a custom-designed reagent, yielding a surrogate peptide (following digestion) that is a positional isomer of the target peptide modified by NAPQI. To illustrate the potential of this approach, the method was applied to quantify NAPQI-modified SA in plasma from rats dosed with acetaminophen. The resulting method is highly sensitive (capable of quantifying down to 0.0006% of total RSA in its NAPQI-modified form) and yields excellent precision and accuracy statistics. A time-course pharmacokinetic study was performed to test the usefulness of this method for following acetaminophen-induced covalent binding at four dosing levels (75-600 mg/kg IP), showing the viability of this approach for directly monitoring in vivo samples. This approach can reliably quantify NAPQI-modified albumin, allowing direct monitoring of acetaminophen-related covalent binding.
UHPLC determination of catechins for the quality control of green tea.
Naldi, Marina; Fiori, Jessica; Gotti, Roberto; Périat, Aurélie; Veuthey, Jean-Luc; Guillarme, Davy; Andrisano, Vincenza
2014-01-01
An ultra-high performance liquid chromatography (UHPLC) method with UV detection was developed for the fast quantitation of the most abundant and biologically important green tea catechins and caffeine. The UHPLC system was equipped with a C18 analytical column (50 mm × 2.1 mm, 1.8 μm), utilizing a mobile phase composed of pH 2.5 triethanolamine phosphate buffer (0.1 M) and acetonitrile in gradient elution mode; under these conditions, the six major catechins and caffeine were separated in a 3 min run. The method was fully validated in terms of precision, detection and quantification limits, linearity, and accuracy, and it was applied to the identification and quantification of catechins and caffeine in green tea infusions. In particular, commercially available green tea leaf samples of different geographical origin (Sencha, Ceylon Green and Lung Ching) were used for infusion preparations (water at 85 °C for 15 min). The selectivity of the developed UHPLC method was confirmed by comparison with UHPLC-MS/MS analysis. The recovery of the six main catechins and caffeine in the three analyzed commercial tea samples ranged from 94 to 108% (n=3). Limits of detection (LOD) were in the range 0.1-0.4 μg mL(-1). An orthogonal micellar electrokinetic chromatography (MEKC) method was applied for comparative purposes regarding selectivity and quantitative data. The combined use of the results obtained by the two techniques allowed fast confirmation of the quantitative characterization of commercial samples. Copyright © 2013 Elsevier B.V. All rights reserved.
Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P
2018-01-01
Background Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test’s performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display accuracy of diagnostic tests. Methods and findings We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Cholrapid and Cholgold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). Conclusion No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test performance against a reference gold standard. PMID:29387424
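The paper's central point, that categorical accuracy depends on the underlying sample distribution even when the measurement relationship is fixed, is easy to reproduce in simulation. The sketch below uses hypothetical Cholrapid/Cholgold values with a fixed noise model and a 200 mg/dL cutoff; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def accuracy(chol_gold, noise_sd=10.0, cutoff=200.0):
    """Categorical agreement between a noisy rapid assay and the gold
    standard when both are dichotomized at `cutoff` (mg/dL)."""
    chol_rapid = chol_gold + rng.normal(0, noise_sd, chol_gold.size)
    return np.mean((chol_rapid >= cutoff) == (chol_gold >= cutoff))

# Identical measurement relationship, different sample distributions:
near_cutoff = rng.normal(200, 15, 10_000)  # values clustered at the cutoff
spread_out = rng.normal(200, 80, 10_000)   # values far from the cutoff

print("accuracy, clustered sample:", accuracy(near_cutoff))
print("accuracy, dispersed sample:", accuracy(spread_out))
```

The dispersed sample yields the higher accuracy even though the two assays' numerical relationship never changed, which is exactly why a single accuracy figure cannot characterize a test.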
Lunardini, Francesca; Bertucco, Matteo; Casellato, Claudia; Bhanpuri, Nasir; Pedrocchi, Alessandra; Sanger, Terence D.
2015-01-01
Motor speed and accuracy are both affected in childhood dystonia. Thus, deriving a speed-accuracy function is an important metric for assessing motor impairments in dystonia. Previous work in dystonia studied the speed-accuracy trade-off during point-to-point tasks. To achieve a more relevant measurement of functional abilities in dystonia, the present study investigates upper-limb kinematics and electromyographic activity of 8 children with dystonia and 8 healthy children during a trajectory-constrained child-relevant task that emulates self-feeding with a spoon and requires continuous monitoring of accuracy. The speed-accuracy trade-off is examined by changing the spoon size to create different accuracy demands. Results demonstrate that the trajectory-constrained speed-accuracy relation is present in both groups, but it is altered in dystonia in terms of increased slope and offset towards longer movement times. Findings are consistent with the hypothesis of increased signal-dependent noise in dystonia, which may partially explain the slow and variable movements observed in dystonia. PMID:25895910
Scatter characterization and correction for simultaneous multiple small-animal PET imaging.
Prasad, Rameshwar; Zaidi, Habib
2014-04-01
The rapid growth and usage of small-animal positron emission tomography (PET) in molecular imaging research has led to increased demand on PET scanner time. One potential solution to increase throughput is to scan multiple rodents simultaneously. However, this is achieved at the expense of deteriorated image quality and loss of quantitative accuracy owing to enhanced effects of photon attenuation and Compton scattering. The purpose of this work is, first, to characterize the magnitude and spatial distribution of the scatter component in small-animal PET imaging when scanning single and multiple rodents simultaneously and, second, to assess the relevance and evaluate the performance of scatter correction under similar conditions. The LabPET™-8 scanner was modelled as realistically as possible using the Geant4 Application for Tomographic Emission (GATE) Monte Carlo simulation platform. Monte Carlo simulations allow the separation of unscattered and scattered coincidences and as such enable detailed assessment of the scatter component and its origin. Simple shape-based and more realistic voxel-based phantoms were used to simulate single and multiple PET imaging studies. The modelled scatter component using the single-scatter simulation technique was compared to Monte Carlo simulation results. PET images were also corrected for attenuation, and the combined effect of attenuation and scatter on single and multiple small-animal PET imaging was evaluated in terms of image quality and quantitative accuracy. A good agreement was observed between calculated and Monte Carlo simulated scatter profiles for single- and multiple-subject imaging. In the LabPET™-8 scanner, the detector covering material (kovar) contributed the maximum amount of scatter events while the scatter contribution due to lead shielding is negligible. The out-of-field-of-view (FOV) scatter fraction (SF) is 1.70, 0.76, and 0.11% for lower energy thresholds of 250, 350, and 400 keV, respectively. The increase in SF ranged between 25 and 64% when imaging multiple subjects (three to five) of different size simultaneously in comparison to imaging a single subject. The spill-over ratio (SOR) increases with the number of subjects in the FOV. Scatter correction improved the SOR for both water and air cold compartments of single and multiple imaging studies. The recovery coefficients for different body parts of the mouse whole-body and rat whole-body anatomical models were improved for multiple imaging studies following scatter correction. The magnitude and spatial distribution of the scatter component in small-animal PET imaging of single and multiple subjects simultaneously were characterized, and its impact was evaluated in different situations. Scatter correction improves PET image quality and quantitative accuracy for single rat and simultaneous multiple mice and rat imaging studies, whereas its impact is insignificant in single mouse imaging.
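The headline scatter metric is straightforward to compute from Monte Carlo tallies, since the simulation labels each coincidence as scattered or unscattered. The sketch below uses invented counts, not the LabPET™-8 results.

```python
# Scatter fraction and its growth with multiple subjects, computed from
# Monte Carlo coincidence tallies (illustrative numbers only).
def scatter_fraction(scattered, unscattered):
    """SF = S / (S + T): fraction of prompt true coincidences that
    underwent at least one Compton scatter."""
    return scattered / (scattered + unscattered)

single = scatter_fraction(scattered=1.2e6, unscattered=6.0e6)
triple = scatter_fraction(scattered=2.4e6, unscattered=7.5e6)
print(f"single subject SF: {single:.1%}")
print(f"three subjects SF: {triple:.1%}")
print(f"relative increase: {(triple / single - 1):.0%}")
```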
Torrado, G; García-Arieta, A; de los Ríos, F; Menéndez, J C; Torrado, S
1999-03-01
Fourier transform infrared (FTIR) spectroscopy and antifoaming activity test have been employed for the quantitative analysis of dimethicone. Linearity, accuracy and precision are presented for both methods. These methods have been also used to compare different dimethicone-containing proprietary medicines. FTIR spectroscopy has shown to be adequate for quantitation of dimethicone in commercial tablets and capsules in order to comply with USP requirements. The antifoaming activity test is able to detect incompatibilities between dimethicone and other constituents. The presence of certain enzymes in some medicinal products increases the defoaming properties of these formulations.
Precision and Accuracy Parameters in Structured Light 3-D Scanning
NASA Astrophysics Data System (ADS)
Eiríksson, E. R.; Wilm, J.; Pedersen, D. B.; Aanæs, H.
2016-04-01
Structured light systems are popular in part because they can be constructed from off-the-shelf low cost components. In this paper we quantitatively show how common design parameters affect precision and accuracy in such systems, supplying a much needed guide for practitioners. Our quantitative measure is the established VDI/VDE 2634 (Part 2) guideline using precision made calibration artifacts. Experiments are performed on our own structured light setup, consisting of two cameras and a projector. We place our focus on the influence of calibration design parameters, the calibration procedure and encoding strategy and present our findings. Finally, we compare our setup to a state of the art metrology grade commercial scanner. Our results show that comparable, and in some cases better, results can be obtained using the parameter settings determined in this study.
[The water content reference material of water saturated octanol].
Wang, Haifeng; Ma, Kang; Zhang, Wei; Li, Zhanyuan
2011-03-01
National standards for biofuels specify technical specifications and analytical methods. A certified water content reference material based on water-saturated octanol was developed in order to satisfy the needs of instrument calibration and method validation, and to assure the accuracy and consistency of water content measurements of biofuels. Three analytical methods based on different principles were employed to certify the water content of the reference material: Karl Fischer coulometric titration, Karl Fischer volumetric titration, and quantitative nuclear magnetic resonance. Consistency between coulometric and volumetric titration was achieved through improvement of the methods. The accuracy of the certified result was further improved by the introduction of quantitative nuclear magnetic resonance as a new method. The certified value of the reference material is 4.76%, with an expanded uncertainty of 0.09%.
Determining fluoride ions in ammonium desulfurization slurry using an ion selective electrode method
NASA Astrophysics Data System (ADS)
Luo, Zhengwei; Guo, Mulin; Chen, Huihui; Lian, Zhouyang; Wei, Wuji
2018-02-01
The determination of fluoride ions in ammonia desulphurization slurry using a fluoride ion selective electrode (ISE) is investigated. The influence of pH was studied, and the appropriate total ionic strength adjustment buffer and its dosage were optimized. The impact of Fe3+ concentration on the detection results was analyzed under the preferred conditions, and an error analysis of the ISE method's accuracy and precision for measuring fluoride ion concentrations in the range of 0.5-2000 mg/L was conducted. The quantitative recovery of F- in ammonium sulfate slurry was assessed. The results showed that when the pH ranged from 5.5 to 6 and the Fe3+ concentration was less than 750 mg/L, accurate and precise results were obtained, with quantitative recovery rates of 92.0%-104.2%.
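Fluoride ISE measurements rest on a Nernstian calibration: the electrode potential is linear in the logarithm of fluoride activity, with an ideal slope near -59 mV per decade at 25 °C. The sketch below fits that line to invented standards and inverts it for an unknown; all values are illustrative.

```python
import numpy as np

# Fluoride ISE calibration standards (concentrations and potentials are
# invented but roughly Nernstian, ~-59 mV/decade at 25 C).
std_conc = np.array([0.5, 5.0, 50.0, 500.0, 2000.0])   # mg/L
std_emf = np.array([152.0, 94.0, 36.0, -23.0, -58.0])  # mV

# Fit E = slope * log10(C) + intercept.
slope, intercept = np.polyfit(np.log10(std_conc), std_emf, 1)

def fluoride_mg_l(emf_mv):
    """Invert the calibration line to read F- concentration from EMF."""
    return 10 ** ((emf_mv - intercept) / slope)

print(f"slope {slope:.1f} mV/decade")
print(f"sample at 20 mV -> {fluoride_mg_l(20.0):.0f} mg/L F-")
```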
Response moderation models for conditional dependence between response time and response accuracy.
Bolsinova, Maria; Tijmstra, Jesper; Molenaar, Dylan
2017-05-01
It is becoming more feasible and common to register response times in the application of psychometric tests. Researchers thus have the opportunity to jointly model response accuracy and response time, which provides users with more relevant information. The most common choice is to use the hierarchical model (van der Linden, 2007, Psychometrika, 72, 287), which assumes conditional independence between response time and accuracy, given a person's speed and ability. However, this assumption may be violated in practice if, for example, persons vary their speed or differ in their response strategies, leading to conditional dependence between response time and accuracy and confounding measurement. We propose six nested hierarchical models for response time and accuracy that allow for conditional dependence, and discuss their relationship to existing models. Unlike existing approaches, the proposed hierarchical models allow for various forms of conditional dependence in the model and allow the effect of continuous residual response time on response accuracy to be item-specific, person-specific, or both. Estimation procedures for the models are proposed, as well as two information criteria that can be used for model selection. Parameter recovery and usefulness of the information criteria are investigated using simulation, indicating that the procedure works well and is likely to select the appropriate model. Two empirical applications are discussed to illustrate the different types of conditional dependence that may occur in practice and how these can be captured using the proposed hierarchical models. © 2016 The British Psychological Society.
2010-01-01
Background The challenge today is to develop a modeling and simulation paradigm that integrates structural, molecular and genetic data for a quantitative understanding of physiology and behavior of biological processes at multiple scales. This modeling method requires techniques that maintain a reasonable accuracy of the biological process and also reduces the computational overhead. This objective motivates the use of new methods that can transform the problem from energy and affinity based modeling to information theory based modeling. To achieve this, we transform all dynamics within the cell into a random event time, which is specified through an information domain measure like probability distribution. This allows us to use the “in silico” stochastic event based modeling approach to find the molecular dynamics of the system. Results In this paper, we present the discrete event simulation concept using the example of the signal transduction cascade triggered by extra-cellular Mg2+ concentration in the two component PhoPQ regulatory system of Salmonella Typhimurium. We also present a model to compute the information domain measure of the molecular transport process by estimating the statistical parameters of inter-arrival time between molecules/ions coming to a cell receptor as external signal. This model transforms the diffusion process into the information theory measure of stochastic event completion time to get the distribution of the Mg2+ departure events. Using these molecular transport models, we next study the in-silico effects of this external trigger on the PhoPQ system. Conclusions Our results illustrate the accuracy of the proposed diffusion models in explaining the molecular/ionic transport processes inside the cell. Also, the proposed simulation framework can incorporate the stochasticity in cellular environments to a certain degree of accuracy. We expect that this scalable simulation platform will be able to model more complex biological systems with reasonable accuracy to understand their temporal dynamics. PMID:21143785
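A minimal sketch of the event-based view, assuming exponentially distributed inter-arrival times (a Poisson arrival stream) for ligand ions reaching a receptor; the paper instead estimates the inter-arrival distribution from a transport model, so the rate below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Discrete-event sketch: Mg2+ arrivals at a cell receptor are modelled
# directly by their random inter-arrival times instead of simulating
# diffusion trajectories. Exponential inter-arrival times are one simple
# information-domain choice for the event-time distribution.
ARRIVAL_RATE = 5.0  # mean arrivals per second (assumed)
T_END = 10.0        # simulated seconds

t, events = 0.0, []
while True:
    t += rng.exponential(1.0 / ARRIVAL_RATE)  # next random event time
    if t > T_END:
        break
    events.append(t)

print(f"{len(events)} arrival events in {T_END:.0f} s; "
      f"mean inter-arrival {np.diff(events).mean():.3f} s")
```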
Grünewälder, Steffen; Broekhuis, Femke; Macdonald, David Whyte; Wilson, Alan Martin; McNutt, John Weldon; Shawe-Taylor, John; Hailes, Stephen
2012-01-01
We propose a new method, based on machine learning techniques, for the analysis of a combination of continuous data from dataloggers and a sampling of contemporaneous behaviour observations. This data combination provides an opportunity for biologists to study behaviour at a previously unknown level of detail and accuracy; however, continuously recorded data are of little use unless the resulting large volumes of raw data can be reliably translated into actual behaviour. We address this problem by applying a Support Vector Machine and a Hidden Markov Model that allow us to classify an animal's behaviour using a small set of field observations to calibrate continuously recorded activity data. Such classified data can be applied quantitatively to the behaviour of animals over extended periods and at times during which observation is difficult or impossible. We demonstrate the usefulness of the method by applying it to data from six cheetahs (Acinonyx jubatus) in the Okavango Delta, Botswana. Cumulative activity scores were recorded every five minutes by accelerometers embedded in GPS radio-collars for around one year on average. Direct behaviour samples for each of the six cheetahs were collected in the field for comparatively short periods. Using this approach we are able to classify each five-minute activity score into one of three key behaviours (feeding, mobile and stationary), creating a continuous behavioural sequence for the entire period for which the collars were deployed. Evaluation of our classifier with cross-validation shows the accuracy to be 83%-94%, although the accuracy for individual classes is reduced with decreasing sample size of direct observations. We demonstrate how these processed data can be used to study behaviour, identifying seasonal and gender differences in daily activity and feeding times. Results given here are unlike any that could be obtained using traditional approaches in both accuracy and detail.
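The classification step of the pipeline can be sketched with scikit-learn; the stand-in features and labels below are random, and the Hidden Markov Model used to smooth the per-window SVM predictions is omitted.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Stand-in data: 5-minute accelerometer activity scores (plus a simple
# lagged feature) labelled by direct field observation.
rng = np.random.default_rng(5)
n = 300
scores = rng.gamma(shape=2.0, scale=10.0, size=n)
lagged = np.roll(scores, 1)
X = np.column_stack([scores, lagged])
y = rng.integers(0, 3, size=n)  # 0=stationary, 1=mobile, 2=feeding

# SVM step of the pipeline; on real collar data the per-window
# predictions would then be smoothed with a Hidden Markov Model.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```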
Accuracy of alpha amylase in diagnosing microaspiration in intubated critically-ill patients.
Dewavrin, Florent; Zerimech, Farid; Boyer, Alexandre; Maboudou, Patrice; Balduyck, Malika; Duhamel, Alain; Nseir, Saad
2014-01-01
Amylase concentration in respiratory secretions has been reported to be a potentially useful marker for aspiration and pneumonia. The aim of this study was to determine the accuracy of α-amylase in diagnosing microaspiration in critically ill patients. Retrospective analysis of prospectively collected data from a medical ICU. All patients requiring mechanical ventilation for at least 48 h and included in a previous randomized controlled trial were eligible for this study, provided that at least one tracheal aspirate was available for α-amylase measurement. As part of the initial trial, pepsin was quantitatively measured in all tracheal aspirates during a 48-h period. All tracheal aspirates were frozen, allowing subsequent measurement of α-amylase for the purpose of the current study. Microaspiration was defined as the presence of at least one tracheal aspirate positive for pepsin (>200 ng·mL⁻¹). Abundant microaspiration was defined as the presence of pepsin at a significant level in >74% of tracheal aspirates. Amylase was measured in 1055 tracheal aspirates collected from 109 patients. Using the mean α-amylase level per patient, the accuracy of α-amylase in diagnosing microaspiration was moderate (area under the receiver operating characteristic curve 0.72±0.05 [95% CI 0.61-0.83] for an α-amylase value of 1685 UI·L⁻¹). However, when α-amylase levels from all samples were taken into account, the area under the receiver operating characteristic curve was 0.56±0.05 [0.53-0.60]. The mean α-amylase level and the percentage of tracheal aspirates positive for α-amylase were significantly higher in patients with microaspiration and in patients with abundant microaspiration than in those with no microaspiration, and similar in patients with microaspiration compared with those with abundant microaspiration. α-amylase and pepsin were significantly correlated (r² = 0.305, p = 0.001). The accuracy of mean α-amylase in diagnosing microaspiration is moderate. Further, when all α-amylase levels were taken into account, α-amylase was inaccurate in diagnosing microaspiration compared with pepsin.
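For readers unfamiliar with the reported statistics, the following sketch shows how an area under the receiver operating characteristic curve and a candidate cutoff could be computed from per-patient biomarker levels; all values are synthetic and do not reproduce the study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic stand-in: mean alpha-amylase level per patient (UI/L) and the
# pepsin-defined microaspiration label (1 = microaspiration present).
rng = np.random.default_rng(42)
amylase = np.concatenate([rng.lognormal(7.0, 1.0, 60),   # no microaspiration
                          rng.lognormal(7.8, 1.0, 49)])  # microaspiration
label = np.concatenate([np.zeros(60), np.ones(49)])

auc = roc_auc_score(label, amylase)
fpr, tpr, thresholds = roc_curve(label, amylase)
best = np.argmax(tpr - fpr)  # Youden index picks an operating point
print(f"AUC = {auc:.2f}; candidate cutoff ≈ {thresholds[best]:.0f} UI/L")
```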
Bourdel, Nicolas; Collins, Toby; Pizarro, Daniel; Bartoli, Adrien; Da Ines, David; Perreira, Bruno; Canis, Michel
2017-01-01
Augmented Reality (AR) is a technology that can allow a surgeon to see subsurface structures. It works by overlaying information from another modality, such as MRI, and fusing it in real time with the endoscopic images. AR has never been developed for a very mobile organ like the uterus and has never been applied in gynecology. Myomas are not always easy to localize in laparoscopic surgery when they do not significantly change the surface of the uterus or are at multiple locations. Our aim was to study the accuracy of myoma localization using a new AR system compared with MRI-only localization. Ten residents were asked to localize six myomas (on a uterine model in a laparoscopic box) either using AR or under conditions that simulate the standard method (only the MRI was available). Myomas were randomly divided into two groups: the control group (MRI only, AR not activated) and the AR group (AR activated). Software was used to automatically measure the distance between the point of contact on the uterine surface and the myoma, and we compared these distances to the true shortest distance to obtain accuracy measures. The time taken to perform the task was measured, and an assessment of the complexity was performed. The mean accuracy in the control group was 16.80 mm [0.1-52.2] versus 0.64 mm [0.01-4.71] with AR. In the control group, the mean time to perform the task was 18.68 s [6.4-47.1] compared to 19.6 s [3.9-77.5] with AR. The mean difficulty score (evaluated for each myoma) was 2.36 [1-4] versus 0.87 [0-4] for the control and AR groups, respectively. We developed an AR system for a very mobile organ. This is the first user study to quantitatively evaluate an AR system for improving a surgical task. In our model, AR improves localization accuracy.
NASA Astrophysics Data System (ADS)
Matras, A.; Kowalczyk, R.
2014-11-01
The results of a machining accuracy analysis of free-form surface milling simulations (based on machining EN AW-7075 alloy) for different machining strategies (Level Z, Radial, Square, Circular) are presented in this work. The milling simulations were performed using CAD/CAM Esprit software. The accuracy of the obtained allowance is defined as the difference between the theoretical surface of the workpiece (the surface designed in CAD software) and the machined surface after a milling simulation. The difference between the two surfaces describes a roughness value that results from the mapping of the tool shape onto the machined surface. The accuracy of the remaining allowance directly indicates the surface quality after finish machining. The described methodology of using CAD/CAM software can shorten the design time of the machining process for free-form surface milling on a 5-axis CNC milling machine, by removing the need to machine the part on a milling machine in order to measure the machining accuracy for the selected strategies and cutting data.
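A minimal sketch of the accuracy measure described above, assuming both surfaces are available as height maps on a common grid (the surface functions below are invented for illustration): the allowance is simply the pointwise difference between the machined and designed surfaces.

```python
import numpy as np

# Hypothetical height maps (z-values on a common xy grid, in mm): the
# CAD-designed surface and the surface left after a simulated milling pass.
x = np.linspace(0.0, 50.0, 200)
y = np.linspace(0.0, 50.0, 200)
X, Y = np.meshgrid(x, y)
z_design = 5.0 + 0.05 * np.sin(X / 5.0) * np.cos(Y / 5.0)
z_machined = z_design + 0.02 * np.abs(np.sin(X))  # assumed scallop pattern

allowance = z_machined - z_design  # stock remaining after the simulation
print(f"max deviation: {allowance.max():.4f} mm, "
      f"mean: {allowance.mean():.4f} mm")
```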
NASA Astrophysics Data System (ADS)
Kobor, J. S.; O'Connor, M. D.; Sherwood, M. N.
2013-12-01
Effective floodplain management and restoration requires a detailed understanding of floodplain processes not readily achieved using standard one-dimensional hydraulic modeling approaches. The application of more advanced numerical models is, however, often limited by the relatively high cost of acquiring the high-resolution topographic data needed for model development using traditional surveying methods. The increasing availability of LiDAR data has the potential to significantly reduce these costs and thus facilitate the application of multi-dimensional hydraulic models where budget constraints would otherwise have prohibited their use. The accuracy and suitability of LiDAR data for supporting model development can vary widely depending on the resolution of channel and floodplain features, the data collection density, and the degree of vegetation canopy interference, among other factors. More work is needed to develop guidelines for evaluating LiDAR accuracy and determining when and how best the data can be used to support numerical modeling activities. Here we present two recent case studies in which LiDAR datasets were used to support floodplain and sediment transport modeling efforts. One LiDAR dataset was collected with a relatively low point density and used to study a small stream channel in coastal Marin County; a second dataset was collected with a higher point density and applied to a larger stream channel in western Sonoma County. Traditional topographic surveying was performed at both sites, which provided a quantitative means of evaluating the LiDAR accuracy. We found that with the lower point density dataset, the accuracy of the LiDAR varied significantly between the active stream channel and the floodplain, whereas the accuracy across the channel/floodplain interface was more uniform with the higher density dataset. Accuracy also varied widely as a function of the density of the riparian vegetation canopy. We found that coupled 1- and 2-dimensional hydraulic models, whereby the active channel is simulated in one dimension and the floodplain in two dimensions, provided the best means of utilizing the LiDAR data to evaluate existing conditions and develop alternative flood hazard mitigation and habitat restoration strategies. Such an approach recognizes the limitations of the LiDAR data within active channel areas with dense riparian cover and is cost-effective in that it allows field survey efforts to focus primarily on characterizing active stream channel areas. The multi-dimensional modeling approach also conforms well to the physical realities of the stream system, whereby in-channel flows can generally be well described as a one-dimensional flow problem while floodplain flows are often characterized by multiple, often poorly understood, flowpaths. The multi-dimensional modeling approach has the additional advantages of allowing for accurate simulation of the effects of hydraulic structures using well-tested one-dimensional formulae and minimizing the computational burden of the models by not requiring the small spatial resolutions necessary to resolve the geometries of small stream channels in two dimensions.
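A hedged sketch of the kind of LiDAR accuracy check described here: vertical errors at surveyed checkpoints summarized by mean error and RMSE. The elevations below are hypothetical placeholders.

```python
import numpy as np

# Stand-in data: surveyed checkpoint elevations versus LiDAR DEM elevations
# sampled at the same locations (all values hypothetical, in metres).
survey_z = np.array([10.12, 10.45, 9.98, 11.02, 10.77, 10.31])
lidar_z = np.array([10.05, 10.52, 10.20, 11.10, 10.60, 10.50])

err = lidar_z - survey_z
rmse = np.sqrt(np.mean(err ** 2))
print(f"mean error = {err.mean():+.2f} m, RMSE = {rmse:.2f} m")
# In practice, computing these statistics separately for channel and
# floodplain checkpoints exposes the land-cover-dependent accuracy
# differences described above.
```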
Analysis system for characterisation of simple, low-cost microfluidic components
NASA Astrophysics Data System (ADS)
Smith, Suzanne; Naidoo, Thegaran; Nxumalo, Zandile; Land, Kevin; Davies, Emlyn; Fourie, Louis; Marais, Philip; Roux, Pieter
2014-06-01
There is an inherent trade-off between cost and operational integrity of microfluidic components, especially when intended for use in point-of-care devices. We present an analysis system developed to characterise microfluidic components for performing blood cell counting, enabling the balance between function and cost to be established quantitatively. Microfluidic components for sample and reagent introduction, mixing and dispensing of fluids were investigated. A simple inlet port plugging mechanism is used to introduce and dispense a sample of blood, while a reagent is released into the microfluidic system through compression and bursting of a blister pack. Mixing and dispensing of the sample and reagent are facilitated via air actuation. For these microfluidic components to be implemented successfully, a number of aspects need to be characterised for development of an integrated point-of-care device design. The functional components were measured using a microfluidic component analysis system established in-house. Experiments were carried out to determine: (1) the force and speed requirements for sample inlet port plugging and blister pack compression and release, using two linear actuators and load cells to plug the inlet port, compress the blister pack, and subsequently measure the resulting forces exerted; (2) the accuracy and repeatability of the total volumes of sample and reagent dispensed; and (3) the degree of mixing and dispensing uniformity of the sample and reagent for cell counting analysis. A programmable syringe pump was used for air actuation to facilitate mixing and dispensing of the sample and reagent. Two high-speed cameras formed part of the analysis system and allowed for visualisation of the fluidic operations within the microfluidic device. Additional quantitative measures such as microscopy were also used to assess mixing and dilution accuracy, as well as uniformity of fluid dispensing - all of which are important requirements towards the successful implementation of a blood cell counting system.
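As an illustration of the volume accuracy and repeatability measures mentioned in point (2), the following sketch computes percent accuracy against a target volume and the coefficient of variation from repeated dispenses; the numbers are invented.

```python
import numpy as np

# Hypothetical repeated dispense volumes (µL) for one microfluidic component.
target_uL = 20.0
dispensed = np.array([19.6, 20.3, 19.9, 20.1, 19.7, 20.4, 19.8, 20.0])

accuracy_pct = 100.0 * (dispensed.mean() - target_uL) / target_uL
cv_pct = 100.0 * dispensed.std(ddof=1) / dispensed.mean()  # repeatability
print(f"accuracy: {accuracy_pct:+.1f}% of target, CV: {cv_pct:.1f}%")
```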
ERIC Educational Resources Information Center
Cotreau Berube, Elyse A.
2011-01-01
The purpose of this quantitative research study was to investigate the use of rote learning in basic skills of mathematics and spelling of 12 high school students, from a career and technical high school, in an effort to learn if the pedagogy of rote fits in the frameworks of today's education. The study compared the accuracy of…
Ground Truth Sampling and LANDSAT Accuracy Assessment
NASA Technical Reports Server (NTRS)
Robinson, J. W.; Gunther, F. J.; Campbell, W. J.
1982-01-01
It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that and the other procedures used. The purpose of the accuracy assessment was to allow comparison of the cost and accuracy of various classification procedures as applied to various data types.
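A minimal sketch of such an accuracy assessment, assuming ground truth and classified labels are available for a set of sample sites (the labels here are synthetic): a confusion matrix and overall accuracy summarize classifier performance against the ground truth.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Stand-in labels: ground-truth classes from field sampling versus the
# classes assigned by a supervised LANDSAT classification.
truth = np.array([0, 0, 1, 1, 2, 2, 2, 1, 0, 2])
predicted = np.array([0, 1, 1, 1, 2, 2, 1, 1, 0, 2])

print(confusion_matrix(truth, predicted))
print(f"overall accuracy: {accuracy_score(truth, predicted):.0%}")
```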
Vrzheshch, P V
2015-01-01
A quantitative evaluation of the accuracy of the rapid equilibrium assumption in steady-state enzyme kinetics was obtained for an arbitrary mechanism of an enzyme-catalyzed reaction. This evaluation depends only on the structure and properties of the equilibrium segment, not on the structure and properties of the rest (the stationary part) of the kinetic scheme. The smaller the values of the edges leaving the equilibrium segment relative to the values of the edges within the equilibrium segment, the higher the accuracy of the determination of intermediate concentrations and reaction velocity under the rapid equilibrium assumption.
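A hedged formalization of the stated condition, in our own notation rather than the paper's:

```latex
% Let k_out denote a characteristic rate constant of the edges leaving the
% equilibrium segment and k_in one of the edges within it. The rapid
% equilibrium assumption is accurate when
\[
  \varepsilon = \frac{k_{\mathrm{out}}}{k_{\mathrm{in}}} \ll 1 ,
\]
% with the relative error of the quasi-equilibrium intermediate
% concentrations and reaction velocity expected to scale as O(\varepsilon).
```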
Performance Evaluation and Analysis for Gravity Matching Aided Navigation.
Wu, Lin; Wang, Hubiao; Chai, Hua; Zhang, Lu; Hsu, Houtse; Wang, Yong
2017-04-05
Simulation tests were carried out in this paper to evaluate the performance of gravity matching aided navigation (GMAN). The study focused on four essential factors to quantitatively evaluate performance: gravity database (DB) resolution, fitting degree of gravity measurements, number of samples in matching, and gravity changes in the matching area. A marine gravity anomaly DB derived from satellite altimetry was employed, and actual dynamic gravimetry accuracy and operating conditions were used as references in designing the simulation parameters. The results verified that improving DB resolution, gravimetry accuracy, the number of measurement samples, or gravity changes in the matching area generally led to higher positioning accuracies, although their effects differed and were interrelated. Moreover, three typical positioning accuracy targets for GMAN were proposed, and the conditions needed to achieve these targets were derived from the analysis of several different system requirements. Finally, various approaches to improving the positioning accuracy of GMAN were provided.
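To illustrate the matching principle underlying GMAN, here is a toy one-dimensional sketch: a noisy measured gravity-anomaly profile is slid along a database track and the offset with the smallest mean-squared mismatch is taken as the position fix. Real systems match over a 2-D area with a navigation filter; all values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic DB gravity-anomaly profile along a track (hypothetical, in mGal).
db_track = np.cumsum(rng.normal(0.0, 1.0, 500))
true_off = 213
measured = db_track[true_off:true_off + 40] + rng.normal(0.0, 0.3, 40)

# Slide the measured window along the DB and minimise mean-squared mismatch.
costs = [np.mean((db_track[i:i + 40] - measured) ** 2)
         for i in range(len(db_track) - 40)]
print(f"estimated offset: {int(np.argmin(costs))} (true: {true_off})")
```

The quality of such a fix degrades when the gravity field varies little within the matching area or when measurement noise is large, which is consistent with the factor analysis reported above.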
MRI-assisted PET motion correction for neurologic studies in an integrated MR-PET scanner.
Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B; Michel, Christian J; El Fakhri, Georges; Schmand, Matthias; Sorensen, A Gregory
2011-01-01
Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MRI data can be used for motion tracking. In this work, a novel algorithm for data processing and rigid-body motion correction (MC) for the MRI-compatible BrainPET prototype scanner is described, and proof-of-principle phantom and human studies are presented. To account for motion, the PET prompt and random coincidences and sensitivity data for postnormalization were processed in the line-of-response (LOR) space according to the MRI-derived motion estimates. The processing time on the standard BrainPET workstation is approximately 16 s for each motion estimate. After rebinning in the sinogram space, the motion corrected data were summed, and the PET volume was reconstructed using the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed, and motion estimates were obtained using 2 high-temporal-resolution MRI-based motion-tracking techniques. After accounting for the misalignment between the 2 scanners, perfectly coregistered MRI and PET volumes were reproducibly obtained. The MRI output gates inserted into the PET list-mode allow the temporal correlation of the 2 datasets within 0.2 ms. The Hoffman phantom volume reconstructed by processing the PET data in the LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in the image space, demonstrating the quantitative accuracy of the procedure. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 s and 20 ms, respectively. Motion-deblurred PET images, with excellent delineation of specific brain structures, were obtained using these 2 MRI-based estimates. An MRI-based MC algorithm was implemented for an integrated MR-PET scanner. High-temporal-resolution MRI-derived motion estimates (obtained while simultaneously acquiring anatomic or functional MRI data) can be used for PET MC. An MRI-based MC method has the potential to improve PET image quality, increasing its reliability, reproducibility, and quantitative accuracy, and to benefit many neurologic applications.
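A hedged sketch of the line-of-response idea described above: each coincidence is represented by its two detector endpoints, and both endpoints are moved with an MRI-derived rigid-body transform (rotation R, translation t; whether the forward or inverse transform is applied depends on convention). The transform and coordinates below are hypothetical.

```python
import numpy as np

# Hypothetical MRI-derived rigid-body motion estimate: small head rotation
# about the z-axis plus a millimetre-scale translation.
theta = np.deg2rad(3.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.5, -0.8, 0.4])  # mm

# Synthetic LOR endpoints: shape (n_events, 2 endpoints, xyz).
lor_endpoints = np.random.default_rng(5).uniform(-150, 150, size=(4, 2, 3))

# Apply the rigid transform to both endpoints of every LOR so each event is
# histogrammed as if acquired in the reference head position.
corrected = lor_endpoints @ R.T + t
print(corrected.shape)
```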
Unnikrishnan, Ginu U.; Morgan, Elise F.
2011-01-01
Inaccuracies in the estimation of material properties and errors in the assignment of these properties into finite element models limit the reliability, accuracy, and precision of quantitative computed tomography (QCT)-based finite element analyses of the vertebra. In this work, a new mesh-independent, material mapping procedure was developed to improve the quality of predictions of vertebral mechanical behavior from QCT-based finite element models. In this procedure, an intermediate step, called the material block model, was introduced to determine the distribution of material properties based on bone mineral density, and these properties were then mapped onto the finite element mesh. A sensitivity study was first conducted on a calibration phantom to understand the influence of the size of the material blocks on the computed bone mineral density. It was observed that varying the material block size produced only marginal changes in the predictions of mineral density. Finite element (FE) analyses were then conducted on a square column-shaped region of the vertebra and also on the entire vertebra in order to study the effect of material block size on the FE-derived outcomes. The predicted values of stiffness for the column and the vertebra decreased with decreasing block size. When these results were compared to those of a mesh convergence analysis, it was found that the influence of element size on vertebral stiffness was less than that of the material block size. This mapping procedure allows the material properties in a finite element study to be determined based on the block size required for an accurate representation of the material field, while the size of the finite elements can be selected independently and based on the required numerical accuracy of the finite element solution. The mesh-independent, material mapping procedure developed in this study could be particularly helpful in improving the accuracy of finite element analyses of vertebroplasty and spine metastases, as these analyses typically require mesh refinement at the interfaces between distinct materials. Moreover, the mapping procedure is not specific to the vertebra and could thus be applied to many other anatomic sites. PMID:21823740
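To illustrate the material block step, the sketch below averages a voxel density field over coarse blocks and converts block density to an elastic modulus with a generic power law; the coefficients are a commonly used literature form, not necessarily those of this study, and the density field is synthetic.

```python
import numpy as np

def block_average(density_voxels, block):
    """Average a 3-D voxel field over cubic blocks of the given edge length,
    trimming any remainder voxels at the far edges."""
    nz, ny, nx = density_voxels.shape
    trimmed = density_voxels[:nz - nz % block,
                             :ny - ny % block,
                             :nx - nx % block]
    shaped = trimmed.reshape(nz // block, block,
                             ny // block, block,
                             nx // block, block)
    return shaped.mean(axis=(1, 3, 5))

# Synthetic QCT-derived apparent density field (g/cm^3).
rho = np.random.default_rng(3).uniform(0.05, 0.35, size=(60, 60, 60))
blocks = block_average(rho, block=10)

# Assumed density-modulus power law (MPa); a finite element would then be
# assigned the modulus of the block containing its centroid.
E_blocks = 8920.0 * blocks ** 1.83
print(E_blocks.shape, f"mean E = {E_blocks.mean():.0f} MPa")
```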
Research on the impact factors of GRACE precise orbit determination by dynamic method
NASA Astrophysics Data System (ADS)
Guo, Nan-nan; Zhou, Xu-hua; Li, Kai; Wu, Bin
2018-07-01
With the successful use of GPS-only-based POD (precise orbit determination), more and more satellites carry onboard GPS receivers to meet their orbit accuracy requirements. GPS provides continuous, high-precision observations and has become an indispensable means of obtaining the orbits of LEO satellites, and precise orbit determination of LEO satellites plays an important role in their applications. Numerous factors must be considered in POD processing. In this paper, several factors that affect precise orbit determination are analyzed, namely the satellite altitude, the time-variable Earth gravity field, the GPS satellite clock error, and accelerometer observations. The GRACE satellites provide an ideal platform for studying the influence of these factors on precise orbit determination using zero-difference GPS data. These factors are quantitatively analyzed for their effect on dynamic orbit accuracy using GRACE observations from 2005 to 2011 processed with the SHORDE software. The study indicates that: (1) as the altitude of the GRACE satellites decreased from 480 km to 460 km over the seven years, the 3D (three-dimensional) position accuracy of the GRACE orbits is about 3-4 cm based on long data spans; (2) the accelerometer data improve the 3D position accuracy of GRACE by about 1 cm; (3) the accuracy of the zero-difference dynamic orbit is about 6 cm with GPS satellite clock error products at a 5 min sampling interval, and can be raised to 4 cm if clock error products with a 30 s sampling interval are adopted; and (4) the time-variable part of the Earth gravity field model improves the 3D position accuracy of GRACE by about 0.5-1.5 cm. Based on this study, we quantitatively analyze the factors that affect precise orbit determination of LEO satellites, which is important for improving the accuracy of LEO satellite orbit determination.
Measures of model performance based on the log accuracy ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.
Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature; we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
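Following our reading of the definitions introduced above, a small sketch of the two derived metrics; the example predictions and observations are invented.

```python
import numpy as np

def log_accuracy_ratio(pred, obs):
    """ln(prediction / observation), defined for positive quantities."""
    return np.log(np.asarray(pred, float) / np.asarray(obs, float))

def median_symmetric_accuracy(pred, obs):
    """MSA = 100 * (exp(median(|ln(pred/obs)|)) - 1), in percent."""
    q = np.abs(log_accuracy_ratio(pred, obs))
    return 100.0 * (np.exp(np.median(q)) - 1.0)

def symmetric_signed_percentage_bias(pred, obs):
    """SSPB = 100 * sign(M) * (exp(|M|) - 1), with M = median(ln(pred/obs))."""
    m = np.median(log_accuracy_ratio(pred, obs))
    return 100.0 * np.sign(m) * (np.exp(np.abs(m)) - 1.0)

obs = np.array([1.0, 2.0, 4.0, 8.0])
pred = np.array([1.2, 1.8, 5.0, 7.0])
print(f"MSA = {median_symmetric_accuracy(pred, obs):.1f}%, "
      f"SSPB = {symmetric_signed_percentage_bias(pred, obs):+.1f}%")
```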
Page, Michael M; Taranto, Mario; Ramsay, Duncan; van Schie, Greg; Glendenning, Paul; Gillett, Melissa J; Vasikaran, Samuel D
2018-01-01
Objective: Primary aldosteronism is a curable cause of hypertension which can be treated surgically or medically depending on the findings of adrenal vein sampling studies. Adrenal vein sampling studies are technically demanding, with a high failure rate in many centres. Intraprocedural cortisol measurement could improve the success rate of adrenal vein sampling but may be impracticable due to cost and effects on procedural duration. Design: Retrospective review of the results of adrenal vein sampling procedures since commencement of point-of-care cortisol measurement using a novel single-use semi-quantitative measuring device for cortisol, the adrenal vein sampling Accuracy Kit. We compared the success rate and complications of adrenal vein sampling procedures before and after introduction of the device; routine use of the adrenal vein sampling Accuracy Kit for intraprocedural measurement of cortisol commenced in 2016. Results: The technical success rate of adrenal vein sampling increased from 63% of 99 procedures to 90% of 48 procedures (P = 0.0007) after implementation of the adrenal vein sampling Accuracy Kit. Failure of right adrenal vein cannulation was the main reason for an unsuccessful study. Radiation dose decreased from 34.2 Gy·cm² (interquartile range, 15.8-85.9) to 15.7 Gy·cm² (6.9-47.3) (P = 0.009). No complications were noted, and implementation costs were minimal. Conclusions: Point-of-care cortisol measurement during adrenal vein sampling improved cannulation success rates and reduced radiation exposure. Use of the adrenal vein sampling Accuracy Kit is now standard practice at our centre.
Cordella, Claire; Dickerson, Bradford C.; Quimby, Megan; Yunusova, Yana; Green, Jordan R.
2016-01-01
Background: Primary progressive aphasia (PPA) is a neurodegenerative aphasic syndrome with three distinct clinical variants: non-fluent (nfvPPA), logopenic (lvPPA), and semantic (svPPA). Speech (non-)fluency is a key diagnostic marker used to aid identification of the clinical variants, and researchers have been actively developing diagnostic tools to assess speech fluency. Current approaches reveal coarse differences in fluency between subgroups, but often fail to clearly differentiate nfvPPA from the variably fluent lvPPA. More robust subtype differentiation may be possible with finer-grained measures of fluency. Aims: We sought to identify the quantitative measures of speech rate, including articulation rate and pausing measures, that best differentiated PPA subtypes, specifically the non-fluent group (nfvPPA) from the more fluent groups (lvPPA, svPPA). The diagnostic accuracy of the quantitative speech rate variables was compared to that of a speech fluency impairment rating made by clinicians. Methods and Procedures: Automatic estimates of pause and speech segment durations and rate measures were derived from connected speech samples of participants with PPA (N=38; 11 nfvPPA, 14 lvPPA, 13 svPPA) and healthy age-matched controls (N=8). Clinician ratings of fluency impairment were made using a previously validated clinician rating scale developed specifically for use in PPA. Receiver operating characteristic (ROC) analyses enabled a quantification of diagnostic accuracy. Outcomes and Results: Among the quantitative measures, articulation rate was the most effective for differentiating between nfvPPA and the more fluent lvPPA and svPPA groups. The diagnostic accuracy of both speech and articulation rate measures was markedly better than that of the clinician rating scale, and articulation rate was the best classifier overall. Area under the curve (AUC) values for articulation rate were good to excellent for identifying nfvPPA from both svPPA (AUC=.96) and lvPPA (AUC=.86). Cross-validation of accuracy results for articulation rate showed good generalizability outside the training dataset. Conclusions: Results provide empirical support for (1) the efficacy of quantitative assessments of speech fluency and (2) a distinct non-fluent PPA subtype characterized, at least in part, by an underlying disturbance in speech motor control. The trend toward improved classifier performance for quantitative rate measures demonstrates the potential for a more accurate and reliable approach to subtyping in the fluency domain, and suggests that articulation rate may be a useful input variable as part of a multi-dimensional clinical subtyping approach. PMID:28757671
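As a simple illustration of the two rate measures, the sketch below computes speech rate (syllables over total duration, pauses included) and articulation rate (pauses excluded) from hypothetical segment durations; the distinction is what lets articulation rate isolate motor slowing from pausing behaviour.

```python
# Toy illustration of the rate measures; all numbers are hypothetical.
syllables = 240
speech_segments_s = 75.0  # summed duration of detected speech segments
pauses_s = 45.0           # summed duration of detected pauses

speech_rate = syllables / (speech_segments_s + pauses_s)  # syll/s, overall
articulation_rate = syllables / speech_segments_s         # syll/s, no pauses
print(f"speech rate: {speech_rate:.2f} syll/s, "
      f"articulation rate: {articulation_rate:.2f} syll/s")
```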
Valdés, Pablo A.; Jacobs, Valerie; Harris, Brent T.; Wilson, Brian C.; Leblond, Frederic; Paulsen, Keith D.; Roberts, David W.
2015-01-01
OBJECT: Previous studies in high-grade gliomas (HGGs) have indicated that protoporphyrin IX (PpIX) accumulates in higher concentrations in tumor tissue and, when used to guide surgery, has enabled improved resection leading to increased progression-free survival. Despite the benefits of complete resection and the advances in fluorescence-guided surgery, few studies have investigated the use of PpIX in low-grade gliomas (LGGs). Here, the authors describe their initial experience with 5-aminolevulinic acid (ALA)-induced PpIX fluorescence in a series of patients with LGG. METHODS: Twelve patients with presumed LGGs underwent resection of their tumors after receiving 20 mg/kg of ALA approximately 3 hours prior to surgery under an institutional review board-approved protocol. Intraoperative assessments of the resulting PpIX emissions using both qualitative, visible fluorescence and quantitative measurements of PpIX concentration were obtained from tissue locations that were subsequently biopsied and evaluated histopathologically. Mixed models for random effects and receiver operating characteristic curve analysis for diagnostic performance were performed on the fluorescence data relative to the gold-standard histopathology. RESULTS: Five of the 12 LGGs (1 ganglioglioma, 1 oligoastrocytoma, 1 pleomorphic xanthoastrocytoma, 1 oligodendroglioma, and 1 ependymoma) demonstrated at least 1 instance of visible fluorescence during surgery. Visible fluorescence evaluated on a specimen-by-specimen basis yielded a diagnostic accuracy of 38.0% (cutoff threshold: visible fluorescence score ≥ 1, area under the curve = 0.514). Quantitative fluorescence yielded a diagnostic accuracy of 67% (for a cutoff threshold of the concentration of PpIX [CPpIX] > 0.0056 μg/ml, area under the curve = 0.66). The authors found that 45% (9/20) of nonvisibly fluorescent tumor specimens, which would have otherwise gone undetected, accumulated diagnostically significant levels of CPpIX that were detected quantitatively. CONCLUSIONS: The authors' initial experience with ALA-induced PpIX fluorescence in LGGs concurs with other literature reports that the resulting visual fluorescence has poor diagnostic accuracy. However, the authors also found that diagnostically significant levels of CPpIX do accumulate in LGGs, and the resulting fluorescence emissions are very often below the detection threshold of current visual fluorescence imaging methods. Indeed, at least in the authors' initial experience reported here, if quantitative detection methods are deployed, the diagnostic performance of ALA-induced PpIX fluorescence in LGGs approaches the accuracy associated with visual fluorescence in HGGs. PMID:26140489
Hasenkamp, W; Villard, J; Delaloye, J R; Arami, A; Bertsch, A; Jolles, B M; Aminian, K; Renaud, P
2014-06-01
Ligament balance is an important and subjective task performed during the total knee arthroplasty (TKA) procedure. It is therefore desirable to develop instruments to quantitatively assess soft-tissue balance, since excessive imbalance can accelerate prosthesis wear and lead to early surgical revision. The instrumented distractor proposed in this study can assist surgeons in performing ligament balance by measuring the distraction gap and the applied load. The device also allows determination of ligament stiffness, which can contribute to a better understanding of the intrinsic mechanical behavior of the knee joint. Instrumentation of the device involved the use of Hall sensors for measuring the distractor displacement and strain gauges to transduce the force. The sensors were calibrated and tested to demonstrate their suitability for surgical use. Results show that the distraction gap can be measured reliably with 0.1 mm accuracy and that distractive loads can be assessed with an accuracy in the range of 4 N. These characteristics are consistent with those proposed in this work for a device that could assist in performing ligament balance while still permitting surgeons to evaluate based on their own experience. Preliminary results from in vitro tests were in accordance with expected stiffness values for the medial collateral ligament (MCL) and lateral collateral ligament (LCL). Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
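A minimal sketch of the stiffness estimate mentioned above, assuming stiffness is taken as the slope of a linear fit to load-displacement readings from the distractor; the readings below are hypothetical.

```python
import numpy as np

# Hypothetical distractor readings for one compartment: distraction gap (mm)
# and applied load (N).
gap_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
load_N = np.array([2.0, 21.0, 43.0, 60.0, 82.0, 101.0])

# Ligament stiffness estimated as the slope of a first-order fit.
stiffness, intercept = np.polyfit(gap_mm, load_N, deg=1)
print(f"estimated stiffness ≈ {stiffness:.1f} N/mm")
```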
Agrawal, Sony; Cifelli, Steven; Johnstone, Richard; Pechter, David; Barbey, Deborah A; Lin, Karen; Allison, Tim; Agrawal, Shree; Rivera-Gines, Aida; Milligan, James A; Schneeweis, Jonathan; Houle, Kevin; Struck, Alice J; Visconti, Richard; Sills, Matthew; Wildey, Mary Jo
2016-02-01
Quantitative reverse transcription PCR (qRT-PCR) is a valuable tool for characterizing the effects of inhibitors on viral replication. The amplification of target viral genes through the use of specifically designed fluorescent probes and primers provides a reliable method for quantifying RNA. Due to reagent costs, use of these assays for compound evaluation has been limited. Until recently, the inability to accurately dispense low volumes of qRT-PCR assay reagents precluded the routine use of this PCR assay for compound evaluation in drug discovery. Acoustic dispensing has become an integral part of drug discovery during the past decade; however, acoustic transfer of microliter volumes of aqueous reagents was time-consuming. The Labcyte Echo 525 liquid handler was designed to enable rapid aqueous transfers. We compared the accuracy and precision of a qPCR assay using the Labcyte Echo 525 to those of the BioMek FX, a traditional liquid handler, with the goal of reducing the volume and cost of the assay. The data show that the Echo 525 provides higher accuracy and precision than the current process using a traditional liquid handler. Comparable data for assay volumes from 500 nL to 12 µL allowed miniaturization of the assay, resulting in significant cost savings for drug discovery and a streamlined process. © 2015 Society for Laboratory Automation and Screening.