Science.gov

Sample records for accurate quantitative methods

  1. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

    Steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the presence of many fat droplets is likely to introduce errors into the quantification of the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using the feature values of colors, shapes and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.
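
    As a rough illustration of this kind of pipeline, the sketch below flags bright, round, vacuole-like regions in a stained section and reports their area ratio. The thresholds, the circularity cutoff, and the file name are illustrative assumptions, not the authors' parameters:

      # Minimal sketch of fat-droplet candidate detection in an H&E-stained
      # liver image, assuming droplets appear as bright, round vacuoles.
      import numpy as np
      from skimage import io, color, filters, measure

      def fat_droplet_ratio(image_path):
          rgb = io.imread(image_path)
          gray = color.rgb2gray(rgb)
          # Fat vacuoles are nearly white; Otsu separates them from stained tissue.
          mask = gray > filters.threshold_otsu(gray)
          labels = measure.label(mask)
          droplet_area = 0
          for region in measure.regionprops(labels):
              if region.area < 50:                       # ignore small specks
                  continue
              circularity = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
              if circularity > 0.7:                      # keep round regions only
                  droplet_area += region.area
          return droplet_area / mask.size                # fat droplet area ratio

      print(fat_droplet_ratio("liver_tissue.png"))       # placeholder file name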

  2. Novel micelle PCR-based method for accurate, sensitive and quantitative microbiota profiling.

    PubMed

    Boers, Stefan A; Hays, John P; Jansen, Ruud

    2017-04-05

    In the last decade, many researchers have embraced 16S rRNA gene sequencing techniques, which has led to a wealth of publications and documented differences in the composition of microbial communities derived from many different ecosystems. However, comparison between different microbiota studies is currently very difficult due to the lack of a standardized 16S rRNA gene sequencing protocol. Here we report on a novel approach employing micelle PCR (micPCR) in combination with an internal calibrator that allows for standardization of microbiota profiles via their absolute abundances. The addition of an internal calibrator allows the researcher to express the resulting operational taxonomic units (OTUs) as a measure of 16S rRNA gene copies by correcting the number of sequences of each individual OTU in a sample for efficiency differences in the NGS process. Additionally, accurate quantification of OTUs obtained from negative extraction control samples allows for the subtraction of contaminating bacterial DNA derived from the laboratory environment or chemicals/reagents used. Using equimolar synthetic microbial community samples and low biomass clinical samples, we demonstrate that the calibrated micPCR/NGS methodology possesses a much higher precision and a lower limit of detection compared with traditional PCR/NGS, resulting in more accurate microbiota profiles suitable for multi-study comparison.
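
    The calibrator arithmetic lends itself to a few lines of code. The following sketch (illustrative copy numbers and OTU names, not the authors' pipeline) converts read counts to absolute 16S copies and subtracts the negative-control background:

      # Back-of-the-envelope sketch of the internal-calibrator idea.
      CALIBRATOR_COPIES = 1.0e5      # 16S copies of calibrator spiked per sample

      def absolute_abundance(otu_reads, calibrator_reads):
          """Scale read counts to 16S gene copies via the spiked calibrator."""
          per_read = CALIBRATOR_COPIES / calibrator_reads
          return {otu: reads * per_read for otu, reads in otu_reads.items()}

      sample  = absolute_abundance({"OTU_1": 4200, "OTU_2": 310}, calibrator_reads=800)
      control = absolute_abundance({"OTU_1": 150},                calibrator_reads=950)

      # Contamination correction: subtract copies seen in the negative control.
      corrected = {otu: max(copies - control.get(otu, 0.0), 0.0)
                   for otu, copies in sample.items()}
      print(corrected)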

  3. Novel micelle PCR-based method for accurate, sensitive and quantitative microbiota profiling

    PubMed Central

    Boers, Stefan A.; Hays, John P.; Jansen, Ruud

    2017-01-01

    In the last decade, many researchers have embraced 16S rRNA gene sequencing techniques, which has led to a wealth of publications and documented differences in the composition of microbial communities derived from many different ecosystems. However, comparison between different microbiota studies is currently very difficult due to the lack of a standardized 16S rRNA gene sequencing protocol. Here we report on a novel approach employing micelle PCR (micPCR) in combination with an internal calibrator that allows for standardization of microbiota profiles via their absolute abundances. The addition of an internal calibrator allows the researcher to express the resulting operational taxonomic units (OTUs) as a measure of 16S rRNA gene copies by correcting the number of sequences of each individual OTU in a sample for efficiency differences in the NGS process. Additionally, accurate quantification of OTUs obtained from negative extraction control samples allows for the subtraction of contaminating bacterial DNA derived from the laboratory environment or chemicals/reagents used. Using equimolar synthetic microbial community samples and low biomass clinical samples, we demonstrate that the calibrated micPCR/NGS methodology possess a much higher precision and a lower limit of detection compared with traditional PCR/NGS, resulting in more accurate microbiota profiles suitable for multi-study comparison. PMID:28378789

  4. Method for accurate quantitation of background tissue optical properties in the presence of emission from a strong fluorescence marker

    NASA Astrophysics Data System (ADS)

    Bravo, Jaime; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.

    2015-03-01

    Quantification of targeted fluorescence markers during neurosurgery has the potential to improve and standardize surgical distinction between normal and cancerous tissues. However, quantitative analysis of marker fluorescence is complicated by tissue background absorption and scattering properties. Correction algorithms that transform raw fluorescence intensity into quantitative units, independent of absorption and scattering, require a paired measurement of localized white light reflectance to provide estimates of the optical properties. This study focuses on the unique problem of developing a spectral analysis algorithm to extract tissue absorption and scattering properties from white light spectra that contain contributions from both elastically scattered photons and fluorescence emission from a strong fluorophore (i.e. fluorescein). A fiber-optic reflectance device was used to perform measurements in a small set of optical phantoms, constructed with Intralipid (1% lipid), whole blood (1% volume fraction) and fluorescein (0.16-10 μg/mL). Results show that the novel spectral analysis algorithm yields accurate estimates of tissue parameters independent of fluorescein concentration, with relative errors of blood volume fraction, blood oxygenation fraction (BOF), and the reduced scattering coefficient (at 521 nm) of <7%, <1%, and <22%, respectively. These data represent a first step towards quantification of fluorescein in tissue in vivo.
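
    As a toy illustration of the fitting idea, the sketch below fits a white-light reflectance spectrum with a model containing a scattering power law, a generic absorption band, and a fluorescein-like emission peak, so the tissue parameters stay unbiased by the fluorophore. The model and every constant here are simplified stand-ins, not the authors' diffusion-theory model:

      import numpy as np
      from scipy.optimize import curve_fit

      wl = np.linspace(450, 720, 200)                    # wavelength grid, nm

      def model(wl, a, b, mua, f_amp):
          scatter = a * (wl / 600.0) ** (-b)             # power-law scattering
          absorb = np.exp(-mua * np.exp(-((wl - 560.0) / 40.0) ** 2))   # toy absorber band
          fluor = f_amp * np.exp(-((wl - 520.0) / 15.0) ** 2)           # fluorescein-like peak
          return scatter * absorb + fluor

      rng = np.random.default_rng(1)
      spectrum = model(wl, 1.0, 1.2, 0.8, 0.5) + rng.normal(0, 0.005, wl.size)

      popt, _ = curve_fit(model, wl, spectrum, p0=(0.5, 1.0, 0.5, 0.1))
      print("fitted (a, b, mua, fluorescence amp):", np.round(popt, 3))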

  5. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  6. Detection and quantitation of trace phenolphthalein (in pharmaceutical preparations and in forensic exhibits) by liquid chromatography-tandem mass spectrometry, a sensitive and accurate method.

    PubMed

    Sharma, Kakali; Sharma, Shiba P; Lahiri, Sujit C

    2013-01-01

    Phenolphthalein, an acid-base indicator and laxative, is important as a constituent of widely used weight-reducing multicomponent food formulations. Phenolphthalein is a useful reagent in forensic science for the identification of blood stains of suspected victims and for apprehending erring officials accepting bribes in graft or trap cases. The pink-colored alkaline hand washes originating from phenolphthalein-smeared notes can easily be determined spectrophotometrically. But in many cases, the colored solution turns colorless with time, which renders the genuineness of bribe cases doubtful to the judiciary. Until now, no method has been known for the detection and identification of phenolphthalein in colorless forensic exhibits with positive proof. Liquid chromatography-tandem mass spectrometry was found to be a highly sensitive and accurate method capable of detecting and quantitating trace phenolphthalein in commercial formulations and colorless forensic exhibits with positive proof. The detection limit of phenolphthalein was found to be 1.66 pg/L or ng/mL, and the calibration curve shows good linearity (r(2) = 0.9974).

  7. Quantitative proteomic analysis by accurate mass retention time pairs.

    PubMed

    Silva, Jeffrey C; Denny, Richard; Dorschel, Craig A; Gorenstein, Marc; Kass, Ignatius J; Li, Guo-Zhong; McKenna, Therese; Nold, Michael J; Richardson, Keith; Young, Phillip; Geromanos, Scott

    2005-04-01

    Current methodologies for protein quantitation include 2-dimensional gel electrophoresis techniques, metabolic labeling, and stable isotope labeling methods, to name only a few. The current literature illustrates both pros and cons for each of these methodologies. Keeping with the teachings of William of Ockham, "with all things being equal, the simplest solution tends to be correct," a simple LC/MS-based methodology is presented that allows relative changes in abundance of proteins in highly complex mixtures to be determined. Utilizing a reproducible chromatographic separations system along with the high mass resolution and mass accuracy of an orthogonal time-of-flight mass spectrometer, the quantitative comparison of tens of thousands of ions emanating from identically prepared control and experimental samples can be made. Using this configuration, we can determine the change in relative abundance of a small number of ions between the two conditions solely by accurate mass and retention time. Employing standard operating procedures for both sample preparation and ESI-mass spectrometry, one typically obtains under 5 ppm mass precision and quantitative variations between 10 and 15%. The principal focus of this paper is to demonstrate the quantitative aspects of the methodology, followed by a discussion of the associated, complementary qualitative capabilities.
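
    The core matching step can be sketched in a few lines: pair features between runs by accurate mass (ppm) and retention-time tolerances, then report intensity ratios. The tolerances and feature values below are illustrative, not the paper's settings:

      def match_amrt(control, experiment, ppm_tol=5.0, rt_tol=0.25):
          """Each feature is (mz, rt_minutes, intensity). Returns fold changes."""
          ratios = []
          for mz_c, rt_c, int_c in control:
              for mz_e, rt_e, int_e in experiment:
                  ppm = abs(mz_e - mz_c) / mz_c * 1e6
                  if ppm <= ppm_tol and abs(rt_e - rt_c) <= rt_tol:
                      ratios.append((mz_c, rt_c, int_e / int_c))
                      break
          return ratios

      control    = [(512.2671, 22.10, 1.8e5), (733.3901, 35.42, 9.0e4)]
      experiment = [(512.2668, 22.14, 2.7e5), (733.3912, 35.38, 4.4e4)]
      for mz, rt, ratio in match_amrt(control, experiment):
          print(f"m/z {mz:.4f} @ {rt:.2f} min: fold change {ratio:.2f}")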

  8. Accurate, meshless methods for magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.; Raives, Matthias J.

    2016-01-01

    Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇ · B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇ · B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, 'modern' SPH can handle most test problems, at the cost of larger kernels and 'by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced 'grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.

  9. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprised of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground-truth for these samples.
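
    Two of the preprocessing steps named above, dark-field subtraction and red/green color-crosstalk compensation, can be sketched as a small linear-algebra exercise. The mixing coefficients here are illustrative placeholders, not the study's calibration values:

      import numpy as np

      def preprocess(red, green, dark_red, dark_green, crosstalk):
          """red/green: raw channel images; crosstalk: 2x2 channel mixing matrix."""
          r = red - dark_red                   # remove camera dark field
          g = green - dark_green
          mixed = np.stack([r.ravel(), g.ravel()])
          # Invert the mixing: measured = crosstalk @ true  =>  solve for true.
          unmixed = np.linalg.solve(crosstalk, mixed)
          return unmixed[0].reshape(red.shape), unmixed[1].reshape(green.shape)

      rng = np.random.default_rng(0)
      red, green = rng.uniform(100, 200, (2, 64, 64))
      M = np.array([[1.00, 0.08],              # e.g., 8% of green leaks into red
                    [0.05, 1.00]])
      r_true, g_true = preprocess(red, green, 10.0, 12.0, M)
      print(float(r_true.mean()), float(g_true.mean()))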

  10. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    PubMed

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

    In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. First, high performance size exclusion chromatography (HPSEC) was utilized to separate natural polysaccharides. The molecular masses of their fractions were then determined by multi-angle laser light scattering (MALLS). Finally, quantification of polysaccharides or their fractions was performed based on their response to a refractive index detector (RID) and their universal refractive index increment (dn/dc). The accuracy of the developed method was determined for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan; their average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared to the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on the universal dn/dc for the quantification of polysaccharides and their fractions is much simpler, more rapid, and more accurate, requires no individual polysaccharide standard, and is free of calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggested that the HPSEC-MALLS-RID method based on the universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources.
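
    Once the RID is calibrated, the quantification step reduces to simple arithmetic. The sketch below uses toy units and a made-up instrument constant to show the idea of mass from peak area and a universal dn/dc:

      # Sketch of the universal-dn/dc idea: any fraction's mass follows from its
      # RID peak area divided by dn/dc, with no compound-specific calibration.
      RID_CONSTANT = 2.5e-7   # detector response factor, illustrative value

      def fraction_mass_mg(peak_area, dn_dc, flow_rate_ml_min=0.6):
          """Mass eluted in a peak: area * flow / (k * dn/dc), toy units."""
          return peak_area * flow_rate_ml_min * RID_CONSTANT / dn_dc

      # dn/dc of ~0.146 mL/g is a commonly cited value for many polysaccharides
      # in aqueous solution.
      print(fraction_mass_mg(peak_area=3.2e6, dn_dc=0.146))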

  11. Quantitative imaging methods in osteoporosis

    PubMed Central

    Oei, Ling; Koromani, Fjorda; Rivadeneira, Fernando; Zillikens, M. Carola

    2016-01-01

    Osteoporosis is characterized by a decreased bone mass and quality resulting in an increased fracture risk. Quantitative imaging methods are critical in the diagnosis and follow-up of treatment effects in osteoporosis. Prior radiographic vertebral fractures and bone mineral density (BMD) as a quantitative parameter derived from dual-energy X-ray absorptiometry (DXA) are among the strongest known predictors of future osteoporotic fractures. Therefore, current clinical decision making relies heavily on accurate assessment of these imaging features. Further, novel quantitative techniques are being developed to appraise additional characteristics of osteoporosis, including three-dimensional bone architecture with quantitative computed tomography (QCT). Dedicated high-resolution (HR) CT equipment is available to enhance image quality. At the other end of the spectrum, by utilizing post-processing techniques such as the trabecular bone score (TBS), information on three-dimensional architecture can be derived from DXA images. Further developments in magnetic resonance imaging (MRI) seem promising to not only capture bone micro-architecture but also characterize processes at the molecular level. This review provides an overview of various quantitative imaging techniques based on different radiological modalities utilized in clinical osteoporosis care and research. PMID:28090446

  12. Quantitative imaging methods in osteoporosis.

    PubMed

    Oei, Ling; Koromani, Fjorda; Rivadeneira, Fernando; Zillikens, M Carola; Oei, Edwin H G

    2016-12-01

    Osteoporosis is characterized by a decreased bone mass and quality resulting in an increased fracture risk. Quantitative imaging methods are critical in the diagnosis and follow-up of treatment effects in osteoporosis. Prior radiographic vertebral fractures and bone mineral density (BMD) as a quantitative parameter derived from dual-energy X-ray absorptiometry (DXA) are among the strongest known predictors of future osteoporotic fractures. Therefore, current clinical decision making relies heavily on accurate assessment of these imaging features. Further, novel quantitative techniques are being developed to appraise additional characteristics of osteoporosis, including three-dimensional bone architecture with quantitative computed tomography (QCT). Dedicated high-resolution (HR) CT equipment is available to enhance image quality. At the other end of the spectrum, by utilizing post-processing techniques such as the trabecular bone score (TBS), information on three-dimensional architecture can be derived from DXA images. Further developments in magnetic resonance imaging (MRI) seem promising to not only capture bone micro-architecture but also characterize processes at the molecular level. This review provides an overview of various quantitative imaging techniques based on different radiological modalities utilized in clinical osteoporosis care and research.

  13. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope-steepening technique, which has no effect in smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification of this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
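
    The role of the median function in the reconstruction step can be illustrated compactly: minmod(a, b) is exactly the median of (a, 0, b), which gives a monotonicity-preserving limited slope. The sketch below is a generic minmod variant for illustration, not Huynh's exact constraint:

      import numpy as np

      def median3(a, b, c):
          """Elementwise middle value of three arrays."""
          return np.maximum(np.minimum(a, b), np.minimum(np.maximum(a, b), c))

      def limited_slopes(u):
          dm = u[1:-1] - u[:-2]        # backward differences
          dp = u[2:] - u[1:-1]         # forward differences
          # median(dm, 0, dp) picks the smaller-magnitude difference when the
          # signs agree and zero otherwise -- the classic minmod limiter.
          return median3(dm, np.zeros_like(dm), dp)

      u = np.array([0.0, 0.0, 0.3, 1.0, 1.0, 0.6, 0.0])
      print(limited_slopes(u))   # slopes vanish at the plateau/extremum cells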

  14. Accurate methods for large molecular systems.

    PubMed

    Gordon, Mark S; Mullin, Jonathan M; Pruitt, Spencer R; Roskop, Luke B; Slipchenko, Lyudmila V; Boatz, Jerry A

    2009-07-23

    Three exciting new methods that address the accurate prediction of processes and properties of large molecular systems are discussed. The systematic fragmentation method (SFM) and the fragment molecular orbital (FMO) method both decompose a large molecular system (e.g., protein, liquid, zeolite) into small subunits (fragments) in very different ways that are designed to both retain the high accuracy of the chosen quantum mechanical level of theory while greatly reducing the demands on computational time and resources. Each of these methods is inherently scalable and is therefore eminently capable of taking advantage of massively parallel computer hardware while retaining the accuracy of the corresponding electronic structure method from which it is derived. The effective fragment potential (EFP) method is a sophisticated approach for the prediction of nonbonded and intermolecular interactions. Therefore, the EFP method provides a way to further reduce the computational effort while retaining accuracy by treating the far-field interactions in place of the full electronic structure method. The performance of the methods is demonstrated using applications to several systems, including benzene dimer, small organic species, pieces of the alpha helix, water, and ionic liquids.

  15. A new HPLC method for azithromycin quantitation.

    PubMed

    Zubata, Patricia; Ceresole, Rita; Rosasco, Maria Ana; Pizzorno, Maria Teresa

    2002-02-01

    A simple liquid chromatographic method was developed for the estimation of azithromycin as a raw material and in pharmaceutical forms. The sample was chromatographed on a reverse-phase C18 column and eluants were monitored at a wavelength of 215 nm. The method was accurate, precise and sufficiently selective. It is applicable to quantitation, stability and dissolution tests.

  16. Accurate method for computing correlated color temperature.

    PubMed

    Li, Changjun; Cui, Guihua; Melgosa, Manuel; Ruan, Xiukai; Zhang, Yaoju; Ma, Long; Xiao, Kaida; Luo, M Ronnier

    2016-06-27

    For the correlated color temperature (CCT) of a light source to be estimated, a nonlinear optimization problem must be solved. In all previous methods available to compute CCT, the objective function has only been approximated, and their predictions have achieved limited accuracy. For example, different unacceptable CCT values have been predicted for light sources located on the same isotemperature line. In this paper, we propose to compute CCT using the Newton method, which requires the first and second derivatives of the objective function. Following the current recommendation by the International Commission on Illumination (CIE) for the computation of tristimulus values (summations at 1 nm steps from 360 nm to 830 nm), the objective function and its first and second derivatives are explicitly given and used in our computations. Comprehensive tests demonstrate that the proposed method, together with an initial estimation of CCT using Robertson's method [J. Opt. Soc. Am. 58, 1528-1535 (1968)], gives highly accurate predictions, with errors below 0.0012 K, for light sources with CCTs ranging from 500 K to 10⁶ K.
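
    The Newton iteration itself is short. The sketch below minimizes the squared chromaticity distance to the Planckian locus using finite-difference first and second derivatives; planck_uv is a crude toy stand-in so the example runs, whereas a real implementation would evaluate the locus from Planck's law and the CIE 1-nm tables:

      def planck_uv(T):
          """Toy Planckian-locus stand-in (NOT the CIE locus)."""
          t = 1000.0 / T
          return 0.18 + 0.08 * t, 0.30 + 0.05 * t - 0.01 * t * t

      def cct_newton(us, vs, T0=4000.0, tol=1e-3, h=1.0):
          def D(T):                                       # squared distance to locus
              u, v = planck_uv(T)
              return (u - us) ** 2 + (v - vs) ** 2
          T = T0
          for _ in range(50):
              d1 = (D(T + h) - D(T - h)) / (2 * h)        # first derivative
              d2 = (D(T + h) - 2 * D(T) + D(T - h)) / h**2  # second derivative
              step = d1 / d2                              # Newton step on D'(T) = 0
              T -= step
              if abs(step) < tol:
                  break
          return T

      u_target, v_target = planck_uv(6500.0)
      print(round(cct_newton(u_target, v_target), 2))     # recovers ~6500 K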

  17. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  18. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units, an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  19. Second Order Accurate Finite Difference Methods

    DTIC Science & Technology

    1984-08-20

    a study of the idealized material has direct applications to some polymer structures (4, 5). Wave propagation studies in hyperelastic materials have... "Acceleration Wave Propagation in Hyperelastic Rods of Variable Cross-section," Wave Motion, V4, pp. 173-180, 1982. 9. M. Hirao and N. Sugimoto, "...Waves in Hyperelastic Rods," Quart. Appl. Math., V37, pp. 377-399, 1979. 11. G. A. Sod, "A Survey of Several Finite Difference Methods for Systems of...

  20. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
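
    The MAP idea can be sketched independently of the OCT details: pre-compute, by Monte Carlo, the distribution of measured retardation for each true value at a given SNR, then return the most likely true value for a new measurement. The noise model below is a simple stand-in, not the paper's Jones-matrix model:

      import numpy as np

      rng = np.random.default_rng(0)
      true_grid = np.linspace(0.0, 1.0, 101)       # candidate true retardations
      bins = np.linspace(0.0, 1.5, 151)            # measured-value histogram bins

      def simulate_measurement(true_val, snr, n):
          """Toy noise model: magnitude of a noisy signal, biased upward at low SNR."""
          return np.abs(true_val + rng.normal(0.0, (true_val + 0.1) / snr, n))

      # Likelihood table: rows = true value, columns = measured-value bin.
      snr = 4.0
      table = np.array([np.histogram(simulate_measurement(t, snr, 20000),
                                     bins=bins, density=True)[0]
                        for t in true_grid])

      def map_estimate(measured):
          j = np.clip(np.searchsorted(bins, measured) - 1, 0, table.shape[1] - 1)
          return true_grid[np.argmax(table[:, j])]  # MAP with a flat prior

      print(map_estimate(0.35))    # de-biased estimate of the local retardation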

  1. Optimization of sample preparation for accurate results in quantitative NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Yamazaki, Taichi; Nakamura, Satoe; Saito, Takeshi

    2017-04-01

    Quantitative nuclear magnetic resonance (qNMR) spectroscopy has received high marks as an excellent measurement tool that does not require the same reference standard as the analyte. Measurement parameters have been discussed in detail, and high-resolution balances have been used for sample preparation. However, high-resolution balances, such as the ultra-microbalance, are not general-purpose analytical tools, and many analysts may find such balances difficult to use, thereby hindering accurate sample preparation for qNMR measurement. In this study, we examined the relationship between the resolution of the balance and the amount of sample weighed during sample preparation. We were able to confirm the accuracy of the assay results for samples weighed on a high-resolution balance, such as the ultra-microbalance. Furthermore, when an appropriate tare and amount of sample were weighed on a given balance, accurate assay results were obtained with another high-resolution balance. Although this is a fundamental result, it offers important evidence that would enhance the versatility of the qNMR method.
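
    The reason weighing accuracy matters is visible in the standard qNMR purity relation, sketched below. The equation is the standard one; the numerical values are illustrative, not from this study:

      def qnmr_purity(I_a, I_ref, N_a, N_ref, M_a, M_ref, m_a, m_ref, P_ref):
          """P_a = (I_a/I_ref) * (N_ref/N_a) * (M_a/M_ref) * (m_ref/m_a) * P_ref"""
          return (I_a / I_ref) * (N_ref / N_a) * (M_a / M_ref) * (m_ref / m_a) * P_ref

      purity = qnmr_purity(I_a=0.680, I_ref=1.000,    # integrated signal areas
                           N_a=2, N_ref=4,            # protons behind each signal
                           M_a=151.16, M_ref=204.22,  # molar masses, g/mol
                           m_a=10.21, m_ref=10.05,    # weighed masses, mg
                           P_ref=0.9995)              # certified reference purity
      print(f"analyte purity: {purity:.4f}")          # ~0.99; weighing errors in
                                                      # m_a or m_ref scale directly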

  2. FANSe: an accurate algorithm for quantitative mapping of large scale sequencing reads

    PubMed Central

    Zhang, Gong; Fedyunin, Ivan; Kirchner, Sebastian; Xiao, Chuanle; Valleriani, Angelo; Ignatova, Zoya

    2012-01-01

    The most crucial step in data processing from high-throughput sequencing applications is the accurate and sensitive alignment of the sequencing reads to reference genomes or transcriptomes. The accurate detection of insertions and deletions (indels) and of errors introduced by the sequencing platform or by misreading of modified nucleotides is essential for the quantitative processing of RNA-based sequencing (RNA-Seq) datasets and for the identification of genetic variations and modification patterns. We developed a new, fast and accurate algorithm for nucleic acid sequence analysis, FANSe, with adjustable mismatch allowance settings and the ability to handle indels, to accurately and quantitatively map millions of reads to small or large reference genomes. It is a seed-based algorithm which uses the whole read information for mapping, and high sensitivity and low ambiguity are achieved by using short and non-overlapping reads. Furthermore, FANSe uses a hotspot score to prioritize the processing of highly probable matches and implements a modified Smith–Waterman refinement with a reduced scoring matrix to accelerate the calculation without compromising its sensitivity. The FANSe algorithm stably processes datasets from various sequencing platforms, masked or unmasked, and small or large genomes. It shows a remarkable coverage of low-abundance mRNAs, which is important for quantitative processing of RNA-Seq datasets. PMID:22379138
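
    The seed-voting idea can be illustrated with a didactic toy (this is not the FANSe implementation): index the reference by k-mers, let non-overlapping seeds from each read vote for candidate loci, then verify the best locus by a whole-read mismatch count:

      from collections import defaultdict

      K = 4  # seed length (toy value; real mappers use much longer seeds)

      def build_index(ref):
          index = defaultdict(list)
          for i in range(len(ref) - K + 1):
              index[ref[i:i + K]].append(i)
          return index

      def map_read(read, ref, index, max_mismatch=2):
          votes = defaultdict(int)
          for s in range(0, len(read) - K + 1, K):       # non-overlapping seeds
              for pos in index[read[s:s + K]]:
                  votes[pos - s] += 1                    # vote for a read start
          for start, _ in sorted(votes.items(), key=lambda kv: -kv[1]):
              if 0 <= start <= len(ref) - len(read):
                  mm = sum(a != b for a, b in zip(read, ref[start:start + len(read)]))
                  if mm <= max_mismatch:                 # whole-read verification
                      return start, mm
          return None

      ref = "ACGTACGGTACCGATTACAGGCATT"
      idx = build_index(ref)
      print(map_read("GGTACCGATTGC", ref, idx))          # -> (6, 1)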

  3. FANSe: an accurate algorithm for quantitative mapping of large scale sequencing reads.

    PubMed

    Zhang, Gong; Fedyunin, Ivan; Kirchner, Sebastian; Xiao, Chuanle; Valleriani, Angelo; Ignatova, Zoya

    2012-06-01

    The most crucial step in data processing from high-throughput sequencing applications is the accurate and sensitive alignment of the sequencing reads to reference genomes or transcriptomes. The accurate detection of insertions and deletions (indels) and of errors introduced by the sequencing platform or by misreading of modified nucleotides is essential for the quantitative processing of RNA-based sequencing (RNA-Seq) datasets and for the identification of genetic variations and modification patterns. We developed a new, fast and accurate algorithm for nucleic acid sequence analysis, FANSe, with adjustable mismatch allowance settings and the ability to handle indels, to accurately and quantitatively map millions of reads to small or large reference genomes. It is a seed-based algorithm which uses the whole read information for mapping, and high sensitivity and low ambiguity are achieved by using short and non-overlapping reads. Furthermore, FANSe uses a hotspot score to prioritize the processing of highly probable matches and implements a modified Smith-Waterman refinement with a reduced scoring matrix to accelerate the calculation without compromising its sensitivity. The FANSe algorithm stably processes datasets from various sequencing platforms, masked or unmasked, and small or large genomes. It shows a remarkable coverage of low-abundance mRNAs, which is important for quantitative processing of RNA-Seq datasets.

  4. A method to accurately quantitate intensities of (32)P-DNA bands when multiple bands appear in a single lane of a gel is used to study dNTP insertion opposite a benzo[a]pyrene-dG adduct by Sulfolobus DNA polymerases Dpo4 and Dbh.

    PubMed

    Sholder, Gabriel; Loechler, Edward L

    2015-01-01

    Quantitating relative (32)P-band intensity in gels is desired, e.g., to study primer-extension kinetics of DNA polymerases (DNAPs). Following imaging, multiple (32)P-bands are often present in lanes. Though individual bands appear by eye to be simple and well-resolved, scanning reveals they are actually skewed-Gaussian in shape and neighboring bands are overlapping, which complicates quantitation, because slower migrating bands often have considerable contributions from the trailing edges of faster migrating bands. A method is described to accurately quantitate adjacent (32)P-bands, which relies on having a standard: a simple skewed-Gaussian curve from an analogous pure, single-component band (e.g., primer alone). This single-component scan/curve is superimposed on its corresponding band in an experimentally determined scan/curve containing multiple bands (e.g., generated in a primer-extension reaction); intensity exceeding the single-component scan/curve is attributed to other components (e.g., insertion products). Relative areas/intensities are determined via pixel analysis, from which relative molarity of components is computed. Common software is used. Commonly used alternative methods (e.g., drawing boxes around bands) are shown to be less accurate. Our method was used to study kinetics of dNTP primer-extension opposite a benzo[a]pyrene-N(2)-dG-adduct with four DNAPs, including Sulfolobus solfataricus Dpo4 and Sulfolobus acidocaldarius Dbh. Vmax/Km is similar for correct dCTP insertion with Dpo4 and Dbh. Compared to Dpo4, Dbh misinsertion is slower for dATP (∼20-fold), dGTP (∼110-fold) and dTTP (∼6-fold), due to decreases in Vmax. These findings provide support that Dbh is in the same Y-Family DNAP class as eukaryotic DNAP κ and bacterial DNAP IV, which accurately bypass N(2)-dG adducts, as well as establish the scan-method described herein as an accurate method to quantitate relative intensity of overlapping bands in a single lane, whether generated
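
    The standard-curve subtraction can be sketched as follows: fit the pure single-component scan with a skewed Gaussian, scale it to the corresponding band in the multi-band lane, and attribute the excess intensity to the other components. All shapes and numbers below are synthetic:

      import numpy as np
      from scipy.special import erf
      from scipy.optimize import curve_fit
      from scipy.integrate import trapezoid

      def skew_gauss(x, amp, mu, sigma, alpha):
          z = (x - mu) / sigma
          return amp * np.exp(-0.5 * z**2) * (1 + erf(alpha * z / np.sqrt(2)))

      x = np.linspace(0, 60, 600)
      # Synthetic lane: primer band at 20 with a trailing edge, product at 30.
      lane = skew_gauss(x, 1.0, 20, 2.0, 3.0) + skew_gauss(x, 0.4, 30, 2.0, 3.0)
      standard = skew_gauss(x, 0.8, 20, 2.0, 3.0)   # pure single-component scan

      # Fit the standard, then scale it to match the primer band in the lane.
      popt, _ = curve_fit(skew_gauss, x, standard, p0=(0.8, 20, 2.0, 2.0))
      peak = np.argmax(standard)
      scale = lane[peak] / skew_gauss(x[peak], *popt)
      primer_component = scale * skew_gauss(x, *popt)
      excess = np.clip(lane - primer_component, 0, None)  # insertion products

      total = trapezoid(lane, x)
      print("primer fraction: ", trapezoid(primer_component, x) / total)
      print("product fraction:", trapezoid(excess, x) / total)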

  5. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
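
    A toy example makes the contrast with the Taylor series concrete. For a response behaving like delta(h) ~ 1/h^3 (an assumed stand-in for a beam response, not the paper's test case), the sensitivity relation d(delta)/dh = -3*delta/h, read as an ODE in the design variable and integrated, recovers the exact response where the linear Taylor approximation fails badly:

      import numpy as np
      from scipy.integrate import solve_ivp

      h0, delta0 = 2.0, 1.0          # baseline height and deflection (toy units)

      def sensitivity(h, delta):     # d(delta)/dh from a sensitivity analysis
          return -3.0 * delta / h

      h_new = 3.0                    # a 50% design change
      deb = solve_ivp(sensitivity, (h0, h_new), [delta0], rtol=1e-8).y[0, -1]
      taylor = delta0 + sensitivity(h0, delta0) * (h_new - h0)   # linear Taylor
      exact = delta0 * (h0 / h_new) ** 3

      print(f"exact {exact:.4f}  DEB {deb:.4f}  Taylor {taylor:.4f}")
      # exact 0.2963  DEB 0.2963  Taylor -0.5000 -> DEB tracks the true response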

  6. Accurate detection and quantitation of heteroplasmic mitochondrial point mutations by pyrosequencing.

    PubMed

    White, Helen E; Durston, Victoria J; Seller, Anneke; Fratter, Carl; Harvey, John F; Cross, Nicholas C P

    2005-01-01

    Disease-causing mutations in mitochondrial DNA (mtDNA) are typically heteroplasmic, and therefore the interpretation of genetic tests for mitochondrial disorders can be problematic. Detection of low-level heteroplasmy is technically demanding, and it is often difficult to discriminate between the absence of a mutation and the failure of a technique to detect the mutation in a particular tissue. The reliable measurement of heteroplasmy in different tissues may help identify individuals who are at risk of developing specific complications and allow improved prognostic advice for patients and family members. We have evaluated Pyrosequencing technology for the detection and estimation of heteroplasmy for six mitochondrial point mutations associated with the following diseases: Leber's hereditary optic neuropathy (LHON), G3460A, G11778A, and T14484C; mitochondrial encephalopathy with lactic acidosis and stroke-like episodes (MELAS), A3243G; myoclonus epilepsy with ragged red fibers (MERRF), A8344G; and neurogenic muscle weakness, ataxia, and retinitis pigmentosa (NARP)/Leigh syndrome, T8993G/C. Results obtained from the Pyrosequencing assays for 50 patients with presumptive mitochondrial disease were compared to those obtained using the commonly used diagnostic technique of polymerase chain reaction (PCR) and restriction enzyme digestion. The Pyrosequencing assays provided accurate genotyping and quantitative determination of mutational load with a sensitivity and specificity of 100%. The MELAS A3243G mutation was detected reliably at a level of 1% heteroplasmy. We conclude that Pyrosequencing is a rapid and robust method for detecting heteroplasmic mitochondrial point mutations.
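
    The heteroplasmy readout itself is a one-line ratio of allele peak signals; the sketch below uses illustrative peak values, not assay data:

      def heteroplasmy_percent(mutant_signal, wildtype_signal):
          return 100.0 * mutant_signal / (mutant_signal + wildtype_signal)

      # e.g., a MELAS A3243G assay: dispensation peak heights for G (mutant)
      # and A (wild type) at the variant position.
      print(f"{heteroplasmy_percent(12.4, 87.1):.1f}% mutant load")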

  7. Foucault test: a quantitative evaluation method.

    PubMed

    Rodríguez, Gustavo; Villa, Jesús; Ivanov, Rumen; González, Efrén; Martínez, Geminiano

    2016-08-01

    Reliable and accurate testing methods are essential to guiding the polishing process during the figuring of optical telescope mirrors. With the natural advancement of technology, the procedures and instruments used to carry out this delicate task have consistently increased in sensitivity, but also in complexity and cost. Fortunately, throughout history, the Foucault knife-edge test has shown the potential to measure transverse aberrations on the order of the wavelength, mainly when described in terms of physical theory, which allows a quantitative interpretation of its characteristic shadowmaps. Our previous publication on this topic derived a closed mathematical formulation that directly relates the knife-edge position with the observed irradiance pattern. The present work addresses the quite unexplored problem of estimating the wavefront's gradient from experimental captures of the test, which is achieved by means of an optimization algorithm featuring a proposed ad hoc cost function. The partial derivatives thereby calculated are then integrated by means of a Fourier-based algorithm to retrieve the mirror's actual surface profile. To date and to the best of our knowledge, this is the very first time that a complete, mathematically grounded treatment of this optical phenomenon has been presented, complemented by an image-processing algorithm which allows a quantitative calculation of the corresponding slope at any given point of the mirror's surface, so that it becomes possible to accurately estimate the aberrations present in the analyzed concave device just through its associated foucaultgrams.
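
    The final integration step can be sketched with a Fourier (Frankot-Chellappa style) least-squares integrator, which assumes periodic boundaries. The synthetic surface below is a round-trip check, not data from the paper:

      import numpy as np

      def integrate_gradients(gx, gy):
          """Recover a surface from per-pixel gradients via FFT least squares."""
          ny, nx = gx.shape
          wx = np.fft.fftfreq(nx) * 2 * np.pi
          wy = np.fft.fftfreq(ny) * 2 * np.pi
          WX, WY = np.meshgrid(wx, wy)
          denom = WX**2 + WY**2
          denom[0, 0] = 1.0                    # avoid dividing the DC term by zero
          Z = (-1j * WX * np.fft.fft2(gx) - 1j * WY * np.fft.fft2(gy)) / denom
          Z[0, 0] = 0.0                        # surface is defined up to a constant
          return np.real(np.fft.ifft2(Z))

      # Round-trip check on a smooth synthetic "mirror" surface.
      k = 2 * np.pi / 64                       # per-pixel angular frequency
      y, x = np.mgrid[0:64, 0:64] * k
      z = 0.1 * np.sin(x) * np.cos(y)
      gx = 0.1 * k * np.cos(x) * np.cos(y)     # analytic dz/dx (per pixel)
      gy = -0.1 * k * np.sin(x) * np.sin(y)    # analytic dz/dy (per pixel)
      print(np.allclose(integrate_gradients(gx, gy), z - z.mean(), atol=1e-8))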

  8. Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method

    NASA Astrophysics Data System (ADS)

    Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben

    2010-05-01

    Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high quality, quantitative measurements of methane fluxes in these different environments have not been available, due both to the lack of robust field-deployable instrumentation and to the fact that most significant sources of methane extend over large areas (from tens to millions of square meters) and are heterogeneous emitters, i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the same vicinity as the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well mixed). In this regime, the methane emissions are given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer. We present detailed methane flux
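
    In the well-mixed far field the flux computation is a single ratio. The sketch below applies it with an acetylene tracer and illustrative, background-subtracted concentrations:

      M_CH4, M_C2H2 = 16.04, 26.04    # g/mol, methane and acetylene tracer

      def methane_flux_kg_h(ch4_ppb, tracer_ppb, tracer_release_kg_h):
          """Far-field concentrations (background-subtracted), molar ratio
          scaled by the known tracer release and converted to a mass rate."""
          molar_ratio = ch4_ppb / tracer_ppb
          return molar_ratio * tracer_release_kg_h * (M_CH4 / M_C2H2)

      # e.g., 85 ppb CH4 enhancement vs 12 ppb tracer, 2.0 kg/h acetylene release
      print(f"{methane_flux_kg_h(85.0, 12.0, 2.0):.1f} kg CH4 / h")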

  9. A Simple and Accurate Method for Measuring Enzyme Activity.

    ERIC Educational Resources Information Center

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  10. Accurate Quantitative Sensing of Intracellular pH based on Self-ratiometric Upconversion Luminescent Nanoprobe

    NASA Astrophysics Data System (ADS)

    Li, Cuixia; Zuo, Jing; Zhang, Li; Chang, Yulei; Zhang, Youlin; Tu, Langping; Liu, Xiaomin; Xue, Bin; Li, Qiqing; Zhao, Huiying; Zhang, Hong; Kong, Xianggui

    2016-12-01

    Accurate quantitation of intracellular pH (pHi) is of great importance in revealing cellular activities and providing early warning of diseases. A series of fluorescence-based nano-bioprobes composed of different nanoparticles or/and dye pairs have already been developed for pHi sensing. Till now, biological auto-fluorescence background upon UV-Vis excitation and severe photo-bleaching of dyes are the two main factors impeding the accurate quantitative detection of pHi. Herein, we have developed a self-ratiometric luminescence nanoprobe based on Förster resonant energy transfer (FRET) for probing pHi, in which pH-sensitive fluorescein isothiocyanate (FITC) and upconversion nanoparticles (UCNPs) served as the energy acceptor and donor, respectively. Under 980 nm excitation, upconversion emission bands at 475 nm and 645 nm of NaYF4:Yb3+, Tm3+ UCNPs were used as the pHi response and self-ratiometric reference signals, respectively. This direct quantitative sensing approach circumvents the traditional software-based post-processing of images, which may lead to relatively large uncertainty in the results. Due to efficient FRET and a fluorescence-free background, highly sensitive and accurate sensing has been achieved, featuring a response of 3.56 per unit change in pHi over the range 3.0-7.0 with a deviation of less than 0.43. This approach should facilitate research in pHi-related areas and the development of intracellular drug delivery systems.
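
    The ratiometric readout reduces to a linear calibration of the I475/I645 ratio against buffered pH standards. The calibration points below are illustrative values chosen to mimic the reported ~3.56-per-pH-unit response; they are not the paper's data:

      import numpy as np

      # Calibration: measured intensity ratios I475/I645 at buffered pH values.
      cal_ph    = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
      cal_ratio = np.array([14.9, 11.6, 7.8, 4.4, 0.7])

      slope, intercept = np.polyfit(cal_ph, cal_ratio, 1)   # linear calibration

      def ph_from_ratio(i475, i645):
          """Invert the calibration to estimate pHi from the two bands."""
          return (i475 / i645 - intercept) / slope

      print(f"slope: {slope:.2f} per pH unit")
      print(f"estimated pHi: {ph_from_ratio(i475=620.0, i645=95.0):.2f}")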

  11. Accurate Quantitative Sensing of Intracellular pH based on Self-ratiometric Upconversion Luminescent Nanoprobe

    PubMed Central

    Li, Cuixia; Zuo, Jing; Zhang, Li; Chang, Yulei; Zhang, Youlin; Tu, Langping; Liu, Xiaomin; Xue, Bin; Li, Qiqing; Zhao, Huiying; Zhang, Hong; Kong, Xianggui

    2016-01-01

    Accurate quantitation of intracellular pH (pHi) is of great importance in revealing cellular activities and providing early warning of diseases. A series of fluorescence-based nano-bioprobes composed of different nanoparticles or/and dye pairs have already been developed for pHi sensing. Till now, biological auto-fluorescence background upon UV-Vis excitation and severe photo-bleaching of dyes are the two main factors impeding the accurate quantitative detection of pHi. Herein, we have developed a self-ratiometric luminescence nanoprobe based on Förster resonant energy transfer (FRET) for probing pHi, in which pH-sensitive fluorescein isothiocyanate (FITC) and upconversion nanoparticles (UCNPs) served as the energy acceptor and donor, respectively. Under 980 nm excitation, upconversion emission bands at 475 nm and 645 nm of NaYF4:Yb3+, Tm3+ UCNPs were used as the pHi response and self-ratiometric reference signals, respectively. This direct quantitative sensing approach circumvents the traditional software-based post-processing of images, which may lead to relatively large uncertainty in the results. Due to efficient FRET and a fluorescence-free background, highly sensitive and accurate sensing has been achieved, featuring a response of 3.56 per unit change in pHi over the range 3.0-7.0 with a deviation of less than 0.43. This approach should facilitate research in pHi-related areas and the development of intracellular drug delivery systems. PMID:27934889

  12. A fluorescence-based quantitative real-time PCR assay for accurate Pocillopora damicornis species identification

    NASA Astrophysics Data System (ADS)

    Thomas, Luke; Stat, Michael; Evans, Richard D.; Kennington, W. Jason

    2016-09-01

    Pocillopora damicornis is one of the most extensively studied coral species globally, but high levels of phenotypic plasticity within the genus make species identification based on morphology alone unreliable. As a result, there is a compelling need to develop cheap and time-effective molecular techniques capable of accurately distinguishing P. damicornis from other congeneric species. Here, we develop a fluorescence-based quantitative real-time PCR (qPCR) assay to genotype a single nucleotide polymorphism that accurately distinguishes P. damicornis from other morphologically similar Pocillopora species. We trial the assay across colonies representing multiple Pocillopora species and then apply the assay to screen samples of Pocillopora spp. collected at regional scales along the coastline of Western Australia. This assay offers a cheap and time-effective alternative to Sanger sequencing and has broad applications including studies on gene flow, dispersal, recruitment and physiological thresholds of P. damicornis.

  13. Quantitative proteomics using the high resolution accurate mass capabilities of the quadrupole-orbitrap mass spectrometer.

    PubMed

    Gallien, Sebastien; Domon, Bruno

    2014-08-01

    High resolution/accurate mass hybrid mass spectrometers have considerably advanced shotgun proteomics, and the recent introduction of fast sequencing capabilities has expanded their use for targeted approaches. More specifically, the quadrupole-orbitrap instrument has a unique configuration and its new features enable a wide range of experiments. An overview of the analytical capabilities of this instrument is presented, with a focus on its application to quantitative analyses. The high resolution, the trapping capability and the versatility of the instrument have allowed quantitative proteomic workflows to be redefined and new data acquisition schemes to be developed. The initial proteomic applications have shown an improvement of the analytical performance. However, as quantification relies on ion trapping instead of an ion beam, further refinement of the technique can be expected.

  14. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  15. Accurate upwind-monotone (nonoscillatory) methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1992-01-01

    The well-known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second-order accurate in the smooth part of the solution except at extrema, where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes, which are upwind monotone and of uniform second- or third-order accuracy, are then presented. Results for advection with constant speed are shown. It is also shown that the new scheme compares favorably with state-of-the-art methods.

  16. Highly sensitive capillary electrophoresis-mass spectrometry for rapid screening and accurate quantitation of drugs of abuse in urine.

    PubMed

    Kohler, Isabelle; Schappler, Julie; Rudaz, Serge

    2013-05-30

    The combination of capillary electrophoresis (CE) and mass spectrometry (MS) is particularly well adapted to bioanalysis due to its high separation efficiency, selectivity, and sensitivity; its short analytical time; and its low solvent and sample consumption. For clinical and forensic toxicology, a two-step analysis is usually performed: first, a screening step for compound identification, and second, confirmation and/or accurate quantitation in cases of presumed positive results. In this study, a fast and sensitive CE-MS workflow was developed for the screening and quantitation of drugs of abuse in urine samples. A CE with a time-of-flight MS (CE-TOF/MS) screening method was developed using a simple urine dilution and on-line sample preconcentration with pH-mediated stacking. The sample stacking allowed for a high loading capacity (20.5% of the capillary length), leading to limits of detection as low as 2 ng mL(-1) for drugs of abuse. Compound quantitation of positive samples was performed by CE-MS/MS with a triple quadrupole MS equipped with an adapted triple-tube sprayer and an electrospray ionization (ESI) source. The CE-ESI-MS/MS method was validated for two model compounds, cocaine (COC) and methadone (MTD), according to the Guidance of the Food and Drug Administration. The quantitative performance was evaluated for selectivity, response function, the lower limit of quantitation, trueness, precision, and accuracy. COC and MTD detection in urine samples was determined to be accurate over the range of 10-1000 ng mL(-1) and 21-1000 ng mL(-1), respectively.

  17. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.

  18. Accurate Method for Determining Adhesion of Cantilever Beams

    SciTech Connect

    Michalske, T.A.; de Boer, M.P.

    1999-01-08

    Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.

  19. On an efficient and accurate method to integrate restricted three-body orbits

    NASA Technical Reports Server (NTRS)

    Murison, Marc A.

    1989-01-01

    This work is a quantitative analysis of the advantages of the Bulirsch-Stoer (1966) method, demonstrating that this method is certainly worth considering when working with small N dynamical systems. The results, qualitatively suspected by many users, are quantitatively confirmed as follows: (1) the Bulirsch-Stoer extrapolation method is very fast and moderately accurate; (2) regularization of the equations of motion stabilizes the error behavior of the method and is, of course, essential during close approaches; and (3) when applicable, a manifold-correction algorithm reduces numerical errors to the limits of machine accuracy. In addition, for the specific case of the restricted three-body problem, even a small eccentricity for the orbit of the primaries drastically affects the accuracy of integrations, whether regularized or not; the circular restricted problem integrates much more accurately.
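
    The core of the Bulirsch-Stoer method is compact enough to sketch: the modified-midpoint rule evaluated with increasing substep counts, followed by Neville (Richardson) extrapolation in h^2 to zero step size. This is a generic textbook version, not the integrator configuration used in the study:

      import numpy as np

      def modified_midpoint(f, t, y, H, n):
          """Advance y over one big step H using n midpoint substeps."""
          h = H / n
          z0, z1 = y, y + h * f(t, y)
          for m in range(1, n):
              z0, z1 = z1, z0 + 2 * h * f(t + m * h, z1)
          return 0.5 * (z0 + z1 + h * f(t + H, z1))

      def bs_step(f, t, y, H, n_seq=(2, 4, 6, 8, 10, 12)):
          T = []                                   # extrapolation tableau rows
          for i, n in enumerate(n_seq):
              row = [modified_midpoint(f, t, y, H, n)]
              for k in range(i):                   # Neville extrapolation in h^2
                  r = (n_seq[i] / n_seq[i - k - 1]) ** 2
                  row.append(row[k] + (row[k] - T[i - 1][k]) / (r - 1))
              T.append(row)
          return T[-1][-1]

      # Test on y' = -y, y(0) = 1: one big step to t = 1 recovers e^-1 closely.
      f = lambda t, y: -y
      print(bs_step(f, 0.0, np.array([1.0]), 1.0), np.exp(-1.0))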

  20. Mapping methods for computationally efficient and accurate structural reliability

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1992-01-01

    Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of: (1) deterministic structural analyses with fine (convergent) finite element meshes, (2) probabilistic structural analyses with coarse finite element meshes, (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes, and (4) a probabilistic mapping. The results show that the scatter of the probabilistic structural responses and structural reliability can be accurately predicted using a coarse finite element model with proper mapping methods. Therefore, large structures can be analyzed probabilistically using finite element methods.
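
    A toy numerical sketch of the mapping idea (Python, with invented stand-in response functions rather than finite element models): a few paired deterministic runs establish a coarse-to-fine response map, Monte Carlo sampling is done on the cheap coarse model, and the samples are mapped.

      import numpy as np
      rng = np.random.default_rng(0)

      # Stand-ins for a structural response (e.g., tip displacement) from a
      # coarse and a fine (convergent) mesh. Purely illustrative: the coarse
      # model is biased but strongly correlated with the fine one.
      fine   = lambda x: 1.00 * x ** 1.5
      coarse = lambda x: 0.85 * x ** 1.5 + 0.02

      # (1) A handful of deterministic paired runs establish the mapping.
      xd = np.linspace(0.5, 2.0, 5)
      p = np.polyfit(coarse(xd), fine(xd), 2)   # coarse -> fine response map

      # (2) Cheap Monte Carlo on the coarse model with a random input.
      x = rng.lognormal(mean=0.0, sigma=0.15, size=100_000)
      mapped = np.polyval(p, coarse(x))

      # (3) Compare mapped statistics against direct fine-model sampling.
      print(mapped.mean(), fine(x).mean())
      print(np.quantile(mapped, 0.999), np.quantile(fine(x), 0.999))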

  1. Development and Validation of a Highly Accurate Quantitative Real-Time PCR Assay for Diagnosis of Bacterial Vaginosis

    PubMed Central

    Smith, William L.; Chadwick, Sean G.; Toner, Geoffrey; Mordechai, Eli; Adelson, Martin E.; Aguin, Tina J.; Sobel, Jack D.

    2016-01-01

    Bacterial vaginosis (BV) is the most common gynecological infection in the United States. Diagnosis based on Amsel's criteria can be challenging and can be aided by laboratory-based testing. A standard method for diagnosis in research studies is enumeration of bacterial morphotypes of a Gram-stained vaginal smear (i.e., Nugent scoring). However, this technique is subjective, requires specialized training, and is not widely available. Therefore, a highly accurate molecular assay for the diagnosis of BV would be of great utility. We analyzed 385 vaginal specimens collected prospectively from subjects who were evaluated for BV by clinical signs and Nugent scoring. We analyzed quantitative real-time PCR (qPCR) assays on DNA extracted from these specimens to quantify nine organisms associated with vaginal health or disease: Gardnerella vaginalis, Atopobium vaginae, BV-associated bacteria 2 (BVAB2, an uncultured member of the order Clostridiales), Megasphaera phylotype 1 or 2, Lactobacillus iners, Lactobacillus crispatus, Lactobacillus gasseri, and Lactobacillus jensenii. We generated a logistic regression model that identified G. vaginalis, A. vaginae, and Megasphaera phylotypes 1 and 2 as the organisms for which quantification provided the most accurate diagnosis of symptomatic BV, as defined by Amsel's criteria and Nugent scoring, with 92% sensitivity, 95% specificity, 94% positive predictive value, and 94% negative predictive value. The inclusion of Lactobacillus spp. did not contribute sufficiently to the quantitative model for symptomatic BV detection. This molecular assay is a highly accurate laboratory tool to assist in the diagnosis of symptomatic BV. PMID:26818677
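
    The modeling step can be sketched in a few lines of Python with scikit-learn; the data below are synthetic stand-ins for the log-transformed qPCR quantities, and the resubstitution statistics printed here are no substitute for the proper validation reported in the paper.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import confusion_matrix

      rng = np.random.default_rng(1)

      # Synthetic stand-in for log10 qPCR loads of the four informative
      # targets (G. vaginalis, A. vaginae, Megasphaera types 1 and 2).
      n = 385
      bv = rng.integers(0, 2, n)                     # Amsel/Nugent reference
      loads = rng.normal(3.0, 1.0, (n, 4)) + 2.5 * bv[:, None]

      model = LogisticRegression().fit(loads, bv)
      pred = model.predict(loads)                    # resubstitution only
      tn, fp, fn, tp = confusion_matrix(bv, pred).ravel()
      print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
      print("PPV", tp / (tp + fp), "NPV", tn / (tn + fn))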

  2. Quantitation of Insulin-Like Growth Factor 1 in Serum by Liquid Chromatography High Resolution Accurate-Mass Mass Spectrometry.

    PubMed

    Ketha, Hemamalini; Singh, Ravinder J

    2016-01-01

    Insulin-like growth factor 1 (IGF-1) is a 70-amino acid peptide hormone which acts as the principal mediator of the effects of growth hormone (GH). Due to the wide variability in the circulating concentration of GH, IGF-1 quantitation is the first step in the diagnosis of GH excess or deficiency. The majority (>95%) of IGF-1 circulates as a ternary complex along with its principal binding protein, insulin-like growth factor binding protein 3 (IGFBP-3), and the acid labile subunit. The assay design approach for IGF-1 quantitation therefore has to include a step to dissociate IGF-1 from its ternary complex. Several commercial assays employ a buffer containing acidified ethanol to achieve this. Despite several modifications, commercially available immunoassays have been shown to suffer from interference from IGFBP-3. Additionally, inter-method comparison between IGF-1 immunoassays has been shown to be suboptimal. Mass spectrometry has been utilized for the quantitation of IGF-1. In this chapter, a liquid chromatography high-resolution accurate-mass mass spectrometry (LC-HRAMS) method for IGF-1 quantitation is described.

  3. Automated Quantitative Nuclear Cardiology Methods

    PubMed Central

    Motwani, Manish; Berman, Daniel S.; Germano, Guido; Slomka, Piotr J.

    2016-01-01

    Quantitative analysis of SPECT and PET has become a major part of nuclear cardiology practice. Current software tools can automatically segment the left ventricle, quantify function, establish myocardial perfusion maps and estimate global and local measures of stress/rest perfusion – all with minimal user input. State-of-the-art automated techniques have been shown to offer high diagnostic accuracy for detecting coronary artery disease, as well as predict prognostic outcomes. This chapter briefly reviews these techniques, highlights several challenges and discusses the latest developments. PMID:26590779

  4. CT Scan Method Accurately Assesses Humeral Head Retroversion

    PubMed Central

    Boileau, P.; Mazzoleni, N.; Walch, G.; Urien, J. P.

    2008-01-01

    Humeral head retroversion is not well described, and the literature is controversial regarding the accuracy of measurement methods and the ranges of normal values. We therefore determined normal humeral head retroversion and assessed the measurement methods. We measured retroversion in 65 cadaveric humeri, including 52 paired specimens, using four methods: radiographic, computed tomography (CT) scan, computer-assisted, and direct methods. We also assessed the distance between the humeral head central axis and the bicipital groove. CT scan methods accurately measure humeral head retroversion, while radiographic methods do not. Retroversion was 17.9° with respect to the transepicondylar axis and 21.5° with respect to the trochlear tangent axis. The difference between the right and left humeri was 8.9°. The distance between the central axis of the humeral head and the bicipital groove was 7.0 mm and was consistent between right and left humeri. Humeral head retroversion may be most accurately obtained using the patient's own anatomic landmarks or, if those are not identifiable, the retroversion measured from those landmarks on the contralateral side or from the bicipital groove.

  5. There's plenty of gloom at the bottom: the many challenges of accurate quantitation in size-based oligomeric separations.

    PubMed

    Striegel, André M

    2013-11-01

    There is a variety of small-molecule species (e.g., tackifiers, plasticizers, oligosaccharides) the size-based characterization of which is of considerable scientific and industrial importance. Likewise, quantitation of the amount of oligomers in a polymer sample is crucial for the import and export of substances into the USA and European Union (EU). While the characterization of ultra-high molar mass macromolecules by size-based separation techniques is generally considered a challenge, it is this author's contention that a greater challenge is encountered when trying to perform, for quantitation purposes, separations in and of the oligomeric region. The latter thesis is expounded herein, by detailing the various obstacles encountered en route to accurate, quantitative oligomeric separations by entropically dominated techniques such as size-exclusion chromatography, hydrodynamic chromatography, and asymmetric flow field-flow fractionation, as well as by methods which are, principally, enthalpically driven such as liquid adsorption and temperature gradient interaction chromatography. These obstacles include, among others, the diminished sensitivity of static light scattering (SLS) detection at low molar masses, the non-constancy of the response of SLS and of commonly employed concentration-sensitive detectors across the oligomeric region, and the loss of oligomers through the accumulation wall membrane in asymmetric flow field-flow fractionation. The battle is not lost, however, because, with some care and given a sufficient supply of sample, the quantitation of both individual oligomeric species and of the total oligomeric region is often possible.

  6. An Accurate and Efficient Method of Computing Differential Seismograms

    NASA Astrophysics Data System (ADS)

    Hu, S.; Zhu, L.

    2013-12-01

    Inversion of seismic waveforms for Earth structure usually requires computing partial derivatives of seismograms with respect to velocity model parameters. We developed an accurate and efficient method to calculate differential seismograms for multi-layered elastic media, based on the Thomson-Haskell propagator matrix technique. We first derived the partial derivatives of the Haskell matrix and its compound matrix with respect to the layer parameters (P-wave velocity, shear-wave velocity, and density). We then derived the partial derivatives of the surface displacement kernels in the frequency-wavenumber domain. The differential seismograms are obtained by using the frequency-wavenumber double integration method. The implementation is computationally efficient: the total computing time is proportional to the time of computing the seismogram itself, i.e., independent of the number of layers in the model. We verified the correctness of the results by comparing with differential seismograms computed using the finite-difference method. Our results are more accurate because of the analytical nature of the derived partial derivatives.
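
    The core trick, differentiating a product of layer propagator matrices by the product rule rather than by finite differences, can be sketched generically in Python; the 2x2 rotation matrices below are toy stand-ins for the Haskell matrices, used only to verify the analytic derivative against a finite difference.

      import numpy as np

      def product_and_derivative(A, dAk, k):
          """M = A[n-1] ... A[1] A[0] (layer 0 applied first) and its partial
          derivative with respect to a parameter of layer k, via the product
          rule: dM/dp = A[n-1]...A[k+1] @ dA[k]/dp @ A[k-1]...A[0]."""
          n = A[0].shape[0]
          M = np.eye(n)
          for a in A:
              M = a @ M
          left, right = np.eye(n), np.eye(n)
          for a in A[k + 1:]:
              left = a @ left
          for a in A[:k]:
              right = a @ right
          return M, left @ dAk @ right

      # Toy check with rotation matrices, whose derivative is known exactly.
      A_of = lambda q: np.array([[np.cos(q), np.sin(q)], [-np.sin(q), np.cos(q)]])
      dA_of = lambda q: np.array([[-np.sin(q), np.cos(q)], [-np.cos(q), -np.sin(q)]])

      q = 0.3
      layers = [A_of(0.1), A_of(q), A_of(0.7)]
      M, dM = product_and_derivative(layers, dA_of(q), k=1)

      eps = 1e-7   # finite-difference verification
      M2, _ = product_and_derivative([A_of(0.1), A_of(q + eps), A_of(0.7)], dA_of(q), 1)
      print(np.max(np.abs((M2 - M) / eps - dM)))   # ~1e-7 or smaller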

  7. Accurate optical CD profiler based on specialized finite element method

    NASA Astrophysics Data System (ADS)

    Carrero, Jesus; Perçin, Gökhan

    2012-03-01

    As the semiconductor industry is moving to very low-k1 patterning solutions, the metrology problems facing process engineers are becoming much more complex. Choosing the right optical critical dimension (OCD) metrology technique is essential for bridging the metrology gap and achieving the required manufacturing volume throughput. The critical dimension scanning electron microscope (CD-SEM) measurement is usually distorted by the high aspect ratio of the photoresist and hard mask layers. CD-SEM measurements cease to correlate with complex three-dimensional profiles, such as those arising in double patterning and FinFETs, thus necessitating sophisticated, accurate and fast computational methods to bridge the gap. In this work, a suite of computational methods is developed that complements advanced OCD equipment, enabling it to operate at higher accuracy. In this article, a novel method for accurately modeling OCD profiles is presented. A finite element formulation in primal form is used to discretize the equations. The implementation uses specialized finite element spaces to solve Maxwell's equations in two dimensions.

  8. Method for Accurate Surface Temperature Measurements During Fast Induction Heating

    NASA Astrophysics Data System (ADS)

    Larregain, Benjamin; Vanderesse, Nicolas; Bridier, Florent; Bocher, Philippe; Arkinson, Patrick

    2013-07-01

    A robust method is proposed for the measurement of surface temperature fields during induction heating. It is based on the original coupling of temperature-indicating lacquers and a high-speed camera system. Image analysis tools have been implemented to automatically extract the temporal evolution of isotherms. This method was applied to the fast induction treatment of a 4340 steel spur gear, allowing the full history of surface isotherms to be accurately documented for a sequential heating, i.e., a medium frequency preheating followed by a high frequency final heating. Three isotherms, i.e., 704, 816, and 927°C, were acquired every 0.3 ms with a spatial resolution of 0.04 mm per pixel. The information provided by the method is described and discussed. Finally, the transformation temperature Ac1 is linked to the temperature on specific locations of the gear tooth.

  9. Novel dispersion tolerant interferometry method for accurate measurements of displacement

    NASA Astrophysics Data System (ADS)

    Bradu, Adrian; Maria, Michael; Leick, Lasse; Podoleanu, Adrian G.

    2015-05-01

    We demonstrate that the recently proposed master-slave interferometry method is able to provide truly dispersion-free depth profiles in a spectrometer-based set-up that can be used for accurate displacement measurements in sensing and optical coherence tomography. The proposed technique is based on correlating the channelled spectra produced by the linear camera in the spectrometer with previously recorded masks. As the technique is not based on Fourier transformations (FT), it does not require any resampling of data and is immune to any amount of dispersion left unbalanced in the system. In order to prove the tolerance of the technique to dispersion, different lengths of optical fiber are used in the interferometer to introduce dispersion, and it is demonstrated that neither the sensitivity profile versus optical path difference (OPD) nor the depth resolution is affected. By contrast, it is shown that the classical FT-based methods using calibrated data provide less accurate optical path length measurements and exhibit a quicker decay of sensitivity with OPD.
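
    A minimal Python sketch of the master-slave decoding idea, with an invented nonlinear wavenumber axis standing in for an uncalibrated spectrometer: masks recorded at known OPDs are correlated directly with a measured channelled spectrum, with no resampling and no Fourier transform.

      import numpy as np

      # Wavenumber axis of the spectrometer camera, deliberately nonlinear
      # in k to mimic an uncalibrated instrument (illustrative values).
      pix = np.arange(1024)
      k = 7.0 + 1.0 * (pix / 1023) + 0.05 * (pix / 1023) ** 2   # rad/um

      channelled = lambda opd: np.cos(k * opd)  # channelled spectrum model

      # "Master" step: record masks at known optical path differences.
      opd_grid = np.linspace(20, 60, 201)       # um, 0.2-um spacing
      masks = np.array([channelled(z) for z in opd_grid])

      # "Slave" step: an unknown spectrum is decoded by correlating it with
      # the stored masks -- no resampling to uniform k, no FFT.
      rng = np.random.default_rng(2)
      measured = channelled(41.4) + 0.05 * rng.normal(size=k.size)
      amplitude = masks @ measured              # correlation at each OPD
      print(opd_grid[np.argmax(np.abs(amplitude))])   # ~41.4 um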

  10. Accurate radiation temperature and chemical potential from quantitative photoluminescence analysis of hot carrier populations

    NASA Astrophysics Data System (ADS)

    Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François

    2017-02-01

    In order to characterize hot carrier populations in semiconductors, photoluminescence measurement is a convenient tool, enabling us to probe the carrier thermodynamical properties in a contactless way. However, the analysis of the photoluminescence spectra is based on some assumptions which will be discussed in this work. We especially emphasize the importance of the variation of the material absorptivity that should be considered to access accurate thermodynamical properties of the carriers, especially by varying the excitation power. The proposed method enables us to obtain more accurate results of thermodynamical properties by taking into account a rigorous physical description and finds direct application in investigating hot carrier solar cells, which are an adequate concept for achieving high conversion efficiencies with a relatively simple device architecture.

  11. Accurate radiation temperature and chemical potential from quantitative photoluminescence analysis of hot carrier populations.

    PubMed

    Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François

    2017-02-15

    In order to characterize hot carrier populations in semiconductors, photoluminescence measurement is a convenient tool, enabling us to probe the carrier thermodynamical properties in a contactless way. However, the analysis of the photoluminescence spectra is based on some assumptions which will be discussed in this work. We especially emphasize the importance of the variation of the material absorptivity that should be considered to access accurate thermodynamical properties of the carriers, especially by varying the excitation power. The proposed method enables us to obtain more accurate results of thermodynamical properties by taking into account a rigorous physical description and finds direct application in investigating hot carrier solar cells, which are an adequate concept for achieving high conversion efficiencies with a relatively simple device architecture.
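
    The standard analysis the authors build on can be sketched as follows (Python, synthetic data): in the Boltzmann limit of the generalized Planck law, log(I/E²) is linear in photon energy E, the slope giving the carrier temperature. The sketch assumes a constant absorptivity over the fit window, which is exactly the assumption the paper warns can bias the extracted temperature and chemical potential.

      import numpy as np

      kB = 8.617e-5            # Boltzmann constant, eV/K

      # Synthetic high-energy tail of a PL spectrum in the Boltzmann limit:
      # I(E) ~ A(E) * E**2 * exp(-(E - dmu) / (kB * T)).
      E = np.linspace(1.45, 1.60, 80)           # eV, above the band gap
      T_true, dmu_true, A = 450.0, 1.10, 0.3    # hot carriers, illustrative
      I = A * E**2 * np.exp(-(E - dmu_true) / (kB * T_true))

      # Linearize: log(I / E**2) = const - E/(kB*T); the slope gives T.
      slope, intercept = np.polyfit(E, np.log(I / E**2), 1)
      T_fit = -1.0 / (kB * slope)
      print(T_fit)   # recovers 450 K only because A is truly constant here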

  12. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Padé approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
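
    The gain from Padé-type (compact) stencils can be seen in a one-dimensional dispersion analysis for u'' + k²u = 0. The Python sketch below (not the paper's multidimensional schemes) compares the numerical wavenumber of the standard second-order stencil against a classical fourth-order compact stencil.

      import numpy as np

      # Numerical phase error per wavelength for two stencils applied to
      # u'' + k^2 u = 0 on a uniform grid (plane-wave dispersion analysis).
      def ktilde_standard(kh):
          # (u[j-1] - 2u[j] + u[j+1])/h^2 + k^2 u[j] = 0
          return np.arccos(1.0 - kh**2 / 2.0)

      def ktilde_compact(kh):
          # 4th-order compact (Pade-type) stencil:
          # (u[j-1] - 2u[j] + u[j+1])/h^2 + k^2 (u[j-1] + 10u[j] + u[j+1])/12 = 0
          return np.arccos((1.0 - 5.0 * kh**2 / 12.0) / (1.0 + kh**2 / 12.0))

      for ppw in [6, 10, 20]:                  # grid points per wavelength
          kh = 2.0 * np.pi / ppw
          e2 = abs(ktilde_standard(kh) - kh) / kh
          e4 = abs(ktilde_compact(kh) - kh) / kh
          print(f"{ppw:3d} ppw: 2nd-order {e2:.2e}, compact 4th-order {e4:.2e}")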

  13. Qualitative versus quantitative methods in psychiatric research.

    PubMed

    Razafsha, Mahdi; Behforuzi, Hura; Azari, Hassan; Zhang, Zhiqun; Wang, Kevin K; Kobeissy, Firas H; Gold, Mark S

    2012-01-01

    Qualitative studies are gaining credibility after a period of being misinterpreted as "not being quantitative." Qualitative method is a broad umbrella term for research methodologies that describe and explain individuals' experiences, behaviors, interactions, and social contexts. In-depth interviews, focus groups, and participant observation are among the qualitative methods of inquiry commonly used in psychiatry. Researchers measure the frequency of events using quantitative methods; however, qualitative methods provide a broader understanding and a more thorough reasoning behind the event. Hence, they are considered to be of special importance in psychiatry. Besides hypothesis generation in earlier phases of the research, qualitative methods can be employed in questionnaire design, diagnostic criteria establishment, feasibility studies, as well as studies of attitudes and beliefs. Animal models are another area in which qualitative methods can be employed, especially when naturalistic observation of animal behavior is important. However, since qualitative results can reflect the researcher's own view, they need to be confirmed statistically using quantitative methods. The tendency to combine both qualitative and quantitative methods as complementary has emerged over recent years. By applying both methods of research, scientists can take advantage of the interpretative characteristics of qualitative methods as well as the experimental dimensions of quantitative methods.

  14. Development and Validation of a Highly Accurate Quantitative Real-Time PCR Assay for Diagnosis of Bacterial Vaginosis.

    PubMed

    Hilbert, David W; Smith, William L; Chadwick, Sean G; Toner, Geoffrey; Mordechai, Eli; Adelson, Martin E; Aguin, Tina J; Sobel, Jack D; Gygax, Scott E

    2016-04-01

    Bacterial vaginosis (BV) is the most common gynecological infection in the United States. Diagnosis based on Amsel's criteria can be challenging and can be aided by laboratory-based testing. A standard method for diagnosis in research studies is enumeration of bacterial morphotypes of a Gram-stained vaginal smear (i.e., Nugent scoring). However, this technique is subjective, requires specialized training, and is not widely available. Therefore, a highly accurate molecular assay for the diagnosis of BV would be of great utility. We analyzed 385 vaginal specimens collected prospectively from subjects who were evaluated for BV by clinical signs and Nugent scoring. We analyzed quantitative real-time PCR (qPCR) assays on DNA extracted from these specimens to quantify nine organisms associated with vaginal health or disease: Gardnerella vaginalis, Atopobium vaginae, BV-associated bacteria 2 (BVAB2, an uncultured member of the order Clostridiales), Megasphaera phylotype 1 or 2, Lactobacillus iners, Lactobacillus crispatus, Lactobacillus gasseri, and Lactobacillus jensenii. We generated a logistic regression model that identified G. vaginalis, A. vaginae, and Megasphaera phylotypes 1 and 2 as the organisms for which quantification provided the most accurate diagnosis of symptomatic BV, as defined by Amsel's criteria and Nugent scoring, with 92% sensitivity, 95% specificity, 94% positive predictive value, and 94% negative predictive value. The inclusion of Lactobacillus spp. did not contribute sufficiently to the quantitative model for symptomatic BV detection. This molecular assay is a highly accurate laboratory tool to assist in the diagnosis of symptomatic BV.

  15. Quantitative Hydrocarbon Energies from the PMO Method.

    ERIC Educational Resources Information Center

    Cooper, Charles F.

    1979-01-01

    Details a procedure for accurately calculating the quantum mechanical energies of hydrocarbons using the perturbational molecular orbital (PMO) method, which does not require the use of a computer. (BT)

  16. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    PubMed Central

    Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua

    2015-01-01

    In structured light measurement systems or 3D printing systems, the errors caused by the optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by the curve fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages of traditional methods and achieves a higher accuracy. The proposed method is also practically applicable for evaluating the geometric optical performance of other optical projection systems.

  17. A Global Approach to Accurate and Automatic Quantitative Analysis of NMR Spectra by Complex Least-Squares Curve Fitting

    NASA Astrophysics Data System (ADS)

    Martin, Y. L.

    The performance of quantitative analysis of 1D NMR spectra depends greatly on the choice of the NMR signal model. Complex least-squares analysis is well suited for optimizing the quantitative determination of spectra containing a limited number of signals (<30) obtained under satisfactory conditions of signal-to-noise ratio (>20). From a general point of view it is concluded, on the basis of mathematical considerations and numerical simulations, that, in the absence of truncation of the free-induction decay, complex least-squares curve fitting either in the time or in the frequency domain and linear-prediction methods are in fact nearly equivalent and give identical results. However, in the situation considered, complex least-squares analysis in the frequency domain is more flexible since it enables the quality of convergence to be appraised at every resonance position. An efficient data-processing strategy has been developed which makes use of an approximate conjugate-gradient algorithm. All spectral parameters (frequency, damping factors, amplitudes, phases, initial delay associated with intensity, and phase parameters of a baseline correction) are simultaneously managed in an integrated approach which is fully automatable. The behavior of the error as a function of the signal-to-noise ratio is theoretically estimated, and the influence of apodization is discussed. The least-squares curve fitting is theoretically proved to be the most accurate approach for quantitative analysis of 1D NMR data acquired with reasonable signal-to-noise ratio. The method enables complex spectral residuals to be sorted out. These residuals, which can be cumulated thanks to the possibility of correcting for frequency shifts and phase errors, extract systematic components, such as isotopic satellite lines, and characterize the shape and the intensity of the spectral distortion with respect to the Lorentzian model. This distortion is shown to be nearly independent of the chemical species
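
    A stripped-down Python illustration of complex least-squares fitting in the frequency domain, for a single Lorentzian line with synthetic noise; a real NMR application would fit many lines plus baseline, delay, and phase parameters simultaneously, as described above.

      import numpy as np
      from scipy.optimize import least_squares

      # Frequency-domain model: one complex Lorentzian with amplitude a,
      # phase phi, resonance frequency f0 and damping factor lam:
      #   S(f) = a * exp(i*phi) / (lam + 2j*pi*(f - f0))
      def model(params, f):
          a, phi, f0, lam = params
          return a * np.exp(1j * phi) / (lam + 2j * np.pi * (f - f0))

      def residuals(params, f, data):
          r = model(params, f) - data
          return np.concatenate([r.real, r.imag])   # fit Re and Im jointly

      f = np.linspace(-50, 50, 2048)                # Hz
      truth = (1.0, 0.3, 5.0, 4.0)
      rng = np.random.default_rng(3)
      data = model(truth, f) + 0.002 * (rng.normal(size=f.size)
                                        + 1j * rng.normal(size=f.size))

      fit = least_squares(residuals, x0=(0.5, 0.0, 4.0, 2.0), args=(f, data))
      print(fit.x)   # close to (1.0, 0.3, 5.0, 4.0); the amplitude a is
                     # proportional to the integral used for quantitation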

  18. SILAC-Based Quantitative Strategies for Accurate Histone Posttranslational Modification Profiling Across Multiple Biological Samples.

    PubMed

    Cuomo, Alessandro; Soldi, Monica; Bonaldi, Tiziana

    2017-01-01

    Histone posttranslational modifications (hPTMs) play a key role in regulating chromatin dynamics and fine-tuning DNA-based processes. Mass spectrometry (MS) has emerged as a versatile technology for the analysis of histones, contributing to the dissection of hPTMs, with special strength in the identification of novel marks and in the assessment of modification cross talk. Stable isotope labeling by amino acids in cell culture (SILAC), when adapted to histones, permits the accurate quantification of PTM changes among distinct functional states; however, its application has been mainly confined to actively dividing cell lines. A spike-in strategy based on SILAC can be used to overcome this limitation and profile hPTMs across multiple samples. We describe here the adaptation of SILAC to the analysis of histones, in both standard and spike-in setups. We also illustrate its coupling to an implemented "shotgun" workflow, by which heavy arginine-labeled histone peptides, produced upon Arg-C digestion, are qualitatively and quantitatively analyzed in an LC-MS/MS system that combines ultrahigh-pressure liquid chromatography (UHPLC) with a new-generation Orbitrap high-resolution instrument.

  19. Ad hoc methods for accurate determination of Bader's atomic boundary

    NASA Astrophysics Data System (ADS)

    Polestshuk, Pavel M.

    2013-08-01

    In addition to the recently published triangulation method [P. M. Polestshuk, J. Comput. Chem. 34, 206 (2013); doi:10.1002/jcc.23121], two new highly accurate approaches, ZFSX and SINTY, for the integration over an atomic region covered by a zero-flux surface (zfs) were developed and efficiently interfaced into the TWOE program. The ZFSX method was realized as three independent modules (ZFSX-1, ZFSX-3, and ZFSX-5) handling interatomic surfaces of different complexity. Details of the algorithmic implementation of ZFSX and SINTY are discussed. Special attention is paid to an extended analysis of errors in the calculation of atomic properties. It was shown that uncertainties in zfs determination caused by the ZFSX and SINTY approaches contribute negligibly (less than 10⁻⁶ a.u.) to the total atomic integration errors. Moreover, the new methods are able to evaluate atomic integrals in reasonable time and can be applied universally to systems of any complexity. It is suggested, therefore, that ZFSX and SINTY can be regarded as benchmark methods for the computation of any Quantum Theory of Atoms in Molecules atomic property.

  20. An accurate moving boundary formulation in cut-cell methods

    NASA Astrophysics Data System (ADS)

    Schneiders, Lennart; Hartmann, Daniel; Meinke, Matthias; Schröder, Wolfgang

    2013-02-01

    A cut-cell method for Cartesian meshes to simulate viscous compressible flows with moving boundaries is presented. We focus on eliminating unphysical oscillations occurring in Cartesian grid methods extended to moving-boundary problems. In these methods, cells either lie completely in the fluid or solid region or are intersected by the boundary. For the latter cells, the time dependent volume fraction lying in the fluid region can be so small that explicit time-integration schemes become unstable and a special treatment of these cells is necessary. When the boundary moves, a fluid cell may become a cut cell or a solid cell may become a small cell at the next time level. This causes an abrupt change in the discretization operator and a suddenly modified truncation error of the numerical scheme. This temporally discontinuous alteration is shown to act like an unphysical source term, which deteriorates the numerical solution, i.e., it generates unphysical oscillations in the hydrodynamic forces exerted on the moving boundary. We develop an accurate moving boundary formulation based on the varying discretization operators yielding a cut-cell method which avoids these discontinuities. Results for canonical two- and three-dimensional test cases evidence the accuracy and robustness of the newly developed scheme.

  1. Ad hoc methods for accurate determination of Bader's atomic boundary.

    PubMed

    Polestshuk, Pavel M

    2013-08-07

    In addition to the recently published triangulation method [P. M. Polestshuk, J. Comput. Chem. 34, 206 (2013)], two new highly accurate approaches, ZFSX and SINTY, for the integration over an atomic region covered by a zero-flux surface (zfs) were developed and efficiently interfaced into the TWOE program. The ZFSX method was realized as three independent modules (ZFSX-1, ZFSX-3, and ZFSX-5) handling interatomic surfaces of different complexity. Details of the algorithmic implementation of ZFSX and SINTY are discussed. Special attention is paid to an extended analysis of errors in the calculation of atomic properties. It was shown that uncertainties in zfs determination caused by the ZFSX and SINTY approaches contribute negligibly (less than 10⁻⁶ a.u.) to the total atomic integration errors. Moreover, the new methods are able to evaluate atomic integrals in reasonable time and can be applied universally to systems of any complexity. It is suggested, therefore, that ZFSX and SINTY can be regarded as benchmark methods for the computation of any Quantum Theory of Atoms in Molecules atomic property.

  2. Accurate measurement method for tube's endpoints based on machine vision

    NASA Astrophysics Data System (ADS)

    Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng

    2017-01-01

    Tubes are used widely in aerospace vehicles, and their accurate assembly directly affects the assembly reliability and the quality of products. It is important to measure the endpoints of a processed tube and then correct any geometric errors accordingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed by using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm feasibility, 11 tubes were processed to remove the reflected light and the endpoint positions of the tubes were measured. The experimental results show that the measurement repeatability is 0.167 mm and the absolute accuracy is 0.328 mm. The measurement takes less than 1 min. The proposed method based on machine vision can measure a tube's endpoints without any surface treatment or special tools and can realize on-line measurement.

  3. Quantitative Methods in Psychology: Inevitable and Useless

    PubMed Central

    Toomela, Aaro

    2010-01-01

    Science begins with the question, what do I want to know? Science becomes science, however, only when this question is justified and the appropriate methodology is chosen for answering the research question. The research question should precede the other questions; methods should be chosen according to the research question and not vice versa. Modern quantitative psychology has accepted method as primary; research questions are adjusted to the methods. For understanding thinking in modern quantitative psychology, two epistemologies should be distinguished: the structural-systemic, which is based on Aristotelian thinking, and the associative-quantitative, which is based on Cartesian–Humean thinking. The first aims at understanding the structure that underlies the studied processes; the second looks for the identification of cause–effect relationships between events, with no possible access to an understanding of the structures that underlie the processes. Quantitative methodology in particular, as well as mathematical psychology in general, is useless for answering questions about the structures and processes that underlie observed behaviors. Nevertheless, quantitative science is almost inevitable in a situation where the systemic-structural basis of behavior is not well understood; all sorts of applied decisions can be made on the basis of quantitative studies. In order to proceed, psychology should study structures; methodologically, constructive experiments should be added to observations and analytic experiments.

  4. Quantitative Methods for Software Selection and Evaluation

    DTIC Science & Technology

Bandor, Michael S.

    2006-09-01

    When performing a "buy" analysis and selecting a product as part of a software acquisition strategy, most organizations will consider primarily...

  5. IRIS: Towards an Accurate and Fast Stage Weight Prediction Method

    NASA Astrophysics Data System (ADS)

    Taponier, V.; Balu, A.

    2002-01-01

    The knowledge of the structural mass fraction (or the mass ratio) of a given stage, which affects the performance of a rocket, is essential for the analysis of new or upgraded launchers or stages, the need for which is increased by the quick evolution of space programs and by the necessity of their adaptation to market needs. The availability of this highly scattered variable, ranging between 0.05 and 0.15, is of primary importance in the early steps of preliminary design studies. At the start of the staging and performance studies, the lack of frozen weight data (to be obtained later from propulsion, trajectory, and sizing studies) forces reliance on rough estimates, generally derived from printed sources and adapted. When needed, a consolidation can be acquired through a specific analysis activity involving several techniques and requiring additional effort and time. The present empirical approach thus yields only approximate values (i.e., not necessarily accurate or consistent), inducing some inaccuracy in the results as well as, consequently, difficulties in ranking the performance of multiple options, and an increase in processing duration. This is a classic difficulty of preliminary design system studies, insufficiently discussed to date. It appears therefore highly desirable to have, for all evaluation activities, a reliable, fast, and easy-to-use weight or mass fraction prediction method. Additionally, the latter should allow for a preselection of alternative preliminary configurations, making a global system approach possible. For that purpose, an attempt at modeling has been undertaken, whose objective was the determination of a parametric formulation of the mass fraction, to be expressed from a limited number of parameters available at the early steps of the project. It is based on the innovative use of a statistical method applicable to a variable as a function of several independent parameters. A specific polynomial generator

  6. Renal Cortical Lactate Dehydrogenase: A Useful, Accurate, Quantitative Marker of In Vivo Tubular Injury and Acute Renal Failure

    PubMed Central

    Zager, Richard A.; Johnson, Ali C. M.; Becker, Kirsten

    2013-01-01

    Studies of experimental acute kidney injury (AKI) are critically dependent on having precise methods for assessing the extent of tubular cell death. However, the most widely used techniques either provide indirect assessments (e.g., BUN, creatinine), suffer from the need for semi-quantitative grading (renal histology), or reflect the status of residual viable cells rather than the number of lost renal tubular cells (e.g., NGAL content). Lactate dehydrogenase (LDH) release is a highly reliable test for assessing degrees of in vitro cell death. However, its utility as an in vivo AKI marker has not been defined. Towards this end, CD-1 mice were subjected to graded renal ischemia (0, 15, 22, 30, 40, or 60 min) or to nephrotoxic (glycerol; maleate) AKI. Sham-operated mice, or mice with AKI in the absence of acute tubular necrosis (ureteral obstruction; endotoxemia), served as negative controls. Renal cortical LDH or NGAL levels were assayed 2 or 24 hrs later. Ischemic, glycerol-, and maleate-induced AKI were each associated with striking, steep, inverse correlations (r = −0.89) between renal injury severity and renal LDH content. With severe AKI, >65% LDH declines were observed. Corresponding prompt plasma and urinary LDH increases were observed. These observations, coupled with the maintenance of normal cortical LDH mRNA levels, indicated that renal LDH efflux, not decreased LDH synthesis, caused the falling cortical LDH levels. Renal LDH content was well maintained with sham surgery, ureteral obstruction, or endotoxemic AKI. In contrast to LDH, renal cortical NGAL levels did not correlate with AKI severity. In sum, the above results indicate that the renal cortical LDH assay is a highly accurate quantitative technique for gauging the extent of experimental acute ischemic and toxic renal injury. That it avoids the limitations of more traditional AKI markers implies great potential utility in experimental studies that require precise quantitation of tubule cell death.

  7. Method and apparatus for chromatographic quantitative analysis

    DOEpatents

    Fritz, James S.; Gjerde, Douglas T.; Schmuckler, Gabriella

    1981-06-09

    An improved apparatus and method for the quantitative analysis of a solution containing a plurality of anion species by ion exchange chromatography which utilizes a single eluent and a single ion exchange bed which does not require periodic regeneration. The solution containing the anions is added to an anion exchange resin bed which is a low-capacity macroreticular polystyrene-divinylbenzene resin containing quaternary ammonium functional groups, and is eluted therefrom with a dilute solution of a low electrical conductance organic acid salt. As each anion species is eluted from the bed, it is quantitatively sensed by conventional detection means such as a conductivity cell.

  8. Electric Field Quantitative Measurement System and Method

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2016-01-01

    A method and system are provided for making a quantitative measurement of an electric field. A plurality of antennas separated from one another by known distances are arrayed in a region that extends in at least one dimension. A voltage difference between at least one selected pair of antennas is measured. Each voltage difference is divided by the known distance associated with the selected pair of antennas corresponding thereto to generate a resulting quantity. The plurality of resulting quantities defined over the region quantitatively describe an electric field therein.
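
    The central arithmetic here is a finite-difference field estimate, E ≈ −ΔV/d, applied to each antenna pair; a trivial Python sketch with invented positions and voltages:

      import numpy as np

      # Antenna positions along one dimension (m) and measured potentials
      # (V); the numbers are illustrative, not from the patent.
      x = np.array([0.00, 0.05, 0.10, 0.15])
      v = np.array([12.0, 10.9, 9.8, 8.7])

      # Field estimate for each adjacent pair: E = -dV/dx ~ -(V2 - V1)/d
      for i in range(len(x) - 1):
          E = -(v[i + 1] - v[i]) / (x[i + 1] - x[i])
          print(f"between {x[i]:.2f} and {x[i + 1]:.2f} m: E = {E:.1f} V/m")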

  9. A fast, accurate, and reliable reconstruction method of the lumbar spine vertebrae using positional MRI.

    PubMed

    Simons, Craig J; Cobb, Loren; Davidson, Bradley S

    2014-04-01

    In vivo measurement of lumbar spine configuration is useful for constructing quantitative biomechanical models. Positional magnetic resonance imaging (MRI) accommodates a larger range of movement in most joints than conventional MRI and does not require a supine position. However, this is achieved at the expense of image resolution and contrast. As a result, quantitative research using positional MRI has required long reconstruction times and is sensitive to incorrectly identifying the vertebral boundary due to low contrast between bone and surrounding tissue in the images. We present a semi-automated method used to obtain digitized reconstructions of lumbar vertebrae in any posture of interest. This method combines a high-resolution reference scan with a low-resolution postural scan to provide a detailed and accurate representation of the vertebrae in the posture of interest. Compared to a criterion standard, translational reconstruction error ranged from 0.7 to 1.6 mm and rotational reconstruction error ranged from 0.3 to 2.6°. Intraclass correlation coefficients indicated high interrater reliability for measurements within the imaging plane (ICC 0.97-0.99). Computational efficiency indicates that this method may be used to compile data sets large enough to account for population variance, and potentially expand the use of positional MRI as a quantitative biomechanics research tool.

  10. Quantitatively accurate activity measurements with a dedicated cardiac SPECT camera: Physical phantom experiments

    SciTech Connect

    Pourmoghaddas, Amir Wells, R. Glenn

    2016-01-15

    Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors, which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail: an increased number of unscattered photons detected with reduced energy. Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to 99mTc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 99mTc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical 99mTc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic "lesion" was placed inside the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake, and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.). A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE
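
    As a concrete example of the "easily implemented" energy-based corrections at issue here, a dual-energy-window (DEW) scatter estimate takes one line of arithmetic; the counts and the scaling factor k below are illustrative, not the paper's calibrated values.

      import numpy as np

      # Dual-energy-window scatter estimate: scatter inside the photopeak
      # window is approximated as k times the counts in a lower scatter
      # window; k ~ 0.5 is the classical default, calibrated per system.
      photopeak = np.array([1200.0, 950.0, 1430.0])   # photopeak counts
      scatter_w = np.array([480.0, 400.0, 610.0])     # scatter-window counts

      k = 0.5
      primary = photopeak - k * scatter_w             # corrected counts
      print(primary)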

  11. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.

    1997-01-01

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.

  12. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, B.A.; Maestre, M.F.; Fish, R.H.; Johnston, W.E.

    1997-09-23

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage. 11 figs.

  13. Automated and quantitative headspace in-tube extraction for the accurate determination of highly volatile compounds from wines and beers.

    PubMed

    Zapata, Julián; Mateo-Vivaracho, Laura; Lopez, Ricardo; Ferreira, Vicente

    2012-03-23

    An automatic headspace in-tube extraction (ITEX) method for the accurate determination of acetaldehyde, ethyl acetate, diacetyl, and other volatile compounds from wine and beer has been developed and validated. Method accuracy is based on the nearly quantitative transfer of volatile compounds from the sample to the ITEX trap. For achieving that goal, most methodological aspects and parameters were carefully examined. The vial and sample sizes and the trapping materials were found to be critical due to the pernicious saturation effects of ethanol. Small 2 mL vials containing very small amounts of sample (20 μL of 1:10 diluted sample) and a trap filled with 22 mg of Bond Elut ENV resins could guarantee complete trapping of sample vapors. The complete extraction requires 100 × 0.5 mL pumping strokes at 60 °C and takes 24 min. Analytes are further desorbed at 240 °C into the GC injector under a 1:5 split ratio. The proportion of analytes finally transferred to the trap ranged from 85 to 99%. The validation of the method showed satisfactory figures of merit. Determination coefficients were better than 0.995 in all cases and good repeatability was also obtained (better than 7% in all cases). Reproducibility was better than 8.3% except for acetaldehyde (13.1%). Detection limits were below the odor detection thresholds of these target compounds in wine and beer and well below the normal ranges of occurrence. Recoveries were not significantly different from 100%, except in the case of acetaldehyde. In that case, it was determined that the method is not able to break some of the adducts that this compound forms with sulfites. However, this problem was avoided by incubating the sample with glyoxal. The method can constitute a general and reliable alternative for the analysis of very volatile compounds in other difficult matrices.

  14. Quantitative Method of Measuring Metastatic Activity

    NASA Technical Reports Server (NTRS)

    Morrison, Dennis R. (Inventor)

    1999-01-01

    The metastatic potential of tumors can be evaluated by the quantitative detection of urokinase and DNA. The cell sample selected for examination is analyzed for the presence of high levels of urokinase and abnormal DNA using analytical flow cytometry and digital image analysis. Other factors such as membrane-associated urokinase, increased DNA synthesis rates, and certain receptors can be used in the method for detection of potentially invasive tumors.

  15. Accurate, fast and cost-effective diagnostic test for monosomy 1p36 using real-time quantitative PCR.

    PubMed

    Cunha, Pricila da Silva; Pena, Heloisa B; D'Angelo, Carla Sustek; Koiffmann, Celia P; Rosenfeld, Jill A; Shaffer, Lisa G; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5-0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs.
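
    The copy-number readout of such an assay is conventionally computed with the 2^(−ΔΔCt) method; a minimal Python sketch with invented Ct values (a heterozygous deletion leaves one copy of each target, so the expected ratio is about 0.5):

      def relative_copy_number(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
          """2^-ddCt relative quantification: target gene vs. a reference
          gene, patient sample vs. a normal-diploid calibrator."""
          ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
          return 2.0 ** (-ddct)

      # Illustrative Ct values (not from the paper): both targets show a
      # ratio near 0.5, consistent with a heterozygous 1p36 deletion.
      for gene, ct in [("PRKCZ", 27.1), ("SKI", 26.8)]:
          r = relative_copy_number(ct, ct_ref=25.0,
                                   ct_target_cal=26.0, ct_ref_cal=25.0)
          print(gene, round(r, 2), "deleted" if r < 0.75 else "normal")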

  16. Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR

    PubMed Central

    Cunha, Pricila da Silva; Pena, Heloisa B.; D'Angelo, Carla Sustek; Koiffmann, Celia P.; Rosenfeld, Jill A.; Shaffer, Lisa G.; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5–0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs. PMID:24839341

  17. Quantitative Analysis of Intra-chromosomal Contacts: The 3C-qPCR Method.

    PubMed

    Ea, Vuthy; Court, Franck; Forné, Thierry

    2017-01-01

    The chromosome conformation capture (3C) technique is fundamental to many population-based methods investigating chromatin dynamics and organization in eukaryotes. Here, we provide a modified quantitative 3C (3C-qPCR) protocol for improved quantitative analyses of intra-chromosomal contacts. We also describe an algorithm for data normalization which allows more accurate comparisons between contact profiles.

  18. Mapping methods for computationally efficient and accurate structural reliability

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1992-01-01

    Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of the following: (1) deterministic structural analyses with fine (convergent) finite element meshes; (2) probabilistic structural analyses with coarse finite element meshes; (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes; and (4) a probabilistic mapping. The results show that the scatter in the probabilistic structural responses and structural reliability can be efficiently predicted using a coarse finite element model and proper mapping methods with good accuracy. Therefore, large structures can be efficiently analyzed probabilistically using finite element methods.

  19. Quantitative methods in assessment of neurologic function.

    PubMed

    Potvin, A R; Tourtellotte, W W; Syndulko, K; Potvin, J

    1981-01-01

    Traditionally, neurologists have emphasized qualitative techniques for assessing results of clinical trials. However, in recent years qualitative evaluations have been increasingly augmented by quantitative tests for measuring neurologic functions pertaining to mental state, strength, steadiness, reactions, speed, coordination, sensation, fatigue, gait, station, and simulated activities of daily living. Quantitative tests have long been used by psychologists for evaluating asymptomatic function, assessing human information processing, and predicting proficiency in skilled tasks; however, their methodology has never been directly assessed for validity in a clinical environment. In this report, relevant contributions from the literature on asymptomatic human performance and that on clinical quantitative neurologic function are reviewed and assessed. While emphasis is focused on tests appropriate for evaluating clinical neurologic trials, evaluations of tests for reproducibility, reliability, validity, and examiner training procedures, and for effects of motivation, learning, handedness, age, and sex are also reported and interpreted. Examples of statistical strategies for data analysis, scoring systems, data reduction methods, and data display concepts are presented. Although investigative work still remains to be done, it appears that carefully selected and evaluated tests of sensory and motor function should be an essential factor for evaluating clinical trials in an objective manner.

  20. Method accurately measures mean particle diameters of monodisperse polystyrene latexes

    NASA Technical Reports Server (NTRS)

    Kubitschek, H. E.

    1967-01-01

    Photomicrographic method determines mean particle diameters of monodisperse polystyrene latexes. Many diameters are measured simultaneously by measuring row lengths of particles in a triangular array at a glass-oil interface. The method provides size standards for electronic particle counters and prevents distortions, softening, and flattening.

  1. How accurate is the Kubelka-Munk theory of diffuse reflection? A quantitative answer

    NASA Astrophysics Data System (ADS)

    Joseph, Richard I.; Thomas, Michael E.

    2012-10-01

    The (heuristic) Kubelka-Munk theory of diffuse reflectance and transmittance of a film on a substrate, which is widely used because it gives simple analytic results, is compared to the rigorous radiative transfer model of Chandrasekhar. The rigorous model must be solved numerically and is thus less intuitive. The Kubelka-Munk theory uses an absorption coefficient and a scatter coefficient as inputs, similar to the rigorous model of Chandrasekhar. The relationship between these two sets of coefficients is addressed. It is shown that the Kubelka-Munk theory is remarkably accurate if one uses the proper albedo parameter.
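
    For reference, the closed-form Kubelka-Munk expressions being tested are easy to state; the Python sketch below evaluates the standard two-flux reflectance of a film on a substrate and checks the semi-infinite limit against the remission function F(R∞) = (1 − R∞)²/(2R∞) = K/S. The coefficient values are arbitrary.

      import numpy as np

      def km_reflectance(K, S, d, Rg):
          """Kubelka-Munk diffuse reflectance of a film of thickness d on a
          substrate of reflectance Rg (standard two-flux result)."""
          a = 1.0 + K / S
          b = np.sqrt(a * a - 1.0)
          coth = 1.0 / np.tanh(b * S * d)
          return (1.0 - Rg * (a - b * coth)) / (a - Rg + b * coth)

      # Semi-infinite limit and the remission function F(Rinf) = K/S
      K, S = 0.8, 4.0                       # 1/mm, illustrative values
      Rinf = km_reflectance(K, S, d=1e3, Rg=0.0)
      F = (1.0 - Rinf) ** 2 / (2.0 * Rinf)  # should recover K/S = 0.2
      print(Rinf, F, K / S)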

  2. Express method of construction of accurate inverse pole figures

    NASA Astrophysics Data System (ADS)

    Perlovich, Yu; Isaenkova, M.; Fesenko, V.

    2016-04-01

    For metallic materials with FCC and BCC crystal lattices, a new method for constructing X-ray texture inverse pole figures (IPFs) from tilt curves of a spinning sample is proposed, characterized by high accuracy and rapidity (hence "express"). In contrast to the currently widespread method of constructing IPFs from an orientation distribution function (ODF) synthesized from several partial direct pole figures, the proposed method is based on a simple geometrical interpretation of the measurement procedure and requires minimal operating time on the X-ray diffractometer.

  3. Quantitative methods in classical perturbation theory.

    NASA Astrophysics Data System (ADS)

    Giorgilli, A.

    Poincaré proved that the series commonly used in celestial mechanics are typically nonconvergent, although their usefulness is generally evident. Recent work in perturbation theory has sharpened this insight of Poincaré, bringing into evidence that the series of perturbation theory, although nonconvergent in general, nevertheless furnish valuable approximations to the true orbits for a very long time, which in some practical cases is comparable with the age of the universe. The aim of this paper is to introduce the quantitative methods of perturbation theory that allow one to obtain such powerful results.
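
    The flavor of such quantitative results can be conveyed by an exponential-stability estimate of Nekhoroshev type, given here purely as an illustration (the positive constants a, b, C, T_* and ε_* depend on the system and are not taken from the paper): for a perturbation of size ε, the actions I of a nearly integrable system satisfy

      \[
        \| I(t) - I(0) \| \le C\,\varepsilon^{b}
        \qquad \text{for} \qquad
        |t| \le T_{*} \exp\!\left[ (\varepsilon_{*}/\varepsilon)^{a} \right].
      \]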

  4. The chain collocation method: A spectrally accurate calculus of forms

    NASA Astrophysics Data System (ADS)

    Rufat, Dzhelil; Mason, Gemma; Mullen, Patrick; Desbrun, Mathieu

    2014-01-01

    Preserving in the discrete realm the underlying geometric, topological, and algebraic structures at stake in partial differential equations has proven to be a fruitful guiding principle for numerical methods in a variety of fields such as elasticity, electromagnetism, or fluid mechanics. However, structure-preserving methods have traditionally used spaces of piecewise polynomial basis functions for differential forms. Yet, in many problems where solutions are smoothly varying in space, a spectral numerical treatment is called for. In an effort to provide structure-preserving numerical tools with spectral accuracy on logically rectangular grids over periodic or bounded domains, we present a spectral extension of the discrete exterior calculus (DEC), with resulting computational tools extending well-known collocation-based spectral methods. Its efficient implementation using fast Fourier transforms is provided as well.
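
    The core ingredient of such collocation-based spectral methods on periodic grids is FFT-based differentiation; a minimal sketch follows (illustrative only, not the authors' DEC implementation).

      import numpy as np

      # Minimal ingredient of FFT-based spectral collocation on a periodic grid:
      # differentiate a sampled function with spectral accuracy.
      def spectral_derivative(f, L):
          n = f.size
          k = 2j * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers
          return np.real(np.fft.ifft(k * np.fft.fft(f)))

      x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
      err = np.max(np.abs(spectral_derivative(np.sin(x), 2.0 * np.pi) - np.cos(x)))
      print(f"max error: {err:.2e}")   # near machine precision: spectral accuracy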

  5. Machine Learning methods for Quantitative Radiomic Biomarkers

    PubMed Central

    Parmar, Chintan; Grossmann, Patrick; Bussink, Johan; Lambin, Philippe; Aerts, Hugo J. W. L.

    2015-01-01

    Radiomics extracts and mines large numbers of medical imaging features quantifying tumor phenotypic characteristics. Highly accurate and reliable machine-learning approaches can drive the success of radiomic applications in clinical care. In this radiomic study, fourteen feature selection methods and twelve classification methods were examined in terms of their performance and stability for predicting overall survival. A total of 440 radiomic features were extracted from pre-treatment computed tomography (CT) images of 464 lung cancer patients. To ensure the unbiased evaluation of different machine-learning methods, publicly available implementations along with reported parameter configurations were used. Furthermore, we used two independent radiomic cohorts for training (n = 310 patients) and validation (n = 154 patients). We identified that the Wilcoxon-test-based feature selection method WLCX (stability = 0.84 ± 0.05, AUC = 0.65 ± 0.02) and the random forest classification method RF (RSD = 3.52%, AUC = 0.66 ± 0.03) had the highest prognostic performance with high stability against data perturbation. Our variability analysis indicated that the choice of classification method is the most dominant source of performance variation (34.21% of total variance). Identification of optimal machine-learning methods for radiomic applications is a crucial step towards stable and clinically relevant radiomic biomarkers, providing a non-invasive way of quantifying and monitoring tumor-phenotypic characteristics in clinical practice. PMID:26278466

  6. Machine Learning methods for Quantitative Radiomic Biomarkers.

    PubMed

    Parmar, Chintan; Grossmann, Patrick; Bussink, Johan; Lambin, Philippe; Aerts, Hugo J W L

    2015-08-17

    Radiomics extracts and mines large numbers of medical imaging features quantifying tumor phenotypic characteristics. Highly accurate and reliable machine-learning approaches can drive the success of radiomic applications in clinical care. In this radiomic study, fourteen feature selection methods and twelve classification methods were examined in terms of their performance and stability for predicting overall survival. A total of 440 radiomic features were extracted from pre-treatment computed tomography (CT) images of 464 lung cancer patients. To ensure the unbiased evaluation of different machine-learning methods, publicly available implementations along with reported parameter configurations were used. Furthermore, we used two independent radiomic cohorts for training (n = 310 patients) and validation (n = 154 patients). We identified that the Wilcoxon-test-based feature selection method WLCX (stability = 0.84 ± 0.05, AUC = 0.65 ± 0.02) and the random forest classification method RF (RSD = 3.52%, AUC = 0.66 ± 0.03) had the highest prognostic performance with high stability against data perturbation. Our variability analysis indicated that the choice of classification method is the most dominant source of performance variation (34.21% of total variance). Identification of optimal machine-learning methods for radiomic applications is a crucial step towards stable and clinically relevant radiomic biomarkers, providing a non-invasive way of quantifying and monitoring tumor-phenotypic characteristics in clinical practice.
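
    A toy sketch of the pipeline described above (univariate Wilcoxon-test feature ranking followed by a random forest), using scipy and scikit-learn on synthetic data; the cohort sizes, hyperparameters, and data are illustrative, not those of the study.

      import numpy as np
      from scipy.stats import ranksums
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import roc_auc_score

      # Toy illustration: rank features with a Wilcoxon rank-sum test on the
      # training cohort only, keep the top ones, then fit a random forest.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 440))          # 440 synthetic "radiomic" features
      y = rng.integers(0, 2, size=300)         # synthetic survival label
      X[:, :10] += y[:, None] * 0.8            # make 10 features informative

      train, test = np.arange(200), np.arange(200, 300)
      pvals = np.array([ranksums(X[train][y[train] == 0, j],
                                 X[train][y[train] == 1, j]).pvalue
                        for j in range(X.shape[1])])
      top = np.argsort(pvals)[:30]             # Wilcoxon-based feature selection

      clf = RandomForestClassifier(n_estimators=500, random_state=0)
      clf.fit(X[np.ix_(train, top)], y[train])
      auc = roc_auc_score(y[test], clf.predict_proba(X[np.ix_(test, top)])[:, 1])
      print(f"validation AUC: {auc:.2f}")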

  7. An accurate fuzzy edge detection method using wavelet details subimages

    NASA Astrophysics Data System (ADS)

    Sedaghat, Nafiseh; Pourreza, Hamidreza

    2010-02-01

    Edge detection is a basic and important subject in computer vision and image processing. An edge detector is defined as a mathematical operator of small spatial extent that responds in some way to intensity discontinuities, usually classifying every image pixel as either belonging to an edge or not. Much effort has been spent attempting to develop effective edge detection algorithms. Despite this extensive research, the task of finding the edges that correspond to true physical boundaries remains a difficult problem. Edge detection algorithms based on the application of human knowledge show their flexibility and suggest that the use of human knowledge is a reasonable alternative. In this paper we propose a fuzzy inference system with two inputs: gradient and wavelet details. The first input is calculated by the Sobel operator; the second is calculated by taking the wavelet transform of the input image and then reconstructing the image from the details subimages only by the inverse wavelet transform. There are many fuzzy edge detection methods, but none of them utilizes the wavelet transform as it is used in this paper. To evaluate our method, we detect edges of images with different brightness characteristics and compare the results with the Canny edge detector. The results show the high performance of our method in finding true edges.
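
    A compact sketch of the two-input idea, with the fuzzy inference reduced to simple ramp memberships combined by a min (AND) rule; pywt and scipy are assumed, and the memberships and threshold are illustrative choices, not the paper's rule base.

      import numpy as np
      import pywt
      from scipy import ndimage

      # Sketch of the two inputs described: Sobel gradient magnitude and an
      # image rebuilt from wavelet detail subimages only. The fuzzy inference
      # is reduced to two ramp memberships combined by a min (AND) rule.
      def detail_image(img, wavelet="db2"):
          cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
          return pywt.idwt2((np.zeros_like(cA), (cH, cV, cD)), wavelet)

      def high_membership(x):
          x = (x - x.min()) / (np.ptp(x) + 1e-12)   # normalize to [0, 1]
          return np.clip(2.0 * x, 0.0, 1.0)         # simple ramp "high"

      def fuzzy_edges(img, threshold=0.5):
          grad = np.hypot(ndimage.sobel(img, 0), ndimage.sobel(img, 1))
          det = np.abs(detail_image(img))[:img.shape[0], :img.shape[1]]
          strength = np.minimum(high_membership(grad), high_membership(det))
          return strength > threshold               # AND of both inputs

      img = np.zeros((64, 64)); img[:, 32:] = 1.0   # synthetic vertical step edge
      print(int(fuzzy_edges(img).sum()), "edge pixels found")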

  8. Mapping methods for computationally efficient and accurate structural reliability

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1991-01-01

    The influence of mesh coarseness on structural reliability is evaluated. The objectives are to describe the alternatives and to demonstrate their effectiveness. The results show that special mapping methods can be developed by using: (1) deterministic structural responses from a fine (convergent) finite element mesh; (2) probabilistic distributions of structural responses from a coarse finite element mesh; (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes; and (4) probabilistic mapping. The structural responses from different finite element meshes are highly correlated.

  9. Pendant bubble method for an accurate characterization of superhydrophobic surfaces.

    PubMed

    Ling, William Yeong Liang; Ng, Tuck Wah; Neild, Adrian

    2011-12-06

    The commonly used sessile drop method for measuring contact angles and surface tension suffers from errors on superhydrophobic surfaces. These errors arise from unavoidable experimental uncertainty in determining the vertical location of the liquid-solid-vapor interface, due to a camera's finite pixel resolution, thereby necessitating the development and application of subpixel algorithms. We demonstrate here the advantage of a pendant bubble in decreasing the resulting error prior to the application of additional algorithms. For sessile drops to attain an equivalent accuracy, the pixel count would have to be increased by 2 orders of magnitude.

  10. Quantitative spectroscopy of hot stars: accurate atomic data applied on a large scale as driver of recent breakthroughs

    NASA Astrophysics Data System (ADS)

    Przybilla, N.; Schaffenroth, V.; Nieva, M. F.; Butler, K.

    2016-10-01

    OB-type stars present hotbeds for non-LTE physics because of their strong radiation fields, which drive the atmospheric plasma out of local thermodynamic equilibrium. We report on recent breakthroughs in the quantitative analysis of the optical and UV spectra of OB-type stars that were facilitated by the application of accurate and precise atomic data on a large scale. An astrophysicist's dream has come true, with observed and model spectra brought into close match over wide parts of the observed wavelength ranges. This allows tight observational constraints to be derived from OB-type stars for a wide range of applications in astrophysics. However, despite the progress made, many details of the modelling may be improved further. We discuss atomic data needs in terms of laboratory measurements and also ab initio calculations. Particular emphasis is given to quantitative spectroscopy in the near-IR, which will be the focus in the era of the upcoming extremely large telescopes.

  11. Restriction Site Tiling Analysis: accurate discovery and quantitative genotyping of genome-wide polymorphisms using nucleotide arrays

    PubMed Central

    2010-01-01

    High-throughput genotype data can be used to identify genes important for local adaptation in wild populations, phenotypes in lab stocks, or disease-related traits in human medicine. Here we advance microarray-based genotyping for population genomics with Restriction Site Tiling Analysis. The approach simultaneously discovers polymorphisms and provides quantitative genotype data at 10,000s of loci. It is highly accurate and free from ascertainment bias. We apply the approach to uncover genomic differentiation in the purple sea urchin. PMID:20403197

  12. Individualizing amikacin regimens: accurate method to achieve therapeutic concentrations.

    PubMed

    Zaske, D E; Cipolle, R J; Rotschafer, J C; Kohls, P R; Strate, R G

    1991-11-01

    Amikacin's pharmacokinetics and dosage requirements were studied in 98 patients receiving treatment for gram-negative infections. A wide interpatient variation in the kinetic parameters of the drug occurred in all patients and in patients who had normal serum creatinine levels or normal creatinine clearance. The half-life ranged from 0.7 to 14.4 h in 74 patients who had normal serum creatinine levels and from 0.7 to 7.2 h in 37 patients who had normal creatinine clearance. The necessary daily dose to obtain therapeutic serum concentrations ranged from 1.25 to 57 mg/kg in patients with normal serum creatinine levels and from 10 to 57 mg/kg in patients with normal creatinine clearance. In four patients (4%), a significant change in baseline serum creatinine level (greater than 0.5 mg/dl) occurred during or after treatment, which may have been amikacin-associated toxicity. Overt ototoxicity occurred in one patient. The method of individualizing dosage regimens provided a clinically useful means of rapidly attaining therapeutic peak and trough serum concentrations.
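
    The arithmetic underlying such peak/trough-based individualization is first-order elimination kinetics; a sketch with hypothetical serum levels (not the study's protocol or data).

      import math

      # First-order elimination arithmetic behind peak/trough-based dose
      # individualization (illustrative values, not the study's data).
      def elimination_rate(c1, c2, dt_h):
          """k (1/h) from two post-distribution serum levels dt_h hours apart."""
          return math.log(c1 / c2) / dt_h

      c_peak, c_trough, dt = 25.0, 5.0, 6.0      # mg/L, mg/L, hours (hypothetical)
      k = elimination_rate(c_peak, c_trough, dt)
      half_life = math.log(2) / k
      print(f"k = {k:.3f} 1/h, t1/2 = {half_life:.1f} h")

      # Dosing interval that lets a desired peak decay to a desired trough:
      tau = math.log(25.0 / 5.0) / k
      print(f"suggested interval ~ {tau:.1f} h")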

  13. Analytical methods for quantitation of prenylated flavonoids from hops.

    PubMed

    Nikolić, Dejan; van Breemen, Richard B

    2013-01-01

    The female flowers of hops (Humulus lupulus L.) are used as a flavoring agent in the brewing industry. There is growing interest in possible health benefits of hops, particularly as estrogenic and chemopreventive agents. Among the possible active constituents, most of the attention has focused on prenylated flavonoids, which can chemically be classified as prenylated chalcones and prenylated flavanones. Among chalcones, xanthohumol (XN) and desmethylxanthohumol (DMX) have been the most studied, while among flavanones, 8-prenylnaringenin (8-PN) and 6-prenylnaringenin (6-PN) have received the most attention. Because of the interest in medicinal properties of prenylated flavonoids, there is demand for accurate, reproducible and sensitive analytical methods to quantify these compounds in various matrices. Such methods are needed, for example, for quality control and standardization of hop extracts, measurement of the content of prenylated flavonoids in beer, and to determine pharmacokinetic properties of prenylated flavonoids in animals and humans. This review summarizes currently available analytical methods for quantitative analysis of the major prenylated flavonoids, with an emphasis on the LC-MS and LC-MS-MS methods and their recent applications to biomedical research on hops. This review covers all methods in which prenylated flavonoids have been measured, either as the primary analytes or as a part of a larger group of analytes. The review also discusses methodological issues relating to the quantitative analysis of these compounds regardless of the chosen analytical approach.

  14. Analytical methods for quantitation of prenylated flavonoids from hops

    PubMed Central

    Nikolić, Dejan; van Breemen, Richard B.

    2013-01-01

    The female flowers of hops (Humulus lupulus L.) are used as a flavoring agent in the brewing industry. There is growing interest in possible health benefits of hops, particularly as estrogenic and chemopreventive agents. Among the possible active constituents, most of the attention has focused on prenylated flavonoids, which can chemically be classified as prenylated chalcones and prenylated flavanones. Among chalcones, xanthohumol (XN) and desmethylxanthohumol (DMX) have been the most studied, while among flavanones, 8-prenylnaringenin (8-PN) and 6-prenylnaringenin (6-PN) have received the most attention. Because of the interest in medicinal properties of prenylated flavonoids, there is demand for accurate, reproducible and sensitive analytical methods to quantify these compounds in various matrices. Such methods are needed, for example, for quality control and standardization of hop extracts, measurement of the content of prenylated flavonoids in beer, and to determine pharmacokinetic properties of prenylated flavonoids in animals and humans. This review summarizes currently available analytical methods for quantitative analysis of the major prenylated flavonoids, with an emphasis on the LC-MS and LC-MS-MS methods and their recent applications to biomedical research on hops. This review covers all methods in which prenylated flavonoids have been measured, either as the primary analytes or as a part of a larger group of analytes. The review also discusses methodological issues relating to the quantitative analysis of these compounds regardless of the chosen analytical approach. PMID:24077106

  15. [Progress in stable isotope labeled quantitative proteomics methods].

    PubMed

    Zhou, Yuan; Shan, Yichu; Zhang, Lihua; Zhang, Yukui

    2013-06-01

    Quantitative proteomics is an important research field in the post-genomics era. There are two strategies for proteome quantification: label-free methods and stable isotope labeling methods; the latter have become the most important strategy for quantitative proteomics at present. In the past few years, a number of quantitative methods have been developed, supporting the rapid development of biological research. In this work, we discuss progress in stable isotope labeling methods for quantitative proteomics, including relative and absolute quantification, and then give our opinions on the outlook for proteome quantification methods.

  16. An Inexpensive, Accurate, and Precise Wet-Mount Method for Enumerating Aquatic Viruses

    PubMed Central

    Cunningham, Brady R.; Brum, Jennifer R.; Schwenck, Sarah M.; Sullivan, Matthew B.

    2015-01-01

    Viruses affect biogeochemical cycling, microbial mortality, gene flow, and metabolic functions in diverse environments through infection and lysis of microorganisms. Fundamental to quantitatively investigating these roles is the determination of viral abundance in both field and laboratory samples. One current, widely used method to accomplish this with aquatic samples is the “filter mount” method, in which samples are filtered onto costly 0.02-μm-pore-size ceramic filters for enumeration of viruses by epifluorescence microscopy. Here we describe a cost-effective (ca. 500-fold-lower materials cost) alternative virus enumeration method in which fluorescently stained samples are wet mounted directly onto slides, after optional chemical flocculation of viruses in samples with viral concentrations of <5 × 107 viruses ml−1. The concentration of viruses in the sample is then determined from the ratio of viruses to a known concentration of added microsphere beads via epifluorescence microscopy. Virus concentrations obtained by using this wet-mount method, with and without chemical flocculation, were significantly correlated with, and had precision equivalent to, those obtained by the filter mount method across concentrations ranging from 2.17 × 106 to 1.37 × 108 viruses ml−1 when tested by using cultivated viral isolates and natural samples from marine and freshwater environments. In summary, the wet-mount method is significantly less expensive than the filter mount method and is appropriate for rapid, precise, and accurate enumeration of aquatic viruses over a wide range of viral concentrations (≥1 × 106 viruses ml−1) encountered in field and laboratory samples. PMID:25710369

  17. An inexpensive, accurate, and precise wet-mount method for enumerating aquatic viruses.

    PubMed

    Cunningham, Brady R; Brum, Jennifer R; Schwenck, Sarah M; Sullivan, Matthew B; John, Seth G

    2015-05-01

    Viruses affect biogeochemical cycling, microbial mortality, gene flow, and metabolic functions in diverse environments through infection and lysis of microorganisms. Fundamental to quantitatively investigating these roles is the determination of viral abundance in both field and laboratory samples. One current, widely used method to accomplish this with aquatic samples is the "filter mount" method, in which samples are filtered onto costly 0.02-μm-pore-size ceramic filters for enumeration of viruses by epifluorescence microscopy. Here we describe a cost-effective (ca. 500-fold-lower materials cost) alternative virus enumeration method in which fluorescently stained samples are wet mounted directly onto slides, after optional chemical flocculation of viruses in samples with viral concentrations of <5×10(7) viruses ml(-1). The concentration of viruses in the sample is then determined from the ratio of viruses to a known concentration of added microsphere beads via epifluorescence microscopy. Virus concentrations obtained by using this wet-mount method, with and without chemical flocculation, were significantly correlated with, and had precision equivalent to, those obtained by the filter mount method across concentrations ranging from 2.17×10(6) to 1.37×10(8) viruses ml(-1) when tested by using cultivated viral isolates and natural samples from marine and freshwater environments. In summary, the wet-mount method is significantly less expensive than the filter mount method and is appropriate for rapid, precise, and accurate enumeration of aquatic viruses over a wide range of viral concentrations (≥1×10(6) viruses ml(-1)) encountered in field and laboratory samples.
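
    The concentration arithmetic described above reduces to a virus-to-bead ratio scaled by the known bead concentration; a sketch with hypothetical counts.

      # Wet-mount arithmetic as described: the sample virus concentration follows
      # from the ratio of counted viruses to counted beads of known concentration.
      def virus_concentration(n_viruses, n_beads, bead_conc_per_ml, dilution=1.0):
          return (n_viruses / n_beads) * bead_conc_per_ml * dilution

      # Hypothetical counts from epifluorescence microscopy fields:
      print(virus_concentration(n_viruses=412, n_beads=95,
                                bead_conc_per_ml=1e7))   # ~4.3e7 viruses per ml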

  18. Simple, fast, and accurate methodology for quantitative analysis using Fourier transform infrared spectroscopy, with bio-hybrid fuel cell examples.

    PubMed

    Mackie, David M; Jahnke, Justin P; Benyamin, Marcus S; Sumner, James J

    2016-01-01

    The standard methodologies for quantitative analysis (QA) of mixtures using Fourier transform infrared (FTIR) instruments have evolved until they are now more complicated than necessary for many users' purposes. We present a simpler methodology, suitable for widespread adoption of FTIR QA as a standard laboratory technique across disciplines by occasional users.
    • The algorithm is straightforward and intuitive, yet it is also fast, accurate, and robust.
    • It relies on component spectra, minimization of errors, and local adaptive mesh refinement.
    • It was tested successfully on real mixtures of up to nine components.
    We show that our methodology is robust to challenging experimental conditions such as similar substances, component percentages differing by three orders of magnitude, and imperfect (noisy) spectra. As examples, we analyze biological, chemical, and physical aspects of bio-hybrid fuel cells.
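
    The least-squares core of such component-spectrum fitting can be sketched with non-negative least squares on synthetic spectra; this illustrates the unmixing step only, not the paper's error minimization or local adaptive mesh refinement.

      import numpy as np
      from scipy.optimize import nnls

      # Minimal component-spectrum unmixing: express a measured mixture spectrum
      # as a non-negative combination of known pure-component spectra.
      wavenumbers = np.linspace(800, 1800, 500)

      def peak(center, width):                  # synthetic component band
          return np.exp(-((wavenumbers - center) / width) ** 2)

      components = np.column_stack([peak(1050, 40), peak(1400, 60), peak(1650, 30)])
      true_fracs = np.array([0.6, 0.3, 0.1])
      noise = 0.005 * np.random.default_rng(1).normal(size=500)
      mixture = components @ true_fracs + noise

      est, residual = nnls(components, mixture)
      print("estimated fractions:", np.round(est / est.sum(), 3))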

  19. In vivo osteogenesis assay: a rapid method for quantitative analysis.

    PubMed

    Dennis, J E; Konstantakos, E K; Arm, D; Caplan, A I

    1998-08-01

    A quantitative in vivo osteogenesis assay is a useful tool for the analysis of cells and bioactive factors that affect the amount or rate of bone formation. There are currently two assays in general use for the in vivo assessment of osteogenesis by isolated cells: diffusion chambers and porous calcium phosphate ceramics. Due to the relative ease of specimen preparation and reproducibility of results, the porous ceramic assay was chosen for the development of a rapid method for quantitating in vivo bone formation. The ceramic cube implantation technique consists of combining osteogenic cells with 27-mm3 porous calcium phosphate ceramics, implanting the cell-ceramic composites subcutaneously into an immuno-tolerant host, and, after 2-6 weeks, harvesting and preparing the ceramic implants for histologic analysis. A drawback to the analysis of bone formation within these porous ceramics is that the entire cube must be examined to find small foci of bone present in some samples; a single cross-sectional area is not representative. For this reason, image analysis of serial sections from ceramics is often prohibitively time-consuming. Two alternative scoring methodologies were tested and compared to bone volume measurements obtained by image analysis. The two subjective scoring methods were: (1) Bone Scale: the amount of bone within pores of the ceramic implant is estimated on a scale of 0-4 based on the degree of bone fill (0=no bone, 1=up to 25%, 2=25 to 75%, 4=75 to 100% fill); and (2) Percentage Bone: the amount of bone is estimated by determining the percentage of ceramic pores which contain bone. Every tenth section of serially sectioned cubes was scored by each of these methods under double-blind conditions, and the Bone Scale and Percentage Bone results were directly compared to image analysis measurements from identical samples. Correlation coefficients indicate that the Percentage Bone method was more accurate than the Bone Scale scoring method. The Bone Scale

  20. Preferential access to genetic information from endogenous hominin ancient DNA and accurate quantitative SNP-typing via SPEX

    PubMed Central

    Brotherton, Paul; Sanchez, Juan J.; Cooper, Alan; Endicott, Phillip

    2010-01-01

    The analysis of targeted genetic loci from ancient, forensic and clinical samples is usually built upon polymerase chain reaction (PCR)-generated sequence data. However, many studies have shown that PCR amplification from poor-quality DNA templates can create sequence artefacts at significant levels. With hominin (human and other hominid) samples, the pervasive presence of highly PCR-amplifiable human DNA contaminants in the vast majority of samples can lead to the creation of recombinant hybrids and other non-authentic artefacts. The resulting PCR-generated sequences can then be difficult, if not impossible, to authenticate. In contrast, single primer extension (SPEX)-based approaches can genotype single nucleotide polymorphisms from ancient fragments of DNA as accurately as modern DNA. A single SPEX-type assay can amplify just one of the duplex DNA strands at target loci and generate a multi-fold depth-of-coverage, with non-authentic recombinant hybrids reduced to undetectable levels. Crucially, SPEX-type approaches can preferentially access genetic information from damaged and degraded endogenous ancient DNA templates over modern human DNA contaminants. The development of SPEX-type assays offers the potential for highly accurate, quantitative genotyping from ancient hominin samples. PMID:19864251

  1. Method of quantitating dsDNA

    DOEpatents

    Stark, Peter C.; Kuske, Cheryl R.; Mullen, Kenneth I.

    2002-01-01

    A method for quantitating dsDNA in an aqueous sample solution containing an unknown amount of dsDNA. A first aqueous test solution containing a known amount of a fluorescent dye-dsDNA complex and at least one fluorescence-attenuating contaminant is prepared. The fluorescence intensity of the test solution is measured. The first test solution is diluted by a known amount to provide a second test solution having a known concentration of dsDNA. The fluorescence intensity of the second test solution is measured. Additional diluted test solutions are similarly prepared until a sufficiently dilute test solution having a known amount of dsDNA is prepared that has a fluorescence intensity that is not attenuated upon further dilution. The value of the maximum absorbance of this solution between 200-900 nanometers (nm), referred to herein as the threshold absorbance, is measured. A sample solution having an unknown amount of dsDNA and an absorbance identical to that of the sufficiently dilute test solution at the same chosen wavelength is prepared. Dye is then added to the sample solution to form the fluorescent dye-dsDNA complex, after which the fluorescence intensity of the sample solution is measured and the quantity of dsDNA in the sample solution is determined. Once the threshold absorbance of a sample solution obtained from a particular environment has been determined, any similarly prepared sample solution taken from a similar environment and having the same value for the threshold absorbance can be quantified for dsDNA by adding a large excess of dye to the sample solution and measuring its fluorescence intensity.

  2. A High Resolution/Accurate Mass (HRAM) Data-Dependent MS3 Neutral Loss Screening, Classification, and Relative Quantitation Methodology for Carbonyl Compounds in Saliva

    NASA Astrophysics Data System (ADS)

    Dator, Romel; Carrà, Andrea; Maertens, Laura; Guidolin, Valeria; Villalta, Peter W.; Balbo, Silvia

    2016-10-01

    Reactive carbonyl compounds (RCCs) are ubiquitous in the environment and are generated endogenously as a result of various physiological and pathological processes. These compounds can react with biological molecules, inducing deleterious processes believed to be at the basis of their toxic effects. Several of these compounds are implicated in neurotoxic processes, aging disorders, and cancer. Therefore, a method characterizing exposures to these chemicals will provide insights into how they may influence overall health and contribute to disease pathogenesis. Here, we have developed a high resolution accurate mass (HRAM) screening strategy allowing simultaneous identification and relative quantitation of DNPH-derivatized carbonyls in human biological fluids. The screening strategy involves the diagnostic neutral loss of hydroxyl radical triggering MS3 fragmentation, which is only observed in positive ionization mode of DNPH-derivatized carbonyls. Unique fragmentation pathways were used to develop a classification scheme for characterizing known and unanticipated/unknown carbonyl compounds present in saliva. Furthermore, a relative quantitation strategy was implemented to assess variations in the levels of carbonyl compounds before and after exposure using deuterated d3-DNPH. This relative quantitation method was tested on human samples before and after exposure to specific amounts of alcohol. Nano-electrospray ionization (nano-ESI) in positive mode afforded excellent sensitivity, with on-column detection limits at high-attomole levels. To the best of our knowledge, this is the first report of a method using HRAM neutral loss screening of carbonyl compounds. In addition, the method allows simultaneous characterization and relative quantitation of DNPH-derivatized compounds using nano-ESI in positive mode.

  3. Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Pampell, Alyssa

    2011-01-01

    A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.

  4. Accurate compressed look up table method for CGH in 3D holographic display.

    PubMed

    Gao, Chuan; Liu, Juan; Li, Xin; Xue, Gaolei; Jia, Jia; Wang, Yongtian

    2015-12-28

    Computer-generated holograms (CGHs) must be obtained with high accuracy and at high speed for 3D holographic display, and most research focuses on speed. In this paper, a simple and effective computation method for CGH is proposed based on Fresnel diffraction theory and a look-up table. Numerical simulations and optical experiments are performed to demonstrate its feasibility. The proposed method can obtain more accurate reconstructed images with lower memory usage than the split look-up table method and the compressed look-up table method, without sacrificing computational speed in hologram generation, so it is called the accurate compressed look-up table (AC-LUT) method. We believe the AC-LUT method is an effective way to calculate the CGH of 3D objects for real-time 3D holographic display, where huge amounts of data are involved, and it could provide fast and accurate digital transmission in various dynamic optical fields in the future.

  5. Simple, flexible, and accurate phase retrieval method for generalized phase-shifting interferometry.

    PubMed

    Yatabe, Kohei; Ishikawa, Kenji; Oikawa, Yasuhiro

    2017-01-01

    This paper presents a non-iterative phase retrieval method for randomly phase-shifted fringe images. By combining the hyperaccurate least squares ellipse fitting method with the subspace method (usually called principal component analysis), a fast and accurate phase retrieval algorithm is realized. The proposed method is simple, flexible, and accurate. It can be easily coded without iteration, initial guesses, or tuning parameters. Its flexibility comes from the fact that totally random phase-shifting steps and any number of fringe images greater than two are acceptable without any specific treatment. Finally, it is accurate because the hyperaccurate least squares method and the modified subspace method enable phase retrieval with small error, as shown by the simulations. A MATLAB code, which is used in the experimental section, is provided within the paper to demonstrate its simplicity and ease of use.
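
    The subspace (PCA) step can be sketched in a few lines: after subtracting the mean fringe, the two leading principal components of the image stack act as quadrature signals whose arctangent recovers the phase up to a global sign and offset. The sketch below covers only that step, not the hyperaccurate ellipse-fitting correction.

      import numpy as np

      # Subspace (PCA) step of random phase-shift retrieval: the two leading
      # principal components of mean-subtracted fringes act as quadrature
      # signals. (Sketch of the subspace part only.)
      rng = np.random.default_rng(0)
      h = w = 64
      y, x = np.mgrid[:h, :w]
      phi = 2.0 * np.pi * (x + y) / w                   # ground-truth phase map

      shifts = rng.uniform(0.0, 2.0 * np.pi, size=12)   # random phase steps
      frames = np.stack([1.0 + 0.8 * np.cos(phi + d) for d in shifts])

      data = (frames - frames.mean(axis=0)).reshape(len(shifts), -1)
      _, _, vt = np.linalg.svd(data, full_matrices=False)
      phase = np.arctan2(vt[1], vt[0]).reshape(h, w)    # phase, up to sign/offset
      print(phase.shape)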

  6. Method for accurate growth of vertical-cavity surface-emitting lasers

    DOEpatents

    Chalmers, S.A.; Killeen, K.P.; Lear, K.L.

    1995-03-14

    The authors report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, they can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%. 4 figs.

  7. Method for accurate growth of vertical-cavity surface-emitting lasers

    DOEpatents

    Chalmers, Scott A.; Killeen, Kevin P.; Lear, Kevin L.

    1995-01-01

    We report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, we can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%.

  8. From themes to hypotheses: following up with quantitative methods.

    PubMed

    Morgan, David L

    2015-06-01

    One important category of mixed-methods research designs consists of quantitative studies that follow up on qualitative research. In this case, the themes that serve as the results from the qualitative methods generate hypotheses for testing through the quantitative methods. That process requires operationalization to translate the concepts from the qualitative themes into quantitative variables. This article illustrates these procedures with examples that range from simple operationalization to the evaluation of complex models. It concludes with an argument for not only following up qualitative work with quantitative studies but also the reverse, and doing so by going beyond integrating methods within single projects to include broader mutual attention from qualitative and quantitative researchers who work in the same field.

  9. Accurate quantification of TiO2 nanoparticles collected on air filters using a microwave-assisted acid digestion method

    PubMed Central

    Mudunkotuwa, Imali A.; Anthony, T. Renée; Grassian, Vicki H.; Peters, Thomas M.

    2016-01-01

    Titanium dioxide (TiO2) particles, including nanoparticles with diameters smaller than 100 nm, are used extensively in consumer products. In a 2011 current intelligence bulletin, the National Institute of Occupational Safety and Health (NIOSH) recommended methods to assess worker exposures to fine and ultrafine TiO2 particles and associated occupational exposure limits for these particles. However, there are several challenges and problems encountered with these recommended exposure assessment methods involving the accurate quantitation of titanium dioxide collected on air filters using acid digestion followed by inductively coupled plasma optical emission spectroscopy (ICP-OES). Specifically, recommended digestion methods include the use of chemicals, such as perchloric acid, which are typically unavailable in most accredited industrial hygiene laboratories due to highly corrosive and oxidizing properties. Other alternative methods that are used typically involve the use of nitric acid or combination of nitric acid and sulfuric acid, which yield very poor recoveries for titanium dioxide. Therefore, given the current state of the science, it is clear that a new method is needed for exposure assessment. In this current study, a microwave-assisted acid digestion method has been specifically designed to improve the recovery of titanium in TiO2 nanoparticles for quantitative analysis using ICP-OES. The optimum digestion conditions were determined by changing several variables including the acids used, digestion time, and temperature. Consequently, the optimized digestion temperature of 210°C with concentrated sulfuric and nitric acid (2:1 v/v) resulted in a recovery of >90% for TiO2. The method is expected to provide for a more accurate quantification of airborne TiO2 particles in the workplace environment. PMID:26181824

  10. Accurate quantification of TiO2 nanoparticles collected on air filters using a microwave-assisted acid digestion method.

    PubMed

    Mudunkotuwa, Imali A; Anthony, T Renée; Grassian, Vicki H; Peters, Thomas M

    2016-01-01

    Titanium dioxide (TiO(2)) particles, including nanoparticles with diameters smaller than 100 nm, are used extensively in consumer products. In a 2011 current intelligence bulletin, the National Institute of Occupational Safety and Health (NIOSH) recommended methods to assess worker exposures to fine and ultrafine TiO(2) particles and associated occupational exposure limits for these particles. However, there are several challenges and problems encountered with these recommended exposure assessment methods involving the accurate quantitation of titanium dioxide collected on air filters using acid digestion followed by inductively coupled plasma optical emission spectroscopy (ICP-OES). Specifically, recommended digestion methods include the use of chemicals, such as perchloric acid, which are typically unavailable in most accredited industrial hygiene laboratories due to highly corrosive and oxidizing properties. Other alternative methods that are used typically involve the use of nitric acid or combination of nitric acid and sulfuric acid, which yield very poor recoveries for titanium dioxide. Therefore, given the current state of the science, it is clear that a new method is needed for exposure assessment. In this current study, a microwave-assisted acid digestion method has been specifically designed to improve the recovery of titanium in TiO(2) nanoparticles for quantitative analysis using ICP-OES. The optimum digestion conditions were determined by changing several variables including the acids used, digestion time, and temperature. Consequently, the optimized digestion temperature of 210°C with concentrated sulfuric and nitric acid (2:1 v/v) resulted in a recovery of >90% for TiO(2). The method is expected to provide for a more accurate quantification of airborne TiO(2) particles in the workplace environment.

  11. Method for depth-resolved quantitation of optical properties in layered media using spatially modulated quantitative spectroscopy.

    PubMed

    Saager, Rolf B; Truong, Alex; Cuccia, David J; Durkin, Anthony J

    2011-07-01

    We have demonstrated that spatially modulated quantitative spectroscopy (SMoQS) is capable of extracting absolute optical properties from homogeneous tissue-simulating phantoms that span both the visible and near-infrared wavelength regimes. However, biological tissue, such as skin, is highly structured, presenting challenges to quantitative spectroscopic techniques based on homogeneous models. In order to more accurately address the challenges associated with skin, we present a method for depth-resolved optical property quantitation based on a two-layer model. Layered Monte Carlo simulations and layered tissue-simulating phantoms are used to determine the efficacy and accuracy of SMoQS in quantifying layer-specific optical properties of layered media. Initial results from both simulation and experiment show that this empirical method is capable of determining top-layer thickness to within tens of microns across a physiological range for skin. Layer-specific chromophore concentration can be determined to within ±10% of the actual values, on average, whereas bulk quantitation in either the visible or near-infrared spectroscopic regime significantly underestimates the layer-specific chromophore concentration and can be confounded by top-layer thickness.

  12. Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy

    NASA Astrophysics Data System (ADS)

    Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui

    2014-06-01

    The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation times. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to increase the calculation speed of the MC method for electron-photon transport with high precision and ultimately to reduce the accurate radiotherapy dose calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport was presented, with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with only a slight reduction in calculation accuracy; second, a variety of MC acceleration methods were applied, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the MC convergence rate. The fast MC method was tested on many simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System ARTS as an MC dose verification module.

  13. A new method to synthesize competitor RNAs for accurate analyses by competitive RT-PCR.

    PubMed

    Ishibashi, O

    1997-12-03

    A method to synthesize competitor RNAs as internal standards for competitive RT-PCR is improved by using the long and accurate PCR (LA-PCR) technique. Competitor templates synthesized by the new method are almost the same in length, and possibly in secondary structure, as the target mRNAs to be quantified, except that they include a short deletion within the segments to be amplified. This allows reverse transcription to proceed with almost the same efficiency from both target mRNAs and competitor RNAs. Therefore, more accurate quantification can be accomplished by using such competitor RNAs.

  14. Infectious titres of sheep scrapie and bovine spongiform encephalopathy agents cannot be accurately predicted from quantitative laboratory test results.

    PubMed

    González, Lorenzo; Thorne, Leigh; Jeffrey, Martin; Martin, Stuart; Spiropoulos, John; Beck, Katy E; Lockey, Richard W; Vickery, Christopher M; Holder, Thomas; Terry, Linda

    2012-11-01

    It is widely accepted that abnormal forms of the prion protein (PrP) are the best surrogate marker for the infectious agent of prion diseases and, in practice, the detection of such disease-associated (PrP(d)) and/or protease-resistant (PrP(res)) forms of PrP is the cornerstone of diagnosis and surveillance of the transmissible spongiform encephalopathies (TSEs). Nevertheless, some studies question the consistent association between infectivity and abnormal PrP detection. To address this discrepancy, 11 brain samples of sheep affected with natural scrapie or experimental bovine spongiform encephalopathy were selected on the basis of the magnitude and predominant types of PrP(d) accumulation, as shown by immunohistochemical (IHC) examination; contra-lateral hemi-brain samples were inoculated at three different dilutions into transgenic mice overexpressing ovine PrP and were also subjected to quantitative analysis by three biochemical tests (BCTs). Six samples gave 'low' infectious titres (10⁶·⁵ to 10⁶·⁷ LD₅₀ g⁻¹) and five gave 'high titres' (10⁸·¹ to ≥ 10⁸·⁷ LD₅₀ g⁻¹) and, with the exception of the Western blot analysis, those two groups tended to correspond with samples with lower PrP(d)/PrP(res) results by IHC/BCTs. However, no statistical association could be confirmed due to high individual sample variability. It is concluded that although detection of abnormal forms of PrP by laboratory methods remains useful to confirm TSE infection, infectivity titres cannot be predicted from quantitative test results, at least for the TSE sources and host PRNP genotypes used in this study. Furthermore, the near inverse correlation between infectious titres and Western blot results (high protease pre-treatment) argues for a dissociation between infectivity and PrP(res).

  15. Review of Quantitative Software Reliability Methods

    SciTech Connect

    Chu, T.L.; Yue, M.; Martinez-Guridi, M.; Lehner, J.

    2010-09-17

    The current U.S. Nuclear Regulatory Commission (NRC) licensing process for digital systems rests on deterministic engineering criteria. In its 1995 probabilistic risk assessment (PRA) policy statement, the Commission encouraged the use of PRA technology in all regulatory matters to the extent supported by the state-of-the-art in PRA methods and data. Although many activities have been completed in the area of risk-informed regulation, the risk-informed analysis process for digital systems has not yet been satisfactorily developed. Since digital instrumentation and control (I&C) systems are expected to play an increasingly important role in nuclear power plant (NPP) safety, the NRC established a digital system research plan that defines a coherent set of research programs to support its regulatory needs. One of the research programs included in the NRC's digital system research plan addresses risk assessment methods and data for digital systems. Digital I&C systems have some unique characteristics, such as using software, and may have different failure causes and/or modes than analog I&C systems; hence, their incorporation into NPP PRAs entails special challenges. The objective of the NRC's digital system risk research is to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems into NPP PRAs, and (2) using information on the risks of digital systems to support the NRC's risk-informed licensing and oversight activities. For several years, Brookhaven National Laboratory (BNL) has worked on NRC projects to investigate methods and tools for the probabilistic modeling of digital systems, as documented mainly in NUREG/CR-6962 and NUREG/CR-6997. However, the scope of this research principally focused on hardware failures, with limited reviews of software failure experience and software reliability methods. NRC also sponsored research at the Ohio State University investigating the modeling of digital systems

  16. Semi-quantitative method to estimate levels of Campylobacter

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Introduction: Research projects utilizing live animals and/or systems often require reliable, accurate quantification of Campylobacter following treatments. Even with marker strains, conventional methods designed to quantify are labor- and material-intensive, requiring either serial dilutions or MPN ...

  17. Chemoenzymatic method for glycomics: isolation, identification, and quantitation

    PubMed Central

    Yang, Shuang; Rubin, Abigail; Eshghi, Shadi Toghi; Zhang, Hui

    2015-01-01

    Over the past decade, considerable progress has been made with respect to the analytical methods for analysis of glycans from biological sources. Regardless of the specific methods that are used, glycan analysis includes isolation, identification, and quantitation. Derivatization is indispensable for improving glycan identification. Derivatization of glycans can be performed by permethylation or by carbodiimide coupling/esterification. By introducing a fluorophore or chromophore at their reducing end, glycans can be separated by electrophoresis or chromatography. The fluorogenically labeled glycans can be quantitated using fluorescence detection. Recently developed solid-phase approaches, such as glycoprotein immobilization for glycan extraction and on-tissue glycan mass spectrometry imaging, demonstrate advantages over methods performed in solution. Derivatization of sialic acids is favorably implemented on the solid support using carbodiimide coupling, and the released glycans can be further modified at the reducing end or permethylated for quantitative analysis. In this review, methods for glycan isolation, identification, and quantitation are discussed. PMID:26390280

  18. Method for quantitating sensitivity to a staphylococcal bacteriocin.

    PubMed Central

    Van Norman, G; Groman, N

    1979-01-01

    A convenient method for quantitating the sensitivity of large numbers of bacterial strains (presently Corynebacterium diphtheriae) to a Staphylococcus aureus phage type 71 bacteriocin is described. PMID:121117

  19. Fluorometric method of quantitative cell mutagenesis

    DOEpatents

    Dolbeare, Frank A.

    1982-01-01

    A method for assaying a cell culture for mutagenesis is described. A cell culture is stained first with a histochemical stain, and then a fluorescent stain. Normal cells in the culture are stained by both the histochemical and fluorescent stains, while abnormal cells are stained only by the fluorescent stain. The two stains are chosen so that the histochemical stain absorbs the wavelengths that the fluorescent stain emits. After the counterstained culture is subjected to exciting light, the fluorescence from the abnormal cells is detected.

  20. Fluorometric method of quantitative cell mutagenesis

    DOEpatents

    Dolbeare, F.A.

    1980-12-12

    A method for assaying a cell culture for mutagenesis is described. A cell culture is stained first with a histochemical stain, and then a fluorescent stain. Normal cells in the culture are stained by both the histochemical and fluorescent stains, while abnormal cells are stained only by the fluorescent stain. The two stains are chosen so that the histochemical stain absorbs the wavelengths that the fluorescent stain emits. After the counterstained culture is subjected to exciting light, the fluorescence from the abnormal cells is detected.

  1. The U.S. Department of Agriculture Automated Multiple-Pass Method accurately assesses sodium intakes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Accurate and practical methods to monitor sodium intake of the U.S. population are critical given current sodium reduction strategies. While the gold standard for estimating sodium intake is the 24 hour urine collection, few studies have used this biomarker to evaluate the accuracy of a dietary ins...

  2. LSimpute: accurate estimation of missing values in microarray data with least squares methods.

    PubMed

    Bø, Trond Hellem; Dysvik, Bjarte; Jonassen, Inge

    2004-02-20

    Microarray experiments generate data sets with information on the expression levels of thousands of genes in a set of biological samples. Unfortunately, such experiments often produce multiple missing expression values, normally due to various experimental problems. As many algorithms for gene expression analysis require a complete data matrix as input, the missing values have to be estimated in order to analyze the available data. Alternatively, genes and arrays can be removed until no missing values remain. However, for genes or arrays with only a small number of missing values, it is desirable to impute those values. For the subsequent analysis to be as informative as possible, it is essential that the estimates for the missing gene expression values are accurate. A small amount of badly estimated missing values in the data might be enough for clustering methods, such as hierarchical clustering or K-means clustering, to produce misleading results. Thus, accurate methods for missing value estimation are needed. We present novel methods for estimation of missing values in microarray data sets that are based on the least squares principle, and that utilize correlations between both genes and arrays. For this set of methods, we use the common reference name LSimpute. We compare the estimation accuracy of our methods with the widely used KNNimpute on three complete data matrices from public data sets by randomly knocking out data (labeling as missing). From these tests, we conclude that our LSimpute methods produce estimates that are consistently more accurate than those obtained using KNNimpute. Additionally, we examine a more classic approach to missing value estimation based on expectation maximization (EM). We refer to our EM implementations as EMimpute, and the estimate errors using the EMimpute methods are compared with those produced by our novel methods. The results indicate that on average, the estimates from our best performing LSimpute method are at least as
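
    The least-squares idea can be illustrated by regressing a gene with a missing value on its most correlated complete gene; this is a single-predictor sketch only (LSimpute itself combines many such regressions and also uses array correlations).

      import numpy as np

      # Single-predictor illustration of least-squares imputation: estimate a
      # missing expression value from the most correlated gene that has no
      # missing data. (A sketch of the idea, not the LSimpute package.)
      def ls_impute_one(X, gene, array):
          complete = ~np.isnan(X).any(axis=1)        # genes with no missing data
          mask = np.ones(X.shape[1], bool); mask[array] = False
          target = X[gene, mask]
          corrs = [abs(np.corrcoef(X[g, mask], target)[0, 1]) if complete[g]
                   else -1.0 for g in range(X.shape[0])]
          best = int(np.argmax(corrs))               # most correlated donor gene
          slope, intercept = np.polyfit(X[best, mask], target, 1)
          return slope * X[best, array] + intercept

      X = np.array([[1.0, 2.0, 3.0, 4.0],
                    [1.1, 2.1, 3.0, 3.9],
                    [0.9, 2.2, np.nan, 4.1]])
      print(ls_impute_one(X, gene=2, array=2))       # ~3.1, close to the trend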

  3. An Effective Method to Accurately Calculate the Phase Space Factors for β⁻β⁻ Decay

    DOE PAGES

    Neacsu, Andrei; Horoi, Mihai

    2016-01-01

    Accurate calculations of the electron phase space factors are necessary for reliable predictions of double-beta decay rates and for the analysis of the associated electron angular and energy distributions. We present an effective method to calculate these phase space factors that takes into account the distorted Coulomb field of the daughter nucleus, yet it allows one to easily calculate the phase space factors with good accuracy relative to the most exact methods available in the recent literature.

  4. Accurate determination of specific heat at high temperatures using the flash diffusivity method

    NASA Technical Reports Server (NTRS)

    Vandersande, J. W.; Zoltan, A.; Wood, C.

    1989-01-01

    The flash diffusivity method of Parker et al. (1961) was used to accurately measure the specific heat of test samples simultaneously with thermal diffusivity, thus obtaining the thermal conductivity of these materials directly. The accuracy of data obtained on two types of materials (n-type silicon-germanium alloys and niobium) was ±3 percent. It is shown that the method is applicable up to at least 1300 K.
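
    The flash-method arithmetic combines Parker's half-rise relation with k = αρc_p; a sketch with hypothetical readings (not data from the paper).

      # Flash-method arithmetic (Parker et al. 1961): diffusivity from the
      # half-rise time, specific heat from the absorbed pulse energy, then
      # conductivity k = alpha * rho * cp. Values below are hypothetical.
      L = 2.0e-3          # sample thickness, m
      t_half = 0.030      # time to half of the maximum rear-face rise, s
      alpha = 0.1388 * L**2 / t_half            # Parker relation, m^2/s

      Q, mass, dT_max = 1.5, 0.002, 1.15        # absorbed J, kg, K (hypothetical)
      cp = Q / (mass * dT_max)                  # specific heat, J/(kg K)

      rho = 3000.0                              # density, kg/m^3
      k = alpha * rho * cp                      # thermal conductivity, W/(m K)
      print(f"alpha={alpha:.2e} m^2/s, cp={cp:.0f} J/(kg K), k={k:.1f} W/(m K)")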

  5. Mitochondrial DNA as a non-invasive biomarker: Accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias

    SciTech Connect

    Malik, Afshan N.; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil

    2011-08-19

    Highlights:
    • Mitochondrial dysfunction is central to many diseases of oxidative stress.
    • 95% of the mitochondrial genome is duplicated in the nuclear genome.
    • Dilution of untreated genomic DNA leads to dilution bias.
    • Unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content.
    Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved, and we have identified that current methods have at least one of the following three problems: (1) as much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA, which are repetitive and/or highly variable, for qPCR of the nuclear genome leads to errors; and (3) the size difference between the mitochondrial and nuclear genomes causes a 'dilution bias' when template DNA is diluted. We describe a PCR-based method using unique regions of the human mitochondrial genome not duplicated in the nuclear genome, a unique single-copy region of the nuclear genome, and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
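
    With unique single-copy targets in hand, the relative copy number reduces to standard delta-Ct arithmetic, assuming amplification efficiencies near 100%; the Ct values below are hypothetical, not the authors' data.

      # Standard delta-Ct arithmetic for mitochondrial-to-nuclear copy number,
      # valid once primers target unique (non-pseudogene) regions and both
      # efficiencies are near 100%. Ct values here are hypothetical.
      def mt_per_nucleus(ct_mito, ct_nuclear, mito_eff=2.0, nuc_eff=2.0):
          # copies scale as eff**(-Ct); the factor 2 reflects the two copies of
          # the single-copy nuclear target per diploid genome
          return 2.0 * (nuc_eff ** ct_nuclear) / (mito_eff ** ct_mito)

      print(f"Mt/N ~ {mt_per_nucleus(ct_mito=18.2, ct_nuclear=26.5):.0f}")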

  6. An effective method for accurate prediction of the first hyperpolarizability of alkalides.

    PubMed

    Wang, Jia-Nan; Xu, Hong-Liang; Sun, Shi-Ling; Gao, Ting; Li, Hong-Zhi; Li, Hui; Su, Zhong-Min

    2012-01-15

    The proper theoretical calculation method for nonlinear optical (NLO) properties is a key factor in designing excellent NLO materials. Yet it is a difficult task to obtain accurate NLO properties of large-scale molecules. In the present work, an effective intelligent computing method, called the extreme learning machine neural network (ELM-NN), is proposed to accurately predict the first hyperpolarizability (β(0)) of alkalides from low-accuracy first hyperpolarizability values. Compared with a neural network (NN) and a genetic algorithm neural network (GANN), the root-mean-square deviations of the values predicted by ELM-NN, GANN, and NN from their MP2 counterparts are 0.02, 0.08, and 0.17 a.u., respectively. This suggests that the values predicted by ELM-NN are more accurate than those calculated by the NN and GANN methods. Another strength of ELM-NN is its ability to reach a high accuracy level at less computing cost: experimental results show that the computing time of MP2 is 2.4-4 times that of ELM-NN. Thus, the proposed method is a potentially powerful tool in computational chemistry, and it may predict β(0) of large-scale molecules, which is difficult to obtain with high-accuracy theoretical methods due to the dramatically increasing computational cost.
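
    For readers unfamiliar with extreme learning machines: the defining trait is that the hidden layer is random and fixed, so training reduces to one linear least-squares solve, which is where the speed advantage over iteratively trained networks comes from. A generic ELM regressor sketch (not the paper's exact ELM-NN hybrid; all names here are illustrative):

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Train a basic extreme learning machine regressor: random hidden
    layer, output weights solved in one least-squares step."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights, closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Usage idea matching the abstract: rows of X are molecules described by
# cheap, low-accuracy descriptors; y holds high-level (MP2-quality) beta_0.
```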

  7. [Reconstituting evaluation methods based on both qualitative and quantitative paradigms].

    PubMed

    Miyata, Hiroaki; Okubo, Suguru; Yoshie, Satoru; Kai, Ichiro

    2011-01-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has resulted in the concepts becoming obscure and unrecognizable. In this study we conducted a content analysis of evaluation methods in qualitative healthcare research. We extracted descriptions of four types of evaluation paradigm (validity/credibility, reliability/dependability, objectivity/confirmability, and generalizability/transferability), and classified them into subcategories. In quantitative research, there have been many evaluation methods based on qualitative paradigms, and vice versa. Thus, it may not be useful to treat the evaluation methods of the qualitative paradigm as isolated from those of quantitative methods. Choosing practical evaluation methods suited to the situation and prior conditions of each study is an important approach for researchers.

  8. Uncertainty of quantitative microbiological methods of pharmaceutical analysis.

    PubMed

    Gunar, O V; Sakhno, N G

    2015-12-30

    The total uncertainty of quantitative microbiological methods used in pharmaceutical analysis consists of several components. Analysis of the most important sources of variability in quantitative microbiological methods demonstrated no effect of culture media and plate-count techniques on the estimation of microbial counts, while a highly significant effect of other factors (type of microorganism, pharmaceutical product, and individual reading and interpreting errors) was established. The most appropriate method of statistical analysis for such data was ANOVA, which enabled not only the effects of individual factors to be estimated but also their interactions. By considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the method of quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, which is appropriate for traditional plate count methods.

  9. Accurate mass replacement method for the sediment concentration measurement with a constant volume container

    NASA Astrophysics Data System (ADS)

    Ban, Yunyun; Chen, Tianqin; Yan, Jun; Lei, Tingwu

    2017-04-01

    The measurement of sediment concentration in water is of great importance in soil erosion research and soil and water loss monitoring systems. The traditional weighing method has long been the foundation of all other measuring methods and of instrument calibration. The development of a new method to replace the traditional oven-drying method is of interest in research and practice for the quick and efficient measurement of sediment concentration, especially in field measurements. A new method is advanced in this study for accurately measuring the sediment concentration, based on the accurate measurement of the mass of the sediment-water mixture in a confined constant volume container (CVC). A sediment-laden water sample is put into the CVC to determine its mass before the CVC is filled with water and weighed again for the total mass of the water and sediments in the container. The known volume of the CVC, the mass of the sediment-laden water, and the sediment particle density are used to calculate the mass of water that is replaced by sediments, from which the sediment concentration of the sample is calculated. The influence of water temperature was corrected for by measuring the water temperature and determining the corresponding water density before measurements were conducted. The CVC was used to eliminate the surface tension effect so as to obtain the accurate volume of the water and sediment mixture. Experimental results showed that the method was capable of measuring sediment concentrations from 0.5 up to 1200 kg m⁻³. A good linear relationship existed between the designed and measured sediment concentrations, with all coefficients of determination greater than 0.999 and an averaged relative error of less than 0.2%. All of this indicates that the new method is capable of measuring the full range of sediment concentrations above 0.5 kg m⁻³ and can replace the traditional oven-drying method as a standard method for evaluating and calibrating other methods.
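
    The abstract describes the procedure in words only; under one consistent reading of it, the concentration follows from elementary volume and mass balances. A sketch (argument names, units, and the default particle density are assumptions of this sketch, not values from the paper):

```python
def sediment_concentration(v_cvc, m_sample, m_total, rho_w, rho_s=2650.0):
    """Sediment concentration (kg/m^3) by mass replacement in a constant
    volume container:
      v_cvc    - calibrated container volume (m^3)
      m_sample - mass of the sediment-laden sample placed in the CVC (kg)
      m_total  - mass after topping the CVC up with clear water (kg)
      rho_w    - water density at the measured temperature (kg/m^3)
      rho_s    - sediment particle density (kg/m^3), assumed value
    """
    v_added = (m_total - m_sample) / rho_w   # volume of the top-up water
    v_sample = v_cvc - v_added               # volume of the original sample
    # within the sample: m_sample = rho_w*(v_sample - v_sed) + rho_s*v_sed
    v_sed = (m_sample - rho_w * v_sample) / (rho_s - rho_w)
    return rho_s * v_sed / v_sample
```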

  10. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    PubMed

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge, as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe the use of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases with relevance to enzymes.
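
    Both methods named above are specialized estimators, but they rest on standard free-energy identities; for orientation, the free energy perturbation (Zwanzig) and nonequilibrium work (Jarzynski) relations underlying Bennett-type and nonequilibrium-work estimators read

    \[ \Delta A = -k_B T \,\ln\left\langle e^{-\Delta U/k_B T}\right\rangle_0, \qquad \Delta A = -k_B T \,\ln\left\langle e^{-W/k_B T}\right\rangle, \]

    where ΔU is the potential-energy difference between the two descriptions (e.g., MM and QM/MM) and W is the work done during a nonequilibrium switching process. Both exponential averages are notoriously hard to converge, which is precisely the difficulty the methods discussed in the chapter are designed to mitigate.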

  11. A safe and accurate method to perform esthetic mandibular contouring surgery for Far Eastern Asians.

    PubMed

    Hsieh, A M-C; Huon, L-K; Jiang, H-R; Liu, S Y-C

    2017-05-01

    A tapered mandibular contour is popular with Far Eastern Asians. This study describes a safe and accurate method of using preoperative virtual surgical planning (VSP) and an intraoperative ostectomy guide to maximize the esthetic outcomes of mandibular symmetry and tapering while mitigating injury to the inferior alveolar nerve (IAN). Twelve subjects with chief complaints of a wide and square lower face underwent this protocol from January to June 2015. VSP was used to confirm symmetry and preserve the IAN while maximizing the surgeon's ability to taper the lower face via mandibular inferior border ostectomy. The accuracy of this method was confirmed by superimposition of the perioperative computed tomography scans in all subjects. No subjects complained of prolonged paresthesia after 3 months. A safe and accurate protocol for achieving an esthetic lower face in indicated Far Eastern individuals is described.

  12. A fast and accurate method to predict 2D and 3D aerodynamic boundary layer flows

    NASA Astrophysics Data System (ADS)

    Bijleveld, H. A.; Veldman, A. E. P.

    2014-12-01

    A quasi-simultaneous interaction method is applied to predict 2D and 3D aerodynamic flows. This method is suitable for offshore wind turbine design software as it is a very accurate and computationally reasonably cheap method. This study shows the results for a NACA 0012 airfoil. The two applied solvers converge to the experimental values when the grid is refined. We also show that the eigenvalues remain positive in separated flow, thus avoiding the Goldstein singularity at separation. In 3D we show a flow over a dent in which separation occurs. A rotating flat plate is used to show the applicability of the method to rotating flows. The demonstrated capabilities indicate that the quasi-simultaneous interaction method is suitable for design methods for offshore wind turbine blades.

  13. An accurate and practical method for inference of weak gravitational lensing from galaxy images

    NASA Astrophysics Data System (ADS)

    Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.

    2016-07-01

    We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong, extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies s⁻¹ core⁻¹ with good scaling properties. Initial tests of this code on ≈10⁹ simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10⁻³, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multiband observations; and joint inference of photometric redshifts and lensing tomography.

  14. Applying Quantitative Genetic Methods to Primate Social Behavior

    PubMed Central

    Brent, Lauren J. N.

    2013-01-01

    Increasingly, behavioral ecologists have applied quantitative genetic methods to investigate the evolution of behaviors in wild animal populations. The promise of quantitative genetics in unmanaged populations opens the door for simultaneous analysis of inheritance, phenotypic plasticity, and patterns of selection on behavioral phenotypes all within the same study. In this article, we describe how quantitative genetic techniques provide studies of the evolution of behavior with information that is unique and valuable. We outline technical obstacles for applying quantitative genetic techniques that are of particular relevance to studies of behavior in primates, especially those living in noncaptive populations (e.g., the need for pedigree information and non-Gaussian phenotypes), and demonstrate how many of these barriers are now surmountable. We illustrate this by applying recent quantitative genetic methods to spatial proximity data, a simple and widely collected primate social behavior, from adult rhesus macaques on Cayo Santiago. Our analysis shows that proximity measures are consistent across repeated measurements on individuals (repeatable) and that kin have similar mean measurements (heritable). Quantitative genetics may hold lessons of considerable importance for studies of primate behavior, even those without a specific genetic focus. PMID:24659839
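
    The repeatability reported above is, in its simplest balanced-design form, an intraclass correlation from one-way ANOVA variance components. A sketch of that textbook calculation (not the mixed "animal" models typically fitted in quantitative genetics, and all names here are illustrative):

```python
import numpy as np

def repeatability(groups):
    """Intraclass correlation (repeatability) from repeated measures.
    `groups` is a list of per-individual measurement arrays; assumes a
    balanced design with the same number of measures per individual."""
    k = len(groups)                     # number of individuals
    n = len(groups[0])                  # measures per individual
    grand = np.mean(np.concatenate(groups))
    ms_between = n * sum((np.mean(g) - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups) / (k * (n - 1))
    s2_between = (ms_between - ms_within) / n   # among-individual variance
    return s2_between / (s2_between + ms_within)
```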

  15. Allele Specific Locked Nucleic Acid Quantitative PCR (ASLNAqPCR): An Accurate and Cost-Effective Assay to Diagnose and Quantify KRAS and BRAF Mutation

    PubMed Central

    Morandi, Luca; de Biase, Dario; Visani, Michela; Cesari, Valentina; De Maglio, Giovanna; Pizzolitto, Stefano; Pession, Annalisa; Tallini, Giovanni

    2012-01-01

    The use of tyrosine kinase inhibitors (TKIs) requires testing for hot spot mutations of the molecular effectors downstream of the membrane-bound tyrosine kinases, since their wild-type status is expected for response to TKI therapy. We report a novel assay that we have called Allele Specific Locked Nucleic Acid quantitative PCR (ASLNAqPCR). The assay uses LNA-modified allele-specific primers and LNA-modified beacon probes to increase sensitivity and specificity and to accurately quantify mutations. We designed primers specific for codon 12/13 KRAS mutations and BRAF V600E, and validated the assay with 300 routine samples from a variety of sources, including cytology specimens. All were analyzed by ASLNAqPCR and Sanger sequencing. Discordant cases were pyrosequenced. ASLNAqPCR correctly identified BRAF and KRAS mutations in all discordant cases, all of which had a mutated/wild-type DNA ratio below the analytical sensitivity of the Sanger method. ASLNAqPCR was 100% specific, with greater accuracy and higher positive and negative predictive values compared with Sanger sequencing. The analytical sensitivity of ASLNAqPCR is 0.1%, allowing quantification of mutated DNA in small neoplastic cell clones. ASLNAqPCR can be performed in any laboratory with real-time PCR equipment, is very cost-effective, and can easily be adapted to detect hot spot mutations in other oncogenes. PMID:22558339

  16. A General Method for Targeted Quantitative Cross-Linking Mass Spectrometry

    PubMed Central

    Chavez, Juan D.; Eng, Jimmy K.; Schweppe, Devin K.; Cilia, Michelle; Rivera, Keith; Zhong, Xuefei; Wu, Xia; Allen, Terrence; Khurgel, Moshe; Kumar, Akhilesh; Lampropoulos, Athanasios; Larsson, Mårten; Maity, Shuvadeep; Morozov, Yaroslav; Pathmasiri, Wimal; Perez-Neut, Mathew; Pineyro-Ruiz, Coriness; Polina, Elizabeth; Post, Stephanie; Rider, Mark; Tokmina-Roszyk, Dorota; Tyson, Katherine; Vieira Parrine Sant'Ana, Debora; Bruce, James E.

    2016-01-01

    Chemical cross-linking mass spectrometry (XL-MS) provides protein structural information by identifying covalently linked proximal amino acid residues on protein surfaces. The information gained by this technique is complementary to other structural biology methods such as X-ray crystallography, NMR and cryo-electron microscopy[1]. The extension of traditional quantitative proteomics methods with chemical cross-linking can provide information on the structural dynamics of protein structures and protein complexes. The identification and quantitation of cross-linked peptides remain challenging for the general community, requiring specialized expertise that ultimately limits more widespread adoption of the technique. We describe a general method for targeted quantitative mass spectrometric analysis of cross-linked peptide pairs. We report the adaptation of the widely used, open-source software package Skyline for the analysis of quantitative XL-MS data as a means for data analysis and sharing of methods. We demonstrate the utility and robustness of the method with a cross-laboratory study and present data that are supported by and validate previously published data on quantified cross-linked peptide pairs. This advance provides an easy-to-use resource so that any lab with access to an LC-MS system capable of performing targeted quantitative analysis can quickly and accurately measure dynamic changes in protein structure and protein interactions. PMID:27997545

  17. A high-order accurate embedded boundary method for first order hyperbolic equations

    NASA Astrophysics Data System (ADS)

    Mattsson, Ken; Almquist, Martin

    2017-04-01

    A stable and high-order accurate embedded boundary method for first order hyperbolic equations is derived. Where the grid-boundaries and the physical boundaries do not coincide, high order interpolation is used. The boundary stencils are based on a summation-by-parts framework, and the boundary conditions are imposed by the SAT penalty method, which guarantees linear stability for one-dimensional problems. Second-, fourth-, and sixth-order finite difference schemes are considered. The resulting schemes are fully explicit. Accuracy and numerical stability of the proposed schemes are demonstrated for both linear and nonlinear hyperbolic systems in one and two spatial dimensions.
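
    The abstract names the two key ingredients, summation-by-parts (SBP) operators and SAT penalty terms; for readers unfamiliar with them, here is a minimal generic sketch for the simplest model problem, 1D advection with an inflow boundary. This is standard SBP-SAT machinery, not the paper's embedded-boundary interpolation, and all parameter choices below are illustrative:

```python
import numpy as np

# Second-order SBP-SAT discretization of u_t + u_x = 0 on [0, 1] with
# inflow boundary at x = 0, integrated in time with classical RK4.
n = 101
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)

D = np.zeros((n, n))                    # standard 2nd-order SBP derivative
D[0, :2] = [-1.0, 1.0]
D[-1, -2:] = [-1.0, 1.0]
for i in range(1, n - 1):
    D[i, i - 1], D[i, i + 1] = -0.5, 0.5
D /= h

h_inv = np.full(n, 1.0 / h)             # inverse of the diagonal SBP norm H
h_inv[0] = h_inv[-1] = 2.0 / h

tau = -1.0                              # SAT penalty; tau <= -1/2 gives stability

def rhs(u, g=0.0):
    du = -D @ u
    du[0] += tau * h_inv[0] * (u[0] - g)   # weak enforcement of u(0, t) = g
    return du

u = np.exp(-200.0 * (x - 0.3) ** 2)     # initial Gaussian pulse
dt, t_end, t = 0.4 * h, 0.5, 0.0
while t < t_end:
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    t += dt
```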

  18. The Block recursion library: accurate calculation of resolvent submatrices using the block recursion method

    NASA Astrophysics Data System (ADS)

    Godin, T. J.; Haydock, Roger

    1991-04-01

    The Block Recursion Library, a collection of FORTRAN subroutines, calculates submatrices of the resolvent of a linear operator. The resolvent, in matrix theory, is a powerful tool for extracting information about solutions of linear systems. The routines use the block recursion method and achieve high accuracy for very large systems of coupled equations. This technique is a generalization of the scalar recursion method, an accurate technique for finding the local density of states. A sample program uses these routines to find the quantum mechanical transmittance of a randomly disordered two-dimensional cluster of atoms.

  19. Investigation of low frequency electrolytic solution behavior with an accurate electrical impedance method

    NASA Astrophysics Data System (ADS)

    Ho, Kung-Chu; Su, Vin-Cent; Huang, Da-Yo; Lee, Ming-Lun; Chou, Nai-Kuan; Kuan, Chieh-Hsiung

    2017-01-01

    This paper reports the investigation of strong electrolytic solutions operated in the low frequency regime through an accurate electrical impedance method realized with a specific microfluidic device and high-resolution instruments. Experimental results show the improved repeatability and accuracy of the proposed impedance method. Moreover, all electrolytic solutions exhibit the so-called relaxation frequency at the peak value of the dielectric loss, due to relaxation of the total polarization inside the device. The relaxation frequency of concentrated electrolytes is higher owing to the stronger total polarization behavior arising from the higher conductivity, and thus lower resistance, of the electrolytic solutions.

  20. [An accurate identification method for Chinese materia medica--systematic identification of Chinese materia medica].

    PubMed

    Wang, Xue-Yong; Liao, Cai-Li; Liu, Si-Qi; Liu, Chun-Sheng; Shao, Ai-Juan; Huang, Lu-Qi

    2013-05-01

    This paper puts forward a more accurate identification method for Chinese materia medica (CMM): the systematic identification of Chinese materia medica (SICMM), which may resolve difficulties in CMM identification that the ordinary traditional approaches cannot. Concepts, mechanisms, and methods of SICMM are systematically introduced, and its feasibility is demonstrated by experiments. The establishment of SICMM will solve problems in the identification of CMM not only at the level of phenotypic characters, such as morphology, microstructure, and chemical constituents, but will also further the discovery of the evolution and classification of species, subspecies, and populations of medicinal plants. The establishment of SICMM will improve the development of CMM identification and open a more extensive space for study.

  1. A method to accurately estimate the muscular torques of human wearing exoskeletons by torque sensors.

    PubMed

    Hwang, Beomsoo; Jeon, Doyoung

    2015-04-09

    In exoskeletal robots, the quantification of the user's muscular effort is important for recognizing the user's motion intentions and evaluating motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is therefore important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated with 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately under both relaxed and activated muscle conditions.
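
    The core arithmetic of the idea is the subtraction of the limb's rigid-body dynamics from the sensor reading. A generic inverse-dynamics sketch (the callable names and interfaces are assumptions of this sketch, not the paper's code):

```python
import numpy as np

def muscular_torque(tau_measured, q, qd, qdd, M, C, g):
    """Estimate the user's active muscular torque from a joint torque
    sensor reading by subtracting the limb's rigid-body dynamics:
        tau_muscle = tau_sensor - (M(q) @ qdd + C(q, qd) @ qd + g(q))
    M, C, g are callables returning the inertia matrix, Coriolis matrix
    and gravity vector built from user-specific identified parameters."""
    passive = M(q) @ qdd + C(q, qd) @ qd + g(q)
    return np.asarray(tau_measured) - passive
```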

  2. Multifrequency excitation method for rapid and accurate dynamic test of micromachined gyroscope chips.

    PubMed

    Deng, Yan; Zhou, Bin; Xing, Chao; Zhang, Rong

    2014-10-17

    A novel multifrequency excitation (MFE) method is proposed to realize rapid and accurate dynamic testing of micromachined gyroscope chips. Compared with the traditional sweep-frequency excitation (SFE) method, the computational time for testing one chip under four modes at a 1-Hz frequency resolution and 600-Hz bandwidth was dramatically reduced from 10 min to 6 s. A multifrequency signal with an equal amplitude and initial linear-phase-difference distribution was generated to ensure test repeatability and accuracy. The current test system based on LabVIEW using the SFE method was modified to use the MFE method without any hardware changes. The experimental results verified that the MFE method can be an ideal solution for large-scale dynamic testing of gyroscope chips and gyroscopes.
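
    The abstract specifies an equal-amplitude multifrequency signal with an initial linear-phase-difference distribution; a sketch of constructing such an excitation is below (the exact phase schedule and sampling parameters used by the authors are not given, so those below are illustrative assumptions):

```python
import numpy as np

def multifrequency_excitation(freqs, fs, duration, amplitude=1.0):
    """Sum of equal-amplitude sinusoids whose initial phases increase
    linearly across tones, one simple way to realize the distribution
    described above while keeping the crest factor manageable."""
    t = np.arange(int(fs * duration)) / fs
    phases = np.linspace(0.0, np.pi, len(freqs), endpoint=False)
    return sum(amplitude * np.sin(2.0 * np.pi * f * t + p)
               for f, p in zip(freqs, phases))

# Example: 1 Hz resolution over a 600 Hz band; a single FFT of the chip's
# response at the same resolution then yields the whole frequency response
# in one shot, rather than one sweep point at a time.
sig = multifrequency_excitation(np.arange(1, 601), fs=4096, duration=1.0)
```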

  3. An accurate, robust, and easy-to-implement method for integration over arbitrary polyhedra: Application to embedded interface methods

    NASA Astrophysics Data System (ADS)

    Sudhakar, Y.; Moitinho de Almeida, J. P.; Wall, Wolfgang A.

    2014-09-01

    We present an accurate method for the numerical integration of polynomials over arbitrary polyhedra. Using the divergence theorem, the method transforms the domain integral into integrals evaluated over the facets of the polyhedra. The necessity of performing symbolic computation during such transformation is eliminated by using a one-dimensional Gauss quadrature rule. The facet integrals are computed with the help of quadratures available for triangles and quadrilaterals. Numerical examples, in which the proposed method is used to integrate the weak form of the Navier-Stokes equations in an embedded interface method (EIM), are presented. The results show that our method is as accurate and generalized as the most widely used volume-decomposition-based methods. Moreover, since the method involves neither volume decomposition nor symbolic computation, it is much easier to implement. Also, the present method is more efficient than other available integration methods based on the divergence theorem. The efficiency of the method is also compared with that of volume-decomposition-based methods and moment-fitting methods. To our knowledge, this is the first article that compares both the accuracy and the computational efficiency of methods relying on volume decomposition and those based on the divergence theorem.
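
    The paper works with 3D polyhedra; the same divergence-theorem trick is easiest to see in its 2D analogue, where the integral of a monomial over a polygon reduces to 1D Gauss quadrature along its edges. A sketch of that analogue (not the paper's 3D facet scheme):

```python
import numpy as np

def polygon_monomial_integral(verts, a, b):
    """Integrate x^a * y^b over a polygon with counter-clockwise vertices
    `verts`, using the divergence theorem with F = (x^(a+1) y^b/(a+1), 0):
    the area integral becomes Gauss quadrature along each edge."""
    verts = np.asarray(verts, dtype=float)
    deg = a + b + 1                                   # integrand degree in t
    tg, wg = np.polynomial.legendre.leggauss(deg // 2 + 1)
    tg = 0.5 * (tg + 1.0)                             # map nodes to [0, 1]
    wg = 0.5 * wg
    total = 0.0
    for (x1, y1), (x2, y2) in zip(verts, np.roll(verts, -1, axis=0)):
        xt = x1 + (x2 - x1) * tg
        yt = y1 + (y2 - y1) * tg
        # F . n ds contributes (x^(a+1) y^b / (a+1)) * dy along the edge
        total += np.sum(wg * xt ** (a + 1) * yt ** b) * (y2 - y1)
    return total / (a + 1)

# Sanity check: area of the unit square (a = b = 0) should be 1.0
square = [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
print(polygon_monomial_integral(square, 0, 0))
```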

  4. A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes

    SciTech Connect

    Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.

    2004-12-01

    We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.

  5. Accurate Time/Frequency Transfer Method Using Bi-Directional WDM Transmission

    NASA Technical Reports Server (NTRS)

    Imaoka, Atsushi; Kihara, Masami

    1996-01-01

    An accurate time transfer method is proposed using bi-directional wavelength division multiplexing (WDM) signal transmission along a single optical fiber. This method will be used in digital telecommunication networks and yields a time synchronization accuracy of better than 1 ns for long transmission lines over several tens of kilometers. The method can accurately measure the difference in delay between the two wavelength signals caused by the chromatic dispersion of the fiber, which limits conventional simple bi-directional dual-wavelength frequency transfer methods. We describe the characteristics of this difference in delay and then show that a delay measurement accuracy below 0.1 ns can be obtained by transmitting 156 Mb/s time reference signals at 1.31 and 1.55 micrometers along a 50 km fiber using the proposed method. Sub-nanosecond delay measurement using simple bi-directional dual-wavelength transmission along a 100 km fiber with a wavelength spacing of 1 nm in the 1.55 micrometer range is also shown.

  6. An accurate and efficient computation method of the hydration free energy of a large, complex molecule.

    PubMed

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-07

    The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of ⟨U_UV⟩/2 (where ⟨U_UV⟩ is the ensemble average of the sum of the pair interaction energies between the solute and the water molecules) and the water reorganization term, which mainly reflects the excluded volume effect. Since ⟨U_UV⟩ can readily be computed through an MD simulation of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of the computational load. We demonstrate that the water reorganization term can be calculated quantitatively using the morphometric approach (MA), which expresses the term as a linear combination of the four geometric measures of a solute, with the corresponding coefficients determined by the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, its use provides an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method, with a substantial reduction of the computational load.
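
    For orientation, the morphometric form referred to above expresses the reorganization term as a linear combination of four geometric measures of the solute: the excluded volume V, the solvent-accessible surface area A, and the integrated mean and Gaussian curvatures C and X, with coefficients fitted here via the ER method:

    \[ \mu_{\mathrm{reorg}} \approx c_V V + c_A A + c_C C + c_X X. \]

    Once the four coefficients are known, evaluating this expression requires only the solute geometry, which is why the term can be computed in a fraction of a second.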

  7. Wave propagation models for quantitative defect detection by ultrasonic methods

    NASA Astrophysics Data System (ADS)

    Srivastava, Ankit; Bartoli, Ivan; Coccia, Stefano; Lanza di Scalea, Francesco

    2008-03-01

    Ultrasonic guided wave testing necessitates quantitative, rather than qualitative, information on flaw size, shape and position. This quantitative diagnosis ability can be used to provide meaningful data to a prognosis algorithm for remaining life prediction, or simply to generate data sets for a statistical defect classification algorithm. Quantitative diagnostics needs models able to represent the interaction of guided waves with various defect scenarios. One such model is the Global-Local (GL) method, which uses a full finite element discretization of the region around a flaw to properly represent wave diffraction, and a suitable set of wave functions to simulate regions away from the flaw. Displacement and stress continuity conditions are imposed at the boundary between the global and the local regions. In this paper the GL method is expanded to take advantage of the Semi-Analytical Finite Element (SAFE) method in the global portion of the waveguide. The SAFE method is efficient because it only requires the discretization of the cross-section of the waveguide to obtain the wave dispersion solutions, and it can handle complex structures such as multilayered sandwich panels. The GL method is applied to predicting quantitatively the interaction of guided waves with defects in aluminum and composite structural components.

  8. Accurate method for including solid-fluid boundary interactions in mesoscopic model fluids

    SciTech Connect

    Berkenbos, A.; Lowe, C.P.

    2008-04-20

    Particle models are attractive methods for simulating the dynamics of complex mesoscopic fluids. Many practical applications of this methodology involve flow through a solid geometry. As the system is modeled using particles whose positions move continuously in space, one might expect that implementing the correct stick boundary condition exactly at the solid-fluid interface is straightforward. After all, unlike discrete methods there is no mapping onto a grid to contend with. In this article we describe a method that, for axisymmetric flows, imposes both the no-slip condition and continuity of stress at the interface. We show that the new method then accurately reproduces correct hydrodynamic behavior right up to the location of the interface. As such, computed flow profiles are correct even using a relatively small number of particles to model the fluid.

  9. Accurate force fields and methods for modelling organic molecular crystals at finite temperatures.

    PubMed

    Nyman, Jonas; Pundyke, Orla Sheehan; Day, Graeme M

    2016-06-21

    We present an assessment of the performance of several force fields for modelling intermolecular interactions in organic molecular crystals using the X23 benchmark set. The performance of the force fields is compared to several popular dispersion corrected density functional methods. In addition, we present our implementation of lattice vibrational free energy calculations in the quasi-harmonic approximation, using several methods to account for phonon dispersion. This allows us to also benchmark the force fields' reproduction of finite temperature crystal structures. The results demonstrate that anisotropic atom-atom multipole-based force fields can be as accurate as several popular DFT-D methods, but have errors 2-3 times larger than the current best DFT-D methods. The largest error in the examined force fields is a systematic underestimation of the (absolute) lattice energy.

  10. Accurate Wind Characterization in Complex Terrain Using the Immersed Boundary Method

    SciTech Connect

    Lundquist, K A; Chow, F K; Lundquist, J K; Kosovic, B

    2009-09-30

    This paper describes an immersed boundary method (IBM) that facilitates the explicit resolution of complex terrain within the Weather Research and Forecasting (WRF) model. Two different interpolation methods, trilinear and inverse distance weighting, are used at the core of the IBM algorithm. Functional aspects of the algorithm's implementation and the accuracy of results are considered. Simulations of flow over a three-dimensional hill with shallow terrain slopes are performed with both WRF's native terrain-following coordinate and with both IB methods. Comparisons of flow fields from the three simulations show excellent agreement, indicating that both IB methods produce accurate results. However, when ease of implementation is considered, inverse distance weighting is superior. Furthermore, inverse distance weighting is shown to be more adept at handling highly complex urban terrain, where the trilinear interpolation algorithm breaks down. This capability is demonstrated by using the inverse distance weighting core of the IBM to model atmospheric flow in downtown Oklahoma City.
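
    Inverse distance weighting itself is the classic Shepard interpolation; a generic sketch of the kernel (not the WRF/IBM implementation, and the function name and defaults are illustrative):

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    """Shepard-style inverse distance weighting: interpolate scattered
    `values` known at `points` (N x 3 array) to a single `query` point."""
    points, values = np.asarray(points), np.asarray(values)
    d = np.linalg.norm(points - np.asarray(query), axis=1)
    if d.min() < eps:                   # query coincides with a data point
        return float(values[d.argmin()])
    w = d ** -power                     # closer points get larger weights
    return float(np.sum(w * values) / np.sum(w))
```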

  11. Can a quantitative simulation of an Otto engine be accurately rendered by a simple Novikov model with heat leak?

    NASA Astrophysics Data System (ADS)

    Fischer, A.; Hoffmann, K.-H.

    2004-03-01

    In this case study a complex Otto engine simulation provides data including, but not limited to, effects from losses due to heat conduction, exhaust losses and frictional losses. This data is used as a benchmark to test whether the Novikov engine with heat leak, a simple endoreversible model, can reproduce the complex engine behavior quantitatively by an appropriate choice of model parameters. The reproduction obtained proves to be of high quality.

  12. A new class of accurate, mesh-free hydrodynamic simulation methods

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2015-06-01

    We present two new Lagrangian methods for hydrodynamics, in a systematic comparison with moving-mesh, smoothed particle hydrodynamics (SPH), and stationary (non-moving) grid methods. The new methods are designed to simultaneously capture advantages of both SPH and grid-based/adaptive mesh refinement (AMR) schemes. They are based on a kernel discretization of the volume coupled to a high-order matrix gradient estimator and a Riemann solver acting over the volume 'overlap'. We implement and test a parallel, second-order version of the method with self-gravity and cosmological integration in the code GIZMO; this maintains exact mass, energy and momentum conservation; exhibits superior angular momentum conservation compared to all other methods we study; does not require 'artificial diffusion' terms; and allows the fluid elements to move with the flow, so resolution is automatically adaptive. We consider a large suite of test problems, and find that on all problems the new methods appear competitive with moving-mesh schemes, with some advantages (particularly in angular momentum conservation), at the cost of enhanced noise. The new methods have many advantages versus SPH: proper convergence, good capturing of fluid-mixing instabilities, dramatically reduced 'particle noise' and numerical viscosity, more accurate sub-sonic flow evolution, and sharp shock-capturing. Advantages versus non-moving meshes include: automatic adaptivity, dramatically reduced advection errors and numerical overmixing, velocity-independent errors, accurate coupling to gravity, good angular momentum conservation and elimination of 'grid alignment' effects. We can, for example, follow hundreds of orbits of gaseous discs, while AMR and SPH methods break down in a few orbits. However, fixed meshes minimize 'grid noise'. These differences are important for a range of astrophysical problems.

  13. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms

    PubMed Central

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross-sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle relating the volume of a sphere to that of the cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
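
    The geometric core of the idea can be made concrete: a sphere occupies exactly 2/3 of its circumscribing cylinder, and the same 2/3 relation between silhouette area and depth holds exactly for spheroids rotated about their long axis. The sketch below shows only that core relation; the paper's exact estimator and its definition of the unellipticity coefficient are not reproduced here, and the argument conventions are assumptions:

```python
import numpy as np

def biovolume(area, width, unellipticity=1.0):
    """Biovolume from 2D image measurements via the sphere-in-cylinder
    relation: for a cell with silhouette area `area` and minor-axis
    `width` (the depth of a rotationally symmetric body),
    V = (2/3) * area * width is exact for spheres and spheroids.
    `unellipticity` stands in for the paper's corrective coefficient
    for outlines departing from an ellipse."""
    return (2.0 / 3.0) * area * width * unellipticity

# Sanity check against a sphere of diameter d: area = pi d^2 / 4 and
# width = d give V = pi d^3 / 6, the exact sphere volume.
d = 10.0
print(biovolume(np.pi * d ** 2 / 4.0, d))   # ~523.6
```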

  14. A hybrid Boundary Element Unstructured Transmission-line (BEUT) method for accurate 2D electromagnetic simulation

    NASA Astrophysics Data System (ADS)

    Simmons, Daniel; Cools, Kristof; Sewell, Phillip

    2016-11-01

    Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications, and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former, surface-based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter, volume-based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes, which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method; it incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method are described in this paper, along with relevant implementation details. The method is corroborated by studying its correctness and efficiency compared to the traditional UTLM method when applied to complex problems such as the transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.

  15. Method for accurate optical alignment using diffraction rings from lenses with spherical aberration.

    PubMed

    Gwynn, R B; Christensen, D A

    1993-03-01

    A useful alignment method is presented that exploits the closely spaced concentric fringes that form in the longitudinal spherical aberration region of positive spherical lenses imaging a point source. To align one or more elements to a common axis, spherical lenses are attached precisely to the elements and the resulting diffraction rings are made to coincide. We modeled the spherical aberration of the lenses by calculating the diffraction patterns of converging plane waves passing through concentric narrow annular apertures. The validity of the model is supported by experimental data and is determined to be accurate for a prototype penumbral imaging alignment system developed at Lawrence Livermore National Laboratory.

  16. Accurate prediction of adsorption energies on graphene, using a dispersion-corrected semiempirical method including solvation.

    PubMed

    Vincent, Mark A; Hillier, Ian H

    2014-08-25

    The accurate prediction of the adsorption energies of unsaturated molecules on graphene in the presence of water is essential for the design of molecules that can modify its properties and that can aid its processability. We here show that a semiempirical MO method corrected for dispersive interactions (PM6-DH2) can predict the adsorption energies of unsaturated hydrocarbons and the effect of substitution on these values to an accuracy comparable to DFT values and in good agreement with the experiment. The adsorption energies of TCNE, TCNQ, and a number of sulfonated pyrenes are also predicted, along with the effect of hydration using the COSMO model.

  17. A simple and accurate resist parameter extraction method for sub-80-nm DRAM patterns

    NASA Astrophysics Data System (ADS)

    Lee, Sook; Hwang, Chan; Park, Dong-Woon; Kim, In-Sung; Kim, Ho-Chul; Woo, Sang-Gyun; Cho, Han-Ku; Moon, Joo-Tae

    2004-05-01

    Due to the polarization effect of high-NA lithography, the consideration of resist effects in lithography simulation becomes increasingly important. In spite of the importance of resist simulation, many process engineers are reluctant to consider resist effects in lithography simulation due to the time-consuming procedure required to extract the resist parameters and the uncertainty in the measurement of some parameters. Weiss suggested a simplified development model that does not require the complex kinetic parameters. For device fabrication engineers, there is a simple and accurate parameter extraction and optimization method using the Weiss model. This method needs the refractive index, Dill's parameters, and development rate monitoring (DRM) data for parameter extraction. The parameters extracted using the referred sequence are not accurate, so we have to optimize the parameters to fit the critical dimension scanning electron microscopy (CD SEM) data of line and space patterns. Hence, the FiRM of Sigma-C is utilized as a resist parameter-optimizing program. According to our study, the illumination shape, the aberration, and the pupil mesh points have a large effect on the accuracy of the resist parameters in optimization. To obtain the optimum parameters, we need to find the saturated mesh points in terms of normalized intensity log slope (NILS) prior to optimization. The simulation results using the parameters optimized by this method show good agreement with experiments for iso-dense bias, focus-exposure matrix data, and sub-80 nm device pattern simulation.

  18. Joint iris boundary detection and fit: a real-time method for accurate pupil tracking.

    PubMed

    Barbosa, Marconi; James, Andrew C

    2014-08-01

    A range of applications in visual science rely on accurate tracking of the human pupil's movement and contraction in response to light. While the literature for independent contour detection and fitting of the iris-pupil boundary is vast, a joint approach, in which it is assumed that the pupil has a given geometric shape, has been largely overlooked. We present here a global method for simultaneously finding and fitting an elliptic or circular contour against a dark interior, which produces consistently accurate results even under non-ideal recording conditions, such as reflections near and over the boundary, droopy eye lids, or the sudden formation of tears. The specific form of the proposed optimization problem allows us to write down closed analytic formulae for the gradient and the Hessian of the objective function. Moreover, both the objective function and its derivatives can be cast into vectorized form, making the proposed algorithm significantly faster than its closest relative in the literature. We compare methods in multiple ways, both analytically and numerically, using real iris images as well as idealizations of the iris for which the ground truth boundary is precisely known. The method proposed here is illustrated under challenging recording conditions and it is shown to be robust.

  19. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth-order WENO scheme or a second-order central differencing scheme, depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth-order WENO scheme. This selective usage of the fifth-order WENO and second-order central differencing schemes is confirmed to give more accurate results compared to those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which is similar in form to the conventional reinitialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.

  20. Joint iris boundary detection and fit: a real-time method for accurate pupil tracking

    PubMed Central

    Barbosa, Marconi; James, Andrew C.

    2014-01-01

    A range of applications in visual science rely on accurate tracking of the human pupil's movement and contraction in response to light. While the literature for independent contour detection and fitting of the iris-pupil boundary is vast, a joint approach, in which it is assumed that the pupil has a given geometric shape, has been largely overlooked. We present here a global method for simultaneously finding and fitting an elliptic or circular contour against a dark interior, which produces consistently accurate results even under non-ideal recording conditions, such as reflections near and over the boundary, droopy eye lids, or the sudden formation of tears. The specific form of the proposed optimization problem allows us to write down closed analytic formulae for the gradient and the Hessian of the objective function. Moreover, both the objective function and its derivatives can be cast into vectorized form, making the proposed algorithm significantly faster than its closest relative in the literature. We compare methods in multiple ways, both analytically and numerically, using real iris images as well as idealizations of the iris for which the ground truth boundary is precisely known. The method proposed here is illustrated under challenging recording conditions and it is shown to be robust. PMID:25136477

  1. An Improved Method for Accurate and Rapid Measurement of Flight Performance in Drosophila

    PubMed Central

    Babcock, Daniel T.; Ganetzky, Barry

    2014-01-01

    Drosophila has proven to be a useful model system for analysis of behavior, including flight. The initial flight tester involved dropping flies into an oil-coated graduated cylinder; landing height provided a measure of flight performance by assessing how far flies will fall before producing enough thrust to make contact with the wall of the cylinder. Here we describe an updated version of the flight tester with four major improvements. First, we added a "drop tube" to ensure that all flies enter the flight cylinder at a similar velocity between trials, eliminating variability between users. Second, we replaced the oil coating with removable plastic sheets coated in Tangle-Trap, an adhesive designed to capture live insects. Third, we use a longer cylinder to enable more accurate discrimination of flight ability. Fourth, we use a digital camera and imaging software to automate the scoring of flight performance. These improvements allow for the rapid, quantitative assessment of flight behavior, useful for large datasets and large-scale genetic screens. PMID:24561810

  2. Stiffly accurate Runge-Kutta methods for nonlinear evolution problems governed by a monotone operator

    NASA Astrophysics Data System (ADS)

    Emmrich, Etienne; Thalhammer, Mechthild

    2010-04-01

    Stiffly accurate implicit Runge-Kutta methods are studied for the time discretisation of nonlinear first-order evolution equations. The equation is supposed to be governed by a time-dependent hemicontinuous operator that is (up to a shift) monotone and coercive, and fulfills a certain growth condition. It is proven that the piecewise constant as well as the piecewise linear interpolant of the time-discrete solution converges towards the exact weak solution, provided the Runge-Kutta method is consistent and satisfies a stability criterion that implies algebraic stability; examples are the Radau IIA and Lobatto IIIC methods. The convergence analysis is also extended to problems involving a strongly continuous perturbation of the monotone main part.

  3. Improved method and apparatus for chromatographic quantitative analysis

    DOEpatents

    Fritz, J.S.; Gjerde, D.T.; Schmuckler, G.

    An improved apparatus and method are described for the quantitative analysis of a solution containing a plurality of anion species by ion exchange chromatography, which utilizes a single eluent and a single ion exchange bed that does not require periodic regeneration. The solution containing the anions is added to an anion exchange resin bed, a low-capacity macroreticular polystyrene-divinylbenzene resin containing quaternary ammonium functional groups, and is eluted therefrom with a dilute solution of a low electrical conductance organic acid salt. As each anion species is eluted from the bed, it is quantitatively sensed by conventional detection means such as a conductivity cell.

  4. Stable and accurate difference methods for seismic wave propagation on locally refined meshes

    NASA Astrophysics Data System (ADS)

    Petersson, A.; Rodgers, A.; Nilsson, S.; Sjogreen, B.; McCandless, K.

    2006-12-01

    To overcome some of the shortcomings of previous numerical methods for the elastic wave equation subject to stress-free boundary conditions, we are incorporating recent results from numerical analysis to develop a new finite difference method which discretizes the governing equations in second-order displacement formulation. The most challenging aspect of finite difference methods for time-dependent hyperbolic problems is clearly stability, and some previous methods are known to be unstable when the material has a compressional velocity that exceeds about three times the shear velocity. Since the material properties in seismic applications often vary rapidly on the computational grid, the most straightforward approach for guaranteeing stability is through an energy estimate. For a hyperbolic system in second-order formulation, the key to an energy estimate is a spatial discretization which is self-adjoint, i.e., corresponds to a symmetric or symmetrizable matrix. At the same time we want the scheme to be efficient and fully explicit, so only local operations are necessary to evolve the solution in the interior of the domain as well as on the free-surface boundary. Furthermore, we want the solution to be accurate when the data is smooth. Using these specifications, we developed an explicit second-order accurate discretization where stability is guaranteed through an energy estimate for all ratios Cp/Cs. An implementation of our finite difference method was used to simulate ground motions during the 1906 San Francisco earthquake on a uniform grid with grid sizes down to 100 meters, corresponding to over 4 billion grid points. These simulations were run on 1024 processors on one of the supercomputers at Lawrence Livermore National Lab. To reduce the computational requirements for these simulations, we are currently extending the numerical method to use a locally refined mesh where the mesh size approximately follows the velocity structure in the domain. Some

  5. A general radiochemical-color method for quantitation of immunoblots.

    PubMed

    Esmaeli-Azad, B; Feinstein, S C

    1991-12-01

    Quantitative interpretation of protein immunoblotting procedures is hampered by a variety of technical liabilities inherent in the use of photographic and densitometric methods. In this paper, we present a novel, simple, and generally applicable alternative procedure to acquire quantitative data from immunoblots. Our strategy employs both the standard alkaline phosphatase color reaction and radiolabelled Protein A. The color reaction is used to localize the polypeptide of interest after transfer to a solid support. The colored bands are then excised and the radioactivity in the colocalized Protein A is quantitated in a gamma counter. In addition to avoiding the problems associated with photographic and densitometric procedures, our assay also overcomes common problems associated with variable gel lane width and individual band distortion. The resulting data is linear over a range of at least 50-fold (10-500 ng of specific protein, for the example used in this study) and is highly reproducible.

  6. Novel method for accurate g measurements in electron-spin resonance

    NASA Astrophysics Data System (ADS)

    Stesmans, A.; Van Gorp, G.

    1989-09-01

    In high-accuracy work, electron-spin-resonance (ESR) g values are generally determined by calibrating against the accurately known proton nuclear magnetic resonance (NMR). For that method, based on leakage of microwave energy out of the ESR cavity, a convenient technique is presented to obtain accurate g values without needing conscientious precalibration procedures or cumbersome constructions. As main advantages, the method allows easy monitoring of the positioning of the ESR and NMR samples while they are mounted as close as physically realizable at all times during their simultaneous resonances. Relative accuracies on g of ≈2×10⁻⁶ are easily achieved for ESR signals of peak-to-peak width ΔB_pp ≲ 0.3 G. The method has been applied to calibrate the g value of conduction electrons of small Li particles embedded in LiF, a frequently used g marker, resulting in g(LiF:Li) = 2.002293 ± 0.000002.
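
    The arithmetic underlying NMR-calibrated g measurements is standard: with the ESR resonance at frequency ν_e and the proton NMR resonance at ν_p in the same field B,

    \[ h\nu_e = g\,\mu_B B, \qquad h\nu_p = g_p\,\mu_N B \;\Longrightarrow\; g = g_p\,\frac{m_e}{m_p}\,\frac{\nu_e}{\nu_p}, \]

    so the field cancels and g follows from a frequency ratio and accurately known constants (g_p ≈ 5.5857, with μ_N/μ_B = m_e/m_p). The paper's contribution is the arrangement that keeps both samples co-located during simultaneous resonance, not this relation.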

  7. Indirect viscosimetric method is less accurate than ektacytometry for the measurement of red blood cell deformability.

    PubMed

    Vent-Schmidt, Jens; Waltz, Xavier; Pichon, Aurélien; Hardy-Dessources, Marie-Dominique; Romana, Marc; Connes, Philippe

    2015-01-01

    The aim of this study was to test the accuracy of the viscosimetric method for estimating red blood cell (RBC) deformability properties. Thirty-three subjects were enrolled in this study: 6 healthy subjects (AA), 11 patients with sickle cell-hemoglobin C disease (SC) and 16 patients with sickle cell anemia (SS). Two methods were used to assess RBC deformability: (1) an indirect viscosimetric method and (2) ektacytometry. The indirect viscosimetric method was based on the Dintenfass equation, in which blood viscosity, plasma viscosity and hematocrit are measured and used to calculate an index of RBC rigidity (Tk index). The RBC deformability/rigidity of the three groups was compared using the two methods. The Tk index was not different between SS and SC patients, and the two groups had higher values than the AA group. When ektacytometry was used, RBC deformability was lower in the SS and SC groups than in the AA group, and the SS and SC patients differed from one another. Although the two measures of RBC deformability were correlated, the association was not very strong. Bland-Altman analysis demonstrated a bias of 3.25, suggesting a slight difference between the two methods. In addition, the limit of agreement represented 28% (>15%) of the mean RBC deformability values, showing no interchangeability between the two methods. In conclusion, measuring RBC deformability by indirect viscosimetry is less accurate than by ektacytometry, which is considered the gold standard.
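
    The Tk calculation itself is a one-line formula. A sketch using the commonly cited form of the Dintenfass equation (the paper's exact shear-rate and measurement conditions are not reproduced here, and the example numbers are illustrative):

```python
def tk_index(blood_viscosity, plasma_viscosity, hematocrit):
    """Dintenfass index of RBC rigidity from blood viscosity, plasma
    viscosity and hematocrit (volume fraction, 0-1):
        Tk = (eta_r**0.4 - 1) / (eta_r**0.4 * Hct),
    where eta_r is the blood-to-plasma viscosity ratio."""
    eta_r = blood_viscosity / plasma_viscosity
    return (eta_r ** 0.4 - 1.0) / (eta_r ** 0.4 * hematocrit)

# Example: blood 4.5 mPa.s, plasma 1.3 mPa.s, Hct 40% -> Tk ~ 0.98
print(tk_index(4.5, 1.3, 0.40))
```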

  8. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods.

    PubMed

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-04-07

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest.
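
    To make the figure of merit concrete, the following minimal sketch assumes the RWT-style linear stochastic model, measured = a + b·true + noise with standard deviation sigma, and ranks methods by NSR = sigma/|b|. The slope and noise values are illustrative; estimating them without ground truth is precisely what the NGS technique provides.

        import numpy as np

        # Illustrative (slope b, noise sd sigma) pairs as if estimated by the
        # NGS procedure for two hypothetical reconstruction methods.
        methods = {
            "recon_A": (0.92, 0.10),
            "recon_B": (1.05, 0.18),
        }

        def noise_to_slope_ratio(b, sigma):
            """Figure of merit: lower NSR means better precision."""
            return sigma / abs(b)

        ranking = sorted(methods, key=lambda m: noise_to_slope_ratio(*methods[m]))
        print(ranking)  # methods ordered from most to least precise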

  9. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Caffo, Brian; Frey, Eric C.

    2016-04-01

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest.

  10. A Novel Targeted Learning Method for Quantitative Trait Loci Mapping

    PubMed Central

    Wang, Hui; Zhang, Zhongyang; Rose, Sherri; van der Laan, Mark

    2014-01-01

    We present a novel semiparametric method for quantitative trait loci (QTL) mapping in experimental crosses. Conventional genetic mapping methods typically assume parametric models with Gaussian errors and obtain parameter estimates through maximum-likelihood estimation. In contrast with univariate regression and interval-mapping methods, our model requires fewer assumptions and also accommodates various machine-learning algorithms. Estimation is performed with targeted maximum-likelihood learning methods. We demonstrate our semiparametric targeted learning approach in a simulation study and a well-studied barley data set. PMID:25258376

  11. A novel targeted learning method for quantitative trait loci mapping.

    PubMed

    Wang, Hui; Zhang, Zhongyang; Rose, Sherri; van der Laan, Mark

    2014-12-01

    We present a novel semiparametric method for quantitative trait loci (QTL) mapping in experimental crosses. Conventional genetic mapping methods typically assume parametric models with Gaussian errors and obtain parameter estimates through maximum-likelihood estimation. In contrast with univariate regression and interval-mapping methods, our model requires fewer assumptions and also accommodates various machine-learning algorithms. Estimation is performed with targeted maximum-likelihood learning methods. We demonstrate our semiparametric targeted learning approach in a simulation study and a well-studied barley data set.

  12. An adaptive, formally second order accurate version of the immersed boundary method

    NASA Astrophysics Data System (ADS)

    Griffith, Boyce E.; Hornung, Richard D.; McQueen, David M.; Peskin, Charles S.

    2007-04-01

    Like many problems in biofluid mechanics, cardiac mechanics can be modeled as the dynamic interaction of a viscous incompressible fluid (the blood) and a (visco-)elastic structure (the muscular walls and the valves of the heart). The immersed boundary method is a mathematical formulation and numerical approach to such problems that was originally introduced to study blood flow through heart valves, and extensions of this work have yielded a three-dimensional model of the heart and great vessels. In the present work, we introduce a new adaptive version of the immersed boundary method. This adaptive scheme employs the same hierarchical structured grid approach (but a different numerical scheme) as the two-dimensional adaptive immersed boundary method of Roma et al. [A multilevel self adaptive version of the immersed boundary method, Ph.D. Thesis, Courant Institute of Mathematical Sciences, New York University, 1996; An adaptive version of the immersed boundary method, J. Comput. Phys. 153 (2) (1999) 509-534] and is based on a formally second order accurate (i.e., second order accurate for problems with sufficiently smooth solutions) version of the immersed boundary method that we have recently described [B.E. Griffith, C.S. Peskin, On the order of accuracy of the immersed boundary method: higher order convergence rates for sufficiently smooth problems, J. Comput. Phys. 208 (1) (2005) 75-105]. Actual second order convergence rates are obtained for both the uniform and adaptive methods by considering the interaction of a viscous incompressible flow and an anisotropic incompressible viscoelastic shell. We also present initial results from the application of this methodology to the three-dimensional simulation of blood flow in the heart and great vessels. The results obtained by the adaptive method show good qualitative agreement with simulation results obtained by earlier non-adaptive versions of the method, but the flow in the vicinity of the model heart valves
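
    The core ingredients of any immersed boundary implementation are a regularized delta function and the spreading of Lagrangian forces onto the Eulerian grid. The 1D sketch below uses Peskin's standard 4-point delta function; it is a generic illustration, not the adaptive scheme of the paper.

        import numpy as np

        def delta4(r):
            """Peskin's 4-point discrete delta (support |r| < 2, grid units)."""
            r = abs(r)
            if r < 1.0:
                return (3.0 - 2.0 * r + np.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
            if r < 2.0:
                return (5.0 - 2.0 * r - np.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
            return 0.0

        def spread_force(x_lag, f_lag, n, h):
            """Spread point forces at Lagrangian positions onto n periodic cells."""
            f_grid = np.zeros(n)
            for x, f in zip(x_lag, f_lag):
                j0 = int(np.floor(x / h))
                for j in range(j0 - 2, j0 + 3):
                    f_grid[j % n] += f * delta4((x - j * h) / h) / h
            return f_grid

        h = 1.0 / 64
        f = spread_force([0.503], [1.0], n=64, h=h)
        print(f.sum() * h)  # ~1.0: the discrete delta conserves total force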

  13. Quantitative polymerase chain reaction analysis of DNA from noninvasive samples for accurate microsatellite genotyping of wild chimpanzees (Pan troglodytes verus).

    PubMed

    Morin, P A; Chambers, K E; Boesch, C; Vigilant, L

    2001-07-01

    Noninvasive samples are useful for molecular genetic analyses of wild animal populations. However, the low DNA content of such samples makes DNA amplification difficult, and there is the potential for erroneous results when one of two alleles at heterozygous microsatellite loci fails to be amplified. In this study we describe an assay designed to measure the amount of amplifiable nuclear DNA in low DNA concentration extracts from noninvasive samples. We describe the range of DNA amounts obtained from chimpanzee faeces and shed hair samples and formulate a new efficient approach for accurate microsatellite genotyping. Prescreening of extracts for DNA quantity is recommended for sorting of samples for likely success and reliability. Extensive repetition of results remains necessary for microsatellite amplifications beginning from low starting amounts of DNA, but can be reduced for extracts with higher DNA content.

  14. Optical Coherence Tomography as a Rapid, Accurate, Noncontact Method of Visualizing the Palisades of Vogt

    PubMed Central

    Gupta, Divya; Kagemann, Larry; Schuman, Joel S.; SundarRaj, Nirmala

    2012-01-01

    Purpose. This study explored the efficacy of optical coherence tomography (OCT) as a high-resolution, noncontact method for imaging the palisades of Vogt by correlating OCT and confocal microscopy images. Methods. Human limbal rims were acquired and imaged with OCT and confocal microscopy. The area of the epithelial basement membrane in each of these sets was digitally reconstructed, and the models were compared. Results. OCT identified the palisades within the limbus and exhibited excellent structural correlation with immunostained tissue imaged by confocal microscopy. Conclusions. OCT successfully identified the limbal palisades of Vogt that constitute the corneal epithelial stem cell niche. These findings offer the exciting potential to characterize the architecture of the palisades in vivo, to harvest stem cells for transplantation more accurately, to track palisade structure for better diagnosis, follow-up and staging of treatment, and to assess and intervene in the progression of stem cell depletion by monitoring changes in the structure of the palisades. PMID:22266521

  15. A Fully Implicit Time Accurate Method for Hypersonic Combustion: Application to Shock-induced Combustion Instability

    NASA Technical Reports Server (NTRS)

    Yungster, Shaye; Radhakrishnan, Krishnan

    1994-01-01

    A new fully implicit, time accurate algorithm suitable for chemically reacting, viscous flows in the transonic-to-hypersonic regime is described. The method is based on a class of Total Variation Diminishing (TVD) schemes and uses successive Gauss-Seidel relaxation sweeps. The inversion of large matrices is avoided by partitioning the system into reacting and nonreacting parts, while still maintaining a fully coupled interaction. As a result, the matrices that have to be inverted are of the same size as those obtained with the commonly used point implicit methods. In this paper we illustrate the applicability of the new algorithm to hypervelocity unsteady combustion applications. We present a series of numerical simulations of the periodic combustion instabilities observed in ballistic-range experiments of blunt projectiles flying at subdetonative speeds through hydrogen-air mixtures. The computed frequencies of oscillation are in excellent agreement with experimental data.
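
    For readers unfamiliar with the relaxation strategy, the sketch below shows a generic Gauss-Seidel sweep on a small diagonally dominant system; the paper applies such sweeps to the coupled implicit flow/chemistry equations, which this toy example does not reproduce.

        import numpy as np

        def gauss_seidel(A, b, sweeps=50):
            """Solve Ax = b by successive relaxation sweeps."""
            n = len(b)
            x = np.zeros(n)
            for _ in range(sweeps):
                for i in range(n):  # freshly updated values are used within a sweep
                    s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                    x[i] = (b[i] - s) / A[i, i]
            return x

        A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
        b = np.array([2.0, 4.0, 2.0])
        print(gauss_seidel(A, b))  # converges to the exact solution of Ax = b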

  16. Odontoma-associated tooth impaction: accurate diagnosis with simple methods? Case report and literature review.

    PubMed

    Troeltzsch, Matthias; Liedtke, Jan; Troeltzsch, Volker; Frankenberger, Roland; Steiner, Timm; Troeltzsch, Markus

    2012-10-01

    Odontomas account for the largest fraction of odontogenic tumors and are frequent causes of tooth impaction. A case of a 13-year-old female patient with an odontoma-associated impaction of a mandibular molar is presented with a review of the literature. Preoperative planning involved simple and convenient methods such as clinical examination and panoramic radiography, which led to a diagnosis of complex odontoma and warranted surgical removal. The clinical diagnosis was confirmed histologically. Multidisciplinary consultation may enable the clinician to find the accurate diagnosis and appropriate therapy based on the clinical and radiographic appearance. Modern radiologic methods such as cone-beam computed tomography or computed tomography should be applied only for special cases, to decrease radiation.

  17. Exact kinetic energy enables accurate evaluation of weak interactions by the FDE-vdW method

    SciTech Connect

    Sinha, Debalina; Pavanello, Michele

    2015-08-28

    The correlation energy of interaction is an elusive and sought-after interaction between molecular systems. By partitioning the response function of the system into subsystem contributions, the Frozen Density Embedding (FDE)-vdW method provides a computationally amenable nonlocal correlation functional based on the adiabatic connection fluctuation dissipation theorem applied to subsystem density functional theory. In reproducing potential energy surfaces of weakly interacting dimers, we show that FDE-vdW, either employing semilocal or exact nonadditive kinetic energy functionals, is in quantitative agreement with high-accuracy coupled cluster calculations (overall mean unsigned error of 0.5 kcal/mol). When employing the exact kinetic energy (which we term the Kohn-Sham (KS)-vdW method), the binding energies are generally closer to the benchmark, and the energy surfaces are also smoother.
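
    The starting point is the standard adiabatic-connection fluctuation-dissipation expression for the correlation energy (textbook form, not quoted from the paper); the FDE-vdW contribution lies in partitioning the response function χ into subsystem pieces:

        E_c = -\frac{1}{2\pi}\int_0^{\infty}\!\mathrm{d}\omega\int_0^{1}\!\mathrm{d}\lambda\;
        \mathrm{Tr}\!\left\{\left[\chi_\lambda(i\omega)-\chi_0(i\omega)\right]v\right\},

    where v is the Coulomb interaction and λ switches the interaction strength along the adiabatic connection.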

  18. Exact kinetic energy enables accurate evaluation of weak interactions by the FDE-vdW method.

    PubMed

    Sinha, Debalina; Pavanello, Michele

    2015-08-28

    The correlation energy of interaction is an elusive and sought-after interaction between molecular systems. By partitioning the response function of the system into subsystem contributions, the Frozen Density Embedding (FDE)-vdW method provides a computationally amenable nonlocal correlation functional based on the adiabatic connection fluctuation dissipation theorem applied to subsystem density functional theory. In reproducing potential energy surfaces of weakly interacting dimers, we show that FDE-vdW, either employing semilocal or exact nonadditive kinetic energy functionals, is in quantitative agreement with high-accuracy coupled cluster calculations (overall mean unsigned error of 0.5 kcal/mol). When employing the exact kinetic energy (which we term the Kohn-Sham (KS)-vdW method), the binding energies are generally closer to the benchmark, and the energy surfaces are also smoother.

  19. Objective evaluation of reconstruction methods for quantitative SPECT imaging in the absence of ground truth.

    PubMed

    Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C

    2015-04-13

    Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.

  20. Objective evaluation of reconstruction methods for quantitative SPECT imaging in the absence of ground truth

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Song, Na; Caffo, Brian; Frey, Eric C.

    2015-03-01

    Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.

  1. A quantitative SMRT cell sequencing method for ribosomal amplicons.

    PubMed

    Jones, Bethan M; Kustka, Adam B

    2017-04-01

    Advances in sequencing technologies continue to provide unprecedented opportunities to characterize microbial communities. For example, the Pacific Biosciences Single Molecule Real-Time (SMRT) platform has emerged as a unique approach harnessing DNA polymerase activity to sequence template molecules, enabling long reads at low costs. With the aim to simultaneously classify and enumerate in situ microbial populations, we developed a quantitative SMRT (qSMRT) approach that involves the addition of exogenous standards to quantify ribosomal amplicons derived from environmental samples. The V7-9 regions of 18S SSU rDNA were targeted and quantified from protistan community samples collected in the Ross Sea during the austral summer of 2011. We used three standards of different lengths and optimized conditions to obtain accurate quantitative retrieval across the range of expected amplicon sizes, a necessary criterion for analyzing taxonomically diverse 18S rDNA molecules from natural environments. The ability to concurrently identify and quantify microorganisms in their natural environment makes qSMRT a powerful, rapid and cost-effective approach for defining ecosystem diversity and function.
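
    The arithmetic of internal-standard calibration is simple; the sketch below is a minimal illustration (not the published pipeline) in which reads are converted to absolute copy numbers via the known number of spiked-in standard copies. All values and OTU names are invented.

        # Known copies of the exogenous standard added to the sample, and the
        # reads recovered for it after sequencing (illustrative numbers).
        spike_copies_added = 1.0e6
        spike_reads = 2500

        otu_reads = {"otu_1": 8000, "otu_2": 450}

        copies_per_read = spike_copies_added / spike_reads
        otu_copies = {otu: r * copies_per_read for otu, r in otu_reads.items()}
        print(otu_copies)  # absolute rDNA copy estimates for this sample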

  2. A highly accurate method for the determination of mass and center of mass of a spacecraft

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Trubert, M. R.; Egwuatu, A.

    1978-01-01

    An extremely accurate method for the measurement of mass and the lateral center of mass of a spacecraft has been developed. The method was needed for the Voyager spacecraft mission requirement which limited the uncertainty in the knowledge of lateral center of mass of the spacecraft system weighing 750 kg to be less than 1.0 mm (0.04 in.). The method consists of using three load cells symmetrically located at 120 deg apart on a turntable with respect to the vertical axis of the spacecraft and making six measurements for each load cell. These six measurements are taken by cyclic rotations of the load cell turntable and of the spacecraft, about the vertical axis of the measurement fixture. This method eliminates all alignment, leveling, and load cell calibration errors for the lateral center of mass determination, and permits a statistical best fit of the measurement data. An associated data reduction computer program called MASCM has been written to implement this method and has been used for the Voyager spacecraft.
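
    The basic geometry reduces to a force-weighted average over the three load-cell positions. The sketch below illustrates that step only, with made-up readings; the paper's cyclic rotations of the turntable and spacecraft, which cancel alignment, leveling, and calibration errors, are omitted.

        import numpy as np

        # Three load cells 120 degrees apart on a circle of radius R (meters).
        R = 0.5
        angles = np.deg2rad([0.0, 120.0, 240.0])
        pos = R * np.column_stack([np.cos(angles), np.sin(angles)])  # (3, 2) x, y

        F = np.array([2452.0, 2455.5, 2448.3])  # per-cell loads in newtons

        cm = (F[:, None] * pos).sum(axis=0) / F.sum()  # lateral center of mass
        print(F.sum() / 9.81, cm)  # total mass (kg) and lateral CM offset (m)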

  3. Efficient and accurate numerical methods for the Klein-Gordon-Schroedinger equations

    SciTech Connect

    Bao, Weizhu . E-mail: bao@math.nus.edu.sg; Yang, Li . E-mail: yangli@nus.edu.sg

    2007-08-10

    In this paper, we present efficient, unconditionally stable and accurate numerical methods for approximations of the Klein-Gordon-Schroedinger (KGS) equations with/without damping terms. The key features of our methods are based on: (i) the application of a time-splitting spectral discretization for a Schroedinger-type equation in KGS; (ii) the utilization of Fourier pseudospectral discretization for spatial derivatives in the Klein-Gordon equation in KGS; (iii) the adoption of solving the ordinary differential equations (ODEs) in phase space analytically under appropriately chosen transmission conditions between different time intervals, or applying Crank-Nicolson/leap-frog for linear/nonlinear terms for time derivatives. The numerical methods are either explicit, or implicit but explicitly solvable; they are unconditionally stable, of spectral accuracy in space and second-order accuracy in time. Moreover, they are time reversible and time transverse invariant when there are no damping terms in KGS, conserve (or keep the same decay rate of) the wave energy as that in KGS without (or with a linear) damping term, keep the same dynamics of the mean value of the meson field, and give exact results for the plane-wave solution. Extensive numerical tests are presented to confirm the above properties of our numerical methods for KGS. Finally, the methods are applied to study solitary-wave collisions in one dimension (1D), as well as dynamics of a 2D problem in KGS.
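
    As an illustration of ingredient (i), the sketch below performs one Strang time-splitting spectral step for a bare 1D Schroedinger-type equation with periodic boundaries; the Klein-Gordon coupling and the transmission conditions of the actual method are not reproduced.

        import numpy as np

        def split_step(u, V, dt, L):
            """One Strang step for i u_t = -(1/2) u_xx + V u on a periodic grid."""
            n = u.size
            k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # spectral wavenumbers
            u = u * np.exp(-0.5j * dt * V)                 # half step: potential
            u = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(u))  # kinetic
            return u * np.exp(-0.5j * dt * V)              # half step: potential

        L = 2 * np.pi
        x = np.linspace(0, L, 128, endpoint=False)
        u = np.exp(1j * x)  # plane-wave initial data, handled exactly
        u = split_step(u, V=np.zeros_like(x), dt=0.01, L=L)
        print(np.linalg.norm(u))  # equals the initial norm: the step is unitary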

  4. An accurate and efficient bayesian method for automatic segmentation of brain MRI.

    PubMed

    Marroquin, J L; Vemuri, B C; Botello, S; Calderon, F; Fernandez-Bouzas, A

    2002-08-01

    Automatic three-dimensional (3-D) segmentation of the brain from magnetic resonance (MR) scans is a challenging problem that has received an enormous amount of attention lately. Of the techniques reported in the literature, very few are fully automatic. In this paper, we present an efficient and accurate, fully automatic 3-D segmentation procedure for brain MR scans. It has several salient features, namely: 1) Instead of a single multiplicative bias field that affects all tissue intensities, separate parametric smooth models are used for the intensity of each class. 2) A brain atlas is used in conjunction with a robust registration procedure to find a nonrigid transformation that maps the standard brain to the specimen to be segmented. This transformation is then used to: segment the brain from nonbrain tissue; compute prior probabilities for each class at each voxel location; and find an appropriate automatic initialization. 3) Finally, a novel algorithm is presented, a variant of the expectation-maximization procedure, that incorporates a fast and accurate way to find optimal segmentations, given the intensity models along with the spatial coherence assumption. Experimental results with both synthetic and real data are included, as well as comparisons of the performance of our algorithm with that of other published methods.
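
    The sketch below shows the bare expectation-maximization skeleton on which such a segmenter rests: a plain 1D Gaussian-mixture EM over voxel intensities. The paper's algorithm adds per-class smooth intensity models, an atlas-derived prior, and spatial coherence, none of which appear here; the intensities are synthetic.

        import numpy as np

        def gmm_em(x, mu, sigma, w, iters=50):
            """Plain EM for a 1D Gaussian mixture over intensities x."""
            for _ in range(iters):
                # E-step: per-class responsibilities (the 1/sqrt(2*pi) factor
                # is common to all classes and cancels in the normalization).
                lik = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
                resp = lik / lik.sum(axis=1, keepdims=True)
                # M-step: update weights, means and standard deviations.
                nk = resp.sum(axis=0)
                w = nk / x.size
                mu = (resp * x[:, None]).sum(axis=0) / nk
                sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
            return mu, sigma, w

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(60, 8, 3000), rng.normal(110, 10, 2000)])
        print(gmm_em(x, mu=np.array([50.0, 120.0]),
                     sigma=np.array([15.0, 15.0]), w=np.array([0.5, 0.5]))[0])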

  5. Novel method for ANA quantitation using IIF imaging system.

    PubMed

    Peng, Xiaodong; Tang, Jiangtao; Wu, Yongkang; Yang, Bin; Hu, Jing

    2014-02-01

    A variety of antinuclear antibodies (ANAs) are found in the serum of patients with autoimmune diseases. The detection of abnormal ANA titers is a critical criterion for diagnosis of systemic lupus erythematosus (SLE) and other connective tissue diseases. Indirect immunofluorescence assay (IIF) on HEp-2 cells is the gold standard method to determine the presence of ANA and therefore provides information about the localization of autoantigens that is useful for diagnosis. However, its utility is limited for prognosis and monitoring of disease activity due to the lack of standardization in performing the technique, the subjectivity in interpreting the results and the fact that it is only semi-quantitative. On the other hand, ELISA can quantitate ANA but cannot provide further information about the localization of the autoantigens. It would be ideal to integrate the quantitative and qualitative methods. To address this issue, this study was conducted to quantitatively detect ANAs using an IIF imaging analysis system. Serum samples from patients, both ANA-positive (including speckled, homogeneous, nuclear mixture and cytoplasmic mixture patterns) and ANA-negative, were assayed for ANA titers by classical IIF and analyzed by an imaging system; the image of each sample was acquired by the digital imaging system and the green fluorescence intensity was quantified with the Image-Pro Plus software. A good correlation was found between the two methods, and the correlation coefficients (R²) of the various ANA patterns were 0.942 (speckled), 0.942 (homogeneous), 0.923 (nuclear mixture) and 0.760 (cytoplasmic mixture), respectively. The fluorescence density was linearly correlated with the log of the ANA titer in the various ANA patterns (R²>0.95). Moreover, the novel ANA quantitation method showed good reproducibility (F=0.091, p>0.05), with mean±SD and CV% of the positive and negative quality controls equal to 126.4±9.6 and 7.6%, 10.4±1.25 and 12

  6. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    NASA Astrophysics Data System (ADS)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

    Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and there hardly exists an omnipotent sensor that can handle a complex inspection task in an accurate and effective way. The prevailing solution is to integrate multiple sensors and take advantage of their strengths. For obtaining a holistic 3D profile, the data from different sensors should be registered into a coherent coordinate system. However, some complex-shaped objects such as blades have thin-wall features, for which the ICP registration method becomes unstable. Therefore, it is very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrated from different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position at the object's surface. In order to simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation is used to roughly align the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the Generalized Gauss-Markoff model is then used to estimate the optimal transformation parameters. Experiments on the measurement of a blade, where several sampled patches are merged into one point cloud, verify the performance of the proposed method.
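
    Once correspondences on the calibration artifact are established, the extrinsic transform between two sensors reduces to a rigid-registration least-squares problem. The sketch below uses the classic SVD (Kabsch) solution as a stand-in; the paper's pipeline instead refines the parameters with a Generalized Gauss-Markoff estimation, which is not reproduced here.

        import numpy as np

        def rigid_transform(P, Q):
            """Find R, t minimizing ||R @ P_i + t - Q_i||^2; P, Q are (n, 3)."""
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cp).T @ (Q - cq)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T  # proper rotation (det = +1)
            return R, cq - R @ cp

        P = np.random.default_rng(1).normal(size=(10, 3))
        R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
        Q = P @ R_true.T + np.array([0.1, -0.2, 0.3])
        R, t = rigid_transform(P, Q)
        print(np.allclose(R, R_true), np.round(t, 3))  # recovers the transform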

  7. Some selected quantitative methods of thermal image analysis in Matlab.

    PubMed

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images, and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the area of the skin of a human foot and face. The full source code of the developed application is provided as an attachment. (Graphical abstract: the main window of the program during dynamic analysis of the foot thermal image.)

  8. Informatics Methods to Enable Sharing of Quantitative Imaging Research Data

    PubMed Central

    Levy, Mia A.; Freymann, John B.; Kirby, Justin S.; Fedorov, Andriy; Fennessy, Fiona M.; Eschrich, Steven A.; Berglund, Anders E.; Fenstermacher, David A.; Tan, Yongqiang; Guo, Xiaotao; Casavant, Thomas L.; Brown, Bartley J.; Braun, Terry A.; Dekker, Andre; Roelofs, Erik; Mountz, James M.; Boada, Fernando; Laymon, Charles; Oborski, Matt; Rubin, Daniel L

    2012-01-01

    Introduction: The National Cancer Institute (NCI) Quantitative Imaging Network (QIN) is a collaborative research network whose goal is to share data, algorithms and research tools to accelerate quantitative imaging research. A challenge is the variability in tools and analysis platforms used in quantitative imaging. Our goal was to understand the extent of this variation and to develop an approach to enable sharing of data and to promote reuse of quantitative imaging data in the community. Methods: We performed a survey of the tools currently in use by the QIN member sites for representation and storage of their QIN research data, including images, image meta-data and clinical data. We identified existing systems and standards for data sharing and their gaps for the QIN use case. We then proposed a system architecture to enable data sharing and collaborative experimentation within the QIN. Results: There are a variety of tools currently used by each QIN institution. We developed a general information system architecture to support the QIN goals. We also describe the remaining architecture gaps we are developing to enable members to share research images and image meta-data across the network. Conclusions: As a research network, the QIN will stimulate quantitative imaging research by pooling data, algorithms and research tools. However, there are gaps in the current functional requirements that will need to be met by future informatics development. Special attention must be given to the technical requirements needed to translate these methods into the clinical research workflow to enable validation and qualification of these novel imaging biomarkers. PMID:22770688

  9. Application of an Effective Statistical Technique for an Accurate and Powerful Mining of Quantitative Trait Loci for Rice Aroma Trait

    PubMed Central

    Golestan Hashemi, Farahnaz Sadat; Rafii, Mohd Y.; Ismail, Mohd Razi; Mohamed, Mahmud Tengku Muda; Rahim, Harun A.; Latif, Mohammad Abdul; Aslani, Farzad

    2015-01-01

    When a phenotype of interest is associated with an external/internal covariate, including the covariate in quantitative trait loci (QTL) analyses can diminish residual variation and subsequently enhance the ability to detect QTL. In the in vitro synthesis of 2-acetyl-1-pyrroline (2AP), the main fragrance compound in rice, thermal processing during the Maillard-type reaction between proline and reducing carbohydrates produces a roasted, popcorn-like aroma. Hence, for the first time, we included the amino acid proline, an important precursor of 2AP, as a covariate in our QTL mapping analyses to precisely explore the genetic factors affecting natural variation in rice scent. Consequently, two QTLs were traced on chromosomes 4 and 8. They explained from 20% to 49% of the total phenotypic variance in aroma. Additionally, by saturating the interval harboring the major QTL using gene-based primers, a putative allele of fgr (the major genetic determinant of fragrance) was mapped in the QTL on the 8th chromosome in the interval RM223-SCU015RM (1.63 cM). These loci supported previous studies of different accessions. Such QTLs can be widely used by breeders in crop improvement programs and for further fine mapping. Moreover, no previous findings were found on the simultaneous assessment of the relationship among 2AP, proline and fragrance QTLs. Therefore, our findings can help further our understanding of the metabolomic and genetic basis of 2AP biosynthesis in aromatic rice. PMID:26061689

  10. [Quantitative method of representative contaminants in groundwater pollution risk assessment].

    PubMed

    Wang, Jun-Jie; He, Jiang-Tao; Lu, Yan; Liu, Li-Ya; Zhang, Xiao-Liang

    2012-03-01

    To address the lack of an effective quantitative system for stress vulnerability assessment in groundwater pollution risk assessment, a new system based on representative contaminants and their corresponding emission quantities was proposed through the analysis of groundwater pollution sources. A quantitative method for the representative contaminants in this system was established by analyzing the three properties of the representative contaminants and determining the research emphasis using the analytic hierarchy process. The method was applied to the assessment of groundwater pollution risk in Beijing. The results demonstrated that the hazards of the representative contaminants depended greatly on the research emphasis chosen. There were also differences between the ranking of the three representative contaminants' hazards and their corresponding properties. This suggests that the subjective choice of research emphasis has a decisive impact on the calculated results. In addition, normalizing the three properties by rank order and unifying the quantified property results would scale the relative characteristics of the different representative contaminants up or down.
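
    For readers unfamiliar with the analytic hierarchy process step, the sketch below derives priority weights for three properties as the principal eigenvector of a pairwise comparison matrix, together with a consistency index; the judgment matrix is illustrative, not taken from the paper.

        import numpy as np

        # Illustrative pairwise importance judgments among three properties.
        A = np.array([[1.0,   3.0, 5.0],
                      [1/3.0, 1.0, 2.0],
                      [1/5.0, 1/2.0, 1.0]])

        vals, vecs = np.linalg.eig(A)
        i = np.argmax(vals.real)
        w = np.abs(vecs[:, i].real)
        w /= w.sum()                       # normalized priority weights

        ci = (vals.real[i] - 3) / (3 - 1)  # consistency index for a 3x3 matrix
        print(np.round(w, 3), round(ci, 4))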

  11. A fast experimental beam hardening correction method for accurate bone mineral measurements in 3D μCT imaging system.

    PubMed

    Koubar, Khodor; Bekaert, Virgile; Brasse, David; Laquerriere, Patrice

    2015-06-01

    Bone mineral density plays an important role in the determination of bone strength and fracture risks. Consequently, it is very important to obtain accurate bone mineral density measurements. The microcomputerized tomography system provides 3D information about the architectural properties of bone. The accuracy of quantitative analysis is decreased by the presence of artefacts in the reconstructed images, mainly beam hardening artefacts (such as cupping artefacts). In this paper, we introduce a new beam hardening correction method based on a postreconstruction technique performed with off-line water and bone linearization curves calculated experimentally, aiming to take into account the nonhomogeneity of the scanned animal. In order to evaluate the mass correction rate, a calibration line was established to convert the reconstructed linear attenuation coefficients into bone masses. The presented correction method was then applied to a multimaterial cylindrical phantom and to mouse skeleton images. Mass correction rates of up to 18% between uncorrected and corrected images were obtained, and a remarkable improvement in the calculated mass of a mouse femur was observed. The results were also compared with those obtained using the simple water linearization technique, which does not take into account the nonhomogeneity of the object.
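
    The sketch below illustrates the general shape of a linearization-curve correction with synthetic numbers (not the paper's measured water/bone curves): a polynomial mapping from beam-hardened projection values to the ideal monochromatic ones is fitted on known thicknesses and then applied to data.

        import numpy as np

        # Synthetic calibration data: water thicknesses and the beam-hardened
        # projection values measured through them, vs. the ideal linear response.
        thickness = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])            # cm
        p_meas = np.array([0.0, 0.095, 0.185, 0.350, 0.500, 0.635])     # hardened
        p_mono = 0.2 * thickness            # ideal response for mu = 0.2 / cm

        coeffs = np.polyfit(p_meas, p_mono, deg=3)  # water linearization curve

        def correct(p):
            """Map measured projection values onto the linearized scale."""
            return np.polyval(coeffs, p)

        print(np.round(correct(p_meas) - p_mono, 4))  # residuals after correction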

  12. Accurate and efficient quantum chemistry calculations for noncovalent interactions in many-body systems: the XSAPT family of methods.

    PubMed

    Lao, Ka Un; Herbert, John M

    2015-01-15

    We present an overview of "XSAPT", a family of quantum chemistry methods for noncovalent interactions. These methods combine an efficient, iterative, monomer-based approach to computing many-body polarization interactions with a two-body version of symmetry-adapted perturbation theory (SAPT). The result is an efficient method for computing accurate intermolecular interaction energies in large noncovalent assemblies such as molecular and ionic clusters, molecular crystals, clathrates, or protein-ligand complexes. As in traditional SAPT, the XSAPT energy is decomposable into physically meaningful components. Dispersion interactions are problematic in traditional low-order SAPT, and two new approaches are introduced here in an attempt to improve this situation: (1) third-generation empirical atom-atom dispersion potentials, and (2) an empirically scaled version of second-order SAPT dispersion. Comparison to high-level ab initio benchmarks for dimers, water clusters, halide-water clusters, a methane clathrate hydrate, and a DNA intercalation complex illustrates both the accuracy of XSAPT-based methods and their limitations. The computational cost of XSAPT scales as O(N³)-O(N⁵) with respect to monomer size N, depending upon the particular version that is employed, but the accuracy is typically superior to alternative ab initio methods with similar scaling. Moreover, the monomer-based nature of XSAPT calculations makes them trivially parallelizable, such that wall times scale linearly with respect to the number of monomer units. XSAPT-based methods thus open the door to both qualitative and quantitative studies of noncovalent interactions in clusters, biomolecules, and condensed-phase systems.
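
    For orientation, the low-order SAPT decomposition referred to above has the standard form (textbook notation, not quoted from the paper); XSAPT replaces the second-order induction terms with iterated many-body polarization and substitutes the dispersion terms as described:

        E_{\mathrm{int}} = E_{\mathrm{elst}}^{(1)} + E_{\mathrm{exch}}^{(1)}
        + E_{\mathrm{ind}}^{(2)} + E_{\mathrm{exch-ind}}^{(2)}
        + E_{\mathrm{disp}}^{(2)} + E_{\mathrm{exch-disp}}^{(2)}.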

  13. Quantitative method of measuring cancer cell urokinase and metastatic potential

    NASA Technical Reports Server (NTRS)

    Morrison, Dennis R. (Inventor)

    1993-01-01

    The metastatic potential of tumors can be evaluated by the quantitative detection of urokinase and DNA. The cell sample selected for examination is analyzed for the presence of high levels of urokinase and abnormal DNA using analytical flow cytometry and digital image analysis. Other factors such as membrane associated urokinase, increased DNA synthesis rates and certain receptors can be used in the method for detection of potentially invasive tumors.

  14. Testing of flat optical surfaces by the quantitative Foucault method.

    PubMed

    Simon, M C; Simon, J M

    1978-01-01

    The complete theory of measurement of optical flat mirrors of circular or elliptical shape using the quantitative Foucault method is described here. It has been used in Córdoba since 1939 in a partially intuitive but correct form. The surface, not yet flat and, at times, astigmatic, is assimilated to the sum of a spherical plus a cylindrical dome. The errors of the three possible ways of reckoning are calculated.

  15. An Accurate Non-Cooperative Method for Measuring Textureless Spherical Target Based on Calibrated Lasers

    PubMed Central

    Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning

    2016-01-01

    Strong demands for accurate non-cooperative target measurement have been arising recently for the tasks of assembling and capturing. Spherical objects are one of the most common targets in these applications. However, the performance of the traditional vision-based reconstruction method was limited for practical use when handling poorly-textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework of estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D position of the laser spots on the target surface and achieve the refined results via an optimized scheme. The experiment results show that our proposed calibration method can obtain a fine calibration result, which is comparable to the state-of-the-art LRF-based methods, and our calibrated system can estimate the geometric parameters with high accuracy in real time. PMID:27941705
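
    Given the reconstructed 3D laser-spot positions, recovering the sphere's geometric parameters is a small least-squares problem. The sketch below uses a simple algebraic fit as a stand-in for the paper's optimized refinement scheme; the test points are synthetic.

        import numpy as np

        def fit_sphere(pts):
            """pts: (n, 3) surface points, n >= 4. Returns (center, radius).
            Uses |x|^2 = 2 c.x + k with k = r^2 - |c|^2, a linear system."""
            A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
            b = (pts ** 2).sum(axis=1)
            sol, *_ = np.linalg.lstsq(A, b, rcond=None)
            center, k = sol[:3], sol[3]
            return center, np.sqrt(k + center @ center)

        rng = np.random.default_rng(2)
        d = rng.normal(size=(6, 3))
        d /= np.linalg.norm(d, axis=1, keepdims=True)
        pts = np.array([1.0, -0.5, 2.0]) + 0.75 * d   # points on a known sphere
        print(fit_sphere(pts))                         # ~ ([1, -0.5, 2], 0.75)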

  16. A novel method for accurate collagen and biochemical assessment of pulmonary tissue utilizing one animal

    PubMed Central

    Kliment, Corrine R; Englert, Judson M; Crum, Lauren P; Oury, Tim D

    2011-01-01

    Aim: The purpose of this study was to develop an improved method for collagen and protein assessment of fibrotic lungs while decreasing animal use. Methods: 8-10-week-old male C57BL/6 mice were given a single intratracheal instillation of crocidolite asbestos or control titanium dioxide. Lungs were collected on day 14 and dried as whole lung, or homogenized in CHAPS buffer, for hydroxyproline analysis. Insoluble and salt-soluble collagen content was also determined in lung homogenates using a modified Sirius red colorimetric 96-well plate assay. Results: The hydroxyproline assay showed significant increases in collagen content in the lungs of asbestos-treated mice. Identical results were obtained whether collagen content was determined on dried whole lung or on whole-lung homogenates. The Sirius red plate assay showed a significant increase in collagen content in lung homogenates; however, this assay grossly over-estimated the total amount of collagen and underestimated the changes between control and fibrotic lungs. Conclusions: The proposed method provides accurate quantification of collagen content in whole lungs and additional homogenate samples for biochemical analysis from a single animal. The Sirius red colorimetric plate assay provides a complementary method for determining relative changes in lung collagen, but its values tend to overestimate the absolute values obtained by the gold-standard hydroxyproline assay and to underestimate the overall fibrotic injury. PMID:21577320

  17. An Accurate Non-Cooperative Method for Measuring Textureless Spherical Target Based on Calibrated Lasers.

    PubMed

    Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning

    2016-12-09

    Strong demands for accurate non-cooperative target measurement have been arising recently for the tasks of assembling and capturing. Spherical objects are one of the most common targets in these applications. However, the performance of the traditional vision-based reconstruction method was limited for practical use when handling poorly-textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework of estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D position of the laser spots on the target surface and achieve the refined results via an optimized scheme. The experiment results show that our proposed calibration method can obtain a fine calibration result, which is comparable to the state-of-the-art LRF-based methods, and our calibrated system can estimate the geometric parameters with high accuracy in real time.

  18. Numerical system utilising a Monte Carlo calculation method for accurate dose assessment in radiation accidents.

    PubMed

    Takahashi, F; Endo, A

    2007-01-01

    A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is quite essential for a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining a radiation source and an exposed person, to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. For both tools, the necessary resources are defined through a dialogue method on a commonly used personal computer. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment by comparison with a past criticality accident and a hypothesized exposure.

  19. Accurate description of the electronic structure of organic semiconductors by GW methods

    NASA Astrophysics Data System (ADS)

    Marom, Noa

    2017-03-01

    Electronic properties associated with charged excitations, such as the ionization potential (IP), the electron affinity (EA), and the energy level alignment at interfaces, are critical parameters for the performance of organic electronic devices. To computationally design organic semiconductors and functional interfaces with tailored properties for target applications, it is necessary to accurately predict these properties from first principles. Many-body perturbation theory is often used for this purpose within the GW approximation, where G is the one particle Green's function and W is the dynamically screened Coulomb interaction. Here, the formalism of GW methods at different levels of self-consistency is briefly introduced and some recent applications to organic semiconductors and interfaces are reviewed.
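
    In compact notation, the GW self-energy and the perturbative (G0W0-style) quasiparticle correction to a Kohn-Sham eigenvalue read (standard formalism, not specific to this review):

        \Sigma = iGW, \qquad
        \varepsilon_n^{\mathrm{QP}} = \varepsilon_n^{\mathrm{KS}}
        + \langle \psi_n |\, \Sigma(\varepsilon_n^{\mathrm{QP}}) - v_{xc} \,| \psi_n \rangle.

    The levels of self-consistency differ in which of G, W, and the orbitals are updated with these quasiparticle energies.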

  20. Distance scaling method for accurate prediction of slowly varying magnetic fields in satellite missions

    NASA Astrophysics Data System (ADS)

    Zacharias, Panagiotis P.; Chatzineofytou, Elpida G.; Spantideas, Sotirios T.; Capsalis, Christos N.

    2016-07-01

    In the present work, the determination of the magnetic behavior of localized magnetic sources from near-field measurements is examined. The distance power law of the magnetic field fall-off is used in various cases to accurately predict the magnetic signature of an equipment under test (EUT) consisting of multiple alternating current (AC) magnetic sources. Therefore, parameters concerning the location of the observation points (magnetometers) are studied towards this scope. The results clearly show that these parameters are independent of the EUT's size and layout. Additionally, the techniques developed in the present study enable the placing of the magnetometers close to the EUT, thus achieving high signal-to-noise ratio (SNR). Finally, the proposed method is verified by real measurements, using a mobile phone as an EUT.
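
    The working assumption is a power-law fall-off fitted from near-field measurements (a generic form consistent with the abstract; the symbols here are not the paper's notation):

        B(r) \approx B(r_0)\left(\frac{r_0}{r}\right)^{n}, \qquad n \to 3 \text{ in the point-dipole limit},

    so fitting n and B(r_0) from magnetometers placed close to the EUT allows the signature at the required distance to be extrapolated with high SNR.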

  1. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E. Lynn; Seong, Jeong Chang; Steinwand, Dan

    2002-01-01

    Modeling regional and global activities of climatic and human-induced change requires accurate geographic data from which we can develop mathematical and statistical tabulations of attributes and properties of the environment. Many of these models depend on data formatted as raster cells or matrices of pixel values. Recently, it has been demonstrated that regional and global raster datasets are subject to significant error from mathematical projection and that these errors are of such magnitude that model results may be jeopardized (Steinwand, et al., 1995; Yang, et al., 1996; Usery and Seong, 2001; Seong and Usery, 2001). There is a need to develop methods of projection that maintain the accuracy of these datasets to support regional and global analyses and modeling.

  2. Methods for accurate analysis of galaxy clustering on non-linear scales

    NASA Astrophysics Data System (ADS)

    Vakili, Mohammadjavad

    2017-01-01

    Measurements of galaxy clustering with low-redshift galaxy surveys provide a sensitive probe of cosmology and the growth of structure. Parameter inference with galaxy clustering relies on computation of likelihood functions, which requires estimation of the covariance matrix of the observables used in our analyses. Therefore, accurate estimation of the covariance matrices serves as one of the key ingredients in precise cosmological parameter inference. This requires generation of a large number of independent galaxy mock catalogs that accurately describe the statistical distribution of galaxies over a wide range of physical scales. We present a fast method based on low-resolution N-body simulations and an approximate galaxy biasing technique for generating mock catalogs. Using a reference catalog that was created using the high-resolution Big-MultiDark N-body simulation, we show that our method is able to produce catalogs that describe galaxy clustering at percent-level accuracy down to highly non-linear scales in both real space and redshift space. In most large-scale structure analyses, modeling of galaxy bias on non-linear scales is performed assuming a halo model. Clustering of dark matter halos has been shown to depend on halo properties beyond mass, such as halo concentration, a phenomenon referred to as assembly bias. Standard large-scale structure studies assume that halo mass alone is sufficient to characterize the connection between galaxies and halos. However, modeling of galaxy bias can face systematic effects if the number of galaxies is correlated with other halo properties. Using the Small MultiDark-Planck high-resolution N-body simulation and the clustering measurements of the Sloan Digital Sky Survey DR7 main galaxy sample, we investigate the extent to which the dependence of galaxy bias on halo concentration can improve our modeling of galaxy clustering.
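
    The sketch below shows the role the mocks play downstream (a generic illustration with synthetic numbers, not the author's pipeline): an unbiased sample covariance is estimated from N mock measurements, and its inverse is debiased with the Hartlap factor before entering a Gaussian likelihood.

        import numpy as np

        rng = np.random.default_rng(3)
        N, nbins = 1000, 20
        mocks = rng.normal(size=(N, nbins))   # per-mock clustering measurements

        mean = mocks.mean(axis=0)
        diff = mocks - mean
        cov = diff.T @ diff / (N - 1)         # unbiased sample covariance

        # The inverse covariance used in a Gaussian likelihood is commonly
        # debiased with the Hartlap factor (N - nbins - 2) / (N - 1); whether
        # this is appropriate depends on the analysis.
        icov = (N - nbins - 2) / (N - 1) * np.linalg.inv(cov)
        print(cov.shape, icov.shape)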

  3. Improving the full spectrum fitting method: accurate convolution with Gauss-Hermite functions

    NASA Astrophysics Data System (ADS)

    Cappellari, Michele

    2017-04-01

    I start by providing an updated summary of the penalized pixel-fitting (PPXF) method that is used to extract the stellar and gas kinematics, as well as the stellar population of galaxies, via full spectrum fitting. I then focus on the problem of extracting the kinematics when the velocity dispersion σ is smaller than the velocity sampling ΔV that is generally, by design, close to the instrumental dispersion σinst. The standard approach consists of convolving templates with a discretized kernel, while fitting for its parameters. This is obviously very inaccurate when σ ≲ ΔV/2, due to undersampling. Oversampling can prevent this, but it has drawbacks. Here I present a more accurate and efficient alternative. It avoids the evaluation of the undersampled kernel and instead directly computes its well-sampled analytic Fourier transform, for use with the convolution theorem. A simple analytic transform exists when the kernel is described by the popular Gauss-Hermite parametrization (which includes the Gaussian as special case) for the line-of-sight velocity distribution. I describe how this idea was implemented in a significant upgrade to the publicly available PPXF software. The key advantage of the new approach is that it provides accurate velocities regardless of σ. This is important e.g. for spectroscopic surveys targeting galaxies with σ ≪ σinst, for galaxy redshift determinations or for measuring line-of-sight velocities of individual stars. The proposed method could also be used to fix Gaussian convolution algorithms used in today's popular software packages.
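
    A minimal sketch of the idea for the Gaussian special case follows (illustrative code, not the pPXF implementation; v and sigma are in pixel units of the velocity sampling): the template's FFT is multiplied by the analytic Fourier transform of the LOSVD, which remains well-sampled however small sigma is.

        import numpy as np

        def losvd_ft(freqs, v, sigma):
            """Analytic FT of a Gaussian LOSVD with mean v and dispersion sigma."""
            return np.exp(-2j * np.pi * freqs * v
                          - 2.0 * (np.pi * freqs * sigma) ** 2)

        def convolve(template, v, sigma):
            """Convolve via the convolution theorem; no discretized kernel."""
            n = template.size
            freqs = np.fft.rfftfreq(n)
            return np.fft.irfft(np.fft.rfft(template)
                                * losvd_ft(freqs, v, sigma), n)

        spec = np.zeros(256); spec[128] = 1.0        # a delta-like "line"
        out = convolve(spec, v=0.3, sigma=0.4)       # sigma well below 1 pixel
        print(out.argmax(), out.sum())               # line position; flux conserved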

  4. Method for quantitative proteomics research by using metal element chelated tags coupled with mass spectrometry.

    PubMed

    Liu, Huiling; Zhang, Yangjun; Wang, Jinglan; Wang, Dong; Zhou, Chunxi; Cai, Yun; Qian, Xiaohong

    2006-09-15

    Mass spectrometry-based methods using a stable isotope as the internal standard in quantitative proteomics have developed quickly in recent years, but the use of some stable isotope reagents is limited by their relatively high price and synthetic difficulties. We have developed a new method for quantitative proteomics research by using metal element chelated tags (MECT) coupled with mass spectrometry. The bicyclic anhydride of diethylenetriamine-N,N,N',N'',N''-pentaacetic acid (DTPA) is covalently coupled to primary amines of peptides, and the ligand is then chelated to the rare earth metals Y and Tb. The tagged peptides are mixed and analyzed by LC-ESI-MS/MS. Peptides are quantified by measuring the relative signal intensities for the Y and Tb tag pairs in MS, which permits quantitation of the original proteins generating the corresponding peptides. Each protein is then identified by the corresponding peptide sequence from its MS/MS spectrum. The MECT method was evaluated using standard proteins as a model sample. The experimental results showed that metal chelate-tagged peptides coeluted successfully during the reversed-phase LC analysis. The relative quantitation results for proteins using MECT were accurate. DTPA modification of the N-termini of peptides promoted cleaner fragmentation (only y-series ions) in mass spectrometry and improved the confidence level of protein identification. The MECT strategy provides a simple, rapid, and economical alternative to currently available mass tagging technologies.

  5. A time-accurate finite volume method valid at all flow velocities

    NASA Astrophysics Data System (ADS)

    Kim, S.-W.

    1993-07-01

    A finite volume method to solve the Navier-Stokes equations at all flow velocities (e.g., incompressible, subsonic, transonic, supersonic and hypersonic flows) is presented. The numerical method is based on a finite volume method that incorporates a pressure-staggered mesh and an incremental pressure equation for the conservation of mass. Comparisons of three generally accepted time-advancing schemes, i.e., Simplified Marker-and-Cell (SMAC), Pressure-Implicit-Splitting of Operators (PISO), and Iterative-Time-Advancing (ITA), are made by solving a lid-driven polar cavity flow and self-sustained oscillatory flows over circular and square cylinders. Calculated results show that the ITA is the most stable numerically and yields the most accurate results. The SMAC is the most efficient computationally and is as stable as the ITA. It is shown that the PISO is the most weakly convergent and exhibits an undesirable strong dependence on the time-step size. The degraded numerical results obtained using the PISO are attributed to its second corrector step, which causes the numerical results to deviate further from a divergence-free velocity field. The accurate numerical results obtained using the ITA are attributed to its capability to resolve the nonlinearity of the Navier-Stokes equations. The present numerical method, which incorporates the ITA, is used to solve an unsteady transitional flow over an oscillating airfoil and a chemically reacting flow of hydrogen in a vitiated supersonic airstream. The turbulence fields in these flow cases are described using multiple-time-scale turbulence equations. For the unsteady transitional flow over an oscillating airfoil, the fluid flow is described using ensemble-averaged Navier-Stokes equations defined on Lagrangian-Eulerian coordinates. It is shown that the numerical method successfully predicts the large dynamic stall vortex (DSV) and the trailing edge vortex (TEV) that are periodically generated by the oscillating airfoil.

  6. Targeted LC-MS/MS Method for the Quantitation of Plant Lignans and Enterolignans in Biofluids from Humans and Pigs.

    PubMed

    Nørskov, Natalja P; Olsen, Anja; Tjønneland, Anne; Bolvig, Anne Katrine; Lærke, Helle Nygaard; Knudsen, Knud Erik Bach

    2015-07-15

    Lignans have gained nutritional interest due to their promising role in the prevention of lifestyle diseases. However, epidemiological studies are in need of more evidence to link the intake of lignans to this promising role. In this context, it is necessary to study large population groups to obtain sufficient statistical power. Therefore, there is a demand for fast, sensitive, and accurate methods for quantitation with high throughput of samples. This paper presents a validated LC-MS/MS method for the quantitation of eight plant lignans (matairesinol, hydroxymatairesinol, secoisolariciresinol, lariciresinol, isolariciresinol, syringaresinol, medioresinol, and pinoresinol) and two enterolignans (enterodiol and enterolactone) in both human and pig plasma and urine. The method showed high selectivity and sensitivity, allowing quantitation of lignans in the range of 0.024-100 ng/mL with a run time of only 4.8 min per sample. The method was successfully applied to quantitate lignans in biofluids from ongoing studies with humans and pigs.

  7. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid–Base and Ligand Binding Equilibria of Aquacobalamin

    SciTech Connect

    Johnston, Ryne C.; Zhou, Jing; Smith, Jeremy C.; Parks, Jerry M.

    2016-07-08

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co ligand binding equilibrium constants (Kon/off), pKas and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for CoIII, CoII, and CoI species, respectively, and the second model features saturation of each vacant axial coordination site on CoII and CoI species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of

  8. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid–Base and Ligand Binding Equilibria of Aquacobalamin

    DOE PAGES

    Johnston, Ryne C.; Zhou, Jing; Smith, Jeremy C.; ...

    2016-07-08

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co ligand binding equilibrium constants (Kon/off), pKas and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for CoIII, CoII, and CoI species, respectively, and the second model features saturation of each vacant axial coordination site on CoII and CoI species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co axial ligand binding, leading to substantial errors in predicted

  9. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid-Base and Ligand Binding Equilibria of Aquacobalamin.

    PubMed

    Johnston, Ryne C; Zhou, Jing; Smith, Jeremy C; Parks, Jerry M

    2016-08-04

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co-ligand binding equilibrium constants (Kon/off), pKas, and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for Co(III), Co(II), and Co(I) species, respectively, and the second model features saturation of each vacant axial coordination site on Co(II) and Co(I) species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co-axial ligand binding, leading to substantial errors in predicted pKas and
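
    The pKas and reduction potentials discussed above follow from computed free energies through the standard thermodynamic relations ΔG = −nFE and ΔG = RT ln(10)·pKa. A minimal sketch of that conversion, assuming free energies in kcal/mol and an absolute SHE potential of 4.28 V (a common literature value, not taken from this paper):

    ```python
    import math

    F = 96485.33      # Faraday constant, C/mol
    R = 8.314462      # gas constant, J/(mol K)
    T = 298.15        # temperature, K
    E_SHE_ABS = 4.28  # assumed absolute SHE potential, V (literature values vary)

    def reduction_potential(dG_red_kcal, n=1):
        """E vs SHE from a computed reduction free energy (kcal/mol),
        via Delta G = -nF E(abs), then referencing to the SHE."""
        dG_J = dG_red_kcal * 4184.0
        return -dG_J / (n * F) - E_SHE_ABS

    def pka(dG_deprot_kcal):
        """pKa from a computed deprotonation free energy (kcal/mol)."""
        return dG_deprot_kcal * 4184.0 / (R * T * math.log(10))

    # Illustrative inputs, not values from the paper:
    print(f"E = {reduction_potential(-95.0):+.2f} V, pKa = {pka(10.0):.1f}")
    ```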

  10. A quantitative method for measuring the quality of history matches

    SciTech Connect

    Shaw, T.S.; Knapp, R.M.

    1997-08-01

    History matching can be an efficient tool for reservoir characterization. A "good" history matching job can generate reliable reservoir parameters. However, reservoir engineers are often frustrated when they try to select a "better" match from a series of history matching runs. Without a quantitative measurement, it is always difficult to tell the difference between a "good" and a "better" match. For this reason, we need a quantitative method for testing the quality of matches. This paper presents a method for such a purpose. The method uses three statistical indices to (1) test shape conformity, (2) examine bias errors, and (3) measure magnitude of deviation. The shape conformity test ensures that the shape of a simulated curve matches that of a historical curve. Examining bias errors assures that model reservoir parameters have been calibrated to those of a real reservoir. Measuring the magnitude of deviation assures that the difference between the model and the real reservoir parameters is minimized. The method was first tested on a hypothetical model and then applied to published field studies. The results showed that the method can efficiently measure the quality of matches. It also showed that the method can serve as a diagnostic tool for calibrating reservoir parameters during history matching.
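
    The abstract does not define the three indices, so the sketch below substitutes common analogues (Pearson correlation for shape conformity, mean error for bias, RMSE for magnitude of deviation) purely to illustrate the three-index idea:

    ```python
    import numpy as np

    def match_quality(simulated, historical):
        """Three illustrative match-quality indices, assumed stand-ins for
        the paper's statistics: shape conformity (Pearson r), bias (mean
        error), and magnitude of deviation (RMSE)."""
        s = np.asarray(simulated, float)
        h = np.asarray(historical, float)
        shape = np.corrcoef(s, h)[0, 1]      # 1.0 = identical curve shape
        bias = np.mean(s - h)                # systematic over/under-prediction
        rmse = np.sqrt(np.mean((s - h) ** 2))
        return shape, bias, rmse

    hist = np.array([100, 120, 150, 170, 160, 140])   # observed rates (synthetic)
    sim  = np.array([ 98, 125, 148, 175, 158, 138])   # simulator output (synthetic)
    print("shape=%.3f bias=%.2f rmse=%.2f" % match_quality(sim, hist))
    ```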

  11. A comparison of quantitative methods for clinical imaging with hyperpolarized (13)C-pyruvate.

    PubMed

    Daniels, Charlie J; McLean, Mary A; Schulte, Rolf F; Robb, Fraser J; Gill, Andrew B; McGlashan, Nicholas; Graves, Martin J; Schwaiger, Markus; Lomas, David J; Brindle, Kevin M; Gallagher, Ferdia A

    2016-04-01

    Dissolution dynamic nuclear polarization (DNP) enables the metabolism of hyperpolarized (13)C-labelled molecules, such as the conversion of [1-(13)C]pyruvate to [1-(13)C]lactate, to be dynamically and non-invasively imaged in tissue. Imaging of this exchange reaction in animal models has been shown to detect early treatment response and correlate with tumour grade. The first human DNP study has recently been completed, and, for widespread clinical translation, simple and reliable methods are necessary to accurately probe the reaction in patients. However, there is currently no consensus on the most appropriate method to quantify this exchange reaction. In this study, an in vitro system was used to compare several kinetic models, as well as simple model-free methods. Experiments were performed using a clinical hyperpolarizer, a human 3 T MR system, and spectroscopic imaging sequences. The quantitative methods were compared in vivo by using subcutaneous breast tumours in rats to examine the effect of pyruvate inflow. The two-way kinetic model was the most accurate method for characterizing the exchange reaction in vitro, and the incorporation of a Heaviside step inflow profile was best able to describe the in vivo data. The lactate time-to-peak and the lactate-to-pyruvate area under the curve ratio were simple model-free approaches that accurately represented the full reaction, with the time-to-peak method performing indistinguishably from the best kinetic model. Finally, extracting data from a single pixel was a robust and reliable surrogate of the whole region of interest. This work has identified appropriate quantitative methods for future work in the analysis of human hyperpolarized (13)C data.
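
    The two model-free measures identified above, lactate time-to-peak and the lactate-to-pyruvate AUC ratio, can be read directly off the dynamic signal curves. A minimal sketch with synthetic curves (the sampling interval and curve shapes are assumptions for illustration):

    ```python
    import numpy as np

    def trapz(y, x):
        """Trapezoidal area under y(x)."""
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

    def model_free_metrics(t, lactate, pyruvate):
        """Lactate time-to-peak and lactate/pyruvate AUC ratio."""
        ttp = t[np.argmax(lactate)]
        return ttp, trapz(lactate, t) / trapz(pyruvate, t)

    t = np.arange(0.0, 60.0, 3.0)              # 3 s temporal resolution (assumed)
    pyr = t * np.exp(-t / 25.0)                # toy pyruvate inflow-decay curve
    lac = 0.05 * t**1.3 * np.exp(-t / 35.0)    # toy downstream lactate curve
    ttp, ratio = model_free_metrics(t, lac, pyr)
    print(f"lactate time-to-peak = {ttp:.0f} s, AUC ratio = {ratio:.3f}")
    ```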

  12. The extended Koopmans' theorem for orbital-optimized methods: accurate computation of ionization potentials.

    PubMed

    Bozkaya, Uğur

    2013-10-21

    The extended Koopmans' theorem (EKT) provides a straightforward way to compute ionization potentials (IPs) from any level of theory, in principle. However, for non-variational methods, such as Møller-Plesset perturbation and coupled-cluster theories, the EKT computations can only be performed as by-products of analytic gradients, because the relaxed generalized Fock matrix (GFM) and one- and two-particle density matrices (OPDM and TPDM, respectively) are required [J. Cioslowski, P. Piskorz, and G. Liu, J. Chem. Phys. 107, 6804 (1997)]. In contrast, for the orbital-optimized methods both the GFM and OPDM are readily available and symmetric, as opposed to the standard post-Hartree-Fock (HF) methods. Further, the orbital-optimized methods solve the N-representability problem, which may arise when the relaxed particle density matrices are employed for the standard methods, by disregarding the orbital Z-vector contributions to the OPDM. Moreover, for challenging chemical systems, where spin or spatial symmetry-breaking problems are observed, the abnormal orbital response contributions arising from numerical instabilities in the HF molecular orbital Hessian can be avoided by orbital optimization. Hence, the orbital-optimized methods appear to be the most natural choice for the study of the EKT. In this research, the EKT for orbital-optimized methods, such as orbital-optimized second- and third-order Møller-Plesset perturbation [U. Bozkaya, J. Chem. Phys. 135, 224103 (2011)] and coupled-electron pair theories [OCEPA(0)] [U. Bozkaya and C. D. Sherrill, J. Chem. Phys. 139, 054104 (2013)], is presented. The presented methods are applied to IPs of the second- and third-row atoms, and closed- and open-shell molecules. The performance of the orbital-optimized methods is compared with that of the counterpart standard methods. In particular, results of the OCEPA(0) method (with the aug-cc-pVTZ basis set) for the lowest IPs of the considered atoms and closed

  13. Conservative high-order-accurate finite-difference methods for curvilinear grids

    NASA Technical Reports Server (NTRS)

    Rai, Man M.; Chakravarthy, Sukumar

    1993-01-01

    Two fourth-order-accurate finite-difference methods for numerically solving hyperbolic systems of conservation equations on smooth curvilinear grids are presented. The first method uses the differential form of the conservation equations; the second method uses the integral form of the conservation equations. Modifications to these schemes, which are required near boundaries to maintain overall high-order accuracy, are discussed. An analysis that demonstrates the stability of the modified schemes is also provided. Modifications to one of the schemes to make it total variation diminishing (TVD) are also discussed. Results that demonstrate the high-order accuracy of both schemes are included in the paper. In particular, a Ringleb-flow computation demonstrates the high-order accuracy and the stability of the boundary and near-boundary procedures. A second computation of supersonic flow over a cylinder demonstrates the shock-capturing capability of the TVD methodology. An important contribution of this paper is the clear demonstration that higher order accuracy leads to increased computational efficiency.
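
    For reference, the interior stencil on which fourth-order finite-difference schemes of this kind rest is the five-point central difference; the sketch below verifies its accuracy on a smooth function, with deliberately crude low-order edge closures, which is exactly the boundary issue the paper's modifications address:

    ```python
    import numpy as np

    def dfdx4(f, dx):
        """Fourth-order central difference for df/dx on a uniform grid."""
        d = np.empty_like(f)
        d[2:-2] = (f[:-4] - 8*f[1:-3] + 8*f[3:-1] - f[4:]) / (12*dx)
        # Crude first-order one-sided closures at the edges, for illustration:
        d[:2] = (f[1:3] - f[0:2]) / dx
        d[-2:] = (f[-2:] - f[-3:-1]) / dx
        return d

    x = np.linspace(0, 2*np.pi, 201)
    dx = x[1] - x[0]
    err = np.max(np.abs(dfdx4(np.sin(x), dx)[2:-2] - np.cos(x)[2:-2]))
    print(f"max interior error: {err:.2e}")   # fourth-order small, ~1e-7
    ```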

  14. A Method for Accurate Reconstructions of the Upper Airway Using Magnetic Resonance Images

    PubMed Central

    Xiong, Huahui; Huang, Xiaoqing; Li, Yong; Li, Jianhong; Xian, Junfang; Huang, Yaqi

    2015-01-01

    Objective The purpose of this study is to provide an optimized method to reconstruct the structure of the upper airway (UA) based on magnetic resonance imaging (MRI) that can faithfully show the anatomical structure with a smooth surface without artificial modifications. Methods MRI was performed on the head and neck of a healthy young male participant in the axial, coronal and sagittal planes to acquire images of the UA. The level set method was used to segment the boundary of the UA. The boundaries in the three scanning planes were registered according to the positions of crossing points and anatomical characteristics using a Matlab program. Finally, the three-dimensional (3D) NURBS (Non-Uniform Rational B-Splines) surface of the UA was constructed using the registered boundaries in all three different planes. Results A smooth 3D structure of the UA was constructed, which captured the anatomical features from the three anatomical planes, particularly the location of the anterior wall of the nasopharynx. The volume and area of every cross section of the UA can be calculated from the constructed 3D model of UA. Conclusions A complete scheme of reconstruction of the UA was proposed, which can be used to measure and evaluate the 3D upper airway accurately. PMID:26066461

  15. Extracting accurate strain measurements in bone mechanics: A critical review of current methods.

    PubMed

    Grassi, Lorenzo; Isaksson, Hanna

    2015-10-01

    Osteoporosis related fractures are a social burden that advocates for more accurate fracture prediction methods. Mechanistic methods, e.g. finite element models, have been proposed as a tool to better predict bone mechanical behaviour and strength. However, there is little consensus about the optimal constitutive law to describe bone as a material. Extracting reliable and relevant strain data from experimental tests is of fundamental importance to better understand bone mechanical properties, and to validate numerical models. Several techniques have been used to measure strain in experimental mechanics, with substantial differences in terms of accuracy, precision, time- and length-scale. Each technique presents upsides and downsides that must be carefully evaluated when designing the experiment. Moreover, additional complexities are often encountered when applying such strain measurement techniques to bone, due to its complex composite structure. This review of literature examined the four most commonly adopted methods for strain measurements (strain gauges, fibre Bragg grating sensors, digital image correlation, and digital volume correlation), with a focus on studies with bone as a substrate material, at the organ and tissue level. For each of them the working principles, a summary of the main applications to bone mechanics at the organ- and tissue-level, and a list of pros and cons are provided.

  16. Keeping the edge: an accurate numerical method to solve the stream power law

    NASA Astrophysics Data System (ADS)

    Campforts, B.; Govers, G.

    2015-12-01

    Bedrock rivers set the base level of surrounding hill slopes and mediate the dynamic interplay between mountain building and denudation. The propensity of rivers to preserve pulses of increased tectonic uplift also allows long-term uplift histories to be reconstructed from longitudinal river profiles. An accurate reconstruction of river profile development at different timescales is therefore essential. Long-term river development is typically modeled by means of the stream power law. Under specific conditions this equation can be solved analytically, but numerical Finite Difference Methods (FDMs) are most frequently used. Nonetheless, FDMs suffer from numerical smearing, especially at knickpoint zones, which are key to understanding transient landscapes. Here, we solve the stream power law by means of a Finite Volume Method (FVM) which is Total Variation Diminishing (TVD). TVD schemes are designed to capture sharp discontinuities, making them very suitable for modeling river incision. In contrast to FDMs, the TVD_FVM is well capable of preserving knickpoints, as illustrated for the fast-propagating Niagara Falls. Moreover, we show that the TVD_FVM performs much better when reconstructing uplift at timescales exceeding 100 Myr, using Eastern Australia as an example. Finally, uncertainty associated with parameter calibration is dramatically reduced when the TVD_FVM is applied. Therefore, the use of a TVD_FVM to understand long-term landscape evolution is an important addition to the toolbox at the disposal of geomorphologists.
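
    A first-order explicit upwind discretization of the stream power law, dz/dt = U − K A^m S^n, illustrates the baseline such work improves on: this simple scheme runs, but it smears knickpoints, which is precisely the behavior the authors' TVD finite volume method is designed to avoid. All parameter values below are illustrative assumptions:

    ```python
    import numpy as np

    # Upwind slope is taken toward the downstream node, since knickpoints
    # migrate upstream along the profile.
    L, n_nodes = 100e3, 501                  # 100 km profile
    x = np.linspace(0.0, L, n_nodes)         # x = 0 at the divide
    dx = x[1] - x[0]
    A = np.maximum(x, 1.0) ** 1.67           # Hack-style drainage area (assumed)
    K, m, n, U = 2e-6, 0.45, 1.0, 1e-3       # erodibility, exponents, uplift (m/yr)
    z = (L - x) * 1e-3                       # initial linear profile, outlet at 0

    dt = 50.0                                # yr; small enough for stability here
    for _ in range(20000):                   # 1 Myr of evolution
        S = np.zeros_like(z)
        S[:-1] = (z[:-1] - z[1:]) / dx       # downstream (upwind) slope
        z += dt * (U - K * A**m * np.maximum(S, 0.0)**n)
        z[-1] = 0.0                          # fixed base level at the outlet
    print(f"mean elevation after 1 Myr: {z.mean():.1f} m")
    ```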

  17. An automatic method for fast and accurate liver segmentation in CT images using a shape detection level set method

    NASA Astrophysics Data System (ADS)

    Lee, Jeongjin; Kim, Namkug; Lee, Ho; Seo, Joon Beom; Won, Hyung Jin; Shin, Yong Moon; Shin, Yeong Gil

    2007-03-01

    Automatic liver segmentation is still a challenging task due to the ambiguity of the liver boundary and the complex context of nearby organs. In this paper, we propose a faster and more accurate way of segmenting the liver in CT images with an enhanced level set method. The speed image for level-set propagation is smoothly generated by increasing the number of iterations in anisotropic diffusion filtering. This prevents the level-set propagation from stopping in front of the local minima that prevail in liver CT images due to irregular intensity distributions of the interior liver region. The curvature term of the shape-modeling level-set method captures well the shape variations of the liver along the slice. Finally, a rolling ball algorithm is applied to include enhanced vessels near the liver boundary. Our approach was tested and compared to manual segmentation results of eight CT scans with 5 mm slice distance using the average distance and volume error. The average distance error between corresponding liver boundaries is 1.58 mm and the average volume error is 2.2%. The average processing time for the segmentation of each slice is 5.2 seconds, which is much faster than conventional methods. The accurate and fast results of our method will expedite the next stage of liver volume quantification for liver transplantation.
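
    The anisotropic diffusion preprocessing step mentioned above is commonly implemented with the Perona-Malik scheme; a minimal sketch under that assumption (parameters illustrative, periodic boundaries via np.roll for brevity):

    ```python
    import numpy as np

    def anisotropic_diffusion(img, n_iter=50, kappa=30.0, lam=0.2):
        """Perona-Malik diffusion: smooths homogeneous regions while
        preserving edges; lam <= 0.25 keeps the 4-neighbour update stable."""
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # nearest-neighbour differences (north, south, east, west)
            dN = np.roll(u, -1, 0) - u
            dS = np.roll(u,  1, 0) - u
            dE = np.roll(u, -1, 1) - u
            dW = np.roll(u,  1, 1) - u
            g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
            u += lam * (g(dN)*dN + g(dS)*dS + g(dE)*dE + g(dW)*dW)
        return u

    noisy = np.random.default_rng(0).normal(100, 10, (64, 64))
    noisy[:, 32:] += 60                                  # a step edge to preserve
    print(np.std(anisotropic_diffusion(noisy)[:, :30]))  # interior noise shrinks
    ```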

  18. An improved quantitative analysis method for plant cortical microtubules.

    PubMed

    Lu, Yi; Huang, Chenyang; Wang, Jia; Shang, Peng

    2014-01-01

    The arrangement of plant cortical microtubules can reflect the physiological state of cells. However, little attention has been paid so far to the quantitative image analysis of plant cortical microtubules. In this paper, the Bidimensional Empirical Mode Decomposition (BEMD) algorithm was applied to preprocess the original microtubule images. The Intrinsic Mode Function 1 (IMF1) image obtained from the decomposition was then selected for texture analysis based on the Grey-Level Co-occurrence Matrix (GLCM) algorithm. In order to further verify its reliability, the proposed texture analysis method was used to distinguish different images of Arabidopsis microtubules. The results showed that the BEMD algorithm preserved edges well while reducing noise, and that the geometrical characteristics of the texture were evident. Four texture parameters extracted by GLCM perfectly reflected the different arrangements between the two images of cortical microtubules. In summary, the results indicate that this method is feasible and effective for the quantitative image analysis of plant cortical microtubules. It not only provides a new quantitative approach for the comprehensive study of the role played by microtubules in cell life activities but also supplies a reference for other similar studies.
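
    A GLCM texture computation of the kind described can be sketched with scikit-image; the abstract does not name its four parameters, so contrast, correlation, energy, and homogeneity are assumed here, and a random array stands in for the IMF1 image:

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19 spelling

    rng = np.random.default_rng(1)
    img = (rng.random((128, 128)) * 255).astype(np.uint8)  # stand-in for an IMF1 image

    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi/2],
                        levels=256, symmetric=True, normed=True)
    for prop in ("contrast", "correlation", "energy", "homogeneity"):
        print(prop, graycoprops(glcm, prop).mean())
    ```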

  19. A quantitative method for optimized placement of continuous air monitors.

    PubMed

    Whicker, Jeffrey J; Rodgers, John C; Moxley, John S

    2003-11-01

    Alarming continuous air monitors (CAMs) are a critical component for worker protection in facilities that handle large amounts of hazardous materials. In nuclear facilities, continuous air monitors alarm when levels of airborne radioactive materials exceed alarm thresholds, thus prompting workers to exit the room to reduce inhalation exposures. To maintain a high level of worker protection, continuous air monitors are required to detect radioactive aerosol clouds quickly and with good sensitivity. This requires that there are sufficient numbers of continuous air monitors in a room and that they are well positioned. Yet there are no published methodologies to quantitatively determine the optimal number and placement of continuous air monitors in a room. The goal of this study was to develop and test an approach to quantitatively determine optimal number and placement of continuous air monitors in a room. The method we have developed uses tracer aerosol releases (to simulate accidental releases) and the measurement of the temporal and spatial aspects of the dispersion of the tracer aerosol through the room. The aerosol dispersion data is then analyzed to optimize continuous air monitor utilization based on simulated worker exposure. This method was tested in a room within a Department of Energy operated plutonium facility at the Savannah River Site in South Carolina, U.S. Results from this study show that the value of quantitative airflow and aerosol dispersion studies is significant and that worker protection can be significantly improved while balancing the costs associated with CAM programs.

  20. Accurate, precise, and efficient theoretical methods to calculate anion-π interaction energies in model structures.

    PubMed

    Mezei, Pál D; Csonka, Gábor I; Ruzsinszky, Adrienn; Sun, Jianwei

    2015-01-13

    A correct description of the anion-π interaction is essential for the design of selective anion receptors and channels and important for advances in the field of supramolecular chemistry. However, it is challenging to do accurate, precise, and efficient calculations of this interaction, which are lacking in the literature. In this article, by testing sets of 20 binary anion-π complexes of fluoride, chloride, bromide, nitrate, or carbonate ions with hexafluorobenzene, 1,3,5-trifluorobenzene, 2,4,6-trifluoro-1,3,5-triazine, or 1,3,5-triazine and 30 ternary π-anion-π' sandwich complexes composed of the same monomers, we suggest domain-based local-pair natural orbital coupled cluster energies extrapolated to the complete basis-set limit as reference values. We give a detailed explanation of the origin of anion-π interactions, using the permanent quadrupole moments, static dipole polarizabilities, and electrostatic potential maps. We use symmetry-adapted perturbation theory (SAPT) to calculate the components of the anion-π interaction energies. We examine the performance of the direct random phase approximation (dRPA), the second-order screened exchange (SOSEX), local-pair natural-orbital (LPNO) coupled electron pair approximation (CEPA), and several dispersion-corrected density functionals (including generalized gradient approximation (GGA), meta-GGA, and double hybrid density functional). The LPNO-CEPA/1 results show the best agreement with the reference results. The dRPA method is only slightly less accurate and precise than the LPNO-CEPA/1, but it is considerably more efficient (6-17 times faster) for the binary complexes studied in this paper. For 30 ternary π-anion-π' sandwich complexes, we give dRPA interaction energies as reference values. The double hybrid functionals are much more efficient but less accurate and precise than dRPA. The dispersion-corrected double hybrid PWPB95-D3(BJ) and B2PLYP-D3(BJ) functionals perform better than the GGA and meta

  1. Quantitative mass spectrometric analysis of glycoproteins combined with enrichment methods.

    PubMed

    Ahn, Yeong Hee; Kim, Jin Young; Yoo, Jong Shin

    2015-01-01

    Mass spectrometry (MS) has been a core technology for highly sensitive and high-throughput analysis of the enriched glycoproteome, in terms of quantitative assays as well as qualitative profiling of glycoproteins. Because it is widely recognized that aberrant glycosylation of a glycoprotein may be involved in the progression of certain diseases, the development of efficient analysis tools for aberrant glycoproteins is very important for a deep understanding of the pathological function of the glycoprotein and for new biomarker development. This review first describes protein glycosylation-targeting enrichment technologies, mainly employing solid-phase extraction methods such as hydrazide capturing, lectin-specific capturing, and affinity separation techniques based on porous graphitized carbon, hydrophilic interaction chromatography, or immobilized boronic acid. Second, MS-based quantitative analysis strategies coupled with these protein glycosylation-targeting enrichment technologies, using label-free MS, stable isotope labeling, or targeted multiple reaction monitoring (MRM) MS, are summarized with recent published studies.

  2. A Monte Carlo Method for Making the SDSS u-Band Magnitude More Accurate

    NASA Astrophysics Data System (ADS)

    Gu, Jiayin; Du, Cuihua; Zuo, Wenbo; Jing, Yingjie; Wu, Zhenyu; Ma, Jun; Zhou, Xu

    2016-10-01

    We develop a new Monte Carlo-based method to convert the Sloan Digital Sky Survey (SDSS) u-band magnitude to the South Galactic Cap u-band Sky Survey (SCUSS) u-band magnitude. Due to the higher accuracy of the SCUSS u-band measurements, the converted u-band magnitude becomes more accurate than the original SDSS u-band magnitude, in particular at the faint end. The average u-magnitude error (for both SDSS and SCUSS) of numerous main-sequence stars with 0.2 < g − r < 0.8 increases as the g-band magnitude becomes fainter. When g = 19.5, the average magnitude error of the SDSS u is 0.11. When g = 20.5, the average SDSS u error rises to 0.22. However, at this magnitude, the average magnitude error of the SCUSS u is just half that of the SDSS u. The SDSS u-band magnitudes of main-sequence stars with 0.2 < g − r < 0.8 and 18.5 < g < 20.5 are converted, so the maximum average error of the converted u-band magnitudes is 0.11. The potential application of this conversion is to derive a more accurate photometric metallicity calibration from SDSS observations, especially for the more distant stars. Thus, we can explore stellar metallicity distributions either in the Galactic halo or in stream stars.
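
    The conversion itself is not specified in the abstract; purely to illustrate the Monte Carlo error-propagation idea, the sketch below assumes a linear colour-term transformation with placeholder coefficients and errors:

    ```python
    import numpy as np

    # Assumed form: u_SCUSS = u_SDSS + a*(g - r) + b. The coefficients and
    # photometric errors below are placeholders, not the paper's calibration.
    rng = np.random.default_rng(42)
    a, b = 0.05, -0.02                     # hypothetical colour-term coefficients
    u_sdss, sig_u = 21.3, 0.22             # SDSS u and its error at g ~ 20.5
    gr, sig_gr = 0.5, 0.03                 # g - r colour and its error

    u_draws = rng.normal(u_sdss, sig_u, 100_000)
    gr_draws = rng.normal(gr, sig_gr, 100_000)
    u_scuss = u_draws + a * gr_draws + b
    print(f"converted u = {u_scuss.mean():.2f} +/- {u_scuss.std():.2f}")
    ```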

  3. A rapid chemiluminescent method for quantitation of human DNA.

    PubMed Central

    Walsh, P S; Varlaro, J; Reynolds, R

    1992-01-01

    A sensitive and simple method for the quantitation of human DNA is described. This method is based on probe hybridization to a human alpha satellite locus, D17Z1. The biotinylated probe is hybridized to sample DNA immobilized on nylon membrane. The subsequent binding of streptavidin-horseradish peroxidase to the bound probe allows for chemiluminescent detection using a luminol-based reagent and X-ray film. Less than 150 pg of human DNA can easily be detected with a 15 minute exposure. The entire procedure can be performed in 1.5 hours. Microgram quantities of nonhuman DNA have been tested and the results indicate very high specificity for human DNA. The data on film can be scanned into a computer and a commercially available program can be used to create a standard curve where DNA quantity is plotted against the mean density of each slot blot signal. The methods described can also be applied to the very sensitive determination of quantity and quality (size) of DNA on Southern blots. The high sensitivity of this quantitation method requires the consumption of only a fraction of sample for analysis. Determination of DNA quantity is necessary for RFLP and many PCR-based tests where optimal results are obtained only with a relatively narrow range of DNA quantities. The specificity of this quantitation method for human DNA will be useful for the analysis of samples that may also contain bacterial or other non-human DNA, for example forensic evidence samples, ancient DNA samples, or clinical samples. PMID:1408822

  4. A rapid chemiluminescent method for quantitation of human DNA.

    PubMed

    Walsh, P S; Varlaro, J; Reynolds, R

    1992-10-11

    A sensitive and simple method for the quantitation of human DNA is described. This method is based on probe hybridization to a human alpha satellite locus, D17Z1. The biotinylated probe is hybridized to sample DNA immobilized on nylon membrane. The subsequent binding of streptavidin-horseradish peroxidase to the bound probe allows for chemiluminescent detection using a luminol-based reagent and X-ray film. Less than 150 pg of human DNA can easily be detected with a 15 minute exposure. The entire procedure can be performed in 1.5 hours. Microgram quantities of nonhuman DNA have been tested and the results indicate very high specificity for human DNA. The data on film can be scanned into a computer and a commercially available program can be used to create a standard curve where DNA quantity is plotted against the mean density of each slot blot signal. The methods described can also be applied to the very sensitive determination of quantity and quality (size) of DNA on Southern blots. The high sensitivity of this quantitation method requires the consumption of only a fraction of sample for analysis. Determination of DNA quantity is necessary for RFLP and many PCR-based tests where optimal results are obtained only with a relatively narrow range of DNA quantities. The specificity of this quantitation method for human DNA will be useful for the analysis of samples that may also contain bacterial or other non-human DNA, for example forensic evidence samples, ancient DNA samples, or clinical samples.

  5. Quantitation of mRNA levels of steroid 5alpha-reductase isozymes: a novel method that combines quantitative RT-PCR and capillary electrophoresis.

    PubMed

    Torres, Jesús M; Ortega, Esperanza

    2004-01-01

    A novel, accurate, rapid and modestly labor-intensive method has been developed to quantitate specific mRNA species by reverse transcription-polymerase chain reaction (RT-PCR). This strategy combines the high degree of specificity of competitive PCR with the sensitivity of laser-induced fluorescence capillary electrophoresis (LIF-CE). The specific target mRNA and a mimic DNA fragment, used as an internal standard (IS), were co-amplified in a single reaction in which the same primers are used. The amount of mRNA was then quantitated by extrapolation from the standard curve generated with the internal standard. PCR primers were designed to amplify both a 185 bp fragment of the target cDNA for steroid 5alpha-reductase 1 (5alpha-R1) and a 192 bp fragment of the target cDNA for steroid 5alpha-reductase type 2 (5alpha-R2). The 5' forward primers were end-labeled with 6-carboxy-fluorescein (6-FAM). Two synthetic internal standard DNAs of 300 bp were synthesized from the sequence of plasmid pEGFP-C1. The ratio of fluorescence intensity between amplified products of the target cDNA (185 or 192 bp fragments) and the competitive DNA (300 bp fragment) was determined quantitatively after separation by capillary electrophoresis and fluorescence analysis. The accurate quantitation of low-abundance mRNAs by the present method allows low-level gene expression to be characterized.
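
    The competitive readout described can be sketched as follows: because the target and internal standard (IS) share primers, the CE peak ratio tracks the starting-template ratio, and the target amount can be read off where the fitted log-ratio curve crosses zero. All values below are synthetic:

    ```python
    import numpy as np

    # Dilution series of known IS inputs co-amplified with a fixed target.
    is_input = np.array([0.1, 0.3, 1.0, 3.0, 10.0])   # attomol IS per reaction
    ratio = np.array([8.2, 2.9, 0.95, 0.33, 0.11])    # target/IS CE peak ratios

    # Fit log(ratio) vs log(IS input); ratio = 1 where IS input = target amount.
    slope, intercept = np.polyfit(np.log10(is_input), np.log10(ratio), 1)
    target = 10 ** (-intercept / slope)
    print(f"estimated target ~ {target:.2f} attomol")
    ```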

  6. A new noninvasive method for the accurate and precise assessment of varicose vein diameters.

    PubMed

    Baldassarre, Damiano; Pustina, Linda; Castelnuovo, Samuela; Bondioli, Alighiero; Carlà, Matteo; Sirtori, Cesare R

    2003-01-01

    The feasibility and reproducibility of a new ultrasonic method for the direct assessment of maximal varicose vein diameter (VVD) were evaluated. A study was also performed to demonstrate the capacity of the method to detect changes in venous diameter induced by a pharmacologic treatment. Patients with varicose vein disease were recruited. A method that allows the precise positioning of patient and transducer and performance of scans in a gel-bath was developed. Maximal VVD was recorded both in the standing and supine positions. The intraassay reproducibility was determined by replicate scans made within 15 minutes in both positions. The interobserver variability was assessed by comparing VVDs measured during the first phase baseline examination with those obtained during baseline examinations in the second phase of the study. The error in reproducibility of VVD determinations was 5.3% when diameters were evaluated in the standing position and 6.4% when assessed in the supine position. The intramethod agreement was high, with a bias between readings of 0.06 +/- 0.18 mm and of -0.02 +/- 0.19 mm, respectively, in standing and supine positions. Correlation coefficients were better than 0.99 in both positions. The method appears to be sensitive enough to detect small changes in VVDs induced by treatments. The proposed technique provides a tool of potential valid use in the detection and in vivo monitoring of VVD changes in patients with varicose vein disease. The method offers an innovative approach to obtain a quantitative assessment of varicose vein progression and of treatment effects, thus providing a basis for epidemiologic surveys.

  7. A Quantitative Method for Microtubule Analysis in Fluorescence Images.

    PubMed

    Lan, Xiaodong; Li, Lingfei; Hu, Jiongyu; Zhang, Qiong; Dang, Yongming; Huang, Yuesheng

    2015-12-01

    Microtubule analysis is of significant value for a better understanding of normal and pathological cellular processes. Although immunofluorescence microscopic techniques have proven useful in the study of microtubules, comparative results commonly rely on a descriptive and subjective visual analysis. We developed an objective and quantitative method based on image processing and analysis of fluorescently labeled microtubular patterns in cultured cells. We used a multi-parameter approach by analyzing four quantifiable characteristics to compose our quantitative feature set. Then we interpreted specific changes in the parameters and revealed the contribution of each feature set using principal component analysis. In addition, we verified that different treatment groups could be clearly discriminated using principal components of the multi-parameter model. High predictive accuracy of four commonly used multi-classification methods confirmed our method. These results demonstrated the effectiveness and efficiency of our method in the analysis of microtubules in fluorescence images. Application of the analytical methods presented here provides information concerning the organization and modification of microtubules, and could aid in the further understanding of structural and functional aspects of microtubules under normal and pathological conditions.

  8. A Weight-Averaged Interpolation Method for Coupling Time-Accurate Rarefied and Continuum Flows

    NASA Astrophysics Data System (ADS)

    Diaz, Steven William

    A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point-particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward facing step; all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity, and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research. It has the
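
    The core interpolation rule described, weights equal to the inverse linear distance between each simulated molecule and the node, reduces to a few lines; a restriction to the set of molecules surrounding each node could be added, and all values here are illustrative:

    ```python
    import numpy as np

    def node_property(node_xyz, particle_xyz, particle_vals, eps=1e-12):
        """Weight-averaged interpolation of Lagrangian particle values onto
        an Eulerian node, with weights 1/distance as in the method above.
        eps avoids division by zero for coincident particles."""
        d = np.linalg.norm(particle_xyz - node_xyz, axis=1)
        w = 1.0 / (d + eps)
        return np.sum(w * particle_vals) / np.sum(w)

    rng = np.random.default_rng(3)
    parts = rng.random((500, 3))                 # simulated molecule positions
    vals = parts[:, 0] * 2.0 + 1.0               # toy macroscopic property
    print(node_property(np.array([0.5, 0.5, 0.5]), parts, vals))
    ```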

  9. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Eiseman, Peter R.

    1990-01-01

    A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.

  10. Adaptive and accurate color edge extraction method for one-shot shape acquisition

    NASA Astrophysics Data System (ADS)

    Yin, Wei; Cheng, Xiaosheng; Cui, Haihua; Li, Dawei; Zhou, Lei

    2016-09-01

    This paper presents an approach to extract accurate color edge information using encoded patterns in hue, saturation, and intensity (HSI) color space. The method is applied to one-shot shape acquisition. Theoretical analysis shows that the hue transition between primary and secondary colors at a color edge arises from light interference and diffraction. We set up a color transition model to describe the hue transition at an edge and then define the segmenting position of two stripes. By setting up an adaptive HSI color space, the colors of the stripes and subpixel edges are obtained precisely, without a dark laboratory environment, using a low-cost processing algorithm. Since this method places no constraints on the colors of neighboring stripes, encoding is an easy procedure. The experimental results show that the edges of dense modulation patterns can be obtained under complicated environmental illumination, and the precision ensures that the three-dimensional shape of the object is obtained reliably from only one image.
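
    The hue-transition idea can be illustrated on a synthetic scan line: across a blurred boundary between two stripe colours, hue passes monotonically between the stripe hues, and the crossing of the mid-hue gives a subpixel edge estimate. This is a sketch of the general principle, not the paper's algorithm:

    ```python
    import numpy as np
    import colorsys

    red, green = np.array([1.0, 0.1, 0.1]), np.array([0.1, 1.0, 0.1])
    xs = np.arange(40)
    blend = np.clip((xs - 19.3) / 3.0, 0, 1)[:, None]   # transition centred ~20.8 px
    line = (1 - blend) * red + blend * green            # blurred colour ramp

    hue = np.array([colorsys.rgb_to_hsv(*px)[0] for px in line])
    h_mid = 0.5 * (hue[0] + hue[-1])                    # midpoint of the two hues
    i = int(np.argmax(hue >= h_mid))                    # first sample past mid-hue
    # linear sub-pixel interpolation between samples i-1 and i
    edge = (i - 1) + (h_mid - hue[i - 1]) / (hue[i] - hue[i - 1])
    print(f"estimated edge at {edge:.2f} px")
    ```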

  11. Efficient and Accurate Multiple-Phenotype Regression Method for High Dimensional Data Considering Population Structure.

    PubMed

    Joo, Jong Wha J; Kang, Eun Yong; Org, Elin; Furlotte, Nick; Parks, Brian; Hormozdiari, Farhad; Lusis, Aldons J; Eskin, Eleazar

    2016-12-01

    A typical genome-wide association study tests correlation between a single phenotype and each genotype one at a time. However, single-phenotype analysis might miss unmeasured aspects of complex biological networks. Analyzing many phenotypes simultaneously may increase the power to capture these unmeasured aspects and detect more variants. Several multivariate approaches aim to detect variants related to more than one phenotype, but these current approaches do not consider the effects of population structure. As a result, these approaches may result in a significant amount of false positive identifications. Here, we introduce a new methodology, referred to as GAMMA for generalized analysis of molecular variance for mixed-model analysis, which is capable of simultaneously analyzing many phenotypes and correcting for population structure. In a simulated study using data implanted with true genetic effects, GAMMA accurately identifies these true effects without producing false positives induced by population structure. In simulations with this data, GAMMA is an improvement over other methods which either fail to detect true effects or produce many false positive identifications. We further apply our method to genetic studies of yeast and gut microbiome from mice and show that GAMMA identifies several variants that are likely to have true biological mechanisms.

  12. An Accurate Method for Measuring Airplane-Borne Conformal Antenna's Radar Cross Section

    NASA Astrophysics Data System (ADS)

    Guo, Shuxia; Zhang, Lei; Wang, Yafeng; Hu, Chufeng

    2016-09-01

    The airplane-borne conformal antenna attaches tightly to the airplane skin, so conventional measurement methods cannot determine its contribution to the radar cross section (RCS). This paper uses 2D microwave imaging to isolate and extract the reflectivity distribution of the airplane-borne conformal antenna. It obtains the 2D spatial spectrum of the conformal antenna through the wave spectral transform between the 2D spatial image and the 2D spatial spectrum. After interpolation from the rectangular coordinate domain to the polar coordinate domain, spectral-domain data describing the variation of the conformal antenna's scattering with frequency and angle are obtained. The experimental results show that the measurement method proposed in this paper greatly enhances the accuracy of the airplane-borne conformal antenna's RCS measurement, essentially eliminates the influence of the airplane skin, and more accurately reveals the airplane-borne conformal antenna's RCS scattering properties.

  13. MASCG: Multi-Atlas Segmentation Constrained Graph method for accurate segmentation of hip CT images.

    PubMed

    Chu, Chengwen; Bai, Junjie; Wu, Xiaodong; Zheng, Guoyan

    2015-12-01

    This paper addresses the issue of fully automatic segmentation of a hip CT image with the goal to preserve the joint structure for clinical applications in hip disease diagnosis and treatment. For this purpose, we propose a Multi-Atlas Segmentation Constrained Graph (MASCG) method. The MASCG method uses multi-atlas based mesh fusion results to initialize a bone sheetness based multi-label graph cut for an accurate hip CT segmentation which has the inherent advantage of automatic separation of the pelvic region from the bilateral proximal femoral regions. We then introduce a graph cut constrained graph search algorithm to further improve the segmentation accuracy around the bilateral hip joint regions. Taking manual segmentation as the ground truth, we evaluated the present approach on 30 hip CT images (60 hips) with a 15-fold cross validation. When the present approach was compared to manual segmentation, an average surface distance error of 0.30 mm, 0.29 mm, and 0.30 mm was found for the pelvis, the left proximal femur, and the right proximal femur, respectively. A further look at the bilateral hip joint regions demonstrated an average surface distance error of 0.16 mm, 0.21 mm and 0.20 mm for the acetabulum, the left femoral head, and the right femoral head, respectively.

  14. Accurate computation of surface stresses and forces with immersed boundary methods

    NASA Astrophysics Data System (ADS)

    Goza, Andres; Liska, Sebastian; Morley, Benjamin; Colonius, Tim

    2016-09-01

    Many immersed boundary methods solve for surface stresses that impose the velocity boundary conditions on an immersed body. These surface stresses may contain spurious oscillations that make them ill-suited for representing the physical surface stresses on the body. Moreover, these inaccurate stresses often lead to unphysical oscillations in the history of integrated surface forces such as the coefficient of lift. While the errors in the surface stresses and forces do not necessarily affect the convergence of the velocity field, it is desirable, especially in fluid-structure interaction problems, to obtain smooth and convergent stress distributions on the surface. To this end, we show that the equation for the surface stresses is an integral equation of the first kind whose ill-posedness is the source of spurious oscillations in the stresses. We also demonstrate that for sufficiently smooth delta functions, the oscillations may be filtered out to obtain physically accurate surface stresses. The filtering is applied as a post-processing procedure, so that the convergence of the velocity field is unaffected. We demonstrate the efficacy of the method by computing stresses and forces that converge to the physical stresses and forces for several test problems.

  15. A spectral element method with adaptive segmentation for accurately simulating extracellular electrical stimulation of neurons.

    PubMed

    Eiber, Calvin D; Dokos, Socrates; Lovell, Nigel H; Suaning, Gregg J

    2016-08-19

    The capacity to quickly and accurately simulate extracellular stimulation of neurons is essential to the design of next-generation neural prostheses. Existing platforms for simulating neurons are largely based on finite-difference techniques; due to the complex geometries involved, the more powerful spectral or differential quadrature techniques cannot be applied directly. This paper presents a mathematical basis for the application of a spectral element method to the problem of simulating the extracellular stimulation of retinal neurons, which is readily extensible to neural fibers of any kind. The activating function formalism is extended to arbitrary neuron geometries, and a segmentation method to guarantee an appropriate choice of collocation points is presented. Differential quadrature may then be applied to efficiently solve the resulting cable equations. The capacity for this model to simulate action potentials propagating through branching structures and to predict minimum extracellular stimulation thresholds for individual neurons is demonstrated. The presented model is validated against published values for extracellular stimulation threshold and conduction velocity for realistic physiological parameter values. This model suggests that convoluted axon geometries are more readily activated by extracellular stimulation than linear axon geometries, which may have ramifications for the design of neural prostheses.
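
    The activating function formalism mentioned above reduces, for a straight fiber, to the second spatial difference of the extracellular potential along the axon (Rattay's classic form). A minimal sketch for a point-source electrode over a straight axon, with assumed geometry, current, and conductivity:

    ```python
    import numpy as np

    sigma = 0.3                 # extracellular conductivity, S/m (assumed)
    I = -100e-6                 # cathodic stimulus current, A (assumed)
    dx = 50e-6                  # spatial sampling step along the axon, m
    x = np.arange(-2e-3, 2e-3, dx)           # axon positions, m
    h = 0.5e-3                               # electrode height above axon, m
    r = np.sqrt(x**2 + h**2)
    Ve = I / (4 * np.pi * sigma * r)         # point-source potential, V

    # Activating function ~ second spatial difference of Ve; positive lobes
    # mark likely sites of depolarisation (here, under the cathode).
    f = (Ve[:-2] - 2*Ve[1:-1] + Ve[2:]) / dx**2
    print(f"peak activating function at x = {x[1:-1][np.argmax(f)]*1e3:.2f} mm")
    ```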

  16. Methods for accurate cold-chain temperature monitoring using digital data-logger thermometers

    NASA Astrophysics Data System (ADS)

    Chojnacky, M. J.; Miller, W. M.; Strouse, G. F.

    2013-09-01

    Complete and accurate records of vaccine temperature history are vital to preserving drug potency and patient safety. However, previously published vaccine storage and handling guidelines have failed to indicate a need for continuous temperature monitoring in vaccine storage refrigerators. We evaluated the performance of seven digital data logger models as candidates for continuous temperature monitoring of refrigerated vaccines, based on the following criteria: out-of-box performance and compliance with manufacturer accuracy specifications over the range of use; measurement stability over extended, continuous use; proper setup in a vaccine storage refrigerator so that measurements reflect liquid vaccine temperatures; and practical methods for end-user validation and establishing metrological traceability. Data loggers were tested using ice melting point checks and by comparison to calibrated thermocouples to characterize performance over 0 °C to 10 °C. We also monitored logger performance in a study designed to replicate the range of vaccine storage and environmental conditions encountered at provider offices. Based on the results of this study, the Centers for Disease Control released new guidelines on proper methods for storage, handling, and temperature monitoring of vaccines for participants in its federally-funded Vaccines for Children Program. Improved temperature monitoring practices will ultimately decrease waste from damaged vaccines, improve consumer confidence, and increase effective inoculation rates.

  17. New Distributed Multipole Methods for Accurate Electrostatics for Large-Scale Biomolecular Simulations

    NASA Astrophysics Data System (ADS)

    Sagui, Celeste

    2006-03-01

    An accurate and numerically efficient treatment of electrostatics is essential for biomolecular simulations, as it stabilizes much of the delicate 3-d structure associated with biomolecules. Currently, force fields such as AMBER and CHARMM assign ``partial charges'' to every atom in a simulation in order to model the interatomic electrostatic forces, so that the calculation of the electrostatics rapidly becomes the computational bottleneck in large-scale simulations. There are two main issues associated with the current treatment of classical electrostatics: (i) how does one eliminate the artifacts associated with the point charges (e.g., the underdetermined nature of the current RESP fitting procedure for large, flexible molecules) used in the force fields in a physically meaningful way? (ii) how does one efficiently simulate the very costly long-range electrostatic interactions? Recently, we have dealt with both of these challenges as follows. In order to improve the description of the molecular electrostatic potentials (MEPs), a new distributed multipole analysis based on localized functions -- Wannier, Boys, and Edmiston-Ruedenberg -- was introduced, which allows for a first-principles calculation of the partial charges and multipoles. Through a suitable generalization of the particle mesh Ewald (PME) and multigrid methods, one can treat electrostatic multipoles all the way to hexadecapoles without prohibitive extra cost. The importance of these methods for large-scale simulations will be discussed and exemplified by simulations of polarizable DNA models.

  18. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    DOEpatents

    Grossman, Mark W.; George, William A.

    1987-01-01

    A process for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method involves dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution comprised of glacial acetic acid and H2O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg2Cl2. The method involves dissolving a precise amount of Hg2Cl2 in an electrolyte solution comprised of concentrated HCl and H2O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required pre-determined quantity of Hg.
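
    The quantitative link between plated mercury and charge passed is Faraday's law, m = MQ/(nF). The sketch below computes the charge needed for a target Hg mass under the two chemistries described: two electrons per Hg for the mercuric route (Hg2+ + 2e− → Hg) and one electron per Hg for the mercurous route (Hg2²⁺ + 2e− → 2Hg). The current used in the example is illustrative:

    ```python
    F = 96485.33          # Faraday constant, C/mol
    M_HG = 200.59         # molar mass of Hg, g/mol

    def charge_for_mass(mass_g, n):
        """Coulombs required to plate mass_g of Hg at n electrons per atom."""
        return mass_g / M_HG * n * F

    q = charge_for_mass(0.010, n=2)          # 10 mg Hg from a mercuric solution
    print(f"{q:.1f} C, i.e. {q/0.050:.0f} s at 50 mA")
    ```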

  19. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    DOEpatents

    Grossman, M.W.; George, W.A.

    1987-07-07

    A process is described for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method involves dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution comprised of glacial acetic acid and H2O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg2Cl2. The method involves dissolving a precise amount of Hg2Cl2 in an electrolyte solution comprised of concentrated HCl and H2O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required pre-determined quantity of Hg. 1 fig.

  20. Open lung biopsy: a safe, reliable and accurate method for diagnosis in diffuse lung disease.

    PubMed

    Shah, S S; Tsang, V; Goldstraw, P

    1992-01-01

    The ideal method for obtaining lung tissue for diagnosis should provide high diagnostic yield with low morbidity and mortality. We reviewed all 432 patients (mean age 55 years) who underwent an open lung biopsy at this hospital over a 10-year period. Twenty-four patients (5.5%) were immunocompromised. One hundred and twenty-five patients were on steroid therapy at the time of operation. Open lung biopsy provided a firm diagnosis in 410 cases overall (94.9%) and in 20 out of 24 patients in the immunocompromised group (83.3%). The commonest diagnosis was cryptogenic fibrosing alveolitis (173 patients). Twenty-two patients (5.1%) suffered complications following the procedure: wound infection 11 patients, pneumothorax 9 patients and haemothorax 1 patient. Thirteen patients (3.0%) died following open lung biopsy, but in only 1 patient was the death attributable to the procedure itself. We conclude that open lung biopsy is an accurate and safe method for establishing a diagnosis in diffuse lung disease with a high yield and minimal risk.

  1. Accurate gradient approximation for complex interface problems in 3D by an improved coupling interface method

    SciTech Connect

    Shu, Yu-Chen; Chern, I-Liang; Chang, Chien C.

    2014-10-15

    Most elliptic interface solvers become complicated for complex interface problems at those “exceptional points” where there are not enough neighboring interior points for high order interpolation. Such complication increases especially in three dimensions. Usually, the solvers are thus reduced to low order accuracy. In this paper, we classify these exceptional points and propose two recipes to maintain order of accuracy there, aiming at improving the previous coupling interface method [26]. Yet the idea is also applicable to other interface solvers. The main idea is to have at least first order approximations for second order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface. The original state is recovered by a post-processing step using nearby states and jump conditions. The choice of recipes is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computed domain. Numerical examples are provided to illustrate the second order accuracy of the presently proposed method in approximating the gradients of the original states for some complex interfaces which we had tested previously in two and three dimensions, and for a real molecule (1D63), which is double-helix shaped and composed of hundreds of atoms.

  2. Methods for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, M.R.; Bland, R.

    2000-01-01

    Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of an acoustic Doppler current discharge measurement system to calibrate the index velocity measurements. The methods used to calibrate (rate) the index velocity to the channel velocity measured with the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity must first be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, the discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data, consisting of ultrasonic velocity meter index velocities and concurrent acoustic Doppler discharge measurements, were collected during three time periods. Two sets of data were collected during spring tides (monthly maximum tidal currents) and one set during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three
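
    A minimal sketch of the processing chain described above, under simplifying assumptions (synthetic data, a constant cross-sectional area, and an illustrative 30 h Butterworth cutoff; the USGS rating and filtering choices differ in detail): rate the index velocity against concurrent channel velocities, compute instantaneous discharge, and low-pass filter out the tides.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    dt_hours = 0.25                                  # 15 min samples (assumed)
    t = np.arange(0, 60 * 24, dt_hours)              # 60 days of record

    rng = np.random.default_rng(0)
    # Synthetic index velocity: semidiurnal tide + small residual flow + noise
    v_index = (0.4 * np.sin(2 * np.pi * t / 12.42) + 0.05
               + 0.02 * rng.standard_normal(t.size))

    # 1) Rating: regress "ADCP" mean channel velocity on index velocity
    v_channel_cal = 1.15 * v_index + 0.03 + 0.02 * rng.standard_normal(t.size)
    slope, intercept = np.polyfit(v_index, v_channel_cal, 1)

    # 2) Instantaneous discharge Q = A * V_mean (constant area for brevity)
    area_m2 = 500.0
    q = area_m2 * (slope * v_index + intercept)

    # 3) Low-pass filter (~30 h cutoff) to remove the tidal oscillations
    cutoff = 1.0 / 30.0                              # cycles per hour
    nyquist = 0.5 / dt_hours
    b, a = butter(4, cutoff / nyquist)
    q_net = filtfilt(b, a, q)

    print(f"mean net discharge: {q_net.mean():.1f} m^3/s")
    ```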

  3. Can MRI accurately detect pilon articular malreduction? A quantitative comparison between CT and 3T MRI bone models

    PubMed Central

    Radzi, Shairah; Dlaska, Constantin Edmond; Cowin, Gary; Robinson, Mark; Pratap, Jit; Schuetz, Michael Andreas; Mishra, Sanjay

    2016-01-01

    Background Pilon fracture reduction is a challenging surgery. Radiographs are commonly used to assess the quality of reduction, but are limited in revealing the remaining bone incongruities. The study aimed to develop a method for quantifying articular malreductions using 3D computed tomography (CT) and magnetic resonance imaging (MRI) models. Methods CT and MRI data were acquired using three pairs of human cadaveric ankle specimens. Common tibial pilon fractures were simulated by performing osteotomies on the ankle specimens. Five of the created fractures [three AO type-B (43-B1) and two AO type-C (43-C1) fractures] were then reduced and stabilised using titanium implants, then rescanned. All datasets were reconstructed into CT and MRI models, and were analysed with regard to intra-articular steps and gaps, surface deviations, malrotations and maltranslations of the bone fragments. Results Initial results reveal that type B fracture CT and MRI models differed by ~0.2 mm (step), ~0.18 mm (surface deviation), ~0.56° (rotation) and ~0.4 mm (translation). Type C fracture MRI models showed metal artefacts extending to the articular surface and were thus unsuitable for analysis. Type C fracture CT models differed from their CT and MRI contralateral models by ~0.15 mm (surface deviation), ~1.63° (rotation) and ~0.4 mm (translation). Conclusions Type B fracture MRI models were comparable to CT and may potentially be used for the postoperative assessment of articular reduction on a case-to-case basis. PMID:28090442

  4. Quantitative Evaluation of the Total Magnetic Moments of Colloidal Magnetic Nanoparticles: A Kinetics-based Method.

    PubMed

    Liu, Haiyi; Sun, Jianfei; Wang, Haoyao; Wang, Peng; Song, Lina; Li, Yang; Chen, Bo; Zhang, Yu; Gu, Ning

    2015-06-08

    A kinetics-based method is proposed to quantitatively characterize the collective magnetization of colloidal magnetic nanoparticles. The method is based on the relationship between the magnetic force on a colloidal droplet and the movement of the droplet under a gradient magnetic field. Through computational analysis of kinetic parameters such as displacement, velocity, and acceleration, the magnetization of colloidal magnetic nanoparticles can be calculated. In our experiments, the values measured using our method exhibited a better linear correlation with magnetothermal heating than those obtained using a vibrating sample magnetometer or a magnetic balance. This finding indicates that the method may be more suitable for evaluating the collective magnetism of colloidal magnetic nanoparticles under low magnetic fields than the commonly used methods. Accurate evaluation of the magnetic properties of colloidal nanoparticles is of great importance for the standardization of magnetic nanomaterials and for their practical application in biomedicine.
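
    A deliberately simplified sketch of the underlying force balance (all numbers are hypothetical; the published method tracks full displacement/velocity/acceleration profiles and must also account for drag in the carrier fluid): equating the magnetic force M·(dB/dz) on the droplet with m·a gives the total moment.

    ```python
    # Idealized sketch of the kinetics-based idea (illustrative numbers only;
    # drag is neglected here). For a droplet of mass m accelerating at a in a
    # field gradient dB/dz, the balance F = M_total * dB/dz = m * a yields the
    # total magnetic moment of the colloid contained in the droplet.

    droplet_mass_kg = 2.0e-5        # 20 mg droplet (assumed)
    accel_m_s2 = 1.5e-2             # from tracked displacement data (assumed)
    grad_B_T_per_m = 10.0           # field gradient along travel axis (assumed)

    moment_A_m2 = droplet_mass_kg * accel_m_s2 / grad_B_T_per_m
    print(f"total magnetic moment: {moment_A_m2:.2e} A*m^2")
    ```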

  5. Quantitative Phase Analysis by the Rietveld Method for Forensic Science.

    PubMed

    Deng, Fei; Lin, Xiaodong; He, Yonghong; Li, Shu; Zi, Run; Lai, Shijun

    2015-07-01

    Quantitative phase analysis (QPA) is helpful for determining the type attribute of an object because it presents the content of its constituents. QPA by the Rietveld method requires neither the measurement of calibration data nor the use of an internal standard; however, the approximate crystal structure of each phase in a mixture is necessary. In this study, 8 synthetic mixtures composed of potassium nitrate and sulfur were analyzed by the Rietveld QPA method. The Rietveld refinement was accomplished with the Material Analysis Using Diffraction (MAUD) program and evaluated by three agreement indices. Results showed that Rietveld QPA yielded precise results, with errors generally less than 2.0% absolute. In addition, a criminal case which was solved with the help of the Rietveld QPA method is also introduced. This method will allow forensic investigators to acquire detailed information about material evidence, which can point out directions for case detection and court proceedings.

  6. A New Method for Accurate Treatment of Flow Equations in Cylindrical Coordinates Using Series Expansions

    NASA Technical Reports Server (NTRS)

    Constantinescu, G.S.; Lele, S. K.

    2000-01-01

    The motivation for this work is the ongoing effort at the Center for Turbulence Research (CTR) to use large eddy simulation (LES) techniques to calculate the noise radiated by jet engines. The focus on engine exhaust noise reduction is motivated by the fact that a significant reduction has been achieved over the last decade in the other main sources of acoustic emissions of jet engines, such as fan and turbomachinery noise, which gives increased priority to jet noise. To be able to propose methods to reduce jet noise based on the results of numerical simulations, one first has to be able to accurately predict the spatio-temporal distribution of the noise sources in the jet. Though a great deal of understanding of the fundamental turbulence mechanisms in high-speed jets has been obtained from direct numerical simulations (DNS) at low Reynolds numbers, LES seems to be the only realistic tool available for obtaining the near-field information required to estimate the acoustic radiation of turbulent compressible engine exhaust jets. The quality of jet-noise predictions is determined by the accuracy of the numerical method, which has to capture the wide range of pressure fluctuations associated with the turbulence in the jet and with the resulting radiated noise, and by the boundary condition treatment and the quality of the mesh. Higher Reynolds numbers and coarser grids in turn put a higher burden on the robustness and accuracy of the numerical method used in this kind of jet LES simulation. As these calculations are often done in cylindrical coordinates, one of the most important requirements for the numerical method is to provide a flow solution that is not contaminated by numerical artifacts; the coordinate singularity is known to be a source of such artifacts. In the present work we use 6th order Pade schemes in the non-periodic directions to discretize the full compressible flow equations. It turns out that the quality of jet-noise predictions
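
    As a hedged illustration of the spatial discretization named above, the sketch below implements the classic sixth-order compact (Pade) first-derivative scheme of Lele on a periodic grid; the actual solver uses non-periodic boundary closures and a special treatment of the coordinate singularity, which this sketch does not attempt.

    ```python
    import numpy as np

    # Sixth-order compact (Pade) first derivative on a PERIODIC grid
    # (interior scheme only; boundary closures omitted):
    #   (1/3) f'_{i-1} + f'_i + (1/3) f'_{i+1}
    #     = (14/9)(f_{i+1} - f_{i-1})/(2h) + (1/9)(f_{i+2} - f_{i-2})/(4h)

    def pade6_deriv_periodic(f, h):
        n = f.size
        alpha, a, b = 1.0 / 3.0, 14.0 / 9.0, 1.0 / 9.0
        # Cyclic tridiagonal left-hand-side matrix
        A = np.eye(n) + alpha * (np.eye(n, k=1) + np.eye(n, k=-1))
        A[0, -1] = A[-1, 0] = alpha
        rhs = (a * (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)
               + b * (np.roll(f, -2) - np.roll(f, 2)) / (4 * h))
        return np.linalg.solve(A, rhs)

    n = 64
    x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    h = x[1] - x[0]
    err = np.max(np.abs(pade6_deriv_periodic(np.sin(x), h) - np.cos(x)))
    print(f"max error on {n} points: {err:.2e}")  # ~1e-9: sixth-order accurate
    ```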

  7. Quantitative, Qualitative and Geospatial Methods to Characterize HIV Risk Environments

    PubMed Central

    Conners, Erin E.; West, Brooke S.; Roth, Alexis M.; Meckel-Parker, Kristen G.; Kwan, Mei-Po; Magis-Rodriguez, Carlos; Staines-Orozco, Hugo; Clapp, John D.; Brouwer, Kimberly C.

    2016-01-01

    Increasingly, ‘place’, including physical and geographical characteristics as well as social meanings, is recognized as an important factor driving individual and community health risks. This is especially true among marginalized populations in low and middle income countries (LMIC), whose environments may also be more difficult to study using traditional methods. In the NIH-funded longitudinal study Mapa de Salud, we employed a novel approach to exploring the risk environment of female sex workers (FSWs) in two Mexico/U.S. border cities, Tijuana and Ciudad Juárez. In this paper we describe the development, implementation, and feasibility of a mix of quantitative and qualitative tools used to capture the HIV risk environments of FSWs in an LMIC setting. The methods were: 1) Participatory mapping; 2) Quantitative interviews; 3) Sex work venue field observation; 4) Time-location-activity diaries; 5) In-depth interviews about daily activity spaces. We found that the mixed-methodology outlined was both feasible to implement and acceptable to participants. These methods can generate geospatial data to assess the role of the environment on drug and sexual risk behaviors among high risk populations. Additionally, the adaptation of existing methods for marginalized populations in resource constrained contexts provides new opportunities for informing public health interventions. PMID:27191846

  8. Quantitative, Qualitative and Geospatial Methods to Characterize HIV Risk Environments.

    PubMed

    Conners, Erin E; West, Brooke S; Roth, Alexis M; Meckel-Parker, Kristen G; Kwan, Mei-Po; Magis-Rodriguez, Carlos; Staines-Orozco, Hugo; Clapp, John D; Brouwer, Kimberly C

    2016-01-01

    Increasingly, 'place', including physical and geographical characteristics as well as social meanings, is recognized as an important factor driving individual and community health risks. This is especially true among marginalized populations in low and middle income countries (LMIC), whose environments may also be more difficult to study using traditional methods. In the NIH-funded longitudinal study Mapa de Salud, we employed a novel approach to exploring the risk environment of female sex workers (FSWs) in two Mexico/U.S. border cities, Tijuana and Ciudad Juárez. In this paper we describe the development, implementation, and feasibility of a mix of quantitative and qualitative tools used to capture the HIV risk environments of FSWs in an LMIC setting. The methods were: 1) Participatory mapping; 2) Quantitative interviews; 3) Sex work venue field observation; 4) Time-location-activity diaries; 5) In-depth interviews about daily activity spaces. We found that the mixed-methodology outlined was both feasible to implement and acceptable to participants. These methods can generate geospatial data to assess the role of the environment on drug and sexual risk behaviors among high risk populations. Additionally, the adaptation of existing methods for marginalized populations in resource constrained contexts provides new opportunities for informing public health interventions.

  9. Quantitative analytical method to evaluate the metabolism of vitamin D.

    PubMed

    Mena-Bravo, A; Ferreiro-Vera, C; Priego-Capote, F; Maestro, M A; Mouriño, A; Quesada-Gómez, J M; Luque de Castro, M D

    2015-03-10

    A method for the quantitative analysis of vitamin D (both D2 and D3) and its main metabolites - monohydroxylated vitamin D (25-hydroxyvitamin D2 and 25-hydroxyvitamin D3) and dihydroxylated metabolites (1,25-dihydroxyvitamin D2, 1,25-dihydroxyvitamin D3 and 24,25-dihydroxyvitamin D3) - in human serum is reported here. The method is based on direct analysis of serum by an automated platform involving on-line coupling of a solid-phase extraction workstation to a liquid chromatograph-tandem mass spectrometer. Detection of the seven analytes was carried out in selected reaction monitoring (SRM) mode, and quantitative analysis was supported by the use of stable isotopically labeled internal standards (SIL-ISs). The detection limits were between 0.3 and 75 pg/mL for the target compounds, while precision (expressed as relative standard deviation) was below 13.0% for between-day variability. The method was externally validated according to the vitamin D External Quality Assurance Scheme (DEQAS) through the analysis of ten serum samples provided by this organisation. The analytical features of the method support its applicability in nutritional and clinical studies aimed at elucidating the role of vitamin D metabolism.

  10. A New Kinetic Spectrophotometric Method for the Quantitation of Amorolfine.

    PubMed

    Soto, César; Poza, Cristian; Contreras, David; Yáñez, Jorge; Nacaratte, Fallon; Toral, M Inés

    2017-01-01

    Amorolfine (AOF) is a compound with fungicide activity based on the dual inhibition of growth of the fungal cell membrane, the biosynthesis and accumulation of sterols, and the reduction of ergosterol. In this work, a sensitive kinetic spectrophotometric method for AOF quantitation was developed, based on the oxidation of AOF by KMnO4 at a fixed time of 30 min, at alkaline pH and controlled ionic strength. Changes in absorbance at 610 nm were used as the criterion of oxidation progress. In order to maximize sensitivity, the experimental reaction parameters were carefully studied via factorial screening and optimized by a multivariate method. The linearity, intraday and interday assay precision, and accuracy were determined. The absorbance-concentration plot for spiked tap water samples was rectilinear over the range 7.56 × 10⁻⁶ to 3.22 × 10⁻⁵ mol L⁻¹, with detection and quantitation limits of 2.49 × 10⁻⁶ mol L⁻¹ and 7.56 × 10⁻⁶ mol L⁻¹, respectively. The proposed method was successfully validated for the determination of the drug in spiked tap water samples, with percentage recoveries of 94.0-105.0%. The method is simple and does not require expensive instruments or complicated extraction steps for the reaction product.

  11. A New Kinetic Spectrophotometric Method for the Quantitation of Amorolfine

    PubMed Central

    Poza, Cristian; Contreras, David; Yáñez, Jorge; Nacaratte, Fallon; Toral, M. Inés

    2017-01-01

    Amorolfine (AOF) is a compound with fungicide activity based on the dual inhibition of growth of the fungal cell membrane, the biosynthesis and accumulation of sterols, and the reduction of ergosterol. In this work, a sensitive kinetic spectrophotometric method for AOF quantitation was developed, based on the oxidation of AOF by KMnO4 at a fixed time of 30 min, at alkaline pH and controlled ionic strength. Changes in absorbance at 610 nm were used as the criterion of oxidation progress. In order to maximize sensitivity, the experimental reaction parameters were carefully studied via factorial screening and optimized by a multivariate method. The linearity, intraday and interday assay precision, and accuracy were determined. The absorbance-concentration plot for spiked tap water samples was rectilinear over the range 7.56 × 10−6 to 3.22 × 10−5 mol L−1, with detection and quantitation limits of 2.49 × 10−6 mol L−1 and 7.56 × 10−6 mol L−1, respectively. The proposed method was successfully validated for the determination of the drug in spiked tap water samples, with percentage recoveries of 94.0–105.0%. The method is simple and does not require expensive instruments or complicated extraction steps for the reaction product. PMID:28348920

  12. A new method for quantitative real-time polymerase chain reaction data analysis.

    PubMed

    Rao, Xiayu; Lai, Dejian; Huang, Xuelin

    2013-09-01

    Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantification method that has been extensively used in biological and biomedical fields. The currently used methods for PCR data analysis, including the threshold cycle method and linear and nonlinear model-fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence can hardly be accurate and therefore can distort results. We propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each two consecutive PCR cycles, we subtract the fluorescence in the former cycle from that in the latter cycle, transforming the n cycle raw data into n-1 cycle data. Then, linear regression is applied to the natural logarithm of the transformed data. Finally, PCR amplification efficiencies and the initial DNA molecular numbers are calculated for each reaction. This taking-difference method avoids the error in subtracting an unknown background, and thus it is more accurate and reliable. This method is easy to perform, and this strategy can be extended to all current methods for PCR data analysis.
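
    A minimal sketch of the taking-difference idea on synthetic data: modeling the exponential phase as F_n = F0·E^n + B with constant background B, the consecutive difference d_n = F_{n+1} − F_n = F0(E−1)E^n eliminates B, so a linear regression of ln d_n on n yields the efficiency E from the slope and the initial signal F0 from the intercept (all parameter values below are invented for illustration).

    ```python
    import numpy as np

    # Synthetic exponential-phase fluorescence with constant background B.
    rng = np.random.default_rng(1)
    E_true, F0_true, B = 1.92, 5.0e-4, 100.0
    cycles = np.arange(10, 26)                 # exponential-phase cycles (assumed)
    F = F0_true * E_true**cycles + B + 0.02 * rng.standard_normal(cycles.size)

    # Taking the difference of consecutive cycles removes the background B:
    #   d_n = F_{n+1} - F_n = F0 * (E - 1) * E**n
    d = np.diff(F)                             # n readings -> n-1 differences
    n = cycles[:-1]

    # ln(d_n) is linear in n: slope = ln(E), intercept = ln(F0 * (E - 1))
    slope, intercept = np.polyfit(n, np.log(d), 1)
    E_hat = np.exp(slope)
    F0_hat = np.exp(intercept) / (E_hat - 1.0)

    print(f"estimated efficiency E = {E_hat:.3f}, initial signal F0 = {F0_hat:.2e}")
    ```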

  13. Quantitative cell imaging using single beam phase retrieval method

    NASA Astrophysics Data System (ADS)

    Anand, Arun; Chhaniwal, Vani; Javidi, Bahram

    2011-06-01

    Quantitative three-dimensional imaging of cells can provide important information about their morphology as well as their dynamics, which will be useful in studying their behavior under various conditions. There are several microscopic techniques for imaging unstained, semi-transparent specimens by converting phase information into intensity information. But most quantitative phase contrast imaging techniques are realized either by using interference of the object wavefront with a known reference beam or by using phase shifting interferometry. A two-beam interferometric method is challenging to implement, especially with low coherence sources, and it also requires fine adjustment of the beams to achieve high contrast fringes. In this letter, the development of a single beam phase retrieval microscopy technique for quantitative phase contrast imaging of cells, using multiple intensity samplings of a volume speckle field in the axial direction, is described. Single beam illumination with multiple intensity samplings provides fast convergence and a unique solution for the object wavefront. Three-dimensional thickness profiles of different cells, such as red blood cells and onion skin cells, were reconstructed using this technique with an axial resolution of the order of several nanometers.
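
    A hedged sketch of the general multi-plane phase retrieval loop (not the authors' exact algorithm or parameters): propagate a trial field between the axially separated intensity samplings with the angular spectrum method, enforcing the measured amplitude at each plane on every pass.

    ```python
    import numpy as np

    def angular_spectrum(field, wavelength, dx, dz):
        """Propagate a 2D complex field a distance dz (angular spectrum method)."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        kz = 2 * np.pi * np.sqrt((1 / wavelength**2 - FX**2 - FY**2).astype(complex))
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

    def retrieve_phase(amplitudes, wavelength, dx, dz, n_iter=20):
        """amplitudes: measured sqrt(intensity) images at planes spaced dz apart."""
        field = amplitudes[0].astype(complex)           # start with flat phase
        for _ in range(n_iter):
            for k in range(1, len(amplitudes)):         # forward sweep
                field = angular_spectrum(field, wavelength, dx, dz)
                field = amplitudes[k] * np.exp(1j * np.angle(field))
            for k in range(len(amplitudes) - 2, -1, -1):  # backward sweep
                field = angular_spectrum(field, wavelength, dx, -dz)
                field = amplitudes[k] * np.exp(1j * np.angle(field))
        return np.angle(field)                          # phase at the first plane

    # Usage sketch (parameters illustrative): e.g. 5 intensity samplings,
    # 0.633 um laser, 5 um pixels, planes 1 mm apart:
    # phase = retrieve_phase(amps, wavelength=0.633e-6, dx=5e-6, dz=1e-3)
    ```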

  14. Accurate Ionization Potentials and Electron Affinities of Acceptor Molecules: A Benchmark of GW Methods

    NASA Astrophysics Data System (ADS)

    Marom, Noa; Knight, Joseph; Wang, Xiaopeng; Gallandi, Lukas; Dolgounitcheva, Olga; Ren, Xinguo; Ortiz, Vincent; Rinke, Patrick; Korzdorfer, Thomas

    The performance of different GW methods is assessed for a set of 24 organic acceptors. Errors are evaluated with respect to coupled cluster singles, doubles, and perturbative triples [CCSD(T)] reference data for the vertical ionization potentials (IPs) and electron affinities (EAs), extrapolated to the complete basis set limit. Additional comparisons are made to experimental data, where available. We consider fully self-consistent GW (scGW), partial self-consistency in the Green's function (scGW0), non-self-consistent G0W0 based on several mean-field starting points, and a "beyond GW" second order screened exchange (SOSEX) correction to G0W0. The best performers overall are G0W0 + SOSEX and G0W0 based on an IP-tuned long range corrected hybrid functional, with the former being more accurate for EAs and the latter for IPs. Both provide a balanced treatment of localized vs. delocalized states and valence spectra in good agreement with photoemission spectroscopy (PES) experiments.

  15. Wear characteristics of UHMW polyethylene: a method for accurately measuring extremely low wear rates.

    PubMed

    McKellop, H; Clarke, I C; Markolf, K L; Amstutz, H C

    1978-11-01

    The wear of UHMW polyethylene bearing against 316 stainless steel or cobalt chrome alloy was measured using a 12-channel wear tester especially developed for the evaluation of candidate materials for prosthetic joints. The coefficient of friction and the wear rate were determined as functions of lubricant, contact stress, and metallic surface roughness in tests lasting two to three million cycles, the equivalent of several years' use of a prosthesis. Wear was determined from the weight loss of the polyethylene specimens, corrected for the effect of fluid absorption. The friction and wear processes in blood serum differed markedly from those in saline solution or distilled water. Only serum lubrication produced wear surfaces resembling those observed on removed prostheses. The experimental method provided a very accurate, reproducible measurement of polyethylene wear. The long-term wear rates were proportional to load and sliding distance and were much lower than expected from previously published data. Although the polyethylene wear rate increased with increasing surface roughness, wear was not severe except with very coarse metal surfaces. The data obtained in these studies form a basis for the subsequent comparative evaluation of potentially superior materials for prosthetic joints.

  16. Method for accurate sizing of pulmonary vessels from 3D medical images

    NASA Astrophysics Data System (ADS)

    O'Dell, Walter G.

    2015-03-01

    Detailed characterization of vascular anatomy, in particular the quantification of changes in the distribution of vessel sizes and of vascular pruning, is essential for the diagnosis and management of a variety of pulmonary vascular diseases and for the care of cancer survivors who have received radiation to the thorax. Clinical estimates of vessel radii are typically based on setting a pixel intensity threshold and counting how many "On" pixels are present across the vessel cross-section. A more objective approach introduced recently involves fitting the image with a library of spherical Gaussian filters and using the size of the best matching filter as the estimate of vessel diameter. However, both of these approaches have significant accuracy limitations, including the mismatch between a Gaussian intensity distribution and that of real vessels. Here we introduce and demonstrate a novel approach for accurate vessel sizing using 3D appearance models of a tubular structure along a curvilinear trajectory in 3D space. The vessel branch trajectories are represented with cubic Hermite splines and the tubular branch surfaces are represented as a finite element surface mesh. An iterative parameter adjustment scheme is employed to optimally match the appearance models to a patient's chest X-ray computed tomography (CT) scan, generating estimates of branch radii and trajectories with subpixel resolution. The method is demonstrated on pulmonary vasculature in an adult human CT scan and on 2D simulated test cases.

  17. Evaluation of the quantitative performances of supercritical fluid chromatography: from method development to validation.

    PubMed

    Dispas, Amandine; Lebrun, Pierre; Ziemons, Eric; Marini, Roland; Rozet, Eric; Hubert, Philippe

    2014-08-01

    Recently, the number of papers about SFC has increased drastically, but scientists have not truly focused on the quantitative performance of this technique. In order to prove the potential of UHPSFC, the present work discusses the different steps of the analytical life cycle of a method: from development to validation and application. Moreover, the quantitative performance of UHPSFC was evaluated in comparison with UHPLC, which is the main technique used for quality control in the pharmaceutical industry and can therefore be considered a reference. The methods were developed using a Design Space strategy, leading to the optimization of a robust method. In this context, when the Design Space optimization shows a guarantee of quality, no further robustness study is required prior to validation. The methods were then geometrically transferred in order to reduce the analysis time. The UHPSFC and UHPLC methods were validated based on the total error approach using accuracy profiles. Even if UHPLC showed better precision and sensitivity, the UHPSFC method is able to give accurate results over a dosing range larger than the 80-120% range required by the European Medicines Agency. Consequently, UHPSFC results are valid and could be used for the control of the active substance in a finished pharmaceutical product. Finally, the validated UHPSFC method was used to analyse real samples and gave results similar to those of the reference method (UHPLC).

  18. Bacterial Cytological Profiling (BCP) as a Rapid and Accurate Antimicrobial Susceptibility Testing Method for Staphylococcus aureus

    PubMed Central

    Quach, D.T.; Sakoulas, G.; Nizet, V.; Pogliano, J.; Pogliano, K.

    2016-01-01

    Successful treatment of bacterial infections requires the timely administration of appropriate antimicrobial therapy. The failure to initiate the correct therapy in a timely fashion results in poor clinical outcomes, longer hospital stays, and higher medical costs. Current approaches to antibiotic susceptibility testing of cultured pathogens have key limitations ranging from long run times to dependence on prior knowledge of genetic mechanisms of resistance. We have developed a rapid antimicrobial susceptibility assay for Staphylococcus aureus based on bacterial cytological profiling (BCP), which uses quantitative fluorescence microscopy to measure antibiotic induced changes in cellular architecture. BCP discriminated between methicillin-susceptible (MSSA) and -resistant (MRSA) clinical isolates of S. aureus (n = 71) within 1–2 h with 100% accuracy. Similarly, BCP correctly distinguished daptomycin susceptible (DS) from daptomycin non-susceptible (DNS) S. aureus strains (n = 20) within 30 min. Among MRSA isolates, BCP further identified two classes of strains that differ in their susceptibility to specific combinations of beta-lactam antibiotics. BCP provides a rapid and flexible alternative to gene-based susceptibility testing methods for S. aureus, and should be readily adaptable to different antibiotics and bacterial species as new mechanisms of resistance or multidrug-resistant pathogens evolve and appear in mainstream clinical practice. PMID:26981574

  19. Development and application of quantitative detection method for viral hemorrhagic septicemia virus (VHSV) genogroup IVa.

    PubMed

    Kim, Jong-Oh; Kim, Wi-Sik; Kim, Si-Woo; Han, Hyun-Ja; Kim, Jin Woo; Park, Myoung Ae; Oh, Myung-Joo

    2014-05-23

    Viral hemorrhagic septicemia virus (VHSV) is a problematic pathogen in olive flounder (Paralichthys olivaceus) aquaculture farms in Korea. Thus, it is necessary to develop a rapid and accurate diagnostic method to detect this virus. We developed a quantitative RT-PCR (qRT-PCR) method based on the nucleocapsid (N) gene sequence of a Korean VHSV isolate (Genogroup IVa). The slope and R² values of the primer set developed in this study were -0.2928 (96% efficiency) and 0.9979, respectively. Comparison with viral infectivity calculated by the traditional quantification method (TCID₅₀) showed a similar pattern of kinetic changes in vitro and in vivo. The qRT-PCR method reduced detection time compared to that of TCID₅₀, making it a very useful tool for VHSV diagnosis.
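
    A quick, hedged arithmetic check of the quoted efficiency, assuming the reported slope of -0.2928 comes from regressing log10(copy number) on Ct (the more common convention regresses Ct on log10 copies and yields a slope near -3.32 at ~100% efficiency):

    ```python
    # Hedged arithmetic check of the quoted 96% efficiency (the regression
    # convention is an assumption, not stated explicitly in the abstract).

    slope = -0.2928                        # d log10(copies) / d Ct (assumed)
    cycles_per_decade = 1.0 / abs(slope)   # ~3.42 cycles per tenfold change
    efficiency = 10.0 ** abs(slope) - 1.0  # amplification gain per cycle

    print(f"{cycles_per_decade:.2f} cycles/decade, efficiency = {efficiency:.1%}")
    # -> about 96%, matching the value quoted in the abstract
    ```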

  20. Development and Application of Quantitative Detection Method for Viral Hemorrhagic Septicemia Virus (VHSV) Genogroup IVa

    PubMed Central

    Kim, Jong-Oh; Kim, Wi-Sik; Kim, Si-Woo; Han, Hyun-Ja; Kim, Jin Woo; Park, Myoung Ae; Oh, Myung-Joo

    2014-01-01

    Viral hemorrhagic septicemia virus (VHSV) is a problematic pathogen in olive flounder (Paralichthys olivaceus) aquaculture farms in Korea. Thus, it is necessary to develop a rapid and accurate diagnostic method to detect this virus. We developed a quantitative RT-PCR (qRT-PCR) method based on the nucleocapsid (N) gene sequence of a Korean VHSV isolate (Genogroup IVa). The slope and R2 values of the primer set developed in this study were −0.2928 (96% efficiency) and 0.9979, respectively. Comparison with viral infectivity calculated by the traditional quantification method (TCID50) showed a similar pattern of kinetic changes in vitro and in vivo. The qRT-PCR method reduced detection time compared to that of TCID50, making it a very useful tool for VHSV diagnosis. PMID:24859343

  1. Fast, accurate and easy-to-pipeline methods for amplicon sequence processing

    NASA Astrophysics Data System (ADS)

    Antonielli, Livio; Sessitsch, Angela

    2016-04-01

    Next generation sequencing (NGS) technologies have been established for years as an essential resource in microbiology. While metagenomic studies benefit from the continuously increasing throughput of the Illumina (Solexa) technology, the spread of third generation sequencing technologies (PacBio, Oxford Nanopore) is taking whole genome sequencing beyond the assembly of fragmented draft genomes, making it now possible to finish bacterial genomes even without short read correction. Besides (meta)genomic analysis, next-gen amplicon sequencing is still fundamental for microbial studies. Amplicon sequencing of the 16S rRNA gene and the ITS (Internal Transcribed Spacer) remains a well-established, widespread method for a multitude of purposes concerning the identification and comparison of archaeal/bacterial (16S rRNA gene) and fungal (ITS) communities occurring in diverse environments. Numerous pipelines have been developed to process NGS-derived amplicon sequences, among which Mothur, QIIME and USEARCH are the best-known and most cited. The entire process, from initial raw sequence data through read error correction, paired-end read assembly, primer stripping, quality filtering, clustering, OTU taxonomic classification and BIOM table rarefaction, as well as alternative "normalization" methods, will be addressed. An effective and accurate strategy will be presented, using state-of-the-art bioinformatic tools, and the example of a straightforward one-script pipeline for 16S rRNA gene or ITS MiSeq amplicon sequencing will be provided. Finally, instructions on how to automatically retrieve nucleotide sequences from NCBI and thereby apply the pipeline to targets other than the 16S rRNA gene (Greengenes, SILVA) and ITS (UNITE) will be discussed.
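
    For the NCBI retrieval step mentioned at the end, a minimal sketch using Biopython's Entrez utilities (the accession ID and email address are placeholders; the presentation's actual pipeline may fetch and format reference sets differently):

    ```python
    # Minimal sketch of fetching a reference sequence from NCBI with Biopython.
    # The accession ID and email below are placeholders, not values from the
    # presentation.
    from Bio import Entrez, SeqIO

    Entrez.email = "you@example.org"   # NCBI requires a contact address

    handle = Entrez.efetch(db="nucleotide", id="NR_074334.1",
                           rettype="fasta", retmode="text")
    record = SeqIO.read(handle, "fasta")
    handle.close()

    print(record.id, len(record.seq), "bp")
    ```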

  2. An automated, fast and accurate registration method to link stranded seeds in permanent prostate implants

    NASA Astrophysics Data System (ADS)

    Westendorp, Hendrik; Nuver, Tonnis T.; Moerland, Marinus A.; Minken, André W.

    2015-10-01

    The geometry of a permanent prostate implant varies over time. Seeds can migrate and edema of the prostate affects the position of seeds. Seed movements directly influence dosimetry, which relates to treatment quality. We present a method that tracks all individual seeds over time, allowing quantification of seed movements. This linking procedure was tested on transrectal ultrasound (TRUS) and cone-beam CT (CBCT) datasets of 699 patients. These datasets were acquired intraoperatively during a dynamic implantation procedure that combines both imaging modalities. The procedure was subdivided into four automatic linking steps. (I) The Hungarian algorithm was applied to initially link seeds in the CBCT and corresponding TRUS datasets. (II) Strands were identified and optimized based on curvature and line fits; non-optimal links were removed. (III) The positions of unlinked seeds were reviewed and linked to incomplete strands if within curvature and distance thresholds. (IV) Finally, seeds close to strands were linked, even if the curvature threshold was violated. After linking the seeds, an affine transformation was applied. The procedure was repeated until the results were stable or the 6th iteration ended. All results were visually reviewed for mismatches and uncertainties. Eleven implants showed a mismatch and in 12 cases an uncertainty was identified. On average the linking procedure took 42 ms per case. This accurate and fast method has the potential to be used for other time spans, like Day 30, and other imaging modalities. It can potentially be used during a dynamic implantation procedure to evaluate the quality of the permanent prostate implant faster and better.
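
    A minimal sketch of linking step (I) on synthetic coordinates (the clinical procedure adds strand constraints, thresholds, and an affine registration loop): build a pairwise distance matrix between the two seed sets and solve the optimal one-to-one assignment with the Hungarian algorithm.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Synthetic seed positions: the "TRUS" set is a noisy, shuffled copy of
    # the "CBCT" set (coordinates in mm; all values invented for illustration).
    rng = np.random.default_rng(2)
    seeds_cbct = rng.uniform(0, 50, size=(60, 3))
    seeds_trus = seeds_cbct + rng.normal(0, 0.8, size=(60, 3))
    rng.shuffle(seeds_trus)

    # Pairwise Euclidean distances, then the optimal one-to-one assignment
    cost = np.linalg.norm(seeds_cbct[:, None, :] - seeds_trus[None, :, :], axis=2)
    row, col = linear_sum_assignment(cost)   # Hungarian algorithm

    print(f"mean matched distance: {cost[row, col].mean():.2f} mm")
    ```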

  3. Biological characteristics of crucian by quantitative inspection method

    NASA Astrophysics Data System (ADS)

    Chu, Mengqi

    2015-04-01

    The biological characteristics of crucian carp were preliminarily investigated by a quantitative inspection method. The crucian carp (Carassius auratus, order Cypriniformes, family Cyprinidae) is a mainly plant-eating, omnivorous and gregarious fish that is widely distributed, with perennial production in waters across the country. The indicators measured in this experiment characterize the growth and reproduction of crucian carp in the study area. The measured data (scale length, scale size, annulus diameter, etc.) and the related functions were used to back-calculate the growth of crucian carp in any given year. Egg shape, colour and weight were used to determine maturity; the mean diameter of 20 eggs and the number of eggs per 0.5 g were used to calculate the relative and absolute fecundity of the fish. The measured crucian carp were females at puberty. From the relation between scale diameter and body length, a linear relationship was obtained: y = 1.530 + 3.0649x. The data show that fecundity is closely related to age: the older the fish, the more mature the gonads and the greater the number of eggs; absolute fecundity also increases with development of the pituitary gland. Quantitative inspection of ingested bait organisms revealed the main foods, secondary foods and incidental foods of crucian carp, and the degree to which crucian carp prefer various bait organisms. Fish fecundity increases with weight gain; it is characteristic of species and populations, and is influenced by individual age, body length, body weight, environmental conditions (especially nutritional conditions), breeding habits, spawning frequency and egg size. This series of studies of the biological characteristics of crucian carp provides an ecological basis for local crucian carp feeding, breeding

  4. [Quantitative and qualitative research methods, can they coexist yet?].

    PubMed

    Hunt, Elena; Lavoie, Anne-Marise

    2011-06-01

    Qualitative design is gaining ground in nursing research. In spite of relative progress, however, the evidence-based practice movement continues to dominate and to emphasize the exclusive value of quantitative design (particularly that of randomized clinical trials) for clinical decision making. In the current context, convenient to those in power making utilitarian decisions on the one hand, and facing nursing criticism of the establishment in favor of qualitative research on the other, it is difficult to choose a practical and ethical path that values the nursing role within the health care system, keeps us committed to quality care and maintains the researcher's integrity. Both qualitative and quantitative methods have advantages and disadvantages, and clearly neither of them can, by itself, capture, describe and explain reality adequately. Therefore, a balance between the two methods is needed. Researchers bear a responsibility to society and science, and they should opt for the design best suited to answering the research question, not promote the design favored by research funding distributors.

  5. Quantitative methods to direct exploration based on hydrogeologic information

    USGS Publications Warehouse

    Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.

    2006-01-01

    Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
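
    A hedged sketch of the FOSM step named above, with illustrative numbers rather than the paper's MODFLOW example: the output covariance is obtained by propagating the input-parameter covariance through the model sensitivity (Jacobian).

    ```python
    import numpy as np

    # First-Order Second Moment (FOSM): C_out = J @ C_in @ J.T, where J holds
    # the model sensitivities. All numbers below are invented for illustration.

    J = np.array([[0.8, 0.1],        # d(head_1)/d(K_1), d(head_1)/d(K_2)
                  [0.3, 0.5],        # sensitivities of heads to conductivities
                  [0.1, 0.9]])
    C_in = np.array([[0.04, 0.01],   # covariance of the input parameters
                     [0.01, 0.09]])

    C_out = J @ C_in @ J.T
    head_variance = np.diag(C_out)   # output variance at each location
    print("piezometric-head variances:", head_variance)
    # A QDE strategy could sample next where head_variance is largest.
    ```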

  6. DREAM: a method for semi-quantitative dermal exposure assessment.

    PubMed

    Van-Wendel-de-Joode, Berna; Brouwer, Derk H; Vermeulen, Roel; Van Hemmen, Joop J; Heederik, Dick; Kromhout, Hans

    2003-01-01

    This paper describes a new method (DREAM) for structured, semi-quantitative dermal exposure assessment for chemical or biological agents that can be used in occupational hygiene or epidemiology. It is anticipated that DREAM could serve as an initial assessment of dermal exposure, amongst others, resulting in a ranking of tasks and subsequently jobs. DREAM consists of an inventory and evaluation part. Two examples of dermal exposure of workers of a car-construction company show that DREAM characterizes tasks and gives insight into exposure mechanisms, forming a basis for systematic exposure reduction. DREAM supplies estimates for exposure levels on the outside clothing layer as well as on skin, and provides insight into the distribution of dermal exposure over the body. Together with the ranking of tasks and people, this provides information for measurement strategies and helps to determine who, where and what to measure. In addition to dermal exposure assessment, the systematic description of dermal exposure pathways helps to prioritize and determine most adequate measurement strategies and methods. DREAM could be a promising approach for structured, semi-quantitative, dermal exposure assessment.

  7. Novel method for the quantitative measurement of color vision deficiencies

    NASA Astrophysics Data System (ADS)

    Xiong, Kai; Hou, Minxian; Ye, Guanrong

    2005-01-01

    The method is based on chromatic visual evoked potential (VEP) measurement. The equiluminance of a color stimulus in normal subjects is characterized by L-cone and M-cone activation in the retina. For deuteranopes and protanopes, only the activation of the one relevant remaining cone type need be considered. The equiluminance tuning curve was established from the recorded VEPs for luminance changes of the red and green color stimuli, and the position of equiluminance was used to define the kind and degree of color vision deficiency. In tests of 47 volunteers, we obtained VEP traces and equiluminance tuning curves that were in accordance with the judgments made using the pseudoisochromatic plates used in the clinic. The method fulfills the requirements for an objective, quantitative test of color vision deficiencies.

  8. Quantitative Methods in the Study of Local History

    ERIC Educational Resources Information Center

    Davey, Pene

    1974-01-01

    The author suggests how the quantitative analysis of data from census records, assessment rolls, and newspapers may be integrated into the classroom. Suggestions for obtaining quantitative data are provided. (DE)

  9. An accurate method for microanalysis of carbon monoxide in putrid postmortem blood by head-space gas chromatography-mass spectrometry (HS/GC/MS).

    PubMed

    Hao, Hongxia; Zhou, Hong; Liu, Xiaopei; Zhang, Zhong; Yu, Zhongshan

    2013-06-10

    Carbon monoxide (CO) may be the cause of more than half the fatal poisonings reported in many countries, with some of these cases under-reported or misdiagnosed by medical professionals. Therefore, an accurate and reliable analytical method to measure the blood carboxyhemoglobin level (COHb%), in the 1% to lethal range, is essential for correct diagnosis. Herein, a head-space gas chromatography-mass spectrometry (HS/GC/MS) method was established that has numerous advantages over other techniques, such as UV spectrometry, for the determination of COHb%. There was a linear relationship (R² = 0.9995) between the peak area for CO and the COHb% in blood. Using a molecular sieve-packed column, CO levels in air down to 0.01% and COHb% levels in small blood samples down to 0.2% could be quantitated rapidly and accurately. Furthermore, this method showed good reproducibility, with a relative standard deviation for COHb% of <1%. Therefore, this technique provides an accurate and reliable method for determining CO and COHb% levels and may prove useful for the investigation of deaths potentially related to CO exposure.

  10. QUANTITATIVE MASS SPECTROMETRIC ANALYSIS OF GLYCOPROTEINS COMBINED WITH ENRICHMENT METHODS

    PubMed Central

    Ahn, Yeong Hee; Kim, Jin Young; Yoo, Jong Shin

    2015-01-01

    Mass spectrometry (MS) has been a core technology for highly sensitive, high-throughput analysis of the enriched glycoproteome, in quantitative assays as well as qualitative profiling of glycoproteins. Because it is widely recognized that aberrant glycosylation of a glycoprotein may be involved in the progression of certain diseases, the development of efficient analysis tools for aberrant glycoproteins is very important for a deep understanding of the pathological functions of glycoproteins and for new biomarker development. This review first describes the protein glycosylation-targeting enrichment technologies, mainly employing solid-phase extraction methods such as hydrazide capturing, lectin-specific capturing, and affinity separation techniques based on porous graphitized carbon, hydrophilic interaction chromatography, or immobilized boronic acid. Second, MS-based quantitative analysis strategies coupled with these enrichment technologies, using label-free MS, stable isotope labeling, or targeted multiple reaction monitoring (MRM) MS, are summarized with recently published studies. PMID:24889823

  11. A novel semi-quantitative method for measuring tissue bleeding.

    PubMed

    Vukcevic, G; Volarevic, V; Raicevic, S; Tanaskovic, I; Milicic, B; Vulovic, T; Arsenijevic, S

    2014-03-01

    In this study, we describe a new semi-quantitative method for measuring the extent of bleeding in pathohistological tissue samples. To test our novel method, we recruited 120 female patients in their first trimester of pregnancy and divided them into three groups of 40. Group I was the control group, in which no dilation was applied. Group II was an experimental group in which dilation was performed using classical mechanical dilators. Group III was also an experimental group, in which dilation was performed using a hydraulic dilator. Tissue samples were taken from the patients' cervical canals using a Novak's probe via energetic single-step curettage, prior to any dilation in Group I and after dilation in Groups II and III. After the tissue samples were prepared, light microscopy was used to obtain microphotographs at 100× magnification. The surfaces affected by bleeding were measured in the microphotographs using the Autodesk AutoCAD 2009 program and its "polylines" function. Lines were used to mark the area around the entire sample (marked A) and to create "polyline" areas around each bleeding area on the sample (marked B). The percentage of the total area affected by bleeding was calculated using the formula N = (Bt × 100) / At, where N is the percentage (%) of the tissue sample surface affected by bleeding, At (A total) is the sum of the areas of all of the tissue samples and Bt (B total) is the sum of all areas affected by bleeding in all of the tissue samples. This novel semi-quantitative method utilizes the Autodesk AutoCAD 2009 program, which is simple to use and widely available, thereby offering a new, objective and precise approach to estimating the extent of bleeding in tissue samples.
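
    The formula transcribes directly into code; the areas below are illustrative numbers only (units cancel, so any consistent area unit works):

    ```python
    # Direct transcription of the paper's formula N = (Bt * 100) / At, where
    # At is the summed area of all tissue samples and Bt the summed area of
    # all bleeding regions (areas as traced with AutoCAD "polylines").

    def bleeding_percentage(sample_areas, bleeding_areas):
        a_total = sum(sample_areas)
        b_total = sum(bleeding_areas)
        return b_total * 100.0 / a_total

    # Illustrative numbers only:
    print(f"{bleeding_percentage([12.4, 9.8, 11.1], [1.3, 0.4, 0.9]):.1f}% affected")
    ```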

  12. Quantitative methods in electroencephalography to access therapeutic response.

    PubMed

    Diniz, Roseane Costa; Fontenele, Andrea Martins Melo; Carmo, Luiza Helena Araújo do; Ribeiro, Aurea Celeste da Costa; Sales, Fábio Henrique Silva; Monteiro, Sally Cristina Moutinho; Sousa, Ana Karoline Ferreira de Castro

    2016-07-01

    Pharmacometrics, or quantitative pharmacology, aims to quantitatively analyze the interaction between drugs and patients, resting on the tripod of pharmacokinetics, pharmacodynamics and disease monitoring to identify variability in drug response. Being a subject of central interest in the training of pharmacists, this work was carried out with a view to promoting methods for assessing the therapeutic response of drugs with central action. This paper discusses quantitative methods (Fast Fourier Transform, Magnitude Square Coherence, Conditional Entropy, Generalised Linear semi-canonical Correlation Analysis, Statistical Parametric Network and Mutual Information Function) used to evaluate EEG signals obtained after drug administration regimens, the main findings and their clinical relevance, presenting them as a contribution to the construction of a different pharmaceutical practice. Peter Anderer et al. in 2000 showed the effect of 20 mg of buspirone in 20 healthy subjects at 1, 2, 4, 6 and 8 h after oral ingestion of the drug. The areas of increased power of the theta frequency occurred mainly in the temporo-occipital-parietal region. Sampaio et al. (2007) showed that the use of bromazepam, which allows the release of GABA (gamma-aminobutyric acid), an inhibitory neurotransmitter of the central nervous system, could theoretically promote dissociation of cortical functional areas, a decrease in functional connectivity and a decrease in cognitive functions, by means of smaller coherence values (an electrophysiological magnitude measured from the EEG by software). Ahmad Khodayari-Rostamabad et al. in 2015 suggested that such a measure could potentially be a useful clinical tool to assess adverse effects of opioids and hence give rise to treatment guidelines. There was a relation between changes in pain intensity and brain sources (at maximum activity locations) during remifentanil infusion, despite its potent analgesic effect. The statement of mathematical and computational

  13. Membrane chromatographic immunoassay method for rapid quantitative analysis of specific serum antibodies.

    PubMed

    Ghosh, Raja

    2006-02-05

    This paper discusses a membrane chromatographic immunoassay method for rapid detection and quantitative analysis of specific serum antibodies. A type of polyvinylidene fluoride (PVDF) microfiltration membrane was used in the method for its ability to reversibly and specifically bind IgG antibodies from antiserum samples by hydrophobic interaction. Using this form of selective antibody binding and enrichment, an affinity membrane with antigen binding ability was obtained in situ. This was done by passing a pulse of diluted antiserum sample through a stack of microporous PVDF membranes. The affinity membrane thus formed was challenged with a pulse of antigen solution and the amount of antigen bound was accurately determined using chromatographic methods. The antigen binding correlated well with the antibody loading on the membrane. This method is direct, rapid and accurate, does not involve any chemical reaction, and uses very few reagents. Moreover, the same membrane could be used repeatedly for sequential immunoassays on account of the reversible nature of the antibody binding. Proof of concept of this method is provided using human hemoglobin as the model antigen and rabbit antiserum against human hemoglobin as the antibody source.

  14. Quantitative evaluation of solar wind time-shifting methods

    NASA Astrophysics Data System (ADS)

    Cameron, Taylor; Jackel, Brian

    2016-11-01

    Nine years of solar wind dynamic pressure and geosynchronous magnetic field data are used for a large-scale statistical comparison of uncertainties associated with several different algorithms for propagating solar wind measurements. The MVAB-0 scheme is best overall, performing on average a minute more accurately than a flat time-shift. We also evaluate the accuracy of these time-shifting methods as a function of solar wind magnetic field orientation. We find that all time-shifting algorithms perform significantly worse (>5 min) due to geometric effects when the solar wind magnetic field is radial (parallel or antiparallel to the Earth-Sun line). Finally, we present an empirical scheme that performs almost as well as MVAB-0 on average and slightly better than MVAB-0 for intervals with nonradial B.
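
    For reference, a sketch of the baseline flat time shift that the comparison uses (positions and speed are illustrative): each sample is assumed to convect as a plane front along the Earth-Sun line, whereas MVAB-0 tilts the front along a minimum-variance phase plane.

    ```python
    import numpy as np

    # Flat ("ballistic") time shift: a solar wind sample measured at x_sc is
    # assumed to arrive at the target after dt = (x_sc - x_target) / v_x.
    # All positions and the flow speed below are illustrative values.

    RE = 6371.0                       # km, Earth radius
    x_sc = 220.0 * RE                 # monitor near L1 (assumed)
    x_target = 10.0 * RE              # near-Earth target location (assumed)
    v_x = 450.0                       # km/s anti-sunward flow speed (assumed)

    dt = (x_sc - x_target) / v_x      # seconds
    print(f"flat propagation delay: {dt / 60.0:.1f} min")   # ~50 min
    ```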

  15. A Quantitative Vainberg Method for Black Box Scattering

    NASA Astrophysics Data System (ADS)

    Galkowski, Jeffrey

    2017-01-01

    We give a quantitative version of Vainberg's method relating pole free regions to propagation of singularities for black box scatterers. In particular, we show that there is a logarithmic resonance free region near the real axis of size τ with polynomial bounds on the resolvent if and only if the wave propagator gains derivatives at rate τ. Next we show that if there exist singularities in the wave trace at times tending to infinity which smooth at rate τ, then there are resonances in logarithmic strips whose width is given by τ. As our main application of these results, we give sharp bounds on the size of resonance free regions in scattering on geometrically nontrapping manifolds with conic points. Moreover, these bounds are generically optimal on exteriors of nontrapping polygonal domains.

  16. Methods for Quantitative Interpretation of Retarding Field Analyzer Data

    SciTech Connect

    Calvey, J.R.; Crittenden, J.A.; Dugan, G.F.; Palmer, M.A.; Furman, M.; Harkay, K.

    2011-03-28

    Over the course of the CesrTA program at Cornell, more than 30 Retarding Field Analyzers (RFAs) have been installed in the CESR storage ring, and a great deal of data has been taken with them. These devices measure the local electron cloud density and energy distribution, and can be used to evaluate the efficacy of different cloud mitigation techniques. Obtaining a quantitative understanding of RFA data requires the use of cloud simulation programs, as well as a detailed model of the detector itself. In a drift region, the RFA can be modeled by postprocessing the output of a simulation code, and one can obtain best-fit values for important simulation parameters with a chi-square minimization method.

  17. A quantitative dimming method for LED based on PWM

    NASA Astrophysics Data System (ADS)

    Wang, Jiyong; Mou, Tongsheng; Wang, Jianping; Tian, Xiaoqing

    2012-10-01

    Traditional light sources were required to provide stable and uniform illumination for living and working environments, in view of the performance of human visual function. That requirement seemed sufficient until the non-visual functions of the ganglion cells in the photosensitive layer of the retina were discovered. A new generation of lighting technology, however, is emerging, based on novel lighting materials such as LEDs and on the photobiological effects of light on human physiology and behavior. To realize dynamic LED lighting whose intensity and color are adjustable to the needs of photobiological effects, a quantitative dimming method based on Pulse Width Modulation (PWM) and light-mixing technology is presented. Beginning with two-channel PWM, this paper demonstrates the determinacy and limitations of PWM dimming for realizing Expected Photometric and Colorimetric Quantities (EPCQ), in accordance with an analysis of the geometrical, photometric, colorimetric and electrodynamic constraints. A quantitative model which maps the EPCQ into duty cycles is finally established. The model suggests that determinacy is unique to two-channel and three-channel PWM, whereas the limitation is an inevitable feature of multiple channels. To examine the model, a light-mixing experiment with two kinds of white LEDs simulated variations of illuminance and Correlated Color Temperature (CCT) from dawn to midday. Mean deviations between theoretical and measured values were 15 lx and 23 K, respectively. The results show that this method can effectively realize a light spectrum with specific EPCQ requirements, and provide a theoretical basis and a practical way for dynamic LED lighting.
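
    A hedged sketch of the two-channel determinacy argument: if each channel's contribution scales linearly with its duty cycle, the duty cycles realizing a target pair of quantities follow from a 2×2 linear solve. Treating the CCT-related quantity as linearly mixing is a simplification made for illustration (real CCT mixing goes through chromaticity coordinates), and all numbers are invented.

    ```python
    import numpy as np

    # Rows are the target quantities, columns are the two LED channels; each
    # entry is the channel's contribution at full duty cycle (d = 1).
    full_on = np.array([[800.0, 600.0],     # illuminance (lx) of ch1, ch2
                        [2700.0, 6500.0]])  # CCT-proxy of warm/cool channel

    target = np.array([700.0, 4000.0])      # desired illuminance and CCT-proxy

    # Solve full_on @ d = target for the duty-cycle vector d
    d = np.linalg.solve(full_on, target)
    d = np.clip(d, 0.0, 1.0)                # duty cycles must lie in [0, 1]
    print(f"duty cycles: d1 = {d[0]:.2f}, d2 = {d[1]:.2f}")
    ```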

  18. Accurate measurement method of Fabry-Perot cavity parameters via optical transfer function

    SciTech Connect

    Bondu, Francois; Debieu, Olivier

    2007-05-10

    It is shown how the transfer function from frequency noise to a Pound-Drever-Hall signal for a Fabry-Perot cavity can be used to accurately measure cavity length, cavity linewidth, mirror curvature, misalignments, laser beam shape mismatching with resonant beam shape, and cavity impedance mismatching with respect to vacuum.

  19. A time-accurate implicit method for chemical non-equilibrium flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, Jian-Shun

    1992-01-01

    A new time accurate coupled solution procedure for solving the chemical non-equilibrium Navier-Stokes equations over a wide range of Mach numbers is described. The scheme is shown to be very efficient and robust for flows with velocities ranging from M ≤ 10⁻¹⁰ to supersonic speeds.

  20. Automatic segmentation method of striatum regions in quantitative susceptibility mapping images

    NASA Astrophysics Data System (ADS)

    Murakawa, Saki; Uchiyama, Yoshikazu; Hirai, Toshinori

    2015-03-01

    Abnormal accumulation of brain iron has been detected in various neurodegenerative diseases. Quantitative susceptibility mapping (QSM) is a novel contrast mechanism in magnetic resonance (MR) imaging that enables quantitative analysis of local tissue susceptibility. Therefore, automatic segmentation tools for brain regions on QSM images would be helpful for radiologists' quantitative analysis of various neurodegenerative diseases. The purpose of this study was to develop an automatic segmentation and classification method for striatum regions on QSM images. Our image database consisted of 22 QSM images obtained from healthy volunteers. These images were acquired on a 3.0 T MR scanner. The voxel size was 0.9×0.9×2 mm. The matrix size of each slice image was 256×256 pixels. In our computerized method, a template matching technique was first used for the detection of a slice image containing striatum regions. An image registration technique was subsequently employed for the classification of striatum regions in consideration of anatomical knowledge. After the image registration, the voxels in the target image which corresponded to striatum regions in the reference image were classified into three striatum regions, i.e., the head of the caudate nucleus, the putamen, and the globus pallidus. The experimental results indicated that 100% (21/21) of the slice images containing striatum regions were detected accurately. The subjective evaluation of the classification results indicated that 20 of 21 (95.2%) showed good or adequate quality. Our computerized method would be useful for the quantitative analysis of Parkinson's disease in QSM images.

  1. Sequencing human ribs into anatomical order by quantitative multivariate methods.

    PubMed

    Cirillo, John; Henneberg, Maciej

    2012-06-01

    Little research has focused on methods for placing ribs in anatomical sequence. Correct anatomical sequencing of ribs assists in determining the location and distribution of regional trauma, age estimation, the number of puncture wounds, the number of individuals, and personal identification. The aim of the current study is to develop a method for placing fragmented and incomplete rib sets into correct anatomical position. Ribs 2-10 were used from eleven cadavers of an Australian population. Seven variables were measured at anatomical locations on each rib. General descriptive statistics were calculated for each variable, along with an analysis of variance (ANOVA) and ANOVA with Bonferroni statistics. Considerable overlap between ribs was observed for univariate methods, so bivariate and multivariate methods were then applied. Results of the ANOVA with post hoc Bonferroni statistics show that ratios of various dimensions of a single rib can be used to sequence it among adjacent ribs. Using multiple regression formulae, the most accurate estimation of the anatomical rib number occurs when the entire rib is found in isolation; this, however, is not always possible. Even when only the head and neck of the rib are preserved, a modified multivariate regression formula assigned 91.95% of ribs to the correct anatomical position or to an adjacent rib. Multivariate methods thus make it possible to sequence a single human rib with a high level of accuracy, and they are superior to univariate methods. Left and right ribs were found to be highly symmetrical. Some rib dimensions were greater in males than in females, but overall the level of sexual dimorphism was low.
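
    The study's regression formulae are not reproduced here; the following Python sketch only illustrates the general multivariate idea on synthetic data: regress the known rib number on several rib dimensions, then round the prediction for an unknown rib to the nearest position.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)

        # Synthetic stand-ins for measured rib dimensions: ribs 2-10 from ten
        # individuals, seven size variables per rib (arbitrary units).
        rib_numbers = np.tile(np.arange(2, 11), 10)
        dims = rib_numbers[:, None] * rng.uniform(0.8, 1.2, (90, 7)) \
               + rng.normal(0.0, 0.5, (90, 7))

        model = LinearRegression().fit(dims, rib_numbers)

        # Estimate the anatomical number of an "unknown" rib; clamp to 2-10.
        unknown = dims[0] + rng.normal(0.0, 0.3, 7)
        pred = int(np.clip(np.rint(model.predict(unknown[None, :])[0]), 2, 10))
        print(f"estimated rib number: {pred}")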

  2. A quantitative assessment method for Ascaris eggs on hands.

    PubMed

    Jeandron, Aurelie; Ensink, Jeroen H J; Thamsborg, Stig M; Dalsgaard, Anders; Sengupta, Mita E

    2014-01-01

    The importance of hands in the transmission of soil-transmitted helminths, especially Ascaris and Trichuris infections, is under-researched, partly because of the absence of a reliable method to quantify the number of eggs on hands. The aim of this study was therefore to develop a method to assess the number of Ascaris eggs on hands and to determine the egg recovery rate of the method. Under laboratory conditions, hands were seeded with a known number of Ascaris eggs, air dried and washed in a plastic bag retaining the washing water, in order to determine egg recovery rates for four different detergents (cationic [benzethonium chloride 0.1% and cetylpyridinium chloride CPC 0.1%], anionic [7X 1% - quadrafos, glycol ether, and dioctyl sulfosuccinate sodium salt] and non-ionic [Tween80 0.1% - polyethylene glycol sorbitan monooleate]) and two egg detection methods (the McMaster technique and FLOTAC). A modified concentration McMaster technique showed the highest egg recovery rate from bags. Two of the four diluted detergents (benzethonium chloride 0.1% and 7X 1%) also showed a higher egg recovery rate and were then compared with de-ionized water for recovery of helminth eggs from hands. The highest recovery rate (95.6%) was achieved with a hand rinse performed with 7X 1%. Washing hands with de-ionized water resulted in an egg recovery rate of 82.7%. This washing method, performed with a low concentration of detergent, offers potential for quantitative investigation of the contamination of hands with Ascaris eggs and of their role in human infection. Follow-up studies are needed to validate the hand washing method under field conditions, e.g. including people of different ages, lower levels of contamination and various levels of hand cleanliness.

  3. CONDENSED MATTER: STRUCTURE, MECHANICAL AND THERMAL PROPERTIES: An Accurate Image Simulation Method for High-Order Laue Zone Effects

    NASA Astrophysics Data System (ADS)

    Cai, Can-Ying; Zeng, Song-Jun; Liu, Hong-Rong; Yang, Qi-Bin

    2008-05-01

    A completely different formulation for simulating high-order Laue zone (HOLZ) diffractions is derived; we refer to it as the Taylor series (TS) method. To check the validity and accuracy of the TS method, we take a polyvinylidene fluoride (PVDF) crystal as an example and calculate the exit wavefunction by both the conventional multi-slice (CMS) method and the TS method. The calculated results show that the TS method is much more accurate than the CMS method and is independent of the slice thickness. Moreover, the pure first-order Laue zone wavefunction obtained by the TS method reflects the major potential distribution of the first reciprocal plane.

  4. Intracranial aneurysm segmentation in 3D CT angiography: method and quantitative validation

    NASA Astrophysics Data System (ADS)

    Firouzian, Azadeh; Manniesing, R.; Flach, Z. H.; Risselada, R.; van Kooten, F.; Sturkenboom, M. C. J. M.; van der Lugt, A.; Niessen, W. J.

    2010-03-01

    Accurately quantifying aneurysm shape parameters is of clinical importance, as it is an important factor in choosing the right treatment modality (i.e. coiling or clipping), in predicting rupture risk and operative risk and for pre-surgical planning. The first step in aneurysm quantification is to segment it from other structures that are present in the image. As manual segmentation is a tedious procedure and prone to inter- and intra-observer variability, there is a need for an automated method which is accurate and reproducible. In this paper a novel semi-automated method for segmenting aneurysms in Computed Tomography Angiography (CTA) data based on Geodesic Active Contours is presented and quantitatively evaluated. Three different image features are used to steer the level set to the boundary of the aneurysm, namely intensity, gradient magnitude and variance in intensity. The method requires minimum user interaction, i.e. clicking a single seed point inside the aneurysm which is used to estimate the vessel intensity distribution and to initialize the level set. The results show that the developed method is reproducible, and performs in the range of interobserver variability in terms of accuracy.
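
    A hedged 2-D sketch of the level-set idea using scikit-image's morphological geodesic active contour: the paper works on 3-D CTA data and also steers the level set with intensity and variance features, while this toy example uses only the standard gradient-based edge term. The image file name and seed point are hypothetical.

        import numpy as np
        from skimage import io
        from skimage.segmentation import (disk_level_set, inverse_gaussian_gradient,
                                          morphological_geodesic_active_contour)

        image = io.imread("cta_slice.png", as_gray=True).astype(float)
        seed = (128, 140)   # user-clicked point inside the aneurysm

        # Edge indicator: close to 0 near strong gradients, ~1 in flat regions.
        gimage = inverse_gaussian_gradient(image)

        # Start from a small disk around the seed and inflate (balloon force)
        # until the front locks onto the aneurysm boundary.
        init_ls = disk_level_set(image.shape, center=seed, radius=5)
        segmentation = morphological_geodesic_active_contour(
            gimage, 200, init_level_set=init_ls, smoothing=2, balloon=1)

        print(f"segmented area: {int(segmentation.sum())} pixels")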

  5. A simple, quantitative method using alginate gel to determine rat colonic tumor volume in vivo.

    PubMed

    Irving, Amy A; Young, Lindsay B; Pleiman, Jennifer K; Konrath, Michael J; Marzella, Blake; Nonte, Michael; Cacciatore, Justin; Ford, Madeline R; Clipson, Linda; Amos-Landgraf, James M; Dove, William F

    2014-04-01

    Many studies of the response of colonic tumors to therapeutics use tumor multiplicity as the endpoint to determine the effectiveness of the agent. These studies can be greatly enhanced by accurate measurements of tumor volume. Here we present a quantitative method to easily and accurately determine colonic tumor volume. This approach uses a biocompatible alginate to create a negative mold of a tumor-bearing colon; this mold is then used to make positive casts of dental stone that replicate the shape of each original tumor. The weight of the dental stone cast correlates highly with the weight of the dissected tumors. After refinement of the technique, overall error in tumor volume was 16.9% ± 7.9% and includes error from both the alginate and dental stone procedures. Because this technique is limited to molding of tumors in the colon, we utilized the Apc(Pirc/+) rat, which has a propensity for developing colonic tumors that reflect the location of the majority of human intestinal tumors. We have successfully used the described method to determine tumor volumes ranging from 4 to 196 mm³. Alginate molding combined with dental stone casting is a facile method for determining tumor volume in vivo without costly equipment or knowledge of analytic software. This broadly accessible method creates the opportunity to objectively study colonic tumors over time in living animals in conjunction with other experiments and without transferring animals from the facility where they are maintained.

  6. A Simple, Quantitative Method Using Alginate Gel to Determine Rat Colonic Tumor Volume In Vivo

    PubMed Central

    Irving, Amy A; Young, Lindsay B; Pleiman, Jennifer K; Konrath, Michael J; Marzella, Blake; Nonte, Michael; Cacciatore, Justin; Ford, Madeline R; Clipson, Linda; Amos-Landgraf, James M; Dove, William F

    2014-01-01

    Many studies of the response of colonic tumors to therapeutics use tumor multiplicity as the endpoint to determine the effectiveness of the agent. These studies can be greatly enhanced by accurate measurements of tumor volume. Here we present a quantitative method to easily and accurately determine colonic tumor volume. This approach uses a biocompatible alginate to create a negative mold of a tumor-bearing colon; this mold is then used to make positive casts of dental stone that replicate the shape of each original tumor. The weight of the dental stone cast correlates highly with the weight of the dissected tumors. After refinement of the technique, overall error in tumor volume was 16.9% ± 7.9% and includes error from both the alginate and dental stone procedures. Because this technique is limited to molding of tumors in the colon, we utilized the ApcPirc/+ rat, which has a propensity for developing colonic tumors that reflect the location of the majority of human intestinal tumors. We have successfully used the described method to determine tumor volumes ranging from 4 to 196 mm3. Alginate molding combined with dental stone casting is a facile method for determining tumor volume in vivo without costly equipment or knowledge of analytic software. This broadly accessible method creates the opportunity to objectively study colonic tumors over time in living animals in conjunction with other experiments and without transferring animals from the facility where they are maintained. PMID:24674588

  7. Nuclear medicine and imaging research (instrumentation and quantitative methods of evaluation)

    SciTech Connect

    Beck, R.N.; Cooper, M.; Chen, C.T.

    1992-07-01

    This document is the annual progress report for the project entitled 'Instrumentation and Quantitative Methods of Evaluation.' Progress is reported in separate sections, individually abstracted and indexed for the database. Subject areas reported include theoretical studies of imaging systems and methods, hardware developments, quantitative methods of evaluation, and knowledge transfer: education in quantitative nuclear medicine imaging.

  8. Rapid, cost-effective and accurate quantification of Yucca schidigera Roezl. steroidal saponins using HPLC-ELSD method.

    PubMed

    Tenon, Mathieu; Feuillère, Nicolas; Roller, Marc; Birtić, Simona

    2017-04-15

    Yucca GRAS-labelled saponins have been and are increasingly used in the food/feed, pharmaceutical and cosmetic industries. Existing techniques for Yucca steroidal saponin quantification are either inaccurate and misleading, or accurate but time consuming and cost prohibitive. The method reported here addresses all of the above challenges. The HPLC/ELSD technique is an accurate and reliable method that yields results of appropriate repeatability and reproducibility, and it neither over- nor under-estimates levels of steroidal saponins. The HPLC/ELSD method does not require a pure standard for each and every saponin in order to quantify the group of steroidal saponins. The method is a time- and cost-effective technique suitable for routine industrial analyses. HPLC/ELSD methods yield saponin fingerprints specific to the plant species, and as the method is capable of distinguishing saponin profiles from taxonomically distant species, it can unravel plant adulteration issues.

  9. The quantitative and qualitative recovery of Campylobacter from raw poultry using USDA and Health Canada methods.

    PubMed

    Sproston, E L; Carrillo, C D; Boulter-Bitzer, J

    2014-12-01

    Harmonisation of methods between Canadian government agencies is essential to accurately assess and compare the prevalence and concentrations of Campylobacter present on retail poultry intended for human consumption. The standard qualitative procedure used by Health Canada differs from that used by the USDA for both quantitative and qualitative methods. A comparison of three methods was performed on raw poultry samples obtained from an abattoir to determine whether one method is superior to the others in isolating Campylobacter from chicken carcass rinses. The average percentage of positive samples was 34.72% (95% CI, 29.2-40.2), 39.24% (95% CI, 33.6-44.9) and 39.93% (95% CI, 34.3-45.6) for the US direct plating method, the US enrichment method and the Health Canada enrichment method, respectively. Overall, there were significant differences when comparing either of the enrichment methods to the direct plating method using McNemar's chi-squared test. On comparison of weekly data (Fisher's exact test), direct plating was inferior to the enrichment methods on only a single occasion. Direct plating is important for enumeration and for establishing the concentration of Campylobacter present on raw poultry. However, enrichment methods are also vital to identify positive samples where concentrations are below the detection limit of direct plating.
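
    A minimal sketch of the paired analysis in Python, using statsmodels' implementation of McNemar's test; each rinse is tested by both methods, so the positive/negative calls form a paired 2×2 table. The counts below are hypothetical, not the study's data.

        from statsmodels.stats.contingency_tables import mcnemar

        # Rows: direct plating +/-; columns: enrichment +/- (hypothetical counts).
        table = [[200, 10],
                 [45, 745]]

        result = mcnemar(table, exact=False, correction=True)  # chi-squared form
        print(f"statistic = {result.statistic:.2f}, p = {result.pvalue:.4f}")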

  10. Device and method for accurately measuring concentrations of airborne transuranic isotopes

    DOEpatents

    McIsaac, C.V.; Killian, E.W.; Grafwallner, E.G.; Kynaston, R.L.; Johnson, L.O.; Randolph, P.D.

    1996-09-03

    An alpha continuous air monitor (CAM) with two silicon alpha detectors and three sample collection filters is described. This alpha CAM design provides continuous sampling and also measures the cumulative transuranic (TRU), i.e., plutonium and americium, activity on the filter, and thus provides a more accurate measurement of airborne TRU concentrations than can be accomplished using a single fixed sample collection filter and a single silicon alpha detector. 7 figs.

  11. Device and method for accurately measuring concentrations of airborne transuranic isotopes

    DOEpatents

    McIsaac, Charles V.; Killian, E. Wayne; Grafwallner, Ervin G.; Kynaston, Ronnie L.; Johnson, Larry O.; Randolph, Peter D.

    1996-01-01

    An alpha continuous air monitor (CAM) with two silicon alpha detectors and three sample collection filters is described. This alpha CAM design provides continuous sampling and also measures the cumulative transuranic (TRU), i.e., plutonium and americium, activity on the filter, and thus provides a more accurate measurement of airborne TRU concentrations than can be accomplished using a single fixed sample collection filter and a single silicon alpha detector.

  12. Accurate, quantitative assays for the hydrolysis of soluble type I, II, and III ³H-acetylated collagens by bacterial and tissue collagenases

    SciTech Connect

    Mallya, S.K.; Mookhtiar, K.A.; Van Wart, H.E.

    1986-11-01

    Accurate and quantitative assays for the hydrolysis of soluble ³H-acetylated rat tendon type I, bovine cartilage type II, and human amnion type III collagens by both bacterial and tissue collagenases have been developed. The assays are carried out at any temperature in the 1-30 °C range in a single reaction tube and the progress of the reaction is monitored by withdrawing aliquots as a function of time, quenching with 1,10-phenanthroline, and quantitation of the concentration of hydrolysis fragments. The latter is achieved by selective denaturation of these fragments by incubation under conditions described in the previous paper of this issue. The assays give percentages of hydrolysis of all three collagen types by neutrophil collagenase that agree well with the results of gel electrophoresis experiments. The initial rates of hydrolysis of all three collagens are proportional to the concentration of both neutrophil or Clostridial collagenases over a 10-fold range of enzyme concentrations. All three assays can be carried out at collagen concentrations that range from 0.06 to 2 mg/ml and give linear double reciprocal plots for both tissue and bacterial collagenases that can be used to evaluate the kinetic parameters K_m and k_cat or V_max. The assay developed for the hydrolysis of rat type I collagen by neutrophil collagenase is shown to be more sensitive by at least one order of magnitude than comparable assays that use rat type I collagen fibrils or gels as substrate.

  13. Breast tumour visualization using 3D quantitative ultrasound methods

    NASA Astrophysics Data System (ADS)

    Gangeh, Mehrdad J.; Raheem, Abdul; Tadayyon, Hadi; Liu, Simon; Hadizad, Farnoosh; Czarnota, Gregory J.

    2016-04-01

    Breast cancer is one of the most common cancer types, accounting for 29% of all cancer cases. Early detection and treatment have a crucial impact on improving the survival of affected patients. Ultrasound (US) is a non-ionizing, portable, inexpensive, real-time imaging modality for screening and quantifying breast cancer. Due to these attractive attributes, the last decade has witnessed many studies on using quantitative ultrasound (QUS) methods for tissue characterization. However, these studies have mainly been limited to 2-D QUS methods using hand-held US (HHUS) scanners. With the availability of automated breast ultrasound (ABUS) technology, this study is the first to develop 3-D QUS methods for the ABUS visualization of breast tumours. Using an ABUS system, unlike the manual 2-D HHUS device, the patient's whole breast was scanned in an automated manner. The acquired frames were subsequently examined, and a region of interest (ROI) was selected in each frame where a tumour was identified. Standard 2-D QUS methods were used to compute spectral and backscatter coefficient (BSC) parametric maps on the selected ROIs. Next, the computed 2-D parameters were mapped to a Cartesian 3-D space, interpolated, and rendered to provide a transparent, color-coded visualization of the entire breast tumour. Such 3-D visualization can potentially be used for further analysis of breast tumours in terms of their size and extension. Moreover, the 3-D volumetric scans can be used for tissue characterization and for categorizing breast tumours as benign or malignant by quantifying the computed parametric maps over the whole tumour volume.
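
    A minimal Python sketch of the mapping-and-interpolation step: scattered 2-D parameter estimates with known 3-D coordinates are interpolated onto a regular volume grid for rendering. All coordinates and parameter values below are synthetic placeholders.

        import numpy as np
        from scipy.interpolate import griddata

        rng = np.random.default_rng(1)
        points = rng.uniform(0, 50, (400, 3))    # (x, y, z) of each ROI sample (mm)
        values = rng.normal(0.8, 0.1, 400)       # e.g. BSC parameter estimates

        # Regular grid covering the scanned volume, then linear interpolation.
        grid = np.mgrid[0:50:64j, 0:50:64j, 0:50:64j]
        volume = griddata(points, values, tuple(grid), method="linear")

        print(volume.shape)  # (64, 64, 64), ready for color-coded rendering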

  14. A Powerful and Robust Method for Mapping Quantitative Trait Loci in General Pedigrees

    PubMed Central

    Diao, G.; Lin, D. Y.

    2005-01-01

    The variance-components model is the method of choice for mapping quantitative trait loci in general human pedigrees. This model assumes normally distributed trait values and includes a major gene effect, random polygenic and environmental effects, and covariate effects. Violation of the normality assumption has detrimental effects on the type I error and power. One possible way of achieving normality is to transform trait values. The true transformation is unknown in practice, and different transformations may yield conflicting results. In addition, the commonly used transformations are ineffective in dealing with outlying trait values. We propose a novel extension of the variance-components model that allows the true transformation function to be completely unspecified. We present efficient likelihood-based procedures to estimate variance components and to test for genetic linkage. Simulation studies demonstrated that the new method is as powerful as the existing variance-components methods when the normality assumption holds; when the normality assumption fails, the new method still provides accurate control of type I error and is substantially more powerful than the existing methods. We performed a genomewide scan of monoamine oxidase B for the Collaborative Study on the Genetics of Alcoholism. In that study, the results that are based on the existing variance-components method changed dramatically when three outlying trait values were excluded from the analysis, whereas our method yielded essentially the same answers with or without those three outliers. The computer program that implements the new method is freely available. PMID:15918154
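
    For context, the covariance structure that standard variance-components linkage methods assume between the trait values of relatives i and j is commonly written as follows (a textbook form, sketched in LaTeX; the paper's extension leaves the trait transformation unspecified but retains this structure):

        \Omega_{ij} = \hat{\pi}_{ij}\,\sigma_q^2 + 2\phi_{ij}\,\sigma_g^2 + \delta_{ij}\,\sigma_e^2

    where \hat{\pi}_{ij} is the estimated proportion of alleles shared identical by descent at the locus, \phi_{ij} is the kinship coefficient, \delta_{ij} is the Kronecker delta, and \sigma_q^2, \sigma_g^2 and \sigma_e^2 are the major-gene, polygenic and environmental variance components, respectively.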

  15. gitter: a robust and accurate method for quantification of colony sizes from plate images.

    PubMed

    Wagih, Omar; Parts, Leopold

    2014-03-20

    Colony-based screens that quantify the fitness of clonal populations on solid agar plates are perhaps the most important source of genome-scale functional information in microorganisms. The images of ordered arrays of mutants produced by such experiments can be difficult to process because of laboratory-specific plate features, morphed colonies, plate edges, noise, and other artifacts. Most of the tools developed to address this problem are optimized to handle a single setup and do not work out of the box in other settings. We present gitter, an image analysis tool for robust and accurate processing of images from colony-based screens. gitter works by first finding the grid of colonies from a preprocessed image and then locating the bounds of each colony separately. We show that gitter produces comparable colony sizes to other tools in simple cases but outperforms them by being able to handle a wider variety of screens and more accurately quantify colony sizes from difficult images. gitter is freely available as an R package from http://cran.r-project.org/web/packages/gitter under the LGPL. Tutorials and demos can be found at http://omarwagih.github.io/gitter.

  16. New methods for quantitative and qualitative facial studies: an overview.

    PubMed

    Thomas, I T; Hintz, R J; Frias, J L

    1989-01-01

    The clinical study of birth defects has traditionally followed the Gestalt approach, with a trend in recent years toward more objective delineation. Data collection, however, has been largely restricted to measurements from X-rays and anthropometry. In other fields, new techniques are being applied that capitalize on modern computer technology. One such technique is remote sensing, of which photogrammetry is a branch. Cartographers, surveyors and engineers, using specially designed cameras, have applied geometrical techniques to locate points on an object precisely. These techniques, in their long-range application, have become part of our industrial technology and have assumed great importance with the development of satellite-borne surveillance systems. The close-range application of similar techniques has the potential for extremely accurate clinical measurement. We are currently evaluating the application of remote sensing to facial measurement using three conventional 35 mm still cameras. The subject is photographed in front of a carefully measured grid, and digitization is then carried out on the 35-mm slides: specific craniofacial landmarks are identified, along with points on the background grid and the four corners of the slide frame, and are registered as x-y coordinates by a digitizer. These coordinates are then converted into precise locations in object space. The technique is capable of producing measurements accurate to within 1/100th of an inch. We suggest that remote sensing methods such as this may well be of great value in the study of congenital malformations.

  17. Quantitative Methods for Comparing Different Polyline Stream Network Models

    SciTech Connect

    Danny L. Anderson; Daniel P. Ames; Ping Yang

    2014-04-01

    Two techniques for exploring the relative horizontal accuracy of complex linear spatial features are described, and sample source code (pseudo-code) is presented for this purpose. The first technique, relative sinuosity, is presented as a measure of the complexity or detail of a polyline network in comparison to a reference network. We term the second technique longitudinal root mean squared error (LRMSE) and present it as a means for quantitatively assessing the horizontal variance between two polyline data sets representing digitized (reference) and derived stream and river networks. Both relative sinuosity and LRMSE are shown to be suitable measures of horizontal stream network accuracy for assessing quality and variation in linear features. Both techniques have been used in two recent investigations involving the extraction of hydrographic features from LiDAR elevation data. One confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes yielded better stream network delineations, based on sinuosity and LRMSE, when using LiDAR-derived DEMs. The other demonstrated a new method of delineating stream channels directly from LiDAR point clouds, without the intermediate step of deriving a DEM, showing that direct delineation from LiDAR point clouds yielded a much better match, as indicated by the LRMSE.
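
    The paper's pseudo-code is not reproduced here; the following Python sketch only illustrates the sinuosity half of the idea on toy polylines (vertex coordinates are hypothetical): the sinuosity of a polyline is its along-path length divided by the straight-line distance between its endpoints, and relative sinuosity compares a derived network against the reference.

        import numpy as np

        def sinuosity(xy):
            """Along-path length divided by endpoint-to-endpoint chord length."""
            seg = np.diff(xy, axis=0)
            path = np.hypot(seg[:, 0], seg[:, 1]).sum()
            chord = np.hypot(*(xy[-1] - xy[0]))
            return path / chord

        derived = np.array([[0, 0], [1, 0.2], [2, -0.1], [3, 0.3], [4, 0]], float)
        reference = np.array([[0, 0], [1, 0.5], [2, -0.4], [3, 0.6], [4, 0]], float)

        rel = sinuosity(derived) / sinuosity(reference)
        print(f"relative sinuosity: {rel:.3f}")  # < 1: less detail than reference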

  18. Quantitative methods to study epithelial morphogenesis and polarity.

    PubMed

    Aigouy, B; Collinet, C; Merkel, M; Sagner, A

    2017-01-01

    Morphogenesis of an epithelial tissue emerges from the behavior of its constituent cells, including changes in shape, rearrangements, and divisions. In many instances the directionality of these cellular events is controlled by the polarized distribution of specific molecular components. In recent years, our understanding of morphogenesis and polarity has benefited greatly from advances in genetics, microscopy, and image analysis, which now make it possible to measure cellular dynamics and polarity with unprecedented precision for entire tissues throughout their development. Here we review recent approaches to visualizing and measuring cell polarity and tissue morphogenesis. The chapter is organized like an experiment. We first discuss the choice of cell and polarity reporters and describe the use of mosaics to reveal hidden cell polarities or local morphogenetic events. Then, we outline application-specific advantages and disadvantages of different microscopy techniques and image projection algorithms. Next, we present methods to extract cell outlines in order to measure cell polarity and detect the cellular events underlying morphogenesis. Finally, we bridge scales by presenting approaches to quantify the specific contribution of each cellular event to global tissue deformation. Taken together, we provide an in-depth description of the available tools and theoretical concepts for quantitatively studying cell polarity and tissue morphogenesis over multiple scales.

  19. A Simple yet Accurate Method for Students to Determine Asteroid Rotation Periods from Fragmented Light Curve Data

    ERIC Educational Resources Information Center

    Beare, R. A.

    2008-01-01

    Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…
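
    The Excel-based procedure itself is not reproduced here. As a related, standard alternative for unevenly sampled (fragmented) light curves, the following Python sketch uses astropy's Lomb-Scargle periodogram on synthetic data; note that a real asteroid light curve typically shows two brightness maxima per rotation, which this toy signal ignores.

        import numpy as np
        from astropy.timeseries import LombScargle

        rng = np.random.default_rng(2)
        t = np.sort(rng.uniform(0.0, 10.0, 120))     # gappy observation times (h)
        period_true = 0.3                            # rotation period (h)
        mag = 0.2 * np.sin(2 * np.pi * t / period_true) \
              + rng.normal(0.0, 0.02, t.size)        # synthetic brightness

        frequency, power = LombScargle(t, mag).autopower(minimum_frequency=0.5,
                                                         maximum_frequency=10.0)
        print(f"best period: {1.0 / frequency[np.argmax(power)]:.3f} h")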

  20. Development of the local magnification method for quantitative evaluation of endoscope geometric distortion

    NASA Astrophysics Data System (ADS)

    Wang, Quanzeng; Cheng, Wei-Chung; Suresh, Nitin; Hua, Hong

    2016-05-01

    With improved diagnostic capabilities and complex optical designs, endoscopic technologies are advancing. As one of several important optical performance characteristics, geometric distortion can negatively affect size estimation and feature identification during diagnosis. A quantitative and simple distortion evaluation method is therefore needed by both the endoscope industry and medical device regulatory agencies, but no such method is yet available. While image correction techniques are rather mature, they depend heavily on computational power to process multidimensional image data based on complex mathematical models, which makes them difficult to understand. Some commonly used distortion evaluation methods, such as picture height distortion (DPH) or radial distortion (DRAD), are either too simple to describe the distortion accurately or subject to the error of deriving a reference image. We developed the basic local magnification (ML) method to evaluate endoscope distortion, and based on it we also developed ways to calculate DPH and DRAD. The method overcomes the aforementioned limitations, has a clear physical meaning over the whole field of view, and can facilitate lesion size estimation during diagnosis. Most importantly, the method can help bring endoscopic technology to market and could potentially be adopted in an international endoscope standard.

  1. Accurate surface tension measurement of glass melts by the pendant drop method.

    PubMed

    Chang, Yao-Yuan; Wu, Ming-Ya; Hung, Yi-Lin; Lin, Shi-Yow

    2011-05-01

    A pendant drop tensiometer, coupled with image digitization technology and a best-fitting algorithm, was built to accurately measure the surface tension of glass melts at high temperatures. More than one thousand edge-coordinate points were obtained for a pendant glass drop. These edge points were fitted with the theoretical drop profiles derived from the Young-Laplace equation to determine the surface tension of glass melt. The uncertainty of the surface tension measurements was investigated. The measurement uncertainty (σ) could be related to a newly defined factor of drop profile completeness (Fc): the larger the Fc is, the smaller σ is. Experimental data showed that the uncertainty of the surface tension measurement when using this pendant drop tensiometer could be ±3 mN/m for glass melts.
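
    For reference, the axisymmetric Young-Laplace profile that such tensiometers fit is commonly integrated in the Bashforth-Adams arc-length form (a standard formulation, sketched here in LaTeX; the paper's exact parameterization may differ):

        \frac{d\phi}{ds} = \frac{2}{b} + \frac{\Delta\rho\, g}{\gamma}\, z - \frac{\sin\phi}{x},
        \qquad
        \frac{dx}{ds} = \cos\phi,
        \qquad
        \frac{dz}{ds} = \sin\phi

    where s is the arc length along the drop profile, \phi the tangent angle, x and z the radial and vertical coordinates, b the radius of curvature at the drop apex, \Delta\rho the density difference, g gravity, and \gamma the surface tension recovered by the best fit to the digitized edge points.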

  2. Development of a method to accurately calculate the Dpb and quickly predict the strength of a chemical bond

    NASA Astrophysics Data System (ADS)

    Du, Xia; Zhao, Dong-Xia; Yang, Zhong-Zhi

    2013-02-01

    A new approach to characterizing and measuring bond strength has been developed. First, we propose a method to accurately calculate, in situ, the potential acting on an electron in a molecule (PAEM) at the saddle point along a chemical bond, denoted by Dpb. A direct method to quickly evaluate bond strength is then established. We choose some familiar molecules as models for benchmarking this method. As a practical application, the Dpb values of base pairs in DNA along C-H and N-H bonds are obtained for the first time. All results show that C7-H of A-T and C8-H of G-C are the relatively weak bonds and hence the positions injured in DNA damage. The significance of this work is twofold: (i) a method is developed to calculate the Dpb of sizable molecules in situ quickly and accurately; (ii) this work demonstrates the feasibility of quickly predicting bond strength in macromolecules.

  3. Rapid quantitative analysis of lipids using a colorimetric method in a microplate format.

    PubMed

    Cheng, Yu-Shen; Zheng, Yi; VanderGheynst, Jean S

    2011-01-01

    A colorimetric sulfo-phospho-vanillin (SPV) method was developed for high throughput analysis of total lipids. The developed method uses a reaction mixture that is maintained in a 96-well microplate throughout the entire assay. The new assay provides the following advantages over other methods of lipid measurement: (1) background absorbance can be easily corrected for each well, (2) there is less risk of handling and transferring sulfuric acid contained in reaction mixtures, (3) color develops more consistently providing more accurate measurement of absorbance, and (4) the assay can be used for quantitative measurement of lipids extracted from a wide variety of sources. Unlike other spectrophotometric approaches that use fluorescent dyes, the optimal spectra and reaction conditions for the developed assay do not vary with the sample source. The developed method was used to measure lipids in extracts from four strains of microalgae. No significant difference was found in lipid determination when lipid content was measured using the new method and compared to results obtained using a macro-gravimetric method.

  4. Thermography as a quantitative imaging method for assessing postoperative inflammation

    PubMed Central

    Christensen, J; Matzen, LH; Vaeth, M; Schou, S; Wenzel, A

    2012-01-01

    Objective To assess differences in skin temperature between the operated and control side of the face after mandibular third molar surgery using thermography. Methods 127 patients had 1 mandibular third molar removed. Before the surgery, standardized thermograms were taken of both sides of the patient's face using a Flir ThermaCam™ E320 (Precisions Teknik AB, Halmstad, Sweden). The imaging procedure was repeated 2 days and 7 days after surgery. A region of interest including the third molar region was marked on each image. The mean temperature within each region of interest was calculated. The difference between sides and over time were assessed using paired t-tests. Results No significant difference was found between the operated side and the control side either before or 7 days after surgery (p > 0.3). The temperature of the operated side (mean: 32.39 °C, range: 28.9–35.3 °C) was higher than that of the control side (mean: 32.06 °C, range: 28.5–35.0 °C) 2 days after surgery [0.33 °C, 95% confidence interval (CI): 0.22–0.44 °C, p < 0.001]. No significant difference was found between the pre-operative and the 7-day post-operative temperature (p > 0.1). After 2 days, the operated side was not significantly different from the temperature pre-operatively (p = 0.12), whereas the control side had a lower temperature (0.57 °C, 95% CI: 0.29–0.86 °C, p < 0.001). Conclusions Thermography seems useful for quantitative assessment of inflammation between the intervention side and the control side after surgical removal of mandibular third molars. However, thermography cannot be used to assess absolute temperature changes due to normal variations in skin temperature over time. PMID:22752326
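
    A minimal Python sketch of the statistical comparison described above: per-patient ROI means for the operated and control sides compared with a paired t-test. The synthetic values are loosely centred on the reported day-2 means and are not the study's data.

        import numpy as np
        from scipy.stats import ttest_rel

        rng = np.random.default_rng(3)
        operated = rng.normal(32.39, 1.2, 127)            # degrees C, day 2
        control = operated - rng.normal(0.33, 0.3, 127)   # paired per patient

        t_stat, p_value = ttest_rel(operated, control)
        print(f"t = {t_stat:.2f}, p = {p_value:.2e}")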

  5. Experimental Null Method to Guide the Development of Technical Procedures and to Control False-Positive Discovery in Quantitative Proteomics.

    PubMed

    Shen, Xiaomeng; Hu, Qiang; Li, Jun; Wang, Jianmin; Qu, Jun

    2015-10-02

    Comprehensive and accurate evaluation of data quality and false-positive biomarker discovery is critical to direct method development/optimization for quantitative proteomics, which nonetheless remains challenging, largely due to the high complexity and unique features of proteomic data. Here we describe an experimental null (EN) method to address this need. Because the method experimentally measures the null distribution (either technical or biological replicates) using the same proteomic samples, the same procedures and the same batch as the case-vs-control experiment, it correctly reflects the collective effects of technical variability (e.g., variation/bias in sample preparation, LC-MS analysis, and data processing) and project-specific features (e.g., characteristics of the proteome and biological variation) on the performance of quantitative analysis. As a proof of concept, we employed the EN method to assess quantitative accuracy and precision and the ability to quantify subtle ratio changes between groups, using different experimental and data-processing approaches and various cellular and tissue proteomes. It was found that the choice of quantitative features, sample size, experimental design, data-processing strategies, and quality of chromatographic separation can profoundly affect the quantitative precision and accuracy of label-free quantification. The EN method was also demonstrated as a practical tool to determine the optimal experimental parameters and a rational ratio cutoff for reliable protein quantification in specific proteomic experiments, for example, to identify the number of technical/biological replicates per group that affords sufficient power for discovery. Furthermore, we assessed the ability of the EN method to estimate levels of false positives in the discovery of altered proteins, using two concocted sample sets mimicking proteomic profiling with technical and biological replicates, respectively, in which the true composition of each set was known.

  6. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    PubMed

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been used widely in the medicine and health care sector. Within machine learning, classification and prediction are major fields of AI, and the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases, and for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most popular machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of the related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  7. Time-Accurate, Unstructured-Mesh Navier-Stokes Computations with the Space-Time CESE Method

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2006-01-01

    Application of the newly emerged space-time conservation element solution element (CESE) method to compressible Navier-Stokes equations is studied. In contrast to Euler equation solvers, several issues such as boundary conditions, numerical dissipation, and grid stiffness warrant systematic investigation and validation. Non-reflecting boundary conditions applied at the truncated boundary are also investigated from the standpoint of acoustic wave propagation. Validations of the numerical solutions are performed by comparison with exact solutions for steady-state as well as time-accurate viscous flow problems. The test cases cover a broad speed regime for problems ranging from acoustic wave propagation to 3D hypersonic configurations. Model problems pertinent to hypersonic configurations demonstrate the effectiveness of the CESE method in treating flows with shocks, unsteady waves, and separations. Good agreement with exact solutions suggests that the space-time CESE method provides a viable alternative for time-accurate Navier-Stokes calculations of a broad range of problems.

  8. k-Space Image Correlation Spectroscopy: A Method for Accurate Transport Measurements Independent of Fluorophore Photophysics

    PubMed Central

    Kolin, David L.; Ronis, David; Wiseman, Paul W.

    2006-01-01

    We present the theory and application of reciprocal space image correlation spectroscopy (kICS). This technique measures the number density, diffusion coefficient, and velocity of fluorescently labeled macromolecules in a cell membrane imaged on a confocal, two-photon, or total internal reflection fluorescence microscope. In contrast to r-space correlation techniques, we show kICS can recover accurate dynamics even in the presence of complex fluorophore photobleaching and/or “blinking”. Furthermore, these quantities can be calculated without nonlinear curve fitting, or any knowledge of the beam radius of the exciting laser. The number densities calculated by kICS are less sensitive to spatial inhomogeneity of the fluorophore distribution than densities measured using image correlation spectroscopy. We use simulations as a proof-of-principle to show that number densities and transport coefficients can be extracted using this technique. We present calibration measurements with fluorescent microspheres imaged on a confocal microscope, which recover Stokes-Einstein diffusion coefficients, and flow velocities that agree with single particle tracking measurements. We also show the application of kICS to measurements of the transport dynamics of α5-integrin/enhanced green fluorescent protein constructs in a transfected CHO cell imaged on a total internal reflection fluorescence microscope using charge-coupled device area detection. PMID:16861272

  9. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials

    SciTech Connect

    Thompson, A.P.; Swiler, L.P.; Trott, C.R.; Foiles, S.M.; Tucker, G.J.

    2015-03-15

    We present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum.

  10. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials

    NASA Astrophysics Data System (ADS)

    Thompson, A. P.; Swiler, L. P.; Trott, C. R.; Foiles, S. M.; Tucker, G. J.

    2015-03-01

    We present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum.
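
    A minimal Python sketch of the fitting step as described: since SNAP assumes atom energy is linear in the bispectrum components, the coefficients follow from weighted least-squares linear regression against the QM training set. The features, weights and energies below are random placeholders, not real bispectrum data.

        import numpy as np

        rng = np.random.default_rng(4)
        n_configs, n_components = 500, 30
        B = rng.normal(size=(n_configs, n_components))  # summed bispectrum per config
        beta_true = rng.normal(size=n_components)
        E_qm = B @ beta_true + rng.normal(0.0, 0.01, n_configs)  # "QM" energies
        w = rng.uniform(0.5, 2.0, n_configs)            # per-configuration weights

        # Weighted least squares: scale rows by sqrt(w), then solve.
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(B * sw[:, None], E_qm * sw, rcond=None)
        print(f"max coefficient error: {np.abs(beta - beta_true).max():.4f}")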

  11. Comparative Application of PLS and PCR Methods to Simultaneous Quantitative Estimation and Simultaneous Dissolution Test of Zidovudine - Lamivudine Tablets.

    PubMed

    Üstündağ, Özgür; Dinç, Erdal; Özdemir, Nurten; Tilkan, M Günseli

    2015-01-01

    In the development of new drug products and generic drug products, the simultaneous in-vitro dissolution behavior of oral dosage formulations is the most important indicator for quantitatively estimating the efficiency and biopharmaceutical characteristics of drug substances. This compels scientists in the field to develop powerful analytical methods that give more reliable, precise and accurate results in the quantitative analysis and dissolution testing of drug formulations. In this context, two chemometric tools, partial least squares (PLS) and principal component regression (PCR), were developed for the simultaneous quantitative estimation and dissolution testing of zidovudine (ZID) and lamivudine (LAM) in a tablet dosage form. The results obtained in this study strongly encourage the use of these methods for quality control, routine analysis and dissolution testing of marketed tablets containing the ZID and LAM drugs.
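
    A minimal Python sketch of the chemometric calibration idea using scikit-learn: spectra of calibration mixtures (X) are regressed on known concentrations (Y), and the fitted PLS model then predicts both drugs in an unknown sample. The spectra below are simulated Gaussian bands, not real ZID/LAM absorbance data, and the band centres are hypothetical.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(5)
        wavelengths = np.linspace(220, 320, 101)
        band = lambda c: np.exp(-0.5 * ((wavelengths - c) / 12.0) ** 2)
        spectra = np.vstack([band(265.0), band(280.0)])   # pure-component spectra

        conc = rng.uniform(2.0, 20.0, (25, 2))            # [ZID, LAM], ug/mL
        X = conc @ spectra + rng.normal(0.0, 0.002, (25, 101))

        pls = PLSRegression(n_components=2).fit(X, conc)
        unknown = np.array([10.0, 5.0]) @ spectra         # noise-free test mixture
        print(pls.predict(unknown[None, :]))              # approx. [[10.0, 5.0]]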

  12. Accurate Learning with Few Atlases (ALFA): an algorithm for MRI neonatal brain extraction and comparison with 11 publicly available methods

    NASA Astrophysics Data System (ADS)

    Serag, Ahmed; Blesa, Manuel; Moore, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Wilkinson, A. G.; MacNaught, Gillian; Semple, Scott I.; Boardman, James P.

    2016-03-01

    Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases ‘uniformly’ distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.

  13. Accurate Learning with Few Atlases (ALFA): an algorithm for MRI neonatal brain extraction and comparison with 11 publicly available methods.

    PubMed

    Serag, Ahmed; Blesa, Manuel; Moore, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Wilkinson, A G; Macnaught, Gillian; Semple, Scott I; Boardman, James P

    2016-03-24

    Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.

  14. Archimedes Revisited: A Faster, Better, Cheaper Method of Accurately Measuring the Volume of Small Objects

    ERIC Educational Resources Information Center

    Hughes, Stephen W.

    2005-01-01

    A little-known method of measuring the volume of small objects based on Archimedes' principle is described, which involves suspending an object in a water-filled container placed on electronic scales. The suspension technique is a variation on the hydrostatic weighing technique used for measuring volume. The suspension method was compared with two…
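
    The physics behind the suspension technique follows directly from Archimedes' principle (a standard result, stated here in LaTeX rather than taken from the paper): the scales register the buoyant force on the fully submerged object, so

        F_b = \rho_{\text{water}}\, V g = \Delta m\, g
        \quad\Longrightarrow\quad
        V = \frac{\Delta m}{\rho_{\text{water}}}

    where \Delta m is the increase in the balance reading once the object hangs fully under water without touching the container. With \rho_{\text{water}} \approx 0.998 g/cm^3 at room temperature, a reading increase of 5.00 g corresponds to a volume of about 5.01 cm^3.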

  15. Calculation of accurate channel spacing of an AWG optical demultiplexer applying proportional method

    NASA Astrophysics Data System (ADS)

    Seyringer, D.; Hodzic, E.

    2015-06-01

    We present a proportional method to correct the channel spacing between the transmitted output channels of an AWG. The developed proportional method was applied to a 64-channel, 50 GHz AWG, and the achieved results confirm a very good correlation between the designed channel spacing (50 GHz) and the channel spacing calculated from the simulated AWG transmission characteristics.

  16. Identification and evaluation of new reference genes in Gossypium hirsutum for accurate normalization of real-time quantitative RT-PCR data

    PubMed Central

    2010-01-01

    Background Normalization to reference genes, or housekeeping genes, can produce more accurate and reliable results from reverse transcription real-time quantitative polymerase chain reaction (qPCR). Recent studies have shown that no single housekeeping gene is universal for all experiments; thus, selecting suitable reference genes should be the first step of any qPCR analysis. Only a few studies on the identification of housekeeping genes have been carried out in plants, and qPCR studies on important crops such as cotton have therefore been hampered by the lack of suitable reference genes. Results Using two distinct algorithms, implemented in geNorm and NormFinder, we assessed the gene expression of nine candidate reference genes in cotton: GhACT4, GhEF1α5, GhFBX6, GhPP2A1, GhMZA, GhPTB, GhGAPC2, GhβTUB3 and GhUBQ14. The candidate reference genes were evaluated in 23 experimental samples consisting of six distinct plant organs, eight stages of flower development, four stages of fruit development and the floral verticils. The expression of the GhPP2A1 and GhUBQ14 genes was the most stable across all samples and also when distinct plant organs were examined. GhACT4 and GhUBQ14 presented the most stable expression during flower development, GhACT4 and GhFBX6 in the floral verticils, and GhMZA and GhPTB during fruit development. Our analysis provided the most suitable combination of reference genes for each experimental set tested, for use as internal controls for reliable qPCR data normalization. In addition, to illustrate the use of these cotton reference genes, we checked the expression of two cotton MADS-box genes in distinct plant and floral organs and also during flower development. Conclusion We tested the expression stabilities of nine candidate genes in a set of 23 tissue samples from cotton plants divided into five different experimental sets. As a result of this evaluation, we recommend the use of the GhUBQ14 and GhPP2A1 housekeeping genes as superior references for the normalization of gene expression measurements in cotton.

  17. A second-order accurate kinetic-theory-based method for inviscid compressible flows

    NASA Technical Reports Server (NTRS)

    Deshpande, Suresh M.

    1986-01-01

    An upwind method for the numerical solution of the Euler equations is presented. This method, called the kinetic numerical method (KNM), is based on the fact that the Euler equations are moments of the Boltzmann equation of the kinetic theory of gases when the distribution function is Maxwellian. The KNM consists of two phases, the convection phase and the collision phase. The method is unconditionally stable and explicit. It is highly vectorizable and can be easily made total variation diminishing for the distribution function by a suitable choice of the interpolation strategy. The method is applied to a one-dimensional shock-propagation problem and to a two-dimensional shock-reflection problem.

  18. Accurate and efficient velocity estimation using Transmission matrix formalism based on the domain decomposition method

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong

    2017-03-01

    Full waveform inversion (FWI) has been regarded as an effective tool for building the velocity model for subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are dependent on the initial model; this problem can be avoided by introducing the transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is required to estimate velocity perturbations, which is very time consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large-scale models. However, the ITSP method is currently designed for smooth media, so its estimates are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of velocity estimation for models with large perturbations, while also guaranteeing estimation accuracy. Numerical examples on smooth Gaussian ball models and on a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM and the FTIM. The estimated velocity distributions, the relative errors and the elapsed times all demonstrate the validity of the proposed DDM.

  19. Methods for Applying Accurate Digital PCR Analysis on Low Copy DNA Samples

    PubMed Central

    Whale, Alexandra S.; Cowen, Simon; Foy, Carole A.; Huggett, Jim F.

    2013-01-01

    Digital PCR (dPCR) is a highly accurate molecular approach, capable of precise measurements, offering a number of unique opportunities. However, in its current format dPCR can be limited by the amount of sample that can be analysed and consequently additional considerations such as performing multiplex reactions or pre-amplification can be considered. This study investigated the impact of duplexing and pre-amplification on dPCR analysis by using three different assays targeting a model template (a portion of the Arabidopsis thaliana alcohol dehydrogenase gene). We also investigated the impact of different template types (linearised plasmid clone and more complex genomic DNA) on measurement precision using dPCR. We were able to demonstrate that duplex dPCR can provide a more precise measurement than uniplex dPCR, while applying pre-amplification or varying template type can significantly decrease the precision of dPCR. Furthermore, we also demonstrate that the pre-amplification step can introduce measurement bias that is not consistent between experiments for a sample or assay and so could not be compensated for during the analysis of this data set. We also describe a model for estimating the prevalence of molecular dropout and identify this as a source of dPCR imprecision. Our data have demonstrated that the precision afforded by dPCR at low sample concentration can exceed that of the same template post pre-amplification thereby negating the need for this additional step. Our findings also highlight the technical differences between different templates types containing the same sequence that must be considered if plasmid DNA is to be used to assess or control for more complex templates like genomic DNA. PMID:23472156
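
    For context, the core digital PCR calculation is the standard Poisson correction (a textbook relation, not this paper's dropout model), sketched below in Python; the partition volume is a hypothetical example value.

        import numpy as np

        def dpcr_concentration(k, n, partition_volume_nl=0.85):
            """Copies per microlitre from k positive partitions out of n."""
            lam = -np.log(1.0 - k / n)        # mean copies per partition (Poisson)
            return lam / partition_volume_nl * 1e3

        # e.g. 6000 positives among 20000 partitions of 0.85 nL each:
        print(f"{dpcr_concentration(6000, 20000):.0f} copies/uL")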

  20. Is photometry an accurate and reliable method to assess boar semen concentration?

    PubMed

    Camus, A; Camugli, S; Lévêque, C; Schmitt, E; Staub, C

    2011-02-01

    Sperm concentration assessment is a key step in ensuring an appropriate sperm number per dose in species subjected to artificial insemination (AI). The aim of the present study was to evaluate the accuracy and reliability of two commercially available photometers, AccuCell™ and AccuRead™, pre-calibrated for boar semen, in comparison with the UltiMate™ boar version 12.3D, the NucleoCounter SP100 and the Thoma hemacytometer. For each type of instrument, concentration was measured on 34 boar semen samples in quadruplicate, and agreement between measurements and instruments was evaluated. Accuracy for both photometers was expressed as the mean percentage difference from the general mean: -0.6% for AccuCell™ and 0.5% for AccuRead™, and no significant differences were found among the instruments. Repeatability was 1.8% and 3.2% for AccuCell™ and AccuRead™, respectively. Small differences were observed between instruments (confidence interval 3%) except when the hemacytometer was used as a reference. Even though the hemacytometer is considered worldwide as the gold standard, it was the most variable instrument (confidence interval 7.1%). The conclusion is that routine photometric measurements of raw semen concentration are reliable, accurate and precise using AccuRead™ or AccuCell™. There are multiple steps in semen processing that can cause sperm loss and therefore increase differences between theoretical and real sperm numbers in doses. Potential biases that depend on the workflow, but not on the initial photometric measure of semen concentration, are discussed.

  1. Accurate dispersion interactions from standard density-functional theory methods with small basis sets.

    PubMed

    Mackie, Iain D; Dilabio, Gino A

    2010-06-21

    B971, PBE and PBE1 density functionals with 6-31G(d) basis sets are shown to accurately describe the binding in dispersion-bound dimers. This is achieved through the use of dispersion-correcting potentials (DCPs) in conjunction with counterpoise corrections. DCPs resemble, and are applied like, conventional effective core potentials, and can therefore be used with most computational chemistry programs without code modification: they are implemented by simple appendage to the input files for these programs. Binding energies are predicted to within ca. 11% and monomer separations to within ca. 0.06 Å of high-level wavefunction data using B971/6-31G(d)-DCP. Similar results are obtained for PBE and PBE1 with the 6-31G(d) basis sets and DCPs. Although the results obtained using the 3-21G(d) basis set are not as impressive, they nevertheless show promise as a means of initial study for a wide variety of dimers, including those dominated by dispersion, hydrogen bonding and a mixture of interactions. Notable improvement is found in comparison to M06-2X/6-31G(d) data, e.g., mean absolute deviations for the S22 set of dimers of ca. 13.6 and 16.5% for B971/6-31G(d)-DCP and M06-2X, respectively. However, it should be pointed out that the latter data were obtained using a larger integration grid size, since a smaller grid results in different binding energies and geometries for simple dispersion-bound dimers such as methane and ethene.
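
    The counterpoise correction mentioned above is simple arithmetic on three single-point energies; the following sketch shows the bookkeeping, with hypothetical energies standing in for real B971/6-31G(d)-DCP outputs.

    ```python
    def counterpoise_binding_energy(e_dimer, e_a_ghost, e_b_ghost):
        """Counterpoise-corrected binding energy (all energies in hartree).

        e_dimer   : energy of the AB dimer in the full dimer basis
        e_a_ghost : energy of monomer A computed with B's ghost basis functions
        e_b_ghost : energy of monomer B computed with A's ghost basis functions
        """
        return e_dimer - e_a_ghost - e_b_ghost

    # Hypothetical energies for a dispersion-bound dimer:
    e_bind = counterpoise_binding_energy(-80.8432, -40.4210, -40.4205)
    print(f"binding energy: {e_bind * 627.509:.2f} kcal/mol")   # hartree -> kcal/mol
    ```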

  2. Accurate Hf isotope determinations of complex zircons using the "laser ablation split stream" method

    NASA Astrophysics Data System (ADS)

    Fisher, Christopher M.; Vervoort, Jeffery D.; DuFrane, S. Andrew

    2014-01-01

    The "laser ablation split stream" (LASS) technique is a powerful tool for mineral-scale isotope analyses and in particular, for concurrent determination of age and Hf isotope composition of zircon. Because LASS utilizes two independent mass spectrometers, a large range of masses can be measured during a single ablation, and thus, the same sample volume can be analyzed for multiple geochemical systems. This paper describes a simple analytical setup using a laser ablation system coupled to a single-collector (for U-Pb age determination) and a multicollector (for Hf isotope analyses) inductively coupled plasma mass spectrometer (MC-ICPMS). The ability of the LASS concurrent Hf + age technique to extract meaningful Hf isotope compositions from isotopically zoned zircon is demonstrated using zircons from two Proterozoic gneisses from northern Idaho, USA. These samples illustrate the potential problems associated with inadvertently sampling multiple age and Hf components in zircons, as well as the potential of LASS to recover meaningful Hf isotope compositions. We suggest that such inadvertent sampling of differing age and Hf components can be a significant cause of excess scatter in Hf isotope analyses and demonstrate that the LASS approach offers a robust solution to these issues. The veracity of the approach is demonstrated by accurate analyses of 10 reference zircons with well-characterized age and Hf isotopic composition, using laser spot diameters of 30 and 40 µm. In order to expand the database of high-precision Lu-Hf isotope analyses of reference zircons, we present 27 new isotope dilution-MC-ICPMS Lu-Hf isotope measurements of five U-Pb zircon standards: FC1, Temora, R33, QGNG, and 91500.

  3. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    PubMed Central

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cell temperature) and allows fast comparison of MPPT methods, or prediction of their performance when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262

  4. A flux monitoring method for easy and accurate flow rate measurement in pressure-driven flows.

    PubMed

    Siria, Alessandro; Biance, Anne-Laure; Ybert, Christophe; Bocquet, Lydéric

    2012-03-07

    We propose a low-cost and versatile method to measure the flow rate in microfluidic channels under pressure-driven flows, thereby providing a simple characterization of the hydrodynamic permeability of the system. The technique is inspired by the current monitoring method usually employed to characterize electro-osmotic flows, and makes use of the measurement of the time-dependent electric resistance inside the channel associated with a moving salt front. We have successfully tested the method in a micrometer-size channel, as well as in a complex microfluidic channel with a varying cross-section, demonstrating its ability to detect internal shape variations.

  5. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    PubMed

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cell temperature) and allows fast comparison of MPPT methods, or prediction of their performance when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
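
    As a rough illustration of the kind of simulation this methodology enables, the sketch below combines a generic single-diode panel model with the classic perturb-and-observe MPPT algorithm; all parameter values are illustrative assumptions, not taken from the paper or any datasheet.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    I_PH, I_0, N_IDE, N_S, R_S = 8.2, 1e-9, 1.3, 60, 0.3   # illustrative parameters
    V_T = 0.0257                                           # thermal voltage at ~25 C (V)

    def panel_current(v):
        """Current of a single-diode panel model (shunt branch neglected),
        solving the implicit equation with a bracketed root finder."""
        f = lambda i: I_PH - I_0 * (np.exp((v + i * R_S) / (N_IDE * N_S * V_T)) - 1.0) - i
        return max(brentq(f, -100.0, I_PH + 1.0), 0.0)

    def perturb_and_observe(v=20.0, dv=0.2, steps=200):
        """Classic P&O MPPT: perturb the operating voltage and keep the
        direction that increases the extracted power."""
        p_prev, direction = 0.0, 1.0
        for _ in range(steps):
            p = v * panel_current(v)
            if p < p_prev:
                direction = -direction       # power dropped: reverse the perturbation
            p_prev, v = p, v + direction * dv
        return v, p_prev

    v_mpp, p_mpp = perturb_and_observe()
    print(f"MPP estimate: {v_mpp:.2f} V, {p_mpp:.1f} W")
    ```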

  6. Quantitative measurement of ultrasound pressure field by optical phase contrast method and acoustic holography

    NASA Astrophysics Data System (ADS)

    Oyama, Seiji; Yasuda, Jun; Hanayama, Hiroki; Yoshizawa, Shin; Umemura, Shin-ichiro

    2016-07-01

    A fast and accurate measurement of an ultrasound field with various exposure sequences is necessary to ensure the efficacy and safety of various ultrasound applications in medicine. The most common method used to measure an ultrasound pressure field, that is, hydrophone scanning, requires a long scanning time and potentially disturbs the field. This may limit the efficiency of developing applications of ultrasound. In this study, an optical phase contrast method enabling fast and noninterfering measurements is proposed. In this method, the modulated phase of light caused by the focused ultrasound pressure field is measured. Then, a computed tomography (CT) algorithm is applied to quantitatively reconstruct the three-dimensional (3D) pressure field. For a high-intensity focused ultrasound field, a new approach that combines the optical phase contrast method and acoustic holography was attempted: first, the optical measurement of focused ultrasound was rapidly performed over the field near the transducer; second, the nonlinear propagation of the measured ultrasound was simulated. The result of the new approach agreed well with that of a hydrophone measurement and improved on that of the phase contrast method alone with phase unwrapping.
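
    A minimal sketch of the tomographic step, assuming the measured optical phase shifts form a sinogram of line integrals through the pressure field and that scikit-image is available; the input file name, laser wavelength and piezo-optic coefficient are assumptions for illustration.

    ```python
    import numpy as np
    from skimage.transform import iradon

    # sinogram: measured optical phase shifts, shape (n_detector, n_angles),
    # acquired at the projection angles below (hypothetical data layout).
    angles = np.linspace(0.0, 180.0, 60, endpoint=False)
    sinogram = np.load("phase_projections.npy")          # placeholder input

    # Filtered back-projection recovers the 2D field whose line integrals
    # produced the phase; dividing by the piezo-optic coefficient dn/dp and
    # the optical wavenumber k converts phase to pressure.
    dn_dp = 1.5e-10                  # piezo-optic coefficient of water (Pa^-1, approx.)
    k = 2 * np.pi / 532e-9           # green laser assumed
    phase_slice = iradon(sinogram, theta=angles, filter_name="ramp")
    pressure_slice = phase_slice / (k * dn_dp)
    ```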

  7. Simultaneous quantitative determination of paracetamol and tramadol in tablet formulation using UV spectrophotometry and chemometric methods.

    PubMed

    Glavanović, Siniša; Glavanović, Marija; Tomišić, Vladislav

    2016-03-15

    The UV spectrophotometric methods for simultaneous quantitative determination of paracetamol and tramadol in paracetamol-tramadol tablets were developed. The spectrophotometric data obtained were processed by means of partial least squares (PLS) and genetic algorithm coupled with PLS (GA-PLS) methods in order to determine the content of active substances in the tablets. The results gained by chemometric processing of the spectroscopic data were statistically compared with those obtained by means of validated ultra-high performance liquid chromatographic (UHPLC) method. The accuracy and precision of data obtained by the developed chemometric models were verified by analysing the synthetic mixture of drugs, and by calculating recovery as well as relative standard error (RSE). A statistically good agreement was found between the amounts of paracetamol determined using PLS and GA-PLS algorithms, and that obtained by UHPLC analysis, whereas for tramadol GA-PLS results were proven to be more reliable compared to those of PLS. The simplest and the most accurate and precise models were constructed by using the PLS method for paracetamol (mean recovery 99.5%, RSE 0.89%) and the GA-PLS method for tramadol (mean recovery 99.4%, RSE 1.69%).
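
    A minimal sketch of the PLS branch of such a workflow using scikit-learn; the spectra files, component count and the RSE definition are assumptions for illustration, and the genetic-algorithm wavelength-selection step is omitted.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # X: UV absorbance spectra of calibration mixtures, shape (n_samples, n_wavelengths)
    # Y: known concentrations of [paracetamol, tramadol] per sample (hypothetical files)
    X = np.load("spectra.npy")
    Y = np.load("concentrations.npy")

    pls = PLSRegression(n_components=5)          # choose component count by cross-validation
    Y_cv = cross_val_predict(pls, X, Y, cv=10)

    # Crude aggregate recovery and relative standard error per analyte:
    recovery = 100.0 * Y_cv.mean(axis=0) / Y.mean(axis=0)
    rse = 100.0 * np.sqrt(((Y_cv - Y) ** 2).sum(axis=0) / (Y ** 2).sum(axis=0))
    print("mean recovery (%):", recovery, " RSE (%):", rse)

    pls.fit(X, Y)                                # final model on all calibration data
    unknown = pls.predict(np.load("tablet_spectra.npy"))
    ```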

  8. Simultaneous quantitative determination of paracetamol and tramadol in tablet formulation using UV spectrophotometry and chemometric methods

    NASA Astrophysics Data System (ADS)

    Glavanović, Siniša; Glavanović, Marija; Tomišić, Vladislav

    2016-03-01

    The UV spectrophotometric methods for simultaneous quantitative determination of paracetamol and tramadol in paracetamol-tramadol tablets were developed. The spectrophotometric data obtained were processed by means of partial least squares (PLS) and genetic algorithm coupled with PLS (GA-PLS) methods in order to determine the content of active substances in the tablets. The results gained by chemometric processing of the spectroscopic data were statistically compared with those obtained by means of validated ultra-high performance liquid chromatographic (UHPLC) method. The accuracy and precision of data obtained by the developed chemometric models were verified by analysing the synthetic mixture of drugs, and by calculating recovery as well as relative standard error (RSE). A statistically good agreement was found between the amounts of paracetamol determined using PLS and GA-PLS algorithms, and that obtained by UHPLC analysis, whereas for tramadol GA-PLS results were proven to be more reliable compared to those of PLS. The simplest and the most accurate and precise models were constructed by using the PLS method for paracetamol (mean recovery 99.5%, RSE 0.89%) and the GA-PLS method for tramadol (mean recovery 99.4%, RSE 1.69%).

  9. Linear Quantitative Profiling Method Fast Monitors Alkaloids of Sophora Flavescens That Was Verified by Tri-Marker Analyses

    PubMed Central

    Hou, Zhifei; Sun, Guoxiang; Guo, Yong

    2016-01-01

    The present study demonstrated the use of the Linear Quantitative Profiling Method (LQPM) to evaluate the quality of Alkaloids of Sophora flavescens (ASF) based on chromatographic fingerprints in an accurate, economical and fast way. Both linear qualitative and quantitative similarities were calculated in order to monitor the consistency of the samples. The results indicate that the linear qualitative similarity (LQLS) is not sufficiently discriminating due to the predominant presence of three alkaloid compounds (matrine, sophoridine and oxymatrine) in the test samples; however, the linear quantitative similarity (LQTS) was shown to be able to clearly distinguish the samples based on the difference in the quantitative content of all the chemical components. In addition, the fingerprint analysis was also supported by the quantitative analysis of three marker compounds. The LQTS was found to be highly correlated to the contents of the marker compounds, indicating that quantitative analysis of the marker compounds may be substituted with the LQPM based on the chromatographic fingerprints for the purpose of quantifying all chemicals of a complex sample system. Furthermore, once a reference fingerprint (RFP) has been developed from a standard preparation and the composition similarities have been calculated, LQPM can employ this classical mathematical model to effectively quantify the multiple components of ASF samples without any chemical standard. PMID:27529425
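
    The abstract does not give the LQPM formulas, but similarity measures of this kind are commonly built from the inner product of a sample fingerprint with the reference fingerprint; the sketch below shows one plausible formulation with hypothetical peak areas, not the authors' exact definitions.

    ```python
    import numpy as np

    def linear_qualitative_similarity(sample, reference):
        """Cosine-type similarity between a sample fingerprint and the
        reference fingerprint (peak-area vectors); insensitive to total content."""
        s, r = np.asarray(sample, float), np.asarray(reference, float)
        return float(s @ r / (np.linalg.norm(s) * np.linalg.norm(r)))

    def linear_quantitative_similarity(sample, reference):
        """Projection of the sample onto the reference, retaining magnitude
        information so that differences in total content are detected."""
        s, r = np.asarray(sample, float), np.asarray(reference, float)
        return float(s @ r / (r @ r))

    rfp = np.array([100.0, 250.0, 800.0])     # hypothetical marker peak areas
    batch = np.array([90.0, 230.0, 700.0])
    print(linear_qualitative_similarity(batch, rfp))    # ~1.0: same composition pattern
    print(linear_quantitative_similarity(batch, rfp))   # <1.0: lower overall content
    ```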

  10. Methods and challenges in quantitative imaging biomarker development.

    PubMed

    Abramson, Richard G; Burton, Kirsteen R; Yu, John-Paul J; Scalzetti, Ernest M; Yankeelov, Thomas E; Rosenkrantz, Andrew B; Mendiratta-Lala, Mishal; Bartholmai, Brian J; Ganeshan, Dhakshinamoorthy; Lenchik, Leon; Subramaniam, Rathan M

    2015-01-01

    Academic radiology is poised to play an important role in the development and implementation of quantitative imaging (QI) tools. This article, drafted by the Association of University Radiologists Radiology Research Alliance Quantitative Imaging Task Force, reviews current issues in QI biomarker research. We discuss motivations for advancing QI, define key terms, present a framework for QI biomarker research, and outline challenges in QI biomarker development. We conclude by describing where QI research and development is currently taking place and discussing the paramount role of academic radiology in this rapidly evolving field.

  11. Three-Signal Method for Accurate Measurements of Depolarization Ratio with Lidar

    NASA Technical Reports Server (NTRS)

    Reichardt, Jens; Baumgart, Rudolf; McGee, Thomas J.

    2003-01-01

    A method is presented that permits the determination of atmospheric depolarization-ratio profiles from three elastic-backscatter lidar signals with different sensitivity to the state of polarization of the backscattered light. The three-signal method is insensitive to experimental errors and does not require calibration of the measurement, which could cause large systematic uncertainties of the results, as is the case in the lidar technique conventionally used for the observation of depolarization ratios.

  12. Smartphone based hand-held quantitative phase microscope using the transport of intensity equation method.

    PubMed

    Meng, Xin; Huang, Huachuan; Yan, Keding; Tian, Xiaolin; Yu, Wei; Cui, Haoyang; Kong, Yan; Xue, Liang; Liu, Cheng; Wang, Shouyu

    2016-12-20

    In order to realize high contrast imaging with portable devices for potential mobile healthcare, we demonstrate a hand-held smartphone based quantitative phase microscope using the transport of intensity equation method. With a cost-effective illumination source and compact microscope system, multi-focal images of samples can be captured by the smartphone's camera via manual focusing. Phase retrieval is performed using a self-developed Android application, which calculates sample phases from multi-plane intensities via solving the Poisson equation. We test the portable microscope using a random phase plate with known phases, and to further demonstrate its performance, a red blood cell smear, a Pap smear and monocot root and broad bean epidermis sections are also successfully imaged. Considering its advantages as an accurate, high-contrast, cost-effective and field-portable device, the smartphone based hand-held quantitative phase microscope is a promising tool which can be adopted in the future in remote healthcare and medical diagnosis.
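
    A minimal sketch of TIE phase retrieval of the kind the app performs, assuming a nearly uniform in-focus intensity and periodic boundaries so the Poisson equation can be solved with FFTs; this is illustrative, not the authors' Android implementation.

    ```python
    import numpy as np

    def tie_phase(i_under, i_over, i_focus, dz, wavelength, pixel_size):
        """Transport-of-intensity phase retrieval under a uniform-intensity
        approximation, which reduces the TIE to a Poisson equation:
            laplacian(phi) = -(k / I0) * dI/dz
        solved here with an FFT Poisson solver (periodic boundaries assumed)."""
        k = 2.0 * np.pi / wavelength
        didz = (i_over - i_under) / (2.0 * dz)        # axial intensity derivative
        rhs = -k * didz / i_focus.mean()

        ny, nx = rhs.shape
        fx = np.fft.fftfreq(nx, d=pixel_size) * 2.0 * np.pi
        fy = np.fft.fftfreq(ny, d=pixel_size) * 2.0 * np.pi
        kx, ky = np.meshgrid(fx, fy)
        lap = -(kx ** 2 + ky ** 2)
        lap[0, 0] = 1.0                               # avoid division by zero at DC
        phi_hat = np.fft.fft2(rhs) / lap
        phi_hat[0, 0] = 0.0                           # phase is defined up to a constant
        return np.real(np.fft.ifft2(phi_hat))
    ```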

  13. Qualitative and quantitative characterization of protein-phosphoinositide interactions with liposome-based methods.

    PubMed

    Busse, Ricarda A; Scacioc, Andreea; Hernandez, Javier M; Krick, Roswitha; Stephan, Milena; Janshoff, Andreas; Thumm, Michael; Kühnel, Karin

    2013-05-01

    We characterized phosphoinositide binding of the S. cerevisiae PROPPIN Hsv2 qualitatively with density flotation assays and quantitatively through isothermal titration calorimetry (ITC) measurements using liposomes. We discuss the design of these experiments and show with liposome flotation assays that Hsv2 binds with high specificity to both PtdIns3P and PtdIns(3,5)P2. We propose liposome flotation assays as a more accurate alternative to the commonly used PIP strips for the characterization of phosphoinositide-binding specificities of proteins. We further quantitatively characterized PtdIns3P binding of Hsv2 with ITC measurements and determined a dissociation constant of 0.67 µM and a stoichiometry of 2:1 for PtdIns3P binding to Hsv2. PtdIns3P is crucial for the biogenesis of autophagosomes and their precursors. Besides the PROPPINs there are other PtdIns3P binding proteins with a link to autophagy, which includes the FYVE-domain containing proteins ZFYVE1/DFCP1 and WDFY3/ALFY and the PX-domain containing proteins Atg20 and Snx4/Atg24. The methods described could be useful tools for the characterization of these and other phosphoinositide-binding proteins.

  14. Fast and accurate numerical method for predicting gas chromatography retention time.

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-08-07

    Predictive modeling of gas chromatography compound retention depends on the retention factor (ki) and on the flow of the mobile phase. Thus, different approaches for determining an analyte's ki in column chromatography have been developed. The main one is based on the thermodynamic properties of the component and on the characteristics of the stationary phase. These models can be used to estimate the parameters and to optimize temperature programming in gas chromatography for the separation of compounds. Different authors have proposed the use of numerical methods for solving these models, but these methods demand greater computational time. Hence, a new method for solving the predictive modeling of analyte retention time is presented. This algorithm is an alternative to traditional methods because it transforms the task into root-finding problems within defined intervals. The proposed approach allows for retention time (tr) calculation, with accuracy determined by the user of the method, and significant reductions in computational time; it can also be used to evaluate the performance of other prediction methods.
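
    A minimal sketch of the root-finding idea, assuming a constant hold-up time, a linear temperature ramp and a two-parameter retention model k(T) = exp(a + b/T); all constants are illustrative, not the authors' data.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    T0, RATE, T_MAX = 313.15, 10.0, 573.15      # 40 C start, 10 K/min ramp (assumed)
    A, B = -10.0, 5000.0                        # illustrative thermodynamic constants
    T_M = 1.0                                   # hold-up time (min), assumed constant

    def temperature(t):                         # oven temperature program
        return min(T0 + RATE * t, T_MAX)

    def k_factor(T):                            # retention factor model k(T) = exp(a + b/T)
        return np.exp(A + B / T)

    def migration(t_r):
        """Fraction of the column traversed after t_r minutes; the analyte
        elutes when this reaches 1 (the classic retention integral)."""
        val, _ = quad(lambda t: 1.0 / (T_M * (1.0 + k_factor(temperature(t)))), 0.0, t_r)
        return val

    # Retention time is the root of migration(t_r) - 1 on a bracketing interval.
    t_r = brentq(lambda t: migration(t) - 1.0, 1e-6, 120.0)
    print(f"predicted retention time: {t_r:.2f} min")
    ```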

  15. A Framework for Mixing Methods in Quantitative Measurement Development, Validation, and Revision: A Case Study

    ERIC Educational Resources Information Center

    Luyt, Russell

    2012-01-01

    A framework for quantitative measurement development, validation, and revision that incorporates both qualitative and quantitative methods is introduced. It extends and adapts Adcock and Collier's work, and thus, facilitates understanding of quantitative measurement development, validation, and revision as an integrated and cyclical set of…

  16. Studying learning in the healthcare setting: the potential of quantitative diary methods.

    PubMed

    Ciere, Yvette; Jaarsma, Debbie; Visser, Annemieke; Sanderman, Robbert; Snippe, Evelien; Fleer, Joke

    2015-08-01

    Quantitative diary methods are longitudinal approaches that involve the repeated measurement of aspects of peoples' experience of daily life. In this article, we outline the main characteristics and applications of quantitative diary methods and discuss how their use may further research in the field of medical education. Quantitative diary methods offer several methodological advantages, such as measuring aspects of learning with great detail, accuracy and authenticity. Moreover, they enable researchers to study how and under which conditions learning in the health care setting occurs and in which way learning can be promoted. Hence, quantitative diary methods may contribute to theory development and the optimization of teaching methods in medical education.

  17. An accurate method for evaluating the kernel of the integral equation relating lift to downwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and the exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations that attain any desired trade-off between accuracy and computing cost.
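
    A minimal sketch of the fitting step, assuming fixed geometrically spaced exponents and solving only the linear least-squares problem for the coefficients (the paper also optimizes the exponent multiplier itself); the test function is illustrative, not the actual kernel.

    ```python
    import numpy as np

    def fit_exponential_sum(f, n_terms=8, b=0.25, ratio=2.0, u_max=40.0, n_pts=400):
        """Least-squares fit of f(u) on [0, u_max] by sum_j c_j * exp(-b * ratio**j * u),
        with geometrically spaced exponents; b and ratio would normally be tuned."""
        u = np.linspace(0.0, u_max, n_pts)
        exponents = b * ratio ** np.arange(n_terms)
        basis = np.exp(-np.outer(u, exponents))       # (n_pts, n_terms) design matrix
        coeffs, *_ = np.linalg.lstsq(basis, f(u), rcond=None)
        approx = basis @ coeffs
        return exponents, coeffs, np.max(np.abs(approx - f(u)))

    # Example: an algebraic decay loosely resembling the kernel's integrand
    f = lambda u: 1.0 - u / np.sqrt(1.0 + u * u)
    exps, cs, err = fit_exponential_sum(f)
    print(f"max fit error: {err:.2e}")
    ```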

  18. Highly effective and accurate weak point monitoring method for advanced design rule (1x nm) devices

    NASA Astrophysics Data System (ADS)

    Ahn, Jeongho; Seong, ShiJin; Yoon, Minjung; Park, Il-Suk; Kim, HyungSeop; Ihm, Dongchul; Chin, Soobok; Sivaraman, Gangadharan; Li, Mingwei; Babulnath, Raghav; Lee, Chang Ho; Kurada, Satya; Brown, Christine; Galani, Rajiv; Kim, JaeHyun

    2014-04-01

    Historically, when we used to manufacture semiconductor devices at 45 nm or larger design rules, IC manufacturing yield was mainly determined by global random variations, and therefore the chip manufacturers / manufacturing teams were mainly responsible for yield improvement. With the introduction of sub-45 nm semiconductor technologies, yield started to be dominated by systematic variations, primarily centered on resolution problems, copper/low-k interconnects and CMP. These local systematic variations, which have become decisively greater than global random variations, are design-dependent [1, 2], and therefore designers now share the responsibility of increasing yield with manufacturers / manufacturing teams. A widening manufacturing gap has led to a dramatic increase in design rules that are either too restrictive or do not guarantee a litho/etch hotspot-free design. The semiconductor industry is currently limited to 193 nm scanners, and no relief is expected from the equipment side to prevent or eliminate these systematic hotspots. Hence many design houses have come up with innovative products that check hotspots using model-based lithography checks to validate design manufacturability, which also account for the complex two-dimensional effects that stem from aggressive scaling of 193 nm lithography. Most of these hotspots (a.k.a. weak points) are especially seen on Back End of the Line (BEOL) process levels such as Mx ADI, Mx Etch and Mx CMP. Inspecting some of these BEOL levels can be extremely challenging as there is a great deal of wafer noise that can hinder an inspector's ability to detect and monitor the defects or weak points of interest. In this work we have attempted to accurately inspect the weak points using a novel broadband plasma optical inspection approach that enhances the defect signal from patterns of interest (POI) and precisely suppresses surrounding wafer noise. This new approach is a paradigm shift in wafer inspection

  19. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    SciTech Connect

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
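
    As a concrete instance of the RK3 family the report analyzes, the sketch below implements the strong-stability-preserving third-order Runge-Kutta scheme in Shu-Osher form and verifies its order on a linear test problem; this particular variant is chosen here for illustration, not because the report singles it out.

    ```python
    import numpy as np

    def ssp_rk3_step(f, t, y, dt):
        """One step of the third-order strong-stability-preserving
        Runge-Kutta scheme (Shu-Osher form)."""
        k1 = y + dt * f(t, y)
        k2 = 0.75 * y + 0.25 * (k1 + dt * f(t + dt, k1))
        return y / 3.0 + 2.0 / 3.0 * (k2 + dt * f(t + 0.5 * dt, k2))

    # Convergence check on y' = -y: halving dt should cut the error ~8x (3rd order).
    for n in (40, 80, 160):
        t, y, dt = 0.0, 1.0, 1.0 / n
        for _ in range(n):
            y = ssp_rk3_step(lambda t, y: -y, t, y, dt)
            t += dt
        print(n, abs(y - np.exp(-1.0)))
    ```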

  20. An Accurate Method for Free Vibration Analysis of Structures with Application to Plates

    NASA Astrophysics Data System (ADS)

    KEVORKIAN, S.; PASCAL, M.

    2001-10-01

    In this work, the continuous element method, which has been used as an alternative to the finite element method for the vibration analysis of frames, is applied to more general structures such as 3-D continua and rectangular plates. The method is based on the concept of the so-called impedance matrix giving, in the frequency domain, the linear relation between the generalized displacements of the boundaries and the generalized forces exerted on these boundaries. For a 3-D continuum, the concept of the impedance matrix is introduced assuming a particular kind of boundary conditions. For rectangular plates, this new development leads to the solution of vibration problems for boundary conditions other than simply supported ones.

  1. A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates

    NASA Astrophysics Data System (ADS)

    Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.

    2015-08-01

    We describe a new iteration method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates with continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The method, which is flexible in adapting to any form of object image, has a high measurement accuracy along with a low computational complexity, due to the maximum-likelihood procedure that is implemented to obtain the best fit, instead of a least-squares method and the Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world for the automatic discovery of asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network for analysing observations as a tool to detect faint moving objects in frames.
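
    A minimal sketch of the maximum-likelihood idea, fitting a 2D Gaussian image model to a small CCD cutout under Poisson photon statistics with a generic simplex optimizer rather than Levenberg-Marquardt; this is a simplification for illustration, not the COLITEC code.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def fit_subpixel_centroid(image, bg=0.0):
        """Maximum-likelihood subpixel centroid of a point source modeled as
        a symmetric 2D Gaussian on a known background."""
        ny, nx = image.shape
        yy, xx = np.mgrid[0:ny, 0:nx]

        def neg_log_like(p):
            x0, y0, sigma, amp = p
            model = bg + amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
            model = np.clip(model, 1e-9, None)
            return float(np.sum(model - image * np.log(model)))   # Poisson NLL (up to const.)

        start = (nx / 2, ny / 2, 1.5, float(image.max()))
        res = minimize(neg_log_like, start, method="Nelder-Mead")
        return res.x[:2]                                          # subpixel (x, y) estimate
    ```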

  2. Accurate method for the Brownian dynamics simulation of spherical particles with hard-body interactions

    NASA Astrophysics Data System (ADS)

    Barenbrug, Theo M. A. O. M.; Peters, E. A. J. F. (Frank); Schieber, Jay D.

    2002-11-01

    In Brownian Dynamics simulations, the diffusive motion of the particles is simulated by adding random displacements, proportional to the square root of the chosen time step. When computing average quantities, these Brownian contributions usually average out, and the overall simulation error becomes proportional to the time step. A special situation arises if the particles undergo hard-body interactions that instantaneously change their properties, as in absorption or association processes, chemical reactions, etc. The common "naïve simulation method" accounts for these interactions by checking for hard-body overlaps after every time step. Due to the simplification of the diffusive motion, a substantial part of the actual hard-body interactions is not detected by this method, resulting in an overall simulation error proportional to the square root of the time step. In this paper we take the hard-body interactions during the time step interval into account, using the relative positions of the particles at the beginning and at the end of the time step, as provided by the naïve method, and the analytical solution for the diffusion of a point particle around an absorbing sphere. Öttinger used a similar approach for the one-dimensional case [Stochastic Processes in Polymeric Fluids (Springer, Berlin, 1996), p. 270]. We applied the "corrected simulation method" to the case of a simple, second-order chemical reaction. The results agree with recent theoretical predictions [K. Hyojoon and Joe S. Kook, Phys. Rev. E 61, 3426 (2000)]. The obtained simulation error is proportional to the time step, instead of its square root. The new method needs substantially less simulation time to obtain the same accuracy. Finally, we briefly discuss a straightforward way to extend the method for simulations of systems with additional (deterministic) forces.
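
    The flat-wall analogue of this correction is easy to state: given the start and end distances from an absorbing boundary, a Brownian bridge touches the boundary within the step with probability exp(-d1*d2/(D*dt)). The sketch below uses that flat-wall expression to illustrate the principle; the paper itself uses the corresponding solution for an absorbing sphere.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def bridge_hit_probability(d_start, d_end, D, dt):
        """Probability that a Brownian path touched a flat absorbing wall
        during one step, given start/end distances on the same side."""
        return np.exp(-d_start * d_end / (D * dt))

    def corrected_absorption_step(d, D, dt):
        """Advance the wall-normal coordinate one step; return (new_d, absorbed)."""
        d_new = d + np.sqrt(2.0 * D * dt) * rng.standard_normal()
        if d_new <= 0.0:
            return d_new, True               # the naive overlap check catches this case
        # Correction: the path may have crossed and returned within the step.
        return d_new, bool(rng.random() < bridge_hit_probability(d, d_new, D, dt))
    ```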

  3. Quantitative methods to characterize morphological properties of cell lines.

    PubMed

    Mancia, Annalaura; Elliott, John T; Halter, Michael; Bhadriraju, Kiran; Tona, Alessandro; Spurlin, Tighe A; Middlebrooks, Bobby L; Baatz, John E; Warr, Gregory W; Plant, Anne L

    2012-07-01

    Descriptive terms are often used to characterize cells in culture, but the use of nonquantitative and poorly defined terms can lead to ambiguities when comparing data from different laboratories. Although recently there has been a good deal of interest in unambiguous identification of cell lines via their genetic markers, it is also critical to have definitive, quantitative metrics to describe cell phenotypic characteristics. Quantitative metrics of cell phenotype will aid the comparison of data from experiments performed at different times and in different laboratories where influences such as the age of the population and differences in culture conditions or protocols can potentially affect cellular metabolic state and gene expression in the absence of changes in the genetic profile. Here, we present examples of robust methodologies for quantitatively assessing characteristics of cell morphology and cell-cell interactions, and of growth rates of cells within the population. We performed these analyses with endothelial cell lines derived from dolphin, bovine and human, and with a mouse fibroblast cell line. These metrics quantify some characteristics of these cell lines that clearly distinguish them from one another, and provide quantitative information on phenotypic changes in one of the cell lines over a large number of passages.

  4. Guidelines for Reporting Quantitative Methods and Results in Primary Research

    ERIC Educational Resources Information Center

    Norris, John M.; Plonsky, Luke; Ross, Steven J.; Schoonen, Rob

    2015-01-01

    Adequate reporting of quantitative research about language learning involves careful consideration of the logic, rationale, and actions underlying both study designs and the ways in which data are analyzed. These guidelines, commissioned and vetted by the board of directors of "Language Learning," outline the basic expectations for…

  5. Quantitative Methods for Administrative Decision Making in Junior Colleges.

    ERIC Educational Resources Information Center

    Gold, Benjamin Knox

    With the rapid increase in number and size of junior colleges, administrators must take advantage of the decision-making tools already used in business and industry. This study investigated how these quantitative techniques could be applied to junior college problems. A survey of 195 California junior college administrators found that the problems…

  6. [The method of quantitative assessment of dentition aesthetic parameters].

    PubMed

    Ryakhovsky, A N; Kalacheva, Ya A

    2016-01-01

    This article describes a formula for calculating an aesthetic index of treatment outcome. The formula was derived on the basis of regression equations showing the dependence of the visual assessment on the magnitude of aesthetic violations. The formula can be used for objective quantitative evaluation of the aesthetics of the teeth when smiling, before and after dental treatment.

  7. Direct Coupling Method for Time-Accurate Solution of Incompressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Soh, Woo Y.

    1992-01-01

    A noniterative finite difference numerical method is presented for the solution of the incompressible Navier-Stokes equations with second order accuracy in time and space. Explicit treatment of convection and diffusion terms and implicit treatment of the pressure gradient give a single pressure Poisson equation when the discretized momentum and continuity equations are combined. A pressure boundary condition is not needed on solid boundaries in the staggered mesh system. The solution of the pressure Poisson equation is obtained directly by Gaussian elimination. This method is tested on flow problems in a driven cavity and a curved duct.
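
    A toy-scale sketch of the direct pressure Poisson solve, assembling a 5-point Neumann Laplacian as a dense matrix and solving it by Gaussian elimination as the abstract describes; a production staggered-grid solver would exploit the matrix structure instead, and the boundary treatment here is a simplification.

    ```python
    import numpy as np

    def solve_pressure_poisson(rhs, dx):
        """Direct (non-iterative) solution of the discrete pressure Poisson
        equation on a small 2D grid with homogeneous Neumann walls."""
        ny, nx = rhs.shape
        n = nx * ny
        A = np.zeros((n, n))
        for j in range(ny):
            for i in range(nx):
                r = j * nx + i
                for dj, di in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                    jj, ii = j + dj, i + di
                    if 0 <= jj < ny and 0 <= ii < nx:
                        A[r, jj * nx + ii] += 1.0
                        A[r, r] -= 1.0        # missing mirror nodes drop out (Neumann)
        A[0, :] = 0.0
        A[0, 0] = 1.0                         # pin one node: pressure is defined up to a constant
        b = (rhs * dx * dx).ravel()
        b[0] = 0.0
        return np.linalg.solve(A, b).reshape(ny, nx)
    ```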

  8. Advancing the study of violence against women using mixed methods: integrating qualitative methods into a quantitative research program.

    PubMed

    Testa, Maria; Livingston, Jennifer A; VanZile-Tamsen, Carol

    2011-02-01

    A mixed methods approach, combining quantitative with qualitative data methods and analysis, offers a promising means of advancing the study of violence. Integrating semi-structured interviews and qualitative analysis into a quantitative program of research on women's sexual victimization has resulted in valuable scientific insight and generation of novel hypotheses for testing. This mixed methods approach is described and recommendations for integrating qualitative data into quantitative research are provided.

  9. ADVANCING THE STUDY OF VIOLENCE AGAINST WOMEN USING MIXED METHODS: INTEGRATING QUALITATIVE METHODS INTO A QUANTITATIVE RESEARCH PROGRAM

    PubMed Central

    Testa, Maria; Livingston, Jennifer A.; VanZile-Tamsen, Carol

    2011-01-01

    A mixed methods approach, combining quantitative with qualitative data methods and analysis, offers a promising means of advancing the study of violence. Integrating semi-structured interviews and qualitative analysis into a quantitative program of research on women’s sexual victimization has resulted in valuable scientific insight and generation of novel hypotheses for testing. This mixed methods approach is described and recommendations for integrating qualitative data into quantitative research are provided. PMID:21307032

  10. Physalis method for heterogeneous mixtures of dielectrics and conductors: Accurately simulating one million particles using a PC

    NASA Astrophysics Data System (ADS)

    Liu, Qianlong

    2011-09-01

    Prosperetti's seminal Physalis method, an Immersed Boundary/spectral method, has been used extensively to investigate fluid flows with suspended solid particles. Its underlying idea of creating a cage and using a spectral general analytical solution around a discontinuity in a surrounding field as a computational mechanism to enable the accommodation of physical and geometric discontinuities is a general concept, and can be applied to other problems of importance to physics, mechanics, and chemistry. In this paper we provide a foundation for the application of this approach to the determination of the distribution of electric charge in heterogeneous mixtures of dielectrics and conductors. The proposed Physalis method is remarkably accurate and efficient. In the method, a spectral analytical solution is used to tackle the discontinuity and thus the discontinuous boundary conditions at the interface of two media are satisfied exactly. Owing to the hybrid finite difference and spectral schemes, the method is spectrally accurate if the modes are not sufficiently resolved, while higher than second-order accurate if the modes are sufficiently resolved, for the solved potential field. Because of the features of the analytical solutions, the derivative quantities of importance, such as electric field, charge distribution, and force, have the same order of accuracy as the solved potential field during postprocessing. This is an important advantage of the Physalis method over other numerical methods involving interpolation, differentiation, and integration during postprocessing, which may significantly degrade the accuracy of the derivative quantities of importance. The analytical solutions enable the user to use relatively few mesh points to accurately represent the regions of discontinuity. In addition, the spectral convergence and a linear relationship between the cost of computer memory/computation and particle numbers results in a very efficient method. In the present

  11. A novel method to accurately locate and count large numbers of steps by photobleaching

    PubMed Central

    Tsekouras, Konstantinos; Custer, Thomas C.; Jashnsaz, Hossein; Walter, Nils G.; Pressé, Steve

    2016-01-01

    Photobleaching event counting is a single-molecule fluorescence technique that is increasingly being used to determine the stoichiometry of protein and RNA complexes composed of many subunits in vivo as well as in vitro. By tagging protein or RNA subunits with fluorophores, activating them, and subsequently observing as the fluorophores photobleach, one obtains information on the number of subunits in a complex. The noise properties in a photobleaching time trace depend on the number of active fluorescent subunits. Thus, as fluorophores stochastically photobleach, noise properties of the time trace change stochastically, and these varying noise properties have created a challenge in identifying photobleaching steps in a time trace. Although photobleaching steps are often detected by eye, this method only works for high individual fluorophore emission signal-to-noise ratios and small numbers of fluorophores. With filtering methods or currently available algorithms, it is possible to reliably identify photobleaching steps for up to 20–30 fluorophores and signal-to-noise ratios down to ∼1. Here we present a new Bayesian method of counting steps in photobleaching time traces that takes into account stochastic noise variation in addition to complications such as overlapping photobleaching events that may arise from fluorophore interactions, as well as on-off blinking. Our method is capable of detecting ≥50 photobleaching steps even for signal-to-noise ratios as low as 0.1, can find up to ≥500 steps for more favorable noise profiles, and is computationally inexpensive. PMID:27654946

  12. Highly Accurate Beam Torsion Solutions Using the p-Version Finite Element Method

    NASA Technical Reports Server (NTRS)

    Smith, James P.

    1996-01-01

    A new treatment of the classical beam torsion boundary value problem is applied. Using the p-version finite element method with shape functions based on Legendre polynomials, torsion solutions for generic cross-sections comprised of isotropic materials are developed. Element shape functions for quadrilateral and triangular elements are discussed, and numerical examples are provided.

  13. Collision-induced fragmentation accurate mass spectrometric analysis methods to rapidly characterize phytochemicals in plant extracts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The rapid advances in analytical chromatography equipment have made the reliable and reproducible measurement of a wide range of plant chemical components possible. Full chemical characterization of a given plant material is possible with the new mass spectrometers currently available. New methods a...

  14. A Robust Method of Vehicle Stability Accurate Measurement Using GPS and INS

    NASA Astrophysics Data System (ADS)

    Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu

    2015-12-01

    With the development of the vehicle industry, controlling stability has become more and more important. Techniques of evaluating vehicle stability are in high demand. Integration of Global Positioning System (GPS) and Inertial Navigation System (INS) is a very practical method to get high-precision measurement data. Usually, the Kalman filter is used to fuse the data from GPS and INS. In this paper, a robust method is used to measure vehicle sideslip angle and yaw rate, which are two important parameters for vehicle stability. First, a four-wheel vehicle dynamic model is introduced, based on sideslip angle and yaw rate. Second, a double level Kalman filter is established to fuse the data from Global Positioning System and Inertial Navigation System. Then, this method is simulated on a sample vehicle, using Carsim software to test the sideslip angle and yaw rate. Finally, a real experiment is made to verify the advantage of this approach. The experimental results showed the merits of this method of measurement and estimation, and the approach can meet the design requirements of the vehicle stability controller.
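
    A single-axis sketch of the GPS/INS fusion idea (not the paper's double-level filter): the high-rate INS acceleration drives the prediction and the slower, noisier GPS position fixes correct it. All noise parameters and the synthetic measurements are illustrative assumptions.

    ```python
    import numpy as np

    dt = 0.01
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity state transition
    B = np.array([[0.5 * dt * dt], [dt]])      # acceleration input model
    H = np.array([[1.0, 0.0]])                 # GPS observes position only
    Q = 0.05 * (B @ B.T)                       # process noise from accelerometer uncertainty
    R = np.array([[4.0]])                      # GPS position variance (m^2)

    def ins_predict(x, P, accel):
        """High-rate prediction step driven by the INS accelerometer reading."""
        x = F @ x + B * accel
        P = F @ P @ F.T + Q
        return x, P

    def gps_update(x, P, pos):
        """Low-rate correction step from a GPS position fix."""
        y = np.array([[pos]]) - H @ x          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = np.zeros((2, 1)), np.eye(2)
    for step in range(1000):                   # 100 Hz INS, 1 Hz GPS (synthetic data)
        x, P = ins_predict(x, P, accel=0.2)
        if step % 100 == 99:
            x, P = gps_update(x, P, pos=0.5 * 0.2 * (dt * (step + 1)) ** 2)
    print(x.ravel())                           # fused position and velocity estimate
    ```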

  15. A second-order accurate immersed boundary-lattice Boltzmann method for particle-laden flows

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Fan, Liang-Shih

    2014-07-01

    A new immersed boundary-lattice Boltzmann method (IB-LBM) is presented for fully resolved simulations of incompressible viscous flows laden with rigid particles. The immersed boundary method (IBM) recently developed by Breugem (2012) [19] is adopted in the present method, with developments including the retraction technique, the multi-direct forcing method and the direct accounting of the inertia of the fluid contained within the particles. The present IB-LBM is, however, formulated with further improvement through the implementation of high-order Runge-Kutta schemes in the coupled fluid-particle interaction. The major challenge in implementing high-order Runge-Kutta schemes in the LBM is that the flow information such as density and velocity cannot be directly obtained at a fractional time step from the LBM, since the LBM only provides the flow information at integer time steps. This challenge can, however, be overcome, as shown in the present IB-LBM, by extrapolating the flow field around particles from the known flow field at the previous integer time step. The newly calculated fluid-particle interactions from the previous fractional time steps of the current integer time step are also accounted for in the extrapolation. The IB-LBM with high-order Runge-Kutta schemes developed in this study is validated by several benchmark applications. It is demonstrated, for the first time, that the IB-LBM has the capacity to resolve the translational and rotational motion of particles with second-order accuracy. The optimal retraction distances for spheres and tubes that help the method achieve second-order accuracy are found to be around 0.30 and -0.47 times the lattice spacing, respectively. Simulations of the Stokes flow through a simple cubic lattice of rotational spheres indicate that the lift force produced by the Magnus effect can be very significant in view of the magnitude of the drag force when the practical rotating speed of the spheres is encountered. This finding

  16. The Influence Relevance Voter: An Accurate And Interpretable Virtual High Throughput Screening Method

    PubMed Central

    Swamidass, S. Joshua; Azencott, Chloé-Agathe; Lin, Ting-Wan; Gramajo, Hugo; Tsai, Sheryl; Baldi, Pierre

    2009-01-01

    Given activity training data from High-Throughput Screening (HTS) experiments, virtual High-Throughput Screening (vHTS) methods aim to predict in silico the activity of untested chemicals. We present a novel method, the Influence Relevance Voter (IRV), specifically tailored for the vHTS task. The IRV is a low-parameter neural network which refines a k-nearest neighbor classifier by non-linearly combining the influences of a chemical's neighbors in the training set. Influences are decomposed, also non-linearly, into a relevance component and a vote component. The IRV is benchmarked using the data and rules of two large, open competitions, and its performance compared to the performance of other participating methods, as well as of an in-house Support Vector Machine (SVM) method. On these benchmark datasets, IRV achieves state-of-the-art results, comparable to the SVM in one case, and significantly better than the SVM in the other, retrieving three times as many actives in the top 1% of its prediction-sorted list. The IRV presents several other important advantages over SVMs and other methods: (1) the output predictions have a probabilistic semantics; (2) the underlying inferences are interpretable; (3) the training time is very short, on the order of minutes even for very large data sets; (4) the risk of overfitting is minimal, due to the small number of free parameters; and (5) additional information can easily be incorporated into the IRV architecture. Combined with its performance, these qualities make the IRV particularly well suited for vHTS. PMID:19391629

  17. A Time-Accurate Upwind Unstructured Finite Volume Method for Compressible Flow with Cure of Pathological Behaviors

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Jorgenson, Philip C. E.

    2007-01-01

    A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. However even with these up-to-date improvements, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A solution to these limitations is presented which uses a very simple dissipation model while still preserving second order accuracy. This scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability renders flexibility for use in complex geometry; and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes from high supersonic to subsonic at very low Mach number, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.

  18. An accurate and efficient acoustic eigensolver based on a fast multipole BEM and a contour integral method

    NASA Astrophysics Data System (ADS)

    Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng

    2016-01-01

    An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR(s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated, and the optimal choice of the coupling parameter as α = i/k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.
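
    The abstract does not name the specific contour integral variant, so the sketch below shows a generic Beyn-type extraction: probe the resolvent of T(z) on a circular contour, then reduce to a small ordinary eigenproblem via an SVD. The linear toy problem at the end is only a correctness check, not an acoustic example.

    ```python
    import numpy as np

    def contour_eigenvalues(T, center, radius, n_quad=64, n_probe=8, tol=1e-8):
        """Contour-integral (Beyn-type) extraction of the eigenvalues of a
        nonlinear eigenproblem T(z)v = 0 inside a circular contour."""
        m = T(center).shape[0]
        rng = np.random.default_rng(1)
        V = rng.standard_normal((m, n_probe))
        A0 = np.zeros((m, n_probe), complex)
        A1 = np.zeros((m, n_probe), complex)
        for k in range(n_quad):                      # trapezoidal rule on the circle
            z = center + radius * np.exp(2j * np.pi * k / n_quad)
            X = np.linalg.solve(T(z), V)             # resolvent applied to probe vectors
            w = radius * np.exp(2j * np.pi * k / n_quad) / n_quad
            A0 += w * X
            A1 += w * z * X
        U, s, Wh = np.linalg.svd(A0, full_matrices=False)
        r = int(np.sum(s > tol * s[0]))              # number of eigenvalues enclosed
        U, s, Wh = U[:, :r], s[:r], Wh[:r]
        Bmat = U.conj().T @ A1 @ Wh.conj().T / s     # linearized r x r problem
        return np.linalg.eigvals(Bmat)

    # Toy check with a linear problem T(z) = A - z*I (eigenvalues 1 and 2 enclosed):
    A = np.diag([1.0, 2.0, 5.0])
    print(np.sort_complex(contour_eigenvalues(lambda z: A - z * np.eye(3), 1.5, 1.0)))
    ```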

  19. [Quantitative analysis method of natural gas combustion process combining wavelength selection and outlier spectra detection].

    PubMed

    Cao, Hui; Hu, Luo-Na; Zhou, Yan

    2012-10-01

    The present paper uses a combined method of wavelength selection and outlier spectra detection for quantitative analysis of the natural gas combustion process based on its near-infrared spectra. According to the statistical distribution of partial least squares (PLS) model coefficients and prediction errors, the method realizes wavelength selection and outlier spectra detection, respectively. In contrast with PLS, PLS after leave-one-out outlier detection (LOO-PLS), uninformative variable elimination by PLS (UVE-PLS) and UVE-PLS after leave-one-out outlier detection (LOO-UVE-PLS), the method reduces the root-mean-squared error of prediction (RMSEP) of the CH4 model by 14.33%, 14.33%, 10.96% and 12.21%, of the CO model by 67.26%, 72.58%, 11.32% and 4.52%, and of the CO2 model by 5.95%, 19.7%, 36.71% and 4.04%, respectively. Experimental results demonstrate that the method can significantly decrease the number of selected wavelengths, reduce model complexity and effectively detect outlier spectra. The established prediction model of the analytes is thus more accurate as well as more robust.

  20. Polymorphism in nimodipine raw materials: development and validation of a quantitative method through differential scanning calorimetry.

    PubMed

    Riekes, Manoela Klüppel; Pereira, Rafael Nicolay; Rauber, Gabriela Schneider; Cuffini, Silvia Lucia; de Campos, Carlos Eduardo Maduro; Silva, Marcos Antonio Segatto; Stulzer, Hellen Karine

    2012-11-01

    Due to the physical-chemical and therapeutic impacts of polymorphism, its monitoring in raw materials is necessary. The purpose of this study was to develop and validate a quantitative method to determine the polymorphic content of nimodipine (NMP) raw materials based on differential scanning calorimetry (DSC). The polymorphs required for the development of the method were characterized through DSC, X-ray powder diffraction (XRPD) and Raman spectroscopy and their polymorphic identity was confirmed. The developed method was found to be linear, robust, precise, accurate and specific. Three different samples obtained from distinct suppliers (NMP 1, NMP 2 and NMP 3) were firstly characterized through XRPD and DSC as polymorphic mixtures. The determination of their polymorphic identity revealed that all samples presented the Modification I (Mod I) or metastable form in greatest proportion. Since the commercial polymorph is Mod I, the polymorphic characteristic of the samples analyzed needs to be investigated. Thus, the proposed method provides a useful tool for the monitoring of the polymorphic content of NMP raw materials.

  1. Quantitative methods for reconstructing tissue biomechanical properties in optical coherence elastography: a comparison study

    PubMed Central

    Han, Zhaolong; Li, Jiasong; Singh, Manmohan; Wu, Chen; Liu, Chih-hao; Wang, Shang; Idugboe, Rita; Raghunathan, Raksha; Sudheendran, Narendran; Aglyamov, Salavat R.; Twa, Michael D.; Larin, Kirill V.

    2015-01-01

    We present a systematic analysis of the accuracy of five different methods for extracting the biomechanical properties of soft samples using optical coherence elastography (OCE). OCE is an emerging noninvasive technique, which allows assessing the biomechanical properties of tissues with a micrometer spatial resolution. However, in order to accurately extract biomechanical properties from OCE measurements, application of a proper mechanical model is required. In this study, we utilize tissue-mimicking phantoms with controlled elastic properties and investigate the feasibility of four available methods for reconstructing elasticity (Young's modulus) based on OCE measurements of an air-pulse induced elastic wave. The approaches are based on the shear wave equation (SWE), the surface wave equation (SuWE), the Rayleigh-Lamb frequency equation (RLFE), and the finite element method (FEM). Elasticity values were compared with uniaxial mechanical testing. The results show that the RLFE and the FEM are more robust in quantitatively assessing elasticity than the other, simplified models. This study provides a foundation and reference for reconstructing the biomechanical properties of tissues from OCE data, which is important for the further development of noninvasive elastography methods. PMID:25860076
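
    The SWE and SuWE reconstructions amount to closed-form conversions from a measured wave speed to Young's modulus; the sketch below shows both standard formulas, assuming tissue-like density and near-incompressibility, with an illustrative wave speed (not the paper's data).

    ```python
    def young_modulus_swe(c_shear, rho=1000.0, nu=0.499):
        """Shear wave equation: E = 2*(1+nu)*rho*c_s**2 (nearly incompressible tissue)."""
        return 2.0 * (1.0 + nu) * rho * c_shear ** 2

    def young_modulus_suwe(c_surface, rho=1000.0, nu=0.499):
        """Surface wave equation: the Rayleigh wave travels at roughly
        c_R = c_s*(0.87 + 1.12*nu)/(1 + nu), which inverts to
        E = 2*rho*(1+nu)**3 * c_R**2 / (0.87 + 1.12*nu)**2."""
        return 2.0 * rho * (1.0 + nu) ** 3 * c_surface ** 2 / (0.87 + 1.12 * nu) ** 2

    # Hypothetical air-pulse elastic wave speed of 2.5 m/s in a tissue phantom:
    print(young_modulus_swe(2.5))    # ~18.7 kPa
    print(young_modulus_suwe(2.5))   # higher, since surface waves are slower than shear waves
    ```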

  2. Quantitative analysis of eugenol in clove extract by a validated HPLC method.

    PubMed

    Yun, So-Mi; Lee, Myoung-Heon; Lee, Kwang-Jick; Ku, Hyun-Ok; Son, Seong-Wan; Joo, Yi-Seok

    2010-01-01

    Clove (Eugenia caryophyllata) is a well-known medicinal plant used for diarrhea and digestive disorders, or in antiseptics, in Korea. Eugenol is the main active ingredient of clove and has been chosen as a marker compound for the chemical evaluation or QC of clove. This paper reports the development and validation of an HPLC-diode array detection (DAD) method for the determination of eugenol in clove. HPLC separation was accomplished on an XTerra RP18 column (250 x 4.6 mm id, 5 µm) with an isocratic mobile phase of 60% methanol and DAD at 280 nm. Calibration graphs were linear with very good correlation coefficients (r² > 0.9999) from 12.5 to 1000 ng/mL. The LOD was 0.81 ng/mL and the LOQ was 2.47 ng/mL. The method showed good intraday precision (%RSD 0.08-0.27%) and interday precision (%RSD 0.32-1.19%). The method was applied to the analysis of eugenol from clove cultivated in various countries (Indonesia, Singapore, and China). Quantitative analysis of the 15 clove samples showed that the content of eugenol varied significantly, ranging from 163 to 1049 ppb. The HPLC method for the determination of eugenol is sufficiently accurate to evaluate the quality and safety of clove, based on the results of this study.
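
    A minimal sketch of the calibration arithmetic behind figures like these: fit the calibration line, estimate LOD/LOQ ICH-style from the residual scatter, and quantify an unknown. The calibration data here are invented for illustration, not the paper's measurements.

    ```python
    import numpy as np

    # Hypothetical eugenol calibration data: concentration (ng/mL) vs. peak area.
    conc = np.array([12.5, 25, 50, 100, 250, 500, 1000], float)
    area = np.array([410, 830, 1650, 3310, 8260, 16500, 33050], float)

    slope, intercept = np.polyfit(conc, area, 1)
    residuals = area - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)                    # SD of the regression residuals

    # ICH-style detection/quantification limits from the calibration line:
    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    r2 = np.corrcoef(conc, area)[0, 1] ** 2
    print(f"LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL, r^2 = {r2:.5f}")

    def quantify(sample_area):
        """Concentration of eugenol in a sample from its peak area."""
        return (sample_area - intercept) / slope
    ```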

  3. A method for rapid quantitative assessment of biofilms with biomolecular staining and image analysis

    DOE PAGES

    Larimer, Curtis J.; Winder, Eric M.; Jeters, Robert T.; ...

    2015-12-07

    Here, the accumulation of bacteria in surface attached biofilms, or biofouling, can be detrimental to human health, dental hygiene, and many industrial processes. A critical need in identifying and preventing the deleterious effects of biofilms is the ability to observe and quantify their development. Analytical methods capable of assessing early stage fouling are cumbersome or lab-confined, subjective, and qualitative. Herein, a novel photographic method is described that uses biomolecular staining and image analysis to enhance contrast of early stage biofouling. A robust algorithm was developed to objectively and quantitatively measure surface accumulation of Pseudomonas putida from photographs and results were compared to independent measurements of cell density. Results from image analysis quantified biofilm growth intensity accurately and with approximately the same precision of the more laborious cell counting method. This simple method for early stage biofilm detection enables quantifiable measurement of surface fouling and is flexible enough to be applied from the laboratory to the field. Broad spectrum staining highlights fouling biomass, photography quickly captures a large area of interest, and image analysis rapidly quantifies fouling in the image.

  4. A method for rapid quantitative assessment of biofilms with biomolecular staining and image analysis

    SciTech Connect

    Larimer, Curtis J.; Winder, Eric M.; Jeters, Robert T.; Prowant, Matthew S.; Nettleship, Ian; Addleman, Raymond S.; Bonheyo, George T.

    2015-12-07

    Here, the accumulation of bacteria in surface attached biofilms, or biofouling, can be detrimental to human health, dental hygiene, and many industrial processes. A critical need in identifying and preventing the deleterious effects of biofilms is the ability to observe and quantify their development. Analytical methods capable of assessing early stage fouling are cumbersome or lab-confined, subjective, and qualitative. Herein, a novel photographic method is described that uses biomolecular staining and image analysis to enhance contrast of early stage biofouling. A robust algorithm was developed to objectively and quantitatively measure surface accumulation of Pseudomonas putida from photographs and results were compared to independent measurements of cell density. Results from image analysis quantified biofilm growth intensity accurately and with approximately the same precision of the more laborious cell counting method. This simple method for early stage biofilm detection enables quantifiable measurement of surface fouling and is flexible enough to be applied from the laboratory to the field. Broad spectrum staining highlights fouling biomass, photography quickly captures a large area of interest, and image analysis rapidly quantifies fouling in the image.
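
    A minimal sketch of the image-analysis step, assuming scikit-image is available, an RGB photograph, and that the stain renders biofilm darker than the clean surface; Otsu thresholding stands in here for the authors' more robust algorithm.

    ```python
    import numpy as np
    from skimage import io, color, filters

    def fouling_coverage(path):
        """Fraction of the imaged surface covered by stained biofilm:
        grayscale conversion, Otsu threshold, stained-area fraction."""
        img = io.imread(path)
        gray = color.rgb2gray(img) if img.ndim == 3 else img
        thresh = filters.threshold_otsu(gray)
        stained = gray < thresh          # stained biofilm assumed darker than background
        return float(stained.mean())

    # coverage = fouling_coverage("coupon_photo.png")   # hypothetical image file
    ```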

  5. A Variable Coefficient Method for Accurate Monte Carlo Simulation of Dynamic Asset Price

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Hung, Chih-Young; Yu, Shao-Ming; Chiang, Su-Yun; Chiang, Yi-Hui; Cheng, Hui-Wen

    2007-07-01

    In this work, we propose an adaptive Monte Carlo (MC) simulation technique to compute the sample paths for the dynamical asset price. In contrast to conventional MC simulation with constant drift and volatility (μ,σ), our MC simulation is performed with variable coefficient methods for (μ,σ) in the solution scheme, where the explored dynamic asset pricing model starts from the formulation of geometric Brownian motion. With the method of simultaneously updating (μ,σ), more than 5,000 runs of MC simulation are performed to fulfill the basic accuracy requirement of the large-scale computation and to suppress statistical variance. Daily changes of the stock market indexes in Taiwan and Japan are investigated and analyzed.
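
    As a rough illustration of the idea, the sketch below simulates geometric Brownian motion paths while re-estimating (μ, σ) at each step from a rolling window of log-returns. The window length, the update rule, and the synthetic return series are assumptions made for this demo; the abstract does not specify the authors' exact simultaneous update scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_paths(s0, log_returns, dt=1 / 252, n_paths=5000, window=60):
    """GBM Monte Carlo where drift and volatility are re-estimated at each
    step from a rolling window (a stand-in for the variable-coefficient
    (mu, sigma) update described in the abstract)."""
    n_steps = len(log_returns) - window
    paths = np.full((n_paths, n_steps + 1), float(s0))
    for t in range(n_steps):
        w = log_returns[t:t + window]            # rolling estimation window
        mu = w.mean() / dt                       # annualized drift estimate
        sigma = w.std(ddof=1) / np.sqrt(dt)      # annualized volatility
        z = rng.standard_normal(n_paths)
        paths[:, t + 1] = paths[:, t] * np.exp(
            (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z)
    return paths

# Synthetic daily log-returns standing in for an index series.
log_returns = rng.normal(3e-4, 0.012, size=560)
paths = simulate_paths(100.0, log_returns)       # 5000 runs, as in the paper
print(f"MC mean terminal price: {paths[:, -1].mean():.2f}")
```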

  6. Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization

    NASA Technical Reports Server (NTRS)

    Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)

    2008-01-01

    A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.

  7. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.

    2002-01-01

    This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the already developed empirical base of knowledge with these results, develop an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) The development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells. (2) The investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern. (3) The development of an error correction and resampling procedure that is based on error analysis of raster projection.

  8. A second-order accurate immersed boundary-lattice Boltzmann method for particle-laden flows

    SciTech Connect

    Zhou, Qiang; Fan, Liang-Shih

    2014-07-01

    A new immersed boundary-lattice Boltzmann method (IB-LBM) is presented for fully resolved simulations of incompressible viscous flows laden with rigid particles. The immersed boundary method (IBM) recently developed by Breugem (2012) [19] is adopted in the present method, including the retraction technique, the multi-direct forcing method and the direct account of the inertia of the fluid contained within the particles. The present IB-LBM is, however, formulated with further improvement with the implementation of high-order Runge–Kutta schemes in the coupled fluid–particle interaction. The major challenge in implementing high-order Runge–Kutta schemes in the LBM is that flow information such as density and velocity cannot be directly obtained at a fractional time step, since the LBM only provides flow information at integer time steps. This challenge is overcome in the present IB-LBM by extrapolating the flow field around particles from the known flow field at the previous integer time step. The newly calculated fluid–particle interactions from the previous fractional time steps of the current integer time step are also accounted for in the extrapolation. The IB-LBM with high-order Runge–Kutta schemes developed in this study is validated by several benchmark applications. It is demonstrated, for the first time, that the IB-LBM has the capacity to resolve the translational and rotational motion of particles with second-order accuracy. The optimal retraction distances for spheres and tubes that help the method achieve second-order accuracy are found to be around 0.30 and −0.47 times the lattice spacing, respectively. Simulations of the Stokes flow through a simple cubic lattice of rotating spheres indicate that the lift force produced by the Magnus effect can be very significant in comparison with the magnitude of the drag force when practical rotating speeds of the spheres are encountered.

  9. Empirical and accurate method for the three-dimensional electrostatic potential (EM-ESP) of biomolecules.

    PubMed

    Du, Qi-Shi; Wang, Cheng-Hua; Wang, Yu-Ting; Huang, Ri-Bo

    2010-04-01

    The electrostatic potential (ESP) is an important property of interactions within and between macromolecules, including those of importance in the life sciences. Semiempirical quantum chemical methods and classical Coulomb calculations fail to provide even qualitative ESP for many of these biomolecules. A new empirical ESP calculation method, namely, EM-ESP, is developed in this study, in which the traditional approach of point atomic charges and the classical Coulomb equation is discarded. In its place, the EM-ESP generates a three-dimensional electrostatic potential V_EM(r) in molecular space that is the sum of contributions from all component atoms. The contribution of an atom k is formulated as a Gaussian function g(r_k; α_k, β_k) = α_k/r_k^(β_k) with two parameters (α_k and β_k). The benchmark for the parameter optimization is the ESP obtained by using higher-level quantum chemical approaches (e.g., CCSD/TZVP). A set of atom-based parameters is optimized in a training set of common organic molecules. Calculated examples demonstrate that the EM-ESP approach is a vast improvement over the Coulombic approach in producing the molecular ESP contours that are comparable to the results obtained with higher-level quantum chemical methods. The atom-based parameters are shown to be transferable among closely related aromatic molecules. The atom-based ESP formulization and parametrization strategy can be extended to biological macromolecules, such as proteins, DNA, and RNA molecules. Since ESP is frequently used to rationalize and predict intermolecular interactions, we expect that the EM-ESP method will have important applications for studies of protein-ligand and protein-protein interactions in numerous areas of chemistry, molecular biology, and other life sciences.
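
    A minimal sketch of the EM-ESP functional form follows: the potential at a point is a sum of per-atom terms α_k / r_k^β_k. The coordinates and parameter values here are placeholders; the real α/β values are obtained by fitting against higher-level quantum chemical ESPs such as CCSD/TZVP.

```python
import numpy as np

def em_esp(points, atom_coords, alphas, betas):
    """EM-ESP potential V(r) = sum_k alpha_k / r_k**beta_k, where r_k is
    the distance from the evaluation point to atom k (per the abstract)."""
    v = np.zeros(len(points))
    for coord, a, b in zip(atom_coords, alphas, betas):
        r = np.linalg.norm(points - coord, axis=1)
        v += a / r ** b
    return v

# Toy diatomic with hypothetical (unfitted) parameters.
atoms = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0]])
alphas, betas = [0.45, -0.30], [1.2, 1.5]
grid = np.array([[2.0, 0.0, 0.0], [0.55, 1.5, 0.0]])
print(em_esp(grid, atoms, alphas, betas))
```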

  10. Computational methods toward accurate RNA structure prediction using coarse-grained and all-atom models.

    PubMed

    Krokhotin, Andrey; Dokholyan, Nikolay V

    2015-01-01

    Computational methods can provide significant insights into RNA structure and dynamics, bridging the gap in our understanding of the relationship between structure and biological function. Simulations enrich and enhance our understanding of data derived on the bench, as well as provide feasible alternatives to costly or technically challenging experiments. Coarse-grained computational models of RNA are especially important in this regard, as they allow analysis of events occurring in timescales relevant to RNA biological function, which are inaccessible through experimental methods alone. We have developed a three-bead coarse-grained model of RNA for discrete molecular dynamics simulations. This model is efficient in de novo prediction of short RNA tertiary structure, starting from RNA primary sequences of less than 50 nucleotides. To complement this model, we have incorporated additional base-pairing constraints and have developed a bias potential reliant on data obtained from hydroxyl probing experiments that guide RNA folding to its correct state. By introducing experimentally derived constraints to our computer simulations, we are able to make reliable predictions of RNA tertiary structures up to a few hundred nucleotides. Our refined model exemplifies a valuable benefit achieved through integration of computation and experimental methods.

  11. EEMD based pitch evaluation method for accurate grating measurement by AFM

    NASA Astrophysics Data System (ADS)

    Li, Changsheng; Yang, Shuming; Wang, Chenying; Jiang, Zhuangde

    2016-09-01

    The pitch measurement and AFM calibration precision are significantly influenced by the grating pitch evaluation method. This paper presents an ensemble empirical mode decomposition (EEMD) based pitch evaluation method to relieve the accuracy deterioration caused by high- and low-frequency components of the scanning profile during pitch evaluation. The simulation analysis shows that the application of EEMD can improve the pitch accuracy of the FFT-FT algorithm; the pitch error is small when the iteration number of the FFT-FT algorithm is 8. The AFM measurement of the 500 nm-pitch one-dimensional grating shows that the EEMD based pitch evaluation method could improve the pitch precision, especially the grating line position precision, and greatly expand the applicability of the gravity center algorithm when particles and impression marks are distributed on the sample surface. The measurement indicates that the nonlinearity was stable, and that the nonlinearity of the x axis and of forward scanning was much smaller than that of their counterparts. Finally, a detailed pitch measurement uncertainty evaluation model suitable for commercial AFMs is demonstrated, and a pitch uncertainty in the sub-nanometer range was achieved. The pitch uncertainty was reduced by about 10% by EEMD.
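
    The sketch below illustrates one plausible reading of the pipeline: decompose a scan profile with EEMD (here via the third-party PyEMD package), discard the highest- and lowest-frequency components, and estimate the pitch from the dominant FFT peak. Which IMFs to keep is an illustrative choice, not the paper's prescription.

```python
import numpy as np
from PyEMD import EEMD  # pip install EMD-signal

# Synthetic AFM line profile: 500 nm-pitch grating plus drift and noise.
x = np.linspace(0, 5000, 2048)                    # position (nm)
rng = np.random.default_rng(1)
profile = (np.sin(2 * np.pi * x / 500.0)          # grating signal
           + 0.3 * (x / 5000.0) ** 2              # low-frequency bow/drift
           + 0.1 * rng.standard_normal(x.size))   # high-frequency noise

imfs = EEMD().eemd(profile)                       # ensemble decomposition

# Keep mid-band IMFs: dropping the first (noise) and last (trend) IMF
# is an assumption made for this demo.
clean = imfs[1:-1].sum(axis=0)

# Pitch from the dominant spatial frequency of the cleaned profile.
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])
spectrum = np.abs(np.fft.rfft(clean * np.hanning(x.size)))
pitch = 1.0 / freqs[spectrum[1:].argmax() + 1]    # skip the DC bin
print(f"estimated pitch ~ {pitch:.1f} nm")
```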

  12. Highly accurate spatial mode generation using spatial cross modulation method for mode division multiplexing

    NASA Astrophysics Data System (ADS)

    Sakuma, Hiroki; Okamoto, Atsushi; Shibukawa, Atsushi; Goto, Yuta; Tomita, Akihisa

    2016-02-01

    We propose a spatial mode generation technology using spatial cross modulation (SCM) for mode division multiplexing (MDM). The most well-known method for generating arbitrary complex amplitude fields is to display an off-axis computer-generated hologram (CGH) on a spatial light modulator (SLM). In this method, however, the desired complex amplitude field is obtained with first-order diffraction light, which critically lowers the light utilization efficiency. In the SCM, on the other hand, the desired complex field is provided with zeroth-order diffraction light. For this reason, our technology can generate spatial modes with high light utilization efficiency in addition to high accuracy. In this study, first, a numerical simulation was performed to verify that the SCM is applicable to spatial mode generation. Next, we compared our technology with a technology using an off-axis amplitude hologram, as a representative complex amplitude generation method, from the two viewpoints of coupling efficiency and light utilization efficiency. The simulation results showed that our technology can achieve considerably high light utilization efficiency while maintaining coupling efficiency comparable to the technology using an off-axis amplitude hologram. Finally, we performed an experiment on spatial mode generation using the SCM. Experimental results showed that our technology has great potential to realize spatial mode generation with high accuracy.

  13. Quantitative risk assessment methods for cancer and noncancer effects.

    PubMed

    Baynes, Ronald E

    2012-01-01

    Human health risk assessments have evolved from the more qualitative approaches to more quantitative approaches in the past decade. This has been facilitated by the improvement in computer hardware and software capability and novel computational approaches being slowly recognized by regulatory agencies. These events have helped reduce the reliance on experimental animals as well as better utilization of published animal toxicology data in deriving quantitative toxicity indices that may be useful for risk management purposes. This chapter briefly describes some of the approaches as described in the guidance documents from several of the regulatory agencies as it pertains to hazard identification and dose-response assessment of a chemical. These approaches are contrasted with more novel computational approaches that provide a better grasp of the uncertainty often associated with chemical risk assessments.

  14. An Accurate Calibration Method Based on Velocity in a Rotational Inertial Navigation System.

    PubMed

    Zhang, Qian; Wang, Lei; Liu, Zengjun; Feng, Peide

    2015-07-28

    Rotation modulation is an effective method to enhance the accuracy of an inertial navigation system (INS) by modulating the gyroscope drifts and accelerometer bias errors into periodically varying components. The typical RINS drives the inertial measurement unit (IMU) rotation along the vertical axis and the horizontal sensors' errors are modulated, however, the azimuth angle error is closely related to vertical gyro drift, and the vertical gyro drift also should be modulated effectively. In this paper, a new rotation strategy in a dual-axis rotational INS (RINS) is proposed and the drifts of three gyros could be modulated, respectively. Experimental results from a real dual-axis RINS demonstrate that the maximum azimuth angle error is decreased from 0.04° to less than 0.01° during 1 h. Most importantly, the changing of rotation strategy leads to some additional errors in the velocity which is unacceptable in a high-precision INS. Then the paper studies the basic reason underlying horizontal velocity errors in detail and a relevant new calibration method is designed. Experimental results show that after calibration and compensation, the fluctuation and stages in the velocity curve disappear and velocity precision is improved.

  15. Simple and accurate methods for quantifying deformation, disruption, and development in biological tissues

    PubMed Central

    Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros

    2014-01-01

    When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601

  16. An accurate and efficient method to predict the electronic excitation energies of BODIPY fluorescent dyes.

    PubMed

    Wang, Jia-Nan; Jin, Jun-Ling; Geng, Yun; Sun, Shi-Ling; Xu, Hong-Liang; Lu, Ying-Hua; Su, Zhong-Min

    2013-03-15

    Recently, the extreme learning machine neural network (ELMNN) has been proposed as a valid computing method to predict nonlinear optical properties successfully (Wang et al., J. Comput. Chem. 2012, 33, 231). In this work, first, we follow this line of work to predict electronic excitation energies using the ELMNN method. Significantly, the root mean square deviation between the predicted and experimental electronic excitation energies of 90 4,4-difluoro-4-bora-3a,4a-diaza-s-indacene (BODIPY) derivatives has been reduced to 0.13 eV. Second, four groups of molecular descriptors are considered when building the computing models. The results show that the quantum chemical descriptors have the closest intrinsic relation with the electronic excitation energy values. Finally, a user-friendly web server (EEEBPre: Prediction of electronic excitation energies for BODIPY dyes), freely accessible to the public at http://202.198.129.218, has been built for prediction. This web server returns predicted electronic excitation energy values of BODIPY dyes that are highly consistent with the experimental values. We hope that this web server will be helpful to theoretical and experimental chemists in related research.
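
    For readers unfamiliar with the model class, here is a minimal extreme learning machine regressor: a fixed random hidden layer plus a least-squares solve for the output weights. The descriptor matrix and target values are synthetic stand-ins; this is the generic ELM algorithm, not the authors' trained ELMNN.

```python
import numpy as np

class ELMRegressor:
    """Minimal extreme learning machine: random fixed hidden layer plus a
    least-squares solve for the output weights (no backpropagation)."""

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)       # random hidden activations
        self.beta = np.linalg.pinv(H) @ y      # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Synthetic stand-in: 90 dyes x 8 descriptors -> excitation energy (eV).
rng = np.random.default_rng(1)
X = rng.normal(size=(90, 8))
y = 2.5 + 0.2 * X[:, 0] - 0.1 * X[:, 1] ** 2 + 0.05 * rng.normal(size=90)
model = ELMRegressor().fit(X[:70], y[:70])
rmse = np.sqrt(np.mean((model.predict(X[70:]) - y[70:]) ** 2))
print(f"test RMSE = {rmse:.3f} eV")
```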

  17. An Accurate Calibration Method Based on Velocity in a Rotational Inertial Navigation System

    PubMed Central

    Zhang, Qian; Wang, Lei; Liu, Zengjun; Feng, Peide

    2015-01-01

    Rotation modulation is an effective method to enhance the accuracy of an inertial navigation system (INS) by modulating the gyroscope drifts and accelerometer bias errors into periodically varying components. The typical RINS drives the inertial measurement unit (IMU) rotation along the vertical axis and the horizontal sensors’ errors are modulated, however, the azimuth angle error is closely related to vertical gyro drift, and the vertical gyro drift also should be modulated effectively. In this paper, a new rotation strategy in a dual-axis rotational INS (RINS) is proposed and the drifts of three gyros could be modulated, respectively. Experimental results from a real dual-axis RINS demonstrate that the maximum azimuth angle error is decreased from 0.04° to less than 0.01° during 1 h. Most importantly, the changing of rotation strategy leads to some additional errors in the velocity which is unacceptable in a high-precision INS. Then the paper studies the basic reason underlying horizontal velocity errors in detail and a relevant new calibration method is designed. Experimental results show that after calibration and compensation, the fluctuation and stages in the velocity curve disappear and velocity precision is improved. PMID:26225983

  18. A novel method for more accurately mapping the surface temperature of ultrasonic transducers.

    PubMed

    Axell, Richard G; Hopper, Richard H; Jarritt, Peter H; Oxley, Chris H

    2011-10-01

    This paper introduces a novel method for measuring the surface temperature of ultrasound transducer membranes and compares it with two standard measurement techniques. The surface temperature rise was measured as defined in the IEC Standard 60601-2-37. The measurement techniques were (i) thermocouple, (ii) thermal camera and (iii) a novel infra-red (IR) "micro-sensor." Peak transducer surface measurements taken with the thermocouple and thermal camera were 3.7 ± 0.7°C (95% CI) and 4.3 ± 1.8°C (95% CI) below the limits of the IEC Standard, respectively. Measurements taken with the novel IR micro-sensor exceeded these limits by 3.3 ± 0.9°C (95% CI). The ambiguity between our novel method and the standard techniques could have direct patient safety implications because the IR micro-sensor measurements were beyond the set limits. The spatial resolution of the measurement technique is not well defined in the IEC Standard, and this has to be taken into consideration when selecting which measurement technique is used to determine the maximum surface temperature.

  19. A method for the accurate and smooth approximation of standard thermodynamic functions

    NASA Astrophysics Data System (ADS)

    Coufal, O.

    2013-01-01

    A method is proposed for the calculation of approximations of standard thermodynamic functions. The method is consistent with the physical properties of standard thermodynamic functions. This means that the approximation functions are, in contrast to the hitherto used approximations, continuous and smooth in every temperature interval in which no phase transformations take place. The calculation algorithm was implemented by the SmoothSTF program in the C++ language, which is part of this paper. Program summary: Program title: SmoothSTF. Catalogue identifier: AENH_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENH_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 3807. No. of bytes in distributed program, including test data, etc.: 131965. Distribution format: tar.gz. Programming language: C++. Computer: any computer with the gcc version 4.3.2 compiler. Operating system: Debian GNU/Linux 6.0; the program can be run in any operating system in which the gcc compiler can be installed, see http://gcc.gnu.org/install/specific.html. RAM: 256 MB is sufficient for a table of standard thermodynamic functions with 500 lines. Classification: 4.9. Nature of problem: Standard thermodynamic functions (STF) of individual substances are given by thermal capacity at constant pressure, entropy and enthalpy. STF are continuous and smooth in every temperature interval in which no phase transformations take place. The temperature dependence of STF, as expressed by a table of values, is approximated by temperature functions for further application. In the paper, a method is proposed for calculating approximation functions which, in contrast to the hitherto used approximations, are continuous and smooth in every temperature interval. Solution method: The approximation functions are

  20. Assessment of a high-order accurate Discontinuous Galerkin method for turbomachinery flows

    NASA Astrophysics Data System (ADS)

    Bassi, F.; Botti, L.; Colombo, A.; Crivellini, A.; Franchina, N.; Ghidoni, A.

    2016-04-01

    In this work the capabilities of a high-order Discontinuous Galerkin (DG) method applied to the computation of turbomachinery flows are investigated. The Reynolds averaged Navier-Stokes equations coupled with the two-equation k-ω turbulence model are solved to predict the flow features, either in a fixed or rotating reference frame, to simulate the fluid flow around bodies that operate under an imposed steady rotation. To ensure, by design, the positivity of all thermodynamic variables at a discrete level, a set of primitive variables based on pressure and temperature logarithms is used. The flow fields through the MTU T106A low-pressure turbine cascade and the NASA Rotor 37 axial compressor have been computed up to fourth-order accuracy and compared to the experimental and numerical data available in the literature.

  1. Practical implementation of an accurate method for multilevel design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1987-01-01

    Solution techniques for handling large scale engineering optimization problems are reviewed. Potentials for practical applications as well as their limited capabilities are discussed. A new solution algorithm for design sensitivity is proposed. The algorithm is based upon the multilevel substructuring concept coupled with the adjoint method of sensitivity analysis. There are no approximations involved in the present algorithm except the usual approximations introduced by the discretization of the finite element model. Results from the six- and thirty-bar planar truss problems show that the proposed multilevel scheme for sensitivity analysis is more effective (in terms of computer in-core memory and total CPU time) than a conventional (one-level) scheme, even on small problems. The new algorithm is expected to perform better on larger problems, and its application on the new generation of computer hardware with 'parallel processing' capability is very promising.

  2. An Inexpensive, Stable, and Accurate Relative Humidity Measurement Method for Challenging Environments.

    PubMed

    Zhang, Wei; Ma, Hong; Yang, Simon X

    2016-03-18

    In this research, an improved psychrometer is developed to solve practical issues arising in the relative humidity measurement of challenging drying environments for meat manufacturing in agricultural and agri-food industries. The design in this research focused on the structure of the improved psychrometer, signal conversion, and calculation methods. The experimental results showed the effect of varying psychrometer structure on relative humidity measurement accuracy. An industrial application to dry-cured meat products demonstrated the effective performance of the improved psychrometer being used as a relative humidity measurement sensor in meat-drying rooms. In a drying environment for meat manufacturing, the achieved measurement accuracy for relative humidity using the improved psychrometer was ±0.6%. The system test results showed that the improved psychrometer can provide reliable and long-term stable relative humidity measurements with high accuracy in the drying system of meat products.

  3. An accurate heart beat detection method in the EKG recorded in fMRI system.

    PubMed

    Oh, Sung Suk; Chung, Jun-Young; Yoon, Hyo Woon; Park, HyunWook

    2007-01-01

    The simultaneous recording of functional magnetic resonance imaging (fMRI) and electroencephalogram (EEG) provides an efficient means for high spatiotemporal brain mapping because each modality provides complementary information. Peak detection in the EKG signal measured in the MR scanner is necessary for removal of the ballistocardiac artifact, and it is affected by the quality of the EKG signal and the variation of the heart beat rate. Therefore, we propose a peak detection method using a k-Teager energy operator (K-TEO) together with further refinement processes in order to detect precise peaks. We applied this technique to the analysis of simulated waves with random noise and abrupt heart beat changes.
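
    The sketch below applies a lag-k Teager energy operator to a synthetic EKG-like trace. Reading "K-TEO" as the lag-k generalization psi[n] = x[n]^2 - x[n-k]*x[n+k] is an assumption, as are the threshold rule and the synthetic signal; the paper's refinement steps are not reproduced here.

```python
import numpy as np

def k_teager(x, k=1):
    """Lag-k Teager energy operator: psi[n] = x[n]**2 - x[n-k]*x[n+k]
    (k = 1 recovers the classic TEO)."""
    psi = np.zeros_like(x)
    psi[k:-k] = x[k:-k] ** 2 - x[:-2 * k] * x[2 * k:]
    return psi

# Synthetic EKG-like trace: sharp R-peaks every 0.8 s plus noise.
fs = 250.0
t = np.arange(0, 10, 1 / fs)
ekg = sum(np.exp(-(t - p) ** 2 / (2 * 0.01 ** 2))
          for p in np.arange(0.5, 10, 0.8))
ekg += 0.05 * np.random.default_rng(2).standard_normal(t.size)

psi = k_teager(ekg, k=2)
thresh = psi.mean() + 4 * psi.std()          # simple (assumed) threshold
peaks = np.flatnonzero((psi[1:-1] > thresh)
                       & (psi[1:-1] >= psi[:-2])
                       & (psi[1:-1] >= psi[2:])) + 1
print(f"detected {len(peaks)} beat candidates")   # expect ~12 in 10 s
```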

  4. An Inexpensive, Stable, and Accurate Relative Humidity Measurement Method for Challenging Environments

    PubMed Central

    Zhang, Wei; Ma, Hong; Yang, Simon X.

    2016-01-01

    In this research, an improved psychrometer is developed to solve practical issues arising in the relative humidity measurement of challenging drying environments for meat manufacturing in agricultural and agri-food industries. The design in this research focused on the structure of the improved psychrometer, signal conversion, and calculation methods. The experimental results showed the effect of varying psychrometer structure on relative humidity measurement accuracy. An industrial application to dry-cured meat products demonstrated the effective performance of the improved psychrometer being used as a relative humidity measurement sensor in meat-drying rooms. In a drying environment for meat manufacturing, the achieved measurement accuracy for relative humidity using the improved psychrometer was ±0.6%. The system test results showed that the improved psychrometer can provide reliable and long-term stable relative humidity measurements with high accuracy in the drying system of meat products. PMID:26999161

  5. Accurate treatment of total photoabsorption cross sections by an ab initio time-dependent method

    NASA Astrophysics Data System (ADS)

    Daud, Mohammad Noh

    2014-09-01

    A detailed discussion of parallel and perpendicular transitions required for the photoabsorption of a molecule is presented within a time-dependent view. Total photoabsorption cross sections for the first two ultraviolet absorption bands of the N2O molecule corresponding to transitions from the X1A' state to the 21A' and 11A'' states are calculated to test the reliability of the method. By fully considering the property of the electric field polarization vector of the incident light, the method treats the coupling of angular momentum and the parity differently for two kinds of transitions depending on the direction of the vector whether it is: (a) situated parallel in a molecular plane for an electronic transition between states with the same symmetry; (b) situated perpendicular to a molecular plane for an electronic transition between states with different symmetry. Through this, for those transitions, we are able to offer an insightful picture of the dynamics involved and to characterize some new aspects in the photoabsorption process of N2O. Our calculations predicted that the parallel transition to the 21A' state is the major dissociation pathway which is in qualitative agreement with the experimental observations. Most importantly, a significant improvement in the absolute value of the total cross section over previous theoretical results [R. Schinke, J. Chem. Phys. 134, 064313 (2011), M.N. Daud, G.G. Balint-Kurti, A. Brown, J. Chem. Phys. 122, 054305 (2005), S. Nanbu, M.S. Johnson, J. Phys. Chem. A 108, 8905 (2004)] was obtained.

  6. Accurate treatment of total photoabsorption cross sections by an ab initio time-dependent method

    NASA Astrophysics Data System (ADS)

    Noh Daud, Mohammad

    2014-09-01

    A detailed discussion of parallel and perpendicular transitions required for the photoabsorption of a molecule is presented within a time-dependent view. Total photoabsorption cross sections for the first two ultraviolet absorption bands of the N2O molecule corresponding to transitions from the X1A' state to the 21A' and 11A'' states are calculated to test the reliability of the method. By fully considering the property of the electric field polarization vector of the incident light, the method treats the coupling of angular momentum and the parity differently for two kinds of transitions depending on the direction of the vector whether it is: (a) situated parallel in a molecular plane for an electronic transition between states with the same symmetry; (b) situated perpendicular to a molecular plane for an electronic transition between states with different symmetry. Through this, for those transitions, we are able to offer an insightful picture of the dynamics involved and to characterize some new aspects in the photoabsorption process of N2O. Our calculations predicted that the parallel transition to the 21A' state is the major dissociation pathway which is in qualitative agreement with the experimental observations. Most importantly, a significant improvement in the absolute value of the total cross section over previous theoretical results [R. Schinke, J. Chem. Phys. 134, 064313 (2011), M.N. Daud, G.G. Balint-Kurti, A. Brown, J. Chem. Phys. 122, 054305 (2005), S. Nanbu, M.S. Johnson, J. Phys. Chem. A 108, 8905 (2004)] was obtained.

  7. DiScRIBinATE: a rapid method for accurate taxonomic classification of metagenomic sequences

    PubMed Central

    2010-01-01

    Background In metagenomic sequence data, the majority of sequences/reads originate from new or partially characterized genomes, the corresponding sequences of which are absent in existing reference databases. Since taxonomic assignment of reads is based on their similarity to sequences from known organisms, the presence of reads originating from new organisms poses a major challenge to taxonomic binning methods. The recently published SOrt-ITEMS algorithm uses an elaborate work-flow to assign reads originating from hitherto unknown genomes with significant accuracy and specificity. Nevertheless, a significant proportion of reads still get misclassified. Besides, the use of an alignment-based orthology step (for improving the specificity of assignments) increases the total binning time of SOrt-ITEMS. Results In this paper, we introduce a rapid binning approach called DiScRIBinATE (Distance Score Ratio for Improved Binning And Taxonomic Estimation). DiScRIBinATE replaces the orthology approach of SOrt-ITEMS with a quicker 'alignment-free' approach. We demonstrate that incorporating this approach reduces binning time by half without any loss in the specificity and accuracy of assignments. Besides, a novel reclassification strategy incorporated in DiScRIBinATE reduces the overall misclassification rate to around 3 - 7%. This misclassification rate is 1.5 - 3 times lower than that of SOrt-ITEMS, and 3 - 30 times lower than that of MEGAN. Conclusions A significant reduction in binning time, coupled with a superior assignment accuracy (as compared to existing binning methods), indicates the immense applicability of the proposed algorithm in rapidly mapping the taxonomic diversity of large metagenomic samples with high accuracy and specificity. Availability The program is available on request from the authors. PMID:21106121

  8. A More Accurate and Efficient Technique Developed for Using Computational Methods to Obtain Helical Traveling-Wave Tube Interaction Impedance

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1999-01-01

    The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. The most prominent approximations made

  9. AN ACCURATE NEW METHOD OF CALCULATING ABSOLUTE MAGNITUDES AND K-CORRECTIONS APPLIED TO THE SLOAN FILTER SET

    SciTech Connect

    Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin

    2014-12-20

    We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.

  10. An accurate Rb density measurement method for a plasma wakefield accelerator experiment using a novel Rb reservoir

    NASA Astrophysics Data System (ADS)

    Öz, E.; Batsch, F.; Muggli, P.

    2016-09-01

    A method to accurately measure the density of Rb vapor is described. We plan on using this method for the Advanced Wakefield (AWAKE) (Assmann et al., 2014 [1]) project at CERN, which will be the world's first proton-driven plasma wakefield experiment. The method is similar to the hook method (Marlow, 1967 [2]) and has been described in great detail in the work by Hill et al. (1986) [3]. In this method a cosine fit is applied to the interferogram to obtain a relative accuracy on the order of 1% for the vapor density-length product. A single-mode, fiber-based, Mach-Zehnder interferometer will be built and used near the ends of the 10 meter-long AWAKE plasma source to make accurate relative density measurements between these two locations. These can then be used to infer the vapor density gradient along the AWAKE plasma source and also to adjust it to the value desired for the plasma wakefield experiment. Here we describe the plan in detail and show preliminary results obtained using a prototype 8 cm-long novel Rb vapor cell.

  11. Benchmarking Semiempirical Methods for Thermochemistry, Kinetics, and Noncovalent Interactions: OMx Methods Are Almost As Accurate and Robust As DFT-GGA Methods for Organic Molecules.

    PubMed

    Korth, Martin; Thiel, Walter

    2011-09-13

    Semiempirical quantum mechanical (SQM) methods offer a fast approximate treatment of the electronic structure and the properties of large molecules. Careful benchmarks are required to establish their accuracy. Here, we report a validation of standard SQM methods using a subset of the comprehensive GMTKN24 database for general main group thermochemistry, kinetics, and noncovalent interactions, which has recently been introduced to evaluate density functional theory (DFT) methods (J. Chem. Theory Comput. 2010, 6, 107). For all SQM methods considered presently, parameters are available for the elements H, C, N, and O, and consequently, we have extracted from the GMTKN24 database all species containing only these four elements (excluding multireference cases). The resulting GMTKN24-hcno database has 370 entries (derived from 593 energies) compared with 715 entries (derived from 1033 energies) in the original GMTKN24 database. The current benchmark covers established standard SQM methods (AM1, PM6), more recent approaches with orthogonalization corrections (OM1, OM2, OM3), and the self-consistent-charge density functional tight binding method (SCC-DFTB). The results are compared against each other and against DFT results using standard functionals. We find that the OMx methods outperform AM1, PM6, and SCC-DFTB by a significant margin, with a substantial gain in accuracy especially for OM2 and OM3. These latter methods are quite accurate even in comparison with DFT, with an overall mean absolute deviation of 6.6 kcal/mol for PBE and 7.9 kcal/mol for OM3. The OMx methods are also remarkably robust with regard to the unusual bonding situations encountered in the "mindless" MB08-165 test set, for which all other SQM methods fail badly.

  12. A hybrid method for efficient and accurate simulations of diffusion compartment imaging signals

    NASA Astrophysics Data System (ADS)

    Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime

    2015-12-01

    Diffusion-weighted imaging is sensitive to the movement of water molecules through the tissue microstructure and can therefore be used to gain insight into the tissue cellular architecture. While the diffusion signal arising from simple geometrical microstructure is known analytically, it remains unclear what diffusion signal arises from complex microstructural configurations. Such knowledge is important to design optimal acquisition sequences, to understand the limitations of diffusion-weighted imaging and to validate novel models of the brain microstructure. We present a novel framework for the efficient simulation of high-quality DW-MRI signals based on the hybrid combination of exact analytic expressions in simple geometric compartments such as cylinders and spheres and Monte Carlo simulations in more complex geometries. We validate our approach on synthetic arrangements of parallel cylinders representing the geometry of white matter fascicles, by comparing it to complete, all-out Monte Carlo simulations commonly used in the literature. For typical configurations, equal levels of accuracy are obtained with our hybrid method in less than one fifth of the computational time required for Monte Carlo simulations.

  13. A Cost-Benefit and Accurate Method for Assessing Microalbuminuria: Single versus Frequent Urine Analysis.

    PubMed

    Hemmati, Roholla; Gharipour, Mojgan; Khosravi, Alireza; Jozan, Mahnaz

    2013-01-01

    Background. The purpose of this study was to answer the question of whether a single test for microalbuminuria yields a reliable conclusion while saving costs. Methods. This cross-sectional study included a total of 126 consecutive persons. Microalbuminuria was assessed by collection of two fasting random urine specimens, one on arrival at the clinic and one a week later in the morning. Results. Overall, 17 of the 126 participants had microalbuminuria; among them, 12 subjects were also identified by a single assessment, giving a sensitivity of 70.6%, a specificity of 100%, a PPV of 100%, a NPV of 95.6%, and an accuracy of 96.0%. The measured sensitivity, specificity, PPV, NPV, and accuracy in hypertensive patients were 73.3%, 100%, 100%, 94.8%, and 95.5%, respectively; in the nonhypertensive group, these rates were 50.0%, 100%, 100%, 97.3%, and 97.4%, respectively. According to the ROC curve analysis, a single measurement of UACR had high value for discriminating impaired from normal renal function (c = 0.989). Urinary albumin concentration in a single measurement also had high discriminative value for diagnosis of a damaged kidney (c = 0.995). Conclusion. Single testing of both UACR and urine albumin level, rather than frequent testing, leads to high diagnostic sensitivity, specificity, and accuracy, as well as high predictive values, in the total population and in hypertensive subgroups.
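
    The reported statistics follow from the implied 2x2 table (12 true positives, 5 false negatives, no false positives, 109 true negatives among 126 subjects). The short check below reproduces them under the standard definitions.

```python
# 2x2 table implied by the abstract: 17/126 had microalbuminuria on
# repeat testing; a single test flagged 12 of them and no one else.
tp, fn, fp = 12, 5, 0
tn = 126 - tp - fn - fp                      # 109

sensitivity = tp / (tp + fn)                 # 0.706
specificity = tn / (tn + fp)                 # 1.000
ppv = tp / (tp + fp)                         # 1.000
npv = tn / (tn + fn)                         # 0.956
accuracy = (tp + tn) / 126                   # 0.960
print(f"Se={sensitivity:.1%}, Sp={specificity:.1%}, PPV={ppv:.1%}, "
      f"NPV={npv:.1%}, Acc={accuracy:.1%}")
```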

  14. An efficient method for accurate segmentation of LV in contrast-enhanced cardiac MR images

    NASA Astrophysics Data System (ADS)

    Suryanarayana K., Venkata; Mitra, Abhishek; Srikrishnan, V.; Jo, Hyun Hee; Bidesi, Anup

    2016-03-01

    Segmentation of the left ventricle (LV) in contrast-enhanced cardiac MR images is a challenging task because of high variability in the image intensity. This is due to a) wash-in and wash-out of the contrast agent over time and b) poor contrast around the epicardium (outer wall) region. Current approaches for segmentation of the endocardium (inner wall) usually involve application of a threshold within the region of interest, followed by refinement techniques like active contours. A limitation of this method is under-segmentation of the inner wall because of gradual loss of contrast at the wall boundary. On the other hand, the challenge in outer wall segmentation is the lack of reliable boundaries because of poor contrast. There are four main contributions in this paper to address the aforementioned issues. First, a seed image is selected using a variance-based approach on the 4D time-frame images, over which the initial endocardium and epicardium are segmented. Secondly, we propose a patch-based feature which overcomes the problem of gradual contrast loss for LV endocardium segmentation. Third, we propose a novel Iterative-Edge-Refinement (IER) technique for epicardium segmentation. Fourth, we propose a greedy search algorithm for propagating the initial contour segmented on the seed image across the other time-frame images. We have evaluated our technique on five contrast-enhanced cardiac MR datasets (4D) comprising a total of 1097 images. The segmentation results for all 1097 images have been visually inspected by a clinical expert and have shown good accuracy.

  15. Spectral Quadrature method for accurate O(N) electronic structure calculations of metals and insulators

    DOE PAGES

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    2015-12-02

    We present the Clenshaw–Curtis Spectral Quadrature (SQ) method for real-space O(N) Density Functional Theory (DFT) calculations. In this approach, all quantities of interest are expressed as bilinear forms or sums over bilinear forms, which are then approximated by spatially localized Clenshaw–Curtis quadrature rules. This technique is identically applicable to both insulating and metallic systems, and in conjunction with local reformulation of the electrostatics, enables the O(N) evaluation of the electronic density, energy, and atomic forces. The SQ approach also permits infinite-cell calculations without recourse to Brillouin zone integration or large supercells. We employ a finite difference representation in order to exploit the locality of electronic interactions in real space, enable systematic convergence, and facilitate large-scale parallel implementation. In particular, we derive expressions for the electronic density, total energy, and atomic forces that can be evaluated in O(N) operations. We demonstrate the systematic convergence of energies and forces with respect to quadrature order as well as truncation radius to the exact diagonalization result. In addition, we show convergence with respect to mesh size to established O(N3) planewave results. In conclusion, we establish the efficiency of the proposed approach for high temperature calculations and discuss its particular suitability for large-scale parallel computation.

  16. Spectral Quadrature method for accurate O(N) electronic structure calculations of metals and insulators

    NASA Astrophysics Data System (ADS)

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    2016-03-01

    We present the Clenshaw-Curtis Spectral Quadrature (SQ) method for real-space O(N) Density Functional Theory (DFT) calculations. In this approach, all quantities of interest are expressed as bilinear forms or sums over bilinear forms, which are then approximated by spatially localized Clenshaw-Curtis quadrature rules. This technique is identically applicable to both insulating and metallic systems, and in conjunction with local reformulation of the electrostatics, enables the O(N) evaluation of the electronic density, energy, and atomic forces. The SQ approach also permits infinite-cell calculations without recourse to Brillouin zone integration or large supercells. We employ a finite difference representation in order to exploit the locality of electronic interactions in real space, enable systematic convergence, and facilitate large-scale parallel implementation. In particular, we derive expressions for the electronic density, total energy, and atomic forces that can be evaluated in O(N) operations. We demonstrate the systematic convergence of energies and forces with respect to quadrature order as well as truncation radius to the exact diagonalization result. In addition, we show convergence with respect to mesh size to established O(N3) planewave results. Finally, we establish the efficiency of the proposed approach for high temperature calculations and discuss its particular suitability for large-scale parallel computation.
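
    To make the quadrature ingredient concrete, here is a standalone sketch of Clenshaw-Curtis nodes and weights on [-1, 1] using the standard textbook formula; it illustrates the quadrature family named in the abstract, not the authors' spatially localized O(N) implementation.

```python
import numpy as np

def clenshaw_curtis(n):
    """Clenshaw-Curtis nodes x_k = cos(pi*k/n) and weights on [-1, 1],
    via w_k = (c_k/n) * [1 - sum_j b_j/(4j^2-1) * cos(2*pi*j*k/n)]."""
    k = np.arange(n + 1)
    nodes = np.cos(np.pi * k / n)
    j = np.arange(1, n // 2 + 1)
    b = np.where(j == n / 2, 1.0, 2.0)        # last term halved for even n
    sums = (b / (4 * j ** 2 - 1)
            * np.cos(2 * np.pi * np.outer(k, j) / n)).sum(axis=1)
    w = (2.0 / n) * (1.0 - sums)
    w[0] /= 2.0                               # endpoint factor c_k = 1
    w[-1] /= 2.0
    return nodes, w

x, w = clenshaw_curtis(16)
print(w @ np.exp(x))   # ~2.3504 = e - 1/e, spectrally accurate for smooth f
```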

  17. Method for accurately positioning a device at a desired area of interest

    DOEpatents

    Jones, Gary D.; Houston, Jack E.; Gillen, Kenneth T.

    2000-01-01

    A method for positioning a first device utilizing a surface having a viewing translation stage, the surface being movable between a first position where the viewing stage is in operational alignment with a first device and a second position where the viewing stage is in operational alignment with a second device. The movable surface is placed in the first position and an image is produced with the first device of an identifiable characteristic of a calibration object on the viewing stage. The moveable surface is then placed in the second position and only the second device is moved until an image of the identifiable characteristic in the second device matches the image from the first device. The calibration object is then replaced on the stage of the surface with a test object, and the viewing translation stage is adjusted until the second device images the area of interest. The surface is then moved to the first position where the test object is scanned with the first device to image the area of interest. An alternative embodiment where the devices move is also disclosed.

  18. Quantitative Analysis of Single and Mix Food Antiseptics Basing on SERS Spectra with PLSR Method

    NASA Astrophysics Data System (ADS)

    Hou, Mengjing; Huang, Yu; Ma, Lingwei; Zhang, Zhengjun

    2016-06-01

    Usage and dosage of food antiseptics are of great concern because of their decisive influence on food safety. The surface-enhanced Raman scattering (SERS) effect was employed in this research to realize trace detection of potassium sorbate (PS) and sodium benzoate (SB). An HfO2 ultrathin film-coated Ag NR array was fabricated as the SERS substrate. Protected by the HfO2 film, the SERS substrate possesses good acid resistance, which makes it applicable in the acidic environments where PS and SB work. The regression relationship between the SERS spectra of 0.3~10 mg/L PS solutions and their concentrations was calibrated by the partial least squares regression (PLSR) method, and the concentration prediction performance was quite satisfactory. Furthermore, a mixture solution of PS and SB was also quantitatively analyzed by the PLSR method. Spectrum data of the characteristic peak sections corresponding to PS and SB were used to establish regression models for these two solutes, respectively, and their concentrations were determined accurately despite their characteristic peak sections overlapping. It is possible that the unique modeling process of the PLSR method prevented the overlapped Raman signal from reducing the model accuracy.
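
    A compact sketch of the calibration step follows, using scikit-learn's PLSRegression on synthetic SERS-like spectra. The spectral model, noise levels, and the choice of three latent components are all assumptions for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Synthetic SERS-like spectra: one band whose height tracks concentration,
# plus a random sloping baseline and noise (stand-in for real PS spectra).
wn = np.linspace(400, 1800, 700)                         # wavenumber axis
band = np.exp(-(wn - 1380) ** 2 / (2 * 15 ** 2))         # hypothetical band
conc = np.tile([0.3, 0.5, 1, 2, 3, 5, 8, 10], 3)         # mg/L, triplicates
X = (conc[:, None] * band
     + rng.uniform(0, 0.5, (conc.size, 1)) * (wn / 1800)
     + 0.05 * rng.standard_normal((conc.size, wn.size)))

pls = PLSRegression(n_components=3)   # component count chosen ad hoc here
pls.fit(X, conc)
rmse = np.sqrt(np.mean((pls.predict(X).ravel() - conc) ** 2))
print(f"calibration RMSE = {rmse:.3f} mg/L")
```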

  19. Quantitatively estimating defects in graphene devices using discharge current analysis method

    PubMed Central

    Jung, Ukjin; Lee, Young Gon; Kang, Chang Goo; Lee, Sangchul; Kim, Jin Ju; Hwang, Hyeon June; Lim, Sung Kwan; Ham, Moon-Ho; Lee, Byoung Hun

    2014-01-01

    Defects of graphene are the most important concern for the successful applications of graphene since they affect device performance significantly. However, once the graphene is integrated in the device structures, the quality of graphene and surrounding environment could only be assessed using indirect information such as hysteresis, mobility and drive current. Here we develop a discharge current analysis method to measure the quality of graphene integrated in a field effect transistor structure by analyzing the discharge current and examine its validity using various device structures. The density of charging sites affecting the performance of graphene field effect transistor obtained using the discharge current analysis method was on the order of 10^14/cm^2, which closely correlates with the intensity ratio of the D to G bands in Raman spectroscopy. The graphene FETs fabricated on poly(ethylene naphthalate) (PEN) are found to have a lower density of charging sites than those on SiO2/Si substrate, mainly due to reduced interfacial interaction between the graphene and the PEN. This method can be an indispensable means to improve the stability of devices using graphene as it provides an accurate and quantitative way to define the quality of graphene after the device fabrication. PMID:24811431
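
    One plausible reading of the extraction step, sketched below under stated assumptions: integrate the discharge transient to get the released charge, then divide by the elementary charge and the device area to get a charging-site density. The transient, device area, and this exact analysis are illustrative; the paper's procedure may differ in detail.

```python
import numpy as np

E = 1.602e-19          # elementary charge (C)
AREA = 1e-4            # hypothetical 100 um x 100 um channel, in cm^2

# Hypothetical discharge transient recorded after removing gate stress
# (values illustrative, not measured data).
t = np.linspace(0.0, 10.0, 2001)              # time (s)
i = 2e-9 * np.exp(-t / 1.5)                   # discharge current (A)

# Trapezoidal integration of I(t) gives the total released charge.
charge = np.sum(0.5 * (i[1:] + i[:-1]) * np.diff(t))
density = charge / (E * AREA)                 # charging sites per cm^2
print(f"N ~ {density:.2e} /cm^2")             # ~1.9e14, the reported order
```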

  20. Quantitative structure-activity relationships of imidazole-containing farnesyltransferase inhibitors using different chemometric methods.

    PubMed

    Shayanfar, Ali; Ghasemi, Saeed; Soltani, Somaieh; Asadpour-Zeynali, Karim; Doerksen, Robert J; Jouyban, Abolghasem

    2013-05-01

    Farnesyltransferase inhibitors (FTIs) are one of the most promising classes of anticancer agents; although some compounds in this category are in clinical trials, there are no marketed drugs in this class yet. Quantitative structure-activity relationship (QSAR) models can be used for predicting the activity of FTI candidates in early stages of drug discovery. In this study 192 imidazole-containing FTIs were obtained from the literature, structures of the molecules were optimized using Hyperchem software, and molecular descriptors were calculated using Dragon software. The most suitable descriptors were selected using genetic algorithms-partial least squares (GA-PLS) and stepwise regression, and indicated that the volume, shape and polarity of the FTIs are important for their activities. 2D-QSAR models were prepared using both linear methods, i.e., multiple linear regression (MLR), and non-linear methods, i.e., artificial neural networks (ANN) and support vector machines (SVM). The proposed QSAR models were validated using internal and external validation methods. The results show that the proposed 2D-QSAR models are valid and that they can be applied to predict the activities of imidazole-containing FTIs. The prediction capability of the 2D-QSAR (linear and non-linear) models is comparable to and somewhat better than that of previous 3D-QSAR models, and the non-linear models are more accurate than the linear models.
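
    As a schematic of the linear-versus-non-linear comparison, the sketch below fits a multiple linear regression and an SVM regressor to synthetic descriptor data with an external test split. The descriptors, activity surface, and hyperparameters are invented for illustration; GA-PLS descriptor selection and the ANN model are omitted.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in: 192 inhibitors x 12 selected descriptors -> activity.
X = rng.normal(size=(192, 12))
y = (6.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1]
     + 0.3 * np.tanh(X[:, 2])               # mild non-linearity
     + 0.2 * rng.normal(size=192))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=1)
for name, model in [("MLR", LinearRegression()),
                    ("SVM", SVR(C=10.0, gamma="scale"))]:
    model.fit(X_tr, y_tr)
    print(f"{name}: external R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```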

  1. Quantitative Method to Investigate the Balance between Metabolism and Proteome Biomass: Starting from Glycine.

    PubMed

    Gu, Haiwei; Carroll, Patrick A; Du, Jianhai; Zhu, Jiangjiang; Neto, Fausto Carnevale; Eisenman, Robert N; Raftery, Daniel

    2016-12-12

    The balance between metabolism and biomass is very important in biological systems; however, to date there has been no quantitative method to characterize the balance. In this methodological study, we propose to use the distribution of amino acids in different domains to investigate this balance. It is well known that endogenous or exogenous amino acids in a biological system are either metabolized or incorporated into free amino acids (FAAs) or proteome amino acids (PAAs). Using glycine (Gly) as an example, we demonstrate a novel method to accurately determine the amounts of amino acids in various domains using serum, urine, and cell samples. As expected, serum and urine had very different distributions of FAA- and PAA-Gly. Using Tet21N human neuroblastoma cells, we also found that Myc(oncogene)-induced metabolic reprogramming included a higher rate of metabolizing Gly, which provides additional evidence that the metabolism of proliferating cells is adapted to facilitate producing new cells. It is therefore anticipated that our method will be very valuable for further studies of the metabolism and biomass balance that will lead to a better understanding of human cancers.

  2. Quantitatively estimating defects in graphene devices using discharge current analysis method.

    PubMed

    Jung, Ukjin; Lee, Young Gon; Kang, Chang Goo; Lee, Sangchul; Kim, Jin Ju; Hwang, Hyeon June; Lim, Sung Kwan; Ham, Moon-Ho; Lee, Byoung Hun

    2014-05-08

    Defects of graphene are the most important concern for the successful application of graphene, since they affect device performance significantly. However, once the graphene is integrated in a device structure, the quality of the graphene and its surrounding environment can only be assessed using indirect information such as hysteresis, mobility and drive current. Here we develop a discharge current analysis method to measure the quality of graphene integrated in a field effect transistor structure by analyzing the discharge current, and we examine its validity using various device structures. The density of charging sites affecting the performance of the graphene field effect transistor, obtained using the discharge current analysis method, was on the order of 10(14)/cm(2), which correlates closely with the intensity ratio of the D to G bands in Raman spectroscopy. The graphene FETs fabricated on poly(ethylene naphthalate) (PEN) are found to have a lower density of charging sites than those on a SiO2/Si substrate, mainly due to reduced interfacial interaction between the graphene and the PEN. This method can be an indispensable means of improving the stability of graphene devices, as it provides an accurate and quantitative way to define the quality of graphene after device fabrication.

  3. Quantitatively estimating defects in graphene devices using discharge current analysis method

    NASA Astrophysics Data System (ADS)

    Jung, Ukjin; Lee, Young Gon; Kang, Chang Goo; Lee, Sangchul; Kim, Jin Ju; Hwang, Hyeon June; Lim, Sung Kwan; Ham, Moon-Ho; Lee, Byoung Hun

    2014-05-01

    Defects of graphene are the most important concern for the successful application of graphene, since they affect device performance significantly. However, once the graphene is integrated in a device structure, the quality of the graphene and its surrounding environment can only be assessed using indirect information such as hysteresis, mobility and drive current. Here we develop a discharge current analysis method to measure the quality of graphene integrated in a field effect transistor structure by analyzing the discharge current, and we examine its validity using various device structures. The density of charging sites affecting the performance of the graphene field effect transistor, obtained using the discharge current analysis method, was on the order of 10^14/cm^2, which correlates closely with the intensity ratio of the D to G bands in Raman spectroscopy. The graphene FETs fabricated on poly(ethylene naphthalate) (PEN) are found to have a lower density of charging sites than those on a SiO2/Si substrate, mainly due to reduced interfacial interaction between the graphene and the PEN. This method can be an indispensable means of improving the stability of graphene devices, as it provides an accurate and quantitative way to define the quality of graphene after device fabrication.

  4. Quantitative and chemical fingerprint analysis for quality control of rhizoma Coptidis chinensis based on UPLC-PAD combined with chemometrics methods.

    PubMed

    Kong, Wei-Jun; Zhao, Yan-Ling; Xiao, Xiao-He; Jin, Cheng; Li, Zu-Lun

    2009-10-01

    To control the quality of rhizoma Coptidis, a method based on ultra performance liquid chromatography with photodiode array detection (UPLC-PAD) was developed for quantitative analysis of five active alkaloids and chemical fingerprint analysis. In the quantitative analysis, the five alkaloids showed good regression (R > 0.9992) within the test ranges and the recovery of the method was in the range of 98.4-100.8%. The limits of detection and quantification for the five alkaloids were less than 0.07 and 0.22 microg/ml, respectively. In order to compare the UPLC fingerprints of rhizoma Coptidis from different origins, chemometrics procedures, including similarity analysis (SA), hierarchical clustering analysis (HCA), and principal component analysis (PCA), were applied to classify the rhizoma Coptidis samples according to their cultivated origins. Consistent results were obtained showing that rhizoma Coptidis samples could be successfully grouped in accordance with the province of origin. Furthermore, five marker constituents were screened out as the main chemical markers, which could be applied to accurate discrimination and quality control of rhizoma Coptidis by quantitative analysis. This study revealed that the UPLC-PAD method is simple, sensitive and reliable for quantitative and chemical fingerprint analysis, and moreover for the quality evaluation and control of rhizoma Coptidis.
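
    For readers unfamiliar with the chemometrics step, the sketch below runs a minimal PCA via SVD, of the kind used to group fingerprints by origin; the data are synthetic stand-ins, not the study's UPLC-PAD fingerprints.

      import numpy as np

      # Stand-in data: each row is one sample's fingerprint (aligned peak areas).
      fingerprints = np.random.default_rng(1).random((24, 60))

      # Mean-center, then project onto the first two principal components.
      Xc = fingerprints - fingerprints.mean(axis=0)
      U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
      scores = Xc @ Vt[:2].T                   # coordinates used for grouping
      explained = (S**2 / np.sum(S**2))[:2]    # variance explained by PC1, PC2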

  5. Absolute age Determinations on Diamond by Radioisotopic Methods: NOT the way to Accurately Identify Diamond Provenance

    NASA Astrophysics Data System (ADS)

    Shirey, S. B.

    2002-05-01

    Gem-quality diamond contains such low abundances of parent-daughter radionuclides that dating the diamond lattice directly by isotopic measurements has been and will be impossible. Absolute ages on diamonds typically are obtained through measurements of their syngenetic mineral inclusions: Rb-Sr in garnet; Sm-Nd in garnet and pyroxene; Re-Os and U-Th-Pb in sulfide; K-Ar in pyroxene; and U-Pb in zircon. The application of the first two isotope schemes in the list requires putting together many inclusions from many diamonds whereas the latter isotope schemes permit ages on single diamonds. The key limitations on the application of these decay pairs are the availability and size of the inclusions, the abundance levels of the radionuclides, and instrumental sensitivity. Practical complications of radioisotope dating of inclusions are fatal to the application of the technique for diamond provenance. In all mines, the ratio of gem-quality diamonds to stones with datable inclusions is very high. Thus there is no way to date the valuable, marketable stones that are part of the conflict diamond problem, just their rare, flawed cousins. Each analysis destroys the diamond host plus the inclusion and can only be carried out in research labs by highly trained scientists. Thus, these methods can not be automated or applied to the bulk of diamond production. The geological problems with age dating are equally fatal to its application to diamond provenance. From the geological perspective, for age determination to work as a tool for diamond provenance studies, diamond ages would have to be specific to particular kimberlites or kimberlite fields and different between fields. The southern African Kaapvaal-Zimbabwe Craton and Limpopo Mobile Belt is the only cratonic region where age determinations have been applied on a large enough scale to a number of kimberlites to illustrate the geological problems in age measurements for diamond provenance. However, this southern African example

  6. A modified method for accurate correlation between the craze density and the optomechanical properties of fibers using Pluta microscope.

    PubMed

    Sokkar, T Z N; El-Farahaty, K A; El-Bakary, M A; Omar, E Z; Hamza, A A

    2016-05-01

    A modified method was suggested to improve the performance of the Pluta microscope in its nonduplicated mode for the calculation of the areal craze density, especially for relatively low draw ratios (low areal craze density). This method decreases the error that results from the similarity between the formed crazes and the dark fringes of the interference pattern. Furthermore, an accurate method to calculate the birefringence and the orientation function of drawn fibers via the nonduplicated Pluta polarizing interference microscope for high areal craze density (high draw ratio) was suggested. The advantage of the suggested method is that it relates the optomechanical properties of the tested fiber to the areal craze density for the same region of the fiber material.

  7. Photometric brown-dwarf classification. I. A method to identify and accurately classify large samples of brown dwarfs without spectroscopy

    NASA Astrophysics Data System (ADS)

    Skrzypek, N.; Warren, S. J.; Faherty, J. K.; Mortlock, D. J.; Burgasser, A. J.; Hewett, P. C.

    2015-02-01

    Aims: We present a method, named photo-type, to identify and accurately classify L and T dwarfs onto the standard spectral classification system using photometry alone. This enables the creation of large and deep homogeneous samples of these objects efficiently, without the need for spectroscopy. Methods: We created a catalogue of point sources with photometry in 8 bands, ranging from 0.75 to 4.6 μm, selected from an area of 3344 deg², by combining SDSS, UKIDSS LAS, and WISE data. Sources with 13.0 < J < 17.5, and J - K > 0.8, were then classified by comparison against template colours of quasars, stars, and brown dwarfs. The L and T templates, spectral types L0 to T8, were created by identifying previously known sources with spectroscopic classifications, and fitting polynomial relations between colour and spectral type. Results: Of the 192 known L and T dwarfs with reliable photometry in the surveyed area and magnitude range, 189 are recovered by our selection and classification method. We have quantified the accuracy of the classification method both externally, with spectroscopy, and internally, by creating synthetic catalogues and accounting for the uncertainties. We find that, brighter than J = 17.5, photo-type classifications are accurate to one spectral sub-type, and are therefore competitive with spectroscopic classifications. The resultant catalogue of 1157 L and T dwarfs will be presented in a companion paper.
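
    A minimal sketch of the template-comparison idea, assuming a chi-squared fit with one free magnitude offset per template; the actual photo-type pipeline and template set are not reproduced here.

      import numpy as np

      def classify(mags, sigmas, templates):
          # templates: dict mapping spectral type -> 8-band template magnitudes
          # (defined up to an additive offset, since distance is unknown).
          best_type, best_chi2 = None, np.inf
          w = 1.0 / sigmas**2
          for t, tmpl in templates.items():
              # The offset minimizing chi-squared is the weighted mean residual.
              offset = np.sum(w * (mags - tmpl)) / np.sum(w)
              chi2 = np.sum(w * (mags - tmpl - offset) ** 2)
              if chi2 < best_chi2:
                  best_type, best_chi2 = t, chi2
          return best_type, best_chi2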

  8. Laser flare photometry: a noninvasive, objective, and quantitative method to measure intraocular inflammation.

    PubMed

    Tugal-Tutkun, Ilknur; Herbort, Carl P

    2010-10-01

    Aqueous flare and cells are the two inflammatory parameters of anterior chamber inflammation resulting from disruption of the blood-ocular barriers. When examined with the slit lamp, measurement of intraocular inflammation remains subjective with considerable intra- and interobserver variations. Laser flare cell photometry is an objective quantitative method that enables accurate measurement of these parameters with very high reproducibility. Laser flare photometry allows detection of subclinical alterations in the blood-ocular barriers, identifying subtle pathological changes that could not have been recorded otherwise. With the use of this method, it has been possible to compare the effect of different surgical techniques, surgical adjuncts, and anti-inflammatory medications on intraocular inflammation. Clinical studies of uveitis patients have shown that flare measurements by laser flare photometry allowed precise monitoring of well-defined uveitic entities and prediction of disease relapse. Relationships of laser flare photometry values with complications of uveitis and visual loss further indicate that flare measurement by laser flare photometry should be included in the routine follow-up of patients with uveitis.

  9. Compensation method for obtaining accurate, sub-micrometer displacement measurements of immersed specimens using electronic speckle interferometry.

    PubMed

    Fazio, Massimo A; Bruno, Luigi; Reynaud, Juan F; Poggialini, Andrea; Downs, J Crawford

    2012-03-01

    We proposed and validated a compensation method that accounts for the optical distortion inherent in measuring displacements on specimens immersed in aqueous solution. A spherically-shaped rubber specimen was mounted and pressurized on a custom apparatus, with the resulting surface displacements recorded using electronic speckle pattern interferometry (ESPI). Point-to-point light direction computation is achieved by a ray-tracing strategy coupled with customized B-spline-based analytical representation of the specimen shape. The compensation method reduced the mean magnitude of the displacement error induced by the optical distortion from 35% to 3%, and ESPI displacement measurement repeatability showed a mean variance of 16 nm at the 95% confidence level for immersed specimens. The ESPI interferometer and numerical data analysis procedure presented herein provide reliable, accurate, and repeatable measurement of sub-micrometer deformations obtained from pressurization tests of spherically-shaped specimens immersed in aqueous salt solution. This method can be used to quantify small deformations in biological tissue samples under load, while maintaining the hydration necessary to ensure accurate material property assessment.

  10. Protostellar hydrodynamics: Constructing and testing a spatially and temporally second-order accurate method. 2: Cartesian coordinates

    NASA Technical Reports Server (NTRS)

    Myhill, Elizabeth A.; Boss, Alan P.

    1993-01-01

    In Boss & Myhill (1992) we described the derivation and testing of a spherical coordinate-based scheme for solving the hydrodynamic equations governing the gravitational collapse of nonisothermal, nonmagnetic, inviscid, radiative, three-dimensional protostellar clouds. Here we discuss a Cartesian coordinate-based scheme based on the same set of hydrodynamic equations. As with the spherical coordinate-based code, the Cartesian coordinate-based scheme employs explicit Eulerian methods which are both spatially and temporally second-order accurate. We begin by describing the hydrodynamic equations in Cartesian coordinates and the numerical methods used in this particular code. Following Finn & Hawley (1989), we pay special attention to the proper implementation of high-order accurate finite difference methods. We evaluate the ability of the Cartesian scheme to handle shock propagation problems, and through convergence testing, we show that the code is indeed second-order accurate. To compare the Cartesian scheme discussed here with the spherical coordinate-based scheme discussed in Boss & Myhill (1992), the two codes are used to calculate the standard isothermal collapse test case described by Bodenheimer & Boss (1981). We find that with the improved codes, the intermediate bar-configuration found previously disappears, and the cloud fragments directly into a binary protostellar system. Finally, we present the results from both codes of a new test for nonisothermal protostellar collapse.
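
    The convergence testing mentioned above reduces to estimating the observed order of accuracy from errors on successively refined grids; a minimal sketch with illustrative numbers only:

      import math

      def observed_order(e1, e2, h1, h2):
          # Errors e1, e2 measured on grids with spacings h1 > h2.
          return math.log(e1 / e2) / math.log(h1 / h2)

      # A second-order accurate scheme should give p close to 2 when h halves:
      p = observed_order(e1=4.0e-3, e2=1.0e-3, h1=0.02, h2=0.01)  # p = 2.0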

  11. Multi-stencils fast marching methods: a highly accurate solution to the eikonal equation on cartesian domains.

    PubMed

    Hassouna, M Sabry; Farag, A A

    2007-09-01

    A wide range of computer vision applications require an accurate solution of a particular Hamilton-Jacobi (HJ) equation known as the Eikonal equation. In this paper, we propose an improved version of the fast marching method (FMM) that is highly accurate for both 2D and 3D Cartesian domains. The new method is called multi-stencils fast marching (MSFM), which computes the solution at each grid point by solving the Eikonal equation along several stencils and then picking the solution that satisfies the upwind condition. The stencils are centered at each grid point and cover all of its nearest neighbors. In 2D space, 2 stencils cover the 8-neighbors of the point, while in 3D space, 6 stencils cover its 26-neighbors. For those stencils that are not aligned with the natural coordinate system, the Eikonal equation is derived using directional derivatives and then solved using higher order finite difference schemes. The accuracy of the proposed method over the state-of-the-art FMM-based techniques has been demonstrated through comprehensive numerical experiments.
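
    For context, the single-stencil first-order FMM update that MSFM generalizes solves a quadratic along the upwind directions; a minimal 2D sketch (not the multi-stencil scheme itself):

      import math

      def fmm_update(a, b, h, f):
          # a, b: smaller upwind neighbour values of T along x and y;
          # h: grid spacing; f: local speed in |grad T| = 1/f.
          if abs(a - b) >= h / f:
              # Only one characteristic direction contributes.
              return min(a, b) + h / f
          # Solve (T - a)^2 + (T - b)^2 = (h/f)^2 for the larger root.
          s = a + b
          disc = s * s - 2.0 * (a * a + b * b - (h / f) ** 2)
          return 0.5 * (s + math.sqrt(disc))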

  12. Accurate and economical solution of the pressure-head form of Richards' equation by the method of lines

    NASA Astrophysics Data System (ADS)

    Tocci, Michael D.; Kelley, C. T.; Miller, Cass T.

    The pressure-head form of Richards' equation (RE) is difficult to solve accurately using standard time integration methods. For example, mass balance errors grow as the integration progresses unless very small time steps are taken. Further, RE may be solved for many problems more economically and robustly with variable-size time steps rather than with a constant time-step size, but variable step-size methods applied to date have relied upon empirical approaches to control step size, which do not explicitly control the temporal truncation error of the solution. We show how a differential algebraic equation implementation of the method of lines can give solutions to RE that are accurate, have good mass balance properties, explicitly control temporal truncation error, and are more economical than standard approaches for a wide range of solution accuracies. We detail changes to a standard integrator, DASPK, that improve efficiency for the test problems considered, and we advocate the use of this approach for both RE and other problems involving subsurface flow and transport phenomena.
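
    For reference, the pressure-head form of Richards' equation discussed above is commonly written (standard notation, not quoted from the paper) as

      C(\psi)\,\frac{\partial \psi}{\partial t}
        = \nabla \cdot \left[ K(\psi)\, \nabla (\psi + z) \right],
      \qquad C(\psi) \equiv \frac{d\theta}{d\psi},

    where \psi is the pressure head, K(\psi) the unsaturated hydraulic conductivity, \theta the volumetric water content, and z the elevation; the method of lines discretizes the spatial operator and hands the resulting DAE system to an integrator such as DASPK.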

  13. A fast accurate approximation method with multigrid solver for two-dimensional fractional sub-diffusion equation

    NASA Astrophysics Data System (ADS)

    Lin, Xue-lei; Lu, Xin; Ng, Micheal K.; Sun, Hai-Wei

    2016-10-01

    A fast accurate approximation method with multigrid solver is proposed to solve a two-dimensional fractional sub-diffusion equation. Using the finite difference discretization of the fractional time derivative, a block lower triangular Toeplitz matrix is obtained where each main diagonal block contains a two-dimensional matrix for the Laplacian operator. Our idea is to make use of the block ε-circulant approximation via fast Fourier transforms, so that the resulting task is to solve a block diagonal system, where each diagonal block matrix is the sum of a complex scalar times the identity matrix and a Laplacian matrix. We show that the accuracy of the approximation scheme is O(ε). Because of the special diagonal block structure, we employ the multigrid method to solve the resulting linear systems. The convergence of the multigrid method is studied. Numerical examples are presented to illustrate the accuracy of the proposed approximation scheme and the efficiency of the proposed solver.
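
    The block ε-circulant trick rests on a standard identity (stated here for the scalar case; the block version applies it blockwise): with \theta = \varepsilon^{1/n} and D_\varepsilon = \mathrm{diag}(1, \theta, \ldots, \theta^{n-1}), an ε-circulant matrix A_\varepsilon with first column a is diagonalized by scaled FFTs,

      A_\varepsilon = D_\varepsilon^{-1} F^{*} \Lambda F D_\varepsilon,
      \qquad \Lambda = \mathrm{diag}\!\big(\sqrt{n}\, F D_\varepsilon a\big),

    where F is the unitary discrete Fourier matrix; this is what reduces the block lower triangular Toeplitz system to decoupled complex-shifted Laplacian solves.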

  14. Quantitative estimation of poikilocytosis by the coherent optical method

    NASA Astrophysics Data System (ADS)

    Safonova, Larisa P.; Samorodov, Andrey V.; Spiridonov, Igor N.

    2000-05-01

    An investigation of the reliability required when determining poikilocytosis in hematology has shown that existing techniques suffer from serious shortcomings. To determine the deviation of erythrocyte shape from the normal (rounded) one in blood smears, it is expedient to use an integrative estimate. An algorithm is suggested that is based on the correlation between erythrocyte morphological parameters and properties of the spatial-frequency spectrum of the blood smear. Analytical and experimental research led to an integrative form parameter (IFP) that characterizes both an increase above 5% in the relative concentration of cells with changed form and the predominating type of poikilocyte. An algorithm for statistically reliable estimation of the IFP on standard stained blood smears has been developed. To provide a quantitative characterization of the morphological features of cells, a form vector has been proposed, and its validity for poikilocyte differentiation was shown.

  15. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    SciTech Connect

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be for inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  16. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    PubMed

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking.
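
    A minimal sketch of the octant-identifier encoding and the density count, with illustrative names and a plain dictionary standing in for the MapReduce/Hadoop machinery:

      from collections import Counter

      def octant_id(point, lo, hi, depth):
          # One octal digit per level: one bit per axis records which half
          # of the current cell the 3D point falls in.
          digits = []
          for _ in range(depth):
              bits = 0
              lo, hi = list(lo), list(hi)
              for axis in range(3):
                  mid = 0.5 * (lo[axis] + hi[axis])
                  if point[axis] >= mid:
                      bits |= 1 << axis
                      lo[axis] = mid
                  else:
                      hi[axis] = mid
              digits.append(bits)
          return tuple(digits)

      def densest_octant(points, lo, hi, depth):
          # "Map" each point to its octant id, then "reduce" by counting.
          counts = Counter(octant_id(p, lo, hi, depth) for p in points)
          return counts.most_common(1)[0]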

  17. Post-reconstruction non-local means filtering methods using CT side information for quantitative SPECT

    NASA Astrophysics Data System (ADS)

    Chun, Se Young; Fessler, Jeffrey A.; Dewaraja, Yuni K.

    2013-09-01

    Quantitative SPECT techniques are important for many applications including internal emitter therapy dosimetry where accurate estimation of total target activity and activity distribution within targets are both potentially important for dose-response evaluations. We investigated non-local means (NLM) post-reconstruction filtering for accurate I-131 SPECT estimation of both total target activity and the 3D activity distribution. We first investigated activity estimation versus number of ordered-subsets expectation-maximization (OSEM) iterations. We performed simulations using the XCAT phantom with tumors containing a uniform and a non-uniform activity distribution, and measured the recovery coefficient (RC) and the root mean squared error (RMSE) to quantify total target activity and activity distribution, respectively. We observed that using more OSEM iterations is essential for accurate estimation of RC, but may or may not improve RMSE. We then investigated various post-reconstruction filtering methods to suppress noise at high iteration while preserving image details so that both RC and RMSE can be improved. Recently, NLM filtering methods have shown promising results for noise reduction. Moreover, NLM methods using high-quality side information can improve image quality further. We investigated several NLM methods with and without CT side information for I-131 SPECT imaging and compared them to conventional Gaussian filtering and to unfiltered methods. We studied four different ways of incorporating CT information in the NLM methods: two known (NLM CT-B and NLM CT-M) and two newly considered (NLM CT-S and NLM CT-H). We also evaluated the robustness of NLM filtering using CT information to erroneous CT. NLM CT-S and NLM CT-H yielded comparable RC values to unfiltered images while substantially reducing RMSE. NLM CT-S achieved a -2.7 to 2.6% change in RC compared to no filtering, and NLM CT-H yielded up to a 6% decrease in RC, while other methods yielded lower RCs.
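
    For reference, a generic non-local means filter has the form (standard notation; the paper's CT-guided variants differ in how CT patches enter the distance):

      \hat{x}_i = \frac{\sum_j w_{ij}\, x_j}{\sum_j w_{ij}},
      \qquad w_{ij} = \exp\!\left( -\, \frac{\lVert P_i - P_j \rVert_2^2}{h^2} \right),

    where P_i is the patch of intensities around voxel i and h controls the filtering strength; in the CT-guided variants the patches are taken from, or augmented with, the co-registered CT image.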

  18. Post-reconstruction non-local means filtering methods using CT side information for quantitative SPECT.

    PubMed

    Chun, Se Young; Fessler, Jeffrey A; Dewaraja, Yuni K

    2013-09-07

    Quantitative SPECT techniques are important for many applications including internal emitter therapy dosimetry where accurate estimation of total target activity and activity distribution within targets are both potentially important for dose–response evaluations. We investigated non-local means (NLM) post-reconstruction filtering for accurate I-131 SPECT estimation of both total target activity and the 3D activity distribution. We first investigated activity estimation versus number of ordered-subsets expectation–maximization (OSEM) iterations. We performed simulations using the XCAT phantom with tumors containing a uniform and a non-uniform activity distribution, and measured the recovery coefficient (RC) and the root mean squared error (RMSE) to quantify total target activity and activity distribution, respectively. We observed that using more OSEM iterations is essential for accurate estimation of RC, but may or may not improve RMSE. We then investigated various post-reconstruction filtering methods to suppress noise at high iteration while preserving image details so that both RC and RMSE can be improved. Recently, NLM filtering methods have shown promising results for noise reduction. Moreover, NLM methods using high-quality side information can improve image quality further. We investigated several NLM methods with and without CT side information for I-131 SPECT imaging and compared them to conventional Gaussian filtering and to unfiltered methods. We studied four different ways of incorporating CT information in the NLM methods: two known (NLM CT-B and NLM CT-M) and two newly considered (NLM CT-S and NLM CT-H). We also evaluated the robustness of NLM filtering using CT information to erroneous CT. NLM CT-S and NLM CT-H yielded comparable RC values to unfiltered images while substantially reducing RMSE. NLM CT-S achieved a −2.7 to 2.6% change in RC compared to no filtering, and NLM CT-H yielded up to a 6% decrease in RC, while other methods yielded lower RCs.

  19. Evaluation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in Pyrus pyrifolia using different tissue samples and seasonal conditions.

    PubMed

    Imai, Tsuyoshi; Ubi, Benjamin E; Saito, Takanori; Moriguchi, Takaya

    2014-01-01

    We have evaluated suitable reference genes for real time (RT)-quantitative PCR (qPCR) analysis in Japanese pear (Pyrus pyrifolia). We tested most frequently used genes in the literature such as β-Tubulin, Histone H3, Actin, Elongation factor-1α, Glyceraldehyde-3-phosphate dehydrogenase, together with newly added genes Annexin, SAND and TIP41. A total of 17 primer combinations for these eight genes were evaluated using cDNAs synthesized from 16 tissue samples from four groups, namely: flower bud, flower organ, fruit flesh and fruit skin. Gene expression stabilities were analyzed using geNorm and NormFinder software packages or by ΔCt method. geNorm analysis indicated three best performing genes as being sufficient for reliable normalization of RT-qPCR data. Suitable reference genes were different among sample groups, suggesting the importance of validation of gene expression stability of reference genes in the samples of interest. Ranking of stability was basically similar between geNorm and NormFinder, suggesting usefulness of these programs based on different algorithms. ΔCt method suggested somewhat different results in some groups such as flower organ or fruit skin; though the overall results were in good correlation with geNorm or NormFinder. Gene expression of two cold-inducible genes PpCBF2 and PpCBF4 were quantified using the three most and the three least stable reference genes suggested by geNorm. Although normalized quantities were different between them, the relative quantities within a group of samples were similar even when the least stable reference genes were used. Our data suggested that using the geometric mean value of three reference genes for normalization is quite a reliable approach to evaluating gene expression by RT-qPCR. We propose that the initial evaluation of gene expression stability by ΔCt method, and subsequent evaluation by geNorm or NormFinder for limited number of superior gene candidates will be a practical way of finding out
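
    A minimal sketch of the ΔCt screening step referred to above, assuming a hypothetical Ct matrix; geNorm and NormFinder use different, more elaborate stability measures.

      import numpy as np
      from itertools import combinations

      def delta_ct_stability(ct, names):
          # ct: array of shape (n_genes, n_samples) of Ct values.
          # For each gene pair, take the SD of delta-Ct across samples; a gene
          # with a low mean SD against all partners is ranked as stable.
          sds = {g: [] for g in names}
          for i, j in combinations(range(len(names)), 2):
              sd = np.std(ct[i] - ct[j], ddof=1)
              sds[names[i]].append(sd)
              sds[names[j]].append(sd)
          return sorted((float(np.mean(v)), g) for g, v in sds.items())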

  20. Validation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in strawberry fruits using different cultivars and osmotic stresses.

    PubMed

    Galli, Vanessa; Borowski, Joyce Moura; Perin, Ellen Cristina; Messias, Rafael da Silva; Labonde, Julia; Pereira, Ivan dos Santos; Silva, Sérgio Delmar Dos Anjos; Rombaldi, Cesar Valmor

    2015-01-10

    The increasing demand for strawberry (Fragaria × ananassa Duch.) fruits is associated mainly with their sensorial characteristics and the content of antioxidant compounds. Nevertheless, strawberry production has been hampered by the plant's sensitivity to abiotic stresses. Therefore, understanding the molecular mechanisms underlying the stress response is of great importance for enabling genetic engineering approaches aimed at improving strawberry tolerance. However, the study of gene expression in strawberry requires the use of suitable reference genes. In the present study, seven traditional and novel candidate reference genes were evaluated for transcript normalization in fruits of ten strawberry cultivars and two abiotic stresses, using RefFinder, which integrates the four major currently available software programs: geNorm, NormFinder, BestKeeper and the comparative delta-Ct method. The results indicate that expression stability is dependent on the experimental conditions. The candidate reference gene DBP (DNA binding protein) was considered the most suitable to normalize expression data in samples of strawberry cultivars and under drought stress, and the candidate reference gene HISTH4 (histone H4) was the most stable under osmotic and salt stresses. The traditional genes GAPDH (glyceraldehyde-3-phosphate dehydrogenase) and 18S (18S ribosomal RNA) were the most unstable genes under all conditions. The expression of the phenylalanine ammonia lyase (PAL) and 9-cis epoxycarotenoid dioxygenase (NCED1) genes was used to further confirm the validated candidate reference genes, showing that the use of an inappropriate reference gene may produce erroneous results. This study is the first survey of the stability of reference genes in strawberry cultivars under osmotic stresses and provides guidelines for obtaining more accurate RT-qPCR results in future breeding efforts.

  1. Increasing Literacy in Quantitative Methods: The Key to the Future of Canadian Psychology.

    PubMed

    Counsell, Alyssa; Cribbie, Robert A; Harlow, Lisa L

    2016-08-01

    Quantitative methods (QM) dominate empirical research in psychology. Unfortunately most researchers in psychology receive inadequate training in QM. This creates a challenge for researchers who require advanced statistical methods to appropriately analyze their data. Many of the recent concerns about research quality, replicability, and reporting practices are directly tied to the problematic use of QM. As such, improving quantitative literacy in psychology is an important step towards eliminating these concerns. The current paper will include two main sections that discuss quantitative challenges and opportunities. The first section discusses training and resources for students and presents descriptive results on the number of quantitative courses required and available to graduate students in Canadian psychology departments. In the second section, we discuss ways of improving quantitative literacy for faculty, researchers, and clinicians. This includes a strong focus on the importance of collaboration. The paper concludes with practical recommendations for improving quantitative skills and literacy for students and researchers in Canada.

  2. Establishment of chondroitin B lyase-based analytical methods for sensitive and quantitative detection of dermatan sulfate in heparin.

    PubMed

    Wu, Jingjun; Ji, Yang; Su, Nan; Li, Ye; Liu, Xinxin; Mei, Xiang; Zhou, Qianqian; Zhang, Chong; Xing, Xin-hui

    2016-06-25

    Dermatan sulfate (DS) is one of the hardest impurities to remove from heparin products due to their high structural similarity. The development of a sensitive and feasible method for quantitative detection of DS in heparin is essential to ensure the clinical safety of heparin pharmaceuticals. In the current study, based on the substrate specificity of chondroitin B lyase, ultraviolet spectrophotometric and strong anion-exchange high-performance liquid chromatographic methods were established for detection of DS in heparin. The former method facilitated analysis in heparin with DS concentrations greater than 0.1 mg mL(-1) at 232 nm, with good linearity, precision and recovery. The latter method allowed sensitive and accurate detection of DS at concentrations lower than 0.1 mg mL(-1), exhibiting good linearity, precision and recovery. The linear range of DS detection using the latter method was between 0.01 and 0.5 mg mL(-1).

  3. Electron paramagnetic resonance method for the quantitative assay of ketoconazole in pharmaceutical preparations.

    PubMed

    Morsy, Mohamed A; Sultan, Salah M; Dafalla, Hatim

    2009-08-15

    In this study, electron paramagnetic resonance (EPR) is used, for the first time, as an analytical tool for the quantitative assay of ketoconazole (KTZ) in drug formulations. The drug was successfully characterized by the prominent signals of two radical species produced by its oxidation with 400 microg/mL cerium(IV) in 0.10 mol dm(-3) sulfuric acid. The EPR signal of the reaction mixture was measured in eight capillary tubes housed in a 4 mm EPR sample tube. The radical stability was investigated by obtaining multiple EPR scans of each KTZ sample solution at 2.5 min intervals after the reaction mixing time. The plot of the disappearance of the radical species shows that the disappearance is apparently of zero order. The zero-time intercept of the EPR signal amplitude, which should be proportional to the initial radical concentration, is linear in the sample concentration in the range between 100 and 400 microg/mL, with a correlation coefficient, r, of 0.999. The detection limit was determined to be 11.7 +/- 2.5 microg/mL. The newly adopted method was fully validated following the United States Pharmacopeia (USP) monograph protocol in both the generic and the proprietary forms. The method is very accurate, such that we were able to measure the concentration at confidence levels of 99.9%. The method was also found to be suitable for the assay of KTZ in its tablet and cream pharmaceutical preparations, as no interferences were encountered from excipients of the proprietary drugs. High specificity, simplicity, and rapidity are the merits of the present method compared to previously reported methods.
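
    The zero-order treatment described above amounts to a linear extrapolation of the signal back to mixing time zero, followed by a linear calibration; a minimal sketch with illustrative numbers only:

      import numpy as np

      t = np.array([2.5, 5.0, 7.5, 10.0])        # min after mixing
      amp = np.array([81.0, 76.5, 72.2, 67.8])   # EPR amplitude (a.u.)

      # Zero-order decay: amplitude falls linearly in time, so the fitted
      # zero-time intercept tracks the initial radical concentration.
      slope, intercept = np.polyfit(t, amp, 1)

      # Calibration: zero-time intercepts from standards of known strength.
      conc = np.array([100.0, 200.0, 300.0, 400.0])   # microg/mL standards
      icpt = np.array([22.0, 43.5, 66.1, 87.9])       # illustrative values
      k, b = np.polyfit(conc, icpt, 1)
      unknown_conc = (intercept - b) / k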

  4. Accurate Gaussian basis sets for atomic and molecular calculations obtained from the generator coordinate method with polynomial discretization.

    PubMed

    Celeste, Ricardo; Maringolo, Milena P; Comar, Moacyr; Viana, Rommel B; Guimarães, Amanda R; Haiduke, Roberto L A; da Silva, Albérico B F

    2015-10-01

    Accurate Gaussian basis sets for atoms from H to Ba were obtained by means of the generator coordinate Hartree-Fock (GCHF) method based on a polynomial expansion to discretize the Griffin-Wheeler-Hartree-Fock equations (GWHF). The discretization of the GWHF equations in this procedure is based on a mesh of points not equally distributed in contrast with the original GCHF method. The results of atomic Hartree-Fock energies demonstrate the capability of these polynomial expansions in designing compact and accurate basis sets to be used in molecular calculations and the maximum error found when compared to numerical values is only 0.788 mHartree for indium. Some test calculations with the B3LYP exchange-correlation functional for N2, F2, CO, NO, HF, and HCN show that total energies within 1.0 to 2.4 mHartree compared to the cc-pV5Z basis sets are attained with our contracted bases with a much smaller number of polarization functions (2p1d and 2d1f for hydrogen and heavier atoms, respectively). Other molecular calculations performed here are also in very good accordance with experimental and cc-pV5Z results. The most important point to be mentioned here is that our generator coordinate basis sets required only a tiny fraction of the computational time when compared to B3LYP/cc-pV5Z calculations.

  5. Method Development and Validation of a Stability-Indicating RP-HPLC Method for the Quantitative Analysis of Dronedarone Hydrochloride in Pharmaceutical Tablets

    PubMed Central

    Dabhi, Batuk; Jadeja, Yashwantsinh; Patel, Madhavi; Jebaliya, Hetal; Karia, Denish; Shah, Anamik

    2013-01-01

    A simple, precise, and accurate HPLC method has been developed and validated for the quantitative analysis of Dronedarone Hydrochloride in tablet form. An isocratic separation was achieved using a Waters Symmetry C8 (100 × 4.6 mm), 5 μm particle size column with a flow rate of 1 ml/min and UV detection at 290 nm. The mobile phase consisted of buffer: methanol (40:60 v/v) (buffer: 50 mM KH2PO4 + 1 ml triethylamine in 1 liter water, pH = 2.5 adjusted with ortho-phosphoric acid). The method was validated for specificity, linearity, precision, accuracy, robustness, and solution stability. The specificity of the method was determined by assessing interference from the placebo and by stress testing the drug (forced degradation). The method was linear over the concentration range 20–80 μg/ml (r² = 0.999) with a Limit of Detection (LOD) and Limit of Quantitation (LOQ) of 0.1 and 0.3 μg/ml, respectively. The accuracy of the method was between 99.2–100.5%. The method was found to be robust and suitable for the quantitative analysis of Dronedarone Hydrochloride in a tablet formulation. Degradation products resulting from the stress studies did not interfere with the detection of Dronedarone Hydrochloride, so the assay is thus stability-indicating. PMID:23641332
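
    The reported LOD and LOQ are consistent with the commonly used ICH estimators (the record does not state which estimator was applied, so this is for orientation only):

      \mathrm{LOD} = \frac{3.3\,\sigma}{S},
      \qquad \mathrm{LOQ} = \frac{10\,\sigma}{S},

    where \sigma is the standard deviation of the response and S the slope of the calibration curve; note that LOQ/LOD ≈ 3, matching the 0.3/0.1 μg/ml ratio above.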

  6. Uncertainty in environmental health impact assessment: quantitative methods and perspectives.

    PubMed

    Mesa-Frias, Marco; Chalabi, Zaid; Vanni, Tazio; Foss, Anna M

    2013-01-01

    Environmental health impact assessment models are subjected to great uncertainty due to the complex associations between environmental exposures and health. Quantifying the impact of uncertainty is important if the models are used to support health policy decisions. We conducted a systematic review to identify and appraise current methods used to quantify the uncertainty in environmental health impact assessment. In the 19 studies meeting the inclusion criteria, several methods were identified. These were grouped into random sampling methods, second-order probability methods, Bayesian methods, fuzzy sets, and deterministic sensitivity analysis methods. All 19 studies addressed the uncertainty in the parameter values but only 5 of the studies also addressed the uncertainty in the structure of the models. None of the articles reviewed considered conceptual sources of uncertainty associated with the framing assumptions or the conceptualisation of the model. Future research should attempt to broaden the way uncertainty is taken into account in environmental health impact assessments.

  7. Barcoding T Cell Calcium Response Diversity with Methods for Automated and Accurate Analysis of Cell Signals (MAAACS)

    PubMed Central

    Sergé, Arnauld; Bernard, Anne-Marie; Phélipot, Marie-Claire; Bertaux, Nicolas; Fallet, Mathieu; Grenot, Pierre; Marguet, Didier; He, Hai-Tao; Hamon, Yannick

    2013-01-01

    We introduce a series of experimental procedures enabling sensitive calcium monitoring in T cell populations by confocal video-microscopy. Tracking and post-acquisition analysis were performed using Methods for Automated and Accurate Analysis of Cell Signals (MAAACS), a fully customized program that associates a high throughput tracking algorithm, an intuitive reconnection routine and a statistical platform to provide, at a glance, the calcium barcode of a population of individual T cells. Combined with a sensitive calcium probe, this method allowed us to unravel the heterogeneity in shape and intensity of the calcium response in T cell populations and especially in naive T cells, which display intracellular calcium oscillations upon stimulation by antigen presenting cells. PMID:24086124

  8. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate

    PubMed Central

    Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul

    2015-01-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics, and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out, participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold-standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives, the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821

  9. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    PubMed

    Minyoo, Abel B; Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul; Lankester, Felix

    2015-12-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics, and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out, participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold-standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives, the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere.
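
    The mark-re-sight (transect survey) estimate reduces to a simple proportion; a minimal sketch with illustrative numbers:

      def transect_coverage(marked_seen, total_seen):
          # Proportion of marked (collared) dogs among all dogs sighted.
          return marked_seen / total_seen if total_seen else float("nan")

      # Illustrative numbers only: 62 collared of 95 dogs sighted -> ~0.65.
      print(round(transect_coverage(62, 95), 2))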

  10. Double calibration: an accurate, reliable and easy-to-use method for 3D scapular motion analysis.

    PubMed

    Brochard, Sylvain; Lempereur, Mathieu; Rémy-Néris, Olivier

    2011-02-24

    The most recent non-invasive methods for the recording of scapular motion are based on an acromion marker (AM) set and a single calibration (SC) of the scapula in a resting position. However, this method fails to accurately measure scapular kinematics above 90° of arm elevation, due to soft tissue artifacts of the skin and muscles covering the acromion. The aim of this study was to evaluate the accuracy, and inter-trial and inter-session repeatability, of a double calibration method (DC) in comparison with SC. The SC and DC data were measured with an optoelectronic system during arm flexion and abduction at different angles of elevation (0-180°). They were compared with palpation of the scapula using a scapula locator. DC data were not significantly different from palpation for 5/6 axes of rotation tested (Y, X, and Z in abduction and flexion), whereas SC showed significant differences for 5/6 axes. The root mean square errors ranged from 2.96° to 4.48° for DC and from 6° to 9.19° for SC. The inter-trial repeatability was good to excellent for SC and DC. The inter-session repeatability was moderate to excellent for SC and moderate to good for DC. Coupling AM and DC is an easy-to-use method which yields accurate and reliable measurements of scapular kinematics for the complete range of arm motion. It can be applied to the measurement of shoulder motion in many fields (sports, orthopaedics, and rehabilitation), especially when large ranges of arm motion are required.

  11. Accurate reporting of adherence to inhaled therapies in adults with cystic fibrosis: methods to calculate “normative adherence”

    PubMed Central

    Hoo, Zhe Hui; Curley, Rachael; Campbell, Michael J; Walters, Stephen J; Hind, Daniel; Wildman, Martin J

    2016-01-01

    Background: Preventative inhaled treatments in cystic fibrosis will only be effective in maintaining lung health if used appropriately. An accurate adherence index should therefore reflect treatment effectiveness, but the standard method of reporting adherence, that is, as a percentage of the agreed regimen between clinicians and people with cystic fibrosis, does not account for the appropriateness of the treatment regimen. We describe two different indices of inhaled therapy adherence for adults with cystic fibrosis which take into account effectiveness, that is, “simple” and “sophisticated” normative adherence.
    Methods to calculate normative adherence: Denominator adjustment involves fixing a minimum appropriate value based on the recommended therapy given a person’s characteristics. For simple normative adherence, the denominator is determined by the person’s Pseudomonas status. For sophisticated normative adherence, the denominator is determined by the person’s Pseudomonas status and history of pulmonary exacerbations over the previous year. Numerator adjustment involves capping the daily maximum inhaled therapy use at 100% so that medication overuse does not artificially inflate the adherence level.
    Three illustrative cases: Case A is an example of inhaled therapy under-prescription based on Pseudomonas status resulting in lower simple normative adherence compared to unadjusted adherence. Case B is an example of inhaled therapy under-prescription based on previous exacerbation history resulting in lower sophisticated normative adherence compared to unadjusted adherence and simple normative adherence. Case C is an example of nebulizer overuse exaggerating the magnitude of unadjusted adherence.
    Conclusion: Different methods of reporting adherence can result in different magnitudes of adherence. We have proposed two methods of standardizing the calculation of adherence which should better reflect treatment effectiveness. The value of these indices can
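
    A minimal sketch of the numerator and denominator adjustments, with assumed dose thresholds, since the clinical rules are only summarized above:

      def normative_denominator(prescribed, pseudomonas, exacerbations=None):
          # Fix a minimum appropriate daily regimen; thresholds are assumed.
          minimum = 2 if pseudomonas else 1
          if exacerbations is not None and exacerbations >= 2:
              minimum += 1          # "sophisticated" add-on, assumed rule
          return max(prescribed, minimum)

      def normative_adherence(daily_doses, denominator):
          # Cap each day at the denominator so overuse cannot inflate the index.
          capped = [min(d, denominator) for d in daily_doses]
          return 100.0 * sum(capped) / (denominator * len(daily_doses))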

  12. Toward accurate molecular identification of species in complex environmental samples: testing the performance of sequence filtering and clustering methods

    PubMed Central

    Flynn, Jullien M; Brown, Emily A; Chain, Frédéric J J; MacIsaac, Hugh J; Cristescu, Melania E

    2015-01-01

    Metabarcoding has the potential to become a rapid, sensitive, and effective approach for identifying species in complex environmental samples. Accurate molecular identification of species depends on the ability to generate operational taxonomic units (OTUs) that correspond to biological species. Due to the sometimes enormous estimates of biodiversity using this method, there is a great need to test the efficacy of data analysis methods used to derive OTUs. Here, we evaluate the performance of various methods for clustering length variable 18S amplicons from complex samples into OTUs using a mock community and a natural community of zooplankton species. We compare analytic procedures consisting of a combination of (1) stringent and relaxed data filtering, (2) singleton sequences included and removed, (3) three commonly used clustering algorithms (mothur, UCLUST, and UPARSE), and (4) three methods of treating alignment gaps when calculating sequence divergence. Depending on the combination of methods used, the number of OTUs varied by nearly two orders of magnitude for the mock community (60–5068 OTUs) and three orders of magnitude for the natural community (22–22191 OTUs). The use of relaxed filtering and the inclusion of singletons greatly inflated OTU numbers without increasing the ability to recover species. Our results also suggest that the method used to treat gaps when calculating sequence divergence can have a great impact on the number of OTUs. Our findings are particularly relevant to studies that cover taxonomically diverse species and employ markers such as rRNA genes in which length variation is extensive. PMID:26078860

  13. Analytical Validation of a Highly Quantitative, Sensitive, Accurate, and Reproducible Assay (HERmark®) for the Measurement of HER2 Total Protein and HER2 Homodimers in FFPE Breast Cancer Tumor Specimens

    PubMed Central

    Larson, Jeffrey S.; Goodman, Laurie J.; Tan, Yuping; Defazio-Eli, Lisa; Paquet, Agnes C.; Cook, Jennifer W.; Rivera, Amber; Frankson, Kristi; Bose, Jolly; Chen, Lili; Cheung, Judy; Shi, Yining; Irwin, Sarah; Kiss, Linda D. B.; Huang, Weidong; Utter, Shannon; Sherwood, Thomas; Bates, Michael; Weidler, Jodi; Parry, Gordon; Winslow, John; Petropoulos, Christos J.; Whitcomb, Jeannette M.

    2010-01-01

    We report here the results of the analytical validation of assays that measure HER2 total protein (H2T) and HER2 homodimer (H2D) expression in Formalin Fixed Paraffin Embedded (FFPE) breast cancer tumors as well as cell line controls. The assays are based on the VeraTag technology platform and are commercially available through a central CAP-accredited clinical reference laboratory. The accuracy of H2T measurements spans a broad dynamic range (2-3 logs) as evaluated by comparison with cross-validating technologies. The measurement of H2T expression demonstrates a sensitivity that is approximately 7–10 times greater than conventional immunohistochemistry (IHC) (HercepTest). The HERmark assay is a quantitative assay that sensitively and reproducibly measures continuous H2T and H2D protein expression levels and therefore may have the potential to stratify patients more accurately with respect to response to HER2-targeted therapies than current methods which rely on semiquantitative protein measurements (IHC) or on indirect assessments of gene amplification (FISH). PMID:21151530

  14. Quantitative assessment of MS plaques and brain atrophy in multiple sclerosis using semiautomatic segmentation method

    NASA Astrophysics Data System (ADS)

    Heinonen, Tomi; Dastidar, Prasun; Ryymin, Pertti; Lahtinen, Antti J.; Eskola, Hannu; Malmivuo, Jaakko

    1997-05-01

    Quantitative magnetic resonance (MR) imaging of the brain is useful in multiple sclerosis (MS) in order to obtain reliable indices of disease progression. The goal of this project was to estimate the total volume of gliotic and non-gliotic plaques in chronic progressive multiple sclerosis with the help of a semiautomatic segmentation method developed at the Ragnar Granit Institute. The developed program, running on a PC-based computer, provides displays of the segmented data in addition to the volumetric analyses. The volumetric accuracy of the program was demonstrated by segmenting MR images of fluid-filled syringes. An anatomical atlas is to be incorporated in the segmentation system to estimate the distribution of MS plaques in various neural pathways of the brain. A total package, including MS plaque volume estimation, estimation of brain atrophy and ventricular enlargement, and the distribution of MS plaques in different neural segments of the brain, has been planned for the near future. Our study confirmed that total lesion volumes in chronic MS show a poor correlation with EDSS scores but a positive correlation with neuropsychological scores. Therefore, accurate total volume measurements of MS plaques using the developed semiautomatic segmentation technique helped us to evaluate the degree of neuropsychological impairment.

  15. FAMBE-pH: a fast and accurate method to compute the total solvation free energies of proteins.

    PubMed

    Vorobjev, Yury N; Vila, Jorge A; Scheraga, Harold A

    2008-09-04

    A fast and accurate method to compute the total solvation free energies of proteins as a function of pH is presented. The method makes use of a combination of approaches, some of which have already appeared in the literature; (i) the Poisson equation is solved with an optimized fast adaptive multigrid boundary element (FAMBE) method; (ii) the electrostatic free energies of the ionizable sites are calculated for their neutral and charged states by using a detailed model of atomic charges; (iii) a set of optimal atomic radii is used to define a precise dielectric surface interface; (iv) a multilevel adaptive tessellation of this dielectric surface interface is achieved by using multisized boundary elements; and (v) 1:1 salt effects are included. The equilibrium proton binding/release is calculated with the Tanford-Schellman integral if the proteins contain more than approximately 20-25 ionizable groups; for a smaller number of ionizable groups, the ionization partition function is calculated directly. The FAMBE method is tested as a function of pH (FAMBE-pH) with three proteins, namely, bovine pancreatic trypsin inhibitor (BPTI), hen egg white lysozyme (HEWL), and bovine pancreatic ribonuclease A (RNaseA). The results are (a) the FAMBE-pH method reproduces the observed pKa's of the ionizable groups of these proteins within an average absolute value of 0.4 pK units and a maximum error of 1.2 pK units and (b) comparison of the calculated total pH-dependent solvation free energy for BPTI, between the exact calculation of the ionization partition function and the Tanford-Schellman integral method, shows agreement within 1.2 kcal/mol. These results indicate that calculation of total solvation free energies with the FAMBE-pH method can provide an accurate prediction of protein conformational stability at a given fixed pH and, if coupled with molecular mechanics or molecular dynamics methods, can also be used for more realistic studies of protein folding, unfolding, and
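
    For reference, one common form of the Tanford-Schellman integral mentioned above is (sign conventions vary between authors):

      \Delta G_{\mathrm{ion}}(\mathrm{pH}_2) - \Delta G_{\mathrm{ion}}(\mathrm{pH}_1)
        = RT \ln 10 \int_{\mathrm{pH}_1}^{\mathrm{pH}_2} Q(\mathrm{pH})\, \mathrm{d}\mathrm{pH},

    where Q(pH) is the mean number of protons bound; the abstract's comparison checks this integral against the direct evaluation of the ionization partition function.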

  16. Methods for equine preantral follicles isolation: quantitative aspects.

    PubMed

    Leonel, E C R; Bento-Silva, V; Ambrozio, K S; Luna, H S; Costa e Silva, E V; Zúccari, C E S N

    2013-12-01

    The aim of this study was to test the use of mechanical and mechanical-enzymatic methods, saline solution (SS), and PBS solution for the manipulation and isolation of mare ovarian preantral follicles (PAFs). The ovaries were subjected to mechanical isolation (mixer) alone or in association with enzymatic digestion (collagenase). Incubation times of 10 and 20 min were employed. In the first group, 4.1 ± 4.9 PAFs were harvested with the mechanical-enzymatic method vs 71.1 ± 19.2 with the mechanical procedure, showing a significant difference between methods; using SS and PBS, these numbers were 35.7 ± 34.3 and 39.6 ± 39.6, respectively, with no significant difference between solutions. In the second group, there was a significant difference between methods, with 7.1 ± 10.6 follicles harvested with the mechanical-enzymatic method vs 63.2 ± 22.9 with the mechanical procedure; using SS and PBS, means were 35.5 ± 36.4 and 34.9 ± 31.1, respectively. The mechanical method proved more effective than the mechanical-enzymatic approach. Both SS and PBS can be used as media for equine PAF preparation.

  17. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis

    PubMed Central

    Abbasi, Mahdi

    2014-01-01

    The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems including inverse conductivity, considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the other suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and result in a discrete convolution equation. That is, the new moment method leads to the equation solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure to be solved using the fast Fourier transform. This allows us to reduce the order of computational complexity to as low as O(N^2 log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR. PMID:24696808
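
    The computational core described above, an integral operator reduced to a discrete convolution that the FFT can apply cheaply, can be illustrated with a short sketch. The following is a hypothetical stand-in, not the authors' code: it solves a generic 2-D equation u = b + K*u by fixed-point iteration, with each convolution evaluated via FFT at the quoted O(N^2 log N) cost per pass.

```python
import numpy as np

def solve_convolution_fixed_point(kernel, rhs, n_iter=50):
    """Solve u = rhs + (kernel * u), where * is 2-D circular convolution,
    by fixed-point iteration with FFT-accelerated convolutions.
    Illustrative only; stands in for the discrete convolution equation
    produced by the sinc-basis moment method."""
    K_hat = np.fft.fft2(kernel)
    u = rhs.copy()
    for _ in range(n_iter):
        conv = np.fft.ifft2(K_hat * np.fft.fft2(u))  # O(N^2 log N) per pass
        u = rhs + conv
    return u

# Toy example with a small (contractive) kernel so the iteration converges.
n = 64
kernel = np.zeros((n, n), dtype=complex)
kernel[0, 0], kernel[0, 1], kernel[1, 0] = 0.1, 0.05, 0.05
rhs = np.ones((n, n), dtype=complex)
u = solve_convolution_fixed_point(kernel, rhs)
```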

  18. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    SciTech Connect

    Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on REFPROP database for an accurate estimation of non-linear behaviors of thermodynamic and fluid transport properties at the transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, spurious-oscillation-free property, and capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier–Stokes equations and solved instead of solving the total energy equation to achieve the spurious pressure oscillation free property with an arbitrary equation of state including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.

  19. Establishment of an accurate and fast detection method using molecular beacons in loop-mediated isothermal amplification assay

    PubMed Central

    Liu, Wei; Huang, Simo; Liu, Ningwei; Dong, Derong; Yang, Zhan; Tang, Yue; Ma, Wen; He, Xiaoming; Ao, Da; Xu, Yaqing; Zou, Dayang; Huang, Liuyu

    2017-01-01

    This study established a constant-temperature fluorescence quantitative detection method, combining loop-mediated isothermal amplification (LAMP) with molecular beacons. The advantages of LAMP are its convenience and efficiency, as it does not require a thermocycler and results are easily visualized by the naked eye. However, a major disadvantage of current LAMP techniques is the use of indirect evaluation methods (e.g., electrophoresis, SYBR Green I dye, precipitation, hydroxynaphthol blue dye, the turbidimetric method, calcein/Mn2+ dye, and the composite probe method), which cannot distinguish between the desired products and products of nonspecific amplification, thereby leading to false positives. Use of molecular beacons avoids this problem because molecular beacons produce fluorescence signals only when binding to target DNA, thus acting as a direct indicator of amplification products. Our analyses determined the optimal conditions for molecular beacons as an evaluation tool in LAMP: beacon length of 25–45 bp, beacon concentration of 0.6–1 pmol/μL, and reaction temperature of 60–65 °C. In conclusion, we validated a novel molecular beacon loop-mediated isothermal amplification method (MB-LAMP), realizing the direct detection of LAMP product. PMID:28059137

  20. Quantitative characterization of metastatic disease in the spine. Part I. Semiautomated segmentation using atlas-based deformable registration and the level set method

    SciTech Connect

    Hardisty, M.; Gordon, L.; Agarwal, P.; Skrinskas, T.; Whyne, C.

    2007-08-15

    Quantitative assessment of metastatic disease in bone is often considered immeasurable and, as such, patients with skeletal metastases are often excluded from clinical trials. In order to effectively quantify the impact of metastatic tumor involvement in the spine, accurate segmentation of the vertebra is required. Manual segmentation can be accurate but involves extensive and time-consuming user interaction. Potential solutions to automating segmentation of metastatically involved vertebrae are demons deformable image registration and level set methods. The purpose of this study was to develop a semiautomated method to accurately segment tumor-bearing vertebrae using the aforementioned techniques. By maintaining morphology of an atlas, the demons-level set composite algorithm was able to accurately differentiate between trans-cortical tumors and surrounding soft tissue of identical intensity. The algorithm successfully segmented both the vertebral body and trabecular centrum of tumor-involved and healthy vertebrae. This work validates our approach as equivalent in accuracy to an experienced user.

  1. A general method for the quantitative assessment of mineral pigments.

    PubMed

    Ares, M C Zurita; Fernández, J M

    2016-01-01

    A general method for the estimation of mineral pigment contents in different bases is proposed using a single set of calibration curves (one for each pigment) calculated for a white standard base, so that elaborating patterns for each utilized base is not necessary. The method can be used on different bases, and its validity has even been proven in strongly tinted bases. The method consists of a novel procedure that combines diffuse reflectance spectroscopy, second derivatives and the Kubelka-Munk function. This technique proved to be at least one order of magnitude more sensitive than X-ray diffraction for colored compounds, since it allowed the determination of the pigment amount in colored samples containing 0.5 wt% of pigment that was not detected by X-ray diffraction. The method can be used to estimate the concentration of mineral pigments in a wide variety of either natural or artificial materials, since it does not require the calculation of each pigment pattern in every base. This fact could have important industrial consequences, as the proposed method would be more convenient, faster and cheaper.
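
    As a rough illustration of the workflow (diffuse reflectance converted by the Kubelka-Munk function, a second derivative taken, then a calibration line built on the white standard base), here is a minimal sketch. All spectra and calibration numbers are invented placeholders, not data from the paper.

```python
import numpy as np

def kubelka_munk(R):
    """Kubelka-Munk remission function F(R) = (1 - R)^2 / (2R)."""
    R = np.asarray(R, dtype=float)
    return (1.0 - R) ** 2 / (2.0 * R)

# Synthetic reflectance spectrum with a pigment absorption band at 550 nm.
wl = np.linspace(400.0, 700.0, 301)
R = 0.9 - 0.2 * np.exp(-((wl - 550.0) / 30.0) ** 2)   # invented spectrum

F = kubelka_munk(R)
d2F = np.gradient(np.gradient(F, wl), wl)   # second derivative vs wavelength
band_amplitude = abs(d2F[np.argmin(np.abs(wl - 550.0))])

# Hypothetical calibration on the white standard base: band amplitude of
# the second derivative versus pigment content (invented values).
wt_percent = np.array([0.5, 1.0, 2.0, 4.0])
amplitudes = np.array([0.8e-4, 1.7e-4, 3.3e-4, 6.9e-4])
slope, intercept = np.polyfit(wt_percent, amplitudes, 1)
print((band_amplitude - intercept) / slope)   # estimated wt% (toy numbers)
```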

  2. Interpretation and application of reaction class transition state theory for accurate calculation of thermokinetic parameters using isodesmic reaction method.

    PubMed

    Wang, Bi-Yao; Li, Ze-Rong; Tan, Ning-Xin; Yao, Qian; Li, Xiang-Yuan

    2013-04-25

    We present a further interpretation of reaction class transition state theory (RC-TST), proposed by Truong et al., for the accurate calculation of rate coefficients for reactions in a class. It is found that RC-TST can be interpreted through the isodesmic reaction method, which is usually used to calculate the reaction enthalpy or enthalpy of formation of a species, and that the theory can also be used for the calculation of reaction barriers and reaction enthalpies for reactions in a class. A correction scheme based on this theory is proposed for the calculation of reaction barriers and reaction enthalpies for reactions in a class. To validate the scheme, 16 combinations of various ab initio levels with various basis sets are used as the approximate methods, and the CCSD(T)/CBS method is used as the benchmark in this study to calculate the reaction energies and energy barriers for a representative set of five reactions from the reaction class R(c)CH(R(b))CR(a)CH2 + OH(•) → R(c)C(•)(R(b))CR(a)CH2 + H2O (R(a), R(b), and R(c) in the reaction formula represent alkyl groups or hydrogen). The results of the approximate methods are then corrected by the theory. The maximum values of the average deviations of the energy barrier and the reaction enthalpy are 99.97 kJ/mol and 70.35 kJ/mol, respectively, before correction, and are reduced to 4.02 kJ/mol and 8.19 kJ/mol, respectively, after correction, indicating that after correction the results are not sensitive to the level of the ab initio method or the size of the basis set, as they were before correction. Therefore, reaction energies and energy barriers for reactions in a class can be calculated accurately at a relatively low level of ab initio theory using our scheme. It is also shown that the rate coefficients for the five representative reactions calculated at the BHandHLYP/6-31G(d,p) level of theory via our scheme are very close to the values calculated at the CCSD(T)/CBS level. Finally, reaction

  3. A quantitative evaluation of two methods for preserving hair samples

    USGS Publications Warehouse

    Roon, David A.; Waits, L.P.; Kendall, K.C.

    2003-01-01

    Hair samples are an increasingly important DNA source for wildlife studies, yet optimal storage methods and DNA degradation rates have not been rigorously evaluated. We tested amplification success rates over a one-year storage period for DNA extracted from brown bear (Ursus arctos) hair samples preserved using silica desiccation and -20 °C freezing. For three nuclear DNA microsatellites, success rates decreased significantly after a six-month time point, regardless of storage method. For a 1000 bp mitochondrial fragment, a similar decrease occurred after a two-week time point. Minimizing delays between collection and DNA extraction will maximize success rates for hair-based noninvasive genetic sampling projects.

  4. Optogalvanic intracavity quantitative detector and method for its use

    DOEpatents

    Zalewski, Edward F.; Keller, Richard A.; Apel, Charles T.

    1983-01-01

    The disclosure relates to an optogalvanic intracavity detector and method for its use. Measurement is made of the amount of light absorbed by atoms, small molecules and ions in a laser cavity utilizing laser-produced changes in plasmas containing the same atoms, molecules, or ions.

  5. Optogalvanic intracavity quantitative detector and method for its use

    DOEpatents

    Zalewski, E.F.; Keller, R.A.; Apel, C.T.

    1983-09-06

    The disclosure relates to an optogalvanic intracavity detector and method for its use. Measurement is made of the amount of light absorbed by atoms, small molecules and ions in a laser cavity utilizing laser-produced changes in plasmas containing the same atoms, molecules, or ions. 6 figs.

  6. Optogalvanic intracavity quantitative detector and method for its use

    DOEpatents

    Zalewski, E.F.; Keller, R.A.; Apel, C.T.

    1981-02-25

    The disclosure relates to an optogalvanic intracavity detector and method for its use. Measurement is made of the amount of light absorbed by atoms, small molecules and ions in a laser cavity utilizing laser-produced changes in plasmas containing the same atoms, molecules or ions.

  7. Selection methods in forage breeding: a quantitative appraisal

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Forage breeding can be extraordinarily complex because of the number of species, perenniality, mode of reproduction, mating system, and the genetic correlation for some traits evaluated in spaced plants vs. performance under cultivation. Aiming to compare eight forage breeding methods for direct sel...

  8. Magnetic Ligation Method for Quantitative Detection of MicroRNAs

    PubMed Central

    Liong, Monty; Im, Hyungsoon; Majmudar, Maulik D.; Aguirre, Aaron D.; Sebas, Matthew; Lee, Hakho; Weissleder, Ralph

    2014-01-01

    A magnetic ligation method is utilized for the detection of microRNAs amongst a complex biological background without polymerase chain reaction or nucleotide modification. The sandwich probes assay can be adapted to analyze a panel of microRNAs associated with cardiovascular diseases in heart tissue samples. PMID:24532323

  9. Computer Image Analysis Method for Rapid Quantitation of Macrophage Phagocytosis

    DTIC Science & Technology

    1990-01-01


  10. Ion chromatography as highly suitable method for rapid and accurate determination of antibiotic fosfomycin in pharmaceutical wastewater.

    PubMed

    Zeng, Ping; Xie, Xiaolin; Song, Yonghui; Liu, Ruixia; Zhu, Chaowei; Galarneau, Anne; Pic, Jean-Stéphane

    2014-01-01

    A rapid and accurate ion chromatography (IC) method (limit of detection as low as 0.06 mg L(-1)) for fosfomycin concentration determination in pharmaceutical industrial wastewater was developed. This method was compared with high performance liquid chromatography determination (with a high detection limit of 96.0 mg L(-1)) and with ultraviolet spectrometry after reaction with alizarin (difficult to perform in colored solutions). The accuracy of the IC method was established in the linear range of 1.0-15.0 mg L(-1), and a linear correlation was found with a correlation coefficient of 0.9998. The recoveries of fosfomycin from industrial pharmaceutical wastewater at spiking concentrations of 2.0, 5.0 and 8.0 mg L(-1) ranged from 81.91 to 94.74%, with a relative standard deviation (RSD) from 1 to 4%. The recoveries from the effluent of a sequencing batch reactor treating fosfomycin with activated sludge, at spiking concentrations of 5.0, 8.0 and 10.0 mg L(-1), ranged from 98.25 to 99.91%, with an RSD from 1 to 2%. The developed IC procedure provided a rapid, reliable and sensitive method for the determination of fosfomycin concentration in industrial pharmaceutical wastewater and in samples containing complex components.
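
    The calibration and spike-recovery arithmetic behind such figures is standard and easy to reproduce; below is a minimal sketch with invented peak areas standing in for IC measurements.

```python
import numpy as np

# Hypothetical IC calibration over the reported linear range 1.0-15.0 mg/L.
standards_mg_L = np.array([1.0, 2.5, 5.0, 10.0, 15.0])
peak_area = np.array([0.98, 2.51, 5.03, 9.95, 15.06])   # invented values

slope, intercept = np.polyfit(standards_mg_L, peak_area, 1)
r = np.corrcoef(standards_mg_L, peak_area)[0, 1]
print(f"r = {r:.4f}")                    # the paper reports r = 0.9998

def recovery_percent(found_mg_L, background_mg_L, spiked_mg_L):
    """Spike recovery: (found - background) / added x 100."""
    return (found_mg_L - background_mg_L) / spiked_mg_L * 100.0

print(recovery_percent(found_mg_L=4.85, background_mg_L=0.12, spiked_mg_L=5.0))
```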

  11. Design of accurate predictors for DNA-binding sites in proteins using hybrid SVM-PSSM method.

    PubMed

    Ho, Shinn-Ying; Yu, Fu-Chieh; Chang, Chia-Yun; Huang, Hui-Ling

    2007-01-01

    In this paper, we investigate the design of accurate predictors for DNA-binding sites in proteins from amino acid sequences. As a result, we propose a hybrid method using a support vector machine (SVM) in conjunction with evolutionary information of amino acid sequences in terms of their position-specific scoring matrices (PSSMs) for the prediction of DNA-binding sites. Considering that the numbers of binding and non-binding residues in proteins are significantly unequal, two additional weights as well as SVM parameters are analyzed and adopted to maximize net prediction (NP, the average of sensitivity and specificity) accuracy. To evaluate the generalization ability of the proposed method SVM-PSSM, a DNA-binding dataset PDC-59 consisting of 59 protein chains with low sequence identity to each other is additionally established. The SVM-based method using the same six-fold cross-validation procedure and PSSM features has NP=80.15% for the training dataset PDNA-62 and NP=69.54% for the test dataset PDC-59, which is much better than the existing neural network-based method, improving the NP values for training and testing by up to 13.45% and 16.53%, respectively. Simulation results reveal that SVM-PSSM performs well in predicting DNA-binding sites of novel proteins from amino acid sequences.
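
    The class-weighting and net prediction (NP) ideas are easy to demonstrate with scikit-learn. The sketch below uses random placeholder features in place of real PSSM windows, so it shows the mechanics only, not the paper's pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

def net_prediction(y_true, y_pred):
    """NP = (sensitivity + specificity) / 2, the accuracy measure
    maximized in the abstract."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return (tp / (tp + fn) + tn / (tn + fp)) / 2.0

# Placeholder data: per-residue feature vectors (a real model would use a
# sliding window of PSSM rows); labels are imbalanced as in real proteins.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 21 * 20))          # hypothetical 21-residue window
y = (rng.random(500) < 0.15).astype(int)     # 1 = DNA-binding residue

clf = SVC(kernel="rbf", class_weight="balanced")   # reweight the rare class
clf.fit(X, y)
print(net_prediction(y, clf.predict(X)))
```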

  12. The multiscale coarse-graining method. XI. Accurate interactions based on the centers of charge of coarse-grained sites

    SciTech Connect

    Cao, Zhen; Voth, Gregory A.

    2015-12-28

    It is essential to be able to systematically construct coarse-grained (CG) models that can efficiently and accurately reproduce key properties of higher-resolution models such as all-atom. To fulfill this goal, a mapping operator is needed to transform the higher-resolution configuration to a CG configuration. Certain mapping operators, however, may lose information related to the underlying electrostatic properties. In this paper, a new mapping operator based on the centers of charge of CG sites is proposed to address this issue. Four example systems are chosen to demonstrate this concept. Within the multiscale coarse-graining framework, CG models that use this mapping operator are found to better reproduce the structural correlations of atomistic models. The present work also demonstrates the flexibility of the mapping operator and the robustness of the force matching method. For instance, important functional groups can be isolated and emphasized in the CG model.

  13. Quantitative method for enumeration of enterotoxigenic Escherichia coli.

    PubMed Central

    Calderon, R L; Levin, M A

    1981-01-01

    A rapid method was developed to quantify toxigenic Escherichia coli, using a membrane filter procedure. After filtration of samples, the membrane filter was first incubated on a medium selective for E. coli (24 h, 44 degrees C) and then transferred to tryptic soy agar (3%; 6 h, 37 degrees C). To assay for labile toxin-producing colonies, the filter was then transferred to a monolayer of Y-1 cells, the E. coli colonies were marked on the bottom of the petri dish, and the filter was removed after 15 min. The monolayer was observed for a positive rounding effect after a 15- to 24-h incubation. The method has an upper limit of detecting 30 toxigenic colonies per plate and can detect as few as one toxigenic colony per plate. A preliminary screening for these enterotoxigenic strains in polluted waters and known positive fecal samples was performed, and positive results were obtained with fecal samples only. PMID:7007415

  14. Do inverse ecosystem models accurately reconstruct plankton trophic flows? Comparing two solution methods using field data from the California Current

    NASA Astrophysics Data System (ADS)

    Stukel, Michael R.; Landry, Michael R.; Ohman, Mark D.; Goericke, Ralf; Samo, Ty; Benitez-Nelson, Claudia R.

    2012-03-01

    Despite the increasing use of linear inverse modeling techniques to elucidate fluxes in undersampled marine ecosystems, the accuracy with which they estimate food web flows has not been resolved. New Markov Chain Monte Carlo (MCMC) solution methods have also called into question the biases of the commonly used L2 minimum norm (L2MN) solution technique. Here, we test the abilities of MCMC and L2MN methods to recover field-measured ecosystem rates that are sequentially excluded from the model input. For data, we use experimental measurements from process cruises of the California Current Ecosystem (CCE-LTER) Program that include rate estimates of phytoplankton and bacterial production, micro- and mesozooplankton grazing, and carbon export from eight study sites varying from rich coastal upwelling to offshore oligotrophic conditions. Both the MCMC and L2MN methods predicted well-constrained rates of protozoan and mesozooplankton grazing with reasonable accuracy, but the MCMC method overestimated primary production. The MCMC method more accurately predicted the poorly constrained rate of vertical carbon export than the L2MN method, which consistently overestimated export. Results involving DOC and bacterial production were equivocal. Overall, when primary production is provided as model input, the MCMC method gives a robust depiction of ecosystem processes. Uncertainty in inverse ecosystem models is large and arises primarily from solution under-determinacy. We thus suggest that experimental programs focusing on food web fluxes expand the range of experimental measurements to include the nature and fate of detrital pools, which play large roles in the model.
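
    For readers unfamiliar with the L2MN technique: it selects the smallest-norm flow vector consistent with the data, which is exactly what np.linalg.lstsq returns for an underdetermined system. The toy sketch below (random placeholder data) shows just that property; a real food-web inversion adds mass-balance and non-negativity constraints, and the MCMC approach samples the whole feasible region instead of picking one point.

```python
import numpy as np

# Toy linear inverse problem A x = b with fewer measured rates (rows)
# than unknown food-web flows (columns).
rng = np.random.default_rng(1)
A = rng.random((5, 12))       # 5 measurements constrain 12 flows
x_true = rng.random(12)
b = A @ x_true

x_l2mn, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimum-norm solution
print(np.linalg.norm(A @ x_l2mn - b))            # ~0: data fit exactly
print(np.linalg.norm(x_l2mn) <= np.linalg.norm(x_true))  # True: smallest norm
```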

  15. A quantitative sampling method for Oncomelania quadrasi by filter paper.

    PubMed

    Tanaka, H; Santos, M J; Matsuda, H; Yasuraoka, K; Santos, A T

    1975-08-01

    Filter paper was found to attract Oncomelania quadrasi in waters in the same way as fallen dried banana leaves, although fewer snails of other species were collected on the former than on the latter. Snails were collected in limited areas using a tube sampler (85 cm2 cross-sectional area) and a filter paper sampler (20 × 20 cm). The sheet of filter paper was placed close to the spot where a tube sample was taken, and recovered after 24 hours. At each sampling, 30 samples were taken by each method in an area, and sampling was repeated four times. The correlation between the number of snails collected by the tube and that by filter paper was studied. The ratio of the snail counts by the tube sampler to those by the filter paper was 1.18. A loose correlation was observed between the snail counts of the two methods, with a correlation coefficient r = 0.6502. The formulas for the regression lines over three experiments were Y = 0.77X + 1.6 and X = 0.55Y + 1.35, where Y is the number of snails collected by tube sampling and X is the number of snails collected on the sheet of filter paper. The type of snail distribution was studied in the 30 samples taken by each method and was observed to be nearly the same for both sampling methods. All sampling data were found to fit the negative binomial distribution, with the value of the constant k in (q - p)^-k varying widely from 0.5775 to 5.9186. In each experiment, the constant k was always larger in tube sampling than in filter paper sampling. This indicates that the uneven distribution of snails on the soil surface is more conspicuous with filter paper sampling.
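
    The two summary statistics used above, the paired-count regression and the negative binomial clumping constant k, can be reproduced in a few lines. Counts below are invented placeholders, with the study's own results quoted in the comments.

```python
import numpy as np

# Hypothetical paired counts per station: Y = tube sampler, X = filter paper.
# (The study reports r = 0.6502 and Y = 0.77X + 1.6 over 30 paired samples.)
X = np.array([0, 2, 3, 5, 1, 8, 4, 6, 2, 9], dtype=float)
Y = np.array([1, 3, 2, 6, 2, 7, 5, 5, 3, 8], dtype=float)

r = np.corrcoef(X, Y)[0, 1]
slope, intercept = np.polyfit(X, Y, 1)
print(f"r = {r:.4f}, Y = {slope:.2f}X + {intercept:.2f}")

# Moment estimate of the negative binomial constant k,
# k = mean^2 / (variance - mean); smaller k = more aggregated distribution.
mean, var = Y.mean(), Y.var(ddof=1)
print(mean ** 2 / (var - mean))
```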

  16. Facile colorimetric methods for the quantitative determination of tetramisole hydrochloride

    NASA Astrophysics Data System (ADS)

    Amin, A. S.; Dessouki, H. A.

    2002-10-01

    Facile, rapid and sensitive methods for the determination of tetramisole hydrochloride in pure form and in dosage forms are described. The procedures are based on the formation of coloured products with the chromogenic reagents alizarin blue BB (I), alizarin red S (II), alizarin violet 3R (III) and alizarin yellow G (IV). The coloured products showed absorption maxima at 605, 468, 631 and 388 nm for I-IV, respectively. The colours obtained were stable for 24 h. The colour systems obeyed Beer's law in the concentration ranges 1.0-36, 0.8-32, 1.2-42 and 0.8-30 μg ml-1, respectively. The results obtained showed good recoveries, with relative standard deviations of 1.27, 0.96, 1.13 and 1.35%, respectively. The detection and determination limits were found to be 1.0 and 3.8, 1.2 and 4.2, 1.0 and 3.9, and finally 1.4 and 4.8 ng ml-1 for the I-IV complexes, respectively. Applications of the method to representative pharmaceutical formulations are presented, and the validity was assessed by applying the standard addition technique; the results are comparable with those obtained using the official method.

  17. Quantitative evaluation of proteins with bicinchoninic acid (BCA): resonance Raman and surface-enhanced resonance Raman scattering-based methods.

    PubMed

    Chen, Lei; Yu, Zhi; Lee, Youngju; Wang, Xu; Zhao, Bing; Jung, Young Mee

    2012-12-21

    A rapid and highly sensitive bicinchoninic acid (BCA) reagent-based protein quantitation tool was developed using competitive resonance Raman (RR) and surface-enhanced resonance Raman scattering (SERRS) methods. A chelation reaction between BCA and Cu(+), which is reduced by protein in an alkaline environment, is exploited to create a BCA-Cu(+) complex that has strong RR and SERRS activities. Using these methods, protein concentrations in solutions can be quantitatively measured at concentrations as low as 50 μg mL(-1) and 10 pg mL(-1). There are many advantages of using RR and SERRS-based assays. These assays exhibit a much wider linear concentration range and provide an additional one (RR method) to four (SERRS method) orders of magnitude increase in detection limits relative to UV-based methods. Protein-to-protein variation is determined using a reference to a standard curve at concentrations of BSA that exhibits excellent recoveries. These novel methods are extremely accurate in detecting total protein concentrations in solution. This improvement in protein detection sensitivity could yield advances in the biological sciences and medical diagnostic field and extend the applications of reagent-based protein assay techniques.

  18. A novel generalized ridge regression method for quantitative genetics.

    PubMed

    Shen, Xia; Alam, Moudud; Fikse, Freddy; Rönnegård, Lars

    2013-04-01

    As the molecular marker density grows, there is a strong need in both genome-wide association studies and genomic selection to fit models with a large number of parameters. Here we present a computationally efficient generalized ridge regression (RR) algorithm for situations in which the number of parameters largely exceeds the number of observations. The computationally demanding parts of the method depend mainly on the number of observations and not the number of parameters. The algorithm was implemented in the R package bigRR based on the previously developed package hglm. Using such an approach, a heteroscedastic effects model (HEM) was also developed, implemented, and tested. The efficiency for different data sizes was evaluated via simulation. The method was tested for a bacteria-hypersensitive trait in a publicly available Arabidopsis data set including 84 inbred lines and 216,130 SNPs. The computation of all the SNP effects required <10 sec using a single 2.7-GHz core. The advantage in run time makes permutation tests feasible for such a whole-genome model, so that a genome-wide significance threshold can be obtained. HEM was found to be more robust than ordinary RR (a.k.a. SNP-best linear unbiased prediction) in terms of QTL mapping, because SNP-specific shrinkage was applied instead of a common shrinkage. The proposed algorithm was also assessed for genomic evaluation and was shown to give better predictions than ordinary RR.
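
    The key to fitting p >> n models cheaply is that the ridge estimate can be computed through an n x n solve instead of a p x p one. A minimal numpy sketch of that dual identity follows; it is not the bigRR code, and it omits the SNP-specific shrinkage that defines HEM.

```python
import numpy as np

def ridge_dual(X, y, lam):
    """Ridge estimate via the dual identity
    (X'X + lam*I_p)^-1 X' y  ==  X' (X X' + lam*I_n)^-1 y,
    so the linear solve is n x n even when p >> n."""
    n = X.shape[0]
    alpha = np.linalg.solve(X @ X.T + lam * np.eye(n), y)
    return X.T @ alpha

rng = np.random.default_rng(2)
n, p = 84, 20_000          # the paper's data set has 84 lines, 216,130 SNPs
X = rng.integers(0, 3, size=(n, p)).astype(float)   # genotypes coded 0/1/2
beta_true = np.zeros(p)
beta_true[:10] = 1.0                                 # 10 causal markers
y = X @ beta_true + rng.normal(scale=0.5, size=n)

beta_hat = ridge_dual(X, y, lam=10.0)                # all SNP effects at once
```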

  19. Quantitative assessment of gene expression network module-validation methods.

    PubMed

    Li, Bing; Zhang, Yingying; Yu, Yanan; Wang, Pengqian; Wang, Yongcheng; Wang, Zhong; Wang, Yongyan

    2015-10-16

    Validation of pluripotent modules in diverse networks holds enormous potential for systems biology and network pharmacology. An arising challenge is how to assess the accuracy of discovering all potential modules from multi-omic networks and validating their architectural characteristics based on innovative computational methods beyond function enrichment and biological validation. To chart progress in this domain, we systematically divided the existing Computational Validation Approaches based on Modular Architecture (CVAMA) into topology-based approaches (TBA) and statistics-based approaches (SBA). We compared the available module validation methods based on 11 gene expression datasets; partially consistent results in the form of homogeneous models were obtained with each individual approach, whereas contradictory results were found between TBA and SBA. The TBA of the Zsummary value had a higher Validation Success Ratio (VSR) (51%) and a higher Fluctuation Ratio (FR) (80.92%), whereas the SBA of the approximately unbiased (AU) p-value had a lower VSR (12.3%) and a lower FR (45.84%). A gray-area simulation study revealed a consistent result for these two models and indicated a lower Variation Ratio (VR) (8.10%) of TBA at 6 simulated levels. Despite facing many novel challenges and evidence limitations, CVAMA may offer novel insights into modular networks.

  20. A quantitative method for measurement of HL-60 cell apoptosis based on diffraction imaging flow cytometry technique.

    PubMed

    Yang, Xu; Feng, Yuanming; Liu, Yahui; Zhang, Ning; Lin, Wang; Sa, Yu; Hu, Xin-Hua

    2014-07-01

    A quantitative method for measurement of apoptosis in HL-60 cells based on the polarization diffraction imaging flow cytometry technique is presented in this paper. Through a comparative study with existing methods and the analysis of diffraction images by a gray level co-occurrence matrix (GLCM) algorithm, we found that four GLCM parameters, contrast (CON), cluster shade (CLS), correlation (COR) and dissimilarity (DIS), exhibit high sensitivity to the apoptotic rate. It was further demonstrated that the CLS parameter correlates significantly (R(2) = 0.899) with the degree of nuclear fragmentation, and the other three parameters showed very good correlations (R(2) ranging from 0.69 to 0.90). These results demonstrate that the new method has the capability for rapid and accurate extraction of morphological features to quantify cellular apoptosis without the need for cell staining.
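
    Three of the four reported parameters (CON, COR, DIS) are available directly from scikit-image, and cluster shade (CLS) is a one-line moment of the same matrix. The sketch below, using a random placeholder image in place of a diffraction image, shows one plausible way to compute them.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_features(img_u8, distance=1, angle=0.0):
    """CON, COR, DIS via graycoprops plus a manually computed
    cluster shade (CLS) from the normalized GLCM."""
    glcm = graycomatrix(img_u8, [distance], [angle],
                        levels=256, symmetric=True, normed=True)
    P = glcm[:, :, 0, 0]
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    cls = (((i + j - mu_i - mu_j) ** 3) * P).sum()
    return {"CON": graycoprops(glcm, "contrast")[0, 0],
            "COR": graycoprops(glcm, "correlation")[0, 0],
            "DIS": graycoprops(glcm, "dissimilarity")[0, 0],
            "CLS": cls}

# Placeholder 8-bit image standing in for a polarization diffraction image.
img = (np.random.default_rng(3).random((128, 128)) * 255).astype(np.uint8)
print(glcm_features(img))
```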

  1. A quantitative method for measurement of HL-60 cell apoptosis based on diffraction imaging flow cytometry technique

    PubMed Central

    Yang, Xu; Feng, Yuanming; Liu, Yahui; Zhang, Ning; Lin, Wang; Sa, Yu; Hu, Xin-Hua

    2014-01-01

    A quantitative method for measurement of apoptosis in HL-60 cells based on the polarization diffraction imaging flow cytometry technique is presented in this paper. Through a comparative study with existing methods and the analysis of diffraction images by a gray level co-occurrence matrix (GLCM) algorithm, we found that four GLCM parameters, contrast (CON), cluster shade (CLS), correlation (COR) and dissimilarity (DIS), exhibit high sensitivity to the apoptotic rate. It was further demonstrated that the CLS parameter correlates significantly (R2 = 0.899) with the degree of nuclear fragmentation, and the other three parameters showed very good correlations (R2 ranging from 0.69 to 0.90). These results demonstrate that the new method has the capability for rapid and accurate extraction of morphological features to quantify cellular apoptosis without the need for cell staining. PMID:25071957

  2. An accurate method for determining residual stresses with magnetic non-destructive techniques in welded ferromagnetic steels

    NASA Astrophysics Data System (ADS)

    Vourna, P.

    2016-03-01

    The scope of the present research work was to investigate the proper selection criteria for developing a suitable methodology for the accurate determination of residual stresses existing in welded parts. Magnetic non-destructive testing was carried out using two techniques: measurement of the magnetic Barkhausen noise and evaluation of the magnetic hysteresis loop parameters. The spatial distribution of residual stresses in welded metal parts was determined by both non-destructive magnetic methods and by two diffraction methods. Conducting the magnetic measurements required an initial calibration of the ferromagnetic steels. Based on the examined volume of the sample, all methods used were divided into two large categories: the first related to the determination of surface residual stress, the second to bulk residual stress determination. The first category included the magnetic Barkhausen noise and X-ray diffraction measurements, while the second included the magnetic permeability and neutron diffraction data. The residual stresses determined by the magnetic techniques were in good agreement with the diffraction ones.

  3. An accurate method to measure alpha-emitting natural radionuclides in atmospheric filters: Application in two NORM industries

    NASA Astrophysics Data System (ADS)

    Lozano, R. L.; Bolívar, J. P.; San Miguel, E. G.; García-Tenorio, R.; Gázquez, M. J.

    2011-12-01

    In this work, an accurate method for the measurement of natural alpha-emitting radionuclides from aerosols collected in air filters is presented and discussed in detail. The knowledge of the levels of several natural alpha-emitting radionuclides (238U, 234U, 232Th, 230Th, 228Th, 226Ra and 210Po) in atmospheric aerosols is essential not only for a better understanding of the several atmospheric processes and changes, but also for a proper evaluation of the potential doses, which can inadvertently be received by the population via inhalation. The proposed method takes into account the presence of intrinsic amounts of these radionuclides in the matrices of the quartz filters used, as well as the possible variation in the humidity of the filters throughout the collection process. In both cases, the corrections necessary in order to redress these levels have been evaluated and parameterized. Furthermore, a detailed study has been performed into the optimisation of the volume of air to be sampled in order to increase the accuracy in the determination of the radionuclides. The method as a whole has been applied for the determination of the activity concentrations of U- and Th-isotopes in aerosols collected at two NORM (Naturally Occurring Radioactive Material) industries located in the southwest of Spain. Based on the levels found, a conservative estimation has been performed to yield the additional committed effective doses to which the workers are potentially susceptible due to inhalation of anthropogenic material present in the environment of these two NORM industries.

  4. An accurate and linear-scaling method for calculating charge-transfer excitation energies and diabatic couplings

    SciTech Connect

    Pavanello, Michele; Van Voorhis, Troy; Visscher, Lucas; Neugebauer, Johannes

    2013-02-07

    Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Angstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.

  5. Quantitative evaluation of material degradation by Barkhausen noise method

    SciTech Connect

    Yamaguchi, Atsunori; Maeda, Noriyoshi; Sugibayashi, Takuya

    1995-12-01

    Evaluating the life of nuclear power plants becomes inevitable in order to extend the plant operating period. This paper applied a magnetic method using Barkhausen noise (BHN) to detect degradation by fatigue and thermal aging. Low alloy steel (SA 508 cl.2) was fatigued at strain amplitudes of ±1% and ±0.4%, and duplex stainless steel (SCS14A) was heated at 400 °C for a long period (thermal aging). For the material degraded by thermal aging, BHN was measured and a good correlation between the magnetic properties and the absorption energy of the material was obtained. For the fatigued material, BHN was measured at each predetermined cycle, the effect of the stress or strain present in the material at the time of measurement was evaluated, and a good correlation between BHN and the fatigue damage ratio was obtained.

  6. Comparison of reconstruction methods and quantitative accuracy in Siemens Inveon PET scanner

    NASA Astrophysics Data System (ADS)

    Ram Yu, A.; Kim, Jin Su; Kang, Joo Hyun; Moo Lim, Sang

    2015-04-01

    Our data collectively showed that the OSEM 2D reconstruction method provides quantitatively accurate reconstructed PET data.

  7. Response monitoring using quantitative ultrasound methods and supervised dictionary learning in locally advanced breast cancer

    NASA Astrophysics Data System (ADS)

    Gangeh, Mehrdad J.; Fung, Brandon; Tadayyon, Hadi; Tran, William T.; Czarnota, Gregory J.

    2016-03-01

    A non-invasive computer-aided-theragnosis (CAT) system was developed for the early assessment of responses to neoadjuvant chemotherapy in patients with locally advanced breast cancer. The CAT system was based on quantitative ultrasound spectroscopy methods comprising several modules including feature extraction, a metric to measure the dissimilarity between "pre-" and "mid-treatment" scans, and a supervised learning algorithm for the classification of patients to responders/non-responders. One major requirement for the successful design of a high-performance CAT system is to accurately measure the changes in parametric maps before treatment onset and during the course of treatment. To this end, a unified framework based on Hilbert-Schmidt independence criterion (HSIC) was used for the design of feature extraction from parametric maps and the dissimilarity measure between the "pre-" and "mid-treatment" scans. For the feature extraction, HSIC was used to design a supervised dictionary learning (SDL) method by maximizing the dependency between the scans taken from "pre-" and "mid-treatment" with "dummy labels" given to the scans. For the dissimilarity measure, an HSIC-based metric was employed to effectively measure the changes in parametric maps as an indication of treatment effectiveness. The HSIC-based feature extraction and dissimilarity measure used a kernel function to nonlinearly transform input vectors into a higher dimensional feature space and computed the population means in the new space, where enhanced group separability was ideally obtained. The results of the classification using the developed CAT system indicated an improvement of performance compared to a CAT system with basic features using histogram of intensity.

  8. Methods for Computing Accurate Atomic Spin Moments for Collinear and Noncollinear Magnetism in Periodic and Nonperiodic Materials.

    PubMed

    Manz, Thomas A; Sholl, David S

    2011-12-13

    The partitioning of electron spin density among atoms in a material gives atomic spin moments (ASMs), which are important for understanding magnetic properties. We compare ASMs computed using different population analysis methods and introduce a method for computing density derived electrostatic and chemical (DDEC) ASMs. Bader and DDEC ASMs can be computed for periodic and nonperiodic materials with either collinear or noncollinear magnetism, while natural population analysis (NPA) ASMs can be computed for nonperiodic materials with collinear magnetism. Our results show Bader, DDEC, and (where applicable) NPA methods give similar ASMs, but different net atomic charges. Because they are optimized to reproduce both the magnetic field and the chemical states of atoms in a material, DDEC ASMs are especially suitable for constructing interaction potentials for atomistic simulations. We describe the computation of accurate ASMs for (a) a variety of systems using collinear and noncollinear spin DFT, (b) highly correlated materials (e.g., magnetite) using DFT+U, and (c) various spin states of ozone using coupled cluster expansions. The computed ASMs are in good agreement with available experimental results for a variety of periodic and nonperiodic materials. Examples considered include the antiferromagnetic metal organic framework Cu3(BTC)2, several ozone spin states, mono- and binuclear transition metal complexes, ferri- and ferro-magnetic solids (e.g., Fe3O4, Fe3Si), and simple molecular systems. We briefly discuss the theory of exchange-correlation functionals for studying noncollinear magnetism. A method for finding the ground state of systems with highly noncollinear magnetism is introduced. We use these methods to study the spin-orbit coupling potential energy surface of the single molecule magnet Fe4C40H52N4O12, which has highly noncollinear magnetism, and find that it contains unusual features that give a new interpretation to experimental data.

  9. Optimization of Quantitative PCR Methods for Enteropathogen Detection.

    PubMed

    Liu, Jie; Gratz, Jean; Amour, Caroline; Nshama, Rosemary; Walongo, Thomas; Maro, Athanasia; Mduma, Esto; Platts-Mills, James; Boisen, Nadia; Nataro, James; Haverstick, Doris M; Kabir, Furqan; Lertsethtakarn, Paphavee; Silapong, Sasikorn; Jeamwattanalert, Pimmada; Bodhidatta, Ladaporn; Mason, Carl; Begum, Sharmin; Haque, Rashidul; Praharaj, Ira; Kang, Gagandeep; Houpt, Eric R

    2016-01-01

    Detection and quantification of enteropathogens in stool specimens is useful for diagnosing the cause of diarrhea but is technically challenging. Here we evaluate several important determinants of quantification: specimen collection, nucleic acid extraction, and extraction and amplification efficiency. First, we evaluate the molecular detection and quantification of pathogens in rectal swabs versus stool, using paired flocked rectal swabs and whole stool collected from 129 children hospitalized with diarrhea in Tanzania. Swabs generally yielded a higher quantification cycle (Cq) (average 29.7, standard deviation 3.5 vs. 25.3 ± 2.9 from stool, P<0.001) but were still able to detect 80% of pathogens with a Cq < 30 in stool. Second, a simplified total nucleic acid (TNA) extraction procedure was compared to separate DNA and RNA extractions and showed 92% (318/344) sensitivity and 98% (951/968) specificity, with no difference in Cq value for the positive results (ΔCq(DNA+RNA-TNA) = -0.01 ± 1.17, P = 0.972, N = 318). Third, we devised a quantification scheme that adjusts pathogen quantity to the specimen's extraction and amplification efficiency, and show that this better estimates the quantity of spiked specimens than the raw target Cq. In sum, these methods for enteropathogen quantification, stool sample collection, and nucleic acid extraction will be useful for laboratories studying enteric disease.
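
    The third step, adjusting a pathogen's quantity for the specimen's extraction and amplification efficiency, can be sketched as a Cq shift against a spiked control. This is a simplified reading of the idea, not the paper's exact scheme.

```python
def adjusted_cq(target_cq, spike_cq_observed, spike_cq_expected):
    """Shift the target Cq by the spiked control's deviation from its
    expected Cq, compensating for extraction/amplification losses.
    Simplified illustration of the adjustment concept."""
    return target_cq - (spike_cq_observed - spike_cq_expected)

def relative_quantity(cq, cq_reference, efficiency=2.0):
    """Quantity relative to a reference Cq, assuming ~2x per cycle."""
    return efficiency ** (cq_reference - cq)

# A specimen whose extrinsic spike came up 2 cycles late (poor recovery):
cq = adjusted_cq(target_cq=30.0, spike_cq_observed=27.0, spike_cq_expected=25.0)
print(cq, relative_quantity(cq, cq_reference=30.0))
```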

  10. Optimization of Quantitative PCR Methods for Enteropathogen Detection

    PubMed Central

    Liu, Jie; Gratz, Jean; Amour, Caroline; Nshama, Rosemary; Walongo, Thomas; Maro, Athanasia; Mduma, Esto; Platts-Mills, James; Boisen, Nadia; Nataro, James; Haverstick, Doris M.; Kabir, Furqan; Lertsethtakarn, Paphavee; Silapong, Sasikorn; Jeamwattanalert, Pimmada; Bodhidatta, Ladaporn; Mason, Carl; Begum, Sharmin; Haque, Rashidul; Praharaj, Ira; Kang, Gagandeep; Houpt, Eric R.

    2016-01-01

    Detection and quantification of enteropathogens in stool specimens is useful for diagnosing the cause of diarrhea but is technically challenging. Here we evaluate several important determinants of quantification: specimen collection, nucleic acid extraction, and extraction and amplification efficiency. First, we evaluate the molecular detection and quantification of pathogens in rectal swabs versus stool, using paired flocked rectal swabs and whole stool collected from 129 children hospitalized with diarrhea in Tanzania. Swabs generally yielded a higher quantification cycle (Cq) (average 29.7, standard deviation 3.5 vs. 25.3 ± 2.9 from stool, P<0.001) but were still able to detect 80% of pathogens with a Cq < 30 in stool. Second, a simplified total nucleic acid (TNA) extraction procedure was compared to separate DNA and RNA extractions and showed 92% (318/344) sensitivity and 98% (951/968) specificity, with no difference in Cq value for the positive results (ΔCq(DNA+RNA-TNA) = -0.01 ± 1.17, P = 0.972, N = 318). Third, we devised a quantification scheme that adjusts pathogen quantity to the specimen’s extraction and amplification efficiency, and show that this better estimates the quantity of spiked specimens than the raw target Cq. In sum, these methods for enteropathogen quantification, stool sample collection, and nucleic acid extraction will be useful for laboratories studying enteric disease. PMID:27336160

  11. Analyses on Regional Cultivated Land Changebased on Quantitative Method

    NASA Astrophysics Data System (ADS)

    Cao, Yingui; Yuan, Chun; Zhou, Wei; Wang, Jing

    The Three Gorges Project is one of the great projects in the world, and it has accelerated economic development in its reservoir area. In the course of that development, cultivated land has become an important resource: much cultivated land has been occupied and converted into construction land, and at the same time much cultivated land has been flooded by the rising water level. This paper uses the cultivated land areas and socioeconomic indicators of the Three Gorges reservoir area for 1990-2004 and carries out statistical analyses and case studies in order to analyze the process of cultivated land change, identify its driving forces, find new methods to simulate and forecast future cultivated land areas, and serve cultivated land protection and sustainable development in the reservoir area. The results indicate the following. First, over the past 15 years the cultivated land area decreased by 200142 hm2, a decrease of 13343 hm2 per year. The whole reservoir area is divided into three parts, the upper reaches, the belly area and the lower reaches, and the trends of cultivated land change in each are similar to those of the whole reservoir area. Second, the curve of cultivated land area against per capita GDP takes on an inverted-U shape, and the rates of change of cultivated land and of GDP diverge in some years, indicating that cultivated land change and GDP change are decoupled; in addition, cultivated land change is strongly connected with urbanization and with the policy of returning farmland to forest. Lastly, the precision of multiple regression is lower than that of a BP neural network in simulating cultivated land, so the BP neural network is used to forecast the cultivated land areas in 2005, 2010 and 2015, and the forecast results are reasonable.

  12. Quantitative research on the primary process: method and findings.

    PubMed

    Holt, Robert R

    2002-01-01

    Freud always defined the primary process metapsychologically, but he described the ways it shows up in dreams, parapraxes, jokes, and symptoms with enough observational detail to make it possible to create an objective, reliable scoring system to measure its manifestations in Rorschach responses, dreams, TAT stories, free associations, and other verbal texts. That system can identify signs of the thinker's efforts, adaptive or maladaptive, to control or defend against the emergence of primary process. A prerequisite and a consequence of the research that used this system was clarification and elaboration of the psychoanalytic theory of thinking. Results of empirical tests of several propositions derived from psychoanalytic theory are summarized. Predictions concerning the method's most useful index, of adaptive vs. maladaptive regression, have been repeatedly verified: People who score high on this index (who are able to produce well-controlled "primary products" in their Rorschach responses), as compared to those who score at the maladaptive pole (producing primary-process-filled responses with poor reality testing, anxiety, and pathological defensive efforts), are better able to tolerate sensory deprivation, are more able to enter special states of consciousness comfortably (drug-induced, hypnotic, etc.), and have higher achievements in artistic creativity, while schizophrenics tend to score at the extreme of maladaptive regression. Capacity for adaptive regression also predicts success in psychotherapy, and rises with the degree of improvement after both psychotherapy and drug treatment. Some predictive failures have been theoretically interesting: Kris's hypothesis about creativity and the controlled use of primary process holds for males but usually not for females. This body of work is presented as a refutation of charges, brought by such critics as Crews, that psychoanalysis cannot become a science.

  13. Extensive Peptide Fractionation and y1 Ion-Based Interference Detection Method for Enabling Accurate Quantification by Isobaric Labeling and Mass Spectrometry.

    PubMed

    Niu, Mingming; Cho, Ji-Hoon; Kodali, Kiran; Pagala, Vishwajeeth; High, Anthony A; Wang, Hong; Wu, Zhiping; Li, Yuxin; Bi, Wenjian; Zhang, Hui; Wang, Xusheng; Zou, Wei; Peng, Junmin

    2017-02-22

    Isobaric labeling quantification by mass spectrometry (MS) has emerged as a powerful technology for multiplexed large-scale protein profiling, but measurement accuracy in complex mixtures is confounded by interference from coisolated ions, resulting in ratio compression. Here we report that the ratio compression can be essentially resolved by the combination of pre-MS peptide fractionation, MS2-based interference detection, and post-MS computational interference correction. To recapitulate the complexity of biological samples, we pooled tandem mass tag (TMT)-labeled Escherichia coli peptides at 1:3:10 ratios and added ∼20-fold more rat peptides as background, followed by the analysis of two-dimensional liquid chromatography (LC)-MS/MS. Systematic investigation shows that quantitative interference was impacted by LC fractionation depth, MS isolation window, and peptide loading amount. Exhaustive fractionation (320 × 4 h) can nearly eliminate the interference and achieve results comparable to the MS3-based method. Importantly, the interference in MS2 scans can be estimated by the intensity of contaminated y1 product ions, and we thus developed an algorithm to correct reporter ion ratios of tryptic peptides. Our data indicate that intermediate fractionation (40 × 2 h) and y1 ion-based correction allow accurate and deep TMT profiling of more than 10 000 proteins, which represents a straightforward and affordable strategy in isobaric labeling proteomics.
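
    The y1-based correction rests on a simple observation: a tryptic peptide ends in K or R, so coisolated contamination is betrayed by the y1 ion of the other residue. The sketch below implements one simplified form of such a correction (interference assumed flat across channels); the published algorithm is more elaborate.

```python
import numpy as np

def correct_reporters(reporters, y1_target, y1_contam):
    """Estimate the interference fraction from the contaminating y1 ion
    and subtract a flat per-channel interference from the TMT reporter
    intensities.  Simplified; not the authors' algorithm."""
    f = y1_contam / (y1_target + y1_contam)     # interference fraction
    per_channel = f * reporters.sum() / reporters.size
    return np.clip(reporters - per_channel, 0.0, None)

obs = np.array([1000.0, 2900.0, 9800.0])   # a 1:3:10 mix, ratio-compressed
print(correct_reporters(obs, y1_target=9e4, y1_contam=1e4))
```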

  14. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    NASA Astrophysics Data System (ADS)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  15. Full Discretisations for Nonlinear Evolutionary Inequalities Based on Stiffly Accurate Runge-Kutta and hp-Finite Element Methods.

    PubMed

    Gwinner, J; Thalhammer, M

    The convergence of full discretisations by implicit Runge-Kutta and nonconforming Galerkin methods applied to nonlinear evolutionary inequalities is studied. The scope of applications includes differential inclusions governed by a nonlinear operator that is monotone and fulfills a certain growth condition. A basic assumption on the considered class of stiffly accurate Runge-Kutta time discretisations is a stability criterion which is in particular satisfied by the Radau IIA and Lobatto IIIC methods. In order to allow nonconforming hp-finite element approximations of unilateral constraints, set convergence of convex subsets in the sense of Glowinski-Mosco-Stummel is utilised. An appropriate formulation of the fully discrete variational inequality is deduced on the basis of a characteristic example of use, a Signorini-type initial-boundary value problem. Under hypotheses close to the existence theory of nonlinear first-order evolutionary equations and inequalities involving a monotone main part, a convergence result for the piecewise constant in time interpolant is established.

  16. Accurate energy bands calculated by the hybrid quasiparticle self-consistent GW method implemented in the ecalj package

    NASA Astrophysics Data System (ADS)

    Deguchi, Daiki; Sato, Kazunori; Kino, Hiori; Kotani, Takao

    2016-05-01

    We have recently implemented a new version of the quasiparticle self-consistent GW (QSGW) method in the ecalj package released at http://github.com/tkotani/ecalj. Since the new version of the ecalj package is numerically stable and more accurate than the previous versions, we can perform calculations easily without being bothered with tuning input parameters. Here we examine its ability to describe energy band properties, e.g., band-gap energy, eigenvalues at special points, and effective mass, for a variety of semiconductors and insulators. We treat C, Si, Ge, Sn, SiC (in 2H, 3C, and 4H structures), (Al, Ga, In) × (N, P, As, Sb), (Zn, Cd, Mg) × (O, S, Se, Te), SiO2, HfO2, ZrO2, SrTiO3, PbS, PbTe, MnO, NiO, and HgO. We propose that a hybrid QSGW method, where we mix 80% of QSGW and 20% of LDA, gives universally good agreement with experiments for these materials.
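
    Written schematically, the proposed hybrid exchange-correlation operator is a fixed 80/20 mixture (the precise static form of the QSGW self-energy follows the ecalj implementation):

```latex
V_{xc}^{\mathrm{hybrid}} = 0.8\,\Sigma_{\mathrm{QSGW}} + 0.2\,V_{xc}^{\mathrm{LDA}}
```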

  17. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging

    NASA Astrophysics Data System (ADS)

    Hughes, Timothy J.; Kandathil, Shaun M.; Popelier, Paul L. A.

    2015-02-01

    As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecupole moment, mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G**, B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol-1, decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at the B3LYP and M06-2X levels generally outperformed those at the HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol-1.
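
    Kriging is Gaussian-process regression, so the mapping from nuclear coordinates to a multipole-moment component can be sketched with scikit-learn. Features and targets below are random placeholders, not the training geometries used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)
X_train = rng.normal(size=(200, 9))     # e.g., 3 atoms x 3 coordinates
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=200)  # one moment

kriging = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=np.ones(9)),  # anisotropic
    normalize_y=True)
kriging.fit(X_train, y_train)

moment, moment_std = kriging.predict(rng.normal(size=(5, 9)), return_std=True)
```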

  18. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging.

    PubMed

    Hughes, Timothy J; Kandathil, Shaun M; Popelier, Paul L A

    2015-02-05

    As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecapole moment, mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G**, B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol-1, decreasing to 60-70% of test cases for the larger base pair complexes. Models built on moments obtained at the B3LYP and M06-2X levels generally outperformed those at the HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol-1.

  19. Accurate diagnosis of myalgic encephalomyelitis and chronic fatigue syndrome based upon objective test methods for characteristic symptoms

    PubMed Central

    Twisk, Frank NM

    2015-01-01

    Although myalgic encephalomyelitis (ME) and chronic fatigue syndrome (CFS) are considered to be synonymous, the definitional criteria for ME and CFS define two distinct, partially overlapping, clinical entities. ME, whether defined by the original criteria or by the recently proposed criteria, is not equivalent to CFS, let alone a severe variant of incapacitating chronic fatigue. Distinctive features of ME are: muscle weakness and easy muscle fatigability, cognitive impairment, circulatory deficits, a marked variability of the symptoms in presence and severity, but above all, post-exertional “malaise”: a (delayed) prolonged aggravation of symptoms after a minor exertion. In contrast, CFS is primarily defined by (unexplained) chronic fatigue, which should be accompanied by four out of a list of eight symptoms, e.g., headaches. Due to the subjective nature of several symptoms of ME and CFS, researchers and clinicians have questioned the physiological origin of these symptoms and qualified ME and CFS as functional somatic syndromes. However, various characteristic symptoms, e.g., post-exertional “malaise” and muscle weakness, can be assessed objectively using well-accepted methods, e.g., cardiopulmonary exercise tests and cognitive tests. The objective measures acquired by these methods should be used to accurately diagnose patients, to evaluate the severity and impact of the illness objectively and to assess the positive and negative effects of proposed therapies impartially. PMID:26140274

  20. A new method of accurate broken rotor bar diagnosis based on modulation signal bispectrum analysis of motor current signals

    NASA Astrophysics Data System (ADS)

    Gu, F.; Wang, T.; Alwodai, A.; Tian, X.; Shao, Y.; Ball, A. D.

    2015-01-01

    Motor current signature analysis (MCSA) has been an effective way of monitoring electrical machines for many years. However, inadequate accuracy in diagnosing incipient broken rotor bars (BRB) has motivated many studies into improving this method. In this paper a modulation signal bispectrum (MSB) analysis is applied to motor currents from different broken bar cases, and a new MSB-based sideband estimator (MSB-SE) and sideband amplitude estimator are introduced for obtaining the amplitude at (1 ± 2s)fs (where s is the rotor slip and fs is the fundamental supply frequency) with high accuracy. As the MSB-SE suppresses noise well, the new estimator produces more accurate results in predicting the number of BRB than conventional power spectrum analysis. Moreover, the paper also develops an improved model for motor current signals under rotor fault conditions and an effective method to decouple the BRB current from that of the speed oscillations associated with the BRB. These provide theoretical support for the new estimators and clarify the issues in using conventional bispectrum analysis.
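
    For readers who want to experiment, the sketch below evaluates a segment-averaged modulation signal bispectrum at a single bifrequency, using the common definition B_MS(f1, f2) = E[X(f1+f2) X(f1-f2) X*(f1) X*(f1)] with f1 the supply frequency and f2 = 2*s*fs; the normalisation and the paper's full MSB-SE construction are not reproduced, and all signal parameters are illustrative:

        import numpy as np

        def msb_point(x, fs_hz, f1, f2, nseg=16384, noverlap=8192):
            """Segment-averaged modulation signal bispectrum at bifrequency (f1, f2):
            B_MS(f1, f2) = E[X(f1+f2) X(f1-f2) X*(f1) X*(f1)]."""
            freqs = np.fft.rfftfreq(nseg, d=1.0 / fs_hz)
            i1 = np.argmin(np.abs(freqs - f1))           # carrier bin
            ip = np.argmin(np.abs(freqs - (f1 + f2)))    # upper sideband bin
            im = np.argmin(np.abs(freqs - (f1 - f2)))    # lower sideband bin
            win = np.hanning(nseg)
            acc, count = 0.0 + 0.0j, 0
            for start in range(0, len(x) - nseg + 1, nseg - noverlap):
                X = np.fft.rfft(win * x[start:start + nseg])
                acc += X[ip] * X[im] * np.conj(X[i1]) ** 2
                count += 1
            return acc / count

        # Toy usage: a 50 Hz supply current amplitude-modulated at 2*s*fs,
        # mimicking a broken-rotor-bar signature with slip s = 0.02.
        fs_hz, f_supply, slip = 5000.0, 50.0, 0.02
        t = np.arange(0, 40, 1.0 / fs_hz)
        x = (1 + 0.05 * np.cos(2 * np.pi * 2 * slip * f_supply * t)) \
            * np.cos(2 * np.pi * f_supply * t) + 0.01 * np.random.randn(t.size)
        print(abs(msb_point(x, fs_hz, f_supply, 2 * slip * f_supply)))

    Because the two sidebands and the carrier must be phase-coupled for the product to average constructively, random noise at the sideband frequencies cancels out over segments, which is the noise-suppression property the abstract refers to.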

  1. Accurate diagnosis of myalgic encephalomyelitis and chronic fatigue syndrome based upon objective test methods for characteristic symptoms.

    PubMed

    Twisk, Frank Nm

    2015-06-26

    Although myalgic encephalomyelitis (ME) and chronic fatigue syndrome (CFS) are considered to be synonymous, the definitional criteria for ME and CFS define two distinct, partially overlapping, clinical entities. ME, whether defined by the original criteria or by the recently proposed criteria, is not equivalent to CFS, let alone a severe variant of incapacitating chronic fatigue. Distinctive features of ME are: muscle weakness and easy muscle fatigability, cognitive impairment, circulatory deficits, a marked variability of the symptoms in presence and severity, but above all, post-exertional "malaise": a (delayed) prolonged aggravation of symptoms after a minor exertion. In contrast, CFS is primarily defined by (unexplained) chronic fatigue, which should be accompanied by four out of a list of eight symptoms, e.g., headaches. Due to the subjective nature of several symptoms of ME and CFS, researchers and clinicians have questioned the physiological origin of these symptoms and qualified ME and CFS as functional somatic syndromes. However, various characteristic symptoms, e.g., post-exertional "malaise" and muscle weakness, can be assessed objectively using well-accepted methods, e.g., cardiopulmonary exercise tests and cognitive tests. The objective measures acquired by these methods should be used to accurately diagnose patients, to evaluate the severity and impact of the illness objectively and to assess the positive and negative effects of proposed therapies impartially.

  2. Implementation and evaluation of the Level Set method: Towards efficient and accurate simulation of wet etching for microengineering applications

    NASA Astrophysics Data System (ADS)

    Montoliu, C.; Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Colom, R. J.

    2013-10-01

    The use of atomistic methods, such as the Continuous Cellular Automaton (CCA), is currently regarded as a computationally efficient and experimentally accurate approach for the simulation of anisotropic etching of various substrates in the manufacture of Micro-electro-mechanical Systems (MEMS). However, when the features of the chemical process are modified, a time-consuming calibration process needs to be used to transform the new macroscopic etch rates into a corresponding set of atomistic rates. Furthermore, changing the substrate requires a labor-intensive effort to reclassify most atomistic neighborhoods. In this context, the Level Set (LS) method provides an alternative approach where the macroscopic forces affecting the front evolution are directly applied at the discrete level, thus avoiding the need for reclassification and/or calibration. Correspondingly, we present a fully-operational Sparse Field Method (SFM) implementation of the LS approach, discussing the algorithm in detail and providing a thorough characterization of the computational cost and simulation accuracy, including a comparison to the performance of the most recent CCA model. We conclude that the SFM implementation achieves accuracy similar to that of the CCA method, with fewer fluctuations in the etch front, while requiring roughly 4 times less memory. Although SFM can be up to 2 times slower than CCA for the simulation of anisotropic etchants, it can also be up to 10 times faster than CCA for isotropic etchants. In addition, we present a parallel, GPU-based implementation (gSFM) and compare it to an optimized, multicore CPU version (cSFM), demonstrating that the SFM algorithm can be successfully parallelized and the simulation times consequently reduced while keeping the accuracy of the simulations. Although modern multicore CPUs provide an acceptable option, the massively parallel architecture of modern GPUs is more suitable, as reflected by computational times for gSFM up to 7.4 times faster than for cSFM.
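
    As a point of reference for the LS approach discussed above, the following minimal dense-grid sketch evolves a 2-D front under phi_t + F |grad phi| = 0 with forward Euler and first-order upwind differences; the paper's Sparse Field Method additionally restricts updates to a narrow band of cells around the zero level set, which this toy version omits, and the constant speed F stands in for an orientation-dependent etch rate:

        import numpy as np

        def evolve_level_set(phi, speed, dt, h, steps):
            """Advance phi_t + speed * |grad phi| = 0 (etch front = zero level set)
            with forward Euler and a Godunov-style upwind gradient (speed >= 0)."""
            for _ in range(steps):
                # One-sided differences in each direction (periodic at the borders).
                dxm = (phi - np.roll(phi, 1, axis=0)) / h
                dxp = (np.roll(phi, -1, axis=0) - phi) / h
                dym = (phi - np.roll(phi, 1, axis=1)) / h
                dyp = (np.roll(phi, -1, axis=1) - phi) / h
                # Upwind gradient magnitude for an expanding front.
                grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                               + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
                phi = phi - dt * speed * grad
            return phi

        # Toy usage: a circular etched cavity expanding isotropically.
        n, h = 128, 1.0 / 128
        Y, X = np.mgrid[0:n, 0:n] * h
        phi = np.sqrt((X - 0.5)**2 + (Y - 0.5)**2) - 0.1  # signed distance to a circle
        phi = evolve_level_set(phi, speed=1.0, dt=0.5 * h, steps=50)
        print("front radius ~", 0.1 + 1.0 * 0.5 * h * 50)  # expected ~0.295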

  3. SU-F-BRF-09: A Non-Rigid Point Matching Method for Accurate Bladder Dose Summation in Cervical Cancer HDR Brachytherapy

    SciTech Connect

    Chen, H; Zhen, X; Zhou, L; Zhong, Z; Pompos, A; Yan, H; Jiang, S; Gu, X

    2014-06-15

    Purpose: To propose and validate a deformable point matching scheme for surface deformation to facilitate accurate bladder dose summation for fractionated HDR cervical cancer treatment. Method: A deformable point matching scheme based on the thin plate spline robust point matching (TPS-RPM) algorithm is proposed for bladder surface registration. The surface of the bladder segmented from each fractional CT image is extracted and discretized with a triangular surface mesh. The deformation between two bladder surfaces is obtained by matching the two meshes' vertices via the TPS-RPM algorithm, and the deformation vector field (DVF) characteristic of this deformation is estimated by B-spline approximation. Numerically, the algorithm is quantitatively compared with the Demons algorithm on five clinical cervical cancer cases using several metrics: vertex-to-vertex distance (VVD), Hausdorff distance (HD), percent error (PE), and conformity index (CI). Experimentally, the algorithm is validated on a balloon phantom with 12 surface fiducial markers. The balloon is inflated with different amounts of water, and the displacement of the fiducial markers is benchmarked as ground truth to assess the accuracy of the TPS-RPM calculated DVFs. Results: In the numerical evaluation, the mean VVD is 3.7 (±2.0) mm after Demons and 1.3 (±0.9) mm after TPS-RPM. The mean HD is 14.4 mm after Demons and 5.3 mm after TPS-RPM. The mean PE is 101.7% after Demons and decreases to 18.7% after TPS-RPM. The mean CI is 0.63 after Demons and increases to 0.90 after TPS-RPM. In the phantom study, the mean Euclidean distance of the fiducials is 7.4 ± 3.0 mm and 4.2 ± 1.8 mm after Demons and TPS-RPM, respectively. Conclusions: The bladder wall deformation is more accurate using the feature-based TPS-RPM algorithm than the intensity-based Demons algorithm, indicating that TPS-RPM has the potential for accurate bladder dose deformation and dose summation for multi-fractional cervical HDR brachytherapy.
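
    Two of the surface-comparison metrics above are straightforward to reproduce. The sketch below computes a vertex-to-vertex distance (read here as the mean nearest-neighbour distance between vertex sets, which may differ from the paper's exact vertex pairing) and the symmetric Hausdorff distance, with random point clouds standing in for the registered bladder meshes:

        import numpy as np
        from scipy.spatial import cKDTree
        from scipy.spatial.distance import directed_hausdorff

        def vertex_to_vertex(moving, target):
            """Mean nearest-neighbour distance from deformed vertices to target vertices
            (one plausible reading of VVD; the paper may pair vertices differently)."""
            d, _ = cKDTree(target).query(moving)
            return d.mean()

        def hausdorff(a, b):
            """Symmetric Hausdorff distance between two vertex sets."""
            return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

        # Toy usage with two point clouds standing in for bladder surface meshes.
        rng = np.random.default_rng(1)
        a = rng.normal(size=(500, 3))
        b = a + 0.05 * rng.normal(size=(500, 3))   # slightly deformed copy
        print("VVD:", vertex_to_vertex(a, b), "HD:", hausdorff(a, b))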

  4. The Quantitative Methods Boot Camp: Teaching Quantitative Thinking and Computing Skills to Graduate Students in the Life Sciences

    PubMed Central

    Stefan, Melanie I.; Gutlerner, Johanna L.; Born, Richard T.; Springer, Michael

    2015-01-01

    The past decade has seen a rapid increase in the ability of biologists to collect large amounts of data. It is therefore vital that research biologists acquire the necessary skills during their training to visualize, analyze, and interpret such data. To begin to meet this need, we have developed a “boot camp” in quantitative methods for biology graduate students at Harvard Medical School. The goal of this short, intensive course is to enable students to use computational tools to visualize and analyze data, to strengthen their computational thinking skills, and to simulate and thus extend their intuition about the behavior of complex biological systems. The boot camp teaches basic programming using biological examples from statistics, image processing, and data analysis. This integrative approach to teaching programming and quantitative reasoning motivates students’ engagement by demonstrating the relevance of these skills to their work in life science laboratories. Students also have the opportunity to analyze their own data or explore a topic of interest in more detail. The class is taught with a mixture of short lectures, Socratic discussion, and in-class exercises. Students spend approximately 40% of their class time working through both short and long problems. A high instructor-to-student ratio allows students to get assistance or additional challenges when needed, thus enhancing the experience for students at all levels of mastery. Data collected from end-of-course surveys from the last five offerings of the course (between 2012 and 2014) show that students report high learning gains and feel that the course prepares them for solving quantitative and computational problems they will encounter in their research. We outline our course here, which, together with the course materials freely available online under a Creative Commons License, should help facilitate similar efforts by others. PMID:25880064

  5. The quantitative methods boot camp: teaching quantitative thinking and computing skills to graduate students in the life sciences.

    PubMed

    Stefan, Melanie I; Gutlerner, Johanna L; Born, Richard T; Springer, Michael

    2015-04-01

    The past decade has seen a rapid increase in the ability of biologists to collect large amounts of data. It is therefore vital that research biologists acquire the necessary