Sample records for reference-based methods

  1. Measurement of susceptibility artifacts with histogram-based reference value on magnetic resonance images according to standard ASTM F2119.

    PubMed

    Heinrich, Andreas; Teichgräber, Ulf K; Güttler, Felix V

    2015-12-01

    The standard ASTM F2119 describes a test method for measuring the size of a susceptibility artifact, using a passive implant as an example. A pixel in an image is considered part of an image artifact if its intensity changes by at least 30% in the presence of a test object, compared with a reference image in which the test object is absent (reference value). The aim of this paper is to simplify and accelerate the test method using a histogram-based reference value. Four test objects were scanned parallel and perpendicular to the main magnetic field, and the largest susceptibility artifacts were measured using two methods of reference value determination (reference-image-based and histogram-based). The results of the two methods were compared using the Mann-Whitney U-test. The difference between the two reference values was 42.35 ± 23.66, and the difference in artifact size was 0.64 ± 0.69 mm. The artifact sizes of the two methods did not differ significantly; the p-values of the Mann-Whitney U-test were between 0.521 and 0.710. A standard-conformant method for rapid, objective, and reproducible evaluation of susceptibility artifacts could be implemented, and the result of the histogram-based method does not differ significantly from that of the ASTM-conformant method.
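
    As a minimal illustration of the 30% criterion described above, the following NumPy sketch flags artifact pixels. The function name and the option of passing a scalar histogram-based reference value are illustrative assumptions, not the authors' code.

        import numpy as np

        def artifact_mask(image, reference_value, threshold=0.30):
            # A pixel belongs to the artifact if its intensity deviates from
            # the reference by at least `threshold` (30% per ASTM F2119).
            # `reference_value` may be a scalar (histogram-based) or an array
            # of the same shape as `image` (reference-image-based).
            deviation = np.abs(image.astype(float) - reference_value)
            return deviation >= threshold * np.abs(reference_value)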

  2. Coordinates and intervals in graph-based reference genomes.

    PubMed

    Rand, Knut D; Grytten, Ivar; Nederbragt, Alexander J; Storvik, Geir O; Glad, Ingrid K; Sandve, Geir K

    2017-05-18

    It has been proposed that future reference genomes should be graph structures in order to better represent the sequence diversity present in a species. However, there is currently no standard method to represent genomic intervals, such as the positions of genes or transcription factor binding sites, on graph-based reference genomes. We formalize offset-based coordinate systems on graph-based reference genomes and introduce methods for representing intervals on these reference structures. We show the advantage of our methods by representing genes on a graph-based representation of the newest assembly of the human genome (GRCh38) and its alternative loci for regions that are highly variable. More complex reference genomes, containing alternative loci, require methods to represent genomic data on these structures. Our proposed notation for genomic intervals makes it possible to fully utilize the alternative loci of the GRCh38 assembly and potential future graph-based reference genomes. We have made a Python package for representing such intervals on offset-based coordinate systems, available at https://github.com/uio-cels/offsetbasedgraph. An interactive web tool using this Python package to visualize genes on a graph created from GRCh38 is available at https://github.com/uio-cels/genomicgraphcoords.
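
    A sketch of how an interval on an offset-based graph coordinate system might be represented in Python. The class and field names are illustrative assumptions and do not reproduce the offsetbasedgraph package's API.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class GraphInterval:
            # Start and end are each a (node, offset) pair; `path` lists the
            # nodes the interval traverses, which matters when a gene crosses
            # into an alternative locus.
            start_node: str
            start_offset: int
            end_node: str
            end_offset: int
            path: List[str]

        # A hypothetical gene that starts on the primary path and ends on an
        # alternative locus of the graph:
        gene = GraphInterval("chr1_main_3", 1500, "chr1_alt_1", 220,
                             path=["chr1_main_3", "chr1_alt_1"])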

  3. Defining the Reference Condition for Wadeable Streams in the Sand Hills Subdivision of the Southeastern Plains Ecoregion, USA

    NASA Astrophysics Data System (ADS)

    Kosnicki, Ely; Sefick, Stephen A.; Paller, Michael H.; Jarrell, Miller S.; Prusha, Blair A.; Sterrett, Sean C.; Tuberville, Tracey D.; Feminella, Jack W.

    2014-09-01

    The Sand Hills subdivision of the Southeastern Plains ecoregion has been impacted by historical land uses over the past two centuries and, with the additive effects of contemporary land use, determining the reference condition for streams in this region is a challenge. We identified reference condition based on the combined use of 3 independent selection methods. Method 1 involved use of a multivariate disturbance gradient derived from several stressors, method 2 was based on variation in channel morphology, and method 3 was based on passing 6 of 7 environmental criteria. Sites selected as reference by all 3 methods were considered primary reference, whereas those selected by 2 methods or 1 method were considered secondary or tertiary reference, respectively. Sites not selected by any of the methods were considered non-reference. In addition, best professional judgment (BPJ) was used to exclude some sites from any reference class, and comparisons were made to examine the utility of BPJ. Non-metric multidimensional scaling indicated that use of BPJ may help designate non-reference sites when unidentified stressors are present. The macroinvertebrate community measures (Ephemeroptera, Plecoptera, and Trichoptera richness and the North Carolina Biotic Index) showed no differences between primary and secondary reference sites when BPJ was ignored. However, there was no significant difference among primary, secondary, and tertiary reference sites when BPJ was used. We underscore the importance of classifying reference conditions, especially in regions that have endured significant anthropogenic activity. We suggest that the use of secondary reference sites may enable construction of models that target a broader set of management interests.

  4. A reference estimator based on composite sensor pattern noise for source device identification

    NASA Astrophysics Data System (ADS)

    Li, Ruizhe; Li, Chang-Tsun; Guan, Yu

    2014-02-01

    It has been shown that Sensor Pattern Noise (SPN) can serve as an imaging device fingerprint for source camera identification. Reference SPN estimation is a very important procedure within this framework. Most previous works built the reference SPN by averaging the SPNs extracted from 50 images of blue sky. However, this method can be problematic. Firstly, in practice we may face the problem of source camera identification in the absence of the imaging cameras and reference SPNs, which means only natural images with scene details are available for reference SPN estimation rather than blue-sky images. This is challenging because the reference SPN can be severely contaminated by image content. Secondly, the number of available reference images is sometimes too small for existing methods to estimate a reliable reference SPN. In fact, existing methods lack consideration of the number of available reference images, as they were designed for datasets with abundant images. To deal with these problems, a novel reference estimator is proposed in this work. Experimental results show that the proposed method achieves better performance than methods based on the averaged reference SPN, especially when few reference images are used.

  5. Metabarcoding of marine nematodes – evaluation of reference datasets used in tree-based taxonomy assignment approach

    PubMed Central

    2016-01-01

    Background Metabarcoding is becoming a common tool used to assess and compare the diversity of organisms in environmental samples. Identification of OTUs is one of the critical steps in the process, and several taxonomy assignment methods have been proposed to accomplish this task. This publication evaluates the quality of reference datasets, along with several alignment and phylogeny inference methods used in one of the taxonomy assignment methods, the tree-based approach. This approach assigns anonymous OTUs to taxonomic categories based on the relative placements of OTUs and reference sequences on the cladogram and the support that these placements receive. New information In the tree-based taxonomy assignment approach, reliable identification of anonymous OTUs is based on their placement in monophyletic and highly supported clades together with identified reference taxa; it therefore requires a high-quality reference dataset. The resolution of phylogenetic trees is strongly affected by the presence of erroneous sequences, as well as by the alignment and phylogeny inference methods used in the process. Two preparation steps are therefore essential for the successful application of the tree-based approach. First, curated collections of genetic information do include erroneous sequences, which have a detrimental effect on the resolution of the cladograms used in the tree-based approach; they must be identified and excluded from the reference dataset beforehand. Second, various combinations of multiple sequence alignment and phylogeny inference methods produce cladograms with different topologies and bootstrap support; these combinations need to be tested to determine the one that gives the highest resolution for the particular reference dataset. Completing these preparation steps is expected to decrease the number of unassigned OTUs and thus improve the results of the tree-based taxonomy assignment approach. PMID:27932919

  6. Metabarcoding of marine nematodes - evaluation of reference datasets used in tree-based taxonomy assignment approach.

    PubMed

    Holovachov, Oleksandr

    2016-01-01

    Metabarcoding is becoming a common tool used to assess and compare the diversity of organisms in environmental samples. Identification of OTUs is one of the critical steps in the process, and several taxonomy assignment methods have been proposed to accomplish this task. This publication evaluates the quality of reference datasets, along with several alignment and phylogeny inference methods used in one of the taxonomy assignment methods, the tree-based approach. This approach assigns anonymous OTUs to taxonomic categories based on the relative placements of OTUs and reference sequences on the cladogram and the support that these placements receive. In the tree-based taxonomy assignment approach, reliable identification of anonymous OTUs is based on their placement in monophyletic and highly supported clades together with identified reference taxa; it therefore requires a high-quality reference dataset. The resolution of phylogenetic trees is strongly affected by the presence of erroneous sequences, as well as by the alignment and phylogeny inference methods used in the process. Two preparation steps are therefore essential for the successful application of the tree-based approach. First, curated collections of genetic information do include erroneous sequences, which have a detrimental effect on the resolution of the cladograms used in the tree-based approach; they must be identified and excluded from the reference dataset beforehand. Second, various combinations of multiple sequence alignment and phylogeny inference methods produce cladograms with different topologies and bootstrap support; these combinations need to be tested to determine the one that gives the highest resolution for the particular reference dataset. Completing these preparation steps is expected to decrease the number of unassigned OTUs and thus improve the results of the tree-based taxonomy assignment approach.

  7. DNA-Based Methods in the Immunohematology Reference Laboratory

    PubMed Central

    Denomme, Gregory A

    2010-01-01

    Although hemagglutination serves the immunohematology reference laboratory well, when used alone it has limited capability to resolve complex problems. This overview discusses how molecular approaches can be used in the immunohematology reference laboratory. In order to apply molecular approaches to immunohematology, knowledge of genes, DNA-based methods, and the molecular bases of blood groups is required. When applied correctly, DNA-based methods can predict blood groups to resolve ABO/Rh discrepancies, identify variant alleles, and screen donors for antigen-negative units. DNA-based testing in immunohematology is a valuable tool used to resolve blood group incompatibilities and to support patients in their transfusion needs. PMID:21257350

  8. Valid analytical performance specifications for combined analytical bias and imprecision for the use of common reference intervals.

    PubMed

    Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György

    2018-01-01

    Background Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimate of the analytical performance specifications, i.e. the quality required to allow sharing of common reference intervals, is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used to define analytical performance specifications as the maximum combination of analytical bias and imprecision that still permits sharing common reference intervals; deriving these specifications is the aim of this investigation. Methods Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision, and Method 2 is based on the Microsoft Excel formula NORMINV, using the fractional probability of reference individuals outside each limit and the Gaussian mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results, with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
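
    One plausible Python rendering of the Method 2 calculation, using scipy.stats.norm as the analogue of Excel's NORMINV/NORMSDIST: results are modeled as Gaussian in normalized units with a given bias and imprecision, and the fraction of reference individuals falling outside the central 95% limits is computed. Reading the 4.4% above as the constant value of this fraction is our assumption.

        from scipy.stats import norm

        def fraction_outside(bias, imprecision):
            # Fraction of a Gaussian population N(bias, imprecision^2), in
            # normalized z-score units, falling outside the central 95%
            # reference limits at -1.96 and +1.96.
            lower = norm.cdf((-1.96 - bias) / imprecision)
            upper = 1.0 - norm.cdf((1.96 - bias) / imprecision)
            return lower + upper

        print(fraction_outside(bias=0.0, imprecision=1.0))   # ~0.05 by construction
        print(fraction_outside(bias=0.25, imprecision=1.1))  # bias and imprecision inflate it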

  9. The Kjeldahl method as a primary reference procedure for total protein in certified reference materials used in clinical chemistry. I. A review of Kjeldahl methods adopted by laboratory medicine.

    PubMed

    Chromý, Vratislav; Vinklárková, Bára; Šprongl, Luděk; Bittová, Miroslava

    2015-01-01

    We found previously that albumin-calibrated total protein in certified reference materials causes unacceptable positive bias in the analysis of human sera. The simplest way to cure this defect is the use of human-based serum/plasma standards calibrated by the Kjeldahl method. Such standards, commutable with serum samples, will compensate for the bias caused by lipids and bilirubin in most human sera. To find a suitable primary reference procedure for total protein in reference materials, we reviewed Kjeldahl methods adopted by laboratory medicine. We found two methods recommended for total protein in human samples: an indirect analysis based on total Kjeldahl nitrogen corrected for its nonprotein nitrogen, and a direct analysis made on isolated protein precipitates. The methods found will be assessed in a subsequent article.

  10. Examination of a Method to Determine the Reference Region for Calculating the Specific Binding Ratio in Dopamine Transporter Imaging.

    PubMed

    Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu

    2017-01-01

    The specific binding ratio (SBR), first reported by Tossici-Bolt et al., is a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration of the striatum to the non-specific binding concentration of the whole brain other than the striatum. The non-specific binding concentration is calculated from a region of interest (ROI) set 20 mm inside the outer contour, which is defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but with a 50% threshold we could not always define the ROI of the non-specific binding concentration (the reference region) and calculate the SBR appropriately. We therefore sought a new method for determining the reference region when calculating the SBR. Using data from 20 patients who had undergone DAT imaging in our hospital, we calculated the non-specific binding concentration by the following methods: the threshold defining the reference region was fixed at specific values (the fixing method), or the reference region was visually optimized by an examiner at every examination (the visual optimization method). First, we assessed the reference region of each method visually; afterward, we quantitatively compared the SBRs calculated with each method. In the visual assessment, the scores of the fixing method at 30% and of the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as a baseline (the standard method). The SBR values showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.

  11. Fast lossless compression via cascading Bloom filters

    PubMed Central

    2014-01-01

    Background Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. Results We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Conclusions Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space slightly. PMID:25252952

  12. Fast lossless compression via cascading Bloom filters.

    PubMed

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space slightly.
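
    A toy sketch of the encode/decode idea shared by both records above, assuming a simple Bloom filter built on SHA-256; BARCODE's actual data structures, sizes and hash functions differ.

        import hashlib

        class BloomFilter:
            def __init__(self, size, num_hashes):
                self.size, self.num_hashes = size, num_hashes
                self.bits = bytearray(size)

            def _positions(self, item):
                for i in range(self.num_hashes):
                    digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                    yield int.from_bytes(digest[:8], "big") % self.size

            def add(self, item):
                for pos in self._positions(item):
                    self.bits[pos] = 1

            def __contains__(self, item):
                return all(self.bits[pos] for pos in self._positions(item))

        # Encode: hash every read into the filter; no alignment is performed.
        reads = ["ACGTACGT", "GGGTACCA"]
        bf = BloomFilter(size=10_000, num_hashes=3)
        for r in reads:
            bf.add(r)

        # Decode: query read-length windows of the reference genome.
        reference, k = "TTACGTACGTCCGGGTACCATT", 8
        recovered = {reference[i:i + k] for i in range(len(reference) - k + 1)
                     if reference[i:i + k] in bf}
        # False positives of a single filter are what the cascade of further
        # Bloom filters is there to eliminate.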

  13. An aggregate method to calibrate the reference point of cumulative prospect theory-based route choice model for urban transit network

    NASA Astrophysics Data System (ADS)

    Zhang, Yufeng; Long, Man; Luo, Sida; Bao, Yu; Shen, Hanxia

    2015-12-01

    The transit route choice model is a key technology for public transit systems planning and management. Traditional route choice models are mostly based on expected utility theory, which has an evident shortcoming: it cannot accurately portray travelers' subjective route choice behavior because their risk preferences are not taken into consideration. Cumulative prospect theory (CPT) can be used to describe travelers' decision-making process under uncertainty of transit supply and the risk preferences of multiple types of travelers. The method used to calibrate the reference point, a key parameter of the CPT-based transit route choice model, determines the precision of the model to a great extent. In this paper, a new method is put forward to obtain the value of the reference point that combines theoretical calculation with field investigation results. Comparing the proposed method with the traditional method shows that the new method improves the quality of the CPT-based model by more accurately simulating travelers' route choice behavior, based on a transit trip investigation from Nanjing City, China. The proposed method is of great significance for sound transit planning and management, and to some extent remedies the defect of obtaining the reference point solely through qualitative analysis.
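
    For readers unfamiliar with CPT, a sketch of the standard value function evaluated relative to a reference point. The parameters are the classic Tversky-Kahneman estimates, not the values calibrated in this paper, and the outcome scale is hypothetical.

        def cpt_value(outcome, reference_point, alpha=0.88, beta=0.88, lam=2.25):
            # Gains above the reference point are valued by a concave power
            # function; losses below it by a steeper convex one (loss aversion).
            x = outcome - reference_point
            if x >= 0:
                return x ** alpha
            return -lam * (-x) ** beta

        print(cpt_value(35, 30))  # a gain of 5 over the reference -> ~4.12
        print(cpt_value(25, 30))  # a loss of 5 hurts roughly twice as much -> ~-9.27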

  14. Systematic review of serum steroid reference intervals developed using mass spectrometry.

    PubMed

    Tavita, Nevada; Greaves, Ronda F

    2017-12-01

    The aim of this study was to perform a systematic review of the published literature to determine the available serum/plasma steroid reference intervals generated by mass spectrometry (MS) methods across all age groups in healthy subjects, and to suggest recommendations for achieving common MS-based reference intervals for serum steroids. The MEDLINE, EMBASE and PubMed databases were used to conduct a comprehensive search for English-language, MS-based reference interval studies for serum/plasma steroids. The selection of steroids to include was based on those listed in the Royal College of Pathologists of Australasia Quality Assurance Programs, Chemical Pathology, Endocrine Program. The methodology has been registered in the PROSPERO International prospective register of systematic reviews (ID number: CRD42015029637). After accounting for duplicates, a total of 60 manuscripts were identified through the search strategy. Following critical evaluation, a total of 16 studies were selected. Of the 16 studies, 12 reported reference intervals for testosterone, 11 for 17-hydroxyprogesterone, nine for androstenedione, six for cortisol, three for progesterone, two for dihydrotestosterone, and only one each for aldosterone and dehydroepiandrosterone sulphate. No studies established MS-based reference intervals for oestradiol. As far as we are aware, this report provides the first comparison of the peer-reviewed literature on serum/plasma steroid reference intervals generated by MS-based methods. The reference intervals based on these published studies can be used to inform the process of developing common reference intervals and agreed reporting units for MS-based steroid methods.

  15. Automatic identification of the reference system based on the fourth ventricular landmarks in T1-weighted MR images.

    PubMed

    Fu, Yili; Gao, Wenpeng; Chen, Xiaoguang; Zhu, Minwei; Shen, Weigao; Wang, Shuguo

    2010-01-01

    The reference system based on the fourth ventricular landmarks (including the fastigial point and the ventricular floor plane) is used in medical image analysis of the brain stem. The objective of this study was to develop a rapid, robust, and accurate method for the automatic identification of this reference system on T1-weighted magnetic resonance images. The fully automated method developed in this study consisted of four stages: preprocessing of the data set, expectation-maximization-based extraction of the fourth ventricle in the region of interest, a coarse-to-fine strategy for identifying the fastigial point, and localization of the base point. The method was evaluated qualitatively on 27 BrainWeb data sets and quantitatively on 18 Internet Brain Segmentation Repository data sets and 30 clinical scans. The results of the qualitative evaluation indicated that the method was robust to rotation, landmark variation, noise, and inhomogeneity. The results of the quantitative evaluation indicated that the method was able to identify the reference system with an accuracy of 0.7 +/- 0.2 mm for the fastigial point and 1.1 +/- 0.3 mm for the base point. It took <6 seconds for the method to identify the related landmarks on a personal computer with an Intel Core 2 6300 processor and 2 GB of random-access memory. The proposed method for the automatic identification of the reference system based on the fourth ventricular landmarks was shown to be rapid, robust, and accurate. The method has potential utility in image registration and computer-aided surgery.

  16. Absolute Radiometric Calibration of Narrow-Swath Imaging Sensors with Reference to Non-Coincident Wide-Swath Sensors

    NASA Technical Reports Server (NTRS)

    McCorkel, Joel; Thome, Kurtis; Lockwood, Ronald

    2012-01-01

    An inter-calibration method is developed to provide absolute radiometric calibration of narrow-swath imaging sensors with reference to non-coincident wide-swath sensors. The method predicts at-sensor radiance using non-coincident imagery from the reference sensor and knowledge of the spectral reflectance of the test site. The imagery of the reference sensor is restricted to acquisitions that provide similar view and solar illumination geometry, to reduce uncertainties due to directional reflectance effects. The spectral reflectance of the test site is found with a simple iterative radiative transfer method using radiance values of a well-understood wide-swath sensor and spectral shape information based on historical ground-based measurements. At-sensor radiance is calculated for the narrow-swath sensor using this spectral reflectance and atmospheric parameters that are also based on historical in situ measurements. Results of the inter-calibration method agree at the 2-5 percent level in most spectral regions with the vicarious calibration technique relying on coincident ground-based measurements, referred to as the reflectance-based approach. While the variability of the inter-calibration method based on non-coincident image pairs is significantly larger, the results are consistent with techniques relying on in situ measurements. The method is also insensitive to spectral differences between the sensors because it transfers to surface spectral reflectance prior to predicting at-sensor radiance. The utility of this inter-calibration method is made clear by its flexibility to use image pairings with acquisition dates differing by more than 30 days, allowing frequent absolute calibration comparisons between wide- and narrow-swath sensors.

  17. Systems and methods that generate height map models for efficient three dimensional reconstruction from depth information

    DOEpatents

    Frahm, Jan-Michael; Pollefeys, Marc Andre Leon; Gallup, David Robert

    2015-12-08

    Methods of generating a three-dimensional representation of an object in a reference plane from a depth map that includes distances from a reference point to pixels in an image of the object taken from that reference point. Weights are assigned to respective voxels in a three-dimensional grid along rays extending from the reference point through the pixels in the image, based on the distances in the depth map from the reference point to the respective pixels, and a height map comprising an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.

  18. A New Dual-purpose Quality Control Dosimetry Protocol for Diagnostic Reference-level Determination in Computed Tomography.

    PubMed

    Sohrabi, Mehdi; Parsi, Masoumeh; Sina, Sedigheh

    2018-05-17

    A diagnostic reference level is an advisory dose level set by a regulatory authority in a country as an efficient criterion for protection of patients from unwanted medical exposure. In computed tomography, the direct dose measurement and data collection methods are commonly applied for determination of diagnostic reference levels. Recently, a new quality-control-based dose survey method was proposed by the authors to simplify the diagnostic reference-level determination using a retrospective quality control database usually available at a regulatory authority in a country. In line with such a development, a prospective dual-purpose quality control dosimetry protocol is proposed for determination of diagnostic reference levels in a country, which can be simply applied by quality control service providers. This new proposed method was applied to five computed tomography scanners in Shiraz, Iran, and diagnostic reference levels for head, abdomen/pelvis, sinus, chest, and lumbar spine examinations were determined. The results were compared to those obtained by the data collection and quality-control-based dose survey methods, carried out in parallel in this study, and were found to agree well within approximately 6%. This is highly acceptable for quality-control-based methods according to International Atomic Energy Agency tolerance levels (±20%).
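
    Diagnostic reference levels are conventionally set at the 75th percentile of the surveyed dose distribution; the following one-line NumPy illustration uses hypothetical CTDIvol values.

        import numpy as np

        # Hypothetical median CTDIvol (mGy) for a head protocol on five scanners.
        ctdi_vol = np.array([42.0, 55.3, 48.7, 61.2, 50.5])

        drl = np.percentile(ctdi_vol, 75)  # DRL as the 75th percentile
        print(f"Head CT DRL: {drl:.1f} mGy")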

  19. Reference-based source separation method for identification of brain regions involved in a reference state from intracerebral EEG

    PubMed Central

    Samadi, Samareh; Amini, Ladan; Cosandier-Rimélé, Delphine; Soltanian-Zadeh, Hamid; Jutten, Christian

    2013-01-01

    In this paper, we present a fast method to extract the sources related to the interictal epileptiform state. The method is based on generalized eigenvalue decomposition using two correlation matrices computed during: 1) periods including interictal epileptiform discharges (IEDs), used as a reference activation model, and 2) periods excluding IEDs or abnormal physiological signals, used as background activity. After extracting the sources most similar to the reference or IED state, IED regions are estimated using multiobjective optimization. The method is evaluated using both realistic simulated data and actual intracerebral electroencephalography recordings of patients suffering from focal epilepsy. These patients were seizure-free after resective surgery. Quantitative comparisons of the proposed IED regions with the ictal onset zones visually inspected by the epileptologist, and with another method for identifying IED regions, reveal good performance. PMID:23428609
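
    The generalized eigenvalue step can be sketched with scipy.linalg.eigh, which solves R_ied w = lambda R_bg w; filters with the largest lambda maximize the ratio of IED power to background power. Variable names and the random stand-in data are illustrative.

        import numpy as np
        from scipy.linalg import eigh

        def ied_spatial_filters(x_ied, x_bg, n_filters=3):
            # x_ied, x_bg: (channels x samples) iEEG segments recorded during
            # IED and background periods, respectively.
            r_ied, r_bg = np.cov(x_ied), np.cov(x_bg)
            # eigh(A, B) solves A w = lambda B w; eigenvalues come back ascending.
            _, vecs = eigh(r_ied, r_bg)
            return vecs[:, ::-1][:, :n_filters]

        rng = np.random.default_rng(0)
        x_ied = rng.standard_normal((16, 5000))  # stand-in for real recordings
        x_bg = rng.standard_normal((16, 5000))
        w = ied_spatial_filters(x_ied, x_bg)
        sources = w.T @ x_ied                    # extracted IED-like sources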

  20. [The water content reference material of water saturated octanol].

    PubMed

    Wang, Haifeng; Ma, Kang; Zhang, Wei; Li, Zhanyuan

    2011-03-01

    The national standards for biofuels specify technical specifications and analytical methods. A water content certified reference material based on water-saturated octanol was developed to satisfy the needs of instrument calibration and method validation and to assure the accuracy and consistency of results in water content measurements of biofuels. Three analytical methods based on different principles were employed to certify the water content of the reference material: Karl Fischer coulometric titration, Karl Fischer volumetric titration, and quantitative nuclear magnetic resonance. Consistency between the coulometric and volumetric titrations was achieved through improvement of the methods, and the accuracy of the certified result was improved by the introduction of quantitative nuclear magnetic resonance as a new method. The certified value of the reference material is 4.76% with an expanded uncertainty of 0.09%.

  21. Method and apparatus for large motor control

    DOEpatents

    Rose, Chris R [Santa Fe, NM]; Nelson, Ronald O [White Rock, NM]

    2003-08-12

    Apparatus and method for a digital signal processing approach to controlling the speed and phase of a motor. The method involves inputting a reference signal having a frequency and relative phase indicative of a time-based signal; modifying the reference signal to introduce a slew-rate-limited portion into each cycle of the reference signal; inputting a feedback signal having a frequency and relative phase indicative of the operation of the motor; modifying the feedback signal to introduce a slew-rate-limited portion into each cycle of the feedback signal; analyzing the modified reference and feedback signals to determine their frequencies and the relative phase between them; and outputting control signals to the motor for adjusting its speed and phase based on the frequency and relative-phase determinations.
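
    The slew-rate-limiting step lends itself to a small sketch: on each sample, the modified signal may move toward the raw input by at most a fixed step, turning sharp edges into ramps that carry timing information. This illustrates the general technique only, not the patented implementation.

        def slew_rate_limit(samples, max_step):
            # Limit the per-sample change of a signal to +/- max_step.
            out = [samples[0]]
            for s in samples[1:]:
                delta = max(-max_step, min(max_step, s - out[-1]))
                out.append(out[-1] + delta)
            return out

        square_wave = [0.0] * 5 + [1.0] * 5 + [0.0] * 5
        print(slew_rate_limit(square_wave, max_step=0.25))  # edges become ramps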

  22. Using pseudoalignment and base quality to accurately quantify microbial community composition

    PubMed Central

    Novembre, John

    2018-01-01

    Pooled DNA from multiple unknown organisms arises in a variety of contexts, for example microbial samples from ecological or human health research. Determining the composition of pooled samples can be difficult, especially at the scale of modern sequencing data and reference databases. Here we propose a novel method for taxonomic profiling in pooled DNA that combines the speed and low-memory requirements of k-mer-based pseudoalignment with a likelihood framework that uses base quality information to better resolve multiply mapped reads. We apply the method to the problem of classifying 16S rRNA reads using a reference database of known organisms, a common challenge in microbiome research. Using simulations, we show the method is accurate across a variety of read lengths, with different length reference sequences, at different sample depths, and when samples contain reads originating from organisms absent from the reference. We also assess performance in real 16S data, where we reanalyze previous genetic association data to show our method discovers a larger number of quantitative trait associations than other widely used methods. We implement our method in the software Karp, for k-mer based analysis of read pools, to provide a novel combination of speed and accuracy that is uniquely suited for enhancing discoveries in microbial studies. PMID:29659582
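
    The way base qualities enter such a likelihood can be sketched with the standard Phred construction, in which a quality score Q implies a per-base error probability e = 10^(-Q/10); this is the textbook model, not necessarily Karp's exact likelihood.

        def read_likelihood(read, ref, quals):
            # P(read | candidate reference); mismatches are assumed equally
            # likely to be any of the three other bases.
            p = 1.0
            for base, ref_base, q in zip(read, ref, quals):
                e = 10 ** (-q / 10)
                p *= (1 - e) if base == ref_base else e / 3
            return p

        print(read_likelihood("ACGT", "ACGT", [30, 30, 30, 30]))  # ~0.996
        print(read_likelihood("ACGA", "ACGT", [30, 30, 30, 10]))  # mismatch at a low-quality base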

  23. Determination of the purity of pharmaceutical reference materials by 1H NMR using the standardless PULCON methodology.

    PubMed

    Monakhova, Yulia B; Kohl-Himmelseher, Matthias; Kuballa, Thomas; Lachenmeier, Dirk W

    2014-11-01

    A fast and reliable nuclear magnetic resonance spectroscopic method for the quantitative determination (qNMR) of targeted molecules in reference materials has been established using the ERETIC2 methodology (electronic reference to access in vivo concentrations) based on the PULCON principle (pulse length based concentration determination). The developed approach was validated for the analysis of pharmaceutical samples in the context of official medicines control, including ibandronic acid, amantadine, ambroxol and lercanidipine. The PULCON recoveries were above 94.3%, and the coefficients of variation (CVs) obtained by quantification of different targeted resonances ranged between 0.7% and 2.8%, demonstrating that the qNMR method is a precise tool for the rapid quantification (approximately 15 min) of reference materials and medicinal products. Generally, the values were within the specifications (certified values) provided by the manufacturers. The results were in agreement with NMR quantification using an internal standard and with validated reference HPLC analysis. The PULCON method was found to be a practical alternative with competitive precision and accuracy to the classical internal reference method, and it proved applicable to different solvent conditions. The method can be recommended for routine use in medicines control laboratories, especially when the availability and costs of reference compounds are problematic.
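
    In simplified form, PULCON rests on the principle of reciprocity: the signal per proton scales inversely with the length of the 90° pulse, so a separately measured reference spectrum of known concentration can calibrate the target. The sketch below omits receiver-gain, scan-number and temperature corrections that a real implementation must include; all numbers are hypothetical.

        def pulcon_concentration(c_ref, integral_target, integral_ref,
                                 pulse90_target, pulse90_ref,
                                 protons_target, protons_ref):
            # Per-proton signal ratio, corrected by the ratio of 90-degree
            # pulse lengths (reciprocity), times the reference concentration.
            signal_ratio = ((integral_target / protons_target) /
                            (integral_ref / protons_ref))
            return c_ref * signal_ratio * (pulse90_target / pulse90_ref)

        c = pulcon_concentration(c_ref=10.0, integral_target=2.4e6,
                                 integral_ref=1.2e6, pulse90_target=10.5,
                                 pulse90_ref=10.0, protons_target=2,
                                 protons_ref=1)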

  24. Estimating clinical chemistry reference values based on an existing data set of unselected animals.

    PubMed

    Dimauro, Corrado; Bonelli, Piero; Nicolussi, Paola; Rassu, Salvatore P G; Cappio-Borlino, Aldo; Pulina, Giuseppe

    2008-11-01

    In an attempt to standardise the determination of biological reference values, the International Federation of Clinical Chemistry (IFCC) has published a series of recommendations on developing reference intervals. The IFCC recommends the use of an a priori sampling of at least 120 healthy individuals. However, such a high number of samples and laboratory analysis is expensive, time-consuming and not always feasible, especially in veterinary medicine. In this paper, an alternative (a posteriori) method is described and is used to determine reference intervals for biochemical parameters of farm animals using an existing laboratory data set. The method used was based on the detection and removal of outliers to obtain a large sample of animals likely to be healthy from the existing data set. This allowed the estimation of reliable reference intervals for biochemical parameters in Sarda dairy sheep. This method may also be useful for the determination of reference intervals for different species, ages and gender.
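
    A minimal sketch of the a posteriori idea, with Tukey fences standing in for the paper's outlier-detection procedure (which may differ) and non-parametric percentiles for the final interval; the simulated data are hypothetical.

        import numpy as np

        def a_posteriori_reference_interval(values, k=1.5, max_rounds=10):
            # Iteratively strip outliers from an unselected data set, then
            # take the central 95% of what remains as the reference interval.
            values = np.asarray(values, dtype=float)
            for _ in range(max_rounds):
                q1, q3 = np.percentile(values, [25, 75])
                lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
                kept = values[(values >= lo) & (values <= hi)]
                if len(kept) == len(values):
                    break
                values = kept
            return np.percentile(values, [2.5, 97.5])

        rng = np.random.default_rng(1)
        data = np.concatenate([rng.normal(70, 5, 500),    # mostly healthy animals
                               rng.normal(110, 20, 40)])  # a diseased subgroup
        print(a_posteriori_reference_interval(data))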

  25. Adapt-Mix: learning local genetic correlation structure improves summary statistics-based analyses

    PubMed Central

    Park, Danny S.; Brown, Brielin; Eng, Celeste; Huntsman, Scott; Hu, Donglei; Torgerson, Dara G.; Burchard, Esteban G.; Zaitlen, Noah

    2015-01-01

    Motivation: Approaches to identifying new risk loci, training risk prediction models, imputing untyped variants and fine-mapping causal variants from summary statistics of genome-wide association studies are playing an increasingly important role in the human genetics community. Current summary statistics-based methods rely on global ‘best guess’ reference panels to model the genetic correlation structure of the dataset being studied. This approach, especially in admixed populations, has the potential to produce misleading results, ignores variation in local structure and is not feasible when appropriate reference panels are missing or small. Here, we develop a method, Adapt-Mix, that combines information across all available reference panels to produce estimates of local genetic correlation structure for summary statistics-based methods in arbitrary populations. Results: We applied Adapt-Mix to estimate the genetic correlation structure of both admixed and non-admixed individuals using simulated and real data. We evaluated our method by measuring the performance of two summary statistics-based methods: imputation and joint-testing. When using our method as opposed to the current standard of ‘best guess’ reference panels, we observed a 28% decrease in mean-squared error for imputation and a 73.7% decrease in mean-squared error for joint-testing. Availability and implementation: Our method is publicly available in a software package called ADAPT-Mix available at https://github.com/dpark27/adapt_mix. Contact: noah.zaitlen@ucsf.edu PMID:26072481

  26. Reference interval estimation: Methodological comparison using extensive simulations and empirical data.

    PubMed

    Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S

    2017-12-01

    To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed the best for most scenarios. The hierarchy of the performances of the three methods was only impacted by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain the most optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision.
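
    The parametric and non-parametric estimators compared in the study reduce to a few lines each; the robust estimator is omitted for brevity, and the simulated data are hypothetical.

        import numpy as np

        def parametric_ri(x):
            # Gaussian reference interval: mean +/- 1.96 standard deviations.
            x = np.asarray(x, dtype=float)
            half = 1.96 * x.std(ddof=1)
            return x.mean() - half, x.mean() + half

        def nonparametric_ri(x):
            # Rank-based interval: the central 95% of the observations.
            return tuple(np.percentile(x, [2.5, 97.5]))

        rng = np.random.default_rng(2)
        gaussian = rng.normal(5.0, 0.8, 120)   # 120 = IFCC minimum sample size
        skewed = rng.lognormal(1.0, 0.5, 120)
        print(parametric_ri(gaussian), nonparametric_ri(gaussian))  # agree
        print(parametric_ri(skewed), nonparametric_ri(skewed))      # diverge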

  27. Validating fatty acid intake as estimated by an FFQ: how does the 24 h recall perform as reference method compared with the duplicate portion?

    PubMed

    Trijsburg, Laura; de Vries, Jeanne Hm; Hollman, Peter Ch; Hulshof, Paul Jm; van 't Veer, Pieter; Boshuizen, Hendriek C; Geelen, Anouk

    2018-05-08

    To compare the performance of the commonly used 24 h recall (24hR) with the more distinctive duplicate portion (DP) as the reference method for validation of fatty acid intake estimated with an FFQ. Intakes of SFA, MUFA, n-3 fatty acids and linoleic acid (LA) were estimated by chemical analysis of two DP and by, on average, five 24hR and two FFQ. Plasma n-3 fatty acids and LA were used to objectively compare the ranking of individuals based on the DP and the 24hR. Multivariate measurement error models were used to estimate validity coefficients and attenuation factors for the FFQ with the DP and the 24hR as reference methods. Wageningen, the Netherlands. Ninety-two men and 106 women (aged 20-70 years). Validity coefficients for the fatty acid estimates by the FFQ tended to be lower when using the DP as the reference method compared with the 24hR. Attenuation factors for the FFQ tended to be slightly higher based on the DP than those based on the 24hR as the reference method. Furthermore, when using plasma fatty acids as reference, the DP showed comparable to slightly better ranking of participants according to their intake of n-3 fatty acids (0·33) and n-3:LA (0·34) than the 24hR (0·22 and 0·24, respectively). The 24hR gives only slightly different results compared with the distinctive but less feasible DP; therefore, use of the 24hR seems appropriate as the reference method for FFQ validation of fatty acid intake.
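
    In the simplest classical measurement-error model, the attenuation factor is the slope from regressing the reference measurement on the FFQ, and the validity coefficient is the correlation of the FFQ with true intake; the multivariate models used in the paper generalize this. A toy simulation in which the truth is known, with all quantities hypothetical:

        import numpy as np

        rng = np.random.default_rng(3)
        true_intake = rng.normal(20, 5, 200)             # hypothetical g/day
        ffq = true_intake + rng.normal(2, 6, 200)        # biased, noisy FFQ
        reference = true_intake + rng.normal(0, 2, 200)  # e.g. duplicate portion

        # Attenuation factor: slope of the reference regressed on the FFQ.
        attenuation = np.cov(ffq, reference)[0, 1] / np.var(ffq, ddof=1)
        # Validity coefficient: correlation with truth (observable only in
        # simulation, which is why real studies need reference instruments).
        validity = np.corrcoef(ffq, true_intake)[0, 1]
        print(attenuation, validity)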

  28. Registration-based segmentation with articulated model from multipostural magnetic resonance images for hand bone motion animation.

    PubMed

    Chen, Hsin-Chen; Jou, I-Ming; Wang, Chien-Kuo; Su, Fong-Chin; Sun, Yung-Nien

    2010-06-01

    Quantitative measurements of hand bones, including volume, surface, orientation, and position, are essential in investigating hand kinematics. Within the measurement stage, bone segmentation is the most important step because of its direct influence on measurement accuracy. Since hand bones are small and tubular in shape, magnetic resonance (MR) images of them are prone to artifacts such as nonuniform intensity and fuzzy boundaries, so greater care is required to achieve segmentation accuracy. The authors therefore propose a novel registration-based method using an articulated hand model to segment hand bones from multipostural MR images. The proposed method consists of a model construction stage and a registration-based segmentation stage. Given a reference postural image, the first stage constructs a drivable reference model characterized by hand bone shapes, intensity patterns, and an articulated joint mechanism. Applying the reference model in the second stage, the authors first perform a model-based registration, guided by intensity distribution similarity, MR bone intensity properties, and constraints of model geometry, to align the reference model with the target bone regions of the given postural image. The resulting surface is then refined to improve the superimposition between the registered reference model and the target bone boundaries. For each subject, given a reference postural image, the proposed method can automatically segment all hand bones from all other postural images. Compared with the ground truth from two experts, the resulting surfaces had an average margin of error within 1 mm. In addition, the proposed method showed good agreement on the overlap of bone segmentations by dice similarity coefficient and demonstrated better segmentation results than conventional methods. The proposed registration-based segmentation method can successfully overcome drawbacks caused by inherent artifacts in MR images and obtain accurate segmentation results automatically. Moreover, realistic hand motion animations can be generated based on the segmentation results. The proposed method is helpful for understanding hand bone geometries in dynamic postures, which can be used to simulate 3D hand motion from multipostural MR images.

  29. Reference Intervals of Common Clinical Chemistry Analytes for Adults in Hong Kong.

    PubMed

    Lo, Y C; Armbruster, David A

    2012-04-01

    Defining reference intervals is a major challenge because of the difficulty in recruiting volunteers and testing samples from a significant number of healthy reference individuals. Historical literature citation intervals are often suboptimal because they may be based on obsolete methods and/or only a small number of poorly defined reference samples. Blood donors in Hong Kong gave permission for additional blood to be collected for reference interval testing. The samples were tested for twenty-five routine analytes on the Abbott ARCHITECT clinical chemistry system. Results were analyzed using the Rhoads EP Evaluator software program, which is based on the CLSI/IFCC C28-A guideline and defines the reference interval as the 95% central range. Method-specific reference intervals were established for twenty-five common clinical chemistry analytes for a Chinese ethnic population. The intervals were defined for each gender separately and for the genders combined, and gender-specific or combined-gender intervals were adopted as appropriate for each analyte. A large number of healthy, apparently normal blood donors from a local ethnic population were tested to provide current reference intervals for a new clinical chemistry system. The intervals were determined following an accepted international guideline. Laboratories using the same or similar methodologies may adopt these intervals if validated and deemed suitable for their patient population. Laboratories using different methodologies may be able to successfully adapt the intervals for their facilities using the reference interval transference technique, based on a method comparison study.

  30. Example-Based Image Colorization Using Locality Consistent Sparse Representation.

    PubMed

    Li, Bo; Zhao, Fuchen; Su, Zhuo; Liang, Xiangguo; Lai, Yu-Kun; Rosin, Paul L

    2017-11-01

    Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.

  31. Pool power control in remelting systems

    DOEpatents

    Williamson, Rodney L [Albuquerque, NM; Melgaard, David K [Albuquerque, NM; Beaman, Joseph J [Austin, TX

    2011-12-13

    An apparatus for and method of controlling a remelting furnace comprising adjusting current supplied to an electrode based upon a predetermined pool power reference value and adjusting the electrode drive speed based upon the predetermined pool power reference value.

  32. Evaluation of Method-Specific Extraction Variability for the Measurement of Fatty Acids in a Candidate Infant/Adult Nutritional Formula Reference Material.

    PubMed

    Place, Benjamin J

    2017-05-01

    To address community needs, the National Institute of Standards and Technology has developed a candidate Standard Reference Material (SRM) for infant/adult nutritional formula, based on milk and whey protein concentrates with isolated soy protein, designated SRM 1869 Infant/Adult Nutritional Formula. One major component of this candidate SRM is the fatty acid content. In this study, multiple extraction techniques were evaluated to quantify the fatty acids in this new material. Extraction methods based on lipid extraction followed by transesterification resulted in lower mass fraction values for all fatty acids than methods utilizing in situ transesterification followed by fatty acid methyl ester extraction (ISTE). An ISTE method, based on the identified optimal parameters, was used to determine the fatty acid content of the new infant/adult nutritional formula reference material.

  33. Threshold-driven optimization for reference-based auto-planning

    NASA Astrophysics Data System (ADS)

    Long, Troy; Chen, Mingli; Jiang, Steve; Lu, Weiguo

    2018-02-01

    We study threshold-driven optimization methodology for automatically generating a treatment plan that is motivated by a reference DVH for IMRT treatment planning. We present a framework for threshold-driven optimization for reference-based auto-planning (TORA). Commonly used voxel-based quadratic penalties have two components for penalizing under- and over-dosing of voxels: a reference dose threshold and associated penalty weight. Conventional manual- and auto-planning using such a function involves iteratively updating the preference weights while keeping the thresholds constant, an unintuitive and often inconsistent method for planning toward some reference DVH. However, driving a dose distribution by threshold values instead of preference weights can achieve similar plans with less computational effort. The proposed methodology spatially assigns reference DVH information to threshold values, and iteratively improves the quality of that assignment. The methodology effectively handles both sub-optimal and infeasible DVHs. TORA was applied to a prostate case and a liver case as a proof-of-concept. Reference DVHs were generated using a conventional voxel-based objective, then altered to be either infeasible or easy-to-achieve. TORA was able to closely recreate reference DVHs in 5-15 iterations of solving a simple convex sub-problem. TORA has the potential to be effective for auto-planning based on reference DVHs. As dose prediction and knowledge-based planning becomes more prevalent in the clinical setting, incorporating such data into the treatment planning model in a clear, efficient way will be crucial for automated planning. A threshold-focused objective tuning should be explored over conventional methods of updating preference weights for DVH-guided treatment planning.
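
    The voxel-based quadratic penalty described above can be written compactly; in threshold-driven planning the thresholds t are what gets updated between rounds, whereas conventional planning updates the weights w. The names and numbers below are illustrative, not TORA's implementation.

        import numpy as np

        def voxel_quadratic_penalty(dose, t_under, t_over, w_under, w_over):
            # Quadratic penalties for dosing below t_under or above t_over,
            # summed over voxels.
            under = np.maximum(t_under - dose, 0.0)
            over = np.maximum(dose - t_over, 0.0)
            return np.sum(w_under * under ** 2 + w_over * over ** 2)

        dose = np.array([58.0, 60.5, 61.2, 59.1])  # Gy, hypothetical voxels
        print(voxel_quadratic_penalty(dose, t_under=60.0, t_over=62.0,
                                      w_under=1.0, w_over=1.0))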

  34. Extrapolation-Based References Improve Motion and Eddy-Current Correction of High B-Value DWI Data: Application in Parkinson's Disease Dementia.

    PubMed

    Nilsson, Markus; Szczepankiewicz, Filip; van Westen, Danielle; Hansson, Oskar

    2015-01-01

    Conventional motion and eddy-current correction, where each diffusion-weighted volume is registered to a non-diffusion-weighted reference, suffers from poor accuracy for high b-value data. An alternative approach is to extrapolate reference volumes from low b-value data. We aim to compare the performance of conventional and extrapolation-based correction of diffusional kurtosis imaging (DKI) data, and to demonstrate the impact of the correction approach on group comparison studies. DKI was performed in patients with Parkinson's disease dementia (PDD) and healthy age-matched controls, using b-values of up to 2750 s/mm². The accuracy of conventional and extrapolation-based correction methods was investigated. Parameters from DTI and DKI were compared between patients and controls in the cingulum and the anterior thalamic projection tract. Conventional correction resulted in systematic registration errors for high b-value data. The extrapolation-based methods did not exhibit such errors, yielding more accurate tractography and up to 50% lower standard deviation in DKI metrics. Statistically significant differences were found between patients and controls when using the extrapolation-based motion correction that were not detected when using the conventional method. We recommend that conventional motion and eddy-current correction be abandoned for high b-value data in favour of more accurate methods using extrapolation-based references.

  35. Research on Multi-Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper studies a multi-person parallel modeling method based on persistent storage of an integrated model. The integrated model refers to a set of MDDT modeling graphics tools that can describe general aerospace embedded software from multiple angles, levels, and stages of development. Persistent storage refers to converting the in-memory data model into a storage model and converting the storage model back into an in-memory data model, where the data model is the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.
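
    As a minimal illustration of the persistence round-trip (object model to binary stream and back); the MDDT system's actual storage format is not described here, so the Node class and pickle encoding are purely hypothetical stand-ins.

        import pickle

        class Node:
            """Toy element of an in-memory object model."""
            def __init__(self, name, children=None):
                self.name = name
                self.children = children or []

        model = Node("root", [Node("a"), Node("b")])
        stream = pickle.dumps(model)       # data model -> binary storage model
        restored = pickle.loads(stream)    # storage model -> data model
        assert restored.name == "root" and len(restored.children) == 2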

  16. Reference-free ground truth metric for metal artifact evaluation in CT images.

    PubMed

    Kratz, Bärbel; Ens, Svitlana; Müller, Jan; Buzug, Thorsten M

    2011-07-01

    In computed tomography (CT), metal objects in the region of interest introduce data inconsistencies during acquisition. Reconstructing these data results in an image with star-shaped artifacts induced by the metal inconsistencies. To enhance image quality, the influence of the metal objects can be reduced by different metal artifact reduction (MAR) strategies. For an adequate evaluation of new MAR approaches, a ground-truth reference data set is needed. In technical evaluations, where phantoms can be measured with and without metal inserts, ground-truth data can easily be obtained by a second reference acquisition. Obviously, this is not possible for clinical data. Here, an alternative evaluation method is presented that does not require an additionally acquired reference data set. The proposed metric provides an inherent ground truth for quantifying metal artifacts and comparing MAR methods, with no reference information in the form of a second acquisition. The method is based on the forward projection of a reconstructed image, which is compared with the actually measured projection data. The new evaluation technique is applied to phantom and clinical CT data with and without MAR. The metric results are then compared with methods using a reference data set as well as with an expert-based classification. It is shown that the new approach is an adequate quantification technique for artifact strength in reconstructed metal or MAR CT images. The presented method works solely on the original projection data, which yields some advantages compared with distance measures in the image domain that use two data sets. Besides this, no parameters have to be chosen manually. The new metric is a useful evaluation alternative when no reference data are available.
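
    The core of the metric, comparing a forward projection of the reconstruction against the measured projection data, can be sketched as follows; scikit-image's radon transform stands in for the scanner geometry, and the published metric's exact weighting and handling of the metal trace are omitted.

        import numpy as np
        from skimage.transform import radon

        def projection_consistency(recon, sinogram, theta):
            """RMS difference between the forward projection of a 2-D
            reconstruction and the measured sinogram (detectors x angles)."""
            reproj = radon(recon, theta=theta)
            return np.sqrt(np.mean((reproj - sinogram) ** 2))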

  17. A self-reference PRF-shift MR thermometry method utilizing the phase gradient

    NASA Astrophysics Data System (ADS)

    Langley, Jason; Potter, William; Phipps, Corey; Huang, Feng; Zhao, Qun

    2011-12-01

    In magnetic resonance (MR) imaging, the most widely used and accurate method for measuring temperature is based on the shift in proton resonance frequency (PRF). However, inter-scan motion and bulk magnetic field shifts can lead to inaccurate temperature measurements with the PRF-shift MR thermometry method. Self-reference PRF-shift MR thermometry was introduced to overcome such problems: it derives a reference image from the heated or treated image itself, approximating the reference phase map with low-order polynomial functions. In this note, a new approach is presented to calculate the baseline phase map in self-reference PRF-shift MR thermometry. The proposed method utilizes the phase gradient to remove the phase-unwrapping step inherent to other self-reference PRF-shift MR thermometry methods. The performance of the proposed method was evaluated using numerical simulations with temperature distributions following a two-dimensional Gaussian function, as well as phantom and in vivo experimental data sets. The results from both the numerical simulations and the experimental data show that the proposed method is a promising technique for measuring temperature.
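
    For context, the PRF relation underlying all of these methods converts a phase difference into a temperature change. The sketch below assumes the commonly used PRF coefficient of about -0.01 ppm/°C; the paper's actual contribution, deriving the reference phase from the phase gradient, is not reproduced.

        import numpy as np

        GAMMA = 2 * np.pi * 42.58e6   # proton gyromagnetic ratio, rad/s/T
        ALPHA = -0.01e-6              # PRF thermal coefficient, approx. -0.01 ppm/degC

        def temperature_change(phase, phase_ref, b0, te):
            """Temperature change map (degC) from a phase difference (rad),
            main field b0 (T), and echo time te (s)."""
            return (phase - phase_ref) / (GAMMA * ALPHA * b0 * te)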

  18. Rapid assessment of urban wetlands: Do hydrogeomorphic classification and reference criteria work?

    EPA Science Inventory

    The Hydrogeomorphic (HGM) functional assessment method is predicated on the ability of a wetland classification method based on hydrology (HGM classification) and a visual assessment of disturbance and alteration to provide reference standards against which functions in individua...

  19. Ballistocardiogram as Proximal Timing Reference for Pulse Transit Time Measurement: Potential for Cuffless Blood Pressure Monitoring

    PubMed Central

    Kim, Chang-Sei; Carek, Andrew M.; Mukkamala, Ramakrishna; Inan, Omer T.; Hahn, Jin-Oh

    2015-01-01

    Goal: We tested the hypothesis that the ballistocardiogram (BCG) waveform could yield a viable proximal timing reference for measuring pulse transit time (PTT). Methods: From fifteen healthy volunteers, we measured PTT as the time interval between BCG and a non-invasively measured finger blood pressure (BP) waveform. To evaluate the efficacy of the BCG-based PTT in estimating BP, we likewise measured pulse arrival time (PAT) using the electrocardiogram (ECG) as proximal timing reference and compared their correlations to BP. Results: BCG-based PTT was correlated with BP reasonably well: the mean correlation coefficient (r) was 0.62 for diastolic (DP), 0.65 for mean (MP) and 0.66 for systolic (SP) pressures when the intersecting tangent method was used as distal timing reference. Comparing four distal timing references (intersecting tangent, maximum second derivative, diastolic minimum and systolic maximum), PTT exhibited the best correlation with BP when the systolic maximum method was used (mean r value was 0.66 for DP, 0.67 for MP and 0.70 for SP). PTT was more strongly correlated with DP than PAT regardless of the distal timing reference: mean r value was 0.62 versus 0.51 (p=0.07) for intersecting tangent, 0.54 versus 0.49 (p=0.17) for maximum second derivative, 0.58 versus 0.52 (p=0.37) for diastolic minimum, and 0.66 versus 0.60 (p=0.10) for systolic maximum methods. The difference between PTT and PAT in estimating DP was significant (p=0.01) when the r values associated with all the distal timing references were compared altogether. However, PAT appeared to outperform PTT in estimating SP (p=0.31 when the r values associated with all the distal timing references were compared altogether). Conclusion: We conclude that BCG is an adequate proximal timing reference in deriving PTT, and that BCG-based PTT may be superior to ECG-based PAT in estimating DP. Significance: PTT with BCG as proximal timing reference has potential to enable convenient and ubiquitous cuffless BP monitoring. PMID:26054058

  20. A quasiparticle-based multi-reference coupled-cluster method.

    PubMed

    Rolik, Zoltán; Kállay, Mihály

    2014-10-07

    The purpose of this paper is to introduce a quasiparticle-based multi-reference coupled-cluster (MRCC) approach. The quasiparticles are introduced via a unitary transformation which allows us to represent a complete active space reference function and other elements of an orthonormal multi-reference (MR) basis in a determinant-like form. The quasiparticle creation and annihilation operators satisfy the fermion anti-commutation relations. On the basis of these quasiparticles, a generalization of the normal-ordered operator products for the MR case can be introduced as an alternative to the approach of Mukherjee and Kutzelnigg [Recent Prog. Many-Body Theor. 4, 127 (1995); Mukherjee and Kutzelnigg, J. Chem. Phys. 107, 432 (1997)]. Based on the new normal ordering, any quasiparticle-based theory can be formulated using the well-known diagram techniques. Beyond the general quasiparticle framework, we also present a possible realization of the unitary transformation. The suggested transformation has an exponential form in which the parameters, carrying exclusively active indices, are defined in a form similar to the wave operator of the unitary coupled-cluster approach. The definition of our quasiparticle-based MRCC approach strictly follows the form of the single-reference coupled-cluster method and retains several of its beneficial properties. Test results for small systems are presented using a pilot implementation of the new approach and compared to those obtained by other MR methods.

  1. The importance of information on relatives for the prediction of genomic breeding values and the implications for the makeup of reference data sets in livestock breeding schemes.

    PubMed

    Clark, Samuel A; Hickey, John M; Daetwyler, Hans D; van der Werf, Julius H J

    2012-02-09

    The theory of genomic selection is based on the prediction of the effects of genetic markers in linkage disequilibrium with quantitative trait loci. However, genomic selection also relies on relationships between individuals to accurately predict genetic value. This study aimed to examine the importance of information on relatives versus that of unrelated or more distantly related individuals on the estimation of genomic breeding values. Simulated and real data were used to examine the effects of various degrees of relationship on the accuracy of genomic selection. Genomic Best Linear Unbiased Prediction (gBLUP) was compared to two pedigree-based BLUP methods, one with a shallow one-generation pedigree and the other with a deep ten-generation pedigree. The accuracy of estimated breeding values for different groups of selection candidates that had varying degrees of relationship to a reference data set of 1750 animals was investigated. The gBLUP method predicted breeding values more accurately than BLUP. The most accurate breeding values were estimated using gBLUP for closely related animals. Similarly, the pedigree-based BLUP methods were accurate for closely related animals; however, when they were used to predict unrelated animals, the accuracy was close to zero. In contrast, gBLUP breeding values for animals that had no pedigree relationship with animals in the reference data set retained substantial accuracy. An animal's relationship to the reference data set is an important factor for the accuracy of genomic predictions. Animals that share a close relationship to the reference data set had the highest accuracy from genomic predictions. However, a baseline accuracy, driven by the size of the reference data set and the effective population size, enables gBLUP to estimate a breeding value for unrelated animals within a population (breed), using information previously ignored by pedigree-based BLUP methods.
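
    A compact sketch of the gBLUP idea: a VanRaden-style genomic relationship matrix plugged into a ridge-type solve for the reference animals. This is an illustrative single-trait simplification, not the study's software.

        import numpy as np

        def grm(M):
            """VanRaden genomic relationship matrix from an (animals x SNPs)
            matrix of 0/1/2 genotype counts."""
            p = M.mean(axis=0) / 2.0
            Z = M - 2.0 * p
            return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

        def gblup(G, y, h2):
            """GEBVs for the reference animals; lambda = (1 - h2) / h2."""
            lam = (1.0 - h2) / h2
            return G @ np.linalg.solve(G + lam * np.eye(len(y)), y - y.mean())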

  2. Standardization in laboratory medicine: Adoption of common reference intervals to the Croatian population.

    PubMed

    Flegar-Meštrić, Zlata; Perkov, Sonja; Radeljak, Andrea

    2016-03-26

    Because the results of laboratory tests provide useful information about the state of health of patients, the determination of reference values is an intrinsic part of the development of laboratory medicine. There are still large differences in the analytical methods used as well as in the associated reference intervals, which can significantly affect the proper assessment of patient health. In a constant effort to increase the quality of patient care, there are numerous international initiatives for standardization and/or harmonization of laboratory diagnostics in order to achieve maximum comparability of laboratory test results and improve patient safety. Standardization and harmonization of analytical methods make it possible to create common reference intervals. Such reference intervals could be applied globally in all laboratories using methods traceable to the same reference measuring system and analysing biological samples from populations with similar socio-demographic and ethnic characteristics. In this review we outline the results of the harmonization processes in Croatia in the field of population-based reference intervals for clinically relevant blood and serum constituents, which are in accordance with the ongoing activity for worldwide standardization and harmonization based on traceability in laboratory medicine.

  4. Optically transmitted and inductively coupled electric reference to access in vivo concentrations for quantitative proton-decoupled ¹³C magnetic resonance spectroscopy.

    PubMed

    Chen, Xing; Pavan, Matteo; Heinzer-Schweizer, Susanne; Boesiger, Peter; Henning, Anke

    2012-01-01

    This report describes our efforts on quantification of tissue metabolite concentrations in mM by nuclear Overhauser enhanced and proton-decoupled ¹³C magnetic resonance spectroscopy and the Electric Reference To access In vivo Concentrations (ERETIC) method. Previous work showed that a calibrated synthetic magnetic resonance spectroscopy-like signal transmitted through an optical fiber and inductively coupled into a transmit/receive coil represents a reliable reference standard for in vivo ¹H magnetic resonance spectroscopy quantification on a clinical platform. In this work, we introduce a related implementation that enables simultaneous proton decoupling and ERETIC-based metabolite quantification and hence extends the applicability of the ERETIC method to nuclear Overhauser enhanced and proton-decoupled in vivo ¹³C magnetic resonance spectroscopy. In addition, ERETIC signal stability under the influence of simultaneous proton decoupling is investigated. The proposed quantification method was cross-validated against internal and external reference standards on human skeletal muscle. The ERETIC signal intensity stability was 100.65 ± 4.18% over 3 months, including measurements with and without proton decoupling. Glycogen and unsaturated fatty acid concentrations measured with the ERETIC method were in excellent agreement with internal creatine and external phantom reference methods, showing a difference of 1.85 ± 1.21% for glycogen and 1.84 ± 1.00% for unsaturated fatty acid between ERETIC and creatine-based quantification, whereas the deviations between external reference and creatine-based quantification were 6.95 ± 9.52% and 3.19 ± 2.60%, respectively. Copyright © 2011 Wiley Periodicals, Inc.

  5. HUGO: Hierarchical mUlti-reference Genome cOmpression for aligned reads

    PubMed Central

    Li, Pinghao; Jiang, Xiaoqian; Wang, Shuang; Kim, Jihoon; Xiong, Hongkai; Ohno-Machado, Lucila

    2014-01-01

    Background and objective: Short-read sequencing is becoming the standard of practice for the study of structural variants associated with disease. However, with the growth of sequence data largely surpassing reasonable storage capability, the biomedical community is challenged with the management, transfer, archiving, and storage of sequence data. Methods: We developed Hierarchical mUlti-reference Genome cOmpression (HUGO), a novel compression algorithm for aligned reads in the sorted Sequence Alignment/Map (SAM) format. We first aligned short reads against a reference genome and stored exactly mapped reads for compression. Inexactly mapped or unmapped reads were realigned against different reference genomes using an adaptive scheme that gradually shortens the read length. For the base quality values, we offer lossy and lossless compression mechanisms. The lossy compression mechanism for the base quality values uses k-means clustering, where the user can adjust the balance between decompression quality and compression rate. Lossless compression can be produced by setting k (the number of clusters) to the number of distinct quality values. Results: The proposed method produced a compression ratio in the range 0.5-0.65, which corresponds to 35-50% storage savings on experimental datasets. The proposed approach achieved 15% more storage savings than CRAM and a compression ratio comparable to Samcomp (CRAM and Samcomp are two of the state-of-the-art genome compression algorithms). The software is freely available at https://sourceforge.net/projects/hierachicaldnac/ with a General Public License (GPL) license. Limitation: Our method requires different reference genomes and prolongs the execution time for additional alignments. Conclusions: The proposed multi-reference-based compression algorithm for aligned reads outperforms existing single-reference-based algorithms. PMID:24368726
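
    The lossy/lossless quality-value scheme reduces to clustering: each quality value is replaced by its k-means centroid, and setting k to the number of distinct values makes the mapping lossless. A hedged sketch, not HUGO's implementation:

        import numpy as np
        from sklearn.cluster import KMeans

        def compress_qualities(quals, k):
            """Map per-base quality values to k centroids (lossy for small k)."""
            q = np.asarray(quals, dtype=float).reshape(-1, 1)
            km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(q)
            return km.cluster_centers_[km.labels_].ravel().round().astype(int)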

  6. Reconstruction method for fringe projection profilometry based on light beams.

    PubMed

    Li, Xuexing; Zhang, Zhijiang; Yang, Chen

    2016-12-01

    A novel reconstruction method for fringe projection profilometry, based on light beams, is proposed and verified by experiments. Commonly used calibration techniques require either projector calibration parameters or reference planes placed at many known positions. Introducing projector calibration can reduce the accuracy of the reconstruction result, and setting reference planes at many known positions is a time-consuming process. Therefore, in this paper, a reconstruction method without the projector's parameters is proposed, in which only two reference planes are introduced. A series of light beams, determined by the subpixel point-to-point map on the two reference planes and combined with their reflected light beams determined by the camera model, is used to calculate the 3D coordinates of reconstruction points. Furthermore, the bundle adjustment strategy and the complementary gray-code phase-shifting method are utilized to ensure accuracy and stability. Qualitative and quantitative comparisons as well as experimental tests demonstrate the performance of the proposed approach; the measurement accuracy can reach about 0.0454 mm.
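
    As background for the phase-shifting step mentioned above, the standard four-step wrapped-phase computation is sketched below; the paper's complementary gray-code unwrapping and light-beam calibration are not reproduced.

        import numpy as np

        def wrapped_phase(i1, i2, i3, i4):
            """Wrapped phase from four fringe images shifted by pi/2 each:
            I_k = A + B*cos(phi + k*pi/2), k = 0..3."""
            return np.arctan2(i4 - i2, i1 - i3)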

  7. Reference Value Advisor: a new freeware set of macroinstructions to calculate reference intervals with Microsoft Excel.

    PubMed

    Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine

    2011-03-01

    International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
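
    The nonparametric branch (used when n ≥ 40) is straightforward to reproduce outside Excel; below is a hedged numpy sketch with bootstrap 90% CIs, one common way to obtain such CIs and not necessarily the macros' exact algorithm.

        import numpy as np

        def nonparametric_ri(values, n_boot=2000, seed=0):
            """2.5th-97.5th percentile reference limits with bootstrap 90% CIs."""
            x = np.asarray(values, dtype=float)
            limits = np.percentile(x, [2.5, 97.5])
            rng = np.random.default_rng(seed)
            resamples = rng.choice(x, size=(n_boot, x.size), replace=True)
            boots = np.percentile(resamples, [2.5, 97.5], axis=1)  # (2, n_boot)
            ci = np.percentile(boots, [5, 95], axis=1)             # 90% CI per limit
            return limits, ci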

  8. Comparison of methods for the prediction of human clearance from hepatocyte intrinsic clearance for a set of reference compounds and an external evaluation set.

    PubMed

    Yamagata, Tetsuo; Zanelli, Ugo; Gallemann, Dieter; Perrin, Dominique; Dolgos, Hugues; Petersson, Carl

    2017-09-01

    1. We compared the direct scaling, regression model equation, and so-called "Poulin et al." methods for scaling clearance (CL) from in vitro intrinsic clearance (CLint) measured in human hepatocytes, using two sets of compounds: a reference set of 20 compounds with known elimination pathways and an external evaluation set of 17 compounds in development at Merck (MS). 2. A 90% prospective confidence interval was calculated using the reference set. This interval was found relevant for the regression equation method. The three outliers identified were justified on the basis of their elimination mechanism. 3. The direct scaling method showed a systematic underestimation of clearance in both the reference and evaluation sets. The "Poulin et al." and regression equation methods showed no obvious bias in either set. 4. The regression model equation was slightly superior to the "Poulin et al." method in the reference set, with a better absolute average fold error (AAFE) of 1.3 compared to 1.6. A larger difference was observed in the evaluation set, where the regression method and "Poulin et al." resulted in an AAFE of 1.7 and 2.6, respectively (removing the three compounds with known issues mentioned above). A similar pattern was observed for the correlation coefficient. Based on these data, we suggest the regression equation method combined with a prospective confidence interval as the first choice for the extrapolation of human in vivo hepatic metabolic clearance from in vitro systems.
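
    The AAFE figure of merit used above has a standard definition that the sketch below assumes: 10 raised to the mean absolute log10 fold error between predicted and observed clearance.

        import numpy as np

        def aafe(predicted, observed):
            """Absolute average fold error."""
            ratio = np.asarray(predicted, float) / np.asarray(observed, float)
            return 10 ** np.mean(np.abs(np.log10(ratio)))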

  9. Calibration procedure of Hukseflux SR25 to Establish the Diffuse Reference for the Outdoor Broadband Radiometer Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reda, Ibrahim M.; Andreas, Afshin M.

    2017-08-01

    Accurate pyranometer calibrations, traceable to internationally recognized standards, are critical for solar irradiance measurements. One calibration method is the component summation method, where the pyranometers are calibrated outdoors under clear-sky conditions, and the reference global solar irradiance is calculated as the sum of two reference components, the diffuse horizontal and subtended beam solar irradiances. The beam component is measured with pyrheliometers traceable to the World Radiometric Reference, while there is no internationally recognized reference for the diffuse component. In the absence of such a reference, we present a method to consistently calibrate pyranometers for measuring the diffuse component. The method is based on using a modified shade/unshade method and a pyranometer with less than 0.5 W/m² thermal offset. The calibration result shows that the responsivity of the Hukseflux SR25 pyranometer equals 10.98 µV/(W/m²) with ±0.86% uncertainty.

  10. Content-based fused off-axis object illumination direct-to-digital holography

    DOEpatents

    Price, Jeffery R.

    2006-05-02

    Systems and methods are described for content-based fused off-axis illumination direct-to-digital holography. A method includes calculating an illumination angle with respect to an optical axis defined by a focusing lens as a function of data representing a Fourier-analyzed spatially heterodyne hologram; reflecting a reference beam from a reference mirror at a non-normal angle; reflecting an object beam from an object, the object beam being incident upon the object at the illumination angle; focusing the reference beam and the object beam at a focal plane of a digital recorder to form the content-based off-axis illuminated spatially heterodyne hologram, including spatially heterodyne fringes for Fourier analysis; and digitally recording the content-based off-axis illuminated spatially heterodyne hologram, including spatially heterodyne fringes for Fourier analysis.

  11. Rapid surface defect detection based on singular value decomposition using steel strips as an example

    NASA Astrophysics Data System (ADS)

    Sun, Qianlai; Wang, Yin; Sun, Zhiyi

    2018-05-01

    For most surface defect detection methods based on image processing, image segmentation is a prerequisite for determining and locating the defect. In our previous work, a method based on singular value decomposition (SVD) was used to determine and approximately locate surface defects on steel strips without image segmentation. In the SVD-based method, the image to be inspected was projected onto its first left and right singular vectors respectively. If there were defects in the image, there would be sharp changes in the projections, so defects could be determined and located according to sharp changes in the projections of each image to be inspected. This method was simple and practical, but SVD had to be performed for each image to be inspected. Owing to the high time complexity of SVD itself, it did not have a significant advantage in terms of time consumption over image segmentation-based methods. Here, we present an improved SVD-based method. In the improved method, a defect-free image acquired under the same conditions as the images to be inspected serves as the reference image. The singular vectors of each image to be inspected are replaced by the singular vectors of the reference image, and SVD is performed only once, off-line, for the reference image before defect detection, thus greatly reducing the time required. The improved method is more conducive to real-time defect detection. Experimental results confirm its validity.
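
    A minimal sketch of the improved scheme: the reference image's first singular vectors are computed once, off-line, and each inspected image is merely projected onto them; the threshold and names are illustrative.

        import numpy as np

        # Off-line, once: first singular vectors of the defect-free reference.
        # U, s, Vt = np.linalg.svd(reference_image, full_matrices=False)
        # u1, v1 = U[:, 0], Vt[0]

        def detect_defects(image, u1, v1, threshold):
            """Sharp jumps in the projections onto the reference's first
            singular vectors flag candidate defect rows/columns."""
            row_profile = image @ v1   # projection onto first right singular vector
            col_profile = u1 @ image   # projection onto first left singular vector
            return (np.abs(np.diff(row_profile)) > threshold,
                    np.abs(np.diff(col_profile)) > threshold)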

  12. Counting the stunted children in a population: a criticism of old and new approaches and a conciliatory proposal.

    PubMed

    Monteiro, C A

    1991-01-01

    Two methods for estimating the prevalence of growth retardation in a population are evaluated: the classical method, which is based on the proportion of children whose height is more than 2 standard deviations below the expected mean of a reference population, and a new method recently proposed by Mora, which is based on the whole height distribution of the observed and reference populations. Application of the classical method to several simulated populations leads to the conclusion that in most situations in developing countries the prevalence of growth retardation is grossly underestimated and reflects only the presence of severe growth deficits. A second constraint of this method is a marked reduction of the relative differentials between more and less exposed strata. Application of Mora's method to the same simulated populations reduced but did not eliminate these constraints. A novel method for estimating the prevalence of growth retardation, also based on the whole height distribution of the observed and reference populations, is described and evaluated. This method produces better estimates of the true prevalence of growth retardation with no reduction in relative differentials.
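
    The classical estimator discussed above is simply the share of children more than 2 standard deviations below the reference mean; a one-function sketch, assuming the reference mean and SD for the child's age and sex are known:

        import numpy as np

        def classical_prevalence(heights, ref_mean, ref_sd):
            """Fraction of children with height-for-age z-score below -2."""
            z = (np.asarray(heights, dtype=float) - ref_mean) / ref_sd
            return float(np.mean(z < -2.0))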

  13. An automated and objective method for age partitioning of reference intervals based on continuous centile curves.

    PubMed

    Yang, Qian; Lew, Hwee Yeong; Peh, Raymond Hock Huat; Metz, Michael Patrick; Loh, Tze Ping

    2016-10-01

    Reference intervals are the most commonly used decision support tool when interpreting quantitative laboratory results. They may require partitioning to better describe subpopulations that display significantly different reference values. Partitioning by age is particularly important for the paediatric population, since there are marked physiological changes associated with growth and maturation. However, most partitioning methods are either technically complex or require prior knowledge of the underlying physiology/biological variation of the population. There is growing interest in the use of continuous centile curves, which provide seamless laboratory reference values as a child grows, as an alternative to rigidly described fixed reference intervals. However, the mathematical functions that describe these curves can be complex and may not be easily implemented in laboratory information systems. Hence, the use of fixed reference intervals is expected to continue for the foreseeable future. We developed a method that objectively proposes optimised age partitions and reference intervals for quantitative laboratory data (http://research.sph.nus.edu.sg/pp/ppResult.aspx), based on the sum of gradients that best describes the underlying distribution of the continuous centile curves. It is hoped that this method may improve the selection of age intervals for partitioning, which is receiving increasing attention in paediatric laboratory medicine. Copyright © 2016 Royal College of Pathologists of Australasia. Published by Elsevier B.V. All rights reserved.

  14. An analysis of methods for gravity determination and their utilization for the calculation of geopotential numbers in the Slovak national levelling network

    NASA Astrophysics Data System (ADS)

    Majkráková, Miroslava; Papčo, Juraj; Zahorec, Pavol; Droščák, Branislav; Mikuška, Ján; Marušiak, Ivan

    2016-09-01

    The vertical reference system in the Slovak Republic is realized by the National Levelling Network (NLN). Normal heights according to Molodensky were introduced as the reference heights in the NLN in 1957. Since then, the gravity correction necessary to determine the reference heights in the NLN has been obtained by interpolation from either the simple or the complete Bouguer anomalies. We refer to this method as the "original" one. Currently, the method based on geopotential numbers is the preferred way to unify the European levelling networks. The core of this article is an analysis of different approaches to gravity determination and their application to the calculation of geopotential numbers at the points of the NLN. The first method is based on the calculation of gravity at levelling points from interpolated values of the complete Bouguer anomaly using the CBA2G_SK software. The second method is based on the global geopotential model EGM2008 refined by the Residual Terrain Model (RTM) approach. The calculated gravity is used to determine the normal heights according to Molodensky along parts of the levelling lines around the EVRF2007 datum point EH-V. Pitelová (UELN-1905325) and the levelling line of the 2nd order NLN to Kráľova hoľa Mountain (the highest point measured by levelling). The results of our analysis illustrate that the method based on interpolated gravity values is the better method for gravity determination when measured gravity is unavailable. It was shown that this method is suitable for the determination of geopotential numbers and reference heights in the Slovak national levelling network at points where gravity is not observed directly. We also demonstrated the necessity of using a precise RTM to refine the results derived solely from EGM2008.

  15. Reference Intervals of Hematology and Clinical Chemistry Analytes for 1-Year-Old Korean Children

    PubMed Central

    Lee, Hye Ryun; Roh, Eun Youn; Chang, Ju Young

    2016-01-01

    Background: Reference intervals need to be established according to age. We established reference intervals of hematology and chemistry from community-based healthy 1-yr-old children and analyzed their iron status according to the feeding methods during the first six months after birth. Methods: A total of 887 children who received a medical check-up between 2010 and 2014 at Boramae Hospital (Seoul, Korea) were enrolled. A total of 534 children (247 boys and 287 girls) were enrolled as reference individuals after the exclusion of data obtained from children with suspected iron deficiency. Hematology and clinical chemistry analytes were measured, and the reference value of each analyte was estimated by using parametric (mean±2 SD) or nonparametric methods (2.5-97.5th percentile). Iron, total iron-binding capacity, and ferritin were measured, and transferrin saturation was calculated. Results: As there were no differences in the mean values between boys and girls, we established the reference intervals for 1-yr-old children regardless of sex. The analysis of serum iron status according to feeding methods during the first six months revealed higher iron, ferritin, and transferrin saturation levels in children exclusively or mainly fed formula than in children exclusively or mainly fed breast milk. Conclusions: We established reference intervals of hematology and clinical chemistry analytes from community-based healthy children at one year of age. These reference intervals will be useful for interpreting results of medical check-ups at one year of age. PMID:27374715
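
    The two estimation routes and the transferrin saturation calculation mentioned in the methods can be sketched as follows (illustrative only; the study's exclusion steps are omitted):

        import numpy as np

        def reference_interval(x, parametric=True):
            """mean +/- 2 SD (parametric) or 2.5th-97.5th percentile."""
            x = np.asarray(x, dtype=float)
            if parametric:
                m, s = x.mean(), x.std(ddof=1)
                return m - 2 * s, m + 2 * s
            return tuple(np.percentile(x, [2.5, 97.5]))

        def transferrin_saturation(iron, tibc):
            """Percent saturation from serum iron and total iron-binding capacity."""
            return 100.0 * iron / tibc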

  16. Proposed Application of Fast Fourier Transform in Near Infra Red Based Non Invasive Blood Glucose Monitoring System

    NASA Astrophysics Data System (ADS)

    Jenie, R. P.; Iskandar, J.; Kurniawan, A.; Rustami, E.; Syafutra, H.; Nurdin, N. M.; Handoyo, T.; Prabowo, J.; Febryarto, R.; Rahayu, M. S. K.; Damayanthi, E.; Rimbawan; Sukandar, D.; Suryana, Y.; Irzaman; Alatas, H.

    2017-03-01

    The worldwide emergence of glycaemic-status-related health disorders, such as diabetes and metabolic syndrome, is growing at an alarming rate. The objective was to propose a new method for a non-invasive blood glucose level measurement system based on the implementation of Fast Fourier Transform methods. This was an initial lab-scale study. Literature on non-invasive blood glucose measurement from Scopus, Medline, and Google Scholar, from 2011 until 2016, was used as the design reference, combined with in-house verification. The system was developed in a modular fashion based on the aforementioned compiled references. Several preliminary tests to understand the relationship between LED and photodiode responses have been done, and we have demonstrated different sensor responses to water and glucose. Human testing of the non-invasive blood glucose level measurement system is still needed.

  17. Serum prolactin revisited: parametric reference intervals and cross platform evaluation of polyethylene glycol precipitation-based methods for discrimination between hyperprolactinemia and macroprolactinemia.

    PubMed

    Overgaard, Martin; Pedersen, Susanne Møller

    2017-10-26

    The diagnosis and treatment of hyperprolactinemia are often compromised by the presence of biologically inactive and clinically irrelevant higher-molecular-weight complexes of prolactin, macroprolactin. The objective of this study was to evaluate the performance of two macroprolactin screening regimes across commonly used automated immunoassay platforms. Parametric total and monomeric gender-specific reference intervals were determined for six immunoassay methods using female (n=96) and male sera (n=127) from healthy donors. The reference intervals were validated using 27 hyperprolactinemic and macroprolactinemic sera, whose content of monomeric and macroforms of prolactin was determined using gel filtration chromatography (GFC). Normative data for the six prolactin assays included the range of values (2.5th-97.5th percentiles). The validation sera (hyperprolactinemic and macroprolactinemic; n=27) showed more discordant classifications [mean=2.8; 95% confidence interval (CI) 1.2-4.4] for the monomer reference interval method compared with the post-polyethylene glycol (PEG) recovery cutoff method (mean=1.8; 95% CI 0.8-2.8). The two monomer/macroprolactin discrimination methods did not differ significantly (p=0.089). Among macroprolactinemic sera evaluated by both discrimination methods, the Cobas and Architect/Kryptor prolactin assays showed the lowest and the highest number of misclassifications, respectively. Current automated immunoassays for prolactin testing require macroprolactin screening methods based on PEG precipitation in order to discriminate truly from falsely elevated serum prolactin. While the recovery cutoff and monomeric reference interval macroprolactin screening methods demonstrate similar discriminative ability, the latter method also provides the clinician with an easily interpretable monomeric prolactin concentration along with a monomeric reference interval.
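
    The post-PEG recovery cutoff method reduces to a simple ratio; the sketch below uses a 40% recovery cutoff, a commonly cited convention rather than this study's specific threshold.

        def peg_recovery(prolactin_pre, prolactin_post):
            """Percent prolactin recovered after PEG precipitation."""
            return 100.0 * prolactin_post / prolactin_pre

        def screen(prolactin_pre, prolactin_post, cutoff=40.0):
            """Low recovery suggests macroprolactinemia; cutoffs vary by
            laboratory and assay."""
            if peg_recovery(prolactin_pre, prolactin_post) < cutoff:
                return "macroprolactinemia suspected"
            return "monomeric prolactin predominates"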

  18. Measuring Symmetry in Children With Unrepaired Cleft Lip: Defining a Standard for the Three-Dimensional Midfacial Reference Plane.

    PubMed

    Wu, Jia; Heike, Carrie; Birgfeld, Craig; Evans, Kelly; Maga, Murat; Morrison, Clinton; Saltzman, Babette; Shapiro, Linda; Tse, Raymond

    2016-11-01

    Quantitative measures of facial form to evaluate treatment outcomes for cleft lip (CL) are currently limited. Computer-based analysis of three-dimensional (3D) images provides an opportunity for efficient and objective analysis. The purpose of this study was to define a computer-based standard for identifying the 3D midfacial reference plane of the face in children with unrepaired cleft lip for measurement of facial symmetry. The 3D images of 50 subjects (35 with unilateral CL, 10 with bilateral CL, five controls) were included in this study. Five methods of defining a midfacial plane were applied to each image, including two human-based (Direct Placement, Manual Landmark) and three computer-based (Mirror, Deformation, Learning) methods. Six blinded raters (three cleft surgeons, two craniofacial pediatricians, and one craniofacial researcher) independently ranked and rated the accuracy of the defined planes. Among computer-based methods, the Deformation method performed significantly better than the others. Although human-based methods performed best, there was no significant difference compared with the Deformation method. The average correlation coefficient among raters was .4; however, it was .7 and .9 when the angular difference between planes was greater than 6° and 8°, respectively. Raters can agree on the 3D midfacial reference plane in children with unrepaired CL using digital surface mesh. The Deformation method performed best among the computer-based methods evaluated and can be considered a useful tool to carry out automated measurements of facial symmetry in children with unrepaired cleft lip.

  19. Accurate determination of reference materials and natural isolates by means of quantitative ¹H NMR spectroscopy.

    PubMed

    Frank, Oliver; Kreissl, Johanna Karoline; Daschner, Andreas; Hofmann, Thomas

    2014-03-26

    A fast and precise proton nuclear magnetic resonance (qHNMR) method for the quantitative determination of low-molecular-weight target molecules in reference materials and natural isolates has been validated using ERETIC 2 (Electronic REference To access In vivo Concentrations) based on the PULCON (PULse length based CONcentration determination) methodology and compared to the gravimetric results. Using an Avance III NMR spectrometer (400 MHz) equipped with a broad band observe (BBO) probe, the qHNMR method was validated by determining its linearity, range, precision, and accuracy as well as robustness and limit of quantitation. The linearity of the method was assessed by measuring samples of L-tyrosine, caffeine, or benzoic acid in a concentration range between 0.3 and 16.5 mmol/L (r² ≥ 0.99), whereas the interday and intraday precisions were found to be ≤2%. The recovery of a range of reference compounds was ≥98.5%, thus demonstrating the qHNMR method as a precise tool for the rapid quantitation (~15 min) of food-related target compounds in reference materials and natural isolates such as nucleotides, polyphenols, or cyclic peptides.
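
    A simplified form of the PULCON relation behind ERETIC 2 is sketched below; receiver-gain and temperature corrections are omitted and the parameter names are illustrative.

        def pulcon_concentration(area_u, area_ref, c_ref, n_u, n_ref,
                                 p90_u, p90_ref, ns_u=1, ns_ref=1):
            """Unknown concentration from the integral ratio, corrected for
            proton counts (n), 90-degree pulse lengths (p90, reciprocity),
            and numbers of scans (ns)."""
            return (c_ref * (area_u / area_ref) * (n_ref / n_u)
                    * (p90_u / p90_ref) * (ns_ref / ns_u))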

  20. Reference voltage calculation method based on zero-sequence component optimisation for a regional compensation DVR

    NASA Astrophysics Data System (ADS)

    Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang

    2018-04-01

    This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise the cost-effectiveness of compensating voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults, and of the effect of the transformer wiring mode on these characteristics, the optimisation target of the reference voltage calculation is presented together with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which minimises the swell in the phase-to-ground voltage after compensation and improves the symmetry of the DVR output voltages, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.

  1. Method modification of the Legipid® Legionella fast detection test kit.

    PubMed

    Albalat, Guillermo Rodríguez; Broch, Begoña Bedrina; Bono, Marisa Jiménez

    2014-01-01

    Legipid® Legionella Fast Detection is a test based on combined magnetic immunocapture and enzyme immunoassay (CEIA) for the detection of Legionella in water. The test is based on the use of anti-Legionella antibodies immobilized on magnetic microspheres. The target microorganism is preconcentrated by filtration. Immunomagnetic analysis is applied to these preconcentrated water samples in a final test portion of 9 mL. The test kit was certified by the AOAC Research Institute as Performance Tested Method(SM) (PTM) No. 111101 in a PTM validation which certified the performance claims of the test method in comparison to the ISO reference method 11731-1998 and the revision 11731-2004, "Water Quality: Detection and Enumeration of Legionella pneumophila," in potable water, industrial water, and waste water. A modification of this test kit has been approved. The modification includes broadening the target analyte from L. pneumophila to Legionella species and adding an optical reader to the test method. In this study, 71 strains of Legionella spp. other than L. pneumophila were tested to determine their reactivity with the CEIA-based kit. All the strains of Legionella spp. tested by the CEIA test were confirmed positive by the reference standard method ISO 11731. This test (PTM 111101) has been modified to include a final optical reading. A methods comparison study was conducted to demonstrate the equivalence of this modification to the reference culture method. Two water matrixes were analyzed. The results show no statistically detectable difference between the test method and the reference culture method for the enumeration of Legionella spp. The relative level of detection was 93 CFU/volume examined (LOD50). For optical reading, the LOD was 40 CFU/volume examined and the LOQ was 60 CFU/volume examined. The results show that the Legipid Legionella Fast Detection test is equivalent to the reference culture method for the enumeration of Legionella spp.

  2. Selection of reference standard during method development using the analytical hierarchy process.

    PubMed

    Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun

    2015-03-25

    A reference standard is critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria for a reference standard are often not directly measurable. The aim of this paper is to recommend a quantitative approach for the selection of reference standards during method development based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed for the quantitative analysis of six phenolic acids from Salvia miltiorrhiza and its preparations by ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance: feasibility of procurement, abundance in samples, chemical stability, accuracy, precision, and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, with rosmarinic acid, at about 79.8% of its priority score, as the second choice. The determination results successfully verified the evaluation ability of this model. The AHP allowed us to comprehensively weigh the benefits and risks of the alternatives. It is an effective and practical tool for the selection of reference standards during method development. Copyright © 2015 Elsevier B.V. All rights reserved.
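
    The standard AHP step, extracting priority weights from a pairwise-comparison matrix via its principal eigenvector, can be sketched as follows; the paper's six-criterion hierarchy and judgment values are not reproduced.

        import numpy as np

        def ahp_priorities(pairwise):
            """Normalized priority weights from a reciprocal
            pairwise-comparison matrix (principal eigenvector method)."""
            A = np.asarray(pairwise, dtype=float)
            vals, vecs = np.linalg.eig(A)
            w = np.real(vecs[:, np.argmax(np.real(vals))])
            w = np.abs(w)   # eigenvector sign is arbitrary
            return w / w.sum()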

  3. [Study on ethnic medicine quantitative reference herb, Tibetan medicine fruits of Capsicum frutescens as a case].

    PubMed

    Zan, Ke; Cui, Gan; Guo, Li-Nong; Ma, Shuang-Cheng; Zheng, Jian

    2018-05-01

    The high price and limited availability of reference substances have become obstacles to HPLC assays of ethnic medicines. A new method based on a quantitative reference herb (QRH) was proposed. Specific chromatograms of the fruits of Capsicum frutescens were employed to determine peak positions, and an HPLC quantitative reference herb was prepared from fruits of C. frutescens. The content of capsaicin and dihydrocapsaicin in the quantitative reference herb was determined by HPLC. Eleven batches of fruits of C. frutescens were analyzed with the quantitative reference herb and with reference substances, respectively. The results showed no difference. The present method is feasible for the quality control of ethnic medicines, and a quantitative reference herb is a suitable replacement for reference substances in assays. Copyright© by the Chinese Pharmaceutical Association.

  4. Delivering Electronic Resources with Web OPACs and Other Web-based Tools: Needs of Reference Librarians.

    ERIC Educational Resources Information Center

    Bordeianu, Sever; Carter, Christina E.; Dennis, Nancy K.

    2000-01-01

    Describes Web-based online public access catalogs (Web OPACs) and other Web-based tools as gateway methods for providing access to library collections. Addresses solutions for overcoming barriers to information, such as through the implementation of proxy servers and other authentication tools for remote users. (Contains 18 references.)…

  5. 'Aussie normals': an a priori study to develop clinical chemistry reference intervals in a healthy Australian population.

    PubMed

    Koerbin, G; Cavanaugh, J A; Potter, J M; Abhayaratna, W P; West, N P; Glasgow, N; Hawkins, C; Armbruster, D; Oakman, C; Hickman, P E

    2015-02-01

    The development of reference intervals is difficult, time consuming, expensive, and beyond the scope of most laboratories. The Aussie Normals study is a direct a priori study to determine reference intervals in healthy Australian adults. All volunteers completed a health and lifestyle questionnaire, and exclusion was based on conditions such as pregnancy, diabetes, and renal or cardiovascular disease. Up to 91 biochemical analyses were undertaken on a variety of analytical platforms using serum samples collected from 1856 volunteers. We report our findings for 40 of these analytes and two calculated parameters performed on the Abbott ARCHITECT ci8200/ci16200 analysers. Not all samples were analysed for all assays due to volume requirements or assay/instrument availability. Results with elevated interference indices and those deemed unsuitable after clinical evaluation were removed from the database. Reference intervals were partitioned, based on the method of Harris and Boyd, into three scenarios: combined genders; males and females; and age and gender. We have performed a detailed reference interval study on a healthy Australian population considering the effects of sex, age, and body mass. These reference intervals may be adapted to other manufacturers' analytical methods using method transference.

  6. Analytical Bias Exceeding Desirable Quality Goal in 4 out of 5 Common Immunoassays: Results of a Native Single Serum Sample External Quality Assessment Program for Cobalamin, Folate, Ferritin, Thyroid-Stimulating Hormone, and Free T4 Analyses.

    PubMed

    Kristensen, Gunn B B; Rustad, Pål; Berg, Jens P; Aakre, Kristin M

    2016-09-01

    We undertook this study to evaluate method differences for 5 components analyzed by immunoassays, to explore whether the use of method-dependent reference intervals may compensate for method differences, and to investigate commutability of external quality assessment (EQA) materials. Twenty fresh native single serum samples, a fresh native serum pool, Nordic Federation of Clinical Chemistry Reference Serum X (serum X) (serum pool), and 2 EQA materials were sent to 38 laboratories for measurement of cobalamin, folate, ferritin, free T4, and thyroid-stimulating hormone (TSH) by 5 different measurement procedures [Roche Cobas (n = 15), Roche Modular (n = 4), Abbott Architect (n = 8), Beckman Coulter Unicel (n = 2), and Siemens ADVIA Centaur (n = 9)]. The target value for each component was calculated based on the mean of method means or measured by a reference measurement procedure (free T4). Quality specifications were based on biological variation. Local reference intervals were reported from all laboratories. Method differences that exceeded acceptable bias were found for all components except folate. Free T4 differences from the uncommonly used reference measurement procedure were large. Reference intervals differed between measurement procedures but also within 1 measurement procedure. The serum X material was commutable for all components and measurement procedures, whereas the EQA materials were noncommutable in 13 of 50 occasions (5 components, 5 methods, 2 EQA materials). The bias between the measurement procedures was unacceptably large in 4/5 tested components. Traceability to reference materials as claimed by the manufacturers did not lead to acceptable harmonization. Adjustment of reference intervals in accordance with method differences and use of commutable EQA samples are not implemented commonly. © 2016 American Association for Clinical Chemistry.

  7. Effect of genotyped cows in the reference population on the genomic evaluation of Holstein cattle.

    PubMed

    Uemoto, Y; Osawa, T; Saburi, J

    2017-03-01

    This study evaluated how reliability and prediction bias depend on the prediction method, the type of animals included (bulls or cows), and genetic relatedness when genotyped cows are included in a progeny-tested bull reference population. We performed genomic evaluation using a Japanese Holstein population and assessed the accuracy of genomically enhanced breeding values (GEBV) for three production traits and 13 linear conformation traits. A total of 4564 animals for production traits and 4172 animals for conformation traits were genotyped using the Illumina BovineSNP50 array. Single- and multi-step methods were compared for predicting GEBV in genotyped bull-only and genotyped bull-cow reference populations. No large differences in realized reliability and regression coefficient were found between the two reference populations; however, a slight difference was found between the two methods for production traits. The accuracy of GEBV determined by the single-step method increased slightly when genotyped cows were included in the bull reference population, but decreased slightly with the multi-step method. A validation study was used to evaluate the accuracy of GEBV when 800 additional genotyped bulls (POPbull) or cows (POPcow) were included in the base reference population composed of 2000 genotyped bulls. The realized reliabilities of POPbull were higher than those of POPcow for all traits. For the gain in realized reliability over the base reference population, the average ratios of POPbull gain to POPcow gain for production traits and conformation traits were 2.6 and 7.2, respectively, and the ratios depended on the heritabilities of the traits. For the regression coefficient, no large differences were found between the results for POPbull and POPcow. Another validation study was performed to investigate the effect of genetic relatedness between cows and bulls in the reference and test populations. The effect of genetic relationships among bulls in the reference population was also assessed. The results showed that it is important to account for relatedness among bulls in the reference population. Our studies indicate that the prediction method, the type of animals included, and genetic relatedness can affect the prediction accuracy in genomic evaluation of Holstein cattle when genotyped cows are included in the reference population.

  8. [Application and case analysis on the problem-based teaching of Jingluo Shuxue Xue (Science of Meridian and Acupoint) in reference to the team oriented learning method].

    PubMed

    Ma, Ruijie; Lin, Xianming

    2015-12-01

    Problem-based teaching (PBT) has become a main approach to training in universities around the world. Combined with the team-oriented learning method, PBT can become a suitable method for education in medical universities. In this paper, based on common questions arising in the teaching of Jingluo Shuxue Xue (Science of Meridian and Acupoint), the concepts and characteristics of PBT and the team-oriented learning method are analyzed. The implementation steps of PBT are set out with reference to the team-oriented learning method. By quoting the original text of Beiji Qianjin Yaofang (Essential Recipes for Emergent Use Worth a Thousand Gold), a case analysis of "the thirteen devil points" was established with PBT.

  9. Reference set for performance testing of pediatric vaccine safety signal detection methods and systems.

    PubMed

    Brauchli Pernus, Yolanda; Nan, Cassandra; Verstraeten, Thomas; Pedenko, Mariia; Osokogu, Osemeke U; Weibel, Daniel; Sturkenboom, Miriam; Bonhoeffer, Jan

    2016-12-12

    Safety signal detection in spontaneous reporting system databases and electronic healthcare records is key to the detection of previously unknown adverse events following immunization. Various statistical methods for signal detection in these different data sources have been developed; however, none are geared to the pediatric population and none specifically to vaccines. A reference set comprising pediatric vaccine-adverse event pairs is required for reliable performance testing of statistical methods within and across data sources. The study was conducted within the context of the Global Research in Paediatrics (GRiP) project, part of the seventh framework programme (FP7) of the European Commission. The criteria for selecting the vaccines considered in the reference set were routine and global use in the pediatric population. Adverse events were primarily selected based on importance. Outcome-based systematic literature searches were performed for all identified vaccine-adverse event pairs and complemented by expert committee reports, evidence-based decision support systems (e.g. Micromedex), and summaries of product characteristics. Classification into positive (PC) and negative control (NC) pairs was performed by two independent reviewers according to a pre-defined algorithm and discussed to consensus in cases of disagreement. We selected 13 vaccines and 14 adverse events for inclusion in the reference set. From a total of 182 vaccine-adverse event pairs, we classified 18 as PC, 113 as NC, and 51 as unclassifiable. Most classifications (91) were based on literature review and 45 on expert committee reports; for 46 vaccine-adverse event pairs, an underlying pathomechanism was not plausible, classifying the association as NC. A reference set of vaccine-adverse event pairs was developed. We propose its use for comparing signal detection methods and systems in the pediatric population. Published by Elsevier Ltd.

  10. Astrometry for New Reductions: The ANR method

    NASA Astrophysics Data System (ADS)

    Robert, Vincent; Le Poncin-Lafitte, Christophe

    2018-04-01

    Accurate positional measurements of planets and satellites are used to improve our knowledge of their orbits and dynamics, and to infer the accuracy of the planet and satellite ephemerides. With the arrival of the Gaia-DR1 reference star catalog, and its complete release thereafter, the existing methods for ground-based astrometry have become outdated: their formal accuracy is now poorer than that of the catalog used. Systematic and zonal errors of the reference stars are eliminated, and the astrometric process itself now dominates the error budget. We present a set of algorithms for computing the apparent directions of planets, satellites and stars on any date to micro-arcsecond precision. The expressions are consistent with the ICRS reference system, and define the transformation between theoretical reference data and ground-based astrometric observables.

  11. [The validation of the effect of correcting spectral background changes based on floating reference method by simulation].

    PubMed

    Wang, Zhu-lou; Zhang, Wan-jie; Li, Chen-xi; Chen, Wen-liang; Xu, Ke-xin

    2015-02-01

    Near-infrared non-invasive blood glucose measurement faces several challenges, such as the low signal-to-noise ratio of the instrument, unstable measurement conditions, and unpredictable, irregular changes of the measured object. Therefore, it is difficult to accurately extract blood glucose concentration information from the complicated signals. A reference measurement is usually considered as a way to eliminate the effect of background changes, but there is no reference substance that changes synchronously with the analyte. After many years of research, our research group has proposed the floating reference method, which succeeds in eliminating the spectral effects induced by instrument drift and the measured object's background variations. Our studies indicate, however, that the reference point changes with measurement location and wavelength. Therefore, the effectiveness of the floating reference method should be verified comprehensively. In this paper, for simplicity, Monte Carlo simulations employing Intralipid solutions at concentrations of 5% and 10% are performed to verify the ability of the floating reference method to eliminate the consequences of light source drift. Light source drift is introduced by varying the incident photon number, and the effectiveness of the floating reference method, with corresponding reference points at different wavelengths, in eliminating this drift is estimated. Comparison of the prediction abilities of calibration models with and without the method shows that RMSEP decreases by about 98.57% (5% Intralipid) and 99.36% (10% Intralipid). The results indicate that the floating reference method clearly eliminates background changes.
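
    The figure of merit quoted above, RMSEP (root-mean-square error of prediction), and the percentage reduction derived from it are straightforward to compute. A minimal sketch with hypothetical value arrays, not the authors' simulation code:

    ```python
    import numpy as np

    def rmsep(y_true, y_pred):
        """Root-mean-square error of prediction."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return np.sqrt(np.mean((y_true - y_pred) ** 2))

    def reduction(rmsep_without, rmsep_with):
        """Percentage drop in RMSEP achieved by a correction method."""
        return 100.0 * (rmsep_without - rmsep_with) / rmsep_without

    # e.g. a drop from 1.40 to 0.02 (arbitrary units) is the ~98.6%
    # reduction of the kind reported above
    print(reduction(1.40, 0.02))
    ```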

  12. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    PubMed

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

    Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires a large stock of reference substances to identify chromatographic peaks accurately, but reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of compounds whose reference substances are unavailable. The problem is that RR is difficult to reproduce on different columns because of the error between measured retention time (tR) and predicted tR in some cases. Therefore, a simple alternative method for accurate tR prediction is useful. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) is proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of a compound on an HPLC column can be calculated from standard retention times through a linear relationship. The method was validated on two medicines across 30 columns. The LCTRS method is simple, yet more accurate and more robust across different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories, at a lower cost in reference substances.
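
    The heart of LCTRS, as described, is a linear map anchored at two reference substances. A minimal sketch under that reading (function and variable names are hypothetical): given the standard tR values of the two references and their measured tR values on the current column, the tR of any other compound with a tabulated standard value is predicted by linear interpolation:

    ```python
    def lctrs_predict(t_std, t_std_ref, t_meas_ref):
        """Predict a compound's retention time on the current column.

        t_std      -- tabulated (standard) tR of the target compound
        t_std_ref  -- (tR_a, tR_b): standard tR of the two references
        t_meas_ref -- (tR_a2, tR_b2): their measured tR on this column
        """
        (a_std, b_std), (a_meas, b_meas) = t_std_ref, t_meas_ref
        slope = (b_meas - a_meas) / (b_std - a_std)
        return a_meas + slope * (t_std - a_std)

    # references: 5.0 and 20.0 min in the standard system, measured here
    # at 5.4 and 21.1 min; target compound tabulated at 12.0 min
    print(lctrs_predict(12.0, (5.0, 20.0), (5.4, 21.1)))  # ~12.73 min
    ```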

  13. Assessment of the Validity of the Research Diagnostic Criteria for Temporomandibular Disorders: Overview and Methodology

    PubMed Central

    Schiffman, Eric L.; Truelove, Edmond L.; Ohrbach, Richard; Anderson, Gary C.; John, Mike T.; List, Thomas; Look, John O.

    2011-01-01

    AIMS The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. An overview is presented, including Axis I and II methodology and descriptive statistics for the study participant sample. This paper details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. Validity testing for the Axis II biobehavioral instruments was based on previously validated reference standards. METHODS The Axis I reference standards were based on the consensus of 2 criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion exam reliability was also assessed within study sites. RESULTS Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, k = 0.53). Intrasite criterion exam agreement with reference standards was excellent (k ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (k = 0.71 and 0.84, respectively). CONCLUSION The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods. PMID:20213028

  14. Evaluation of Alternative Altitude Scaling Methods for Thermal Ice Protection System in NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Lee, Sam; Addy, Harold E. Jr.; Broeren, Andy P.; Orchard, David M.

    2017-01-01

    A test was conducted at the NASA Icing Research Tunnel (IRT) to evaluate altitude scaling methods for a thermal ice protection system. Two new scaling methods based on the Weber number were compared against a method based on the Reynolds number. The results generally agreed with the previous set of tests conducted in the NRCC Altitude Icing Wind Tunnel (AIWT), where the three scaling methods were also tested and compared along with reference (altitude) icing conditions. In those tests, the Weber number-based scaling methods yielded results much closer to those observed at the reference icing conditions than the Reynolds number-based scaling did. The test in the NASA IRT used a much larger, asymmetric airfoil with an ice protection system that more closely resembled designs used in commercial aircraft. Following the trends observed during the AIWT tests, the Weber number-based scaling methods resulted in smaller runback ice than the Reynolds number-based scaling, and the ice formed farther upstream. The results show that the new Weber number-based scaling methods, particularly the Weber number with water loading scaling, continue to show promise for ice protection system development and evaluation in atmospheric icing tunnels.
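
    For context, the two similarity parameters contrasted above have simple textbook forms. A minimal sketch assuming the standard definitions (the exact formulations of the test-specific scaling parameters, e.g. the water-loading variant, may differ):

    ```python
    def weber(rho, v, length, sigma):
        """Weber number: ratio of inertial to surface-tension forces."""
        return rho * v ** 2 * length / sigma

    def reynolds(rho, v, length, mu):
        """Reynolds number: ratio of inertial to viscous forces."""
        return rho * v * length / mu

    # illustrative values: water film on a 0.5 m reference length at 70 m/s
    print(weber(1000.0, 70.0, 0.5, 0.0756))    # water surface tension ~0.0756 N/m
    print(reynolds(1.29, 70.0, 0.5, 1.74e-5))  # cold-air density and viscosity
    ```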

  15. Glossary of reference terms for alternative test methods and their validation.

    PubMed

    Ferrario, Daniele; Brustio, Roberta; Hartung, Thomas

    2014-01-01

    This glossary was developed to provide technical references to support work in the field of alternatives to animal testing. It was compiled from various existing reference documents coming from different sources and is meant to be a point of reference on alternatives to animal testing. Given the ever-increasing number of alternative test methods and approaches being developed over the last decades, a combination, revision, and harmonization of earlier published collections of terms used in the validation of such methods is required. The need to update previous glossary efforts came from the acknowledgement that new words have emerged with the development of new approaches, while others have become obsolete, and the meaning of some terms has partially changed over time. With this glossary we intend to provide guidance on issues related to the validation of new or updated testing methods consistent with current approaches. Moreover, because of new developments and technologies, a glossary needs to be a living, constantly updated document. An Internet-based version based on this compilation may be found at http://altweb.jhsph.edu/, allowing the addition of new material.

  16. A computerized procedure for teaching the relationship between graphic symbols and their referents.

    PubMed

    Isaacson, Mick; Lloyd, Lyle L

    2013-01-01

    Many individuals with little or no functional speech communicate through graphic symbols. Communication is enhanced when the relationship between symbols and their referents is learned to such a degree that retrieval is effortless, resulting in fluent communication. Developing fluency is a time-consuming endeavor for special educators and speech-language pathologists (SLPs). It would be beneficial for these professionals to have an automated procedure based on the most efficacious method for teaching the relationship between symbols and referents. Hence, this study investigated whether a procedure based on the generation effect would promote learning of the association between symbols and their referents. Results show that referent generation produces the best long-term retention of this relationship. These findings provide evidence that software based on referent generation would provide special educators and SLPs with an efficacious automated procedure, requiring minimal direct supervision, to facilitate symbol/referent learning and the development of communicative fluency.

  17. Image sharpness assessment based on wavelet energy of edge area

    NASA Astrophysics Data System (ADS)

    Li, Jin; Zhang, Hong; Zhang, Lei; Yang, Yifan; He, Lei; Sun, Mingui

    2018-04-01

    Image quality assessment is needed in multiple image processing areas, and blur is one of the key causes of image deterioration. Although effective full-reference image quality assessment metrics have been proposed in the past few years, no-reference methods remain an area of active research. To address this problem, this paper proposes a no-reference sharpness assessment method based on the wavelet transform that focuses on the edge areas of an image. Based on two simple characteristics of the human visual system, weights are introduced to calculate the weighted log-energy of each wavelet subband. The final score is given by the ratio of high-frequency energy to total energy. The algorithm is tested on multiple databases. Compared with several state-of-the-art metrics, the proposed algorithm performs better with lower runtime.
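
    The score described, a ratio of high-frequency wavelet energy to total energy, can be sketched compactly with PyWavelets. This is a simplified reading (whole-image rather than edge-masked, uniform subband weights, haar basis), not the authors' exact metric:

    ```python
    import numpy as np
    import pywt

    def sharpness_score(image, wavelet="haar", levels=3):
        """Ratio of detail (high-frequency) log-energy to total log-energy."""
        coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=levels)
        approx_energy = np.log1p(np.mean(coeffs[0] ** 2))   # low-pass subband
        detail_energy = sum(                                 # all detail subbands
            np.log1p(np.mean(band ** 2))
            for level in coeffs[1:] for band in level
        )
        return detail_energy / (detail_energy + approx_energy)
    ```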

  18. A Calibration-Free Laser-Induced Breakdown Spectroscopy (CF-LIBS) Quantitative Analysis Method Based on the Auto-Selection of an Internal Reference Line and Optimized Estimation of Plasma Temperature.

    PubMed

    Yang, Jianhong; Li, Xiaomeng; Xu, Jinwu; Ma, Xianghong

    2018-01-01

    The quantitative analysis accuracy of calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is severely affected by the self-absorption effect and by the estimation of the plasma temperature. Herein, a CF-LIBS quantitative analysis method based on the auto-selection of an internal reference line and optimized estimation of the plasma temperature is proposed. The internal reference line of each species is automatically selected from the analytical lines by a programmable procedure using easily accessible parameters. Furthermore, the self-absorption effect of the internal reference line is considered during the correction procedure. To improve the analysis accuracy of CF-LIBS, the particle swarm optimization (PSO) algorithm is introduced to estimate the plasma temperature based on the calculation results from the Boltzmann plot. Thereafter, the species concentrations of a sample can be calculated according to the classical CF-LIBS method. A total of 15 certified alloy steel standard samples of known compositions and elemental weight percentages were used in the experiment. Using the proposed method, the average relative errors of the calculated Cr, Ni, and Fe concentrations were 4.40%, 6.81%, and 2.29%, respectively. The quantitative results demonstrated an improvement over the classical CF-LIBS method and promising potential for in situ and real-time application.
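
    The plasma temperature that the PSO step refines starts from the classical Boltzmann plot, which is a straight-line fit. A minimal sketch of that baseline step (input arrays are hypothetical: line intensities I, wavelengths lam, degeneracies g, transition probabilities A, and upper-level energies E in eV):

    ```python
    import numpy as np

    K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K

    def boltzmann_temperature(I, lam, g, A, E_upper):
        """Fit ln(I*lam/(g*A)) against E_upper; the slope is -1/(kT)."""
        y = np.log(np.asarray(I) * np.asarray(lam)
                   / (np.asarray(g) * np.asarray(A)))
        slope, _intercept = np.polyfit(E_upper, y, 1)
        return -1.0 / (K_B_EV * slope)
    ```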

  19. Reconstruction of metabolic pathways by combining probabilistic graphical model-based and knowledge-based methods

    PubMed Central

    2014-01-01

    Automatic reconstruction of metabolic pathways for an organism from genomics and transcriptomics data has been a challenging and important problem in bioinformatics. Traditionally, known reference pathways are mapped into organism-specific ones based on genome annotation and protein homology. However, this simple knowledge-based mapping method might produce incomplete pathways and generally cannot predict unknown new relations and reactions. In contrast, ab initio metabolic network construction methods can predict novel reactions and interactions, but their accuracy tends to be low, leading to many false positives. Here we combine existing pathway knowledge and a new ab initio Bayesian probabilistic graphical model in a novel fashion to improve automatic reconstruction of metabolic networks. Specifically, we built a knowledge database containing known, individual gene/protein interactions and metabolic reactions extracted from existing reference pathways. Known reactions and interactions were then used as constraints for Bayesian network learning methods to predict metabolic pathways. Using individual reactions and interactions extracted from different pathways of many organisms to guide pathway construction is new and improves both the coverage and accuracy of metabolic pathway construction. We applied this probabilistic knowledge-based approach to construct metabolic networks from yeast gene expression data and compared its results with 62 known metabolic networks in the KEGG database. The experiment showed that the method improved the coverage of metabolic network construction over the traditional reference pathway mapping method and was more accurate than pure ab initio methods. PMID:25374614

  20. Design of two-dimensional zero reference codes with cross-entropy method.

    PubMed

    Chen, Jung-Chieh; Wen, Chao-Kai

    2010-06-20

    We present a cross-entropy (CE)-based method for the design of optimum two-dimensional (2D) zero reference codes (ZRCs) in order to generate a zero reference signal for a grating measurement system and achieve an absolute position, a coordinate origin, or a machine home position. In the absence of diffraction effects, the 2D ZRC design problem is known as the autocorrelation approximation. Based on the properties of the autocorrelation function, the design of the 2D ZRC is first formulated as a particular combinatorial optimization problem. The CE method is then applied to search for an optimal 2D ZRC and thus obtain the desired zero reference signal. Computer simulation results indicate that there are 15.38% and 14.29% reductions in the second maximum value for the 16×16 grating system with n1 = 64 and the 100×100 grating system with n1 = 300, respectively, where n1 is the number of transparent pixels, compared with the conventional genetic algorithm.
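
    The quantity the CE search minimizes, the second maximum of the code's 2D autocorrelation, is cheap to evaluate for a candidate ZRC and can serve as the cost function. A minimal sketch (binary 0/1 array, linear autocorrelation via zero-padded FFT; not the authors' implementation):

    ```python
    import numpy as np

    def second_maximum(code):
        """Largest off-origin value of the 2D autocorrelation of a 0/1 code."""
        code = np.asarray(code, dtype=float)
        n0, n1 = code.shape
        padded = np.zeros((2 * n0 - 1, 2 * n1 - 1))
        padded[:n0, :n1] = code
        spec = np.fft.fft2(padded)
        acf = np.fft.fftshift(np.fft.ifft2(spec * np.conj(spec)).real)
        center = (acf.shape[0] // 2, acf.shape[1] // 2)
        acf[center] = -np.inf  # mask the main (zero-shift) peak
        return acf.max()
    ```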

  1. Technical Note: Modification of the standard gain correction algorithm to compensate for the number of used reference flat frames in detector performance studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konstantinidis, Anastasios C.; Olivo, Alessandro; Speller, Robert D.

    2011-12-15

    Purpose: The x-ray performance evaluation of digital x-ray detectors is based on the calculation of the modulation transfer function (MTF), the noise power spectrum (NPS), and the resultant detective quantum efficiency (DQE). The flat images used for the extraction of the NPS should not contain any fixed pattern noise (FPN) to avoid contamination from nonstochastic processes. The "gold standard" method used for the reduction of the FPN (i.e., gain differences between pixels) in linear x-ray detectors is based on normalization with an average reference flat-field. However, the noise in the corrected image depends on the number of flat frames used for the average flat image. The aim of this study is to modify the standard gain correction algorithm to make it independent of the number of reference flat frames used. Methods: Many publications suggest the use of 10-16 reference flat frames, while other studies use higher numbers (e.g., 48 frames) to reduce the noise propagated from the average flat image. This study quantifies experimentally the effect of the number of reference flat frames used on the NPS and DQE values and appropriately modifies the gain correction algorithm to compensate for this effect. Results: It is shown that, using the suggested gain correction algorithm, a minimal number of reference flat frames (down to a single frame) can be used to eliminate the FPN from the raw flat image. This saves computer memory and time during x-ray performance evaluation. Conclusions: The authors show that the method presented in the study (a) leads to the maximum DQE value that one would obtain by using the conventional method with a very large number of frames, and (b) has been compared to an independent gain correction method based on the subtraction of flat-field images, yielding identical DQE values. They believe this provides robust validation of the proposed method.
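
    The conventional step this note modifies is plain flat-field (gain) normalization. A minimal sketch of that standard correction only; the compensation that removes the dependence on the number of flat frames is specific to the paper and not reproduced here:

    ```python
    import numpy as np

    def gain_correct(raw, flats):
        """Standard gain correction with an average reference flat.

        raw   -- 2D raw image to correct
        flats -- stack of reference flat frames, shape (k, rows, cols);
                 the noise propagated into the result shrinks as k grows,
                 which is the dependence the modified algorithm removes.
        """
        flat_avg = np.mean(flats, axis=0)
        return raw * np.mean(flat_avg) / flat_avg
    ```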

  2. Laser Lights or Dim Bulbs? Evaluating Reference Librarians' Use of Electronic Sources.

    ERIC Educational Resources Information Center

    Welch, Jeanie M.

    1999-01-01

    Discusses the evaluation of academic library reference librarians' effectiveness in providing services to patrons using electronic sources based on experiences at the University of North Carolina at Charlotte. Topics include core technical competencies for subject specialists and reference desk service; the Internet; and methods of evaluation.…

  3. Quantitative estimation of α-PVP metabolites in urine by GC-APCI-QTOFMS with nitrogen chemiluminescence detection based on parent drug calibration.

    PubMed

    Mesihää, Samuel; Rasanen, Ilpo; Ojanperä, Ilkka

    2018-05-01

    Gas chromatography (GC) hyphenated with nitrogen chemiluminescence detection (NCD) and quadrupole time-of-flight mass spectrometry (QTOFMS) was applied for the first time to the quantitative analysis of new psychoactive substances (NPS) in urine, based on the N-equimolar response of NCD. A method was developed and validated to estimate the concentrations of three metabolites of the common stimulant NPS α-pyrrolidinovalerophenone (α-PVP) in spiked urine samples, simulating an analysis having no authentic reference standards for the metabolites and using the parent drug instead for quantitative calibration. The metabolites studied were OH-α-PVP (M1), 2″-oxo-α-PVP (M3), and N,N-bis-dealkyl-PVP (2-amino-1-phenylpentan-1-one; M5). Sample preparation involved liquid-liquid extraction with a mixture of ethyl acetate and butyl chloride at a basic pH and subsequent silylation of the sec-hydroxyl and prim-amino groups of M1 and M5, respectively. Simultaneous compound identification was based on the accurate masses of the protonated molecules for each compound by QTOFMS following atmospheric pressure chemical ionization. The accuracy of quantification of the parent-calibrated NCD method was compared with that of the corresponding parent-calibrated QTOFMS method, as well as with a reference QTOFMS method calibrated with the authentic reference standards. The NCD method produced accuracy equal to that of the reference method for α-PVP, M3 and M5, while a higher negative bias (25%) was obtained for M1, best explained by recovery and stability issues. The performance of the parent-calibrated QTOFMS method was inferior to the reference method, with an especially high negative bias (60%) for M1. The NCD method enabled better quantitative precision than the QTOFMS methods. To evaluate the novel approach in casework, twenty post-mortem urine samples previously found positive for α-PVP were analyzed by the parent-calibrated NCD method and the reference QTOFMS method. The highest difference in the quantitative results between the two methods was only 33%, and the NCD method's precision as the coefficient of variation was better than 13%. The limit of quantification for the NCD method was approximately 0.25 μg/mL in urine, which generally allowed the analysis of α-PVP and the main metabolite M1. However, the sensitivity was not sufficient for the low concentrations of M3 and M5. Consequently, while having potential for instant analysis of NPS and metabolites in moderate concentrations without reference standards, the NCD method should be further developed for improved sensitivity to be more generally applicable. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. X-ray Moiré deflectometry using synthetic reference images

    DOE PAGES

    Stutman, Dan; Valdivia, Maria Pia; Finkenthal, Michael

    2015-06-25

    Moiré fringe deflectometry with grating interferometers is a technique that enables refraction-based x-ray imaging using a single exposure of an object. To obtain the refraction image, the method requires a reference fringe pattern (without the object). Our study shows that, in order to avoid artifacts, the reference pattern must be exactly matched in phase with the object fringe pattern. In experiments, however, it is difficult to produce a perfectly matched reference pattern due to unavoidable interferometer drifts. We present a simple method to obtain matched reference patterns using a phase-scan procedure to generate synthetic Moiré images. As a result, the method will enable deflectometric diagnostics of transient phenomena such as laser-produced plasmas and could improve the sensitivity and accuracy of medical phase-contrast imaging.

  5. Reference test methods for total water in lint cotton by Karl Fischer Titration and low temperature distillation

    USDA-ARS?s Scientific Manuscript database

    In a study of comparability of total water contents (%) of conditioned cottons by Karl Fischer Titration (KFT) and Low Temperature Distillation (LTD) reference methods, we demonstrated a match of averaged results based on a large number of replications and weighing the test specimens at the same tim...

  6. Performance of the Proposed New Federal Reference Methods for Measuring Ozone Concentrations in Ambient Air

    EPA Science Inventory

    The current Federal Reference Method (FRM) for measuring concentrations of ozone in ambient air, described in EPA regulations at 40 CFR Part 50, Appendix D, is based on the dry, gas-phase, chemiluminescence reaction between ethylene (C2H4) and any ozone (O3) in the sampled air.

  7. Genotype Imputation with Millions of Reference Samples

    PubMed Central

    Browning, Brian L.; Browning, Sharon R.

    2016-01-01

    We present a genotype imputation method that scales to millions of reference samples. The imputation method, based on the Li and Stephens model and implemented in Beagle v.4.1, is parallelized and memory efficient, making it well suited to multi-core computer processors. It achieves fast, accurate, and memory-efficient genotype imputation by restricting the probability model to markers that are genotyped in the target samples and by performing linear interpolation to impute ungenotyped variants. We compare Beagle v.4.1 with Impute2 and Minimac3 by using 1000 Genomes Project data, UK10K Project data, and simulated data. All three methods have similar accuracy but different memory requirements and different computation times. When imputing 10 Mb of sequence data from 50,000 reference samples, Beagle’s throughput was more than 100× greater than Impute2’s throughput on our computer servers. When imputing 10 Mb of sequence data from 200,000 reference samples in VCF format, Minimac3 consumed 26× more memory per computational thread and 15× more CPU time than Beagle. We demonstrate that Beagle v.4.1 scales to much larger reference panels by performing imputation from a simulated reference panel having 5 million samples and a mean marker density of one marker per four base pairs. PMID:26748515
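
    The linear-interpolation step mentioned can be illustrated in isolation. A toy sketch (hypothetical positions and dosages, not Beagle code): dosages are computed only at markers genotyped in the target, and ungenotyped reference-panel variants are filled in by interpolating between the flanking genotyped markers:

    ```python
    import numpy as np

    # positions (bp) of markers genotyped in the target, and dosages there
    geno_pos = np.array([1_000, 5_000, 9_000, 20_000])
    geno_dose = np.array([0.1, 1.2, 1.9, 0.4])

    # ungenotyped reference-panel variants to impute
    ungeno_pos = np.array([2_500, 7_000, 15_000])

    # dosage at each ungenotyped site by linear interpolation
    print(np.interp(ungeno_pos, geno_pos, geno_dose))
    ```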

  8. Reference Intervals of Hematology and Clinical Chemistry Analytes for 1-Year-Old Korean Children.

    PubMed

    Lee, Hye Ryun; Shin, Sue; Yoon, Jong Hyun; Roh, Eun Youn; Chang, Ju Young

    2016-09-01

    Reference intervals need to be established according to age. We established reference intervals of hematology and chemistry from community-based healthy 1-yr-old children and analyzed their iron status according to the feeding methods during the first six months after birth. A total of 887 children who received a medical check-up between 2010 and 2014 at Boramae Hospital (Seoul, Korea) were enrolled. A total of 534 children (247 boys and 287 girls) were enrolled as reference individuals after the exclusion of data obtained from children with suspected iron deficiency. Hematology and clinical chemistry analytes were measured, and the reference value of each analyte was estimated by using parametric (mean±2 SD) or nonparametric methods (2.5-97.5th percentile). Iron, total iron-binding capacity, and ferritin were measured, and transferrin saturation was calculated. As there were no differences in the mean values between boys and girls, we established the reference intervals for 1-yr-old children regardless of sex. The analysis of serum iron status according to feeding methods during the first six months revealed higher iron, ferritin, and transferrin saturation levels in children exclusively or mainly fed formula than in children exclusively or mainly fed breast milk. We established reference intervals of hematology and clinical chemistry analytes from community-based healthy children at one year of age. These reference intervals will be useful for interpreting results of medical check-ups at one year of age.
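
    The two estimation routes named (parametric mean ± 2 SD and the nonparametric 2.5th-97.5th percentiles) are both one-liners. A minimal sketch over a hypothetical vector of reference values for one analyte:

    ```python
    import numpy as np

    def reference_interval(values, parametric=True):
        """Return (lower, upper) reference limits for one analyte."""
        values = np.asarray(values, dtype=float)
        if parametric:  # mean +/- 2 SD
            m, s = values.mean(), values.std(ddof=1)
            return m - 2 * s, m + 2 * s
        return tuple(np.percentile(values, [2.5, 97.5]))  # nonparametric
    ```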

  9. A Comparative Study of Different EEG Reference Choices for Diagnosing Unipolar Depression.

    PubMed

    Mumtaz, Wajid; Malik, Aamir Saeed

    2018-06-02

    The choice of an electroencephalogram (EEG) reference has fundamental importance and could be critical during clinical decision-making, because an impure EEG reference could falsify the clinical measurements and subsequent inferences. In this research, the suitability of three EEG references was compared while classifying depressed and healthy brains using a machine-learning (ML)-based validation method. To this end, the EEG data of 30 unipolar depressed subjects and 30 age-matched healthy controls were recorded. The EEG data were analyzed under three different references: the link-ear reference (LE), average reference (AR), and reference electrode standardization technique (REST). The EEG-based functional connectivity (FC) was computed. Also, graph-based measures, such as the distances between nodes, minimum spanning tree, and maximum flow between the nodes for each channel pair, were calculated. An ML scheme provided a mechanism to compare the performance of the extracted features within a general framework involving feature extraction (graph-theoretic measures), feature selection, classification, and validation. For comparison purposes, performance metrics such as classification accuracies, sensitivities, specificities, and F scores were computed. When comparing the three references, diagnostic accuracy was better with the REST, while the LE and AR showed less discrimination between the two groups. Based on the results, it can be concluded that the choice of an appropriate reference is critical in clinical scenarios. The REST reference is recommended for future applications of EEG-based diagnosis of mental illnesses.
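
    Of the three references compared, the average reference is the simplest to state in code: each channel is re-expressed relative to the instantaneous mean of all channels. A minimal sketch assuming a channels-by-samples array (LE and REST re-referencing need electrode-specific information and are not shown):

    ```python
    import numpy as np

    def average_reference(eeg):
        """Re-reference an EEG array of shape (n_channels, n_samples)."""
        return eeg - eeg.mean(axis=0, keepdims=True)
    ```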

  10. Improved motion correction in PROPELLER by using grouped blades as reference.

    PubMed

    Liu, Zhe; Zhang, Zhe; Ying, Kui; Yuan, Chun; Guo, Hua

    2014-03-01

    To develop a robust reference generation method for improving PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) reconstruction. A new reference generation method, grouped-blade reference (GBR), is proposed for calculating rotation angle and translation shift in PROPELLER. Instead of using a single-blade reference (SBR) or combined-blade reference (CBR), our method classifies blades by their relative correlations and groups similar blades together as the reference, to prevent inconsistent data from interfering with the correction process. Numerical simulations and in vivo experiments were used to evaluate the performance of GBR for PROPELLER, which was further compared with SBR and CBR in terms of error level and computation cost. Both simulation and in vivo experiments demonstrate that GBR-based PROPELLER provides better correction for random motion or bipolar motion compared with SBR or CBR. It not only produces images with a lower error level but also needs fewer iteration steps to converge. A grouped-blade reference selection method was investigated for PROPELLER MRI. It helps to improve the accuracy and robustness of motion correction for various motion patterns. Copyright © 2013 Wiley Periodicals, Inc.

  11. A Profilometry-Based Dentifrice Abrasion Method for V8 Brushing Machines Part II: Comparison of RDA-PE and Radiotracer RDA Measures.

    PubMed

    Schneiderman, Eva; Colón, Ellen; White, Donald J; St John, Samuel

    2015-01-01

    The purpose of this study was to compare the abrasivity of commercial dentifrices by two techniques: the conventional gold standard radiotracer-based Radioactive Dentin Abrasivity (RDA) method; and a newly validated technique based on V8 brushing that included a profilometry-based evaluation of dentin wear. This profilometry-based method is referred to as RDA-Profilometry Equivalent, or RDA-PE. A total of 36 dentifrices were sourced from four global dentifrice markets (Asia Pacific [including China], Europe, Latin America, and North America) and tested blindly using both the standard radiotracer (RDA) method and the new profilometry method (RDA-PE), taking care to follow specific details related to specimen preparation and treatment. Commercial dentifrices tested exhibited a wide range of abrasivity, with virtually all falling well under the industry accepted upper limit of 250; that is, 2.5 times the level of abrasion measured using an ISO 11609 abrasivity reference calcium pyrophosphate as the reference control. RDA and RDA-PE comparisons were linear across the entire range of abrasivity (r2 = 0.7102) and both measures exhibited similar reproducibility with replicate assessments. RDA-PE assessments were not just linearly correlated, but were also proportional to conventional RDA measures. The linearity and proportionality of the results of the current study support that both methods (RDA or RDA-PE) provide similar results and justify a rationale for making the upper abrasivity limit of 250 apply to both RDA and RDA-PE.

  12. A global multicenter study on reference values: 1. Assessment of methods for derivation and comparison of reference intervals.

    PubMed

    Ichihara, Kiyoshi; Ozarda, Yesim; Barth, Julian H; Klee, George; Qiu, Ling; Erasmus, Rajiv; Borai, Anwar; Evgina, Svetlana; Ashavaid, Tester; Khan, Dilshad; Schreier, Laura; Rolle, Reynan; Shimizu, Yoshihisa; Kimura, Shogo; Kawano, Reo; Armbruster, David; Mori, Kazuo; Yadav, Binod K

    2017-04-01

    The IFCC Committee on Reference Intervals and Decision Limits coordinated a global multicenter study on reference values (RVs) to explore rational and harmonizable procedures for derivation of reference intervals (RIs) and investigate the feasibility of sharing RIs through evaluation of sources of variation of RVs on a global scale. For the common protocol, rather lenient criteria for reference individuals were adopted to facilitate harmonized recruitment with planned use of the latent abnormal values exclusion (LAVE) method. As of July 2015, 12 countries had completed their study with total recruitment of 13,386 healthy adults. 25 analytes were measured chemically and 25 immunologically. A serum panel with assigned values was measured by all laboratories. RIs were derived by parametric and nonparametric methods. The effect of LAVE methods is prominent in analytes which reflect nutritional status, inflammation and muscular exertion, indicating that inappropriate results are frequent in any country. The validity of the parametric method was confirmed by the presence of analyte-specific distribution patterns and successful Gaussian transformation using the modified Box-Cox formula in all countries. After successful alignment of RVs based on the panel test results, nearly half the analytes showed variable degrees of between-country differences. This finding, however, requires confirmation after adjusting for BMI and other sources of variation. The results are reported in the second part of this paper. The collaborative study enabled us to evaluate rational methods for deriving RIs and comparing the RVs based on real-world datasets obtained in a harmonized manner. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  13. Dynamic updating atlas for heart segmentation with a nonlinear field-based model.

    PubMed

    Cai, Ken; Yang, Rongqian; Yue, Hongwei; Li, Lihua; Ou, Shanxing; Liu, Feng

    2017-09-01

    Segmentation of cardiac computed tomography (CT) images is an effective method for assessing the dynamic function of the heart and lungs. In the atlas-based heart segmentation approach, the quality of segmentation usually relies upon the atlas images, and the selection of those reference images is a key step. The goal in this selection process is to have the reference images as close to the target image as possible. This study proposes a dynamic atlas-update algorithm using a nonlinear deformation field scheme. The proposed method is based on the features among double-source CT (DSCT) slices. The extracted features form the basis for constructing an average model, and the reference atlas image created is updated during the registration process. A nonlinear field-based model was used to effectively implement a 4D cardiac segmentation. The proposed segmentation framework was validated with 14 4D cardiac CT sequences. The algorithm achieved an acceptable accuracy (1.0-2.8 mm). Our proposed method, which combines a nonlinear field-based model with dynamic atlas-updating strategies, can provide an effective and accurate way to perform whole-heart segmentation. The success of the proposed method largely relies on the effective use of the prior knowledge of the atlas and the similarity explored among the to-be-segmented DSCT sequences. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Development of an evidence-based approach to external quality assurance for breast cancer hormone receptor immunohistochemistry: comparison of reference values.

    PubMed

    Makretsov, Nikita; Gilks, C Blake; Alaghehbandan, Reza; Garratt, John; Quenneville, Louise; Mercer, Joel; Palavdzic, Dragana; Torlakovic, Emina E

    2011-07-01

    External quality assurance and proficiency testing programs for breast cancer predictive biomarkers are based largely on traditional ad hoc design; at present there is no universal consensus on definition of a standard reference value for samples used in external quality assurance programs. To explore reference values for estrogen receptor and progesterone receptor immunohistochemistry in order to develop an evidence-based analytic platform for external quality assurance. There were 31 participating laboratories, 4 of which were previously designated as "expert" laboratories. Each participant tested a tissue microarray slide with 44 breast carcinomas for estrogen receptor and progesterone receptor and submitted it to the Canadian Immunohistochemistry Quality Control Program for analysis. Nuclear staining in 1% or more of the tumor cells was a positive score. Five methods for determining reference values were compared. All reference values showed 100% agreement for estrogen receptor and progesterone receptor scores, when indeterminate results were excluded. Individual laboratory performance (agreement rates, test sensitivity, test specificity, positive predictive value, negative predictive value, and κ value) was very similar for all reference values. Identification of suboptimal performance by all methods was identical for 30 of 31 laboratories. Estrogen receptor assessment of 1 laboratory was discordant: agreement was less than 90% for 3 of 5 reference values and greater than 90% with the use of 2 other reference values. Various reference values provide equivalent laboratory rating. In addition to descriptive feedback, our approach allows calculation of technical test sensitivity and specificity, positive and negative predictive values, agreement rates, and κ values to guide corrective actions.

  15. Correlative multiple porosimetries for reservoir sandstones with adoption of a new reference-sample-guided computed-tomographic method.

    PubMed

    Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min

    2016-07-22

    One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including methods of mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods are different in theoretical bases, means of measurement, and causes of measurement errors. Here, we present a set of porosities measured in Berea Sandstone samples by the multiple methods, in particular with adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlativeness among different methods, suggesting that these methods are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when the accompanied merits such as experimental conveniences are taken into account.

  16. Correlative multiple porosimetries for reservoir sandstones with adoption of a new reference-sample-guided computed-tomographic method

    PubMed Central

    Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min

    2016-01-01

    One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including methods of mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods are different in theoretical bases, means of measurement, and causes of measurement errors. Here, we present a set of porosities measured in Berea Sandstone samples by the multiple methods, in particular with adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlativeness among different methods, suggesting that these methods are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when the accompanied merits such as experimental conveniences are taken into account. PMID:27445105

  17. Mathematical Practice in Textbooks Analysis: Praxeological Reference Models, the Case of Proportion

    ERIC Educational Resources Information Center

    Wijayanti, Dyana; Winsløw, Carl

    2017-01-01

    We present a new method in textbook analysis, based on so-called praxeological reference models focused on specific content at task level. This method implies that the mathematical contents of a textbook (or textbook part) is analyzed in terms of the tasks and techniques which are exposed to or demanded from readers; this can then be interpreted…

  18. Indirect methods for reference interval determination - review and recommendations.

    PubMed

    Jones, Graham R D; Haeckel, Rainer; Loh, Tze Ping; Sikaris, Ken; Streichert, Thomas; Katayev, Alex; Barth, Julian H; Ozarda, Yesim

    2018-04-19

    Reference intervals are a vital part of the information supplied by clinical laboratories to support interpretation of numerical pathology results such as are produced in clinical chemistry and hematology laboratories. The traditional method for establishing reference intervals, known as the direct approach, is based on collecting samples from members of a preselected reference population, making the measurements and then determining the intervals. An alternative approach is to perform analysis of results generated as part of routine pathology testing and using appropriate statistical techniques to determine reference intervals. This is known as the indirect approach. This paper from a working group of the International Federation of Clinical Chemistry (IFCC) Committee on Reference Intervals and Decision Limits (C-RIDL) aims to summarize current thinking on indirect approaches to reference intervals. The indirect approach has some major potential advantages compared with direct methods. The processes are faster, cheaper and do not involve patient inconvenience, discomfort or the risks associated with generating new patient health information. Indirect methods also use the same preanalytical and analytical techniques used for patient management and can provide very large numbers for assessment. Limitations to the indirect methods include possible effects of diseased subpopulations on the derived interval. The IFCC C-RIDL aims to encourage the use of indirect methods to establish and verify reference intervals, to promote publication of such intervals with clear explanation of the process used and also to support the development of improved statistical techniques for these studies.

  19. Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Zhao, A. H.

    2014-12-01

    Hypocentral loci are very useful for reliable, visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method for calculating them numerically is based on a minimum traveltime tree algorithm for ray tracing: a focal locus is represented in terms of ray paths in its residual field from the minimum point (the initial point) to low-residual points (referred to as reference points of the focal locus). The method places no restrictions on the complexity of the velocity model but lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set the calculation parameters needed to obtain loci of satisfying completeness and fineness. In this study, we improve the ray-tracing-based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it goes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all locus segments are then calculated with the minimum traveltime tree ray-tracing algorithm by repeatedly taking as the initial point the minimum-residual reference point among those not yet traced. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method can efficiently calculate complete and fine hypocentral loci of earthquakes in a complex model.

  20. The effects of user factors and symbol referents on public symbol design using the stereotype production method.

    PubMed

    Ng, Annie W Y; Siu, Kin Wai Michael; Chan, Chetwyn C H

    2012-01-01

    This study investigated the influence of user factors and symbol referents on public symbol design among older people, using the stereotype production method for collecting user ideas during the symbol design process. Thirty-one older adults were asked to draw images based on 28 public symbol referents and to indicate their familiarity with and ease with which they visualised each referent. Differences were found between the pictorial solutions generated by males and females. However, symbol design was not influenced by participants' education level, vividness of visual imagery, object imagery preference or spatial imagery preference. Both familiar and unfamiliar referents were illustrated pictorially without much difficulty by users. The more visual the referent, the less difficulty the users had in illustrating it. The findings of this study should aid the optimisation of the stereotype production method for user-involved symbol design. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  1. Assessing noninferiority in a three-arm trial using the Bayesian approach.

    PubMed

    Ghosh, Pulak; Nathoo, Farouk; Gönen, Mithat; Tiwari, Ram C

    2011-07-10

    Non-inferiority trials, which aim to demonstrate that a test product is not worse than a competitor by more than a pre-specified small amount, are of great importance to the pharmaceutical community. As a result, methodology for designing and analyzing such trials is required, and developing new methods for such analysis is an important area of statistical research. The three-arm trial consists of a placebo, a reference and an experimental treatment, and simultaneously tests the superiority of the reference over the placebo along with comparing this reference to an experimental treatment. In this paper, we consider the analysis of non-inferiority trials using Bayesian methods which incorporate both parametric as well as semi-parametric models. The resulting testing approach is both flexible and robust. The benefit of the proposed Bayesian methods is assessed via simulation, based on a study examining home-based blood pressure interventions. Copyright © 2011 John Wiley & Sons, Ltd.
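
    The decision logic of such a three-arm comparison can be sketched with Monte Carlo draws from the posterior. A toy normal-model sketch, not the semi-parametric machinery of the paper: noninferiority is supported when the posterior probability that the experimental arm loses no more than a margin delta to the reference is high, alongside superiority of the reference over placebo:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def posterior_draws(data, n_draws=20_000):
        """Posterior draws of a mean (normal likelihood, vague prior)."""
        m, s, n = np.mean(data), np.std(data, ddof=1), len(data)
        return rng.standard_t(n - 1, n_draws) * s / np.sqrt(n) + m

    exp_arm = rng.normal(1.9, 1.0, 60)  # experimental treatment (toy data)
    ref_arm = rng.normal(2.0, 1.0, 60)  # reference treatment
    pbo_arm = rng.normal(0.0, 1.0, 60)  # placebo

    mu_e, mu_r, mu_p = map(posterior_draws, (exp_arm, ref_arm, pbo_arm))
    delta = 0.5  # noninferiority margin

    print(np.mean(mu_r > mu_p))           # P(reference beats placebo)
    print(np.mean(mu_e - mu_r > -delta))  # P(experimental is noninferior)
    ```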

  2. Liquid chromatography with absorbance detection and with isotope-dilution mass spectrometry for determination of isoflavones in soy standard reference materials.

    PubMed

    Phillips, Melissa M; Bedner, Mary; Reitz, Manuela; Burdette, Carolyn Q; Nelson, Michael A; Yen, James H; Sander, Lane C; Rimmer, Catherine A

    2017-02-01

    Two independent analytical approaches, based on liquid chromatography with absorbance detection and liquid chromatography with mass spectrometric detection, have been developed for determination of isoflavones in soy materials. These two methods yield comparable results for a variety of soy-based foods and dietary supplements. Four Standard Reference Materials (SRMs) have been produced by the National Institute of Standards and Technology to assist the food and dietary supplement community in method validation and have been assigned values for isoflavone content using both methods. These SRMs include SRM 3234 Soy Flour, SRM 3236 Soy Protein Isolate, SRM 3237 Soy Protein Concentrate, and SRM 3238 Soy-Containing Solid Oral Dosage Form. A fifth material, SRM 3235 Soy Milk, was evaluated using the methods and found to be inhomogeneous for isoflavones and unsuitable for value assignment. Graphical Abstract Separation of six isoflavone aglycones and glycosides found in Standard Reference Material (SRM) 3236 Soy Protein Isolate.

  3. Multicomponent quantitative spectroscopic analysis without reference substances based on ICA modelling.

    PubMed

    Monakhova, Yulia B; Mushtakova, Svetlana P

    2017-05-01

    A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and the ICA resolution results from spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries, generally between 95% and 105%, were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on the analysis of vitamins and caffeine in energy drinks and of aromatic hydrocarbons in motor fuel, with errors of about 10%. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence or inaccessibility of reference materials.
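
    The chemometric core described, resolving overlapped spectra into independent components and relating the mixing coefficients to concentrations, can be sketched with scikit-learn's FastICA. A toy sketch under those assumptions (synthetic data, names hypothetical; not the authors' exact workflow):

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # rows = measured mixture spectra of a calibration set, columns = wavelengths
    rng = np.random.default_rng(2)
    mixtures = np.abs(rng.normal(size=(10, 200)))

    ica = FastICA(n_components=3, random_state=0, max_iter=1000)
    sources = ica.fit_transform(mixtures.T).T  # resolved component "spectra"
    mixing = ica.mixing_                       # per-sample contribution weights

    # With known calibration concentrations, a linear fit maps each sample's
    # mixing weights to absolute concentrations; the same map is then applied
    # to the weights of unknown samples, with no reference solutions needed.
    print(sources.shape, mixing.shape)
    ```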

  4. Assessing the distinguishable cluster approximation based on the triple bond-breaking in the nitrogen molecule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rishi, Varun; Perera, Ajith; Bartlett, Rodney J., E-mail: bartlett@qtp.ufl.edu

    2016-03-28

    Obtaining the correct potential energy curves for the dissociation of multiple bonds is a challenging problem for ab initio methods which are affected by the choice of a spin-restricted reference function. Coupled cluster (CC) methods such as CCSD (coupled cluster singles and doubles model) and CCSD(T) (CCSD + perturbative triples) correctly predict the geometry and properties at equilibrium but the process of bond dissociation, particularly when more than one bond is simultaneously broken, is much more complicated. New modifications of CC theory suggest that the deleterious role of the reference function can be diminished, provided a particular subset of terms is retained in the CC equations. The Distinguishable Cluster (DC) approach of Kats and Manby [J. Chem. Phys. 139, 021102 (2013)] seemingly overcomes the deficiencies for some bond-dissociation problems and might be of use in quasi-degenerate situations in general. DC, along with other approximate coupled cluster methods such as ACCD (approximate coupled cluster doubles), ACP-D45, ACP-D14, 2CC, and pCCSD(α, β) (all defined in text), falls under a category of methods that are basically obtained by the deletion of some quadratic terms in the double excitation amplitude equation for CCD/CCSD (coupled cluster doubles model/coupled cluster singles and doubles model). Here these approximate methods, particularly those based on the DC approach, are studied in detail for the nitrogen molecule bond-breaking. The N2 problem is further addressed with conventional single reference methods but based on spatial symmetry-broken restricted Hartree-Fock (HF) solutions to assess the use of these references for correlated calculations in the situation where CC methods using fully symmetry adapted SCF solutions fail. The distinguishable cluster method is generalized: 1) to different orbitals for different spins (unrestricted HF based DCD and DCSD), 2) by adding triples correction perturbatively (DCSD(T)) and iteratively (DCSDT-n), and 3) via an excited state approximation through the equation of motion (EOM) approach (EOM-DCD, EOM-DCSD). The EOM-CC method is used to identify lower-energy CC solutions to overcome singularities in the CC potential energy curves. It is also shown that UHF based CC and DC methods behave very similarly in bond-breaking of N2, and that using spatially broken but spin preserving SCF references makes the CCSD solutions better than those for DCSD.

  5. Joint Transform Correlation for face tracking: elderly fall detection application

    NASA Astrophysics Data System (ADS)

    Katz, Philippe; Aron, Michael; Alfalou, Ayman

    2013-03-01

    In this paper, an iterative tracking algorithm based on a non-linear JTC (Joint Transform Correlator) architecture and enhanced by a digital image processing method is proposed and validated. This algorithm is based on the computation of a correlation plane in which the reference image is updated at each frame. For that purpose, we use the JTC technique in real time to track a patient (target image) in a room fitted with a video camera. The correlation plane is used to localize the target image in the current video frame (frame i). Then, the reference image to be exploited in the next frame (frame i+1) is updated according to the previous one (frame i). To validate our algorithm, our work is divided into two parts: (i) a large study based on different sequences with several situations and different JTC parameters (decimation, non-linearity coefficient, size of the correlation plane, size of the region of interest, etc.) is carried out in order to quantify their effects on tracking performance; (ii) the tracking algorithm is integrated into an elderly fall detection application. The first reference image is a face detected by means of Haar descriptors, then localized in each new video frame by our tracking method. To avoid a bad update of the reference image, a method based on comparing image intensity histograms is proposed and integrated into our algorithm. This step ensures robust tracking of the reference. This article focuses on optimisation and evaluation of the face-tracking step. A supplementary fall detection step, based on vertical acceleration and position, will be added and studied in future work.
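
    The correlation-plane computation at the heart of a JTC is easy to emulate digitally. A simplified sketch (k-th-law non-linearity on the joint power spectrum; the padding gap and non-linearity coefficient are illustrative, not the paper's exact architecture):

    ```python
    import numpy as np

    def jtc_correlation_plane(reference, target, k=0.5):
        """Non-linear joint transform correlation of two same-size images."""
        rows, cols = reference.shape
        gap = 8  # separation between the two images in the input plane
        plane = np.zeros((rows, 2 * cols + gap))
        plane[:, :cols] = reference            # reference on the left
        plane[:, cols + gap:] = target         # target on the right
        jps = np.abs(np.fft.fft2(plane)) ** 2  # joint power spectrum
        corr = np.abs(np.fft.fft2(jps ** k))   # k-th-law non-linearity, then FT
        return np.fft.fftshift(corr)           # correlation peaks off-center
    ```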

  6. A Rapid Identification Method for Calamine Using Near-Infrared Spectroscopy Based on Multi-Reference Correlation Coefficient Method and Back Propagation Artificial Neural Network.

    PubMed

    Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli

    2017-07-01

    As a mineral, the traditional Chinese medicine calamine has a similar shape to many other minerals, and investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated; given the large number of calamine samples, a rapid identification method is therefore needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples, including crude products, counterfeits and processed products, were collected and correctly identified using physicochemical and powder X-ray diffraction methods. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on the NIR and MRCC methods was 85%; in addition, the model, which takes multiple factors into consideration, can be used to identify crude calamine products, counterfeits, and processed products. Furthermore, by inputting the correlation coefficients against multiple references as the spectral feature data of the samples into the BP-ANN, a qualitative identification model was established whose accuracy rate increased to 95%. The MRCC method can be used as an NIR-based method in the process of BP-ANN modeling.

  7. Statistical considerations for harmonization of the global multicenter study on reference values.

    PubMed

    Ichihara, Kiyoshi

    2014-05-15

    The global multicenter study on reference values coordinated by the Committee on Reference Intervals and Decision Limits (C-RIDL) of the IFCC was launched in December 2011, targeting 45 commonly tested analytes with the following objectives: 1) to derive reference intervals (RIs) country by country using a common protocol, and 2) to explore regionality/ethnicity of reference values by aligning test results among the countries. To achieve these objectives, it is crucial to harmonize 1) the protocol for recruitment and sampling, 2) the statistical procedures for deriving the RIs, and 3) the test results, through common measurement of a panel of sera. For harmonized recruitment, very lenient inclusion/exclusion criteria were adopted in view of differences in how healthiness is interpreted across cultures and investigators. This policy may require secondary exclusion of individuals according to the standard of each country at the time of deriving RIs. An iterative optimization procedure, called the latent abnormal values exclusion (LAVE) method, can be applied to automate the process of refining the choice of reference individuals. For global comparison of reference values, test results must be harmonized based on the among-country, pair-wise linear relationships of test values for the panel. Traceability of reference values can be ensured through values assigned indirectly to the panel by collaborative measurement of certified reference materials. The validity of the adopted strategies is discussed in this article based on interim results obtained to date from five countries. Special consideration is given to the dissociation of RIs derived by parametric and nonparametric methods and to between-country differences in the effect of body mass index on reference values. Copyright © 2014 Elsevier B.V. All rights reserved.
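
    The LAVE idea lends itself to a compact sketch: derive provisional limits, flag individuals whose other results look abnormal, exclude them, and iterate. The version below is a deliberately simplified illustration under parametric assumptions, not the published algorithm; the z threshold, the exclusion rule, and the iteration count are placeholders.

    import numpy as np

    def lave_refine(values, n_iter=5, z=1.96, max_abnormal=1):
        # values: (individuals x analytes) array of test results.
        included = np.ones(values.shape[0], dtype=bool)
        for _ in range(n_iter):
            mu = values[included].mean(axis=0)
            sd = values[included].std(axis=0, ddof=1)
            abnormal = np.abs(values - mu) > z * sd       # per-analyte flags
            # Secondary exclusion: too many abnormal results elsewhere.
            included = abnormal.sum(axis=1) <= max_abnormal
        lower = np.percentile(values[included], 2.5, axis=0)
        upper = np.percentile(values[included], 97.5, axis=0)
        return lower, upper, included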

  8. Accuracy and the Effect of Possible Subject-Based Confounders of Magnitude-Based MRI for Estimating Hepatic Proton Density Fat Fraction in Adults, Using MR Spectroscopy as Reference

    PubMed Central

    Heba, Elhamy R.; Desai, Ajinkya; Zand, Kevin A.; Hamilton, Gavin; Wolfson, Tanya; Schlein, Alexandra N.; Gamst, Anthony; Loomba, Rohit; Sirlin, Claude B.; Middleton, Michael S.

    2016-01-01

    Purpose: To determine the accuracy, and the effect of possible subject-based confounders, of magnitude-based magnetic resonance imaging (MRI) for estimating hepatic proton density fat fraction (PDFF) with different numbers of echoes in adults with known or suspected nonalcoholic fatty liver disease, using MR spectroscopy (MRS) as a reference. Materials and Methods: In this retrospective analysis of 506 adults, hepatic PDFF was estimated by unenhanced 3.0T MRI, using right-lobe MRS as reference. Regions of interest placed on source images and on six-echo parametric PDFF maps were colocalized to the MRS voxel location. Accuracy for different numbers of echoes was assessed by regression and Bland-Altman analysis; slope, intercept, average bias, and R2 were calculated. The effect of age, sex, and body mass index (BMI) on hepatic PDFF accuracy was investigated using multivariate linear regression analyses. Results: MRI agreed closely with MRS for all tested methods. For the three- to six-echo methods, slope, regression intercept, average bias, and R2 were 1.01-0.99, 0.11-0.62%, 0.24-0.56%, and 0.981-0.982, respectively. Slope was closest to unity for the five-echo method. The two-echo method was least accurate, underestimating PDFF by an average of 2.93%, compared to 0.23-0.69% for the other methods. Statistically significant but clinically nonmeaningful effects on PDFF error were found for subject BMI (P range: 0.0016 to 0.0783) and male sex (P range: 0.015 to 0.037); no statistically significant effect was found for subject age (P range: 0.18-0.24). Conclusion: Hepatic magnitude-based MRI PDFF estimates using three, four, five, and six echoes, and six-echo parametric maps, are accurate compared to reference MRS values, and that accuracy is not meaningfully confounded by age, sex, or BMI. PMID:26201284
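
    For readers who want to reproduce this style of agreement analysis, the regression and Bland-Altman quantities reduce to a few numpy calls. A generic sketch, with array names as placeholders for colocalized PDFF estimates in percent:

    import numpy as np

    def agreement_stats(mri_pdff, mrs_pdff):
        mri = np.asarray(mri_pdff, dtype=float)
        mrs = np.asarray(mrs_pdff, dtype=float)
        slope, intercept = np.polyfit(mrs, mri, 1)   # regression of MRI on MRS
        diff = mri - mrs
        bias = diff.mean()                           # Bland-Altman average bias
        half_width = 1.96 * diff.std(ddof=1)         # 95% limits of agreement
        return slope, intercept, bias, (bias - half_width, bias + half_width)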

  9. Establishment of Traceability of Reference Grade Hydrometers at National Physical Laboratory, India (NPLI)

    NASA Astrophysics Data System (ADS)

    Kumar, Anil; Kumar, Harish; Mandal, Goutam; Das, M. B.; Sharma, D. C.

    The present paper discusses the establishment of traceability of reference grade hydrometers at the National Physical Laboratory, India (NPLI). The reference grade hydrometers are calibrated and traceable to the primary solid density standard. The calibration was performed according to a standard procedure based on Cuckow's method, and the calibrated reference grade hydrometers cover a wide range. The uncertainty of the reference grade hydrometers has been computed, and corrections have been calculated for the scale readings at which the observations were taken.

  10. Identification of reliable gridded reference data for statistical downscaling methods in Alberta

    NASA Astrophysics Data System (ADS)

    Eum, H. I.; Gupta, A.

    2017-12-01

    Climate models provide essential information for assessing the impacts of climate change at regional and global scales, but statistical downscaling methods must be applied to prepare climate model data for applications such as hydrologic and ecologic modelling at the watershed scale. As the reliability and (spatial and temporal) resolution of statistically downscaled climate data depend mainly on the reference data, identifying the most reliable reference data is crucial for statistical downscaling. A growing number of gridded climate products are available for key climate variables, which are the main input data to regional modelling systems. However, inconsistencies in these climate products, for example different combinations of climate variables, varying data domains and data lengths, and data accuracy varying with the physiographic characteristics of the landscape, have caused significant challenges in selecting the most suitable reference climate data for various environmental studies and modelling. Employing several observation-based daily gridded climate products available in the public domain, i.e. thin plate spline regression products (ANUSPLIN and TPS), an inverse distance method (Alberta Townships), a numerical climate model (North American Regional Reanalysis) and an optimum interpolation technique (Canadian Precipitation Analysis), this study evaluates the accuracy of the climate products at each grid point by comparison with the Adjusted and Homogenized Canadian Climate Data (AHCCD) observations for precipitation and minimum and maximum temperature over the province of Alberta. Based on the performance of the climate products at the AHCCD stations, we ranked their reliability with station elevations discretized into several classes, and from these ranks identified the most reliable climate product for a target point according to its elevation. A web-based system was developed to allow users to easily select the most reliable reference climate data at each target point based on the elevation of its grid cell. By constructing the best combination of reference data for the study domain, the accuracy and reliability of statistically downscaled climate projections can be significantly improved.

  11. Retinal status analysis method based on feature extraction and quantitative grading in OCT images.

    PubMed

    Fu, Dongmei; Tong, Hejun; Zheng, Shuang; Luo, Ling; Gao, Fulin; Minar, Jiri

    2016-07-22

    Optical coherence tomography (OCT) is widely used in ophthalmology for viewing the morphology of the retina, which is important for disease detection and for assessing therapeutic effect. The diagnosis of retinal diseases is based primarily on the subjective analysis of OCT images by trained ophthalmologists. This paper describes an automatic OCT image analysis method for computer-aided disease diagnosis, a critical part of fundus diagnosis. The study analyzed 300 OCT images acquired with an Optovue Avanti RTVue XR (Optovue Corp., Fremont, CA). First, a normal retinal reference model based on retinal boundaries was constructed. Two kinds of quantitative methods, based on geometric features and on morphological features, were then proposed, together with a retinal abnormality grading decision-making method that was applied in the analysis and evaluation of multiple OCT images. The analysis process is illustrated in detail with four retinal OCT images of different abnormality degrees; the final grading results verified that the method can distinguish abnormality severity and lesion regions. In a simulation on 150 test images, the analysis of retinal status achieved a sensitivity of 0.94 and a specificity of 0.92. The proposed method can speed up the diagnostic process and objectively evaluate retinal status: it extracts parameters and features associated with retinal morphology, and their quantitative analysis and evaluation, combined with the reference model, enables abnormality judgment of the target image and provides a reference for disease diagnosis.

  12. Self-Supporting, Hydrophobic, Ionic Liquid-Based Reference Electrodes Prepared by Polymerization-Induced Microphase Separation.

    PubMed

    Chopade, Sujay A; Anderson, Evan L; Schmidt, Peter W; Lodge, Timothy P; Hillmyer, Marc A; Bühlmann, Philippe

    2017-10-27

    Interfaces of ionic liquids and aqueous solutions exhibit stable electrical potentials over a wide range of aqueous electrolyte concentrations. This makes ionic liquids suitable as bridge materials that, in electroanalytical measurements, separate the reference electrode from samples with low and/or unknown ionic strengths. However, methods for the preparation of ionic liquid-based reference electrodes have not been explored widely. We have designed a convenient and reliable synthesis of ionic liquid-based reference electrodes by polymerization-induced microphase separation. This technique allows for a facile, single-pot synthesis of ready-to-use reference electrodes that incorporate ion-conducting nanochannels filled with either 1-octyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide or 1-dodecyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide as the ionic liquid, supported by a mechanically robust cross-linked polystyrene phase. This synthesis procedure allows for the straightforward design of various reference electrode geometries. These reference electrodes exhibit low resistance as well as good reference potential stability and reproducibility when immersed in aqueous solutions varying from deionized, purified water to 100 mM KCl, while requiring no correction for liquid junction potentials.

  13. Genotype Imputation with Millions of Reference Samples.

    PubMed

    Browning, Brian L; Browning, Sharon R

    2016-01-07

    We present a genotype imputation method that scales to millions of reference samples. The imputation method, based on the Li and Stephens model and implemented in Beagle v.4.1, is parallelized and memory efficient, making it well suited to multi-core computer processors. It achieves fast, accurate, and memory-efficient genotype imputation by restricting the probability model to markers that are genotyped in the target samples and by performing linear interpolation to impute ungenotyped variants. We compare Beagle v.4.1 with Impute2 and Minimac3 by using 1000 Genomes Project data, UK10K Project data, and simulated data. All three methods have similar accuracy but different memory requirements and different computation times. When imputing 10 Mb of sequence data from 50,000 reference samples, Beagle's throughput was more than 100× greater than Impute2's throughput on our computer servers. When imputing 10 Mb of sequence data from 200,000 reference samples in VCF format, Minimac3 consumed 26× more memory per computational thread and 15× more CPU time than Beagle. We demonstrate that Beagle v.4.1 scales to much larger reference panels by performing imputation from a simulated reference panel having 5 million samples and a mean marker density of one marker per four base pairs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
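
    The interpolation step described above is simple enough to sketch directly: dosages (or state probabilities) computed at the markers genotyped in the target are linearly interpolated, by base-pair position, at the ungenotyped variants between them. A generic illustration, with invented positions and dosages:

    import numpy as np

    def interpolate_dosages(genotyped_pos, genotyped_dose, ungenotyped_pos):
        # Dosages estimated by the model at genotyped markers are linearly
        # interpolated, by base-pair position, at the ungenotyped variants.
        return np.interp(ungenotyped_pos, genotyped_pos, genotyped_dose)

    # Hypothetical positions (bp) and alternate-allele dosages in [0, 2]:
    print(interpolate_dosages([100, 500, 900], [0.1, 1.8, 1.0], [300, 700]))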

  14. An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan

    2008-01-01

    This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation and rapidly reduce the tracking error. High-gain control creates high-frequency oscillations that can excite unmodeled dynamics and lead to instability. The fast adaptation approach is based on the minimization of the square of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, it allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.
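
    To make the high-gain trade-off concrete, here is a minimal simulation of the standard Lyapunov-based gradient adaptive law for a scalar plant. It illustrates the baseline scheme the paper modifies, not the optimal control modification itself, and all numeric values are illustrative.

    import numpy as np

    a, b = 1.0, 3.0                  # true (unknown) scalar plant: x' = a x + b u
    am, bm = -4.0, 4.0               # stable reference model: xm' = am xm + bm r
    gamma = 10.0                     # adaptive gain; larger => faster adaptation
    dt, T = 1e-3, 5.0
    x = xm = kx = kr = 0.0           # states and adaptive gains

    for k in range(int(T / dt)):
        t = k * dt
        r = 1.0 if t % 2.0 < 1.0 else -1.0        # square-wave reference
        e = x - xm                                # tracking error
        u = kx * x + kr * r
        # Lyapunov-based gradient adaptive laws (sign of b assumed known):
        kx += dt * (-gamma * e * x * np.sign(b))
        kr += dt * (-gamma * e * r * np.sign(b))
        # Euler integration of plant and reference model:
        x += dt * (a * x + b * u)
        xm += dt * (am * xm + bm * r)

    print(f"final tracking error: {x - xm:.4f}")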

  15. Development of High-purity Certified Reference Materials for 17 Proteinogenic Amino Acids by Traceable Titration Methods.

    PubMed

    Kato, Megumi; Yamazaki, Taichi; Kato, Hisashi; Eyama, Sakae; Goto, Mari; Yoshioka, Mariko; Takatsu, Akiko

    2015-01-01

    To ensure the reliability of amino acid analyses, the National Metrology Institute of Japan of the National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) has developed high-purity certified reference materials (CRMs) for 17 proteinogenic amino acids. These CRMs are intended for use as primary reference materials enabling traceable quantification of amino acids. The purity of the CRMs was determined using two traceable methods: nonaqueous acidimetric titration and nitrogen determination by the Kjeldahl method. Since neither method can distinguish compounds with similar structures, such as amino acid-related impurities, the impurities were thoroughly quantified by combining several HPLC methods and subtracted from the purity obtained by each method. The property value of each amino acid was calculated as a weighted mean of the purities corrected by the two methods. The uncertainty of the property value was obtained by combining the measurement uncertainties of the two methods, the difference between the two methods, the uncertainty contributed by impurities, and the uncertainty derived from inhomogeneity. The uncertainty derived from instability was considered negligible based on stability monitoring of selected CRMs. The certified value of each amino acid, i.e., the property value with its uncertainty, was given both with and without enantiomeric separation.
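
    As a worked illustration of combining two purity determinations, an inverse-variance weighted mean is one common choice; the certification protocol's exact weighting is not reproduced here, and the numbers below are invented.

    import math

    def weighted_mean(values, uncertainties):
        # Inverse-variance weights; combined standard uncertainty of the mean.
        w = [1.0 / u ** 2 for u in uncertainties]
        mean = sum(wi * xi for wi, xi in zip(w, values)) / sum(w)
        return mean, math.sqrt(1.0 / sum(w))

    # Invented titration and Kjeldahl purity results (mass fraction):
    print(weighted_mean([0.9991, 0.9987], [0.0006, 0.0009]))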

  16. Application of solid/liquid extraction for the gravimetric determination of lipids in royal jelly.

    PubMed

    Antinelli, Jean-François; Davico, Renée; Rognone, Catherine; Faucon, Jean-Paul; Lizzani-Cuvelier, Louisette

    2002-04-10

    Gravimetrically determined lipid content is a major parameter for the characterization and authentication of royal jelly quality. A solid/liquid extraction was compared to the reference method, which is based on liquid/liquid extraction. The amount of royal jelly and the extraction time were optimized against the reference method. The boiling/rinsing ratio and the spreading of royal jelly onto the extraction thimble were identified as critical parameters, and controlling them gave good accuracy and precision for the alternative method. Comparison of the reproducibility and repeatability of both methods, together with gas chromatographic analysis of the composition of the extracted lipids, showed no differences between the two methods. As the intra-laboratory validation results were comparable to the reference method, while offering speed and reduced solvent consumption, it was concluded that the proposed method can be used without modifying the quality criteria and norms established for royal jelly characterization.

  17. Romer Labs RapidChek® Listeria monocytogenes Test System for the Detection of L. monocytogenes on Selected Foods and Environmental Surfaces.

    PubMed

    Juck, Gregory; Gonzalez, Verapaz; Allen, Ann-Christine Olsson; Sutzko, Meredith; Seward, Kody; Muldoon, Mark T

    2018-04-27

    The Romer Labs RapidChek® Listeria monocytogenes test system (Performance Tested Method℠ 011805) was validated against the U.S. Department of Agriculture-Food Safety and Inspection Service Microbiology Laboratory Guidebook (USDA-FSIS/MLG), U.S. Food and Drug Administration Bacteriological Analytical Manual (FDA/BAM), and AOAC Official Methods of Analysis℠ (AOAC/OMA) cultural reference methods for the detection of L. monocytogenes on selected foods, including hot dogs, frozen cooked breaded chicken, frozen cooked shrimp, cured ham, and ice cream, and on environmental surfaces, including stainless steel and plastic, in an unpaired study design. The RapidChek method uses a proprietary enrichment media system with a 44-48 h enrichment at 30 ± 1°C and detects L. monocytogenes on an immunochromatographic lateral flow device within 10 min. Different L. monocytogenes strains were used to spike each of the matrixes. Samples were confirmed based on the reference method confirmations and an alternate confirmation method. A total of 140 low-level spiked samples were tested by the RapidChek method after enrichment for 44-48 h, in parallel with the cultural reference method. There were 88 RapidChek presumptive positives; one was not confirmed culturally, and one culturally confirmed sample did not exhibit a presumptive positive. No difference between the alternate and reference confirmation methods was observed. The respective cultural reference methods (USDA-FSIS/MLG, FDA/BAM, and AOAC/OMA) produced a total of 63 confirmed positive results. Nonspiked samples from all foods were reported as negative for L. monocytogenes by all methods. Probability of detection analysis demonstrated no significant differences in the number of positive samples detected by the RapidChek method and the respective cultural reference method.

  18. Methods, systems and apparatus for controlling third harmonic voltage when operating a multi-phase machine in an overmodulation region

    DOEpatents

    Perisic, Milun; Kinoshita, Michael H; Ranson, Ray M; Gallegos-Lopez, Gabriel

    2014-06-03

    Methods, system and apparatus are provided for controlling third harmonic voltages when operating a multi-phase machine in an overmodulation region. The multi-phase machine can be, for example, a five-phase machine in a vector controlled motor drive system that includes a five-phase PWM controlled inverter module that drives the five-phase machine. Techniques for overmodulating a reference voltage vector are provided. For example, when the reference voltage vector is determined to be within the overmodulation region, an angle of the reference voltage vector can be modified to generate a reference voltage overmodulation control angle, and a magnitude of the reference voltage vector can be modified, based on the reference voltage overmodulation control angle, to generate a modified magnitude of the reference voltage vector. By modifying the reference voltage vector, voltage command signals that control a five-phase inverter module can be optimized to increase output voltages generated by the five-phase inverter module.

  19. System and method of detecting cavitation in pumps

    DOEpatents

    Lu, Bin; Sharma, Santosh Kumar; Yan, Ting; Dimino, Steven A.

    2017-10-03

    A system and method for detecting cavitation in pumps for fixed and variable supply frequency applications is disclosed. The system includes a controller having a processor programmed to repeatedly receive real-time operating current data from a motor driving a pump, generate a current frequency spectrum from the current data, and analyze current data within a pair of signature frequency bands of the current frequency spectrum. The processor is further programmed to repeatedly determine fault signatures as a function of the current data within the pair of signature frequency bands, repeatedly determine fault indices based on the fault signatures and a dynamic reference signature, compare the fault indices to a reference index, and identify a cavitation condition in a pump based on a comparison between the reference index and a current fault index.
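
    The spectral step generalizes to a few lines: estimate the current spectrum and integrate the energy inside a signature band. A generic sketch; band edges and sampling rate are placeholders, and the patent's dynamic reference signature and fault-index thresholds are not reproduced.

    import numpy as np

    def band_energy(current, fs, band):
        # Power spectrum of the motor-current samples and the energy that
        # falls inside one signature frequency band (f_lo, f_hi) in Hz.
        spectrum = np.abs(np.fft.rfft(current)) ** 2
        freqs = np.fft.rfftfreq(len(current), d=1.0 / fs)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        return spectrum[in_band].sum()

    # A fault index could then compare this signature against a dynamic
    # reference signature tracked over time, e.g. index = energy / reference.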

  20. STANDARD REFERENCE MATERIALS FOR THE POLYMERS INDUSTRY.

    PubMed

    McDonough, Walter G; Orski, Sara V; Guttman, Charles M; Migler, Kalman D; Beers, Kathryn L

    2016-01-01

    The National Institute of Standards and Technology (NIST) provides science, industry, and government with a central source of well-characterized materials certified for chemical composition or for some chemical or physical property. These materials are designated Standard Reference Materials® (SRMs) and are used to calibrate measuring instruments, to evaluate methods and systems, or to produce scientific data that can be referred readily to a common base. In this paper, we discuss the history of polymer-based SRMs, their current status, and the challenges and opportunities in developing new standards to address industrial measurement challenges.

  1. A new methodology for automatic detection of reference points in 3D cephalometry: A pilot study.

    PubMed

    Ed-Dhahraouy, Mohammed; Riri, Hicham; Ezzahmouly, Manal; Bourzgui, Farid; El Moutaoukkil, Abdelmajid

    2018-04-05

    The aim of this study was to develop a new method for automatic detection of reference points in 3D cephalometry, to overcome the limits of 2D cephalometric analyses. A specific application was designed in C++ for automatic and manual identification of 21 reference points on the craniofacial structures. Our algorithm is based on an anatomical and geometrical network adapted to the craniofacial structure, constructed from anatomical knowledge of the 3D cephalometric reference points. The proposed algorithm was tested on five CBCT images and was able to detect the 21 points with a mean error of 2.32 mm. In this pilot study, we propose an automated methodology for identification of the 3D cephalometric reference points; a larger sample will be used in the future to assess the method's validity and reliability. Copyright © 2018 CEO. Published by Elsevier Masson SAS. All rights reserved.

  2. The characterization and certification of a quantitative reference material for Legionella detection and quantification by qPCR.

    PubMed

    Baume, M; Garrelly, L; Facon, J P; Bouton, S; Fraisse, P O; Yardin, C; Reyrolle, M; Jarraud, S

    2013-06-01

    We describe the characterization and certification of a Legionella DNA quantitative reference material as a primary measurement standard for Legionella qPCR. Twelve laboratories participated in a collaborative certification campaign. A candidate reference DNA material was analysed through PCR-based limiting dilution assays (LDAs). The validated data were used to statistically assign both a reference value and an associated uncertainty to the reference material. The LDA method allowed direct quantification of the amount of Legionella DNA per tube in genomic units (GU) and determination of the associated uncertainties, and it could be used for the certification of all types of microbiological standards for qPCR. The use of this primary standard will improve the accuracy of Legionella qPCR measurements and the overall consistency of these measurements among different laboratories. The extensive use of this certified reference material (CRM) has been integrated in the French standard NF T90-471 (April 2010) and in ISO Technical Specification 12869 (2012) for validating qPCR methods and ensuring their reliability. © 2013 The Society for Applied Microbiology.
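
    The single-hit Poisson arithmetic underlying a limiting dilution assay is easy to illustrate. A generic sketch, not the certified protocol; the replicate counts and dilution factor below are invented.

    import math

    def gu_per_tube(n_negative, n_replicates, dilution_factor):
        # Single-hit Poisson model: the fraction of negative replicate PCRs
        # at a dilution estimates exp(-lambda), so the mean number of
        # genomic units per tube is -ln(p0), scaled back by the dilution.
        p0 = n_negative / n_replicates
        if not 0.0 < p0 < 1.0:
            raise ValueError("need a dilution with partial negatives")
        return -math.log(p0) * dilution_factor

    # e.g. 12 of 24 replicates negative at a 1:10000 dilution
    print(gu_per_tube(12, 24, 1e4))   # about 6.9e3 GU per tube of the stock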

  3. Perturbative universal state-selective correction for state-specific multi-reference coupled cluster methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brabec, Jiri; Banik, Subrata; Kowalski, Karol

    2016-10-28

    The implementation details of the universal state-selective (USS) multi-reference coupled cluster (MRCC) formalism with singles and doubles (USS(2)) are discussed using several benchmark systems as examples. We demonstrate that the USS(2) formalism is capable of improving the accuracy of state-specific MRCC methods based on the Brillouin-Wigner and Mukherjee's sufficiency conditions. Additionally, it is shown that the USS(2) approach significantly alleviates problems associated with the lack of invariance of MRCC theories upon rotation of the active orbitals. We also discuss perturbative USS(2) formulations that significantly reduce the numerical overhead of the full USS(2) method.

  4. A comparative proteomics method for multiple samples based on an 18O-reference strategy and a quantitation and identification-decoupled strategy.

    PubMed

    Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin

    2017-08-15

    Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used; for greater numbers, the label-free method has been used, but it has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced in which each sample is compared against an 18O-labeled reference sample created by pooling equal amounts of all samples. However, proportion-known protein mixtures are needed to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility of protein identification results across samples. In the present study, a method combining the 18O-reference strategy with a quantitation and identification-decoupled strategy was investigated using proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy has greater accuracy and reliability than previously used comparison methods based on transferring comparison or label-free strategies. In the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated according to retention time and accurate mass to identify differentially expressed proteins. This strategy makes protein identification possible for all samples using a single pooled sample, therefore giving good reproducibility in protein identification across multiple samples, and allows peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. TH-CD-202-05: DECT Based Tissue Segmentation as Input to Monte Carlo Simulations for Proton Treatment Verification Using PET Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berndt, B; Wuerl, M; Dedes, G

    Purpose: To improve agreement of predicted and measured positron emitter yields in patients after proton irradiation for PET-based treatment verification, using a novel dual energy CT (DECT) tissue segmentation approach that overcomes known deficiencies of single energy CT (SECT). Methods: DECT head scans of 5 trauma patients were segmented and compared to existing decomposition methods, with a first focus on the brain. For validation purposes, three brain-equivalent solutions [water, white matter (WM) and grey matter (GM), equivalent with respect to their reference carbon and oxygen contents and CT numbers at 90 kVp and 150 kVp] were prepared from water, ethanol, sucrose and salt. The activities of all brain solutions, measured during a PET scan after uniform proton irradiation, were compared to Monte Carlo simulations. Simulation inputs were various solution compositions obtained from different segmentation approaches applied to DECT and SECT scans, and the known reference compositions. Virtual GM solution salt concentration corrections were applied based on DECT measurements of solutions with varying salt concentration. Results: The novel tissue segmentation showed qualitative improvements in %C for patient brain scans (ground truth unavailable). The activity simulations based on reference solution compositions agree with the measurement within 3-5% (4-8 Bq/ml). These reference simulations showed absolute activity differences between WM (20%C) and GM (10%C) relative to H2O (0%C) of 43 Bq/ml and 22 Bq/ml, respectively. Activity differences between reference simulations and segmented ones varied from -6 to 1 Bq/ml for DECT and -79 to 8 Bq/ml for SECT. Conclusion: Compared to the conventionally used SECT segmentation, the DECT-based segmentation indicates a qualitative and quantitative improvement. In controlled solutions, a MC input based on DECT segmentation leads to better agreement with the reference. Future work will address the anticipated improvement of quantification accuracy in patients, comparing different tissue decomposition methods with an MR brain segmentation. Acknowledgement: DFG-MAP and HIT-Heidelberg; Deutsche Forschungsgemeinschaft (MAP); Bundesministerium für Bildung und Forschung (01IB13001)

  6. Estimating patient-specific and anatomically correct reference model for craniomaxillofacial deformity via sparse representation

    PubMed Central

    Wang, Li; Ren, Yi; Gao, Yaozong; Tang, Zhen; Chen, Ken-Chung; Li, Jianfu; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Xia, James J.; Shen, Dinggang

    2015-01-01

    Purpose: A significant number of patients suffer from craniomaxillofacial (CMF) deformity and require CMF surgery in the United States. The success of CMF surgery depends not only on the surgical techniques but also on accurate surgical planning. However, surgical planning for CMF surgery is challenging due to the absence of a patient-specific reference model. Currently, the outcome of the surgery is often subjective and highly dependent on the surgeon's experience. In this paper, the authors present an automatic method to estimate an anatomically correct reference shape of the jaws for orthognathic surgery, a common type of CMF surgery. Methods: To estimate a patient-specific jaw reference model, the authors use a data-driven method based on sparse shape composition. Given a dictionary of normal subjects, the authors first use sparse representation to represent the midface of a patient by the midfaces of the normal subjects in the dictionary. The derived sparse coefficients are then used to reconstruct a patient-specific reference jaw shape. Results: The authors have validated the proposed method on both synthetic and real patient data. Experimental results show that the method can effectively reconstruct the normal jaw shape for patients. Conclusions: The authors have presented a novel method to automatically estimate a patient-specific reference model for patients suffering from CMF deformity. PMID:26429255
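
    The two-step use of sparse coefficients can be sketched with an off-the-shelf sparse solver. This is a toy illustration, assuming landmark vectors stacked as columns of paired midface/jaw dictionaries; all shapes below are random placeholders, and Lasso stands in for whatever sparse solver the authors used.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    D_midface = rng.normal(size=(300, 50))   # 100 3-D landmarks x 50 subjects
    D_jaw = rng.normal(size=(240, 50))       # 80 3-D landmarks, same subjects
    patient_midface = rng.normal(size=300)   # the patient's (normal) midface

    # Sparse coefficients representing the patient's midface by a few
    # normal subjects from the dictionary.
    lasso = Lasso(alpha=0.1, fit_intercept=False, max_iter=10000)
    lasso.fit(D_midface, patient_midface)
    coef = lasso.coef_

    # The same coefficients, applied to the jaw dictionary, reconstruct a
    # patient-specific reference jaw unaffected by the deformity.
    reference_jaw = D_jaw @ coef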

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohimer, J.P.

    The use of laser-based analytical methods in nuclear-fuel processing plants is considered. The species and locations for accountability, process control, and effluent control measurements in the Coprocessing, Thorex, and reference Purex fuel processing operations are identified and the conventional analytical methods used for these measurements are summarized. The laser analytical methods based upon Raman, absorption, fluorescence, and nonlinear spectroscopy are reviewed and evaluated for their use in fuel processing plants. After a comparison of the capabilities of the laser-based and conventional analytical methods, the promising areas of application of the laser-based methods in fuel processing plants are identified.

  8. Model-based control strategies for systems with constraints of the program type

    NASA Astrophysics Data System (ADS)

    Jarzębowska, Elżbieta

    2006-08-01

    The paper presents a model-based tracking control strategy for constrained mechanical systems. The constraints we consider can be material or non-material; the latter are referred to as program constraints. The program constraint equations represent tasks imposed on system motions; they can be differential equations of order higher than one or two, and they can be non-integrable. The tracking control strategy relies upon two dynamic models: a reference model, which is a dynamic model of a system with arbitrary-order differential constraints, and a dynamic control model. The reference model serves as a motion planner, which generates inputs to the dynamic control model. It is based upon the generalized program motion equations (GPME) method, which makes it possible to combine material and program constraints and merge them into the motion equations. Lagrange's equations with multipliers are a special case of the GPME, since they apply only to systems with first-order constraints. Our tracking strategy, referred to as a model reference program motion tracking control strategy, enables tracking of any program motion predefined by the program constraints; it extends "trajectory tracking" to "program motion tracking". We also demonstrate that the tracking strategy can be extended to hybrid program motion/force tracking.

  9. High-resolution global grids of revised Priestley-Taylor and Hargreaves-Samani coefficients for assessing ASCE-standardized reference crop evapotranspiration and solar radiation

    NASA Astrophysics Data System (ADS)

    Aschonitis, Vassilis G.; Papamichail, Dimitris; Demertzi, Kleoniki; Colombani, Nicolo; Mastrocicco, Micol; Ghirardini, Andrea; Castaldelli, Giuseppe; Fano, Elisa-Anna

    2017-08-01

    The objective of the study is to provide global grids (0.5°) of revised annual coefficients for the Priestley-Taylor (P-T) and Hargreaves-Samani (H-S) evapotranspiration methods after calibration based on the ASCE (American Society of Civil Engineers)-standardized Penman-Monteith method (the ASCE method includes two reference crops: short-clipped grass and tall alfalfa). The analysis also includes the development of a global grid of revised annual coefficients for solar radiation (Rs) estimations using the respective Rs formula of H-S. The analysis was based on global gridded climatic data of the period 1950-2000. The method for deriving annual coefficients of the P-T and H-S methods was based on partial weighted averages (PWAs) of their mean monthly values. This method estimates the annual values considering the amplitude of the parameter under investigation (ETo or Rs), giving more weight to the monthly coefficients of the months with higher ETo values (or Rs values in the case of the H-S radiation formula). The method also eliminates the effect of unreasonably high or low monthly coefficients that may occur during periods where ETo and Rs fall below a specific threshold. The new coefficients were validated based on data from 140 stations located in various climatic zones of the USA and Australia with expanded observations up to 2016. The validation procedure for ETo estimations of the short reference crop showed that the P-T and H-S methods with the new revised coefficients outperformed the standard methods, reducing the root mean square error (RMSE) in ETo by 40% and 25%, respectively. The estimations of Rs using the H-S formula with revised coefficients reduced the RMSE by 28% in comparison to the standard H-S formula. Finally, a raster database was built consisting of (a) global maps of the mean monthly ETo values estimated by the ASCE-standardized method for both reference crops, (b) global maps of the revised annual coefficients of the P-T and H-S evapotranspiration methods for both reference crops and a global map of the revised annual coefficient of the H-S radiation formula and (c) global maps that indicate the optimum locations for using the standard P-T and H-S methods and their possible annual errors based on reference values. The database can support estimations of ETo and solar radiation for locations where climatic data are limited, and it can support studies which require such estimations on larger scales (e.g. country, continent, world). The datasets produced in this study are archived in the PANGAEA database (https://doi.org/10.1594/PANGAEA.868808) and in the ESRN database (http://www.esrn-database.org or http://esrn-database.weebly.com).

  10. Time-efficient high-resolution whole-brain three-dimensional macromolecular proton fraction mapping

    PubMed Central

    Yarnykh, Vasily L.

    2015-01-01

    Purpose: Macromolecular proton fraction (MPF) mapping is a quantitative MRI method that reconstructs parametric maps of the relative amount of macromolecular protons causing the magnetization transfer (MT) effect and provides a biomarker of myelination in neural tissues. This study aimed to develop a high-resolution whole-brain MPF mapping technique utilizing the minimal possible number of source images to reduce scan time. Methods: The described technique is based on replacing an actually acquired reference image without MT saturation by a synthetic one reconstructed from R1 and proton density maps, thus requiring only three source images. This approach enabled whole-brain three-dimensional MPF mapping with an isotropic 1.25×1.25×1.25 mm3 voxel size and a scan time of 20 minutes. The synthetic reference method was validated against standard MPF mapping with acquired reference images based on data from 8 healthy subjects. Results: Mean MPF values in segmented white and gray matter were in close agreement, with no significant bias and small within-subject coefficients of variation (<2%). High-resolution MPF maps demonstrated sharp white-gray matter contrast and clear visualization of anatomical details, including gray matter structures with high iron content. Conclusions: The synthetic reference method improves the resolution of MPF mapping and combines accurate MPF measurements with unique neuroanatomical contrast features. PMID:26102097
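
    If the underlying readout is a spoiled gradient echo, a synthetic no-MT reference follows voxel-wise from the standard SPGR (Ernst) signal equation. A sketch under that assumption; the TR and flip angle below are placeholders, not the protocol values from the paper.

    import numpy as np

    def synthetic_reference(pd_map, r1_map, tr=0.021, flip_deg=20.0):
        # Spoiled gradient-echo (Ernst) signal without MT saturation,
        # computed voxel-wise from the R1 and proton density maps.
        # tr (s) and flip_deg (degrees) are illustrative placeholders.
        e1 = np.exp(-tr * r1_map)
        a = np.deg2rad(flip_deg)
        return pd_map * np.sin(a) * (1.0 - e1) / (1.0 - np.cos(a) * e1)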

  11. Position detectors, methods of detecting position, and methods of providing positional detectors

    DOEpatents

    Weinberg, David M.; Harding, L. Dean; Larsen, Eric D.

    2002-01-01

    Position detectors, welding system position detectors, methods of detecting various positions, and methods of providing position detectors are described. In one embodiment, a welding system positional detector includes a base that is configured to engage and be moved along a curved surface of a welding work piece. At least one position detection apparatus is connected with the base and configured to measure the angular position of the detector relative to a reference vector. In another embodiment, a welding system positional detector includes a weld head and at least one inclinometer mounted on the weld head. The inclinometer is configured to develop positional data relative to a reference vector and the position of the weld head on a non-planar weldable work piece.

  12. A Real-Time Infrared Ultra-Spectral Signature Classification Method via Spatial Pyramid Matching

    PubMed Central

    Mei, Xiaoguang; Ma, Yong; Li, Chang; Fan, Fan; Huang, Jun; Ma, Jiayi

    2015-01-01

    The state-of-the-art ultra-spectral sensor technology brings new hope for high-precision applications due to its high spectral resolution, but it also brings new challenges, such as high data dimension and noise. In this paper, we propose a real-time method for infrared ultra-spectral signature classification via spatial pyramid matching (SPM), which includes two aspects. First, we introduce an infrared ultra-spectral signature similarity measure based on SPM, which is the foundation of the matching-based classification method. Second, we propose a classification method with reference spectral libraries, which utilizes the SPM-based similarity for real-time infrared ultra-spectral signature classification with robust performance. Specifically, instead of matching against each spectrum in the spectral library, our method is based on feature matching and includes a feature library-generating phase: we calculate the SPM-based similarity between the feature of the query spectrum and that of each spectrum in the reference feature library, and take the class index of the reference spectrum with the maximum similarity as the final result. Experimental comparisons on two publicly available datasets demonstrate that the proposed method effectively improves real-time classification performance and robustness to noise. PMID:26205263
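
    The decision rule is a straightforward nearest-match over the feature library. In the sketch below, cosine similarity stands in for the SPM-based similarity, and all inputs are placeholders:

    import numpy as np

    def classify(query_feat, library_feats, library_labels):
        # Score the query feature against every reference feature and
        # return the label of the best match.
        def sim(u, v):
            return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        scores = [sim(query_feat, f) for f in library_feats]
        return library_labels[int(np.argmax(scores))]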

  13. Comparison of a New Cobinamide-Based Method to a Standard Laboratory Method for Measuring Cyanide in Human Blood

    PubMed Central

    Swezey, Robert; Shinn, Walter; Green, Carol; Drover, David R.; Hammer, Gregory B.; Schulman, Scott R.; Zajicek, Anne; Jett, David A.; Boss, Gerry R.

    2013-01-01

    Most hospital laboratories do not measure blood cyanide concentrations, and samples must be sent to reference laboratories; a simple method is therefore needed for measuring cyanide in hospitals. The authors previously developed a method to quantify cyanide based on the high binding affinity of the vitamin B12 analog cobinamide for cyanide, and on the major spectral change observed for cyanide-bound cobinamide. This method is now validated in human blood, with a mean inter-assay accuracy of 99.1%, a precision of 8.75%, and a lower limit of quantification of 3.27 µM cyanide. The method was applied to blood samples from children treated with sodium nitroprusside and yielded measurable results in 88 of 172 samples (51%), whereas the reference laboratory yielded results in only 19 samples (11%); in all 19 of those samples, the cobinamide-based method also yielded measurable results. The two methods showed reasonable agreement when analyzed by linear regression, but not when analyzed by the standard error of the estimate or a paired t-test. The differences between the two methods may arise because samples were assayed at different times and on different sample types. The cobinamide-based method is applicable to human blood and can be used in hospital laboratories and emergency rooms. PMID:23653045

  14. Assessing the accuracy of TDR-based water leak detection system

    NASA Astrophysics Data System (ADS)

    Fatemi Aghda, S. M.; GanjaliPour, K.; Nabiollahi, K.

    2018-03-01

    The use of TDR systems to locate leaks in underground pipes has developed in recent years. In such a system, a bi-wire installed in parallel with the underground pipe serves as the TDR sensor. This approach largely overcomes the limitations of the traditional acoustic method of leak positioning. TDR-based leak detection is relatively accurate when the TDR sensor is in contact with water at just one point, and researchers have been working to improve its accuracy in recent years. In this study, the ability of the TDR method was evaluated when multiple leakage points appear simultaneously. For this purpose, several laboratory tests were conducted in which leakage points were simulated by putting the TDR sensor in contact with water at several points; the number and size of the simulated leakage points were then gradually increased. The results showed that as the number and size of the leakage points increase, the error rate of the TDR-based water leak detection system increases. Based on the laboratory results, the authors developed a method to improve the accuracy of TDR-based leak detection systems. They defined a few reference points on the TDR sensor, created by increasing the distance between the two conductors of the sensor, which are easily identifiable in the TDR waveform. The tests were repeated using the TDR sensor with reference points, and the authors derived an equation based on the reference points to calculate the exact distance to a leakage point. A comparison between the results obtained from both tests (with and without reference points) showed that the proposed method and equation can significantly improve the accuracy of positioning the leakage points.
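
    One plausible reading of the reference-point correction is piecewise-linear interpolation along the sensor: known reference points pin down the local relation between reflection time and physical distance. A sketch under that assumption (the authors' actual equation is not given in the abstract, and all values below are invented):

    import numpy as np

    def leak_distance(t_leak, t_refs, d_refs):
        # Piecewise-linear interpolation between the bracketing reference
        # points absorbs local variation in propagation velocity along
        # the bi-wire sensor.
        return np.interp(t_leak, t_refs, d_refs)

    # Hypothetical reference points every 10 m along the sensor: reflection
    # times in nanoseconds against known physical distances in metres.
    print(leak_distance(73.0, [0.0, 55.0, 110.0, 165.0], [0.0, 10.0, 20.0, 30.0]))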

  15. Reference intervals for urinary renal injury biomarkers KIM-1 and NGAL in healthy children

    PubMed Central

    McWilliam, Stephen J; Antoine, Daniel J; Sabbisetti, Venkata; Pearce, Robin E; Jorgensen, Andrea L; Lin, Yvonne; Leeder, J Steven; Bonventre, Joseph V; Smyth, Rosalind L; Pirmohamed, Munir

    2014-01-01

    Aim: The aim of this study was to establish reference intervals in healthy children for two novel urinary biomarkers of acute kidney injury, kidney injury molecule-1 (KIM-1) and neutrophil gelatinase-associated lipocalin (NGAL). Materials & Methods: Urinary biomarkers were determined in samples from children in the UK (n = 120) and the USA (n = 171) using both Meso Scale Discovery (MSD) and Luminex-based analytical approaches. Results: 95% reference intervals for each biomarker in each cohort are presented, stratified by sex or ethnicity where necessary, and age-related variability is explored using quantile regression. We identified consistently higher NGAL concentrations in females than males (p < 0.0001), and lower KIM-1 concentrations in African-Americans than Caucasians (p = 0.02). KIM-1 demonstrated diurnal variation, with higher concentrations in the morning (p < 0.001). Conclusion: This is the first report of reference intervals for KIM-1 and NGAL using two analytical methods in healthy pediatric populations in both the UK and the USA. PMID:24661102

  16. Explicit Low-Thrust Guidance for Reference Orbit Targeting

    NASA Technical Reports Server (NTRS)

    Lam, Try; Udwadia, Firdaus E.

    2013-01-01

    The problem of a low-thrust spacecraft controlled to a reference orbit is addressed in this paper. A simple and explicit low-thrust guidance scheme with constrained thrust magnitude is developed by combining the fundamental equations of motion for constrained systems from analytical dynamics with a Lyapunov-based method. Examples are given for a spacecraft controlled to a reference trajectory in the circular restricted three body problem.
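
    The "fundamental equations of motion for constrained systems" here are the Udwadia-Kalaba equation, which prescribes the control force for a constraint written at acceleration level. A compact numpy sketch, demonstrated on a hypothetical circular-orbit constraint for a unit point mass; the paper's three-body setting and thrust-magnitude limits are not reproduced.

    import numpy as np

    def constraint_control(M, Q, A, b):
        # Udwadia-Kalaba constraint force for M q'' = Q subject to A q'' = b:
        # Qc = M^(1/2) pinv(A M^(-1/2)) (b - A a), with a = M^(-1) Q.
        w, V = np.linalg.eigh(M)
        M_half = V @ np.diag(np.sqrt(w)) @ V.T
        M_half_inv = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
        a = np.linalg.solve(M, Q)              # unconstrained acceleration
        return M_half @ np.linalg.pinv(A @ M_half_inv) @ (b - A @ a)

    # Unit point mass on a circular-orbit constraint x x'' + y y'' = -|v|^2
    # (the second derivative of |q|^2 / 2 held at zero); a hypothetical demo.
    M = np.eye(2)
    q = np.array([1.0, 0.0])
    v = np.array([0.0, 1.0])
    A = q.reshape(1, 2)
    b = np.array([-(v @ v)])
    print(constraint_control(M, np.zeros(2), A, b))   # ~[-1, 0]: centripetal pull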

  17. [Establishing biological reference intervals of alanine transaminase for clinical laboratory stored database].

    PubMed

    Guo, Wei; Song, Binbin; Shen, Junfei; Wu, Jiong; Zhang, Chunyan; Wang, Beili; Pan, Baishen

    2015-08-25

    To establish an indirect reference interval based on alanine aminotransferase test results stored in a laboratory information system. All alanine aminotransferase results for outpatients and physical examinations stored in the laboratory information system of Zhongshan Hospital during 2014 were included. The original data were transformed using a Box-Cox transformation to obtain an approximately normal distribution. Outliers were identified and omitted using the Chauvenet and Tukey methods. The indirect reference intervals were obtained by applying the nonparametric and Hoffmann methods in parallel. The reference change value was used to determine the significance of the observed differences between the calculated and published reference intervals. The indirect reference intervals for alanine aminotransferase were 12 to 41 U/L (male, outpatient), 12 to 48 U/L (male, physical examination), 9 to 32 U/L (female, outpatient), and 8 to 35 U/L (female, physical examination). The absolute differences compared with the direct results were all smaller than the reference change value of alanine aminotransferase. The Box-Cox transformation combined with the Hoffmann and Tukey methods is a simple and reliable technique that should be promoted and used by clinical laboratories.
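
    The transform-trim-percentile pipeline is short enough to sketch. This simplified version applies Box-Cox, Tukey fences, and a nonparametric central 95% interval; the Chauvenet and Hoffmann steps of the paper are not reproduced.

    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    def indirect_reference_interval(alt_values):
        # Box-Cox transform the stored results toward normality.
        x = np.asarray(alt_values, dtype=float)
        xt, lam = stats.boxcox(x[x > 0])
        # Tukey fences on the transformed scale.
        q1, q3 = np.percentile(xt, [25, 75])
        iqr = q3 - q1
        keep = (xt >= q1 - 1.5 * iqr) & (xt <= q3 + 1.5 * iqr)
        # Central 95% interval, back-transformed to U/L.
        lo, hi = np.percentile(xt[keep], [2.5, 97.5])
        return inv_boxcox(lo, lam), inv_boxcox(hi, lam)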

  18. Strength Analysis on Ship Ladder Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Budianto; Wahyudi, M. T.; Dinata, U.; Ruddianto; Eko P., M. M.

    2018-01-01

    In designing a ship's structure, the designer should follow the rules of the applicable classification standards. In this case, the ladder (staircase) designed for a ferry must be reviewed against the loads occurring during ship operations, both while sailing and during port operations. Classification rules specify how the structural components are to be calculated, and the structure can also be analysed using the finite element method. The classification rules used in the design of the ferry were those of BKI (Bureau of Classification Indonesia), so the material composition and mechanical properties were specified according to the classification of the vessel. The structural analysis was carried out with a finite element software package. The analysis showed that the ladder can withstand a 140 kg load under static, dynamic, and impact conditions, and the resulting safety factors indicate that the structure is safe without being excessively strong.

  19. Multiple template-based fluoroscopic tracking of lung tumor mass without implanted fiducial markers

    NASA Astrophysics Data System (ADS)

    Cui, Ying; Dy, Jennifer G.; Sharp, Gregory C.; Alexander, Brian; Jiang, Steve B.

    2007-10-01

    Precise lung tumor localization in real time is particularly important for some motion management techniques, such as respiratory gating or beam tracking with a dynamic multi-leaf collimator, due to the reduced clinical tumor volume (CTV) to planning target volume (PTV) margin and/or the escalated dose. There can be large uncertainties in deriving tumor position from external respiratory surrogates. While tracking implanted fiducial markers has sufficient accuracy, this procedure may not be widely accepted due to the risk of pneumothorax. Previously, we developed a technique to generate gating signals from fluoroscopic images without implanted fiducial markers using a template matching method (Berbeco et al 2005 Phys. Med. Biol. 50 4481-90; Cui et al 2007 Phys. Med. Biol. 52 741-55). In this paper, we present an extension of this method to multiple-template matching for directly tracking the lung tumor mass in fluoroscopy video. The basic idea is as follows: (i) during the patient setup session, a pair of orthogonal fluoroscopic image sequences are taken and processed off-line to generate a set of reference templates corresponding to different breathing phases and tumor positions; (ii) during treatment delivery, fluoroscopic images are continuously acquired and processed; (iii) the similarity between each reference template and the processed incoming image is calculated; and (iv) the tumor position in the incoming image is estimated by combining the tumor centroid coordinates of the reference templates with weights based on the measured similarities. With different handling of image processing and similarity calculation, two such multiple-template tracking techniques have been developed: one based on motion-enhanced templates and Pearson's correlation score, and the other based on eigentemplates and mean-squared error. The developed techniques were tested on six sequences of fluoroscopic images from six lung cancer patients against reference tumor positions manually determined by a radiation oncologist. The tumor centroid coordinates automatically detected by both methods agree well with the manually marked reference locations. The eigenspace tracking method performs slightly better than the motion-enhanced method, with average localization errors less than 2 pixels (1 mm) and errors at the 95% confidence level of about 2-4 pixels (1-2 mm). This work demonstrates the feasibility of directly tracking a lung tumor mass in fluoroscopic images without implanted fiducial markers using multiple reference templates.
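
    The combination step (iv) reduces to a similarity-weighted average of stored template centroids. A sketch using Pearson's correlation, as in the motion-enhanced variant; array names are placeholders, and template generation and image enhancement are omitted.

    import numpy as np

    def estimate_position(frame, templates, centroids):
        # Pearson correlation of the incoming (processed) frame with each
        # stored reference template.
        sims = np.array([np.corrcoef(frame.ravel(), t.ravel())[0, 1]
                         for t in templates])
        w = np.clip(sims, 0.0, None)          # discard anti-correlated templates
        w /= w.sum()
        # Similarity-weighted average of the stored template centroids.
        return w @ np.asarray(centroids)      # (row, col) estimate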

  20. Self-Supporting, Hydrophobic, Ionic Liquid-Based Reference Electrodes Prepared by Polymerization-Induced Microphase Separation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chopade, Sujay A.; Anderson, Evan L.; Schmidt, Peter W.

    Interfaces of ionic liquids and aqueous solutions exhibit stable electrical potentials over a wide range of aqueous electrolyte concentrations. This makes ionic liquids suitable as bridge materials that, in electroanalytical measurements, separate the reference electrode from samples with low and/or unknown ionic strengths. However, methods for the preparation of ionic liquid-based reference electrodes have not been explored widely. We have designed a convenient and reliable synthesis of ionic liquid-based reference electrodes by polymerization-induced microphase separation. This technique allows for a facile, single-pot synthesis of ready-to-use reference electrodes that incorporate ion-conducting nanochannels filled with either 1-octyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide or 1-dodecyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide as the ionic liquid, supported by a mechanically robust cross-linked polystyrene phase. This synthesis procedure allows for the straightforward design of various reference electrode geometries. These reference electrodes exhibit low resistance as well as good reference potential stability and reproducibility when immersed in aqueous solutions varying from deionized, purified water to 100 mM KCl, while requiring no correction for liquid junction potentials.

  1. An improved peak frequency shift method for Q estimation based on generalized seismic wavelet function

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Gao, Jinghuai

    2018-02-01

    As a powerful tool for hydrocarbon detection and reservoir characterization, the quality factor, Q, provides useful information in seismic data processing and interpretation. In this paper, we propose a novel method for Q estimation. The generalized seismic wavelet (GSW) function is introduced to fit the amplitude spectrum of seismic waveforms with two parameters: the fractional value and the reference frequency. We then derive an analytical relation between the GSW function and the Q factor of the medium. When a seismic wave propagates through a viscoelastic medium, the GSW function can be employed to fit the amplitude spectra of the source and attenuated wavelets, and the fractional values and reference frequencies can be evaluated numerically from the discrete Fourier spectrum. After calculating the peak frequency from the obtained fractional value and reference frequency, the relationship between the GSW function and the Q factor can be established via the conventional peak frequency shift method. Synthetic tests indicate that our method achieves higher accuracy and is more robust to random noise than existing methods; furthermore, it is applicable to different types of source wavelet. A field data application also demonstrates the effectiveness of the method in estimating seismic attenuation and its potential for reservoir characterization.

  2. Nonparametric spirometry reference values for Hispanic Americans.

    PubMed

    Glenn, Nancy L; Brown, Vanessa M

    2011-02-01

    Recent literature cites ethnic origin as a major factor in developing pulmonary function reference values. Extensive studies established reference values for European and African Americans, but not for Hispanic Americans. The Third National Health and Nutrition Examination Survey defines Hispanic as individuals of Spanish speaking cultures. While no group was excluded from the target population, sample size requirements only allowed inclusion of individuals who identified themselves as Mexican Americans. This research constructs nonparametric reference value confidence intervals for Hispanic American pulmonary function. The method is applicable to all ethnicities. We use empirical likelihood confidence intervals to establish normal ranges for reference values. Its major advantage is that it is model-free while sharing the asymptotic properties of model-based methods. Statistical comparisons indicate that empirical likelihood interval lengths are comparable to normal theory intervals. Power and efficiency studies agree with previously published theoretical results.
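    A self-contained sketch of an Owen-style empirical likelihood confidence interval for a mean, illustrating the model-free construction the abstract refers to; the grid search over candidate means is a simple illustrative choice:

        import numpy as np
        from scipy.optimize import brentq
        from scipy.stats import chi2

        def el_log_ratio(x, mu):
            # -2 log empirical likelihood ratio for the mean at mu
            d = np.asarray(x, float) - mu
            if d.min() >= 0 or d.max() <= 0:
                return np.inf                       # mu outside the sample's convex hull
            lo = -1.0 / d.max() * (1 - 1e-9)        # keep all 1 + lam*d_i > 0
            hi = -1.0 / d.min() * (1 - 1e-9)
            lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
            return 2.0 * np.sum(np.log1p(lam * d))

        def el_ci(x, level=0.95, grid=2000):
            # confidence interval by grid search; -2 log R is chi-square(1) calibrated
            cut = chi2.ppf(level, df=1)
            mus = np.linspace(min(x), max(x), grid)[1:-1]
            ok = [m for m in mus if el_log_ratio(x, m) <= cut]
            return min(ok), max(ok)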

  3. Accuracy of two geocoding methods for geographic information system-based exposure assessment in epidemiological studies.

    PubMed

    Faure, Elodie; Danjou, Aurélie M N; Clavel-Chapelon, Françoise; Boutron-Ruault, Marie-Christine; Dossus, Laure; Fervers, Béatrice

    2017-02-24

    Environmental exposure assessment based on Geographic Information Systems (GIS) and study participants' residential proximity to environmental exposure sources relies on the positional accuracy of subjects' residences to avoid misclassification bias. Our study compared the positional accuracy of two automatic geocoding methods to a manual reference method. We geocoded 4,247 address records representing the residential history (1990-2008) of 1,685 women from the French national E3N cohort living in the Rhône-Alpes region. We compared two automatic geocoding methods, a free-online geocoding service (method A) and an in-house geocoder (method B), to a reference layer created by manually relocating addresses from method A (method R). For each automatic geocoding method, positional accuracy levels were compared according to the urban/rural status of addresses and time-periods (1990-2000, 2001-2008), using Chi Square tests. Kappa statistics were performed to assess agreement of positional accuracy of both methods A and B with the reference method, overall, by time-periods and by urban/rural status of addresses. Respectively 81.4% and 84.4% of addresses were geocoded to the exact address (65.1% and 61.4%) or to the street segment (16.3% and 23.0%) with methods A and B. In the reference layer, geocoding accuracy was higher in urban areas compared to rural areas (74.4% vs. 10.5% addresses geocoded to the address or interpolated address level, p < 0.0001); no difference was observed according to the period of residence. Compared to the reference method, median positional errors were 0.0 m (IQR = 0.0-37.2 m) and 26.5 m (8.0-134.8 m), with positional errors <100 m for 82.5% and 71.3% of addresses, for method A and method B respectively. Positional agreement of method A and method B with method R was 'substantial' for both methods, with kappa coefficients of 0.60 and 0.61 for methods A and B, respectively. Our study demonstrates the feasibility of geocoding residential addresses in epidemiological studies not initially recorded for environmental exposure assessment, for both recent addresses and residence locations more than 20 years ago. Accuracy of the two automatic geocoding methods was comparable. The in-house method (B) allowed a better control of the geocoding process and was less time consuming.
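    The two summary statistics used above (median positional error and Cohen's kappa) are straightforward to compute; a minimal sketch, assuming projected coordinates in metres and categorical accuracy labels:

        import numpy as np

        def cohens_kappa(a, b, labels):
            # agreement between two categorical accuracy ratings
            idx = {l: i for i, l in enumerate(labels)}
            cm = np.zeros((len(labels), len(labels)))
            for x, y in zip(a, b):
                cm[idx[x], idx[y]] += 1
            n = cm.sum()
            po = np.trace(cm) / n
            pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
            return (po - pe) / (1 - pe)

        def positional_error_summary(xy_auto, xy_ref):
            # Euclidean errors between automatic and reference coordinates
            d = np.linalg.norm(np.asarray(xy_auto) - np.asarray(xy_ref), axis=1)
            return np.median(d), np.percentile(d, [25, 75]), np.mean(d < 100)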

  4. Construction of a pulse-coupled dipole network capable of fear-like and relief-like responses

    NASA Astrophysics Data System (ADS)

    Lungsi Sharma, B.

    2016-07-01

    The challenge for neuroscience as an interdisciplinary programme is the integration of ideas among the disciplines to achieve a common goal. This paper deals with the problem of deriving a pulse-coupled neural network that is capable of demonstrating behavioural responses (fear-like and relief-like). Current pulse-coupled neural networks are designed mostly for engineering applications, particularly image processing. The discovered neural network was constructed using the method of minimal anatomies. The behavioural response of a level-coded activity-based model was used as a reference. Although the spiking-based model and the activity-based model are of different scales, the use of the model-reference principle means that the referenced characteristics are functional properties. It is demonstrated that this strategy of dissection and systematic construction is effective in the functional design of a pulse-coupled neural network system with nonlinear signalling. The differential equations for the elastic weights in the reference model are replicated geometrically in the pulse-coupled network. The network reflects a possible solution to the problem of punishment and avoidance. The network developed in this work is a new network topology for pulse-coupled neural networks. Therefore, the model-reference principle is a powerful tool in connecting neuroscience disciplines. The continuity of concepts and phenomena is further maintained by systematic construction using methods like the method of minimal anatomies.

  5. System and methods for reducing harmonic distortion in electrical converters

    DOEpatents

    Kajouke, Lateef A; Perisic, Milun; Ransom, Ray M

    2013-12-03

    Systems and methods are provided for delivering energy using an energy conversion module. An exemplary method for delivering energy from an input interface to an output interface using an energy conversion module coupled between the input interface and the output interface comprises the steps of determining an input voltage reference for the input interface based on a desired output voltage and a measured voltage at the output interface, determining a duty cycle control value based on a ratio of the input voltage reference and the measured voltage, and operating one or more switching elements of the energy conversion module to deliver energy from the input interface to the output interface with a duty cycle influenced by the duty cycle control value.
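    A toy sketch of the control law as described; the proportional correction used to form the input-voltage reference is a placeholder assumption, since the abstract does not give the exact mapping:

        def duty_cycle_command(v_out_desired, v_out_measured, v_in_measured,
                               k_p=0.5, d_min=0.05, d_max=0.95):
            # Input-voltage reference from the output-voltage error; the
            # proportional correction (k_p) is a placeholder assumption.
            v_in_ref = v_out_desired + k_p * (v_out_desired - v_out_measured)
            d = v_in_ref / v_in_measured      # ratio-based duty cycle control value
            return min(max(d, d_min), d_max)  # clamp to a realizable range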

  6. COMPARISON OF TAXONOMIC, COLONY MORPHOTYPE AND PCR-RFLP METHODS TO CHARACTERIZE MICROFUNGAL DIVERSITY

    EPA Science Inventory

    We compared three methods for estimating fungal species diversity in soil samples. A rapid screening method based on gross colony morphological features and color reference standards was compared with traditional fungal taxonomic methods and PCR-RFLP for estimation of ecological ...

  7. A novel method for the activity measurement of large-area beta reference sources.

    PubMed

    Stanga, D; De Felice, P; Keightley, J; Capogni, M; Ioan, M R

    2016-03-01

    A novel method has been developed for the activity measurement of large-area beta reference sources. It makes use of two emission rate measurements and is based on the weak dependence between the source activity and the activity distribution for a given value of transmission coefficient. The method was checked experimentally by measuring the activity of two ((60)Co and (137)Cs) large-area reference sources constructed from anodized aluminum foils. Measurement results were compared with the activity values measured by gamma spectrometry. For each source, they agree within one standard uncertainty and also agree within the same limits with the certified values of the source activity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. [Saarland Growth Study: sampling design].

    PubMed

    Danker-Hopfe, H; Zabransky, S

    2000-01-01

    The use of reference data to evaluate the physical development of children and adolescents is part of the daily routine in the paediatric outpatient clinic. The construction of such references is based on the collection of extensive reference data. There are different kinds of reference data: cross-sectional references, which are based on data collected from a large representative cross-sectional sample of the population, longitudinal references, which are based on follow-up surveys of usually smaller samples of individuals from birth to maturity, and mixed longitudinal references, which are a combination of longitudinal and cross-sectional reference data. The advantages and disadvantages of the different methods of data collection and the resulting reference data are discussed. The Saarland Growth Study was conducted for several reasons: growth processes are subject to secular changes, there are no specific reference data for children and adolescents from this part of the country, and the growth charts in use in paediatric practice are possibly no longer appropriate. Therefore, the Saarland Growth Study served two purposes: a) to create current regional reference data and b) to create a database for future studies on secular trends in growth processes of children and adolescents from Saarland. The present contribution focusses on general remarks on the sampling design of (cross-sectional) growth surveys and its inferences for the design of the present study.

  9. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinlvas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.

  10. An alternative approach to characterize nonlinear site effects

    USGS Publications Warehouse

    Zhang, R.R.; Hartzell, S.; Liang, J.; Hu, Y.

    2005-01-01

    This paper examines the rationale of a method of nonstationary processing and analysis, referred to as the Hilbert-Huang transform (HHT), for its application to a recording-based approach in quantifying influences of soil nonlinearity in site response. In particular, this paper first summarizes symptoms of soil nonlinearity shown in earthquake recordings, reviews the Fourier-based approach to characterizing nonlinearity, and offers justifications for the HHT in addressing nonlinearity issues. This study then uses the HHT method to analyze synthetic data and recordings from the 1964 Niigata and 2001 Nisqually earthquakes. In doing so, the HHT-based site response is defined as the ratio of marginal Hilbert amplitude spectra, alternative to the Fourier-based response that is the ratio of Fourier amplitude spectra. With the Fourier-based approach in studies of site response as a reference, this study shows that the alternative HHT-based approach is effective in characterizing soil nonlinearity and nonlinear site response.
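    A sketch of the HHT-based site response described above, assuming the PyEMD package (pip install EMD-signal) for empirical mode decomposition; the marginal Hilbert amplitude spectrum is accumulated over instantaneous-frequency bins and the site response is the soil-to-reference spectral ratio:

        import numpy as np
        from scipy.signal import hilbert
        from PyEMD import EMD          # pip install EMD-signal (assumed available)

        def marginal_hilbert_spectrum(x, fs, fbins):
            # EMD -> analytic IMFs -> amplitude accumulated over
            # instantaneous-frequency bins (a discrete marginal spectrum).
            spec = np.zeros(len(fbins) - 1)
            for imf in EMD()(x):
                z = hilbert(imf)
                amp = np.abs(z)[1:]
                inst_f = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
                idx = np.digitize(inst_f, fbins) - 1
                ok = (idx >= 0) & (idx < len(spec))
                np.add.at(spec, idx[ok], amp[ok])
            return spec

        # HHT-based site response: ratio of soil-site to reference-site spectra,
        # e.g. marginal_hilbert_spectrum(soil, fs, fb) / marginal_hilbert_spectrum(rock, fs, fb)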

  11. A method of camera calibration in the measurement process with reference mark for approaching observation space target

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Zeng, Luan

    2017-11-01

    Binocular stereoscopic vision can be used for space-based space targets near observation. In order to solve the problem that the traditional binocular vision system cannot work normally after interference, an online calibration method of binocular stereo measuring camera with self-reference is proposed. The method uses an auxiliary optical imaging device to insert the image of the standard reference object into the edge of the main optical path and image with the target on the same focal plane, which is equivalent to a standard reference in the binocular imaging optical system; When the position of the system and the imaging device parameters are disturbed, the image of the standard reference will change accordingly in the imaging plane, and the position of the standard reference object does not change. The camera's external parameters can be re-calibrated by the visual relationship of the standard reference object. The experimental results show that the maximum mean square error of the same object can be reduced from the original 72.88mm to 1.65mm when the right camera is deflected by 0.4 degrees and the left camera is high and low with 0.2° rotation. This method can realize the online calibration of binocular stereoscopic vision measurement system, which can effectively improve the anti - jamming ability of the system.

  12. Magnetic Stirrer Method for the Detection of Trichinella Larvae in Muscle Samples.

    PubMed

    Mayer-Scholl, Anne; Pozio, Edoardo; Gayda, Jennifer; Thaben, Nora; Bahn, Peter; Nöckler, Karsten

    2017-03-03

    Trichinellosis is a debilitating disease in humans and is caused by the consumption of raw or undercooked meat of animals infected with the nematode larvae of the genus Trichinella. The most important sources of human infections worldwide are game meat and pork or pork products. In many countries, the prevention of human trichinellosis is based on the identification of infected animals by means of the artificial digestion of muscle samples from susceptible animal carcasses. There are several methods based on the digestion of meat but the magnetic stirrer method is considered the gold standard. This method allows the detection of Trichinella larvae by microscopy after the enzymatic digestion of muscle samples and subsequent filtration and sedimentation steps. Although this method does not require special and expensive equipment, internal controls cannot be used. Therefore, stringent quality management should be applied throughout the test. The aim of the present work is to provide detailed handling instructions and critical control points of the method to analysts, based on the experience of the European Union Reference Laboratory for Parasites and the National Reference Laboratory of Germany for Trichinella.

  13. Technical note: A simple approach for efficient collection of field reference data for calibrating remote sensing mapping of northern wetlands

    NASA Astrophysics Data System (ADS)

    Gålfalk, Magnus; Karlson, Martin; Crill, Patrick; Bousquet, Philippe; Bastviken, David

    2018-03-01

    The calibration and validation of remote sensing land cover products are highly dependent on accurate field reference data, which are costly and practically challenging to collect. We describe an optical method for collection of field reference data that is a fast, cost-efficient, and robust alternative to field surveys and UAV imaging. A lightweight, waterproof, remote-controlled RGB camera (GoPro HERO4 Silver, GoPro Inc.) was used to take wide-angle images from 3.1 to 4.5 m in altitude using an extendable monopod, as well as representative near-ground (< 1 m) images to identify spectral and structural features that correspond to various land covers in present lighting conditions. A semi-automatic classification was made based on six surface types (graminoids, water, shrubs, dry moss, wet moss, and rock). The method enables collection of detailed field reference data, which is critical in many remote sensing applications, such as satellite-based wetland mapping. The method uses common non-expensive equipment, does not require special skills or training, and is facilitated by a step-by-step manual that is included in the Supplement. Over time a global ground cover database can be built that can be used as reference data for studies of non-forested wetlands from satellites such as Sentinel 1 and 2 (10 m pixel size).
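    A crude unsupervised stand-in for the semi-automatic six-class classification (the paper's actual procedure is described in its Supplement); the clusters would still need manual labelling against the near-ground reference photos:

        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_surface_types(rgb_image, n_types=6):
            # Cluster RGB pixels into six groups; clusters must then be labelled
            # manually (graminoids, water, shrubs, dry moss, wet moss, rock).
            h, w, _ = rgb_image.shape
            pixels = rgb_image.reshape(-1, 3).astype(float) / 255.0
            labels = KMeans(n_clusters=n_types, n_init=10, random_state=0).fit_predict(pixels)
            return labels.reshape(h, w)    # per-pixel map for computing area fractions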

  14. Calibration Methods for a 3D Triangulation Based Camera

    NASA Astrophysics Data System (ADS)

    Schulz, Ulrike; Böhnke, Kay

    A camera sensor records a gray-level image (1536 x 512 pixels) of the light reflected by a reference body, which is illuminated by a linear laser line. This gray-level image can be used for a 3D calibration. The following paper describes how a calibration program calculates the calibration factors, which serve to determine the size of an unknown reference body.

  15. Comparison of analytical and predictive methods for water, protein, fat, sugar, and gross energy in marine mammal milk.

    PubMed

    Oftedal, O T; Eisert, R; Barrell, G K

    2014-01-01

    Mammalian milks may differ greatly in composition from cow milk, and these differences may affect the performance of analytical methods. High-fat, high-protein milks with a preponderance of oligosaccharides, such as those produced by many marine mammals, present a particular challenge. We compared the performance of several methods against reference procedures using Weddell seal (Leptonychotes weddellii) milk of highly varied composition (by reference methods: 27-63% water, 24-62% fat, 8-12% crude protein, 0.5-1.8% sugar). A microdrying step preparatory to carbon-hydrogen-nitrogen (CHN) gas analysis slightly underestimated water content and had a higher repeatability relative standard deviation (RSDr) than did reference oven drying at 100°C. Compared with a reference macro-Kjeldahl protein procedure, the CHN (or Dumas) combustion method had a somewhat higher RSDr (1.56 vs. 0.60%) but correlation between methods was high (0.992), means were not different (CHN: 17.2±0.46% dry matter basis; Kjeldahl 17.3±0.49% dry matter basis), there were no significant proportional or constant errors, and predictive performance was high. A carbon stoichiometric procedure based on CHN analysis failed to adequately predict fat (reference: Röse-Gottlieb method) or total sugar (reference: phenol-sulfuric acid method). Gross energy content, calculated from energetic factors and results from reference methods for fat, protein, and total sugar, accurately predicted gross energy as measured by bomb calorimetry. We conclude that the CHN (Dumas) combustion method and calculation of gross energy are acceptable analytical approaches for marine mammal milk, but fat and sugar require separate analysis by appropriate analytic methods and cannot be adequately estimated by carbon stoichiometry. Some other alternative methods-low-temperature drying for water determination; Bradford, Lowry, and biuret methods for protein; the Folch and the Bligh and Dyer methods for fat; and enzymatic and reducing sugar methods for total sugar-appear likely to produce substantial error in marine mammal milks. It is important that alternative analytical methods be properly validated against a reference method before being used, especially for mammalian milks that differ greatly from cow milk in analyte characteristics and concentrations. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  16. Application of quantitative 1H NMR for the calibration of protoberberine alkaloid reference standards.

    PubMed

    Wu, Yan; He, Yi; He, Wenyi; Zhang, Yumei; Lu, Jing; Dai, Zhong; Ma, Shuangcheng; Lin, Ruichao

    2014-03-01

    Quantitative nuclear magnetic resonance spectroscopy (qNMR) has developed into an important tool in drug analysis, biomacromolecule detection, and metabolism studies. Compared with the mass balance method, the qNMR method has some advantages in the calibration of reference standards (RS): it determines the absolute amount of a sample, and another chemical compound with its certified reference material (CRM) can be used as the internal standard (IS) to obtain the purity of the sample. Protoberberine alkaloids have many biological activities and have been used as reference standards for the control of many herbal drugs. In the present study, qNMR methods were developed for the calibration of berberine hydrochloride, palmatine hydrochloride, tetrahydropalmatine, and phellodendrine hydrochloride with potassium hydrogen phthalate as IS. Method validation was carried out according to the method validation guidelines of the Chinese Pharmacopoeia. The results of qNMR were compared with those of the mass balance method, and the differences between the results of the two methods were acceptable based on the analysis of estimated measurement uncertainties. Therefore, qNMR is an effective and reliable analysis method for the calibration of RS and can serve as a good complement to the mass balance method. Copyright © 2013 Elsevier B.V. All rights reserved.
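    Internal-standard qNMR purity calculations of this kind follow the standard relation below, where I denotes integrated signal areas, N the numbers of contributing protons, M molar masses, m weighed masses, and P purities, with subscripts "a" for analyte and "IS" for internal standard:

        \[
          P_{\mathrm{a}} \;=\; \frac{I_{\mathrm{a}}}{I_{\mathrm{IS}}}\cdot
          \frac{N_{\mathrm{IS}}}{N_{\mathrm{a}}}\cdot
          \frac{M_{\mathrm{a}}}{M_{\mathrm{IS}}}\cdot
          \frac{m_{\mathrm{IS}}}{m_{\mathrm{a}}}\cdot P_{\mathrm{IS}}
        \]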

  17. Quality evaluation of no-reference MR images using multidirectional filters and image statistics.

    PubMed

    Jang, Jinseong; Bang, Kihun; Jang, Hanbyol; Hwang, Dosik

    2018-09-01

    This study aimed to develop a fully automatic, no-reference image-quality assessment (IQA) method for MR images. New quality-aware features were obtained by applying multidirectional filters to MR images and examining the feature statistics. A histogram of these features was then fitted to a generalized Gaussian distribution function for which the shape parameters yielded different values depending on the type of distortion in the MR image. Standard feature statistics were established through a training process based on high-quality MR images without distortion. Subsequently, the feature statistics of a test MR image were calculated and compared with the standards. The quality score was calculated as the difference between the shape parameters of the test image and the undistorted standard images. The proposed IQA method showed a >0.99 correlation with the conventional full-reference assessment methods; accordingly, this proposed method yielded the best performance among no-reference IQA methods for images containing six types of synthetic, MR-specific distortions. In addition, for authentically distorted images, the proposed method yielded the highest correlation with subjective assessments by human observers, thus demonstrating its superior performance over other no-reference IQAs. Our proposed IQA was designed to consider MR-specific features and outperformed other no-reference IQAs designed mainly for photographic images. Magn Reson Med 80:914-924, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
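    The generalized Gaussian shape-parameter fit can be done by standard moment matching (as in BRISQUE-style no-reference IQA features); a minimal sketch for a zero-mean generalized Gaussian:

        import numpy as np
        from scipy.special import gamma
        from scipy.optimize import brentq

        def ggd_shape(x):
            # Moment-matching estimate of the generalized Gaussian shape
            # parameter (the bracket assumes shapes between 0.2 and 10).
            x = np.ravel(x) - np.mean(x)
            rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
            r = lambda g: gamma(2 / g) ** 2 / (gamma(1 / g) * gamma(3 / g))
            return brentq(lambda g: r(g) - rho, 0.2, 10.0)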

  18. Application of positive-real functions in hyperstable discrete model-reference adaptive system design.

    NASA Technical Reports Server (NTRS)

    Karmarkar, J. S.

    1972-01-01

    Proposal of an algorithmic procedure, based on mathematical programming methods, to design compensators for hyperstable discrete model-reference adaptive systems (MRAS). The objective of the compensator is to render the MRAS insensitive to initial parameter estimates within a maximized hypercube in the model parameter space.

  19. ESSG-based global spatial reference frame for datasets interrelation

    NASA Astrophysics Data System (ADS)

    Yu, J. Q.; Wu, L. X.; Jia, Y. J.

    2013-10-01

    To understand the highly complex Earth system, a large volume and variety of datasets on the planet Earth are being obtained, distributed, and shared worldwide every day. However, few existing systems concentrate on the distribution and interrelation of different datasets in a common Global Spatial Reference Frame (GSRF), which poses an invisible obstacle to data sharing and scientific collaboration. The Group on Earth Observations (GEO) has recently established a new GSRF, named the Earth System Spatial Grid (ESSG), for global dataset distribution, sharing and interrelation in its 2012-2015 work plan. The ESSG may bridge the gap among different spatial datasets and hence overcome these obstacles. This paper presents the implementation of the ESSG-based GSRF. A reference spheroid, a grid subdivision scheme, and a suitable encoding system are required to implement it. The radius of the ESSG reference spheroid was set to twice the approximate Earth radius so that datasets from different areas of Earth system science are covered. The same positioning and orientation parameters as Earth-Centred Earth-Fixed (ECEF) were adopted for the ESSG reference spheroid so that any other GSRF can be freely transformed into the ESSG-based GSRF. The spheroid degenerated octree grid with radius refinement (SDOG-R) and its encoding method were taken as the grid subdivision and encoding scheme for their good performance in many aspects. A triple (C, T, A) model is introduced to represent and link different datasets based on the ESSG-based GSRF. Finally, methods of coordinate transformation between the ESSG-based GSRF and other GSRFs are presented to make the ESSG-based GSRF operable and propagable.

  20. A Network Method of Measuring Affiliation-Based Peer Influence: Assessing the Influences of Teammates' Smoking on Adolescent Smoking

    ERIC Educational Resources Information Center

    Fujimoto, Kayo; Unger, Jennifer B.; Valente, Thomas W.

    2012-01-01

    Using a network analytic framework, this study introduces a new method to measure peer influence based on adolescents' affiliations or 2-mode social network data. Exposure based on affiliations is referred to as the "affiliation exposure model." This study demonstrates the methodology using data on young adolescent smoking being influenced by…
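    One common formalization of affiliation-based exposure, sketched under the assumption that co-membership counts weight peers' behaviour (the paper's exact weighting may differ):

        import numpy as np

        def affiliation_exposure(A, y):
            # A : (n, g) two-mode matrix, A[i, k] = 1 if adolescent i is on team k
            # y : (n,) peer behaviour (e.g., smoking indicator)
            C = A @ A.T                    # co-affiliation counts
            np.fill_diagonal(C, 0)         # exclude self
            denom = C.sum(axis=1)
            with np.errstate(invalid="ignore", divide="ignore"):
                e = (C @ y) / denom        # average behaviour of co-affiliates
            return np.where(denom > 0, e, 0.0)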

  1. SSVEP recognition using common feature analysis in brain-computer interface.

    PubMed

    Zhang, Yu; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2015-04-15

    Canonical correlation analysis (CCA) has been successfully applied to steady-state visual evoked potential (SSVEP) recognition for brain-computer interface (BCI) application. Although the CCA method outperforms the traditional power spectral density analysis through multi-channel detection, it requires additionally pre-constructed reference signals of sine-cosine waves. It is likely to encounter overfitting in using a short time window since the reference signals include no features from training data. We consider that a group of electroencephalogram (EEG) data trials recorded at a certain stimulus frequency on a same subject should share some common features that may bear the real SSVEP characteristics. This study therefore proposes a common feature analysis (CFA)-based method to exploit the latent common features as natural reference signals in using correlation analysis for SSVEP recognition. Good performance of the CFA method for SSVEP recognition is validated with EEG data recorded from ten healthy subjects, in contrast to CCA and a multiway extension of CCA (MCCA). Experimental results indicate that the CFA method significantly outperformed the CCA and the MCCA methods for SSVEP recognition in using a short time window (i.e., less than 1s). The superiority of the proposed CFA method suggests it is promising for the development of a real-time SSVEP-based BCI. Copyright © 2014 Elsevier B.V. All rights reserved.
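    For context, the conventional CCA baseline that CFA is compared against can be sketched as follows (sine-cosine references at each candidate frequency and its harmonics; the CFA method itself learns references from training trials instead):

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def cca_ssvep(eeg, fs, stim_freqs, n_harmonics=2):
            # eeg: (n_samples, n_channels) single trial
            t = np.arange(eeg.shape[0]) / fs
            scores = []
            for f in stim_freqs:
                # sine-cosine reference set at f and its harmonics
                ref = np.column_stack([fn(2 * np.pi * f * h * t)
                                       for h in range(1, n_harmonics + 1)
                                       for fn in (np.sin, np.cos)])
                u, v = CCA(n_components=1).fit_transform(eeg, ref)
                scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
            return stim_freqs[int(np.argmax(scores))]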

  2. Hyperspectral face recognition using improved inter-channel alignment based on qualitative prediction models.

    PubMed

    Cho, Woon; Jang, Jinbeum; Koschan, Andreas; Abidi, Mongi A; Paik, Joonki

    2016-11-28

    A fundamental limitation of hyperspectral imaging is the inter-band misalignment correlated with subject motion during data acquisition. One way of resolving this problem is to assess the alignment quality of hyperspectral image cubes derived from the state-of-the-art alignment methods. In this paper, we present an automatic selection framework for the optimal alignment method to improve the performance of face recognition. Specifically, we develop two qualitative prediction models based on: 1) a principal curvature map for evaluating the similarity index between sequential target bands and a reference band in the hyperspectral image cube as a full-reference metric; and 2) the cumulative probability of target colors in the HSV color space for evaluating the alignment index of a single sRGB image rendered using all of the bands of the hyperspectral image cube as a no-reference metric. We verify the efficacy of the proposed metrics on a new large-scale database, demonstrating a higher prediction accuracy in determining improved alignment compared to two full-reference and five no-reference image quality metrics. We also validate the ability of the proposed framework to improve hyperspectral face recognition.

  3. Numerical approach to reference identification of Staphylococcus, Stomatococcus, and Micrococcus spp.

    PubMed

    Rhoden, D L; Hancock, G A; Miller, J M

    1993-03-01

    A numerical-code system for the reference identification of Staphylococcus species, Stomatococcus mucilaginosus, and Micrococcus species was established by using a selected panel of conventional biochemicals. Results from 824 cultures (289 eye isolate cultures, 147 reference strains, and 388 known control strains) were used to generate a list of 354 identification code numbers. Each six-digit code number was based on results from 18 conventional biochemical reactions. Seven milliliters of purple agar base with 1% sterile carbohydrate solution added was poured into 60-mm-diameter agar plates. All biochemical tests were inoculated with 1 drop of a heavy broth suspension, incubated at 35 degrees C, and read daily for 3 days. All reactions were read and interpreted by the method of Kloos et al. (G. A. Hebert, C. G. Crowder, G. A. Hancock, W. R. Jarvis, and C. Thornsberry, J. Clin. Microbiol. 26:1939-1949, 1988; W. E. Kloos and D. W. Lambe, Jr., P. 222-237, in A. Balows, W. J. Hansler, Jr., K. L. Herrmann, H. D. Isenberg, and H. J. Shadomy, ed., Manual of Clinical Microbiology, 5th ed., 1991). This modified reference identification method was 96 to 98% accurate and could have value in reference and public health laboratory settings.
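    The mapping from 18 reactions to a six-digit code is not spelled out in the abstract; a sketch assuming the common profile-number convention in which each triplet's positive results are scored 1/2/4 and summed (API-strip style):

        def profile_code(results):
            # results: 18 booleans in the fixed biochemical test order
            assert len(results) == 18
            digits = []
            for i in range(0, 18, 3):
                triplet = results[i:i + 3]
                digits.append(sum(w for w, r in zip((1, 2, 4), triplet) if r))
            return "".join(str(d) for d in digits)   # e.g. "524130"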

  4. Reference point detection for camera-based fingerprint image based on wavelet transformation.

    PubMed

    Khalil, Mohammed S

    2015-04-30

    Fingerprint recognition systems essentially require core-point detection prior to fingerprint matching. The core-point is used as a reference point to align the fingerprint with a template database. When processing a larger fingerprint database, it is necessary to consider the core-point during feature extraction. Numerous core-point detection methods are available and have been reported in the literature. However, these methods are generally applied to scanner-based images. Hence, this paper attempts to explore the feasibility of applying a core-point detection method to a fingerprint image obtained using a camera phone. The proposed method utilizes a discrete wavelet transform to extract the ridge information from a color image. The performance of the proposed method is evaluated in terms of accuracy and consistency. These two indicators are calculated automatically by comparing the method's output with the defined core points. The proposed method is tested on two data sets, collected in controlled and uncontrolled environments from 13 different subjects. In the controlled environment, the proposed method achieved a detection rate of 82.98%. In the uncontrolled environment, it yielded a detection rate of 78.21%. The proposed method yields promising results in a collected-image database. Moreover, it outperformed an existing method.

  5. Diagnostic accuracy of different body weight and height-based definitions of childhood obesity in identifying overfat among Chinese children and adolescents: a cross-sectional study.

    PubMed

    Wang, Lin; Hui, Stanley Sai-chuen

    2015-08-20

    Various body weight and height-based references are used to define obese children and adolescents. However, no study investigating the diagnostic accuracies of the definitions of obesity and overweight in Hong Kong Chinese children and adolescents has been conducted. The current study aims to investigate the diagnostic accuracy of BMI-based definitions and the 1993 HK reference in screening excess body fat among Hong Kong Chinese children and adolescents. A total of 2,134 participants (1,135 boys and 999 girls) were recruited from local schools. The foot-to-foot BIA scale was applied to assess %BF using standard methods. The criterion of childhood obesity (i.e., overfat) was defined as over 25 %BF for boys and over 30 %BF for girls. Childhood obesity was also determined from four BMI-based references and the 1993 HK reference. The diagnostic accuracy of these existing definitions for childhood obesity in screening excess body fat was evaluated using diagnostic indices. Overall, %BF was significantly correlated with anthropometric measurements in both genders (in boys, r = 0.747 for BMI and 0.766 for PWH; in girls, r = 0.930 for BMI and 0.851 for PWH). The prevalence rates of overweight and obesity determined by BMI-based references were similar to the prevalence rates of obesity in the 1993 HK reference in both genders. All definitions for childhood obesity showed low sensitivity (in boys, 0.325-0.761; in girls, 0.128-0.588) in detecting overfat. Specificities were high for cut-offs among all definitions for childhood obesity (in boys, 0.862-0.980; in girls, 0.973-0.998). In conclusion, prevalence rates of childhood obesity or overweight varied widely according to the diagnostic references applied. The diagnostic performance of weight and height-based references for obesity is poorer than expected for both genders among Hong Kong Chinese children and adolescents. In order to improve the diagnostic accuracy of childhood obesity, either cut-off values of body weight and height-based definitions of childhood obesity should be revised to increase the sensitivity, or the possibility of using other indirect methods of estimating the %BF should be explored.

  6. FT-IR imaging for quantitative determination of liver fat content in non-alcoholic fatty liver.

    PubMed

    Kochan, K; Maslak, E; Chlopicki, S; Baranska, M

    2015-08-07

    In this work we apply FT-IR imaging of large areas of liver tissue cross-section samples (∼5 cm × 5 cm) for quantitative assessment of steatosis in murine model of Non-Alcoholic Fatty Liver (NAFLD). We quantified the area of liver tissue occupied by lipid droplets (LDs) by FT-IR imaging and Oil Red O (ORO) staining for comparison. Two alternative FT-IR based approaches are presented. The first, straightforward method, was based on average spectra from tissues and provided values of the fat content by using a PLS regression model and the reference method. The second one – the chemometric-based method – enabled us to determine the values of the fat content, independently of the reference method by means of k-means cluster (KMC) analysis. In summary, FT-IR images of large size liver sections may prove to be useful for quantifying liver steatosis without the need of tissue staining.
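    A minimal sketch of the first approach, regressing tissue-average spectra onto reference fat-content values with partial least squares (the component count is an illustrative choice):

        from sklearn.cross_decomposition import PLSRegression

        def fit_fat_model(mean_spectra, fat_fraction, n_components=5):
            # X: (n_tissues, n_wavenumbers) averaged FT-IR spectra;
            # y: reference fat content (e.g., ORO-derived area fraction).
            pls = PLSRegression(n_components=n_components)
            pls.fit(mean_spectra, fat_fraction)
            return pls

        # predicted = fit_fat_model(X_train, y_train).predict(X_new)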

  7. DETERMINATION OF NATIONAL DIAGNOSTIC REFERENCE LEVELS IN COMPUTED TOMOGRAPHY EXAMINATIONS OF IRAN BY A NEW QUALITY CONTROL-BASED DOSE SURVEY METHOD.

    PubMed

    Sohrabi, Mehdi; Parsi, Masoumeh; Mianji, Fereidoun

    2018-05-01

    National diagnostic reference levels (NDRLs) of Iran were determined for the four most common CT examinations including head, sinus, chest and abdomen/pelvis. A new 'quality control (QC)-based dose survey method', as developed by us, was applied to 157 CT scanners in Iran (2014-15) with different slice classes, models and geographic spread across the country. The NDRLs for head, sinus, chest and abdomen/pelvis examinations are 58, 29, 12 and 14 mGy for CTDIVol and 750, 300, 300 and 650 mGy.cm for DLP, respectively. The 'QC-based dose survey method' was further shown to be a simple, accurate and practical method for time- and cost-effective NDRL determination. One effective approach for optimization of the CT examination protocols at the national level is the provision of adequate standardized training of radiologists, technicians and medical physicists on patient radiation protection principles and the implementation of the DRL concept in clinical practice.

  8. En Route Spacing System and Method

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz (Inventor); Green, Steven M. (Inventor)

    2002-01-01

    A method of and computer software for minimizing aircraft deviations needed to comply with an en route miles-in-trail spacing requirement imposed during air traffic control operations via establishing a spacing reference geometry, predicting spatial locations of a plurality of aircraft at a predicted time of intersection of a path of a first of said plurality of aircraft with the spacing reference geometry, and determining spacing of each of the plurality of aircraft based on the predicted spatial locations.

  9. En route spacing system and method

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz (Inventor); Green, Steven M. (Inventor)

    2002-01-01

    A method of and computer software for minimizing aircraft deviations needed to comply with an en route miles-in-trail spacing requirement imposed during air traffic control operations via establishing a spacing reference geometry, predicting spatial locations of a plurality of aircraft at a predicted time of intersection of a path of a first of said plurality of aircraft with the spacing reference geometry, and determining spacing of each of the plurality of aircraft based on the predicted spatial locations.

  10. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree

    PubMed Central

    2010-01-01

    Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service. PMID:21034504

  11. A Laboratory-Based Nonlinear Dynamics Course for Science and Engineering Students.

    ERIC Educational Resources Information Center

    Sungar, N.; Sharpe, J. P.; Moelter, M. J.; Fleishon, N.; Morrison, K.; McDill, J.; Schoonover, R.

    2001-01-01

    Describes the implementation of a new laboratory-based, interdisciplinary undergraduate course on nonlinear dynamical systems. Focuses on geometrical methods and data visualization techniques. (Contains 20 references.) (Author/YDS)

  12. A mapping closure for turbulent scalar mixing using a time-evolving reference field

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.

    1992-01-01

    A general mapping-closure approach for modeling scalar mixing in homogeneous turbulence is developed. This approach is different from the previous methods in that the reference field also evolves according to the same equations as the physical scalar field. The use of a time-evolving Gaussian reference field results in a model that is similar to the mapping closure model of Pope (1991), which is based on the methodology of Chen et al. (1989). Both models yield identical relationships between the scalar variance and higher-order moments, which are in good agreement with heat conduction simulation data and can be consistent with any type of ε_φ (scalar dissipation) evolution. The present methodology can be extended to any reference field whose behavior is known. The possibility of a beta-pdf reference field is explored. The shortcomings of the mapping closure methods are discussed, and the limit at which the mapping becomes invalid is identified.

  13. Apparatus and Method to Enable Precision and Fast Laser Frequency Tuning

    NASA Technical Reports Server (NTRS)

    Chen, Jeffrey R. (Inventor); Numata, Kenji (Inventor); Wu, Stewart T. (Inventor); Yang, Guangning (Inventor)

    2015-01-01

    An apparatus and method is provided to enable precision and fast laser frequency tuning. For instance, a fast tunable slave laser may be dynamically offset-locked to a reference laser line using an optical phase-locked loop. The slave laser is heterodyned against a reference laser line to generate a beatnote that is subsequently frequency divided. The phase difference between the divided beatnote and a reference signal may be detected to generate an error signal proportional to the phase difference. The error signal is converted into appropriate feedback signals to phase lock the divided beatnote to the reference signal. The slave laser frequency target may be rapidly changed based on a combination of a dynamically changing frequency of the reference signal, the frequency dividing factor, and an effective polarity of the error signal. Feed-forward signals may be generated to accelerate the slave laser frequency switching through laser tuning ports.

  14. Torque ripple reduction of brushless DC motor based on adaptive input-output feedback linearization.

    PubMed

    Shirvani Boroujeni, M; Markadeh, G R Arab; Soltani, J

    2017-09-01

    Torque ripple reduction of Brushless DC Motors (BLDCs) is an interesting subject in variable speed AC drives. In this paper at first, a mathematical expression for torque ripple harmonics is obtained. Then for a non-ideal BLDC motor with known harmonic contents of back-EMF, calculation of desired reference current amplitudes, which are required to eliminate some selected harmonics of torque ripple, are reviewed. In order to inject the reference harmonic currents to the motor windings, an Adaptive Input-Output Feedback Linearization (AIOFBL) control is proposed, which generates the reference voltages for three phases voltage source inverter in stationary reference frame. Experimental results are presented to show the capability and validity of the proposed control method and are compared with the vector control in Multi-Reference Frame (MRF) and Pseudo-Vector Control (P-VC) method results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  15. A Rapid Segmentation-Insensitive "Digital Biopsy" Method for Radiomic Feature Extraction: Method and Pilot Study Using CT Images of Non-Small Cell Lung Cancer.

    PubMed

    Echegaray, Sebastian; Nair, Viswam; Kadoch, Michael; Leung, Ann; Rubin, Daniel; Gevaert, Olivier; Napel, Sandy

    2016-12-01

    Quantitative imaging approaches compute features within images' regions of interest. Segmentation is rarely completely automatic, requiring time-consuming editing by experts. We propose a new paradigm, called "digital biopsy," that allows for the collection of intensity- and texture-based features from these regions at least 1 order of magnitude faster than the current manual or semiautomated methods. A radiologist reviewed automated segmentations of lung nodules from 100 preoperative volume computed tomography scans of patients with non-small cell lung cancer, and manually adjusted the nodule boundaries in each section, to be used as a reference standard, requiring up to 45 minutes per nodule. We also asked a different expert to generate a digital biopsy for each patient using a paintbrush tool to paint a contiguous region of each tumor over multiple cross-sections, a procedure that required an average of <3 minutes per nodule. We simulated additional digital biopsies using morphological procedures. Finally, we compared the features extracted from these digital biopsies with our reference standard using intraclass correlation coefficient (ICC) to characterize robustness. Comparing the reference standard segmentations to our digital biopsies, we found that 84/94 features had an ICC >0.7; comparing erosions and dilations, using a sphere of 1.5-mm radius, of our digital biopsies to the reference standard segmentations resulted in 41/94 and 53/94 features, respectively, with ICCs >0.7. We conclude that many intensity- and texture-based features remain consistent between the reference standard and our method while substantially reducing the amount of operator time required.
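    The robustness criterion above relies on the intraclass correlation coefficient; a minimal one-way ICC sketch (the study may use a different ICC variant, e.g. a two-way model):

        import numpy as np

        def icc_oneway(ratings):
            # One-way random ICC(1,1); ratings: (n_subjects, k_raters)
            r = np.asarray(ratings, float)
            n, k = r.shape
            grand = r.mean()
            msb = k * ((r.mean(axis=1) - grand) ** 2).sum() / (n - 1)              # between subjects
            msw = ((r - r.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1)) # within subjects
            return (msb - msw) / (msb + (k - 1) * msw)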

  16. Evaluating the quality of a cell counting measurement process via a dilution series experimental design.

    PubMed

    Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng

    2017-12-01

    Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
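    A compact sketch of the precision and proportionality summaries such a design yields (replicate counts at each dilution fraction; regression through the origin as the proportionality model):

        import numpy as np

        def proportionality_check(dilution_fraction, counts):
            # Needs >= 2 replicate counts at each dilution level.
            x = np.asarray(dilution_fraction, float)   # e.g. 1.0, 0.8, 0.6, ...
            y = np.asarray(counts, float)
            slope = (x @ y) / (x @ x)                  # least squares through the origin
            r2 = 1 - ((y - slope * x) ** 2).sum() / ((y - y.mean()) ** 2).sum()
            cv = np.array([y[x == lv].std(ddof=1) / y[x == lv].mean()
                           for lv in np.unique(x)])    # per-level repeatability CV
            return slope, r2, cv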

  17. Preliminary comparative assessment of PM10 hourly measurement results from new monitoring stations type using stochastic and exploratory methodology and models

    NASA Astrophysics Data System (ADS)

    Czechowski, Piotr Oskar; Owczarek, Tomasz; Badyda, Artur; Majewski, Grzegorz; Rogulski, Mariusz; Ogrodnik, Paweł

    2018-01-01

    The paper presents selected key issues from the preliminary stage of a proposed extended equivalence assessment of measurement results from new portable devices: the comparability of hourly PM10 concentration series with reference station measurement results using statistical methods. The article presents technical aspects of the new portable meters. The emphasis was placed on the comparability of the results using a stochastic and exploratory methodological concept. The concept is based on the observation that a simple comparison of result series in the time domain is insufficient. The comparison of regularities should be done in three complementary fields of statistical modeling: time, frequency and space. The proposal is based on modeling results for five annual series of measurement results from the new mobile devices and from the WIOS (Provincial Environmental Protection Inspectorate) reference station located in the city of Nowy Sacz. The obtained results indicate both the completeness of the comparison methodology and the high correspondence of the new devices' measurement results with the reference.

  18. Molecular taxonomy of phytopathogenic fungi: a case study in Peronospora.

    PubMed

    Göker, Markus; García-Blázquez, Gema; Voglmayr, Hermann; Tellería, M Teresa; Martín, María P

    2009-07-29

    Inappropriate taxon definitions may have severe consequences in many areas. For instance, biologically sensible species delimitation of plant pathogens is crucial for measures such as plant protection or biological control and for comparative studies involving model organisms. However, delimiting species is challenging in the case of organisms for which often only molecular data are available, such as prokaryotes, fungi, and many unicellular eukaryotes. Even in the case of organisms with well-established morphological characteristics, molecular taxonomy is often necessary to emend current taxonomic concepts and to analyze DNA sequences directly sampled from the environment. Typically, for this purpose clustering approaches to delineate molecular operational taxonomic units have been applied using arbitrary choices regarding the distance threshold values, and the clustering algorithms. Here, we report on a clustering optimization method to establish a molecular taxonomy of Peronospora based on ITS nrDNA sequences. Peronospora is the largest genus within the downy mildews, which are obligate parasites of higher plants, and includes various economically important pathogens. The method determines the distance function and clustering setting that result in an optimal agreement with selected reference data. Optimization was based on both taxonomy-based and host-based reference information, yielding the same outcome. Resampling and permutation methods indicate that the method is robust regarding taxon sampling and errors in the reference data. Tests with newly obtained ITS sequences demonstrate the use of the re-classified dataset in molecular identification of downy mildews. A corrected taxonomy is provided for all Peronospora ITS sequences contained in public databases. Clustering optimization appears to be broadly applicable in automated, sequence-based taxonomy. The method connects traditional and modern taxonomic disciplines by specifically addressing the issue of how to optimally account for both traditional species concepts and genetic divergence.

  19. Molecular Taxonomy of Phytopathogenic Fungi: A Case Study in Peronospora

    PubMed Central

    Göker, Markus; García-Blázquez, Gema; Voglmayr, Hermann; Tellería, M. Teresa; Martín, María P.

    2009-01-01

    Background Inappropriate taxon definitions may have severe consequences in many areas. For instance, biologically sensible species delimitation of plant pathogens is crucial for measures such as plant protection or biological control and for comparative studies involving model organisms. However, delimiting species is challenging in the case of organisms for which often only molecular data are available, such as prokaryotes, fungi, and many unicellular eukaryotes. Even in the case of organisms with well-established morphological characteristics, molecular taxonomy is often necessary to emend current taxonomic concepts and to analyze DNA sequences directly sampled from the environment. Typically, for this purpose clustering approaches to delineate molecular operational taxonomic units have been applied using arbitrary choices regarding the distance threshold values, and the clustering algorithms. Methodology Here, we report on a clustering optimization method to establish a molecular taxonomy of Peronospora based on ITS nrDNA sequences. Peronospora is the largest genus within the downy mildews, which are obligate parasites of higher plants, and includes various economically important pathogens. The method determines the distance function and clustering setting that result in an optimal agreement with selected reference data. Optimization was based on both taxonomy-based and host-based reference information, yielding the same outcome. Resampling and permutation methods indicate that the method is robust regarding taxon sampling and errors in the reference data. Tests with newly obtained ITS sequences demonstrate the use of the re-classified dataset in molecular identification of downy mildews. Conclusions A corrected taxonomy is provided for all Peronospora ITS sequences contained in public databases. Clustering optimization appears to be broadly applicable in automated, sequence-based taxonomy. The method connects traditional and modern taxonomic disciplines by specifically addressing the issue of how to optimally account for both traditional species concepts and genetic divergence. PMID:19641601

  20. Learning to Rank the Severity of Unrepaired Cleft Lip Nasal Deformity on 3D Mesh Data.

    PubMed

    Wu, Jia; Tse, Raymond; Shapiro, Linda G

    2014-08-01

    Cleft lip is a birth defect that results in deformity of the upper lip and nose. Its severity is widely variable and the results of treatment are influenced by the initial deformity. Objective assessment of severity would help to guide prognosis and treatment. However, most assessments are subjective. The purpose of this study is to develop and test quantitative computer-based methods of measuring cleft lip severity. In this paper, a grid-patch based measurement of symmetry is introduced, with which a computer program learns to rank the severity of cleft lip on 3D meshes of human infant faces. Three computer-based methods to define the midfacial reference plane were compared to two manual methods. Four different symmetry features were calculated based upon these reference planes, and evaluated. The result shows that the rankings predicted by the proposed features were highly correlated with the ranking orders provided by experts that were used as the ground truth.

  1. Recommendation for the review of biological reference intervals in medical laboratories.

    PubMed

    Henny, Joseph; Vassault, Anne; Boursier, Guilaine; Vukasovic, Ines; Mesko Brguljan, Pika; Lohmander, Maria; Ghita, Irina; Andreu, Francisco A Bernabeu; Kroupis, Christos; Sprongl, Ludek; Thelen, Marc H M; Vanstapel, Florent J L A; Vodnik, Tatjana; Huisman, Willem; Vaubourdolle, Michel

    2016-12-01

    This document is based on the original recommendation of the Expert Panel on the Theory of Reference Values of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC); updated guidelines were recently published under the auspices of the IFCC and the Clinical and Laboratory Standards Institute (CLSI). This document summarizes proposals for recommendations on: (i) The terminology, which is often confusing, notably concerning the terms reference limits and decision limits. (ii) The method for the determination of reference limits according to the original procedure, and the conditions under which it should be used. (iii) A simple procedure allowing the medical laboratories to fulfill the requirements of the regulation and standards. The updated document proposes to verify that published reference limits are applicable to the laboratory involved. Finally, the strengths and limits of the revised recommendations (especially the selection of the reference population, the maintenance of the analytical quality, the choice of the statistical method used…) will be briefly discussed.

  2. High-order Discontinuous Element-based Schemes for the Inviscid Shallow Water Equations: Spectral Multidomain Penalty and Discontinuous Galerkin Methods

    DTIC Science & Technology

    2011-07-19

    Keywords: multidomain methods, Discontinuous Galerkin methods, interfacial treatment. Jorge A. Escobar-Vargas, School of Civil and Environmental Engineering, Cornell... Geophysical flows exhibit a complex structure and dynamics over a broad range of scales... hyperbolic problems, where the interfacial patching was implemented with an upwind scheme based on a modified method of characteristics.

  3. Model reference adaptive control (MRAC)-based parameter identification applied to surface-mounted permanent magnet synchronous motor

    NASA Astrophysics Data System (ADS)

    Zhong, Chongquan; Lin, Yaoyao

    2017-11-01

    In this work, a model reference adaptive control-based estimation algorithm is proposed for online multi-parameter identification of surface-mounted permanent magnet synchronous machines. By taking the dq-axis equations of a practical motor as the reference model and the dq-axis estimation equations as the adjustable model, a standard model-reference-adaptive-system-based estimator was established. Additionally, the Popov hyperstability principle was used in the design of the adaptive law to guarantee accurate convergence. To reduce oscillation in the identification results, a first-order low-pass digital filter is introduced to improve the precision of the parameter estimation. The proposed scheme was then applied to an SPM synchronous motor control system without any additional circuits and implemented on a DSP TMS320LF2812. The experimental results demonstrate the effectiveness of the proposed method.
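
    The abstract does not give the filter design; a first-order low-pass digital filter of the kind described can be sketched as an exponential smoother applied to the raw parameter estimates, where the smoothing factor alpha is an assumed tuning parameter:

        def low_pass(raw_estimates, alpha=0.1):
            """First-order IIR low-pass: y[n] = y[n-1] + alpha*(x[n] - y[n-1]).
            Smaller alpha gives heavier smoothing (lower cutoff)."""
            y = raw_estimates[0]
            out = []
            for x in raw_estimates:
                y += alpha * (x - y)
                out.append(y)
            return out

        # Noisy resistance estimates settle toward the true value (~1.5 ohm here).
        print(low_pass([1.7, 1.3, 1.6, 1.4, 1.55, 1.45])[-1])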

  4. Idiographic duo-trio tests using a constant-reference based on preference of each consumer: Sample presentation sequence in difference test can be customized for individual consumers to reduce error.

    PubMed

    Kim, Min-A; Sim, Hye-Min; Lee, Hye-Seong

    2016-11-01

    As reformulations and processing changes are increasingly needed in the food industry to produce healthier, more sustainable, and cost-effective products while maintaining superior quality, reliable measurements of consumers' sensory perception and discrimination are becoming more critical. Consumer discrimination methods using a preferred-reference duo-trio test design have been shown to be effective in improving discrimination performance by customizing sample presentation sequences. However, this design can add complexity to the discrimination task for some consumers, resulting in more errors in sensory discrimination. The objective of the present study was to investigate the effects of different types of test instructions using the preferred-reference duo-trio test design, where a paired-preference test is followed by 6 repeated preferred-reference duo-trio tests, in comparison to the analytical method using the balanced-reference duo-trio. Analyses of d' estimates (a product-related measure) and probabilistic sensory discriminators in momentary numbers of subjects showing statistical significance (a subject-related measure) revealed that only the preferred-reference duo-trio test using affective reference-framing, either by providing no information about the reference or information on a previously preferred sample, improved sensory discrimination more than the analytical method. No decrease in discrimination performance was observed with any type of instruction, confirming that consumers could handle the test methods. These results suggest that when repeated tests are feasible, using the affective discrimination method would be operationally more efficient as well as ecologically more reliable for measuring consumers' sensory discrimination ability. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. ECHO: A reference-free short-read error correction algorithm

    PubMed Central

    Kao, Wei-Chun; Chan, Andrew H.; Song, Yun S.

    2011-01-01

    Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short reads without the need of a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters whose optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO is able to improve the accuracy of previous error-correction methods by severalfold to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. Also, it is shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth. PMID:21482625

  6. Edge Detection Method Based on Neural Networks for COMS MI Images

    NASA Astrophysics Data System (ADS)

    Lee, Jin-Ho; Park, Eun-Bin; Woo, Sun-Hee

    2016-12-01

    Communication, Ocean And Meteorological Satellite (COMS) Meteorological Imager (MI) images are processed for radiometric and geometric correction from raw image data. When intermediate image data are matched and compared with reference landmark images in the geometric correction process, various techniques for edge detection can be applied. It is essential to have a precise and correctly edge-detected image in this process, since its matching with the reference is directly related to the accuracy of the ground station output images. An edge detection method based on neural networks is applied in the ground processing of MI images to obtain sharp edges in the correct positions. The simulation results are analyzed and characterized by comparing them with the results of conventional methods, such as Sobel and Canny filters.
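
    The neural-network detector is not detailed in the abstract, but the conventional Sobel baseline it is compared against can be reproduced in a few lines (scipy-based sketch; the binarization threshold is an assumed parameter):

        import numpy as np
        from scipy import ndimage

        def sobel_edges(image, threshold=0.5):
            """Gradient magnitude from horizontal/vertical Sobel kernels,
            binarized at a fraction of the maximum response."""
            gx = ndimage.sobel(image, axis=1)
            gy = ndimage.sobel(image, axis=0)
            mag = np.hypot(gx, gy)
            return mag > threshold * mag.max()

        img = np.zeros((8, 8)); img[:, 4:] = 1.0   # vertical step edge
        print(sobel_edges(img).any(axis=0))        # edge flagged near column 4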

  7. Macromolecule mapping of the brain using ultrashort-TE acquisition and reference-based metabolite removal.

    PubMed

    Lam, Fan; Li, Yudu; Clifford, Bryan; Liang, Zhi-Pei

    2018-05-01

    To develop a practical method for mapping macromolecule distribution in the brain using ultrashort-TE MRSI data. An FID-based chemical shift imaging acquisition without metabolite-nulling pulses was used to acquire ultrashort-TE MRSI data that capture the macromolecule signals with high signal-to-noise-ratio (SNR) efficiency. To remove the metabolite signals from the ultrashort-TE data, single voxel spectroscopy data were obtained to determine a set of high-quality metabolite reference spectra. These spectra were then incorporated into a generalized series (GS) model to represent general metabolite spatiospectral distributions. A time-segmented algorithm was developed to back-extrapolate the GS model-based metabolite distribution from truncated FIDs and remove it from the MRSI data. Numerical simulations and in vivo experiments have been performed to evaluate the proposed method. Simulation results demonstrate accurate metabolite signal extrapolation by the proposed method given a high-quality reference. For in vivo experiments, the proposed method is able to produce spatiospectral distributions of macromolecules in the brain with high SNR from data acquired in about 10 minutes. We further demonstrate that the high-dimensional macromolecule spatiospectral distribution resides in a low-dimensional subspace. This finding provides a new opportunity to use subspace models for quantification and accelerated macromolecule mapping. Robustness of the proposed method is also demonstrated using multiple data sets from the same and different subjects. The proposed method is able to obtain macromolecule distributions in the brain from ultrashort-TE acquisitions. It can also be used for acquiring training data to determine a low-dimensional subspace to represent the macromolecule signals for subspace-based MRSI. Magn Reson Med 79:2460-2469, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  8. Method and apparatus for measuring low currents in capacitance devices

    DOEpatents

    Kopp, M.K.; Manning, F.W.; Guerrant, G.C.

    1986-06-04

    A method and apparatus for measuring subnanoampere currents in capacitance devices are reported. The method is based on a comparison of the voltage developed across the capacitance device with that of a reference capacitor, in which the current is adjusted by means of a variable current source to produce a stable voltage difference. The current-varying means of the variable current source is calibrated to provide a readout of the measured current. Current gain may be provided by using a reference capacitor which is larger than the device capacitance, with a corresponding increase in current supplied through the reference capacitor. The gain is then the ratio of the reference capacitance to the device capacitance. In one illustrated embodiment, the invention makes possible a new type of ionizing radiation dose-rate monitor in which dose rate is measured by discharging a reference capacitor with a variable current source at the same rate that radiation is discharging an ionization chamber. The invention eliminates the high-megohm resistors and low-current ammeters used in low-current measuring instruments.
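
    As a worked example of the stated gain relation: holding the two voltages to track each other forces equal dV/dt, so I_ref/C_ref = I_dev/C_dev, and the small device current follows from the easily measured reference current. A sketch with assumed example values:

        # Gain = C_ref / C_dev; the balance condition I_ref/C_ref = I_dev/C_dev
        # lets a larger, easily measured reference current stand in for a
        # subnanoampere device current.
        C_dev = 10e-12      # device capacitance, 10 pF (assumed example value)
        C_ref = 10e-9       # reference capacitor, 10 nF -> gain of 1000
        I_ref = 500e-9      # current-source reading at balance, 500 nA

        gain = C_ref / C_dev
        I_dev = I_ref / gain
        print(f"device current = {I_dev:.2e} A")   # 5.00e-10 A (0.5 nA)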

  9. Muscle parameters estimation based on biplanar radiography.

    PubMed

    Dubois, G; Rouch, P; Bonneau, D; Gennisson, J L; Skalli, W

    2016-11-01

    The evaluation of muscle and joint forces in vivo is still a challenge. Musculoskeletal models are used to compute forces based on movement analysis. Most of them are built from a scaled-generic model based on cadaver measurements, which provides a low level of personalization, or from Magnetic Resonance Images, which provide a personalized model in the lying position. This study proposed an original two-step method to obtain a subject-specific musculoskeletal model in 30 min, based solely on biplanar X-ray radiography. First, the subject-specific 3D geometry of bones and skin envelopes was reconstructed from biplanar X-rays. Then, 2200 corresponding control points were identified between a reference model and the subject-specific X-ray model. Finally, the shape of 21 lower limb muscles was estimated using a non-linear transformation between the control points in order to fit the muscle shape of the reference model to the X-ray model. Twelve musculoskeletal models were reconstructed and compared to their reference. The muscle volume was not accurately estimated, with a standard deviation (SD) ranging from 10 to 68%. However, this method provided an accurate estimation of the muscle line of action, with an SD of the length difference lower than 2% and a positioning error lower than 20 mm. The moment arm was also well estimated, with an SD lower than 15% for most muscles, which was significantly better than the scaled-generic model for most muscles. This method opens the way to quick modeling for gait analysis based on biplanar radiography.

  10. Weak wide-band signal detection method based on small-scale periodic state of Duffing oscillator

    NASA Astrophysics Data System (ADS)

    Hou, Jian; Yan, Xiao-peng; Li, Ping; Hao, Xin-hong

    2018-03-01

    The conventional Duffing oscillator weak signal detection method, which is based on a strong reference signal, has inherent deficiencies. To address these issues, the characteristics of the Duffing oscillator's phase trajectory in a small-scale periodic state are analyzed by introducing the theory of stopping oscillation systems. Based on this approach, a novel Duffing oscillator weak wide-band signal detection method is proposed. In this novel method, the reference signal is discarded, and the to-be-detected signal is directly used as the driving force. By calculating the cosine function of a phase space angle, a single Duffing oscillator can be used for weak wide-band signal detection instead of an array of uncoupled Duffing oscillators. Simulation results indicate that, compared with the conventional Duffing oscillator detection method, this approach performs better over frequency detection intervals and reduces the signal-to-noise ratio detection threshold, while improving the real-time performance of the system. Project supported by the National Natural Science Foundation of China (Grant No. 61673066).
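
    The detection statistic described (the cosine of a phase-space angle) can be illustrated on a common Duffing detector form, x'' + k*x' - x + x^3 = s(t), with the signal under test used directly as the driving force as the abstract describes; all parameter values below are assumed for illustration only:

        import numpy as np
        from scipy.integrate import solve_ivp

        def duffing(t, y, k, drive):
            """Holmes-type Duffing oscillator x'' + k x' - x + x^3 = drive(t),
            with the to-be-detected signal used directly as the driving force."""
            x, v = y
            return [v, -k * v + x - x**3 + drive(t)]

        k = 0.5
        signal = lambda t: 0.72 * np.cos(t)        # weak test signal (assumed)
        sol = solve_ivp(duffing, (0, 200), [0.0, 0.0], args=(k, signal),
                        max_step=0.05)
        x, v = sol.y
        # cosine of the phase-space angle, used to discriminate oscillator states
        cos_angle = x / np.hypot(x, v).clip(min=1e-12)
        print(cos_angle[-5:])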

  11. An Indoor Positioning Technique Based on a Feed-Forward Artificial Neural Network Using Levenberg-Marquardt Learning Method

    NASA Astrophysics Data System (ADS)

    Pahlavani, P.; Gholami, A.; Azimi, S.

    2017-09-01

    This paper presents an indoor positioning technique based on a multi-layer feed-forward (MLFF) artificial neural network (ANN). Most indoor received signal strength (RSS)-based WLAN positioning systems use the fingerprinting technique, which can be divided into two phases: the offline (calibration) phase and the online (estimation) phase. In this paper, RSSs were collected at all reference points in four directions and two periods of time (morning and evening). Hence, RSS readings were sampled at a regular time interval and a specific orientation at each reference point. The proposed ANN-based model used the Levenberg-Marquardt algorithm for learning and fitting the network to the training data. These RSS readings at all reference points, together with the known positions of the reference points, were prepared for the training phase of the proposed MLFF neural network. Eventually, the average positioning error for this network, using 30% check and validation data, was computed as approximately 2.20 meters.
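
    As a rough illustration of the fingerprinting regression step: scikit-learn does not provide a Levenberg-Marquardt trainer, so the sketch below substitutes the L-BFGS solver for a comparable small-network fit; the RSS fingerprints and coordinates are synthetic placeholders:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        rss = rng.uniform(-90, -30, size=(200, 6))    # RSS from 6 APs (synthetic)
        xy = rng.uniform(0, 50, size=(200, 2))        # known reference positions

        # Small feed-forward network mapping RSS fingerprints to 2D position.
        # The paper trains with Levenberg-Marquardt; 'lbfgs' is a stand-in here.
        net = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                           max_iter=2000, random_state=0).fit(rss[:140], xy[:140])

        err = np.linalg.norm(net.predict(rss[140:]) - xy[140:], axis=1)
        print(f"mean positioning error: {err.mean():.2f} m")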

  12. Rotor Position Sensorless Control and Its Parameter Sensitivity of Permanent Magnet Motor Based on Model Reference Adaptive System

    NASA Astrophysics Data System (ADS)

    Ohara, Masaki; Noguchi, Toshihiko

    This paper describes a new method for rotor position sensorless control of a surface permanent magnet synchronous motor based on a model reference adaptive system (MRAS). This method features the MRAS in a current control loop to estimate the rotor speed and position by using only current sensors. This method, like almost all conventional methods, incorporates a mathematical model of the motor, which consists of parameters such as winding resistances, inductances, and an induced voltage constant. Hence, the important thing is to investigate how the deviation of these parameters affects the estimated rotor position. First, this paper proposes a structure of the sensorless control applied in the current control loop. Next, it proves the stability of the proposed method when motor parameters deviate from the nominal values, and derives the relationship between the estimated position and the deviation of the parameters in a steady state. Finally, some experimental results are presented to show the performance and effectiveness of the proposed method.

  13. Image quality evaluation of full reference algorithm

    NASA Astrophysics Data System (ADS)

    He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan

    2018-03-01

    Image quality evaluation is a classic research topic whose goal is to design algorithms that produce evaluation values consistent with subjective human judgments. This paper mainly introduces several typical full-reference objective evaluation methods: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Image Metric (SSIM), and Feature Similarity (FSIM). The different evaluation methods are tested in Matlab, and their advantages and disadvantages are obtained by analysis and comparison. MSE and PSNR are simple, but they do not introduce human visual system (HVS) characteristics into image quality evaluation, so their evaluation results are not ideal. SSIM shows good correlation and is simple to calculate because it incorporates human visual effects into image quality evaluation; however, the SSIM method is based on a hypothesis, which limits its evaluation results. The FSIM method can be used for both grayscale and color images, and its results are better. Experimental results show that the image quality evaluation algorithm based on FSIM is more accurate.
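
    The two simplest full-reference metrics mentioned follow directly from their definitions; a minimal sketch (8-bit images assumed, so the peak value is 255):

        import numpy as np

        def mse(ref, test):
            """Mean squared error between reference and test images."""
            return np.mean((ref.astype(float) - test.astype(float)) ** 2)

        def psnr(ref, test, peak=255.0):
            """Peak signal-to-noise ratio in dB; higher means closer to reference."""
            m = mse(ref, test)
            return float("inf") if m == 0 else 10.0 * np.log10(peak**2 / m)

        ref = np.full((4, 4), 128, dtype=np.uint8)
        noisy = ref.copy(); noisy[0, 0] = 138          # one corrupted pixel
        print(mse(ref, noisy), round(psnr(ref, noisy), 1))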

  14. Use of gelatin gels as a reference material for performance evaluation of meat shear force measurements

    USDA-ARS?s Scientific Manuscript database

    Establishing standards for meat tenderness based on Warner-Bratzler shear force (WBSF) is complicated by the lack of methods for certifying WBSF testing among texture systems or laboratories. The objective of this study was to determine the suitability of using gelatin gels as a reference material ...

  15. (BARS) -- Bibliographic Retrieval System Sandia Shock Compression (SSC) database Shock Physics Index (SPHINX) database. Volume 1: UNIX version query guide customized application for INGRES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herrmann, W.; von Laven, G.M.; Parker, T.

    1993-09-01

    The Bibliographic Retrieval System (BARS) is a database management system specially designed to retrieve bibliographic references. Two databases are available: (i) the Sandia Shock Compression (SSC) database, which contains over 5700 references to the literature related to stress waves in solids and their applications, and (ii) the Shock Physics Index (SPHINX), which includes over 8000 further references to stress waves in solids, material properties at intermediate and low rates, ballistic and hypervelocity impact, and explosive or shock fabrication methods. There is some overlap in the information in the two databases.

  16. Text-mined phenotype annotation and vector-based similarity to improve identification of similar phenotypes and causative genes in monogenic disease patients.

    PubMed

    Saklatvala, Jake R; Dand, Nick; Simpson, Michael A

    2018-05-01

    The genetic diagnosis of rare monogenic diseases using exome/genome sequencing requires the true causal variant(s) to be identified from tens of thousands of observed variants. Typically a virtual gene panel approach is taken whereby only variants in genes known to cause phenotypes resembling the patient under investigation are considered. With the number of known monogenic gene-disease pairs exceeding 5,000, manual curation of personalized virtual panels using exhaustive knowledge of the genetic basis of the human monogenic phenotypic spectrum is challenging. We present improved probabilistic methods for estimating phenotypic similarity based on Human Phenotype Ontology annotation. A limitation of existing methods for evaluating a disease's similarity to a reference set is that reference diseases are typically represented as a series of binary (present/absent) observations of phenotypic terms. We evaluate a quantified disease reference set, using term frequency in phenotypic text descriptions to approximate term relevance. We demonstrate an improved ability to identify related diseases through the use of a quantified reference set, and that vector space similarity measures perform better than established information content-based measures. These improvements enable the generation of bespoke virtual gene panels, facilitating more accurate and efficient interpretation of genomic variant profiles from individuals with rare Mendelian disorders. These methods are available online at https://atlas.genetics.kcl.ac.uk/~jake/cgi-bin/patient_sim.py. © 2018 Wiley Periodicals, Inc.
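
    The quantified-reference idea can be sketched as cosine similarity between a patient's binary term vector and reference disease vectors weighted by term frequency; the HPO term IDs and frequencies below are invented placeholders:

        import numpy as np

        terms = ["HP:0001250", "HP:0000252", "HP:0001263", "HP:0000365"]  # axes

        # Reference diseases as term-frequency vectors (invented frequencies),
        # rather than binary present/absent observations.
        disease_a = np.array([0.9, 0.7, 0.4, 0.0])
        disease_b = np.array([0.1, 0.0, 0.2, 0.8])
        patient   = np.array([1.0, 1.0, 0.0, 0.0])     # binary patient profile

        def cosine(u, v):
            """Vector-space similarity; 1.0 means identical direction."""
            return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

        for name, d in [("A", disease_a), ("B", disease_b)]:
            print(name, round(cosine(patient, d), 3))   # A ranks above B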

  17. A novel no-reference objective stereoscopic video quality assessment method based on visual saliency analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin

    2017-07-01

    This paper proposes a no-reference objective stereoscopic video quality assessment method, with the motivation of making the results of objective experiments close to those of subjective assessment. We believe that image regions with different degrees of visual saliency should not receive the same weights when designing an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions with strong, general, and weak saliency. In addition, local feature information such as blockiness, zero-crossing, and depth is extracted and combined with a mathematical model to calculate a quality assessment score. Regions with different saliency degrees are assigned different weights in the mathematical model. Experimental results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.

  18. Consumer Choice Between Hospital-Based and Freestanding Facilities for Arthroscopy

    PubMed Central

    Robinson, James C.; Brown, Timothy T.; Whaley, Christopher; Bozic, Kevin J.

    2015-01-01

    Background: Hospital-based outpatient departments traditionally charge higher prices for ambulatory procedures, compared with freestanding surgery centers. Under emerging reference-based benefit designs, insurers establish a contribution limit that they will pay, requiring the patient to pay the difference between that contribution limit and the actual price charged by the facility. The purpose of this study was to evaluate the impact of reference-based benefits on consumer choices, facility prices, employer spending, and surgical outcomes for orthopaedic procedures performed at ambulatory surgery centers. Methods: We obtained data on 3962 patients covered by the California Public Employees’ Retirement System (CalPERS) who underwent arthroscopy of the knee or shoulder in the three years prior to the implementation of reference-based benefits in January 2012 and on 2505 patients covered by CalPERS who underwent arthroscopy in the two years after implementation. Control group data were obtained on 57,791 patients who underwent arthroscopy and were not subject to reference-based benefits. The impact of reference-based benefits on consumer choices between hospital-based and freestanding facilities, facility prices, employer spending, and surgical complications was assessed with use of difference-in-differences multivariable regressions to adjust for patient demographic characteristics, comorbidities, and geographic location. Results: By the second year of the program, the shift to reference-based benefits was associated with an increase in the utilization of freestanding ambulatory surgery centers by 14.3 percentage points (95% confidence interval, 8.1 to 20.5 percentage points) for knee arthroscopy and by 9.9 percentage points (95% confidence interval, 3.2 to 16.7 percentage points) for shoulder arthroscopy and a corresponding decrease in the use of hospital-based facilities. The mean price paid by CalPERS fell by 17.6% (95% confidence interval, −24.9% to −9.6%) for knee procedures and by 17.0% (95% confidence interval, −29.3% to −2.5%) for shoulder procedures. The shift to reference-based benefits was not associated with a change in the rate of surgical complications. In the first two years after the implementation of reference-based benefits, CalPERS saved $2.3 million (13%) on these two orthopaedic procedures. Conclusions: Reference-based benefits increase consumer sensitivity to price differences between freestanding and hospital-based surgical facilities. Clinical Relevance: This study shows that the implementation of reference-based benefits does not result in a significant increase in measured complication rates for those subject to reference-based benefits. PMID:26378263

  19. Liquid Chromatography with Absorbance Detection and with Isotope-Dilution Mass Spectrometry for Determination of Isoflavones in Soy Standard Reference Materials

    PubMed Central

    Phillips, Melissa M.; Bedner, Mary; Gradl, Manuela; Burdette, Carolyn Q.; Nelson, Michael A.; Yen, James H.; Sander, Lane C.; Rimmer, Catherine A.

    2017-01-01

    Two independent analytical approaches, based on liquid chromatography with absorbance detection and liquid chromatography with mass spectrometric detection, have been developed for determination of isoflavones in soy materials. These two methods yield comparable results for a variety of soy-based foods and dietary supplements. Four Standard Reference Materials (SRMs) have been produced by the National Institute of Standards and Technology to assist the food and dietary supplement community in method validation and have been assigned values for isoflavone content using both methods. These SRMs include SRM 3234 Soy Flour, SRM 3236 Soy Protein Isolate, SRM 3237 Soy Protein Concentrate, and SRM 3238 Soy-Containing Solid Oral Dosage Form. A fifth material, SRM 3235 Soy Milk, was evaluated using the methods and found to be inhomogeneous for isoflavones and unsuitable for value assignment. PMID:27832301

  20. Accelerometer-Based Method for Extracting Respiratory and Cardiac Gating Information for Dual Gating during Nuclear Medicine Imaging

    PubMed Central

    Pänkäälä, Mikko; Paasio, Ari

    2014-01-01

    Both respiratory and cardiac motions reduce the quality and consistency of medical imaging, particularly in nuclear medicine imaging. Motion artifacts can be eliminated by gating the image acquisition based on the respiratory phase and cardiac contractions throughout the medical imaging procedure. Electrocardiography (ECG), 3-axis accelerometer, and respiration belt data were processed and analyzed from ten healthy volunteers. Seismocardiography (SCG) is a noninvasive accelerometer-based method that measures accelerations caused by respiration and myocardial movements. This study was conducted to investigate the feasibility of the accelerometer-based method in the dual gating technique. The SCG provides accelerometer-derived respiratory (ADR) data and accurate information about quiescent phases within the cardiac cycle. The correct information about the status of the ventricles and atria helps us to create an improved estimate for quiescent phases within a cardiac cycle. The correlation of ADR signals with the reference respiration belt was investigated using Pearson correlation. High linear correlation was observed between the accelerometer-based measurement and the reference measurement methods (ECG and respiration belt). Above all, due to the simplicity of the proposed method, the technique has high potential to be applied in dual gating in clinical cardiac positron emission tomography (PET) to obtain motion-free images in the future. PMID:25120563
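
    The correlation check between an ADR signal and the belt reference reduces to a single call once both signals share a time base; the signals below are synthetic stand-ins:

        import numpy as np
        from scipy.stats import pearsonr

        t = np.linspace(0, 60, 3000)                   # 60 s at 50 Hz
        belt = np.sin(2 * np.pi * 0.25 * t)            # ~15 breaths/min reference
        adr = 0.8 * belt + 0.2 * np.random.default_rng(1).normal(size=t.size)

        r, p = pearsonr(adr, belt)
        print(f"Pearson r = {r:.3f} (p = {p:.1e})")    # high linear correlation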

  1. Detection of illegal transfer of videos over the Internet

    NASA Astrophysics Data System (ADS)

    Chaisorn, Lekha; Sainui, Janya; Manders, Corey

    2010-07-01

    In this paper, a method for detecting infringements or modifications of a video in real time is proposed. The method first segments a video stream into shots, after which it extracts some reference frames as keyframes. This process is performed employing a Singular Value Decomposition (SVD) technique developed in this work. Next, for each input video (represented by its keyframes), an ordinal-based signature and SIFT (Scale Invariant Feature Transform) descriptors are generated. The ordinal-based method employs a two-level bitmap indexing scheme to construct the index for each video signature. The first level clusters all input keyframes into k clusters, while the second level converts the ordinal-based signatures into bitmap vectors. On the other hand, the SIFT-based method directly uses the descriptors as the index. Given a suspect video (being streamed or transferred on the Internet), we generate the signature (ordinal and SIFT descriptors), then compute the similarity between its signature and those in the database based on the ordinal signature and SIFT descriptors separately. For the similarity measure, besides the Euclidean distance, Boolean operators are also utilized during the matching process. We have tested our system by performing several experiments on 50 videos (each about 1/2 hour in duration) obtained from the TRECVID 2006 data set. For the experimental setup, we refer to the conditions provided by the TRECVID 2009 "Content-based copy detection" task. In addition, we refer to the requirements issued in the call for proposals by the MPEG standard on the similar task. Initial results show that our framework is effective and robust. Compared to our previous work, on top of the reductions in storage space and processing time achieved in the ordinal-based method, introducing the SIFT features allowed us to achieve an overall accuracy in F1 measure of about 96% (an improvement of about 8%).
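
    An ordinal signature of the kind used for the keyframe index can be sketched by partitioning a frame into blocks and rank-ordering the block mean intensities; the 2x2 grid here is an assumed simplification:

        import numpy as np

        def ordinal_signature(frame, grid=(2, 2)):
            """Rank order of block mean intensities; robust to global
            brightness changes, which preserve the ranking."""
            h, w = frame.shape
            gh, gw = grid
            means = [frame[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw].mean()
                     for i in range(gh) for j in range(gw)]
            return np.argsort(np.argsort(means))        # ranks 0..k-1

        f = np.arange(64, dtype=float).reshape(8, 8)
        print(ordinal_signature(f))                     # ranks of the 4 blocks
        print(ordinal_signature(f + 50))                # identical under offset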

  2. Radiometric calibration of SPOT 2 HRV - A comparison of three methods

    NASA Technical Reports Server (NTRS)

    Biggar, Stuart F.; Dinguirard, Magdeleine C.; Gellman, David I.; Henry, Patrice; Jackson, Ray D.; Moran, M. S.; Slater, Philip N.

    1991-01-01

    Three methods for determining an absolute radiometric calibration of a spacecraft optical sensor are compared. They are the well-known reflectance-based and radiance-based methods and a new method based on measurements of the ratio of diffuse-to-global irradiance at the ground. The latter will be described in detail and the comparison of the three approaches will be made with reference to the SPOT-2 HRV cameras for a field campaign 1990-06-19 through 1990-06-24 at the White Sands Missile Range in New Mexico.

  3. Unconventional tail configurations for transport aircraft

    NASA Astrophysics Data System (ADS)

    Sánchez-Carmona, A.; Cuerno-Rejado, C.; García-Hernández, L.

    2017-06-01

    This article presents the basis of a methodology for sizing unconventional tail configurations for transport aircraft. The case study of this paper is a V-tail configuration. First, an aerodynamic study is developed for determining stability derivatives and aerodynamic forces. The objective is to size a tail such that it develops at least the same static stability derivatives as a conventional reference aircraft. The optimum is obtained by minimizing its weight. The weight is estimated through two methods: an adapted Farrar's method and a statistical method. The solution reached is heavier than the reference, but it reduces the wetted area.

  4. Passive field reflectance measurements

    NASA Astrophysics Data System (ADS)

    Weber, Christian; Schinca, Daniel C.; Tocho, Jorge O.; Videla, Fabian

    2008-10-01

    The results of reflectance measurements performed with a three-band passive radiometer with independent channels for solar irradiance reference are presented. Comparative operation between the traditional method, which uses downward-looking field and reference white panel measurements, and the new approach, involving duplicated downward- and upward-looking spectral channels (each upward-looking channel with its own diffuser), is analyzed. The results indicate that the latter method performs in very good agreement with the standard method and is more suitable for passive sensors under rapidly changing atmospheric conditions (such as clouds, dust, mist, smog and other scatterers), since a more reliable synchronous recording of reference and incident light is achieved. Besides, having separate channels for the reference and the signal allows a better balancing of gains in the amplifiers for each spectral channel. We show the results obtained in the determination of the normalized difference vegetation index (NDVI) corresponding to the 2004-2007 field experiments concerning weed detection in soybean stubbles and fertilizer level assessment in wheat. The method may be used to refine sensor-based nitrogen fertilizer rate recommendations and to determine suitable zones for herbicide applications.
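
    The NDVI computation itself is the standard band ratio; with duplicated channels, each band's reflectance is the downward-looking signal divided by its own synchronously recorded incident reference. A sketch with invented instrument readings:

        # Reflectance per band = target radiance / synchronously measured
        # incident irradiance (per-channel diffuser), then the usual NDVI.
        red_down, red_up = 0.8, 10.0      # downward signal, upward reference
        nir_down, nir_up = 4.5, 9.0       # (invented instrument readings)

        red = red_down / red_up
        nir = nir_down / nir_up
        ndvi = (nir - red) / (nir + red)
        print(round(ndvi, 3))             # ~0.724, typical of dense vegetation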

  5. New decision criteria for selecting delta check methods based on the ratio of the delta difference to the width of the reference range can be generally applicable for each clinical chemistry test item.

    PubMed

    Park, Sang Hyuk; Kim, So-Young; Lee, Woochang; Chun, Sail; Min, Won-Ki

    2012-09-01

    Many laboratories use 4 delta check methods: delta difference, delta percent change, rate difference, and rate percent change. However, guidelines regarding decision criteria for selecting delta check methods have not yet been provided. We present new decision criteria for selecting delta check methods for each clinical chemistry test item. We collected 811,920 and 669,750 paired (present and previous) test results for 27 clinical chemistry test items from inpatients and outpatients, respectively. We devised new decision criteria for the selection of delta check methods based on the ratio of the delta difference to the width of the reference range (DD/RR). Delta check methods based on these criteria were compared with those based on the CV% of the absolute delta difference (ADD) as well as those reported in 2 previous studies. The delta check methods suggested by new decision criteria based on the DD/RR ratio corresponded well with those based on the CV% of the ADD except for only 2 items each in inpatients and outpatients. Delta check methods based on the DD/RR ratio also corresponded with those suggested in the 2 previous studies, except for 1 and 7 items in inpatients and outpatients, respectively. The DD/RR method appears to yield more feasible and intuitive selection criteria and can easily explain changes in the results by reflecting both the biological variation of the test item and the clinical characteristics of patients in each laboratory. We suggest this as a measure to determine delta check methods.
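
    The proposed criterion is straightforward to compute per test item: the delta difference between paired results divided by the width of the reference range. A minimal sketch with invented values:

        def dd_rr(present, previous, ref_low, ref_high):
            """Ratio of the delta difference to the reference-range width."""
            return abs(present - previous) / (ref_high - ref_low)

        # Serum potassium (reference range 3.5-5.1 mmol/L, invented results):
        ratio = dd_rr(present=5.9, previous=4.1, ref_low=3.5, ref_high=5.1)
        print(round(ratio, 2))   # 1.12 -> large relative to the range width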

  6. Advanced propulsion for LEO-Moon transport. 1: A method for evaluating advanced propulsion performance

    NASA Technical Reports Server (NTRS)

    Stern, Martin O.

    1992-01-01

    This report describes a study to evaluate the benefits of advanced propulsion technologies for transporting materials between low Earth orbit and the Moon. A relatively conventional reference transportation system, and several other systems, each of which includes one advanced technology component, are compared in terms of how well they perform a chosen mission objective. The evaluation method is based on a pairwise life-cycle cost comparison of each of the advanced systems with the reference system. Somewhat novel and economically important features of the procedure are the inclusion not only of mass payback ratios based on Earth launch costs, but also of repair and capital acquisition costs, and of adjustments in the latter to reflect the technological maturity of the advanced technologies. The required input information is developed by panels of experts. The overall scope and approach of the study are presented in the introduction. The bulk of the paper describes the evaluation method; the reference system and an advanced transportation system, including a spinning tether in an eccentric Earth orbit, are used to illustrate it.

  7. A method of 3D object recognition and localization in a cloud of points

    NASA Astrophysics Data System (ADS)

    Bielicki, Jerzy; Sitnik, Robert

    2013-12-01

    The proposed method given in this article is prepared for the analysis of data in the form of a cloud of points directly from 3D measurements. It is designed for use in end-user applications that can be directly integrated with 3D scanning software. The method utilizes locally calculated feature vectors (FVs) in point cloud data. Recognition is based on comparison of the analyzed scene with a reference object library. A global descriptor in the form of a set of spatially distributed FVs is created for each reference model. During the detection process, the correlation of subsets of reference FVs with FVs calculated in the scene is computed. Features utilized in the algorithm are based on parameters which qualitatively estimate mean and Gaussian curvatures. Replacement of differentiation with averaging in the curvature estimation makes the algorithm more resistant to discontinuities and poor quality of the input data. Utilization of the FV subsets allows detection of partially occluded and cluttered objects in the scene, while additional spatial information maintains the false positive rate at a reasonably low level.

  8. Looking back on a decade of barcoding crustaceans

    PubMed Central

    Raupach, Michael J.; Radulovici, Adriana E.

    2015-01-01

    Species identification represents a pivotal component for large-scale biodiversity studies and conservation planning but represents a challenge for many taxa when using morphological traits only. Consequently, alternative identification methods based on molecular markers have been proposed. In this context, DNA barcoding has become a popular and accepted method for the identification of unknown animals across all life stages by comparison to a reference library. In this review we examine the progress of barcoding studies for the Crustacea using the Web of Science database from 2003 to 2014. All references were classified in terms of taxonomy covered, subject area (identification/library, genetic variability, species descriptions, phylogenetics, methods, pseudogenes/numts), habitat, geographical area, authors, journals, citations, and the use of the Barcode of Life Data Systems (BOLD). Our analysis revealed a total number of 164 barcoding studies for crustaceans, with a preference for malacostracan crustaceans, in particular Decapoda, and for building reference libraries in order to identify organisms. So far, BOLD has not established itself as a popular informatics platform among carcinologists, although it offers many advantages for standardized data storage, analyses and publication. PMID:26798245

  9. A novel strategy with standardized reference extract qualification and single compound quantitative evaluation for quality control of Panax notoginseng used as a functional food.

    PubMed

    Li, S P; Qiao, C F; Chen, Y W; Zhao, J; Cui, X M; Zhang, Q W; Liu, X M; Hu, D J

    2013-10-25

    Root of Panax notoginseng (Burk.) F.H. Chen (Sanqi in Chinese) is a traditional Chinese medicine (TCM)-based functional food. Saponins are the major bioactive components. The shortage of reference compounds or chemical standards is one of the main bottlenecks for quality control of TCMs. A novel strategy, i.e., qualification based on a standardized reference extract and quantitative estimation of multiple analytes calibrated against a single compound, was proposed to easily and effectively control the quality of natural functional foods such as Sanqi. The feasibility and credibility of this methodology were also assessed with a newly developed fast HPLC method. Five saponins, including ginsenoside Rg1, Re, Rb1, Rd and notoginsenoside R1, were rapidly separated using conventional HPLC in 20 min. The quantification method was also compared with the individual calibration curve method. The strategy is feasible and credible, and is easily and effectively adapted for improving the quality control of natural functional foods such as Sanqi. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Electrophysiological Responses to Expectancy Violations in Semantic and Gambling Tasks: A Comparison of Different EEG Reference Approaches

    PubMed Central

    Li, Ya; Wang, Yongchun; Zhang, Baoqiang; Wang, Yonghui; Zhou, Xiaolin

    2018-01-01

    Dynamically evaluating the outcomes of our actions and thoughts is a fundamental cognitive ability. Given its excellent temporal resolution, the event-related potential (ERP) technology has been used to address this issue. The feedback-related negativity (FRN) component of ERPs has been studied intensively with the averaged linked mastoid reference method (LM). However, it is unknown whether FRN can be induced by an expectancy violation in an antonym relations context and whether LM is the most suitable reference approach. To address these issues, the current research directly compared the ERP components induced by expectancy violations in antonym expectation and gambling tasks with a within-subjects design and investigated the effect of the reference approach on the experimental effects. Specifically, we systematically compared the influence of the LM, reference electrode standardization technique (REST) and average reference (AVE) approaches on the amplitude, scalp distribution and magnitude of ERP effects as a function of expectancy violation type. The expectancy deviation in the antonym expectation task elicited an N400 effect that differed from the FRN effect induced in the gambling task; this difference was confirmed by all three reference methods. Both the amplitudes of the ERP effects (N400 and FRN) and their increase in magnitude with growing expectancy violation were greater under the LM approach than under the REST approach, followed by the AVE approach. Based on the statistical results, the electrode sites that showed the N400 and FRN effects critically depended on the reference method, and the results of the REST analysis were consistent with previous ERP studies. Combined with evidence from simulation studies, we suggest that REST is an optional reference method to be used in future ERP data analysis. PMID:29615858

  11. De novo assembly of highly polymorphic metagenomic data using in situ generated reference sequences and a novel BLAST-based assembly pipeline.

    PubMed

    Lin, You-Yu; Hsieh, Chia-Hung; Chen, Jiun-Hong; Lu, Xuemei; Kao, Jia-Horng; Chen, Pei-Jer; Chen, Ding-Shinn; Wang, Hurng-Yi

    2017-04-26

    The accuracy of metagenomic assembly is usually compromised by high levels of polymorphism, because divergent reads from the same genomic region are recognized as different loci when sequenced and assembled together. A viral quasispecies is a group of abundant and diversified genetically related viruses found in a single carrier. Current mainstream assembly methods, such as Velvet and SOAPdenovo, were not originally intended for the assembly of such metagenomic data, so new methods are needed to provide accurate and informative assembly results for metagenomic data. In this study, we present a hybrid method for assembling highly polymorphic data, combining the partial de novo-reference assembly (PDR) strategy and the BLAST-based assembly pipeline (BBAP). The PDR strategy generates in situ reference sequences through de novo assembly of a randomly extracted partial data set, which is subsequently used for the reference assembly of the full data set. BBAP employs a greedy algorithm to assemble polymorphic reads. We used 12 hepatitis B virus quasispecies NGS data sets from a previous study to assess and compare the performance of both PDR and BBAP. Analyses suggest that the high polymorphism of a full metagenomic data set leads to fragmented de novo assembly results, whereas the biased or limited representation of external reference sequences incorporated fewer reads into the assembly, with lower assembly accuracy and variation sensitivity. In comparison, the PDR-generated in situ reference sequence incorporated more reads into the final PDR assembly of the full metagenomic data set, along with greater accuracy and higher variation sensitivity. BBAP assembly results also suggest higher assembly efficiency and accuracy compared to other assembly methods. Additionally, the BBAP assembly recovered HBV structural variants that were not observed among the assembly results of other methods. Together, PDR/BBAP assembly results were significantly better than those of the other compared methods. Both PDR and BBAP independently increased the assembly efficiency and accuracy of highly polymorphic data, and assembly performance was further improved when they were used together. BBAP also provides nucleotide frequency information. Together, PDR and BBAP provide powerful tools for metagenomic data studies.

  12. Visualizing the movement of the contact between vocal folds during vibration by using array-based transmission ultrasonic glottography

    PubMed Central

    Jing, Bowen; Chigan, Pengju; Ge, Zhengtong; Wu, Liang; Wang, Supin; Wan, Mingxi

    2017-01-01

    For the purpose of noninvasively visualizing the dynamics of the contact between vibrating vocal fold medial surfaces, an ultrasonic imaging method which is referred to as array-based transmission ultrasonic glottography is proposed. An array of ultrasound transducers is used to detect the ultrasound wave transmitted from one side of the vocal folds to the other side through the small-sized contact between the vocal folds. A passive acoustic mapping method is employed to visualize and locate the contact. The results of the investigation using tissue-mimicking phantoms indicate that it is feasible to use the proposed method to visualize and locate the contact between soft tissues. Furthermore, the proposed method was used for investigating the movement of the contact between the vibrating vocal folds of excised canine larynges. The results indicate that the vertical movement of the contact can be visualized as a vertical movement of a high-intensity stripe in a series of images obtained by using the proposed method. Moreover, a visualization and analysis method, which is referred to as array-based ultrasonic kymography, is presented. The velocity of the vertical movement of the contact, which is estimated from the array-based ultrasonic kymogram, could reach 0.8 m/s during the vocal fold vibration. PMID:28599522

  13. Determination of the reference position in the near-infrared non-invasive blood glucose measurement in vivo

    NASA Astrophysics Data System (ADS)

    Han, Guang; Liu, Jin; Liu, Rong; Xu, Kexin

    2016-10-01

    The position-based reference measurement method is regarded as one of the most promising methods in non-invasive measurement of blood glucose based on spectroscopic methodology. Selecting an appropriate source-detector separation as the reference position is important for subtracting the influence of background changes while reducing the loss of useful signals. Our group proposed a special source-detector separation, named the floating-reference position, where the signal contains only the background change; that is to say, the signal at this source-detector separation is uncorrelated with glucose concentration. The existence of the floating-reference position has been verified in a three-layer skin model by Monte Carlo simulation and in in vitro experiments. But it is difficult to verify the existence of the floating-reference position on the human body because the interference is more complex during in vivo experiments. Aiming at this situation, this paper studies the determination of the best reference position on the human body by collecting signals at several source-detector separations on the palm and measuring the true blood glucose levels during oral glucose tolerance test (OGTT) experiments on 3 volunteers. A partial least squares (PLS) calibration model is established between the signals at every source-detector separation and the corresponding blood glucose levels. The results show that the correlation coefficient (R) between 1.32 mm and 1.88 mm is the lowest, so separations in this range can be used as a reference for background correction. The signal at this special position is important for improving the accuracy of near-infrared non-invasive blood glucose measurement.
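
    The calibration step maps the spectra collected at one source-detector separation to the measured glucose levels; a minimal PLS sketch with synthetic data (the component count is an assumed tuning choice):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(2)
        spectra = rng.normal(size=(40, 120))            # 40 OGTT samples (synthetic)
        glucose = spectra[:, 10] * 3 + 5 + rng.normal(scale=0.1, size=40)

        pls = PLSRegression(n_components=3).fit(spectra[:30], glucose[:30])
        pred = pls.predict(spectra[30:]).ravel()
        r = np.corrcoef(pred, glucose[30:])[0, 1]
        print(f"correlation coefficient R = {r:.3f}")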

  14. Identification and assembly of genomes and genetic elements in complex metagenomic samples without using reference genomes.

    PubMed

    Nielsen, H Bjørn; Almeida, Mathieu; Juncker, Agnieszka Sierakowska; Rasmussen, Simon; Li, Junhua; Sunagawa, Shinichi; Plichta, Damian R; Gautier, Laurent; Pedersen, Anders G; Le Chatelier, Emmanuelle; Pelletier, Eric; Bonde, Ida; Nielsen, Trine; Manichanh, Chaysavanh; Arumugam, Manimozhiyan; Batto, Jean-Michel; Quintanilha Dos Santos, Marcelo B; Blom, Nikolaj; Borruel, Natalia; Burgdorf, Kristoffer S; Boumezbeur, Fouad; Casellas, Francesc; Doré, Joël; Dworzynski, Piotr; Guarner, Francisco; Hansen, Torben; Hildebrand, Falk; Kaas, Rolf S; Kennedy, Sean; Kristiansen, Karsten; Kultima, Jens Roat; Léonard, Pierre; Levenez, Florence; Lund, Ole; Moumen, Bouziane; Le Paslier, Denis; Pons, Nicolas; Pedersen, Oluf; Prifti, Edi; Qin, Junjie; Raes, Jeroen; Sørensen, Søren; Tap, Julien; Tims, Sebastian; Ussery, David W; Yamada, Takuji; Renault, Pierre; Sicheritz-Ponten, Thomas; Bork, Peer; Wang, Jun; Brunak, Søren; Ehrlich, S Dusko

    2014-08-01

    Most current approaches for analyzing metagenomic data rely on comparisons to reference genomes, but the microbial diversity of many environments extends far beyond what is covered by reference databases. De novo segregation of complex metagenomic data into specific biological entities, such as particular bacterial strains or viruses, remains a largely unsolved problem. Here we present a method, based on binning co-abundant genes across a series of metagenomic samples, that enables comprehensive discovery of new microbial organisms, viruses and co-inherited genetic entities and aids assembly of microbial genomes without the need for reference sequences. We demonstrate the method on data from 396 human gut microbiome samples and identify 7,381 co-abundance gene groups (CAGs), including 741 metagenomic species (MGS). We use these to assemble 238 high-quality microbial genomes and identify affiliations between MGS and hundreds of viruses or genetic entities. Our method provides the means for comprehensive profiling of the diversity within complex metagenomic samples.

  15. Identification procedure for epistemic uncertainties using inverse fuzzy arithmetic

    NASA Astrophysics Data System (ADS)

    Haag, T.; Herrmann, J.; Hanss, M.

    2010-10-01

    For the mathematical representation of systems with epistemic uncertainties, arising, for example, from simplifications in the modeling procedure, models with fuzzy-valued parameters prove to be a suitable and promising approach. In practice, however, the determination of these parameters turns out to be a non-trivial problem. The identification procedure to appropriately update these parameters on the basis of a reference output (measurement or output of an advanced model) requires the solution of an inverse problem. Against this background, an inverse method for the computation of the fuzzy-valued parameters of a model with epistemic uncertainties is presented. This method stands out due to the fact that it only uses feedforward simulations of the model, based on the transformation method of fuzzy arithmetic, along with the reference output. An inversion of the system equations is not necessary. The advancement of the method presented in this paper consists of the identification of multiple input parameters based on a single reference output or measurement. An optimization is used to solve the resulting underdetermined problems by minimizing the uncertainty of the identified parameters. Regions where the identification procedure is reliable are determined by the computation of a feasibility criterion which is also based on the output data of the transformation method only. For a frequency response function of a mechanical system, this criterion allows a restriction of the identification process to some special range of frequency where its solution can be guaranteed. Finally, the practicability of the method is demonstrated by covering the measured output of a fluid-filled piping system by the corresponding uncertain FE model in a conservative way.

  16. Near-edge X-ray refraction fine structure microscopy

    DOE PAGES

    Farmand, Maryam; Celestre, Richard; Denes, Peter; ...

    2017-02-06

    We demonstrate a method for obtaining increased spatial resolution and specificity in nanoscale chemical composition maps through the use of full refractive reference spectra in soft x-ray spectro-microscopy. Using soft x-ray ptychography, we measure both the absorption and refraction of x-rays through pristine reference materials as a function of photon energy and use these reference spectra as the basis for decomposing spatially resolved spectra from a heterogeneous sample, thereby quantifying the composition at high resolution. While conventional instruments are limited to absorption contrast, our novel refraction-based method takes advantage of the strongly energy-dependent scattering cross-section and can see nearly five-fold improved spatial resolution on resonance.

  17. Environmental life cycle assessment of methanol and electricity co-production system based on coal gasification technology.

    PubMed

    Śliwińska, Anna; Burchart-Korol, Dorota; Smoliński, Adam

    2017-01-01

    This paper presents a life cycle assessment (LCA) of greenhouse gas emissions generated through methanol and electricity co-production system based on coal gasification technology. The analysis focuses on polygeneration technologies from which two products are produced, and thus, issues related to an allocation procedure for LCA are addressed in this paper. In the LCA, two methods were used: a 'system expansion' method based on two approaches, the 'avoided burdens approach' and 'direct system enlargement' methods and an 'allocation' method involving proportional partitioning based on physical relationships in a technological process. Cause-effect relationships in the analysed production process were identified, allowing for the identification of allocation factors. The 'system expansion' method involved expanding the analysis to include five additional variants of electricity production technologies in Poland (alternative technologies). This method revealed environmental consequences of implementation for the analysed technologies. It was found that the LCA of polygeneration technologies based on the 'system expansion' method generated a more complete source of information on environmental consequences than the 'allocation' method. The analysis shows that alternative technologies chosen for generating LCA results are crucial. Life cycle assessment was performed for the analysed, reference and variant alternative technologies. Comparative analysis was performed between the analysed technologies of methanol and electricity co-production from coal gasification as well as a reference technology of methanol production from the natural gas reforming process. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Reference gene selection for quantitative gene expression studies during biological invasions: A test on multiple genes and tissues in a model ascidian Ciona savignyi.

    PubMed

    Huang, Xuena; Gao, Yangchun; Jiang, Bei; Zhou, Zunchun; Zhan, Aibin

    2016-01-15

    As invasive species have successfully colonized a wide range of dramatically different local environments, they offer a good opportunity to study interactions between species and rapidly changing environments. Gene expression represents one of the primary and crucial mechanisms for rapid adaptation to local environments. Here, we aim to select reference genes for quantitative gene expression analysis based on quantitative real-time PCR (qRT-PCR) for a model invasive ascidian, Ciona savignyi. We analyzed the stability of ten candidate reference genes in three tissues (siphon, pharynx and intestine) under two key environmental stresses (temperature and salinity) in the marine realm, based on three programs (geNorm, NormFinder and the delta Ct method). Our results demonstrated only minor differences in stability rankings among the three methods. The use of different single reference genes might influence the data interpretation, while multiple reference genes could minimize possible errors. Therefore, reference gene combinations were recommended for different tissues - the optimal reference gene combination for the siphon was RPS15 and RPL17 under temperature stress, and RPL17, UBQ and TubA under salinity treatment; for the pharynx, TubB, TubA and RPL17 were the most stable genes under temperature stress, while TubB, TubA and UBQ were the best under salinity stress; for the intestine, UBQ, RPS15 and RPL17 were the most reliable reference genes under both treatments. Our results suggest the necessity of selecting and testing reference genes for different tissues under varying environmental stresses. The results obtained here are expected to help reveal mechanisms of gene expression-mediated invasion success using C. savignyi as a model species. Copyright © 2015 Elsevier B.V. All rights reserved.
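
    Of the three programs compared, the comparative delta Ct method is simple enough to sketch directly: a candidate gene's stability is the mean standard deviation of its pairwise Ct differences against the other candidates; the Ct values below are invented:

        import numpy as np

        ct = {  # Ct values across 6 samples (invented)
            "RPS15": np.array([18.1, 18.3, 18.2, 18.4, 18.2, 18.3]),
            "RPL17": np.array([20.0, 20.2, 20.1, 20.3, 20.1, 20.2]),
            "UBQ":   np.array([22.5, 23.4, 22.1, 23.9, 22.8, 23.1]),
        }

        def delta_ct_stability(ct):
            """Mean SD of pairwise delta-Ct per gene; lower = more stable."""
            genes = list(ct)
            return {g: np.mean([np.std(ct[g] - ct[h]) for h in genes if h != g])
                    for g in genes}

        for g, s in sorted(delta_ct_stability(ct).items(), key=lambda kv: kv[1]):
            print(g, round(s, 3))        # RPS15/RPL17 rank as most stable here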

  19. Validation of a T1 and T2* leakage correction method based on multi-echo DSC-MRI using MION as a reference standard

    PubMed Central

    Stokes, Ashley M.; Semmineh, Natenael; Quarles, C. Chad

    2015-01-01

    Purpose A combined biophysical- and pharmacokinetic-based method is proposed to separate, quantify, and correct for both T1 and T2* leakage effects using dual-echo DSC acquisitions to provide more accurate hemodynamic measures, as validated by a reference intravascular contrast agent (CA). Methods Dual-echo DSC-MRI data were acquired in two rodent glioma models. The T1 leakage effects were removed and also quantified in order to subsequently correct for the remaining T2* leakage effects. Pharmacokinetic, biophysical, and combined biophysical and pharmacokinetic models were used to obtain corrected cerebral blood volume (CBV) and cerebral blood flow (CBF), and these were compared with CBV and CBF from an intravascular CA. Results T1-corrected CBV was significantly overestimated compared to MION CBV, while T1+T2*-correction yielded CBV values closer to the reference values. The pharmacokinetic and simplified biophysical methods showed similar results and underestimated CBV in tumors exhibiting strong T2* leakage effects. The combined method was effective for correcting T1 and T2* leakage effects across tumor types. Conclusions Correcting for both T1 and T2* leakage effects yielded more accurate measures of CBV. The combined correction method yields more reliable CBV measures than either correction method alone, but for certain brain tumor types (e.g., gliomas) the simplified biophysical method may provide a robust and computationally efficient alternative. PMID:26362714
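
    The T1-removal step exploits the fact that both echoes share the same T1 weighting, so the echo ratio cancels T1 leakage and leaves a pure R2* time course. A minimal sketch of that step, with assumed echo times and baseline length (the subsequent model-based T2* leakage correction is not shown):

        import numpy as np

        TE1, TE2 = 0.007, 0.030      # echo times in seconds (assumed values)
        N_BASELINE = 20              # number of pre-bolus time points (assumed)

        def delta_r2star(s_te1, s_te2):
            """Compute the T1-insensitive delta-R2*(t) from dual-echo signal
            time courses (1D arrays over time)."""
            r2star = np.log(s_te1 / s_te2) / (TE2 - TE1)   # T1-free R2*(t)
            return r2star - r2star[:N_BASELINE].mean()     # subtract pre-bolus baseline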

  20. Point cloud registration from local feature correspondences-Evaluation on challenging datasets.

    PubMed

    Petricek, Tomas; Svoboda, Tomas

    2017-01-01

    Registration of laser scans, or point clouds in general, is a crucial step of localization and mapping with mobile robots or in object modeling pipelines. A coarse alignment of the point clouds is generally needed before applying local methods such as the Iterative Closest Point (ICP) algorithm. We propose a feature-based approach to point cloud registration and evaluate the proposed method and its individual components on challenging real-world datasets. For a moderate overlap between the laser scans, the method provides superior registration accuracy compared to state-of-the-art methods including Generalized ICP, 3D Normal-Distribution Transform, Fast Point-Feature Histograms, and 4-Points Congruent Sets. Using points rather than surface normals as the underlying features yields higher performance in both keypoint detection and establishing local reference frames. Moreover, sign disambiguation of the basis vectors proves to be an important aspect of creating repeatable local reference frames. A novel method for sign disambiguation is proposed which yields highly repeatable reference frames.
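
    One widely used sign-disambiguation rule (not necessarily the authors' novel variant) flips each covariance eigenvector so that it points toward the majority of the neighboring points, which makes the frame repeatable across scans. A minimal sketch under that assumption:

        import numpy as np

        def local_reference_frame(neighbors, keypoint):
            """Build a repeatable local reference frame at a keypoint from its
            neighboring points (N x 3 array)."""
            offsets = neighbors - keypoint                 # neighbor offsets
            cov = offsets.T @ offsets / len(offsets)       # local covariance
            _, eigvecs = np.linalg.eigh(cov)               # ascending eigenvalues
            axes = eigvecs[:, ::-1].T                      # rows: x, y, z axes
            for i in (0, 2):                               # disambiguate x and z signs
                if np.sum(offsets @ axes[i]) < 0:
                    axes[i] = -axes[i]
            axes[1] = np.cross(axes[2], axes[0])           # y = cross(z, x): right-handed
            return axes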

  1. Marker-free motion correction in weight-bearing cone-beam CT of the knee joint

    PubMed Central

    Berger, M.; Müller, K.; Aichert, A.; Unberath, M.; Thies, J.; Choi, J.-H.; Fahrig, R.; Maier, A.

    2016-01-01

    Purpose: To allow for a purely image-based motion estimation and compensation in weight-bearing cone-beam computed tomography of the knee joint. Methods: Weight-bearing imaging of the knee joint in a standing position poses additional requirements for the image reconstruction algorithm. In contrast to supine scans, patient motion needs to be estimated and compensated. The authors propose a method that is based on 2D/3D registration of left and right femur and tibia segmented from a prior, motion-free reconstruction acquired in supine position. Each segmented bone is first roughly aligned to the motion-corrupted reconstruction of a scan in standing or squatting position. Subsequently, a rigid 2D/3D registration is performed for each bone to each of K projection images, estimating 6 × 4 × K motion parameters. The motion of individual bones is combined into global motion fields using thin-plate-spline extrapolation. These can be incorporated into a motion-compensated reconstruction in the backprojection step. The authors performed visual and quantitative comparisons between a state-of-the-art marker-based (MB) method and two variants of the proposed method using gradient correlation (GC) and normalized gradient information (NGI) as the similarity measure for the 2D/3D registration. Results: The authors evaluated their method on four acquisitions under different squatting positions of the same patient. All methods showed substantial improvement in image quality compared to the uncorrected reconstructions. Compared to NGI and MB, the GC method showed increased streaking artifacts due to misregistrations in lateral projection images. NGI and MB showed comparable image quality at the bone regions. Because the markers are attached to the skin, the MB method performed better at the surface of the legs, where the authors observed slight streaking of the NGI and GC methods. For a quantitative evaluation, the authors computed the universal quality index (UQI) for all bone regions with respect to the motion-free reconstruction. The authors' quantitative evaluation over regions around the bones yielded a mean UQI of 18.4 for no correction, 53.3 and 56.1 for the proposed method using GC and NGI, respectively, and 53.7 for the MB reference approach. In contrast to the authors' registration-based corrections, the MB reference method caused slight nonrigid deformations at bone outlines when compared to a motion-free reference scan. Conclusions: The authors showed that their method based on the NGI similarity measure yields reconstruction quality close to the MB reference method. In contrast to the MB method, the proposed method does not require any preparation prior to the examination, which will improve the clinical workflow and patient comfort. Further, the authors found that the MB method causes small, nonrigid deformations at the bone outline, which indicates that markers may not accurately reflect the internal motion close to the knee joint. Therefore, the authors believe that the proposed method is a promising alternative to MB motion management. PMID:26936708

  2. MRAC Revisited: Guaranteed Performance with Reference Model Modification

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmaje

    2010-01-01

    This paper presents a modification of the conventional model reference adaptive control (MRAC) architecture in order to achieve guaranteed transient performance in both the output and input signals of an uncertain system. The proposed modification is based on feeding the tracking error back to the reference model. It is shown that this approach guarantees tracking of a given command and of the ideal control signal (the one that would be designed if the system were known) not only asymptotically but also in the transient, by a proper selection of the error feedback gain. The method prevents the generation of high-frequency oscillations that are unavoidable in conventional MRAC systems at large adaptation rates. The provided design guideline makes it possible to track a reference command of any magnitude from any initial position without re-tuning. The benefits of the method are demonstrated in simulations.
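
    The core idea can be illustrated with a scalar simulation in which the tracking error is fed back into the reference model; the plant parameters and gains below are assumed for demonstration and are not taken from the paper:

        import numpy as np

        a_true, b_true = 1.0, 1.0     # unknown, unstable plant: dx = a*x + b*u
        a_m, b_m = -2.0, 2.0          # reference model: dxm = a_m*xm + b_m*r + ell*e
        ell = 10.0                    # error feedback gain to the reference model
        gamma = 100.0                 # adaptation rate
        dt, T = 1e-4, 5.0
        x = xm = kx = kr = 0.0
        for step in range(int(T / dt)):
            r = 1.0 if step * dt < 2.5 else -1.0   # step reference command
            e = x - xm                             # error fed back to reference model
            u = kx * x + kr * r                    # adaptive control law
            x += dt * (a_true * x + b_true * u)
            xm += dt * (a_m * xm + b_m * r + ell * e)
            kx += dt * (-gamma * x * e)            # gradient adaptive laws,
            kr += dt * (-gamma * r * e)            # assuming sign(b) = +1
        print(f"final tracking error: {x - xm:.5f}")

    Larger values of the feedback gain ell pull the reference model toward the plant during transients, which is what damps the high-frequency oscillations that plain MRAC exhibits at large adaptation rates.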

  3. A fast and automatic mosaic method for high-resolution satellite images

    NASA Astrophysics Data System (ADS)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlap rectangle is computed from the geographical locations of the reference and mosaic images, and feature points are extracted from both images by a scale-invariant feature transform (SIFT) algorithm, restricted to the overlap region. Then, the RANSAC method is used to establish robust feature-point correspondences between the two images. Finally, the two images are fused into a seamless panoramic image by simple linear weighted fusion or another blending method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on Worldview-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method detects feature points efficiently and mosaics images automatically.
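
    A hedged Python sketch of this pipeline (the authors implement it in C++ with OpenCV and GDAL); the file names and the overlap rectangle are placeholders for values that would be computed from the images' geographic metadata:

        import cv2
        import numpy as np

        ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)
        mos = cv2.imread("mosaic.tif", cv2.IMREAD_GRAYSCALE)
        y0, y1, x0, x1 = 0, 1024, 3072, 4096      # overlap rectangle (placeholder)

        # Restrict SIFT extraction to the overlap region via masks.
        mask_ref = np.zeros_like(ref); mask_ref[y0:y1, x0:x1] = 255
        mask_mos = np.zeros_like(mos); mask_mos[y0:y1, x0:x1] = 255
        sift = cv2.SIFT_create()
        kp_r, des_r = sift.detectAndCompute(ref, mask_ref)
        kp_m, des_m = sift.detectAndCompute(mos, mask_mos)

        # Ratio-test matching, then RANSAC to reject outlier correspondences.
        good = [m for m, n in cv2.BFMatcher().knnMatch(des_m, des_r, k=2)
                if m.distance < 0.75 * n.distance]
        src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        # Warp the mosaic image into the reference frame; a linear weighted
        # blend across the overlap then yields the seamless panorama.
        warped = cv2.warpPerspective(mos, H, (ref.shape[1], ref.shape[0]))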

  4. Fast auto-focus scheme based on optical defocus fitting model

    NASA Astrophysics Data System (ADS)

    Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min

    2018-04-01

    An optical defocus fitting model (ODFM)-based auto-focus scheme is proposed. Starting from basic optical defocus principles, the optical defocus fitting model is derived to approximate the potential in-focus position. With this accurate modelling, the proposed auto-focus scheme can drive the stepping motor toward the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential in-focus position based on the proposed ODFM method. Around the estimated potential in-focus position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential in-focus image to confirm the in-focus position using a contrast-based method. Experimental results show that the proposed scheme can complete auto-focus within only 5 to 7 steps with good performance even under low-light conditions.
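
    The final contrast-based confirmation can use any standard focus measure; a minimal sketch using the variance of the Laplacian (one common choice, not necessarily the measure used in the paper), with placeholder image names:

        import cv2

        def focus_measure(path):
            """Variance of the Laplacian: higher = sharper image."""
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            return cv2.Laplacian(img, cv2.CV_64F).var()

        candidates = {"reference 1": "ref_before.png",
                      "potential focus": "potential_focus.png",
                      "reference 2": "ref_after.png"}
        best = max(candidates, key=lambda k: focus_measure(candidates[k]))
        print("in-focus position:", best)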

  5. The Research Diagnostic Criteria for Temporomandibular Disorders. I: overview and methodology for assessment of validity.

    PubMed

    Schiffman, Eric L; Truelove, Edmond L; Ohrbach, Richard; Anderson, Gary C; John, Mike T; List, Thomas; Look, John O

    2010-01-01

    The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. The aim of this article is to provide an overview of the project's methodology, descriptive statistics, and data for the study participant sample. This article also details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. The Axis I reference standards were based on the consensus of two criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion examination reliability was also assessed within study sites. Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, κ = 0.53). Intrasite criterion examiner agreement with reference standards was excellent (κ ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (κ = 0.71 and 0.84, respectively). The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods.

  6. Evaluation of the reliability of maize reference assays for GMO quantification.

    PubMed

    Papazova, Nina; Zhang, David; Gruden, Kristina; Vojvoda, Jana; Yang, Litao; Buh Gasparic, Meti; Blejec, Andrej; Fouilloux, Stephane; De Loose, Marc; Taverniers, Isabel

    2010-03-01

    A reliable PCR reference assay for relative genetically modified organism (GMO) quantification must be specific for the target taxon and amplify uniformly across the commercialised varieties within the considered taxon. Different reference assays for maize (Zea mays L.) are used in official methods for GMO quantification. In this study, we evaluated the reliability of eight existing maize reference assays, four of which are used in combination with an event-specific polymerase chain reaction (PCR) assay validated and published by the Community Reference Laboratory (CRL). We analysed the nucleotide sequence variation in the target genomic regions in a broad range of transgenic and conventional varieties and lines: MON 810 varieties cultivated in Spain and conventional varieties of various geographical origins and breeding histories. In addition, the reliability of the assays was evaluated based on their PCR amplification performance. A single base pair substitution, corresponding to a single nucleotide polymorphism (SNP) reported in an earlier study, was observed in the forward primer of one of the studied alcohol dehydrogenase 1 (Adh1) (70) assays in a large number of varieties. The presence of the SNP is consistent with the poor PCR performance observed for this assay across the tested varieties. The obtained data show that the Adh1 (70) assay used in the official CRL NK603 method is unreliable. Based on our results from both the nucleotide stability study and the PCR performance test, we conclude that the Adh1 (136) reference assay (T25 and Bt11 assays) as well as the tested high mobility group protein gene assay, which also form part of CRL methods for quantification, are highly reliable. Despite the observed uniformity in the nucleotide sequence of the invertase gene assay target, the PCR performance test reveals that this target sequence might occur in more than one copy. Finally, although not currently part of official quantification methods, the zein and SSIIb assays are found to be highly reliable in terms of nucleotide stability and PCR performance and are proposed as good alternative targets for a maize reference assay.

  7. Automated matching of multiple terrestrial laser scans for stem mapping without the use of artificial references

    NASA Astrophysics Data System (ADS)

    Liu, Jingbin; Liang, Xinlian; Hyyppä, Juha; Yu, Xiaowei; Lehtomäki, Matti; Pyörälä, Jiri; Zhu, Lingli; Wang, Yunsheng; Chen, Ruizhi

    2017-04-01

    Terrestrial laser scanning has been widely used to analyze the 3D structure of a forest in detail and to generate data at the level of a reference plot for forest inventories without destructive measurements. Multi-scan terrestrial laser scanning is commonly applied to collect plot-level data so that all of the stems can be detected and analyzed. However, automated processing requires the point clouds of the multiple scans to be matched into a single unified point cloud; mismatches between datasets lead to errors during the processing of multi-scan data. Classic registration methods based on flat surfaces cannot be directly applied in forest environments; therefore, artificial reference objects have conventionally been used to assist with scan matching. The use of artificial references requires additional labor and expertise and greatly increases the cost. In this study, we present an automated processing method for plot-level stem mapping that matches multiple scans without artificial references. In contrast to previous studies, the registration method developed here exploits the natural geometric relationships among the tree stems in a plot and combines the point clouds of multiple scans into a unified coordinate system. Integrating multiple scans improves the overall performance of stem mapping in terms of the correctness of tree detection, as well as the bias and root-mean-square errors of forest attributes such as diameter at breast height and tree height. In addition, the automated processing makes stem mapping more reliable and consistent among plots, reduces the costs associated with plot-based stem mapping, and enhances efficiency.

  8. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    NASA Astrophysics Data System (ADS)

    Jiang, Lu; Piao, Yan

    2018-04-01

    The use of multi-view image arrays combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integrated imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. First, the depth information of the reference viewpoint image is quickly obtained, with the sum of absolute differences (SAD) chosen as the similarity measure function. The reference image is then layered and the parallax of each layer is calculated from the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and shifted. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as its high-precision requirements on the depth map and complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range at high rendering speed. On average, the method achieves satisfactory image quality: relative to real viewpoint images, the mean SSIM value reaches 0.9525, the PSNR reaches 38.353 dB, and the image histogram similarity reaches 93.77%.
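
    The SAD-based depth step can be sketched as classic block matching: for each block of the reference view, the disparity is the horizontal shift of the second view that minimizes the sum of absolute differences. The window size and search range below are assumed values:

        import numpy as np

        def sad_disparity(left, right, block=8, max_disp=32):
            """Block-wise disparity map from two rectified grayscale views."""
            h, w = left.shape
            disp = np.zeros((h // block, w // block))
            for by in range(h // block):
                for bx in range(w // block):
                    y, x = by * block, bx * block
                    patch = left[y:y + block, x:x + block].astype(np.int32)
                    best_cost, best_d = np.inf, 0
                    for d in range(min(max_disp, x) + 1):
                        cand = right[y:y + block, x - d:x - d + block].astype(np.int32)
                        cost = np.abs(patch - cand).sum()   # SAD similarity measure
                        if cost < best_cost:
                            best_cost, best_d = cost, d
                    disp[by, bx] = best_d
            return disp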

  9. Linking in situ LAI and fine resolution remote sensing data to map reference LAI over cropland and grassland using geostatistical regression method

    NASA Astrophysics Data System (ADS)

    He, Yaqian; Bo, Yanchen; Chai, Leilei; Liu, Xiaolong; Li, Aihua

    2016-08-01

    Leaf Area Index (LAI) is an important parameter of vegetation structure. A number of moderate resolution LAI products have been produced to meet the urgent need for large-scale vegetation monitoring, and high resolution LAI reference maps are necessary to validate these products. This study used a geostatistical regression (GR) method to estimate LAI reference maps by linking in situ LAI with Landsat TM/ETM+ and SPOT-HRV data over two cropland and two grassland sites. To explore the effect of different vegetation indices (VIs) on the estimated LAI reference maps, this study established GR models for the difference vegetation index (DVI), normalized difference vegetation index (NDVI), and ratio vegetation index (RVI). To further assess the performance of the GR model, the results from the GR and Reduced Major Axis (RMA) models were compared. The results show that the performance of the GR model varies between the cropland and grassland sites. At the cropland sites, the GR model based on DVI provides the best estimation, while at the grassland sites the GR model based on DVI performs poorly. Compared to the RMA model, the GR model improves the accuracy of the reference LAI maps in terms of root mean square error (RMSE) and bias.

  10. Quantitative Analysis of Qualitative Information from Interviews: A Systematic Literature Review

    ERIC Educational Resources Information Center

    Fakis, Apostolos; Hilliam, Rachel; Stoneley, Helen; Townend, Michael

    2014-01-01

    Background: A systematic literature review was conducted on mixed methods area. Objectives: The overall aim was to explore how qualitative information from interviews has been analyzed using quantitative methods. Methods: A contemporary review was undertaken and based on a predefined protocol. The references were identified using inclusion and…

  11. Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.

    PubMed

    Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael

    2015-08-01

    In this paper, we present and evaluate an automatic unsupervised segmentation method, the hierarchical segmentation approach (HSA) with Bayesian-based adaptive mean shift (BAMS), for use in the construction of a patient-specific head conductivity model for electroencephalography (EEG) source localization. It is based on an HSA and BAMS for segmenting the tissues from multi-modal magnetic resonance (MR) head images. The proposed method was evaluated both directly, in terms of segmentation accuracy, and indirectly, in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method, the brain extraction tool (BET) followed by FMRIB's automated segmentation tool (FAST), and four variants of the HSA, using both synthetic data and real data from ten subjects. The synthetic data include multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias field level. The Dice index and Hausdorff distance were used to measure segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multimodal MR data with 3% noise and synthetic EEG (generated for a prescribed source). The source localization accuracy was determined in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS, its robustness to noise and the bias field, and that it provides better segmentation accuracy than the reference method and the variants of the HSA. They also show that it leads to more accurate source localization than the commonly used reference method and suggest that it has potential as a surrogate for expert manual segmentation for the EEG source localization problem.
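
    For reference, the two accuracy measures named above can be computed for binary label masks as follows (a minimal sketch; the masks are assumed to be boolean arrays of equal shape):

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def dice(a, b):
            """Dice index: 2|A and B| / (|A| + |B|); 1.0 means perfect overlap."""
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def hausdorff(a, b):
            """Symmetric Hausdorff distance between the two masks' point sets."""
            pa, pb = np.argwhere(a), np.argwhere(b)
            return max(directed_hausdorff(pa, pb)[0],
                       directed_hausdorff(pb, pa)[0])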

  12. Marker-free motion correction in weight-bearing cone-beam CT of the knee joint.

    PubMed

    Berger, M; Müller, K; Aichert, A; Unberath, M; Thies, J; Choi, J-H; Fahrig, R; Maier, A

    2016-03-01

    To allow for a purely image-based motion estimation and compensation in weight-bearing cone-beam computed tomography of the knee joint. Weight-bearing imaging of the knee joint in a standing position poses additional requirements for the image reconstruction algorithm. In contrast to supine scans, patient motion needs to be estimated and compensated. The authors propose a method that is based on 2D/3D registration of left and right femur and tibia segmented from a prior, motion-free reconstruction acquired in supine position. Each segmented bone is first roughly aligned to the motion-corrupted reconstruction of a scan in standing or squatting position. Subsequently, a rigid 2D/3D registration is performed for each bone to each of K projection images, estimating 6 × 4 × K motion parameters. The motion of individual bones is combined into global motion fields using thin-plate-spline extrapolation. These can be incorporated into a motion-compensated reconstruction in the backprojection step. The authors performed visual and quantitative comparisons between a state-of-the-art marker-based (MB) method and two variants of the proposed method using gradient correlation (GC) and normalized gradient information (NGI) as the similarity measure for the 2D/3D registration. The authors evaluated their method on four acquisitions under different squatting positions of the same patient. All methods showed substantial improvement in image quality compared to the uncorrected reconstructions. Compared to NGI and MB, the GC method showed increased streaking artifacts due to misregistrations in lateral projection images. NGI and MB showed comparable image quality at the bone regions. Because the markers are attached to the skin, the MB method performed better at the surface of the legs, where the authors observed slight streaking of the NGI and GC methods. For a quantitative evaluation, the authors computed the universal quality index (UQI) for all bone regions with respect to the motion-free reconstruction. The authors' quantitative evaluation over regions around the bones yielded a mean UQI of 18.4 for no correction, 53.3 and 56.1 for the proposed method using GC and NGI, respectively, and 53.7 for the MB reference approach. In contrast to the authors' registration-based corrections, the MB reference method caused slight nonrigid deformations at bone outlines when compared to a motion-free reference scan. The authors showed that their method based on the NGI similarity measure yields reconstruction quality close to the MB reference method. In contrast to the MB method, the proposed method does not require any preparation prior to the examination, which will improve the clinical workflow and patient comfort. Further, the authors found that the MB method causes small, nonrigid deformations at the bone outline, which indicates that markers may not accurately reflect the internal motion close to the knee joint. Therefore, the authors believe that the proposed method is a promising alternative to MB motion management.

  13. Multicomponent blood lipid analysis by means of near infrared spectroscopy, in geese.

    PubMed

    Bazar, George; Eles, Viktoria; Kovacs, Zoltan; Romvari, Robert; Szabo, Andras

    2016-08-01

    This study provides accurate near infrared (NIR) spectroscopic models for several laboratory-determined clinicochemical parameters (total lipid (5.57±1.95 g/l), triglyceride (2.59±1.36 mmol/l), total cholesterol (3.81±0.68 mmol/l), and high density lipoprotein (HDL) cholesterol (2.45±0.58 mmol/l)) of blood serum samples of fattened geese. To increase the performance of the multivariate chemometrics, samples whose significant deviation from the regression models implied laboratory error were excluded from the final calibration datasets. Reference data of excluded samples having outlier spectra in principal component analysis (PCA) were not marked as false; samples deviating from the regression models but having non-outlier spectra in PCA were identified as having false reference constituent values. Based on these NIR selection methods, 5% of the reference measurement data were rated as doubtful. The final models reached R² values of 0.864, 0.966, 0.850 and 0.793, and RMSE values of 0.639 g/l, 0.232 mmol/l, 0.210 mmol/l and 0.241 mmol/l for total lipid, triglyceride, total cholesterol and HDL cholesterol, respectively, during independent validation. Classical analytical techniques focus on single constituents and often require chemicals, time-consuming measurements, and experienced technicians. The NIR technique provides a quick, cost-effective, non-hazardous alternative for the analysis of several constituents based on a single spectrum per sample, and it also offers the possibility of examining the laboratory reference data critically. Evaluating reference data to identify and exclude falsely analyzed samples can provide warning feedback to the reference laboratory, especially for analyses where laboratory methods are not perfectly suited to the subject material and there is an increased chance of laboratory error. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Collaborative derivation of reference intervals for major clinical laboratory tests in Japan.

    PubMed

    Ichihara, Kiyoshi; Yomamoto, Yoshikazu; Hotta, Taeko; Hosogaya, Shigemi; Miyachi, Hayato; Itoh, Yoshihisa; Ishibashi, Midori; Kang, Dongchon

    2016-05-01

    Three multicentre studies of reference intervals were conducted recently in Japan. The Committee on Common Reference Intervals of the Japan Society of Clinical Chemistry sought to establish common reference intervals for 40 laboratory tests which were measured in common in the three studies and regarded as well harmonized in Japan. The study protocols were comparable, with recruitment mostly from hospital workers with body mass index ≤28 and no medications. Age and sex distributions were equalized to obtain a final data size of 6345 individuals. Between-subgroup differences were expressed as the SD ratio (between-subgroup SD divided by the SD representing the reference interval). Between-study differences were all within acceptable levels, and thus the three datasets were merged. Adopting an SD ratio ≥0.50 as a guide, sex-specific reference intervals were found necessary for 12 assays, and age-specific reference intervals for females, partitioned at age 45, were required for five analytes. The reference intervals derived by the parametric method were appreciably narrowed by applying the latent abnormal values exclusion method for 10 items closely associated with disorders prevalent among healthy individuals. Sex- and age-related profiles of reference values, derived from individuals with no abnormal results in major tests, showed peculiar patterns specific to each analyte. Common reference intervals for nationwide use were thus developed for 40 major tests, based on three multicentre studies and advanced statistical methods. Sex- and age-related profiles of reference values are of great relevance not only for interpreting test results, but also for applying the clinical decision limits specified in various clinical guidelines. © The Author(s) 2015.
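
    The partitioning logic can be sketched as follows; the analyte values, the parametric interval form, and the conversion of the reference interval to an SD (width/3.92) are assumptions for demonstration only:

        import numpy as np

        def parametric_reference_interval(values):
            """Simple parametric central 95% interval, assuming ~Gaussian values."""
            m, s = np.mean(values), np.std(values, ddof=1)
            return m - 1.96 * s, m + 1.96 * s

        def sd_ratio(subgroup_values):
            """Between-subgroup SD divided by the SD representing the interval."""
            means = [np.mean(v) for v in subgroup_values]
            pooled = np.concatenate(subgroup_values)
            lo, hi = parametric_reference_interval(pooled)
            return np.std(means, ddof=1) / ((hi - lo) / 3.92)

        males = np.random.normal(14.5, 1.0, 500)     # illustrative analyte values
        females = np.random.normal(12.8, 1.0, 500)
        if sd_ratio([males, females]) >= 0.50:       # guide value used in the study
            print("report sex-specific reference intervals")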

  15. SU-G-BRB-14: Uncertainty of Radiochromic Film Based Relative Dose Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devic, S; Tomic, N; DeBlois, F

    2016-06-15

    Purpose: Due to its inherently non-linear dose response, measurement of relative dose distribution with radiochromic film requires measurement of absolute dose using a calibration curve following a previously established reference dosimetry protocol. On the other hand, a functional form that converts the inherently non-linear dose response curve of the radiochromic film dosimetry system into a linear one has been proposed recently [Devic et al, Med. Phys. 39 4850-4857 (2012)]. However, the question remains what the uncertainty of such a measured relative dose would be. Methods: If the relative dose distribution is determined through the reference dosimetry system (conversion of the response into absolute dose using the calibration curve), the total uncertainty of the relative dose is calculated by summing in quadrature the total uncertainties of the doses measured at a given point and at the reference point. On the other hand, if the relative dose is determined using the linearization method, the new response variable is calculated as ζ = a(netOD)^n/ln(netOD). In this case, the total uncertainty in relative dose is calculated by summing in quadrature the uncertainties of the new response function (σζ) at a given point and at the reference point. Results: Except at very low doses, where the measurement uncertainty dominates, the total relative dose uncertainty is less than 1% for the linear response method, compared to an almost 2% uncertainty level for the reference dosimetry method. The result is not surprising considering that the total uncertainty of the reference dose method is dominated by the fitting uncertainty, which is mitigated in the case of the linearization method. Conclusion: Linearization of the radiochromic film dose response provides a convenient and more precise method for relative dose measurements, as it does not require reference dosimetry and creation of a calibration curve. However, the linearity of the newly introduced function must be verified. Dave Lewis is the inventor and runs a consulting company for radiochromic films.
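
    A numerical sketch of the two uncertainty calculations described above; the fit parameters a and n and the netOD values are illustrative only:

        import numpy as np

        a, n = 45.0, 1.8                    # linearizing-function fit parameters

        def zeta(netOD):
            """New response variable: zeta = a * netOD**n / ln(netOD)."""
            return a * netOD**n / np.log(netOD)

        def sigma_zeta(netOD, sigma_netOD):
            """Propagate the netOD uncertainty through d(zeta)/d(netOD)."""
            dz = a * netOD**(n - 1) * (n * np.log(netOD) - 1) / np.log(netOD)**2
            return abs(dz) * sigma_netOD

        # Relative dose between a given point (1) and the reference point (0):
        # relative uncertainties of the ratio add in quadrature.
        z1, s1 = zeta(0.30), sigma_zeta(0.30, 0.005)
        z0, s0 = zeta(0.45), sigma_zeta(0.45, 0.005)
        rel_dose = z1 / z0
        rel_sigma = rel_dose * np.hypot(s1 / z1, s0 / z0)
        print(f"relative dose = {rel_dose:.3f} +/- {rel_sigma:.3f}")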

  16. A new experimental method for the determination of the effective orifice area based on the acoustical source term

    NASA Astrophysics Data System (ADS)

    Kadem, L.; Knapp, Y.; Pibarot, P.; Bertrand, E.; Garcia, D.; Durand, L. G.; Rieu, R.

    2005-12-01

    The effective orifice area (EOA) is the most commonly used parameter to assess the severity of aortic valve stenosis as well as the performance of valve substitutes. Particle image velocimetry (PIV) may be used for in vitro estimation of valve EOA. In the present study, we propose a new and simple method based on Howe’s developments of Lighthill’s aero-acoustic theory. This method is based on an acoustical source term (AST) to estimate the EOA from the transvalvular flow velocity measurements obtained by PIV. The EOAs measured by the AST method downstream of three sharp-edged orifices were in excellent agreement with the EOAs predicted from the potential flow theory used as the reference method in this study. Moreover, the AST method was more accurate than other conventional PIV methods based on streamlines, inflexion point or vorticity to predict the theoretical EOAs. The superiority of the AST method is likely due to the nonlinear form of the AST. There was also an excellent agreement between the EOAs measured by the AST method downstream of the three sharp-edged orifices as well as downstream of a bioprosthetic valve with those obtained by the conventional clinical method based on Doppler-echocardiographic measurements of transvalvular velocity. The results of this study suggest that this new simple PIV method provides an accurate estimation of the aortic valve flow EOA. This new method may thus be used as a reference method to estimate the EOA in experimental investigation of the performance of valve substitutes and to validate Doppler-echocardiographic measurements under various physiologic and pathologic flow conditions.

  17. An Automated Method for Navigation Assessment for Earth Survey Sensors Using Island Targets

    NASA Technical Reports Server (NTRS)

    Patt, F. S.; Woodward, R. H.; Gregg, W. W.

    1997-01-01

    An automated method has been developed for performing navigation assessment on satellite-based Earth sensor data. The method utilizes islands as targets which can be readily located in the sensor data and identified with reference locations. The essential elements are an algorithm for classifying the sensor data according to source, a reference catalogue of island locations, and a robust pattern-matching algorithm for island identification. The algorithms were developed and tested for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), an ocean colour sensor. This method will allow navigation error statistics to be automatically generated for large numbers of points, supporting analysis over large spatial and temporal ranges.

  18. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., the Ogata-Banks solution) is found to be most representative of the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations against the reference estimation from the Ogata-Banks solution, where part of the earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimation of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
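
    The Ogata-Banks solution used as the physically-based data model is straightforward to implement; the transport parameters below are assumed values for illustration:

        import numpy as np
        from scipy.special import erfc

        def ogata_banks(x, t, v, D):
            """Relative concentration C/C0 at distance x and time t for 1D
            advection (velocity v) and dispersion (coefficient D), steady flow."""
            s = 2.0 * np.sqrt(D * t)
            return 0.5 * (erfc((x - v * t) / s)
                          + np.exp(v * x / D) * erfc((x + v * t) / s))

        t = np.linspace(1.0, 60.0, 6)                  # days
        print(ogata_banks(10.0, t, v=0.5, D=0.1))      # x = 10 m, assumed v and D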

  19. Performance of three reflectance calibration methods for airborne hyperspectral spectrometer data.

    PubMed

    Miura, Tomoaki; Huete, Alfredo R

    2009-01-01

    In this study, the performances and accuracies of three methods for converting airborne hyperspectral spectrometer data to reflectance factors were characterized and compared. The "reflectance mode (RM)" method, which calibrates a spectrometer against a white reference panel prior to mounting on an aircraft, resulted in spectral reflectance retrievals that were biased and distorted. The magnitudes of these bias errors and distortions varied significantly, depending on the time of day and the length of the flight campaign. The "linear-interpolation (LI)" method, which converts airborne spectrometer data by taking a ratio against linearly-interpolated reference values from the preflight and post-flight reference panel readings, resulted in precise but inaccurate reflectance retrievals. These reflectance spectra were not distorted, but were subject to bias errors of varying magnitudes dependent on the flight duration. The "continuous panel (CP)" method uses a multi-band radiometer to obtain continuous measurements over a reference panel throughout the flight campaign, in order to adjust the magnitudes of the linearly-interpolated reference values from the preflight and post-flight reference panel readings. The CP method was found to be the most accurate and reliable of the three reflectance calibration methods, and its performance in retrieving accurate reflectance factors was consistent across times of day and flight durations. Based on the dataset analyzed in this study, the uncertainty of the CP method has been estimated to be 0.0025 ± 0.0005 reflectance units for wavelength regions not affected by atmospheric absorption. The RM method can produce reasonable results only for a very short flight (e.g., < 15 minutes) conducted around local solar noon. The flight duration should be kept shorter than 30 minutes for the LI method to produce results with reasonable accuracy. An important advantage of the CP method is that it can be used for long-duration flight campaigns (e.g., 1-2 hours). Although this study focused on reflectance calibration of airborne spectrometer data, the methods evaluated and the results obtained are directly applicable to ground spectrometer measurements.
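
    The LI conversion amounts to interpolating the panel signal between the two panel readings and ratioing the target signal against it; a minimal single-band sketch with illustrative numbers (the CP method would additionally rescale the interpolated panel values using the continuous radiometer record):

        import numpy as np

        t_pre, t_post = 0.0, 60.0                 # panel reading times (minutes)
        panel_pre, panel_post = 1820.0, 1750.0    # panel signals at those times
        t_obs = np.array([12.0, 25.0, 47.0])      # airborne observation times
        target = np.array([610.0, 598.0, 585.0])  # target signals at t_obs

        panel_interp = np.interp(t_obs, [t_pre, t_post], [panel_pre, panel_post])
        reflectance = target / panel_interp       # assumes a near-ideal panel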

  20. Prior Learning Assessment: How Institutions Use Portfolio Assessments

    ERIC Educational Resources Information Center

    Klein-Collins, Becky; Hain, Patrick

    2009-01-01

    The term Prior Learning Assessment (PLA) refers not to a single kind of assessment but rather an entire family of assessment methods that can be used by institutions. Some of these methods are exam-based. In addition, there are other methods of PLA. One of the more innovative methods of offering PLA, however, is through the development of student…

  1. Stability of operational taxonomic units: an important but neglected property for analyzing microbial diversity.

    PubMed

    He, Yan; Caporaso, J Gregory; Jiang, Xiao-Tao; Sheng, Hua-Fang; Huse, Susan M; Rideout, Jai Ram; Edgar, Robert C; Kopylova, Evguenia; Walters, William A; Knight, Rob; Zhou, Hong-Wei

    2015-01-01

    The operational taxonomic unit (OTU) is widely used in microbial ecology. Reproducibility in microbial ecology research depends on the reliability of OTU-based 16S ribosomal RNA (rRNA) analyses. Here, we report that many hierarchical and greedy clustering methods produce unstable OTUs, with membership that depends on the number of sequences clustered. If OTUs are regenerated with additional sequences or samples, sequences originally assigned to a given OTU can be split into different OTUs. Alternatively, sequences assigned to different OTUs can be merged into a single OTU. This OTU instability affects alpha-diversity analyses such as rarefaction curves, beta-diversity analyses such as distance-based ordination (for example, Principal Coordinate Analysis (PCoA)), and the identification of differentially represented OTUs. Our results show that the proportion of unstable OTUs varies across clustering methods. We found that the closed-reference method is the only one that produces completely stable OTUs, with the caveat that sequences that do not match a pre-existing reference sequence collection are discarded. As a compromise among the factors listed above, we propose using an open-reference method to enhance OTU stability. This type of method clusters sequences against a database and includes unmatched sequences by clustering them via a relatively stable de novo clustering method. OTU stability is an important consideration when analyzing microbial diversity and is a feature that should be taken into account during the development of novel OTU clustering methods.

  2. A method to improve visual similarity of breast masses for an interactive computer-aided diagnosis environment.

    PubMed

    Zheng, Bin; Lu, Amy; Hardesty, Lara A; Sumkin, Jules H; Hakim, Christiane M; Ganott, Marie A; Gur, David

    2006-01-01

    The purpose of this study was to develop and test a method for selecting "visually similar" regions of interest depicting breast masses from a reference library to be used in an interactive computer-aided diagnosis (CAD) environment. A reference library including 1000 malignant mass regions and 2000 benign and CAD-generated false-positive regions was established. When a suspicious mass region is identified, the scheme segments the region and searches for similar regions from the reference library using a multifeature-based k-nearest neighbor (KNN) algorithm. To improve the selection of reference images, we added an interactive step. All actual masses in the reference library were subjectively rated on a scale from 1 to 9 as to their visual margin spiculation. When an observer identifies a suspected mass region during a case interpretation, he/she first rates the margins, and the computerized search is then limited to regions rated as having similar levels of spiculation (within ±1 scale difference). In an observer preference study including 85 test regions, two sets of six "similar" reference regions selected by the KNN with and without the interactive step were displayed side by side with each test region. Four radiologists and five nonclinician observers selected the more appropriate ("similar") reference set in a two-alternative forced-choice preference experiment. All four radiologists and five nonclinician observers preferred the sets of regions selected by the interactive method, with average frequencies of 76.8% and 74.6%, respectively. The overall preference for the interactive method was highly significant (p < 0.001). The study demonstrated that a simple interactive approach that includes a subjectively perceived rating of one feature alone, namely margin spiculation, can substantially improve the selection of "visually similar" reference images.
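
    The interactive retrieval step reduces to filtering the library by the reader's spiculation rating before the nearest-neighbour search; a minimal sketch, with feature vectors and ratings as illustrative inputs:

        import numpy as np

        def select_similar(query_feat, query_rating, lib_feat, lib_ratings, k=6):
            """Return indices of the k nearest library regions whose subjective
            spiculation rating is within +/-1 of the reader's rating."""
            keep = np.abs(lib_ratings - query_rating) <= 1   # interactive filter
            idx = np.flatnonzero(keep)
            d = np.linalg.norm(lib_feat[idx] - query_feat, axis=1)
            return idx[np.argsort(d)[:k]]                    # KNN selection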

  3. Analysis of vestibular schwannoma size in multiple dimensions: a comparative cohort study of different measurement techniques.

    PubMed

    Varughese, J K; Wentzel-Larsen, T; Vassbotn, F; Moen, G; Lund-Johansen, M

    2010-04-01

    In this volumetric study of the vestibular schwannoma (VS), we evaluated the accuracy and reliability of several approximation methods that are in use, and determined the minimum volume difference that needs to be measured for it to be attributable to an actual difference rather than a retest error. We also found empirical proportionality coefficients for the different methods. DESIGN/SETTING AND PARTICIPANTS: Methodological study investigating three different VS measurement methods compared with a reference method based on serial slice volume estimates. The approximation methods were based on: (i) one single diameter, (ii) three orthogonal diameters or (iii) the maximal slice area. Altogether, 252 T1-weighted MRI images with gadolinium contrast, from 139 VS patients, were examined. The retest errors, in terms of relative percentages, were determined by undertaking repeated measurements on 63 scans for each method. Intraclass correlation coefficients were used to assess the agreement between each of the approximation methods and the reference method. The tendency of the approximation methods to systematically overestimate or underestimate different-sized tumours was also assessed with the help of Bland-Altman plots. The most commonly used approximation method, the maximum diameter, was the least reliable measurement method and has inherent weaknesses that need to be considered: its retest error was greater than that of area-based measurements (25% versus 15%), and it was the only approximation method that could not easily be converted into volumetric units. Area-based measurements can, furthermore, reliably resolve smaller volume differences than diameter-based measurements. All our findings suggest that the maximum diameter should not be used as an approximation method. We propose instead the use of measurement modalities that take into account growth in multiple dimensions.
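
    For orientation, the approximation and reference estimates correspond to the following geometric idealizations (a sketch only; the paper's empirically derived proportionality coefficients are not reproduced here, and all inputs are assumed to be in consistent units):

        import numpy as np

        def vol_single_diameter(d):
            """Sphere volume from one maximal diameter."""
            return np.pi * d**3 / 6.0

        def vol_orthogonal_diameters(d1, d2, d3):
            """Ellipsoid volume from three orthogonal diameters."""
            return np.pi * d1 * d2 * d3 / 6.0

        def vol_serial_slices(slice_areas, slice_spacing):
            """Reference-style estimate: summed slice areas times slice spacing."""
            return float(np.sum(slice_areas)) * slice_spacing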

  4. Effect of reference genome selection on the performance of computational methods for genome-wide protein-protein interaction prediction.

    PubMed

    Muley, Vijaykumar Yogesh; Ranjan, Akash

    2012-01-01

    Recent progress in computational methods for predicting physical and functional protein-protein interactions has provided new insights into the complexity of biological processes. Most of these methods assume that functionally interacting proteins are likely to have a shared evolutionary history. This history can be traced for the protein pairs of a query genome by correlating different evolutionary aspects of their homologs in multiple genomes, known as the reference genomes. These methods include phylogenetic profiling, gene neighborhood and co-occurrence of the orthologous protein-coding genes in the same cluster or operon, collectively known as genomic context methods. Another method, called mirrortree, is based on the similarity of phylogenetic trees between two interacting proteins. Comprehensive performance analyses of these methods have been frequently reported in the literature. However, very few studies provide insight into the effect of reference genome selection on the detection of meaningful protein interactions. We analyzed the performance of four methods and their variants to understand the effect of reference genome selection on prediction efficacy. We used six sets of reference genomes, sampled in accordance with phylogenetic diversity and the relationships between organisms, from 565 bacteria. We used Escherichia coli as a model organism and the gold-standard datasets of interacting proteins reported in the DIP, EcoCyc and KEGG databases to compare the performance of the prediction methods. Higher performance for predicting protein-protein interactions was achievable even with 100-150 bacterial genomes out of the 565. Inclusion of archaeal genomes in the reference genome set improves performance. We find that, in order to obtain good performance, it is better to sample a few genomes of related genera of prokaryotes from the large number of available genomes. Moreover, such sampling allows for selecting 50-100 genomes for comparable prediction accuracy when computational resources are limited.
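
    Phylogenetic profiling, the first of the genomic context methods listed, illustrates directly why the reference genome set matters: each protein's profile is a presence/absence vector over exactly those genomes. A minimal sketch with illustrative profiles:

        import numpy as np

        profiles = {              # 1 = homolog present in that reference genome
            "proteinA": np.array([1, 0, 1, 1, 0, 1, 1, 0]),
            "proteinB": np.array([1, 0, 1, 1, 0, 1, 0, 0]),
            "proteinC": np.array([0, 1, 0, 0, 1, 0, 0, 1]),
        }

        def profile_similarity(p, q):
            """Pearson correlation of two presence/absence profiles."""
            return np.corrcoef(p, q)[0, 1]

        print(profile_similarity(profiles["proteinA"], profiles["proteinB"]))  # high
        print(profile_similarity(profiles["proteinA"], profiles["proteinC"]))  # low

    Changing the sampled reference genomes changes every such vector, and hence every predicted interaction, which is the effect quantified in the study.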

  5. Reliable Detection of Herpes Simplex Virus Sequence Variation by High-Throughput Resequencing.

    PubMed

    Morse, Alison M; Calabro, Kaitlyn R; Fear, Justin M; Bloom, David C; McIntyre, Lauren M

    2017-08-16

    High-throughput sequencing (HTS) has resulted in data for a number of herpes simplex virus (HSV) laboratory strains and clinical isolates. Knowledge of these sequences has been critical for investigating viral pathogenicity. However, the assembly of complete herpesviral genomes, including HSV, is complicated by the existence of large repeat regions and arrays of smaller reiterated sequences that are commonly found in these genomes. In addition, the inherent genetic variation in populations of isolates for viruses and other microorganisms presents an additional challenge to many existing HTS sequence assembly pipelines. Here, we evaluate two approaches for the identification of genetic variants in HSV1 strains using Illumina short-read sequencing data. The first, a reference-based approach, identifies variants from reads aligned to a reference sequence, and the second, a de novo assembly approach, identifies variants from reads aligned to de novo assembled consensus sequences. Of critical importance for both approaches is the reduction in the number of low-complexity regions through the construction of a non-redundant reference genome. We compared the variants identified by the two methods. Our results indicate that approximately 85% of variants are identified regardless of the approach. The reference-based approach captures an additional 15%, representing variants divergent from the HSV1 reference, possibly due to viral passage. Reference-based approaches are significantly less labor-intensive and identify variants across the genome, whereas de novo assembly-based approaches are limited to regions where contigs have been successfully assembled. In addition, regions of poor-quality assembly can lead to false variant identification in de novo consensus sequences. For viruses with a well-assembled reference genome, a reference-based approach is recommended.

  6. Ice-Accretion Scaling Using Water-Film Thickness Parameters

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Feo, Alejandro

    2003-01-01

    Studies were performed at INTA in Spain to determine water-film thickness on a stagnation-point probe inserted in a simulated cloud. The measurements were correlated with non-dimensional parameters describing the flow and the cloud conditions. Icing scaling tests in the NASA Glenn Icing Research Tunnel were then conducted using the Ruff scaling method with the scale velocity found by matching scale and reference values of either the INTA non-dimensional water-film thickness or a Weber number based on that film thickness. For comparison, tests were also performed using the constant drop-size Weber number and the average-velocity methods. The reference and scale models were both aluminum, 61-cm-span, NACA 0012 airfoil sections at 0 deg. AOA. The reference had a 53-cm-chord and the scale, 27 cm (1/2 size). Both models were mounted vertically in the center of the IRT test section. Tests covered a freezing fraction range of 0.28 to 1.0. Rime ice (n = 1.0) tests showed the consistency of the IRT calibration over a range of velocities. At a freezing fraction of 0.76, there was no significant difference in the scale ice shapes produced by the different methods. For freezing fractions of 0.40, 0.52 and 0.61, somewhat better agreement with the reference horn angles was typically achieved with the average-velocity and constant-film thickness methods than when either of the two Weber numbers was matched to the reference value. At a freezing fraction of 0.28, the four methods were judged equal in providing simulations of the reference shape.

  7. Dental age estimation in Japanese individuals combining permanent teeth and third molars.

    PubMed

    Ramanan, Namratha; Thevissen, Patrick; Fleuws, Steffen; Willems, G

    2012-12-01

    The study aim was, firstly, to verify the Willems et al. model on a Japanese reference sample; secondly, to develop a Japanese reference model based on the Willems et al. method and to verify it; and thirdly, to analyze the age prediction performance of adding tooth development information from third molars to that from permanent teeth. Retrospectively, 1877 panoramic radiographs were selected in the age range between 1 and 23 years (1248 children, 629 sub-adults). Dental development was registered by applying Demirjian's stages to the mandibular left permanent teeth in children and Köhler's stages to the third molars. The children's data were, firstly, used to validate the Willems et al. model (developed on a Belgian reference sample) and, secondly, split into a training and a test sample. On the training sample, a Japanese reference model was developed based on the Willems method. The developed model and the Willems et al. model were verified on the test sample. Regression analysis was used to assess the age prediction performance when adding third molar scores to permanent tooth scores. The validated Willems et al. model provided a mean absolute error of 0.85 and 0.75 years in females and males, respectively. The mean absolute error of the verified Willems et al. model and the developed Japanese reference model was 0.85 and 0.77, and 0.79 and 0.75 years, in females and males, respectively. On average, a negligible change in root mean square error values was detected when adding third molar scores to permanent tooth scores. The Belgian sample could thus be used as a reference model to estimate the age of Japanese individuals. Combining information from the third molars and permanent teeth did not provide clinically significant improvement over age predictions based on permanent teeth information alone.

  8. Block correlated second order perturbation theory with a generalized valence bond reference function.

    PubMed

    Xu, Enhua; Li, Shuhua

    2013-11-07

    The block correlated second-order perturbation theory with a generalized valence bond (GVB) reference (GVB-BCPT2) is proposed. In this approach, each geminal in the GVB reference is considered as a "multi-orbital" block (a subset of spin orbitals), and each occupied or virtual spin orbital is also taken as a single block. The zeroth-order Hamiltonian is set to be the summation of the individual Hamiltonians of all blocks (with explicit two-electron operators within each geminal) so that the GVB reference function and all excited configuration functions are its eigenfunctions. The GVB-BCPT2 energy can be directly obtained without iteration, just like the second-order Møller-Plesset perturbation method (MP2), both of which are size consistent. We have applied this GVB-BCPT2 method to investigate the equilibrium distances and spectroscopic constants of 7 diatomic molecules, conformational energy differences of 8 small molecules, and bond-breaking potential energy profiles in 3 systems. GVB-BCPT2 is demonstrated to have noticeably better performance than MP2 for systems with significant multi-reference character, and provide reasonably accurate results for some systems with large active spaces, which are beyond the capability of all CASSCF-based methods.

  9. Personal identification based on blood vessels of retinal fundus images

    NASA Astrophysics Data System (ADS)

    Fukuta, Keisuke; Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Hara, Takeshi; Fujita, Hiroshi

    2008-03-01

    Biometric techniques have been implemented in place of conventional identification methods such as passwords in computers, automatic teller machines (ATM), and entrance and exit management systems. We propose a personal identification (PI) system using color retinal fundus images, which are unique to each individual. The proposed identification procedure is based on comparison of an input fundus image with reference fundus images in the database. In the first step, registration between the input image and the reference image is performed. This step includes translational and rotational alignment. The PI is based on a measure of similarity between blood vessel images generated from the input and reference images. The similarity measure is defined as the cross-correlation coefficient calculated from the pixel values. When the similarity is greater than a predetermined threshold, the input image is identified, meaning that the input and reference images belong to the same person. Four hundred sixty-two fundus images including forty-one same-person image pairs were used for evaluation of the proposed technique. The false rejection rate and the false acceptance rate were 9.9×10⁻⁵% and 4.3×10⁻⁵%, respectively. The results indicate that the proposed method performs better than other biometrics except DNA. For practical public application, a device that can easily capture retinal fundus images is needed. The proposed method is applicable not only to PI but also to systems that warn about misfiling of fundus images in medical facilities.
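
    The similarity computation described above is a standard normalized cross-correlation; a minimal sketch for two registered vessel images, with an assumed decision threshold:

        import numpy as np

        def similarity(vessels_input, vessels_ref):
            """Cross-correlation coefficient of two registered vessel images."""
            a = vessels_input.astype(float).ravel()
            b = vessels_ref.astype(float).ravel()
            a -= a.mean(); b -= b.mean()
            return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

        THRESHOLD = 0.7                       # assumed decision threshold

        def identify(vessels_input, vessels_ref):
            """True if both images are judged to belong to the same person."""
            return similarity(vessels_input, vessels_ref) > THRESHOLD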

  10. Comparative study of an argon plasma and an argon copper plasma produced by an ICP torch at atmospheric pressure based on spectroscopic methods

    NASA Astrophysics Data System (ADS)

    Bussière, W.; Vacher, D.; Menecier, S.; André, P.

    2012-04-01

    In three places in this paper, the citation of reference [43] quotes a further reference [61] cited therein. This in fact should be the reference [45] cited in [43]. This occurs in the caption of figure 3; at the top of the right-hand column on page 6; and in the paragraph headed Influence of the Biberman factor value on page 18. In each case, the text: '[43] (especially reference [61] therein)' should be replaced by '[43] (especially reference [45] therein)'. The authors would like to thank Dr L G D'yachkov for having pointed out the error.

  11. Electronic-projecting Moire method applying CBR-technology

    NASA Astrophysics Data System (ADS)

    Kuzyakov, O. N.; Lapteva, U. V.; Andreeva, M. A.

    2018-01-01

    An electronic-projecting method based on the Moire effect for examining surface topology is suggested. The conditions for forming Moire fringes and the dependence of fringe parameters on the reference parameters of the object and virtual grids are analyzed. The control system structure and decision-making subsystem are elaborated. The subsystem is implemented with CBR technology, based on a case base. Decisions are analyzed and formed for each separate local area, with subsequent formation of a common topology map.

  12. Collaborative localization in wireless sensor networks via pattern recognition in radio irregularity using omnidirectional antennas.

    PubMed

    Jiang, Joe-Air; Chuang, Cheng-Long; Lin, Tzu-Shiang; Chen, Chia-Pang; Hung, Chih-Hung; Wang, Jiing-Yi; Liu, Chang-Wang; Lai, Tzu-Yun

    2010-01-01

    In recent years, various received signal strength (RSS)-based localization estimation approaches for wireless sensor networks (WSNs) have been proposed. RSS-based localization is regarded as a low-cost solution for many location-aware applications in WSNs. In previous studies, the radiation patterns of all sensor nodes are assumed to be spherical, which is an oversimplification of the radio propagation model in practical applications. In this study, we present an RSS-based cooperative localization method that estimates unknown coordinates of sensor nodes in a network. An arrangement of two external low-cost omnidirectional dipole antennas is developed by using the distance-power gradient model. A modified robust regression is also proposed to determine the relative azimuth and distance between a sensor node and a fixed reference node. In addition, a cooperative localization scheme that incorporates estimations from multiple fixed reference nodes is presented to improve the accuracy of the localization. The proposed method is tested via computer-based analysis and a field test. Experimental results demonstrate that the proposed low-cost method is a useful solution for localizing sensor nodes in unknown or changing environments.
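
    For intuition, the distance-power gradient (log-distance path-loss) model relates RSS to range as RSS(d) = RSS(d0) - 10·n·log10(d/d0), which can be inverted to estimate distance. A minimal sketch with illustrative calibration constants; the paper's robust regression and two-antenna arrangement are not reproduced:

```python
def estimate_distance(rss_dbm, rss0_dbm=-40.0, d0=1.0, n=2.5):
    """Invert the distance-power gradient model to estimate range.

    rss0_dbm : RSS at the reference distance d0 (calibration constant)
    n        : path-loss exponent (environment dependent; ~2 in free space)
    All constants here are illustrative assumptions.
    """
    return d0 * 10 ** ((rss0_dbm - rss_dbm) / (10.0 * n))

# e.g. an RSS reading of -65 dBm under these constants implies ~10 m:
print(estimate_distance(-65.0))
```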

  13. Measurement of optical to electrical and electrical to optical delays with ps-level uncertainty.

    PubMed

    Peek, H Z; Pinkert, T J; Jansweijer, P P M; Koelemeij, J C J

    2018-05-28

    We present a new measurement principle to determine the absolute time delay of a waveform from an optical reference plane to an electrical reference plane and vice versa. We demonstrate a method based on this principle with 2 ps uncertainty. This method can be used to perform accurate time delay determinations of optical transceivers used in fiber-optic time-dissemination equipment. As a result the time scales in optical and electrical domain can be related to each other with the same uncertainty. We expect this method will be a new breakthrough in high-accuracy time transfer and absolute calibration of time-transfer equipment.

  14. sscMap: an extensible Java application for connecting small-molecule drugs using gene-expression signatures.

    PubMed

    Zhang, Shu-Dong; Gant, Timothy W

    2009-07-31

    Connectivity mapping is a process to recognize novel pharmacological and toxicological properties in small molecules by comparing their gene expression signatures with others in a database. A simple and robust method for connectivity mapping with increased specificity and sensitivity was recently developed, and its utility demonstrated using experimentally derived gene signatures. This paper introduces sscMap (statistically significant connections' map), a Java application designed to undertake connectivity mapping tasks using the recently published method. The software is bundled with a default collection of reference gene-expression profiles based on the publicly available dataset from the Broad Institute Connectivity Map 02, which includes data from over 7000 Affymetrix microarrays, for over 1000 small-molecule compounds, and 6100 treatment instances in 5 human cell lines. In addition, the application allows users to add their custom collections of reference profiles and is applicable to a wide range of other 'omics technologies. The utility of sscMap is twofold. First, it serves to make statistically significant connections between a user-supplied gene signature and the 6100 core reference profiles based on the Broad Institute expanded dataset. Second, it allows users to apply the same improved method to custom-built reference profiles which can be added to the database for future referencing. The software can be freely downloaded from http://purl.oclc.org/NET/sscMap.
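
    As the abstract does not spell out the scoring formula, the following is only a hedged sketch of a rank-based connection score in the spirit of connectivity mapping, not the published sscMap algorithm; in practice significance would be assessed by permuting gene labels:

```python
def connection_score(signature, ref_profile):
    """Rank-based connection score between a gene signature and a
    reference expression profile (an illustrative sketch).

    signature   : list of (gene, +1/-1) pairs (up-/down-regulated)
    ref_profile : dict gene -> signed rank (+N for the most up-regulated
                  of N genes, -N for the most down-regulated)
    """
    raw = sum(sign * ref_profile.get(gene, 0) for gene, sign in signature)
    n = len(ref_profile)
    max_raw = sum(range(n, n - len(signature), -1))  # best possible match
    return raw / max_raw if max_raw else 0.0
```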

  15. Domain-based prediction of the human isoform interactome provides insights into the functional impact of alternative splicing.

    PubMed

    Ghadie, Mohamed Ali; Lambourne, Luke; Vidal, Marc; Xia, Yu

    2017-08-01

    Alternative splicing is known to remodel protein-protein interaction networks ("interactomes"), yet large-scale determination of isoform-specific interactions remains challenging. We present a domain-based method to predict the isoform interactome from the reference interactome. First, we construct the domain-resolved reference interactome by mapping known domain-domain interactions onto experimentally-determined interactions between reference proteins. Then, we construct the isoform interactome by predicting that an isoform loses an interaction if it loses the domain mediating the interaction. Our prediction framework is of high quality when assessed by experimental data. The predicted human isoform interactome reveals extensive network remodeling by alternative splicing. Protein pairs interacting with different isoforms of the same gene tend to be more divergent in biological function, tissue expression, and disease phenotype than protein pairs interacting with the same isoforms. Our prediction method complements experimental efforts, and demonstrates that integrating structural domain information with interactomes provides insights into the functional impact of alternative splicing.
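
    The prediction rule itself is simple enough to sketch. A minimal illustration of the domain-based rule (data structures and names are hypothetical; the paper's domain-domain interaction sources and quality filters are omitted):

```python
def predict_isoform_partners(ref_protein, partners, domains, ddi, iso_domains):
    """Keep a partner iff the isoform retains at least one domain that
    mediates the reference interaction (a minimal sketch of the rule).

    domains     : dict protein -> set of domains
    ddi         : set of frozenset({domA, domB}) domain-domain interactions
    iso_domains : domains retained by the spliced isoform
    """
    kept = set()
    for p in partners:
        # Domains of the reference protein that could mediate this interaction.
        mediating = {a for a in domains[ref_protein]
                     for b in domains[p]
                     if frozenset((a, b)) in ddi}
        if mediating & iso_domains:  # at least one mediating domain survives
            kept.add(p)
    return kept
```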

  16. Domain-based prediction of the human isoform interactome provides insights into the functional impact of alternative splicing

    PubMed Central

    Lambourne, Luke; Vidal, Marc

    2017-01-01

    Alternative splicing is known to remodel protein-protein interaction networks (“interactomes”), yet large-scale determination of isoform-specific interactions remains challenging. We present a domain-based method to predict the isoform interactome from the reference interactome. First, we construct the domain-resolved reference interactome by mapping known domain-domain interactions onto experimentally-determined interactions between reference proteins. Then, we construct the isoform interactome by predicting that an isoform loses an interaction if it loses the domain mediating the interaction. Our prediction framework is of high quality when assessed by experimental data. The predicted human isoform interactome reveals extensive network remodeling by alternative splicing. Protein pairs interacting with different isoforms of the same gene tend to be more divergent in biological function, tissue expression, and disease phenotype than protein pairs interacting with the same isoforms. Our prediction method complements experimental efforts, and demonstrates that integrating structural domain information with interactomes provides insights into the functional impact of alternative splicing. PMID:28846689

  17. A non-parametric method for automatic determination of P-wave and S-wave arrival times: application to local micro earthquakes

    NASA Astrophysics Data System (ADS)

    Rawles, Christopher; Thurber, Clifford

    2015-08-01

    We present a simple, fast, and robust method for automatic detection of P- and S-wave arrivals using a nearest neighbours-based approach. The nearest neighbour algorithm is one of the most popular time-series classification methods in the data mining community and has been applied to time-series problems in many different domains. Specifically, our method is based on the non-parametric time-series classification method developed by Nikolov. Instead of building a model by estimating parameters from the data, the method uses the data itself to define the model. Potential phase arrivals are identified based on their similarity to a set of reference data consisting of positive and negative sets, where the positive set contains examples of analyst identified P- or S-wave onsets and the negative set contains examples that do not contain P waves or S waves. Similarity is defined as the square of the Euclidean distance between vectors representing the scaled absolute values of the amplitudes of the observed signal and a given reference example in time windows of the same length. For both P waves and S waves, a single pass is done through the bandpassed data, producing a score function defined as the ratio of the sum of similarity to positive examples over the sum of similarity to negative examples for each window. A phase arrival is chosen as the centre position of the window that maximizes the score function. The method is tested on two local earthquake data sets, consisting of 98 known events from the Parkfield region in central California and 32 known events from the Alpine Fault region on the South Island of New Zealand. For P-wave picks, using a reference set containing two picks from the Parkfield data set, 98 per cent of Parkfield and 94 per cent of Alpine Fault picks are determined within 0.1 s of the analyst pick. For S-wave picks, 94 per cent and 91 per cent of picks are determined within 0.2 s of the analyst picks for the Parkfield and Alpine Fault data set, respectively. For the Parkfield data set, our method picks 3520 P-wave picks and 3577 S-wave picks out of 4232 station-event pairs. For the Alpine Fault data set, the method picks 282 P-wave picks and 311 S-wave picks out of a total of 344 station-event pairs. For our testing, we note that the vast majority of station-event pairs have analyst picks, although some analyst picks are excluded based on an accuracy assessment. Finally, our tests suggest that the method is portable, allowing the use of a reference set from one region on data from a different region using relatively few reference picks.
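
    A minimal sketch of the scoring pass is given below. One assumption is flagged explicitly: similarity is taken here as the inverse of the squared Euclidean distance between scaled absolute-amplitude windows, so that larger values mean more similar, whereas the abstract defines similarity directly from that distance:

```python
import numpy as np

def scaled_env(window):
    """Scaled absolute amplitudes, the paper's window representation."""
    w = np.abs(window)
    m = w.max()
    return w / m if m > 0 else w

def similarity(window, example):
    """Inverse squared Euclidean distance between scaled windows
    (the inversion is this sketch's assumption)."""
    d2 = np.sum((scaled_env(window) - scaled_env(example)) ** 2)
    return 1.0 / (d2 + 1e-12)

def pick(trace, positives, negatives, nwin):
    """Slide a window over a bandpassed trace; return the centre sample
    of the window maximizing sum-positive / sum-negative similarity."""
    scores = []
    for i in range(len(trace) - nwin):
        w = trace[i:i + nwin]
        num = sum(similarity(w, p) for p in positives)
        den = sum(similarity(w, n) for n in negatives)
        scores.append(num / den)
    return int(np.argmax(scores)) + nwin // 2
```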

  18. Object-oriented feature extraction approach for mapping supraglacial debris in Schirmacher Oasis using very high-resolution satellite data

    NASA Astrophysics Data System (ADS)

    Jawak, Shridhar D.; Jadhav, Ajay; Luis, Alvarinho J.

    2016-05-01

    Supraglacial debris was mapped in the Schirmacher Oasis, east Antarctica, by using WorldView-2 (WV-2) high-resolution optical remote sensing data consisting of 8-band calibrated Gram-Schmidt (GS)-sharpened and atmospherically corrected WV-2 imagery. This study is a preliminary attempt to develop an object-oriented rule set to extract supraglacial debris for the Antarctic region using 8-spectral-band imagery. Supraglacial debris was manually digitized from the satellite imagery to generate the ground reference data. Several trials were performed using a few existing traditional pixel-based classification techniques and color-texture-based object-oriented classification methods to extract supraglacial debris over a small domain of the study area. Multi-level segmentation and attributes such as scale, shape, size, and compactness, along with spectral information from the data, were used for developing the rule set. The quantitative analysis of error was carried out against the manually digitized reference data to test the practicability of our approach over the traditional pixel-based methods. Our results indicate that the OBIA-based approach (overall accuracy: 93%) for extracting supraglacial debris performed better than all the traditional pixel-based methods (overall accuracy: 80-85%). The present attempt provides a comprehensive, improved method for semiautomatic feature extraction in the supraglacial environment and a new direction in cryospheric research.

  19. Dsm Based Orientation of Large Stereo Satellite Image Blocks

    NASA Astrophysics Data System (ADS)

    d'Angelo, P.; Reinartz, P.

    2012-07-01

    High-resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on CARTOSAT-1 imagery is presented, with emphasis on fully automated georeferencing. The proposed system processes level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The RPC are derived from orbit and attitude information and have a much lower accuracy than the ground resolution of approximately 2.5 m. In order to use the images for orthorectification or DSM generation, an affine RPC correction is required. In this paper, GCP are automatically derived from lower-resolution reference datasets (Landsat ETM+ Geocover and SRTM DSM). The traditional method of collecting the lateral position from a reference image and interpolating the corresponding height from the DEM ignores the higher lateral accuracy of the SRTM dataset. Our method avoids this drawback by using an RPC correction based on DSM alignment, resulting in improved geolocation of both DSM and ortho images. A scene-based method and a bundle block adjustment based correction are developed and evaluated for a test site covering the northern part of Italy, for which 405 CARTOSAT-1 stereo pairs are available. Both methods are tested against independent ground truth. Checks against this ground truth indicate a lateral error of 10 meters.

  20. Video-based respiration monitoring with automatic region of interest detection.

    PubMed

    Janssen, Rik; Wang, Wenjin; Moço, Andreia; de Haan, Gerard

    2016-01-01

    Vital signs monitoring is ubiquitous in clinical environments and emerging in home-based healthcare applications. Still, since current monitoring methods require uncomfortable sensors, respiration rate remains the least measured vital sign. In this paper, we propose a video-based respiration monitoring method that automatically detects a respiratory region of interest (RoI) and signal using a camera. Based on the observation that respiration-induced chest/abdomen motion is an independent motion system in a video, our basic idea is to exploit the intrinsic properties of respiration to find the respiratory RoI and extract the respiratory signal via motion factorization. We created a benchmark dataset containing 148 video sequences obtained on adults under challenging conditions and also neonates in the neonatal intensive care unit (NICU). The measurements obtained by the proposed video respiration monitoring (VRM) method are not significantly different from the reference methods (guided breathing or contact-based ECG; p-value = 0.6), and explain more than 99% of the variance of the reference values with low limits of agreement (-2.67 to 2.81 bpm). VRM seems to provide a valid alternative to ECG in confined motion scenarios, though precision may be reduced for neonates. More studies are needed to validate VRM under challenging recording conditions, including upper-body motion types.

  1. Non-invasive diagnostics of the maxillary and frontal sinuses based on diode laser gas spectroscopy.

    PubMed

    Lewander, Märta; Lindberg, Sven; Svensson, Tomas; Siemund, Roger; Svanberg, Katarina; Svanberg, Sune

    2012-03-01

    Suspected, but objectively absent, rhinosinusitis constitutes a major cause of visits to the doctor, high health care costs, and the over-prescription of antibiotics, contributing to the serious problem of resistant bacteria. This situation is largely due to a lack of reliable and widely applicable diagnostic methods. A novel method for the diagnosis of rhinosinusitis based on non-intrusive diode laser gas spectroscopy is presented. The technique is based on light absorption by free gas (oxygen and water vapour) inside the sinuses, and has the potential to be a complementary diagnostic tool in primary health care. The method was evaluated on 40 patients with suspected sinus problems, referred to the diagnostic radiology clinic for low-dose computed tomography (CT), which was used as the reference technique. The data obtained with the new laser-based method correlated well with the grading of opacification and ventilation using CT. The sensitivity and specificity were estimated to be 93% and 61%, respectively, for the maxillary sinuses, and 94% and 86%, respectively, for the frontal sinuses. Good reproducibility was shown. The laser-based technique presents real-time clinical data that correlate well to CT findings, while being non-intrusive and avoiding the use of ionizing radiation.

  2. Ghost detection and removal based on super-pixel grouping in exposure fusion

    NASA Astrophysics Data System (ADS)

    Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun

    2014-09-01

    A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts will be introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect the ghost in the image sequences. We introduce the zero mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The calculation of ZNCC is implemented at the super-pixel level, and the super-pixels which have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images which can be shown on conventional display devices directly, with details preserved and ghost effects reduced. Experimental results show that the proposed method generates high quality images which have fewer ghost artifacts and provide a better visual quality than previous approaches.
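
    A minimal sketch of the detection step: compute the ZNCC between corresponding super-pixels of a given exposure and the reference, and zero the fusion weight of poorly correlated super-pixels. The threshold is an illustrative assumption:

```python
import numpy as np

def zncc(patch_ref: np.ndarray, patch_k: np.ndarray) -> float:
    """Zero-mean normalized cross correlation between the pixels of one
    super-pixel in the reference and in exposure k."""
    a = patch_ref.astype(float).ravel()
    b = patch_k.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def deghost_weight(zncc_value: float, tau: float = 0.8) -> float:
    """Exclude super-pixels with low correlation to the reference by
    zeroing their fusion weight (threshold value illustrative)."""
    return 1.0 if zncc_value >= tau else 0.0
```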

  3. A standards-based method for compositional analysis by energy dispersive X-ray spectrometry using multivariate statistical analysis: application to multicomponent alloys.

    PubMed

    Rathi, Monika; Ahrenkiel, S P; Carapella, J J; Wanlass, M W

    2013-02-01

    Given an unknown multicomponent alloy, and a set of standard compounds or alloys of known composition, can one improve upon popular standards-based methods for energy dispersive X-ray (EDX) spectrometry to quantify the elemental composition of the unknown specimen? A method is presented here for determining elemental composition of alloys using transmission electron microscopy-based EDX with appropriate standards. The method begins with a discrete set of related reference standards of known composition, applies multivariate statistical analysis to those spectra, and evaluates the compositions with a linear matrix algebra method to relate the spectra to elemental composition. By using associated standards, only limited assumptions about the physical origins of the EDX spectra are needed. Spectral absorption corrections can be performed by providing an estimate of the foil thickness of one or more reference standards. The technique was applied to III-V multicomponent alloy thin films: composition and foil thickness were determined for various III-V alloys. The results were then validated by comparing with X-ray diffraction and photoluminescence analysis, demonstrating accuracy of approximately 1% in atomic fraction.
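
    The final linear matrix-algebra step can be sketched as follows, with the multivariate statistical analysis and absorption corrections omitted; treating the unknown spectrum as a non-negative mixture of standard spectra is a simplifying assumption:

```python
import numpy as np

def composition_from_standards(unknown, standards, standard_comps):
    """Estimate the composition of an unknown EDX spectrum as a linear
    combination of reference-standard spectra (illustrative sketch).

    unknown        : (n_channels,) spectrum of the unknown specimen
    standards      : (n_channels, n_std) spectra of standards
    standard_comps : (n_std, n_elements) atomic fractions of each standard
    """
    w, *_ = np.linalg.lstsq(standards, unknown, rcond=None)
    w = np.clip(w, 0, None)           # enforce physical (non-negative) weights
    w /= w.sum()                      # normalize mixing weights
    comp = w @ standard_comps         # weighted composition of the mixture
    return comp / comp.sum()          # re-normalize atomic fractions
```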

  4. Estimating regional centile curves from mixed data sources and countries.

    PubMed

    van Buuren, Stef; Hayes, Daniel J; Stasinopoulos, D Mikis; Rigby, Robert A; ter Kuile, Feiko O; Terlouw, Dianne J

    2009-10-15

    Regional or national growth distributions can provide vital information on the health status of populations. In most resource poor countries, however, the required anthropometric data from purpose-designed growth surveys are not readily available. We propose a practical method for estimating regional (multi-country) age-conditional weight distributions based on existing survey data from different countries. We developed a two-step method by which one is able to model data with widely different age ranges and sample sizes. The method produces references both at the country level and at the regional (multi-country) level. The first step models country-specific centile curves by Box-Cox t and Box-Cox power exponential distributions implemented in generalized additive model for location, scale and shape through a common model. Individual countries may vary in location and spread. The second step defines the regional reference from a finite mixture of the country distributions, weighted by population size. To demonstrate the method we fitted the weight-for-age distribution of 12 countries in South East Asia and the Western Pacific, based on 273 270 observations. We modeled both the raw body weight and the corresponding Z score, and obtained a good fit between the final models and the original data for both solutions. We briefly discuss an application of the generated regional references to obtain appropriate, region specific, age-based dosing regimens of drugs used in the tropics. The method is an affordable and efficient strategy to estimate regional growth distributions where the standard costly alternatives are not an option. Copyright (c) 2009 John Wiley & Sons, Ltd.
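
    The second step, defining the regional reference as a population-weighted finite mixture of country distributions, can be sketched as follows; normal distributions stand in for the fitted Box-Cox t / Box-Cox power exponential distributions, purely for illustration:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def regional_centile(p, country_params, populations):
    """Centile of a population-weighted finite mixture of country
    distributions at a fixed age (illustrative sketch).

    country_params : list of (mu, sigma) per country at that age
    populations    : country population sizes (mixture weights)
    """
    w = np.asarray(populations, float)
    w /= w.sum()
    mix_cdf = lambda x: sum(wi * norm.cdf(x, mu, sd)
                            for wi, (mu, sd) in zip(w, country_params))
    lo = min(mu - 8 * sd for mu, sd in country_params)
    hi = max(mu + 8 * sd for mu, sd in country_params)
    return brentq(lambda x: mix_cdf(x) - p, lo, hi)

# e.g. the regional median weight of three hypothetical countries:
print(regional_centile(0.5, [(12.0, 1.5), (11.2, 1.3), (13.1, 1.8)],
                       [5e6, 2e7, 1e7]))
```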

  5. Towards absolute quantification of allergenic proteins in food--lysozyme in wine as a model system for metrologically traceable mass spectrometric methods and certified reference materials.

    PubMed

    Cryar, Adam; Pritchard, Caroline; Burkitt, William; Walker, Michael; O'Connor, Gavin; Burns, Duncan Thorburn; Quaglia, Milena

    2013-01-01

    Current routine food allergen quantification methods, which are based on immunochemistry, offer high sensitivity but can suffer from issues of specificity and significant variability of results. MS approaches have been developed, but currently lack metrological traceability. A feasibility study on the application of metrologically traceable MS-based reference procedures was undertaken. A proof of concept involving proteolytic digestion and isotope dilution MS for quantification of protein allergens in a food matrix was undertaken using lysozyme in wine as a model system. A concentration of lysozyme in wine of 0.95 ± 0.03 µg/g was calculated based on the concentrations of two peptides, confirming that this type of analysis is viable at allergenically meaningful concentrations. The challenges associated with this promising method were explored; these included peptide stability, chemical modification, enzymatic digestion, and sample cleanup. The method is suitable for the production of allergen in food certified reference materials, which together with the achieved understanding of the effects of sample preparation and of the matrix on the final results, will assist in addressing the bias of the techniques routinely used and improve measurement confidence. Confirmation of the feasibility of MS methods for absolute quantification of an allergenic protein in a food matrix with results traceable to the International System of Units is a step towards meaningful comparison of results for allergen proteins among laboratories. This approach will also underpin risk assessment and risk management of allergens in the food industry, and regulatory compliance of the use of thresholds or action levels when adopted.

  6. A Comparison of Two Out-of-Print Book Buying Methods

    ERIC Educational Resources Information Center

    Kim, Ung Chon

    1973-01-01

    Two out-of-print book buying methods, searching desiderata files against out-of-print book catalogs and advertising want lists in "The Library Bookseller," are compared based on the data collected from a sample of 168 titles. (7 references) (Author)

  7. Toward Worldwide Hepcidin Assay Harmonization: Identification of a Commutable Secondary Reference Material.

    PubMed

    van der Vorm, Lisa N; Hendriks, Jan C M; Laarakkers, Coby M; Klaver, Siem; Armitage, Andrew E; Bamberg, Alison; Geurts-Moespot, Anneke J; Girelli, Domenico; Herkert, Matthias; Itkonen, Outi; Konrad, Robert J; Tomosugi, Naohisa; Westerman, Mark; Bansal, Sukhvinder S; Campostrini, Natascia; Drakesmith, Hal; Fillet, Marianne; Olbina, Gordana; Pasricha, Sant-Rayn; Pitts, Kelly R; Sloan, John H; Tagliaro, Franco; Weykamp, Cas W; Swinkels, Dorine W

    2016-07-01

    Absolute plasma hepcidin concentrations measured by various procedures differ substantially, complicating interpretation of results and rendering reference intervals method dependent. We investigated the degree of equivalence achievable by harmonization and the identification of a commutable secondary reference material to accomplish this goal. We applied technical procedures to achieve harmonization developed by the Consortium for Harmonization of Clinical Laboratory Results. Eleven plasma hepcidin measurement procedures (5 mass spectrometry based and 6 immunochemical based) quantified native individual plasma samples (n = 32) and native plasma pools (n = 8) to assess analytical performance and current and achievable equivalence. In addition, 8 types of candidate reference materials (3 concentrations each, n = 24) were assessed for their suitability, most notably in terms of commutability, to serve as secondary reference material. Absolute hepcidin values and reproducibility (intrameasurement procedure CVs 2.9%-8.7%) differed substantially between measurement procedures, but all were linear and correlated well. The current equivalence (intermeasurement procedure CV 28.6%) between the methods was mainly attributable to differences in calibration and could thus be improved by harmonization with a common calibrator. Linear regression analysis and standardized residuals showed that a candidate reference material consisting of native lyophilized plasma with cryolyoprotectant was commutable for all measurement procedures. Mathematically simulated harmonization with this calibrator resulted in a maximum achievable equivalence of 7.7%. The secondary reference material identified in this study has the potential to substantially improve equivalence between hepcidin measurement procedures and contributes to the establishment of a traceability chain that will ultimately allow standardization of hepcidin measurement results. © 2016 American Association for Clinical Chemistry.

  8. Lock-in amplifier error prediction and correction in frequency sweep measurements.

    PubMed

    Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose

    2007-01-01

    This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.

  9. TU-FG-BRB-03: Basis Vector Model Based Method for Proton Stopping Power Estimation From Experimental Dual Energy CT Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S; Politte, D; O’Sullivan, J

    2016-06-15

    Purpose: This work aims at reducing the uncertainty in proton stopping power (SP) estimation by a novel combination of a linear, separable basis vector model (BVM) for stopping power calculation (Med Phys 43:600) and a statistical, model-based dual-energy CT (DECT) image reconstruction algorithm (TMI 35:685). The method was applied to experimental data. Methods: BVM assumes the photon attenuation coefficients, electron densities, and mean excitation energies (I-values) of unknown materials can be approximated by a combination of the corresponding quantities of two reference materials. The DECT projection data for a phantom with 5 different known materials was collected on a Philips Brilliance scanner using two scans at 90 kVp and 140 kVp. The line integral alternating minimization (LIAM) algorithm was used to recover the two BVM coefficient images using the measured source spectra. The proton stopping powers are then estimated from the Bethe-Bloch equation using electron densities and I-values derived from the BVM coefficients. The proton stopping powers and proton ranges for the phantom materials estimated via our BVM-based DECT method are compared to ICRU reference values and a post-processing DECT analysis (Yang PMB 55:1343) applied to vendor-reconstructed images using the Torikoshi parametric fit model (tPFM). Results: For the phantom materials, the average stopping power estimations for 175 MeV protons derived from our method are within 1% of the ICRU reference values (except for Teflon with a 1.48% error), with an average standard deviation of 0.46% over pixels. The resultant proton ranges agree with the reference values within 2 mm. Conclusion: Our principled DECT iterative reconstruction algorithm, incorporating optimal beam hardening and scatter corrections, in conjunction with a simple linear BVM model, achieves more accurate and robust proton stopping power maps than the post-processing, nonlinear tPFM-based DECT analysis applied to conventional reconstructions of low- and high-energy scans. Funding Support: NIH R01CA 75371; NCI grant R01 CA 149305.
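
    For orientation, once the BVM coefficients yield a voxel's relative electron density and I-value, the stopping power relative to water follows from the Bethe equation. A minimal sketch (shell, Barkas, and density corrections are omitted, and the linear combination shown for electron density is only the simplest reading of the BVM):

```python
import numpy as np

MEC2 = 0.511e6   # electron rest energy, eV
I_WATER = 75.0   # mean excitation energy of water, eV

def beta2(E_MeV, m_p=938.272):
    """(v/c)^2 for a proton of kinetic energy E_MeV."""
    g = 1.0 + E_MeV / m_p
    return 1.0 - 1.0 / g**2

def rel_stopping_power(rho_e_rel, I_eV, E_MeV=175.0):
    """Stopping power relative to water from the Bethe equation
    (corrections omitted in this sketch)."""
    b2 = beta2(E_MeV)
    L = lambda I: np.log(2 * MEC2 * b2 / (I * (1 - b2))) - b2
    return rho_e_rel * L(I_eV) / L(I_WATER)

def bvm_electron_density(c1, c2, rho_e1, rho_e2):
    """Electron density from the two fitted BVM coefficients and the
    reference materials' electron densities (simplest linear reading)."""
    return c1 * rho_e1 + c2 * rho_e2
```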

  10. Using Landsat Surface Reflectance Data as a Reference Target for Multiswath Hyperspectral Data Collected Over Mixed Agricultural Rangeland Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, Cooper; Repasky, Kevin S.; Morin, Mikindra

    Low-cost flight-based hyperspectral imaging systems have the potential to provide important information for ecosystem and environmental studies as well as aid in land management. To realize this potential, methods must be developed to provide large-area surface reflectance data allowing for temporal data sets at the mesoscale. This paper describes a bootstrap method of producing a large-area, radiometrically referenced hyperspectral data set using the Landsat surface reflectance (LaSRC) data product as a reference target. The bootstrap method uses standard hyperspectral processing techniques that are extended to remove uneven illumination conditions between flight passes, allowing for radiometrically self-consistent data after mosaicking. Through selective spectral and spatial resampling, LaSRC data are used as a radiometric reference target. Advantages of the bootstrap method include the need for minimal site access, no ancillary instrumentation, and automated data processing. Data from two hyperspectral flights over the same managed agricultural and unmanaged range land covering approximately 5.8 km², acquired on June 21, 2014 and June 24, 2015, are presented. Additionally, data from a flight over agricultural land collected on June 6, 2016 are compared with concurrently collected ground-based reflectance spectra as a means of validation.

  11. Using Landsat Surface Reflectance Data as a Reference Target for Multiswath Hyperspectral Data Collected Over Mixed Agricultural Rangeland Areas

    DOE PAGES

    McCann, Cooper; Repasky, Kevin S.; Morin, Mikindra; ...

    2017-07-25

    Low-cost flight-based hyperspectral imaging systems have the potential to provide important information for ecosystem and environmental studies as well as aid in land management. To realize this potential, methods must be developed to provide large-area surface reflectance data allowing for temporal data sets at the mesoscale. This paper describes a bootstrap method of producing a large-area, radiometrically referenced hyperspectral data set using the Landsat surface reflectance (LaSRC) data product as a reference target. The bootstrap method uses standard hyperspectral processing techniques that are extended to remove uneven illumination conditions between flight passes, allowing for radiometrically self-consistent data after mosaicking. Through selective spectral and spatial resampling, LaSRC data are used as a radiometric reference target. Advantages of the bootstrap method include the need for minimal site access, no ancillary instrumentation, and automated data processing. Data from two hyperspectral flights over the same managed agricultural and unmanaged range land covering approximately 5.8 km², acquired on June 21, 2014 and June 24, 2015, are presented. Additionally, data from a flight over agricultural land collected on June 6, 2016 are compared with concurrently collected ground-based reflectance spectra as a means of validation.

  12. Perturbational treatment of spin-orbit coupling for generally applicable high-level multi-reference methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mai, Sebastian; Marquetand, Philipp; González, Leticia

    2014-08-21

    An efficient perturbational treatment of spin-orbit coupling within the framework of high-level multi-reference techniques has been implemented in the most recent version of the COLUMBUS quantum chemistry package, extending the existing fully variational two-component (2c) multi-reference configuration interaction singles and doubles (MRCISD) method. The proposed scheme follows related implementations of quasi-degenerate perturbation theory (QDPT) model space techniques. Our model space is built either from uncontracted, large-scale scalar relativistic MRCISD wavefunctions or based on the scalar-relativistic solutions of the linear-response-theory-based multi-configurational averaged quadratic coupled cluster method (LRT-MRAQCC). The latter approach allows for a consistent, approximately size-consistent and size-extensive treatment of spin-orbit coupling. The approach is described in detail and compared to a number of related techniques. The inherent accuracy of the QDPT approach is validated by comparing cuts of the potential energy surfaces of acrolein and its S, Se, and Te analogues with the corresponding data obtained from matching fully variational spin-orbit MRCISD calculations. The conceptual availability of approximate analytic gradients with respect to geometrical displacements is an attractive feature of the 2c-QDPT-MRCISD and 2c-QDPT-LRT-MRAQCC methods for structure optimization and ab initio molecular dynamics simulations.

  13. An improved artifact removal in exposure fusion with local linear constraints

    NASA Astrophysics Data System (ADS)

    Zhang, Hai; Yu, Mali

    2018-04-01

    In exposure fusion, it is challenging to remove artifacts caused by camera motion and moving objects in the scene. An improved artifact removal method is proposed in this paper, which performs local linear adjustment in the artifact removal process. After determining a reference image, we first perform high-dynamic-range (HDR) deghosting to generate an intermediate image stack from the input image stack. Then, a linear Intensity Mapping Function (IMF) in each window is extracted based on the intensities of the intermediate and reference images and the intensity mean and variance of the reference image. Finally, with the extracted local linear constraints, we reconstruct a target image stack, which can be directly used for fusing a single HDR-like image. Experiments were carried out, and the results demonstrate that the proposed method is robust and effective in removing artifacts, especially in the saturated regions of the reference image.

  14. Linear time-dependent reference intervals where there is measurement error in the time variable-a parametric approach.

    PubMed

    Gillard, Jonathan

    2015-12-01

    This article re-examines parametric methods for the calculation of time-specific reference intervals where there is measurement error present in the time covariate. Previously published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. In fact, this is an incorrect method when measurement errors are present, and in this article, we show that the use of this approach may, in certain cases, lead to referral patterns that vary with different values of the covariate. Thus, it would not be the case that all patients are treated equally; some subjects would be more likely to be referred than others, hence violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement of equal treatment for all subjects. © The Author(s) 2011.

  15. The Neural Network In Coordinate Transformation

    NASA Astrophysics Data System (ADS)

    Urusan, Ahmet Yucel

    2011-12-01

    In the international literature, coordinate operations are divided into two categories: coordinate conversion and coordinate transformation. In coordinate conversion, coordinates are converted from coordinate system A to coordinate system B within the same datum (i.e., the origin, scale, and axis directions are the same). Coordinate transformation involves two different datums, each based on a different coordinate reference system; coordinates are transformed from coordinate reference system A to coordinate reference system B. Geodetic studies are based on physical measurements. Coordinate transformation requires identical points measured in both coordinate reference systems (A and B). However, it is difficult (and requires a large dedicated budget) to measure in some places, such as mountain tops, country borders, and seasides. In this study, a solution to this problem was investigated. A learning method, one of the neural network methods, was used to solve the problem.
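
    As a toy illustration of the idea, a small feed-forward network can learn the mapping between two reference systems from identical points and then predict coordinates where no common measurements exist. All values below are synthetic, and the paper's actual network design is not reproduced:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Identical points known in both systems (synthetic, normalized coordinates).
xy_a = rng.uniform(0.0, 1.0, size=(50, 2))
theta, scale, shift = np.radians(0.5), 1.0001, np.array([0.12, -0.08])
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
xy_b = scale * xy_a @ R.T + shift      # a hidden "datum change" to recover

# Train on the identical points, then transform a new point.
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=20000,
                   tol=1e-9, random_state=0).fit(xy_a, xy_b)
print(net.predict([[0.5, 0.5]]))       # close to the true transformed point
```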

  16. High-Accuracy Decoupling Estimation of the Systematic Coordinate Errors of an INS and Intensified High Dynamic Star Tracker Based on the Constrained Least Squares Method

    PubMed Central

    Jiang, Jie; Yu, Wenbo; Zhang, Guangjun

    2017-01-01

    Navigation accuracy is one of the key performance indicators of an inertial navigation system (INS). Requirements for an accuracy assessment of an INS in a real work environment are exceedingly urgent because of enormous differences between real work and laboratory test environments. An attitude accuracy assessment of an INS based on the intensified high dynamic star tracker (IHDST) is particularly suitable for a real complex dynamic environment. However, the coupled systematic coordinate errors of an INS and the IHDST severely decrease the attitude assessment accuracy of an INS. Given that, a high-accuracy decoupling estimation method of the above systematic coordinate errors based on the constrained least squares (CLS) method is proposed in this paper. The reference frame of the IHDST is firstly converted to be consistent with that of the INS because their reference frames are completely different. Thereafter, the decoupling estimation model of the systematic coordinate errors is established and the CLS-based optimization method is utilized to estimate errors accurately. After compensating for error, the attitude accuracy of an INS can be assessed based on IHDST accurately. Both simulated experiments and real flight experiments of aircraft are conducted, and the experimental results demonstrate that the proposed method is effective and shows excellent performance for the attitude accuracy assessment of an INS in a real work environment. PMID:28991179
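
    The constrained least squares step can be sketched generically: minimizing ||Ax - b||^2 subject to linear constraints Cx = d reduces to solving a KKT linear system. This is a textbook CLS solver, not the paper's specific error model:

```python
import numpy as np

def constrained_lsq(A, b, C, d):
    """Solve min ||Ax - b||^2 subject to Cx = d via the KKT system
    [[A^T A, C^T], [C, 0]] [x; lam] = [A^T b; d].
    Assumes the KKT matrix is nonsingular (full-rank constraints).
    """
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]   # estimated parameters; sol[n:] are Lagrange multipliers
```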

  17. Temperature compensated and self-calibrated current sensor using reference current

    DOEpatents

    Yakymyshyn, Christopher Paul [Seminole, FL; Brubaker, Michael Allen [Loveland, CO; Yakymyshyn, Pamela Jane [Seminole, FL

    2008-01-22

    A method is described to provide temperature compensation and self-calibration of a current sensor based on a plurality of magnetic field sensors positioned around a current carrying conductor. A reference electrical current carried by a conductor positioned within the sensing window of the current sensor is used to correct variations in the output signal due to temperature variations and aging.

  18. Randomised Controlled Trial of a Parenting Intervention in the Voluntary Sector for Reducing Child Conduct Problems: Outcomes and Mechanisms of Change

    ERIC Educational Resources Information Center

    Gardner, Frances; Burton, Jennifer; Klimes, Ivana

    2006-01-01

    Background: To test effectiveness of a parenting intervention, delivered in a community-based voluntary-sector organisation, for reducing conduct problems in clinically-referred children. Methods: Randomised controlled trial, follow-up at 6, 18 months, assessors blind to treatment status. Participants--76 children referred for conduct problems,…

  19. Preparation and value assignment of standard reference material 968e fat-soluble vitamins, carotenoids, and cholesterol in human serum.

    PubMed

    Thomas, Jeanice B; Duewer, David L; Mugenya, Isaac O; Phinney, Karen W; Sander, Lane C; Sharpless, Katherine E; Sniegoski, Lorna T; Tai, Susan S; Welch, Michael J; Yen, James H

    2012-01-01

    Standard Reference Material 968e Fat-Soluble Vitamins, Carotenoids, and Cholesterol in Human Serum provides certified values for total retinol, γ- and α-tocopherol, total lutein, total zeaxanthin, total β-cryptoxanthin, total β-carotene, 25-hydroxyvitamin D(3), and cholesterol. Reference and information values are also reported for nine additional compounds including total α-cryptoxanthin, trans- and total lycopene, total α-carotene, trans-β-carotene, and coenzyme Q(10). The certified values for the fat-soluble vitamins and carotenoids in SRM 968e were based on the agreement of results from the means of two liquid chromatographic methods used at the National Institute of Standards and Technology (NIST) and from the median of results of an interlaboratory comparison exercise among institutions that participate in the NIST Micronutrients Measurement Quality Assurance Program. The assigned values for cholesterol and 25-hydroxyvitamin D(3) in the SRM are the means of results obtained using the NIST reference method based upon gas chromatography-isotope dilution mass spectrometry and liquid chromatography-isotope dilution tandem mass spectrometry, respectively. SRM 968e is currently one of two available health-related NIST reference materials with concentration values assigned for selected fat-soluble vitamins, carotenoids, and cholesterol in human serum matrix. This SRM is used extensively by laboratories worldwide primarily to validate methods for determining these analytes in human serum and plasma and for assigning values to in-house control materials. The value assignment of the analytes in this SRM will help support measurement accuracy and traceability for laboratories performing health-related measurements in the clinical and nutritional communities.

  20. 40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... isokinetic sampling rates prior to a pollutant emission measurement run. The approximation method described... with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its equivalent...

  1. Computer Applications in Balancing Chemical Equations.

    ERIC Educational Resources Information Center

    Kumar, David D.

    2001-01-01

    Discusses computer-based approaches to balancing chemical equations. Surveys 13 methods, 6 based on matrix, 2 interactive programs, 1 stand-alone system, 1 developed in algorithm in Basic, 1 based on design engineering, 1 written in HyperCard, and 1 prepared for the World Wide Web. (Contains 17 references.) (Author/YDS)

  2. Accurate determination of brain metabolite concentrations using ERETIC as external reference.

    PubMed

    Zoelch, Niklaus; Hock, Andreas; Heinzer-Schweizer, Susanne; Avdievitch, Nikolai; Henning, Anke

    2017-08-01

    Magnetic Resonance Spectroscopy (MRS) can provide in vivo metabolite concentrations in standard concentration units if a reliable reference signal is available. For 1H MRS in the human brain, typically the signal from the tissue water is used as the (internal) reference signal. However, a concentration determination based on the tissue water signal most often requires a reliable estimate of the water concentration present in the investigated tissue. Especially in clinically interesting cases, this estimation might be difficult. To avoid assumptions about the water in the investigated tissue, the Electric REference To access In vivo Concentrations (ERETIC) method has been proposed. In this approach, the metabolite signal is compared with a reference signal acquired in a phantom and potential coil-loading differences are corrected using a synthetic reference signal. The aim of this study, conducted with a transceiver quadrature head coil, was to increase the accuracy of the ERETIC method by correcting the influence of spatial B1 inhomogeneities and to simplify the quantification with ERETIC by incorporating an automatic phase correction for the ERETIC signal. Transmit field (B1+) differences are minimized with a volume-selective power optimization, whereas reception sensitivity changes are corrected using contrast-minimized images of the brain and by adapting the voxel location in the phantom measurement closely to the position measured in vivo. By applying the proposed B1 correction scheme, the mean metabolite concentrations determined with ERETIC in 21 healthy subjects at three different positions agree with concentrations derived with the tissue water signal as reference. In addition, brain water concentrations determined with ERETIC were in agreement with estimations derived using tissue segmentation and literature values for relative water densities. Based on the results, the ERETIC method presented here is a valid tool to derive in vivo metabolite concentrations, with potential advantages compared with internal water referencing in diseased tissue. Copyright © 2017 John Wiley & Sons, Ltd.
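
    At its core, external referencing with ERETIC is a double ratio in which the synthetic signal cancels coil-loading differences between the in vivo and phantom measurements. A minimal sketch of that ratio (relaxation and B1 corrections omitted; variable names are illustrative):

```python
def eretic_concentration(S_met, S_eretic_vivo, S_ref, S_eretic_phantom, C_ref):
    """Metabolite concentration from an external phantom reference of
    known concentration C_ref, with coil loading cancelled by the
    synthetic ERETIC signal measured in each session (sketch only).
    """
    # Normalizing each measured signal by its session's ERETIC amplitude
    # removes the receive-chain/loading scale factor from both sessions.
    return C_ref * (S_met / S_eretic_vivo) / (S_ref / S_eretic_phantom)
```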

  3. Accuracy in Dental Medicine, A New Way to Measure Trueness and Precision

    PubMed Central

    Ender, Andreas; Mehl, Albert

    2014-01-01

    Reference scanners are used in dental medicine to verify many procedures. The main interest is to verify impression methods, as they serve as a basis for dental restorations. The current limitation of many reference scanners is the lack of accuracy when scanning large objects like full dental arches, or the limited possibility to assess detailed tooth surfaces. A new reference scanner, based on the focus variation scanning technique, was evaluated with regard to highest local and general accuracy. A specific scanning protocol was tested to scan original tooth surfaces from dental impressions. Also, different model materials were verified. The results showed a high scanning accuracy of the reference scanner, with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision in the case of full arch scans. Current dental impression methods showed much higher deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects like single tooth surfaces can be scanned with an even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences in many dental research fields. The different magnification levels, combined with a high local and general accuracy, can be used to assess changes from single teeth or restorations up to full-arch changes. PMID:24836007

  4. Comparative Evaluation of Veriflow® Listeria monocytogenes to USDA and AOAC Culture Based Methods for the Detection of Listeria monocytogenes in Food.

    PubMed

    Joelsson, Adam C; Brown, Ashley S; Puri, Amrita; Keough, Martin P; Gaudioso, Zara E; Siciliano, Nicholas A; Snook, Adam E

    2015-01-01

    Veriflow® Listeria monocytogenes (LM) is a molecular based assay for the presumptive detection of Listeria monocytogenes from environmental surfaces, dairy, and ready-to-eat (RTE) food matrixes (hot dogs and deli meat). The assay utilizes a PCR detection method coupled with a rapid, visual, flow-based assay that develops in 3 min post PCR amplification and requires only 24 h of enrichment for maximum sensitivity. The Veriflow LM system eliminates the need for sample purification, gel electrophoresis, or fluorophore-based detection of target amplification, and does not require complex data analysis. This Performance Tested Method(SM) validation study demonstrated the ability of the Veriflow LM method to detect low levels of artificially inoculated L. monocytogenes in seven distinct environmental and food matrixes. In each unpaired reference comparison study, probability of detection analysis indicated no significant difference between the Veriflow LM method and the U.S. Department of Agriculture, Food Safety and Inspection Service Microbiology Laboratory Guidebook 8.08 or AOAC 993.12 reference method. Fifty strains of L. monocytogenes were detected in the inclusivity study, while 39 nonspecific organisms were undetected in the exclusivity study. The study results show that Veriflow LM is a sensitive, selective, and robust assay for the presumptive detection of L. monocytogenes sampled from environmental, dairy, or RTE (hot dogs and deli meat) food matrixes.

  5. The use of digital PCR to improve the application of quantitative molecular diagnostic methods for tuberculosis.

    PubMed

    Devonshire, Alison S; O'Sullivan, Denise M; Honeyborne, Isobella; Jones, Gerwyn; Karczmarczyk, Maria; Pavšič, Jernej; Gutteridge, Alice; Milavec, Mojca; Mendoza, Pablo; Schimmel, Heinz; Van Heuverswyn, Fran; Gorton, Rebecca; Cirillo, Daniela Maria; Borroni, Emanuele; Harris, Kathryn; Barnard, Marinus; Heydenrych, Anthenette; Ndusilo, Norah; Wallis, Carole L; Pillay, Keshree; Barry, Thomas; Reddington, Kate; Richter, Elvira; Mozioğlu, Erkan; Akyürek, Sema; Yalçınkaya, Burhanettin; Akgoz, Muslum; Žel, Jana; Foy, Carole A; McHugh, Timothy D; Huggett, Jim F

    2016-08-03

    Real-time PCR (qPCR) based methods, such as the Xpert MTB/RIF, are increasingly being used to diagnose tuberculosis (TB). While qualitative methods are adequate for diagnosis, the therapeutic monitoring of TB patients requires quantitative methods currently performed using smear microscopy. The potential use of quantitative molecular measurements for therapeutic monitoring has been investigated but findings have been variable and inconclusive. The lack of an adequate reference method and reference materials is a barrier to understanding the source of such disagreement. Digital PCR (dPCR) offers the potential for an accurate method for quantification of specific DNA sequences in reference materials which can be used to evaluate quantitative molecular methods for TB treatment monitoring. To assess a novel approach for the development of quality assurance materials we used dPCR to quantify specific DNA sequences in a range of prototype reference materials and evaluated accuracy between different laboratories and instruments. The materials were then also used to evaluate the quantitative performance of qPCR and Xpert MTB/RIF in eight clinical testing laboratories. dPCR was found to provide results in good agreement with the other methods tested and to be highly reproducible between laboratories without calibration even when using different instruments. When the reference materials were analysed with qPCR and Xpert MTB/RIF by clinical laboratories, all laboratories were able to correctly rank the reference materials according to concentration, however there was a marked difference in the measured magnitude. TB is a disease where the quantification of the pathogen could lead to better patient management and qPCR methods offer the potential to rapidly perform such analysis. However, our findings suggest that when precisely characterised materials are used to evaluate qPCR methods, the measurement result variation is too high to determine whether molecular quantification of Mycobacterium tuberculosis would provide a clinically useful readout. The methods described in this study provide a means by which the technical performance of quantitative molecular methods can be evaluated independently of clinical variability to improve accuracy of measurement results. These will assist in ultimately increasing the likelihood that such approaches could be used to improve patient management of TB.
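
    For context, absolute quantification by dPCR rests on Poisson statistics over partitions, which is what makes it calibration-free. A minimal sketch of the standard calculation (the partition volume is instrument specific; the value below is illustrative):

```python
import numpy as np

def dpcr_concentration(n_positive, n_total, v_partition_nl=0.85):
    """Target concentration (copies/uL) from digital PCR partition counts.

    Poisson correction: the mean copies per partition is
    lambda = -ln(1 - p), with p the fraction of positive partitions.
    """
    p = n_positive / n_total
    lam = -np.log(1.0 - p)
    return lam / (v_partition_nl * 1e-3)   # convert nL to uL

# e.g. 4,000 positive of 20,000 partitions gives ~262 copies/uL:
print(dpcr_concentration(4000, 20000))
```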

  6. Equivalence testing using existing reference data: An example with genetically modified and conventional crops in animal feeding studies.

    PubMed

    van der Voet, Hilko; Goedhart, Paul W; Schmidt, Kerstin

    2017-11-01

    An equivalence testing method is described to assess the safety of regulated products using relevant data obtained in historical studies with assumedly safe reference products. The method is illustrated using data from a series of animal feeding studies with genetically modified and reference maize varieties. Several criteria for quantifying equivalence are discussed, and study-corrected distribution-wise equivalence is selected as being appropriate for the example case study. An equivalence test is proposed based on a high probability of declaring equivalence in a simplified situation, where there is no between-group variation, where the historical and current studies have the same residual variance, and where the current study is assumed to have a sample size as set by a regulator. The method makes use of generalized fiducial inference methods to integrate uncertainties from both the historical and the current data. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Alternative Methods for Estimating Plane Parameters Based on a Point Cloud

    NASA Astrophysics Data System (ADS)

    Stryczek, Roman

    2017-12-01

    Non-contact measurement techniques carried out using triangulation optical sensors are increasingly popular in measurements with the use of industrial robots directly on production lines. The result of such measurements is often a cloud of measurement points that is characterized by considerable measuring noise, the presence of a number of points that differ from the reference model, and excessive errors that must be eliminated from the analysis. To obtain vector information from the points contained in the cloud that describe reference models, the data obtained during a measurement should be subjected to appropriate processing operations. The present paper analyzes the suitability of methods known as RANdom SAmple Consensus (RANSAC), the Monte Carlo Method (MCM), and Particle Swarm Optimization (PSO) for the extraction of the reference model. The effectiveness of the tested methods is illustrated by examples of measuring the height of an object and the angle of a plane, based on experiments carried out under workshop conditions.
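
    Of the three approaches, RANSAC is the most compact to sketch: repeatedly fit a plane to three random points, count inliers within a distance tolerance, and refine on the best consensus set. Parameter values below are illustrative, not the paper's settings:

```python
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.01, rng=None):
    """Fit a plane (unit normal n, offset d with n.p = d) to a noisy
    (N, 3) point cloud by RANSAC; tol is the inlier distance threshold
    in the cloud's units."""
    rng = rng or np.random.default_rng()
    best_inliers = np.zeros(len(points), bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                       # degenerate (collinear) sample
        n /= norm
        dist = np.abs((points - p0) @ n)   # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine on the consensus set with a least-squares (SVD) fit.
    q = points[best_inliers]
    centroid = q.mean(axis=0)
    n = np.linalg.svd(q - centroid)[2][-1]  # normal = smallest singular vector
    return n, n @ centroid, best_inliers
```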

  8. No-reference image quality assessment based on statistics of convolution feature maps

    NASA Astrophysics Data System (ADS)

    Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo

    2018-04-01

    We propose a Convolutional Feature Maps (CFM)-driven approach to accurately predict image quality. Our motivation is based on the finding that Natural Scene Statistics (NSS) features computed on convolutional feature maps are significantly sensitive to the degree of distortion of an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain kernels for generating the CFM. We design a forward NSS layer that operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer are effective in describing the type and degree of distortion an image has suffered. Finally, Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict the subjective quality score of a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.

  9. A Lossless Multichannel Bio-Signal Compression Based on Low-Complexity Joint Coding Scheme for Portable Medical Devices

    PubMed Central

    Kim, Dong-Sun; Kwon, Jin-San

    2014-01-01

    Research on real-time health systems has received great attention in recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared-multiplier scheme for portable devices. A joint-coding decision method and a reference-channel selection scheme are modified for the low-complexity joint coder. The proposed joint-coding decision method determines the optimal joint-coding operation based on the relationship between the cross-correlation of residual signals and the compression ratio. The reference-channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel biosignal lossless data compressor. PMID:25237900

  10. Calibration of GPS based high accuracy speed meter for vehicles

    NASA Astrophysics Data System (ADS)

    Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie

    2015-02-01

    A GPS-based high-accuracy speed meter for vehicles is a special type of GPS speed meter that uses Doppler demodulation of GPS signals to calculate the speed of a moving target. It is increasingly used as reference equipment in the field of traffic speed measurement, but acknowledged standard calibration methods are still lacking. To solve this problem, this paper presents set-ups for simulated calibration, field-test signal-replay calibration, and an in-field comparison with an optical-sensor-based non-contact speed meter. All experiments were carried out at specific speed values in the range of (40-180) km/h with the same GPS speed meter. The speed measurement errors of the simulated calibration fall within +/-0.1 km/h or +/-0.1%, with uncertainties smaller than 0.02% (k=2). The errors of the replay calibration fall within +/-0.1%, with uncertainties smaller than 0.10% (k=2). The calibration results justify the effectiveness of the two methods. The relative deviations of the GPS speed meter from the optical-sensor-based non-contact speed meter fall within +/-0.3%, which validates the use of GPS speed meters as reference instruments. The results of this research can provide a technical basis for the establishment of internationally standardized calibration methods for GPS speed meters, and thus ensure the legal status of GPS speed meters as reference equipment in the field of traffic speed metrology.

  11. The Nuclear Energy Density Functional Formalism

    NASA Astrophysics Data System (ADS)

    Duguet, T.

    The present document focuses on the theoretical foundations of the nuclear energy density functional (EDF) method. As such, it does not aim at reviewing the status of the field, at covering all possible ramifications of the approach, or at presenting recent achievements and applications. The objective is to provide a modern account of the nuclear EDF formalism that is at variance with traditional presentations that rely, at one point or another, on a Hamiltonian-based picture. The latter is not general enough to encompass what the nuclear EDF method represents as of today. Specifically, the traditional Hamiltonian-based picture does not allow one to grasp the difficulties associated with the fact that currently available parametrizations of the energy kernel E[g',g] at play in the method do not derive from a genuine Hamilton operator, even an effective one. The method is formulated from the outset through the most general multi-reference, i.e. beyond mean-field, implementation, such that the single-reference, i.e. "mean-field", implementation derives as a particular case. As such, a key point of the presentation provided here is to demonstrate that the multi-reference EDF method can indeed be formulated in a mathematically meaningful fashion even if E[g',g] does not derive from a genuine Hamilton operator. In particular, the restoration of symmetries can be entirely formulated without making any reference to a projected state, i.e. within a genuine EDF framework. However, and as is illustrated in the present document, a mathematically meaningful formulation does not guarantee that the formalism is sound from a physical standpoint. The price at which physical soundness can also be enforced in the future is eventually alluded to.

  12. Investigation on the Reference Evapotranspiration Distribution at Regional Scale By Alternative Methods to Compute the FAO Penman-Monteith Equation

    NASA Astrophysics Data System (ADS)

    Snyder, R. L.; Mancosu, N.; Spano, D.

    2014-12-01

    This study derived the summer (June-August) reference evapotranspiration distribution map for Sardinia (Italy) based on weather station data and a geographic information system (GIS). A modified daily Penman-Monteith equation from the Food and Agriculture Organization of the United Nations (UN-FAO) and the American Society of Civil Engineers Environmental and Water Resources Institute (ASCE-EWRI) was used to calculate the Standardized Reference Evapotranspiration (ETos) for all weather stations having a "full" set of the data required for the calculations. For stations having only temperature data (partial stations), the Hargreaves-Samani equation was used to estimate the reference evapotranspiration for a grass surface (ETo). The ETos and ETo results differed depending on the local climate, so two methods to estimate ETos from ETo were tested. Substituting missing solar radiation, wind speed, and humidity data from a nearby station within a similar microclimate was found to give better results than using a calibration factor relating ETos and ETo. Therefore, the substitution method was used to estimate ETos at "partial" stations having only temperature data. The combination of 63 full and partial stations was sufficient to use GIS to map ETos for Sardinia. Three interpolation methods were studied, and the ordinary kriging model fitted the observed data better than a radial basis function or the inverse distance weighting method. Using station data points to create a regional map simplified the zonation of ETos when large-scale computations were needed. Making a distinction based on ETos classes allows the simulation of crop water requirements for large areas, and it can potentially lead to improved irrigation management and water savings. It also provides a baseline to investigate possible impacts of climate change.
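
    As a concrete reference point, the Hargreaves-Samani estimate used for the temperature-only stations has a simple closed form; a sketch of the standard formulation, with the 0.408 factor converting extraterrestrial radiation to equivalent evaporation (the example inputs are illustrative):

    ```python
    def eto_hargreaves_samani(tmax_c, tmin_c, ra_mm_day):
        """Hargreaves-Samani grass reference ET (mm/day).
        ra_mm_day: extraterrestrial radiation in equivalent evaporation,
        i.e. Ra [MJ m-2 day-1] * 0.408."""
        tmean = (tmax_c + tmin_c) / 2.0
        return 0.0023 * ra_mm_day * (tmean + 17.8) * (tmax_c - tmin_c) ** 0.5

    # Example: Tmax 32 C, Tmin 18 C, Ra = 40 MJ m-2 day-1
    print(round(eto_hargreaves_samani(32.0, 18.0, 40 * 0.408), 2))  # ~6 mm/day
    ```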

  13. Intercomparison of Lab-Based Soil Water Extraction Methods for Stable Water Isotope Analysis

    NASA Astrophysics Data System (ADS)

    Pratt, D.; Orlowski, N.; McDonnell, J.

    2016-12-01

    The effect of pore water extraction technique on the resultant isotopic signature is poorly understood. Here we present results of an intercomparison of five common lab-based soil water extraction techniques: high-pressure mechanical squeezing, centrifugation, direct vapor equilibration, microwave extraction, and cryogenic extraction. We applied the five extraction methods to two physicochemically different standard soil types (silty sand and clayey loam) that were oven-dried and rewetted with water of known isotopic composition at three different gravimetric water contents (8, 20, and 30%). We tested the null hypothesis that all extraction techniques would provide the same isotopic result, independent of soil type and water content. Our results showed that the extraction technique had a significant effect on the soil water isotopic composition. Each method exhibited deviations from the spiked reference water, with soil type and water content showing a secondary effect. Cryogenic extraction showed the largest deviations from the reference water, whereas mechanical squeezing and centrifugation provided the closest match to the reference water for both soil types. We also compared results for each extraction technique that produced liquid water on both an OA-ICOS and an IRMS; the differences between them were negligible.

  14. [Development and application of reference materials containing mixed degradation products of amoxicillin and ampicillin].

    PubMed

    Li, Wei; Zhang, Wei-Qing; Li, Xiang; Hu, Chang-Qin

    2014-09-01

    Reference materials containing mixed degradation products of amoxicillin and ampicillin were developed after optimization of the preparation processes. The target impurities were obtained by controlled stress testing, and each major component was identified by HPLC-MS and compared with a single traceable reference standard. The developed reference materials were applied to system suitability testing, verifying that the HPLC system performed in accordance with the requirements set forth in the Chinese Pharmacopoeia, and to identification of major impurities in samples based on retention and spectral information, which has advantages over the methods put forth in foreign pharmacopoeias. The development and application of the reference materials offer an effective way to rapidly identify impurities in chromatograms, and provide references for analyzing the source of impurities and evaluating drug quality.

  15. Study of Fourier transform spectrometer based on Michelson interferometer wave-meter

    NASA Astrophysics Data System (ADS)

    Peng, Yuexiang; Wang, Liqiang; Lin, Li

    2008-03-01

    A wave-meter based on a Michelson interferometer consists of a reference and a measurement channel. A voice-coil motor under PID control realizes stable scanning motion. The wavelength of the measurement laser is obtained by counting the interference fringes of the reference and measurement lasers. The frequency-stabilized reference laser creates a cosine interferogram whose frequency is proportional to the velocity of the moving motor. The interferogram of the reference laser is converted to a pulse signal, which is subdivided by a factor of 16. To obtain the optical spectrum, the analog signal of the measurement channel must be acquired. The analog-to-digital converter (ADC) for the measurement channel is triggered by the 16-times-subdivided pulse signal of the reference laser, so the sampling depends only on the frequency of the reference laser and is independent of the motor velocity. This means the measurement-channel signal is sampled at uniform increments of optical path difference. The optical spectrum of the measurement channel can then be computed with the Fast Fourier Transform (FFT) by a DSP and displayed on an LCD.

  16. The deconvolution of complex spectra by artificial immune system

    NASA Astrophysics Data System (ADS)

    Galiakhmetova, D. I.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.

    2017-11-01

    An application of the artificial immune system method to the decomposition of complex spectra is presented. The results of decomposing a model contour consisting of three Gaussian components are demonstrated. The artificial immune system is an optimization method inspired by the behaviour of the biological immune system, and it belongs to the modern family of search-based optimization methods.

  17. Quick acquisition and recognition method for the beacon in deep space optical communications.

    PubMed

    Wang, Qiang; Liu, Yuefei; Ma, Jing; Tan, Liying; Yu, Siyuan; Li, Changjiang

    2016-12-01

    In deep space optical communications, it is very difficult to acquire the beacon given the long communication distance. Acquisition efficiency is essential for establishing and holding the optical communication link. Here we propose a quick acquisition and recognition method for the beacon in deep space optical communications based on the characteristics of the deep space optical link. To identify the beacon against the background light efficiently, we utilize the maximum similarity between the collected image and a reference image for accurate recognition and acquisition of the beacon in the area of uncertainty. First, the collected image and the reference image are processed by the Fourier-Mellin transform. Second, image sampling and image matching are applied for accurate positioning of the beacon. Finally, a field-programmable gate array (FPGA)-based system is used to verify and realize the method. The experimental results show that the acquisition time for the beacon is as short as 8.1 s. Future application of this method in the design of deep space optical communication systems will be beneficial.

  18. AFNOR validation of Premi Test, a microbiological-based screening tube-test for the detection of antimicrobial residues in animal muscle tissue.

    PubMed

    Gaudin, Valerie; Juhel-Gaugain, Murielle; Morétain, Jean-Pierre; Sanders, Pascal

    2008-12-01

    Premi Test contains viable spores of a strain of Bacillus stearothermophilus which is sensitive to antimicrobial residues, such as beta-lactams, tetracyclines, macrolides and sulphonamides. The growth of the strain is inhibited by the presence of antimicrobial residues in muscle tissue samples. Premi Test was validated according to AFNOR rules (French Association for Normalisation). The AFNOR validation was based on the comparison of reference methods (French Official method, i.e. four plate test (FPT) and the STAR protocol (five plate test)) with the alternative method (Premi Test). A preliminary study was conducted in an expert laboratory (Community Reference Laboratory, CRL) on both spiked and incurred samples (field samples). Several method performance criteria (sensitivity, specificity, relative accuracy) were estimated and are discussed, in addition to detection capabilities. Adequate agreement was found between the alternative method and the reference methods. However, Premi Test was more sensitive to beta-lactams and sulphonamides than the FPT. Subsequently, a collaborative study with 11 laboratories was organised by the CRL. Blank and spiked meat juice samples were sent to participants. The expert laboratory (CRL) statistically analysed the results. It was concluded that Premi Test could be used for the routine determination of antimicrobial residues in muscle of different animal origin with acceptable analytical performance. The detection capabilities of Premi Test for beta-lactams (amoxicillin, ceftiofur), one macrolide (tylosin) and tetracycline were at the level of the respective maximum residue limits (MRL) in muscle samples or even lower.

  19. Kernel reconstruction methods for Doppler broadening - Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    NASA Astrophysics Data System (ADS)

    Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord

    2017-04-01

    This article establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of the same quantity at reference temperatures (Tj). The problem is formalized in a cross-section-independent fashion by considering the kernels of the different operators that convert cross-section-related quantities from a temperature T0 to a higher temperature T, namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus performed here by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solution of a linear algebraic system. The choice of reference temperatures (Tj) is then optimized so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin, Tmax]. The performance of these kernel reconstruction methods is then assessed against previous temperature interpolation methods by testing them on isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
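
    The optimal-coefficient step is ordinary L2 projection; a generic sketch of solving the Gram (normal-equation) system for the interpolation weights, with toy Gaussian stand-ins for the actual Doppler kernels (widths and grid are illustrative assumptions):

    ```python
    import numpy as np

    def l2_interp_coeffs(k_refs, k_target):
        """Optimal L2 coefficients a such that sum_j a[j] * k_refs[j]
        best approximates k_target; k_refs is an (m, n) array with one
        sampled reference kernel per row."""
        G = k_refs @ k_refs.T      # Gram matrix of inner products <k_i, k_j>
        b = k_refs @ k_target      # projections <k_i, k_target>
        return np.linalg.solve(G, b)

    # Toy usage: Gaussian 'kernels' whose width grows with temperature
    x = np.linspace(-10.0, 10.0, 2001)
    kern = lambda T: np.exp(-x**2 / (2 * T / 300.0)) / np.sqrt(T / 300.0)
    refs = np.vstack([kern(T) for T in (300.0, 1200.0, 3000.0)])
    print(l2_interp_coeffs(refs, kern(900.0)))   # weights for T = 900 K
    ```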

  20. Validating internal controls for quantitative plant gene expression studies.

    PubMed

    Brunner, Amy M; Yakovlev, Igor A; Strauss, Steven H

    2004-08-18

    Real-time reverse transcription PCR (RT-PCR) has greatly improved the ease and sensitivity of quantitative gene expression studies. However, accurate measurement of gene expression with this method relies on the choice of a valid reference for data normalization. Studies rarely verify that expression levels of the reference genes are adequately consistent among the samples used, nor do they compare alternative genes to assess which are most reliable for the experimental conditions analyzed. Using real-time RT-PCR to study the expression of 10 poplar (genus Populus) housekeeping genes, we demonstrate a simple method for determining the degree of stability of gene expression over a set of experimental conditions. Based on a traditional method for analyzing the stability of varieties in plant breeding, it defines measures of gene expression stability from analysis of variance (ANOVA) and linear regression. We found that the potential internal control genes differed widely in their expression stability over the different tissues, developmental stages, and environmental conditions studied. Our results support the view that quantitative comparisons of candidate reference genes are an important part of real-time RT-PCR studies that seek to precisely evaluate variation in gene expression. The method we demonstrate facilitates statistical and graphical evaluation of gene expression stability. Selection of the best reference gene for a given set of experimental conditions should enable detection of biologically significant changes in gene expression that are too small to be revealed by less precise methods, or when highly variable reference genes are unknowingly used in real-time RT-PCR experiments.

  1. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    PubMed

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides information about the contributions of absorptive and postabsorptive conversion to total bioefficacy if an additional sample is taken at 1 d. © 2017 American Society for Nutrition.

  2. Experimental validation of beam quality correction factors for proton beams

    NASA Astrophysics Data System (ADS)

    Gomà, Carles; Hofstetter-Boillat, Bénédicte; Safai, Sairos; Vörös, Sándor

    2015-04-01

    This paper presents a method to experimentally validate the beam quality correction factors (kQ) tabulated in IAEA TRS-398 for proton beams and to determine the kQ of non-tabulated ionization chambers (based on the already tabulated values). The method is based exclusively on ionometry and it consists in comparing the reading of two ionization chambers under the same reference conditions in a proton beam quality Q and a reference beam quality 60Co. This allows one to experimentally determine the ratio between the kQ of the two ionization chambers. In this work, 7 different ionization chamber models were irradiated under the IAEA TRS-398 reference conditions for 60Co beams and proton beams. For the latter, the reference conditions for both modulated beams (spread-out Bragg peak field) and monoenergetic beams (pseudo-monoenergetic field) were studied. For monoenergetic beams, it was found that the experimental kQ values obtained for plane-parallel chambers are consistent with the values tabulated in IAEA TRS-398; whereas the kQ values obtained for cylindrical chambers are not consistent—being higher than the tabulated values. These results support the suggestion (of previous publications) that the IAEA TRS-398 reference conditions for monoenergetic proton beams should be revised so that the effective point of measurement of cylindrical ionization chambers is taken into account when positioning the reference point of the chamber at the reference depth. For modulated proton beams, the tabulated kQ values of all the ionization chambers studied in this work were found to be consistent with each other—except for the IBA FC65-G, whose experimental kQ value was found to be 0.6% lower than the tabulated one. The kQ of the PTW Advanced Markus chamber, which is not tabulated in IAEA TRS-398, was found to be 0.997 ± 0.042 (k = 2), based on the tabulated value of the PTW Markus chamber.
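
    The ionometric comparison reduces to simple reading ratios; a minimal sketch under the TRS-398 formalism D_w = M · N_D,w · kQ (variable names are ours, not the authors'):

    ```python
    def kq_ratio(m1_q, m1_co, m2_q, m2_co):
        """Experimental ratio kQ1/kQ2 of two ionization chambers from
        readings M taken under identical reference conditions in beam
        quality Q and in 60Co. Equal dose-to-water in Q gives
        M1_Q*N1*kQ1 = M2_Q*N2*kQ2, and in 60Co (where kQ = 1)
        M1_Co*N1 = M2_Co*N2; dividing the two relations eliminates
        the calibration coefficients N."""
        return (m2_q / m2_co) / (m1_q / m1_co)

    # kQ of a non-tabulated chamber 1 from a tabulated chamber 2:
    # kq1 = kq2_tabulated * kq_ratio(m1_q, m1_co, m2_q, m2_co)
    ```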

  3. Comparison of droplet digital PCR with quantitative real-time PCR for determination of zygosity in transgenic maize.

    PubMed

    Xu, Xiaoli; Peng, Cheng; Wang, Xiaofu; Chen, Xiaoyun; Wang, Qiang; Xu, Junfeng

    2016-12-01

    This study evaluated the applicability of droplet digital PCR (ddPCR) as a tool for maize zygosity determination using quantitative real-time PCR (qPCR) as a reference technology. Quantitative real-time PCR is commonly used to determine transgene copy number or GMO zygosity characterization. However, its effectiveness is based on identical reaction efficiencies for the transgene and the endogenous reference gene. Additionally, a calibrator sample should be utilized for accuracy. Droplet digital PCR is a DNA molecule counting technique that directly counts the absolute number of target and reference DNA molecules in a sample, independent of assay efficiency or external calibrators. The zygosity of the transgene can be easily determined using the ratio of the quantity of the target gene to the reference single copy endogenous gene. In this study, both the qPCR and ddPCR methods were used to determine insect-resistant transgenic maize IE034 zygosity. Both methods performed well, but the ddPCR method was more convenient because of its absolute quantification property.
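
    The zygosity call itself is a one-line ratio test; a sketch assuming a diploid genome and a single-copy endogenous reference gene (the tolerance and example counts are illustrative assumptions):

    ```python
    def zygosity_call(target_copies, reference_copies, tol=0.15):
        """Classify transgene zygosity from the ddPCR ratio of transgene
        concentration to a single-copy endogenous gene: for a diploid
        genome, ratio ~0.5 -> hemizygous, ~1.0 -> homozygous."""
        ratio = target_copies / reference_copies
        if abs(ratio - 1.0) <= tol:
            return "homozygous"
        if abs(ratio - 0.5) <= tol:
            return "hemizygous"
        if ratio <= tol:
            return "null (no transgene)"
        return "ambiguous"

    print(zygosity_call(1480.0, 2950.0))   # ratio ~0.50 -> hemizygous
    ```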

  4. Automated acid and base number determination of mineral-based lubricants by fourier transform infrared spectroscopy: commercial laboratory evaluation.

    PubMed

    Winterfield, Craig; van de Voort, F R

    2014-12-01

    The Fluid Life Corporation assessed and implemented Fourier transform infrared spectroscopy (FTIR)-based methods using American Society for Testing and Materials (ASTM)-like stoichiometric reactions for determination of acid and base number for in-service mineral-based oils. The basic protocols, quality control procedures, calibration, validation, and performance of these new quantitative methods are assessed. ASTM correspondence is attained using a mixed-mode calibration, using primary reference standards to anchor the calibration, supplemented by representative sample lubricants analyzed by ASTM procedures. A partial least squares calibration is devised by combining primary acid/base reference standards and representative samples, focusing on the main spectral stoichiometric response with chemometrics assisting in accounting for matrix variability. FTIR(AN/BN) methodology is precise, accurate, and free of most interference that affects ASTM D664 and D4739 results. Extensive side-by-side operational runs produced normally distributed differences with mean differences close to zero and standard deviations of 0.18 and 0.26 mg KOH/g, respectively. Statistically, the FTIR methods are a direct match to the ASTM methods, with superior performance in terms of analytical throughput, preparation time, and solvent use. FTIR(AN/BN) analysis is a viable, significant advance for in-service lubricant analysis, providing an economic means of trending samples instead of tedious and expensive conventional ASTM(AN/BN) procedures. © 2014 Society for Laboratory Automation and Screening.
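
    A mixed-mode partial least squares calibration of this kind is straightforward to prototype; a sketch with synthetic spectra standing in for the FTIR data (all shapes, noise levels, and the component count are assumptions, not the paper's settings):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Synthetic stand-in: 60 'spectra' of 400 absorbance points whose
    # band area tracks acid number, plus noise mimicking matrix variability
    an_true = rng.uniform(0.0, 4.0, 60)                 # mg KOH/g
    band = np.exp(-0.5 * ((np.arange(400) - 180) / 12.0) ** 2)
    X = an_true[:, None] * band + 0.05 * rng.standard_normal((60, 400))

    pls = PLSRegression(n_components=4)                 # pick by CV in practice
    rmse = -cross_val_score(pls, X, an_true, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"cross-validated RMSE: {rmse:.3f} mg KOH/g")
    ```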

  5. Novel blood pressure and pulse pressure estimation based on pulse transit time and stroke volume approximation.

    PubMed

    Lee, Joonnyong; Sohn, JangJay; Park, Jonghyun; Yang, SeungMan; Lee, Saram; Kim, Hee Chan

    2018-06-18

    Non-invasive continuous blood pressure monitors are of great interest to the medical community due to their value in hypertension management. Recently, studies have shown the potential of pulse pressure as a therapeutic target for hypertension, but not enough attention has been given to non-invasive continuous monitoring of pulse pressure. Although accurate pulse pressure estimation can be of direct value to hypertension management and indirectly to the estimation of systolic blood pressure, as it is the sum of pulse pressure and diastolic blood pressure, only a few inadequate methods of pulse pressure estimation have been proposed. We present a novel, non-invasive blood pressure and pulse pressure estimation method based on pulse transit time and pre-ejection period. Pre-ejection period and pulse transit time were measured non-invasively using electrocardiogram, seismocardiogram, and photoplethysmogram measured from the torso. The proposed method used the 2-element Windkessel model to model pulse pressure with the ratio of stroke volume, approximated by pre-ejection period, and arterial compliance, estimated by pulse transit time. Diastolic blood pressure was estimated using pulse transit time, and systolic blood pressure was estimated as the sum of the two estimates. The estimation method was verified in 11 subjects in two separate conditions with induced cardiovascular response and the results were compared against a reference measurement and values obtained from a previously proposed method. The proposed method yielded high agreement with the reference (pulse pressure correlation with reference R ≥ 0.927, diastolic blood pressure correlation with reference R ≥ 0.854, systolic blood pressure correlation with reference R ≥ 0.914) and high estimation accuracy in pulse pressure (mean root-mean-squared error ≤ 3.46 mmHg) and blood pressure (mean root-mean-squared error ≤ 6.31 mmHg for diastolic blood pressure and ≤ 8.41 mmHg for systolic blood pressure) over a wide range of hemodynamic changes. The proposed pulse pressure estimation method provides accurate estimates in situations with and without significant changes in stroke volume. The proposed method improves upon the currently available systolic blood pressure estimation methods by providing accurate pulse pressure estimates.
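
    The abstract fixes the model structure (a PEP-based stroke-volume surrogate over a PTT-based compliance estimate for PP; DBP from PTT; SBP as their sum) but not its coefficients; a sketch of that structure with hypothetical, per-subject-calibrated constants. The SV ~ 1/PEP and C ~ PTT proportionalities are our reading of the description, not the authors' fitted model.

    ```python
    def estimate_bp(pep_ms, ptt_ms, k=8.1e5, a=30.0, b=1.0e4):
        """Sketch of the described estimator; k, a, b are HYPOTHETICAL
        and would be fitted per subject against a reference measurement.
          stroke-volume surrogate  SV ~ 1/PEP   (assumption)
          arterial compliance      C  ~ PTT     (assumption)
          2-element Windkessel     PP = SV / C
          DBP from PTT, then       SBP = DBP + PP."""
        pp = k / (pep_ms * ptt_ms)      # pulse pressure, mmHg
        dbp = a + b / ptt_ms            # diastolic blood pressure, mmHg
        return dbp + pp, dbp, pp        # (SBP, DBP, PP)

    print(estimate_bp(90.0, 200.0))     # -> (125.0, 80.0, 45.0) mmHg
    ```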

  6. Reference values of thirty-one frequently used laboratory markers for 75-year-old males and females

    PubMed Central

    Ryden, Ingvar; Lind, Lars

    2012-01-01

    Background We have previously reported reference values for common clinical chemistry tests in healthy 70-year-old males and females. We have now repeated this study 5 years later to establish reference values also at the age of 75. It is important to have adequate reference values for elderly patients as biological markers may change over time, and adequate reference values are essential for correct clinical decisions. Methods We have investigated 31 frequently used laboratory markers in 75-year-old males (n = 354) and females (n = 373) without diabetes. The 2.5 and 97.5 percentiles for these markers were calculated according to the recommendations of the International Federation of Clinical Chemistry. Results Reference values are reported for 75-year-old males and females for 31 frequently used laboratory markers. Conclusion There were minor differences between reference intervals calculated with and without individuals with cardiovascular diseases. Several of the reference intervals differed from Scandinavian reference intervals based on younger individuals (Nordic Reference Interval Project). PMID:22300333
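
    The interval computation follows the non-parametric percentile approach recommended by the IFCC; a minimal sketch (the simulated creatinine values are purely illustrative):

    ```python
    import numpy as np

    def reference_interval(values):
        """Non-parametric 2.5th/97.5th percentile reference limits,
        per the IFCC recommendation for reference populations."""
        v = np.asarray(values, dtype=float)
        return np.percentile(v, 2.5), np.percentile(v, 97.5)

    # Hypothetical serum creatinine values for 354 males:
    rng = np.random.default_rng(1)
    lo, hi = reference_interval(rng.normal(85.0, 12.0, 354))
    print(f"reference interval: {lo:.0f}-{hi:.0f} umol/L")
    ```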

  7. Direct model-based predictive control scheme without cost function for voltage source inverters with reduced common-mode voltage

    NASA Astrophysics Data System (ADS)

    Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin

    2018-04-01

    This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without repetitive evaluation of a cost function. To regulate the output currents while keeping the CMVs in the range of -Vdc/6 to +Vdc/6, the developed method uses voltage vectors, as finite control resources, excluding the zero voltage vectors, which produce CMVs in the VSI of up to ±Vdc/2. In model-based predictive control (MPC), not using zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method uses two non-zero voltage vectors in one sampling step. In addition, the voltage vectors scheduled to be used are directly selected at every sampling step once the developed method calculates the future reference voltage vector, saving the effort of repeatedly calculating the cost function. The two non-zero voltage vectors are optimally allocated so that the output current approaches the reference current as closely as possible. Thus, low CMV, rapid current-following capability, and sufficient output-current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.

  8. Device and Method for Gathering Ensemble Data Sets

    NASA Technical Reports Server (NTRS)

    Racette, Paul E. (Inventor)

    2014-01-01

    An ensemble detector uses calibrated noise references to produce ensemble sets of data from which properties of non-stationary processes may be extracted. The ensemble detector comprising: a receiver; a switching device coupled to the receiver, the switching device configured to selectively connect each of a plurality of reference noise signals to the receiver; and a gain modulation circuit coupled to the receiver and configured to vary a gain of the receiver based on a forcing signal; whereby the switching device selectively connects each of the plurality of reference noise signals to the receiver to produce an output signal derived from the plurality of reference noise signals and the forcing signal.

  9. The Measurement of Term Importance in Automatic Indexing.

    ERIC Educational Resources Information Center

    Salton, G.; And Others

    1981-01-01

    Reviews major term-weighting theories, presents methods for estimating the relevance properties of terms based on their frequency characteristics in a document collection, and compares weighting systems using term relevance properties with more conventional frequency-based methodologies. Eighteen references are cited. (Author/FM)
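
    For readers without access to the cited theories, the conventional frequency-based baseline such a review compares against is tf-idf; a minimal sketch:

    ```python
    import math
    from collections import Counter

    def tfidf(docs):
        """Classic frequency-based term weighting (tf-idf): term frequency
        within a document times log inverse document frequency."""
        n = len(docs)
        df = Counter(t for d in docs for t in set(d))   # document frequency
        return [{t: tf * math.log(n / df[t]) for t, tf in Counter(d).items()}
                for d in docs]

    docs = [["term", "weighting", "theory"],
            ["term", "relevance", "properties"],
            ["document", "collection"]]
    print(tfidf(docs)[0])   # 'weighting' outweighs the common 'term'
    ```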

  10. 24 CFR 35.1310 - References.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development LEAD-BASED PAINT POISONING PREVENTION IN CERTAIN RESIDENTIAL STRUCTURES Methods and Standards for Lead-Paint...) The HUD Guidelines for the Evaluation and Control of Lead-Based Paint Hazards in Housing (Guidelines...

  11. 24 CFR 35.1310 - References.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development LEAD-BASED PAINT POISONING PREVENTION IN CERTAIN RESIDENTIAL STRUCTURES Methods and Standards for Lead-Paint...) The HUD Guidelines for the Evaluation and Control of Lead-Based Paint Hazards in Housing (Guidelines...

  12. 24 CFR 35.1310 - References.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development LEAD-BASED PAINT POISONING PREVENTION IN CERTAIN RESIDENTIAL STRUCTURES Methods and Standards for Lead-Paint...) The HUD Guidelines for the Evaluation and Control of Lead-Based Paint Hazards in Housing (Guidelines...

  13. 24 CFR 35.1310 - References.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development LEAD-BASED PAINT POISONING PREVENTION IN CERTAIN RESIDENTIAL STRUCTURES Methods and Standards for Lead-Paint...) The HUD Guidelines for the Evaluation and Control of Lead-Based Paint Hazards in Housing (Guidelines...

  14. 24 CFR 35.1310 - References.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development LEAD-BASED PAINT POISONING PREVENTION IN CERTAIN RESIDENTIAL STRUCTURES Methods and Standards for Lead-Paint...) The HUD Guidelines for the Evaluation and Control of Lead-Based Paint Hazards in Housing (Guidelines...

  15. PCA-based groupwise image registration for quantitative MRI.

    PubMed

    Huizinga, W; Poot, D H J; Guyader, J-M; Klaassen, R; Coolen, B F; van Kranenburg, M; van Geuns, R J M; Uitterdijk, A; Polfliet, M; Vandemeulebroucke, J; Leemans, A; Niessen, W J; Klein, S

    2016-04-01

    Quantitative magnetic resonance imaging (qMRI) is a technique for estimating quantitative tissue properties, such as the T1 and T2 relaxation times, apparent diffusion coefficient (ADC), and various perfusion measures. This estimation is achieved by acquiring multiple images with different acquisition parameters (or at multiple time points after injection of a contrast agent) and by fitting a qMRI signal model to the image intensities. Image registration is often necessary to compensate for misalignments due to subject motion and/or geometric distortions caused by the acquisition. However, large differences in image appearance make accurate image registration challenging. In this work, we propose a groupwise image registration method for compensating misalignment in qMRI. The groupwise formulation of the method eliminates the requirement of choosing a reference image, thus avoiding a registration bias. The method minimizes a cost function that is based on principal component analysis (PCA), exploiting the fact that intensity changes in qMRI can be described by a low-dimensional signal model, but not requiring knowledge on the specific acquisition model. The method was evaluated on 4D CT data of the lungs, and both real and synthetic images of five different qMRI applications: T1 mapping in a porcine heart, combined T1 and T2 mapping in carotid arteries, ADC mapping in the abdomen, diffusion tensor mapping in the brain, and dynamic contrast-enhanced mapping in the abdomen. Each application is based on a different acquisition model. The method is compared to a mutual information-based pairwise registration method and four other state-of-the-art groupwise registration methods. Registration accuracy is evaluated in terms of the precision of the estimated qMRI parameters, overlap of segmented structures, distance between corresponding landmarks, and smoothness of the deformation. In all qMRI applications the proposed method performed better than or equally well as competing methods, while avoiding the need to choose a reference image. It is also shown that the results of the conventional pairwise approach do depend on the choice of this reference image. We therefore conclude that our groupwise registration method with a similarity measure based on PCA is the preferred technique for compensating misalignments in qMRI. Copyright © 2015 Elsevier B.V. All rights reserved.
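
    The key ingredient is a groupwise dissimilarity built from PCA; the paper's actual metric is more elaborate, but a simplified variant that rewards concentration of variance in a few principal components (the model dimensionality k is an assumption) conveys the idea:

    ```python
    import numpy as np

    def pca_dissimilarity(images, k=2):
        """Groupwise dissimilarity in the spirit of the record: stack the
        (currently aligned) images as rows, form the image-by-image
        correlation matrix, and penalize eigenvalue mass outside the top
        k components a low-dimensional qMRI signal model would predict."""
        X = np.vstack([im.ravel() for im in images]).astype(float)
        X = (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)
        C = X @ X.T / X.shape[1]                 # (n_images, n_images)
        w = np.sort(np.linalg.eigvalsh(C))[::-1] # eigenvalues, descending
        return 1.0 - w[:k].sum() / w.sum()       # small when images 'agree'

    # During registration this value is minimized over the joint
    # (groupwise) transform parameters; no reference image is chosen.
    ```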

  16. Machine learning-based dual-energy CT parametric mapping

    NASA Astrophysics Data System (ADS)

    Su, Kuan-Hao; Kuo, Jung-Wen; Jordan, David W.; Van Hedent, Steven; Klahr, Paul; Wei, Zhouping; Helo, Rose Al; Liang, Fan; Qian, Pengjiang; Pereira, Gisele C.; Rassouli, Negin; Gilkeson, Robert C.; Traughber, Bryan J.; Cheng, Chee-Wai; Muzic, Raymond F., Jr.

    2018-06-01

    The aim is to develop and evaluate machine learning methods for generating quantitative parametric maps of effective atomic number (Zeff), relative electron density (ρ e), mean excitation energy (I x ), and relative stopping power (RSP) from clinical dual-energy CT data. The maps could be used for material identification and radiation dose calculation. Machine learning methods of historical centroid (HC), random forest (RF), and artificial neural networks (ANN) were used to learn the relationship between dual-energy CT input data and ideal output parametric maps calculated for phantoms from the known compositions of 13 tissue substitutes. After training and model selection steps, the machine learning predictors were used to generate parametric maps from independent phantom and patient input data. Precision and accuracy were evaluated using the ideal maps. This process was repeated for a range of exposure doses, and performance was compared to that of the clinically-used dual-energy, physics-based method which served as the reference. The machine learning methods generated more accurate and precise parametric maps than those obtained using the reference method. Their performance advantage was particularly evident when using data from the lowest exposure, one-fifth of a typical clinical abdomen CT acquisition. The RF method achieved the greatest accuracy. In comparison, the ANN method was only 1% less accurate but had much better computational efficiency than RF, being able to produce parametric maps in 15 s. Machine learning methods outperformed the reference method in terms of accuracy and noise tolerance when generating parametric maps, encouraging further exploration of the techniques. Among the methods we evaluated, ANN is the most suitable for clinical use due to its combination of accuracy, excellent low-noise performance, and computational efficiency.

  17. Machine learning-based dual-energy CT parametric mapping.

    PubMed

    Su, Kuan-Hao; Kuo, Jung-Wen; Jordan, David W; Van Hedent, Steven; Klahr, Paul; Wei, Zhouping; Al Helo, Rose; Liang, Fan; Qian, Pengjiang; Pereira, Gisele C; Rassouli, Negin; Gilkeson, Robert C; Traughber, Bryan J; Cheng, Chee-Wai; Muzic, Raymond F

    2018-06-08

    The aim is to develop and evaluate machine learning methods for generating quantitative parametric maps of effective atomic number (Z eff ), relative electron density (ρ e ), mean excitation energy (I x ), and relative stopping power (RSP) from clinical dual-energy CT data. The maps could be used for material identification and radiation dose calculation. Machine learning methods of historical centroid (HC), random forest (RF), and artificial neural networks (ANN) were used to learn the relationship between dual-energy CT input data and ideal output parametric maps calculated for phantoms from the known compositions of 13 tissue substitutes. After training and model selection steps, the machine learning predictors were used to generate parametric maps from independent phantom and patient input data. Precision and accuracy were evaluated using the ideal maps. This process was repeated for a range of exposure doses, and performance was compared to that of the clinically-used dual-energy, physics-based method which served as the reference. The machine learning methods generated more accurate and precise parametric maps than those obtained using the reference method. Their performance advantage was particularly evident when using data from the lowest exposure, one-fifth of a typical clinical abdomen CT acquisition. The RF method achieved the greatest accuracy. In comparison, the ANN method was only 1% less accurate but had much better computational efficiency than RF, being able to produce parametric maps in 15 s. Machine learning methods outperformed the reference method in terms of accuracy and noise tolerance when generating parametric maps, encouraging further exploration of the techniques. Among the methods we evaluated, ANN is the most suitable for clinical use due to its combination of accuracy, excellent low-noise performance, and computational efficiency.

  18. Quantitative analysis of MRI-guided attenuation correction techniques in time-of-flight brain PET/MRI.

    PubMed

    Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib

    2016-04-15

    In quantitative PET/MR imaging, attenuation correction (AC) of PET data is markedly challenged by the need to derive accurate attenuation maps from MR images. A number of strategies have been developed for MRI-guided attenuation correction with different degrees of success. In this work, we compare the quantitative performance of three generic AC methods, including standard 3-class MR segmentation-based, advanced atlas-registration-based, and emission-based approaches in the context of brain time-of-flight (TOF) PET/MRI. Fourteen patients referred for diagnostic MRI and (18)F-FDG PET/CT brain scans were included in this comparative study. For each study, PET images were reconstructed using four different attenuation maps derived from CT-based AC (CTAC) serving as reference, standard 3-class MR segmentation, atlas registration, and emission-based AC methods. To generate 3-class attenuation maps, T1-weighted MRI images were segmented into background air, fat, and soft-tissue classes, followed by assignment of constant linear attenuation coefficients of 0, 0.0864, and 0.0975 cm(-1) to each class, respectively. A robust atlas-registration-based AC method was developed for pseudo-CT generation using local weighted fusion of atlases based on their morphological similarity to target MR images. Our recently proposed MRI-guided maximum likelihood reconstruction of activity and attenuation (MLAA) algorithm was employed to estimate the attenuation map from TOF emission data. The performance of the different AC algorithms in terms of prediction of bones and quantification of PET tracer uptake was objectively evaluated with respect to reference CTAC maps and CTAC-PET images. Qualitative evaluation showed that the MLAA-AC method could sparsely estimate bones and accurately differentiate them from air cavities. It was found that the atlas-AC method can accurately predict bones with variable errors in defining air cavities. Quantitative assessment of bone extraction accuracy based on the Dice similarity coefficient (DSC) showed that MLAA-AC and atlas-AC resulted in DSC mean values of 0.79 and 0.92, respectively, in all patients. The MLAA-AC and atlas-AC methods predicted mean linear attenuation coefficients of 0.107 and 0.134 cm(-1), respectively, for the skull, compared to the reference CTAC mean value of 0.138 cm(-1). The evaluation of the relative change in tracer uptake within 32 distinct regions of the brain with respect to CTAC PET images showed that the 3-class MRAC, MLAA-AC, and atlas-AC methods resulted in quantification errors of -16.2 ± 3.6%, -13.3 ± 3.3%, and 1.0 ± 3.4%, respectively. Linear regression and Bland-Altman concordance plots showed that both the 3-class MRAC and MLAA-AC methods result in a significant systematic bias in PET tracer uptake, while the atlas-AC method results in a negligible bias. The standard 3-class MRAC method significantly underestimated cerebral PET tracer uptake. While current state-of-the-art MLAA-AC methods look promising, they were unable to noticeably reduce quantification errors in the context of brain imaging. Conversely, the proposed atlas-AC method provided the most accurate attenuation maps, and thus the lowest quantification bias. Copyright © 2016 Elsevier Inc. All rights reserved.
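
    The bone-extraction figure of merit used here, the Dice similarity coefficient, is simple to compute; a minimal sketch:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary masks, e.g.
        bone segmented from a pseudo-CT vs. the reference CT."""
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    # DSC = 1 for perfect overlap; the record reports mean values of
    # 0.79 (MLAA-AC) and 0.92 (atlas-AC) for bone.
    ```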

  19. Development and evaluation of thin-layer chromatography-digital image-based analysis for the quantitation of the botanical pesticide azadirachtin in agricultural matrixes and commercial formulations: comparison with ELISA.

    PubMed

    Tanuja, Penmatsa; Venugopal, Namburi; Sashidhar, Rao Beedu

    2007-01-01

    A simple thin-layer chromatography-digital image-based analytical method has been developed for the quantitation of the botanical pesticide azadirachtin. The method was validated by analyzing azadirachtin in spiked food matrixes and processed commercial pesticide formulations, using acidified vanillin reagent as a post-chromatographic derivatizing agent. The separated azadirachtin was clearly identified as a green spot. The Rf value was found to be 0.55, identical to that of the reference standard. A standard calibration plot was established using a reference standard, based on linear regression analysis [r2 = 0.996; y = 371.43 + (634.82)x]. The sensitivity of the method was found to be 0.875 microg azadirachtin. Spiking studies conducted at the 1 ppm (microg/g) level in various agricultural matrixes, such as brinjal, tomato, coffee, and cotton seeds, showed recoveries of azadirachtin in the range of 67-92%. The azadirachtin content of commercial neem formulations analyzed by the method was in the range of 190-1825 ppm (microg/mL). Further, the present method was compared with an immunoanalytical method, enzyme-linked immunosorbent assay (ELISA), developed earlier in our laboratory. Statistical comparison of the two methods using Fisher's F-test indicated no significant difference in variance, suggesting that the methods are comparable.
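
    With the reported line y = 371.43 + (634.82)x, converting a measured spot response back into an amount is one-step inverse prediction; a sketch (we assume x is in micrograms, as the stated 0.875 microg sensitivity suggests):

    ```python
    def azadirachtin_amount(response, intercept=371.43, slope=634.82):
        """Invert the reported calibration line y = intercept + slope*x
        to estimate azadirachtin (assumed micrograms) from the
        densitometric spot response y."""
        return (response - intercept) / slope

    # e.g. a spot response of 1006 -> about 1.0 microgram on-plate
    print(round(azadirachtin_amount(1006.0), 2))
    ```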

  20. Doppler ultrasound-based measurement of tendon velocity and displacement for application toward detecting user-intended motion.

    PubMed

    Stegman, Kelly J; Park, Edward J; Dechev, Nikolai

    2012-07-01

    The motivation of this research is to non-invasively monitor the displacement and velocity of the wrist tendons for the purpose of controlling a prosthetic device. This feasibility study aims to determine whether the proposed Doppler ultrasound technique can accurately estimate a tendon's instantaneous velocity and displacement. The study is conducted as a tendon-mimicking experiment using two different tendon-mimicking materials, a commercial ultrasound scanner, and a reference linear-motion-stage set-up. Audio output signals are acquired from the ultrasound scanner and processed with our proposed Fourier technique to obtain velocity and displacement estimates. We then compare our estimates to the external reference system, and also to the ultrasound scanner's own estimates based on its proprietary software. The proposed tendon-motion estimation method is shown to be repeatable, effective, and accurate in comparison with the external reference system, and is generally more accurate than the scanner's own estimates. Following this feasibility study, future testing will include cadaver-based studies of the human arm tendon anatomy, and later live human test subjects, to further refine the proposed method for the novel purpose of detecting user-intended tendon motion for controlling wearable prosthetic devices.
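
    The underlying conversion is the classic Doppler equation, with displacement following by integration; a sketch in which the carrier frequency, sound speed, and insonation angle are illustrative assumptions, not the study's settings:

    ```python
    import numpy as np

    def doppler_velocity(f_shift_hz, f0_hz=7.5e6, c=1540.0, angle_deg=60.0):
        """Axial-motion estimate from the Doppler equation
        v = c * fd / (2 * f0 * cos(theta)); c is the nominal soft-tissue
        sound speed, theta the beam-to-motion angle."""
        return c * f_shift_hz / (2.0 * f0_hz * np.cos(np.radians(angle_deg)))

    def displacement(velocities, dt):
        """Displacement by cumulative integration of velocity samples."""
        return np.cumsum(velocities) * dt

    print(doppler_velocity(500.0))   # ~0.10 m/s for a 500 Hz shift
    ```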

  1. Fibrinolysis standards: a review of the current status.

    PubMed

    Thelwell, C

    2010-07-01

    Biological standards are used to calibrate measurements of components of the fibrinolytic system, either for assigning potency values to therapeutic products or for determining levels in human plasma as an indicator of thrombotic risk. Traditionally, WHO International Standards are calibrated in International Units based on consensus values from collaborative studies. The International Unit is defined by the response activity of a given amount of the standard in a bioassay, independent of the method used. Assay validity rests on the assumption that both the standard and the test preparation contain the same analyte and that the response in an assay is a true function of this analyte. This principle is reflected in the diversity of source materials used to prepare fibrinolysis standards, which has depended on the contemporary preparations they were employed to measure. With advancing recombinant technology and improved analytical techniques, a reference system based on reference materials and associated reference methods has been recommended for future fibrinolysis standards. Careful consideration and scientific judgement must, however, be applied when deciding on an approach to developing a new standard, with decisions based on the suitability of a standard to serve its purpose, and not just to satisfy a metrological ideal. 2010 The International Association for Biologicals. Published by Elsevier Ltd. All rights reserved.

  2. The role of perspective taking in how children connect reference frames when explaining astronomical phenomena

    NASA Astrophysics Data System (ADS)

    Plummer, Julia D.; Bower, Corinne A.; Liben, Lynn S.

    2016-02-01

    This study investigates the role of perspective-taking skills in how children explain spatially complex astronomical phenomena. Explaining many astronomical phenomena, especially those studied in elementary and middle school, requires shifting between an Earth-based description of the phenomena and a space-based reference frame. We studied 7- to 9-year-old children (N = 15) to (a) develop a method for capturing how children make connections between reference frames and to (b) explore connections between perspective-taking skill and the nature of children's explanations. Children's explanations for the apparent motion of the Sun and stars and for seasonal changes in constellations were coded for accuracy of explanation, connection between frames of reference, and use of gesture. Children with higher spatial perspective-taking skills made more explicit connections between reference frames and used certain gesture-types more frequently, although this pattern was evident for only some phenomena. Findings suggest that children - particularly those with lower perspective-taking skills - may need additional support in learning to explicitly connect reference frames in astronomy. Understanding spatial thinking among children who successfully made explicit connections between reference frames in their explanations could be a starting point for future instruction in this domain.

  3. MTPA control of mechanical sensorless IPMSM based on adaptive nonlinear control.

    PubMed

    Najjar-Khodabakhsh, Abbas; Soltani, Jafar

    2016-03-01

    In this paper, an adaptive nonlinear control scheme is proposed for implementing the maximum torque per ampere (MTPA) control strategy for an interior permanent magnet synchronous motor (IPMSM) drive. The control scheme is developed in the rotor d-q axis reference frame using the adaptive input-output state feedback linearization (AIOFL) method. The stability of the drive-system control is supported by Lyapunov theory. The motor inductances are estimated online by an estimation law obtained through AIOFL, and the estimation errors of these parameters are proved to converge asymptotically to zero. Based on minimizing the motor current amplitude, the MTPA control strategy is performed using a nonlinear optimization technique while considering the online reference torque. The motor reference torque is generated by a conventional rotor-speed PI controller. In performing the MTPA control strategy, the online-generated motor d-q reference currents are used in the AIOFL controller to obtain the SV-PWM reference voltages and the online estimates of the motor d-q inductances. In addition, the stator resistance is estimated online using a conventional PI controller, and the rotor position is detected using the online estimates of the stator flux and of the motor q-axis inductance. Simulation and experimental results prove the effectiveness and capability of the proposed control method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Quantification of the predominant monomeric catechins in baking chocolate standard reference material by LC/APCI-MS.

    PubMed

    Nelson, Bryant C; Sharpless, Katherine E

    2003-01-29

    Catechins are polyphenolic plant compounds (flavonoids) that may offer significant health benefits to humans. These benefits stem largely from their anticarcinogenic, antioxidant, and antimutagenic properties. Recent epidemiological studies suggest that the consumption of flavonoid-containing foods is associated with reduced risk of cardiovascular disease. Chocolate is a natural cocoa bean-based product that reportedly contains high levels of monomeric, oligomeric, and polymeric catechins. We have applied solid-liquid extraction and liquid chromatography coupled with atmospheric pressure chemical ionization-mass spectrometry to the identification and determination of the predominant monomeric catechins, (+)-catechin and (-)-epicatechin, in a baking chocolate Standard Reference Material (NIST Standard Reference Material 2384). (+)-Catechin and (-)-epicatechin are detected and quantified in chocolate extracts on the basis of selected-ion monitoring of their protonated [M + H](+) molecular ions. Tryptophan methyl ester is used as an internal standard. The developed method has the capacity to accurately quantify as little as 0.1 microg/mL (0.01 mg of catechin/g of chocolate) of either catechin in chocolate extracts, and the method has additionally been used to certify (+)-catechin and (-)-epicatechin levels in the baking chocolate Standard Reference Material. This is the first reported use of liquid chromatography/mass spectrometry for the quantitative determination of monomeric catechins in chocolate and the only report certifying monomeric catechin levels in a food-based Standard Reference Material.

  5. Floquet wave ultrasonic method for determination of single ply moduli in multidirectional composites.

    PubMed

    Wang, L; Rokhlin, S I

    2002-09-01

    An inversion method based on Floquet wave velocity in a periodic medium is introduced to determine the single-ply elastic moduli of a multi-ply composite. The stability of this algorithm is demonstrated by numerical simulation. The applicability of the plane wave approximation to the velocity measurement in the double-through-transmission self-reference method is analyzed using a time-domain beam model. The analysis shows that the finite width of the transmitter affects only the amplitudes of the signals and has almost no effect on the time delay. Using this method, the ply moduli for a multi-ply composite have been experimentally determined. While the paper focuses on elastic constant reconstruction from phase velocity measurements by the self-reference double-through-transmission method, the reconstruction methodology is also applicable to data collected by other methods.

  6. [Investigation on pattern of quality control for Chinese materia medica based on famous-region drug and bioassay--the work reference].

    PubMed

    Yan, Dan; Xiao, Xiaohe

    2011-05-01

    Selection and standardization of the work reference are the technical issues to be faced in the bioassay of Chinese materia medica. Taking the bioassay of Coptis chinensis as an example, the preparation of the famous-region drug extract is explained from the aspects of origin identification, routine examination, component analysis, and bioassay. The common technologies are extracted, and the selection and standardization procedures of the work reference for the bioassay of Chinese materia medica are drawn up, so as to provide technical support for constructing a new mode and method of quality control of Chinese materia medica based on famous-region drugs and bioassay.

  7. Optimal consistency in microRNA expression analysis using reference-gene-based normalization.

    PubMed

    Wang, Xi; Gardiner, Erin J; Cairns, Murray J

    2015-05-01

    Normalization of high-throughput molecular expression profiles underpins differential expression analysis between samples of different phenotypes or biological conditions, and facilitates comparison between experimental batches. While the same general principles apply to microRNA (miRNA) normalization, there is mounting evidence that global shifts in their expression patterns occur in specific circumstances, which poses a challenge for normalizing miRNA expression data. As an alternative to global normalization, which has the propensity to flatten large trends, normalization against constitutively expressed reference genes offers an advantage because reference genes are relatively independent of such global shifts. Here we investigated the performance of reference-gene-based (RGB) normalization for differential miRNA expression analysis of microarray expression data, and compared the results with other normalization methods, including: quantile, variance stabilization, robust spline, simple scaling, rank invariant, and Loess regression. The comparative analyses were executed using miRNA expression in tissue samples derived from subjects with schizophrenia and non-psychiatric controls. We proposed a consistency criterion for evaluating methods by examining the overlap of differentially expressed miRNAs detected using different partitions of the whole data. Based on this criterion, we found that RGB normalization generally outperformed global normalization methods. We therefore recommend RGB normalization for miRNA expression data sets, and believe that it will yield a more consistent and useful readout of differentially expressed miRNAs, particularly in biological conditions characterized by large shifts in miRNA expression.
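
    A minimal sketch of the RGB idea on log-scale data, assuming a known set of reference miRNAs (the expression matrix and reference rows below are simulated, not from the study): each sample is shifted so that its reference-gene level matches the cohort mean, leaving genuine global shifts in the remaining miRNAs intact.

    ```python
    # Sketch of reference-gene-based normalization on log2 microarray data.
    import numpy as np

    rng = np.random.default_rng(0)
    log2_expr = rng.normal(8, 2, size=(200, 12))   # 200 miRNAs x 12 samples
    ref_idx = [5, 17, 42]                          # rows of putative reference genes

    ref_means = log2_expr[ref_idx].mean(axis=0)    # per-sample reference level
    offset = ref_means - ref_means.mean()          # deviation from the grand mean
    normalized = log2_expr - offset                # subtract per-sample offset

    # After normalization the reference genes are near-constant across samples,
    # while large cohort-wide trends in the other miRNAs are preserved.
    ```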

  8. Mach-zehnder based optical marker/comb generator for streak camera calibration

    DOEpatents

    Miller, Edward Kirk

    2015-03-03

    This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High-speed recording devices are configured to record image or other data defining a high-speed event. To calibrate and establish a time reference, marker indicia serve as timing pulses and comb indicia provide a constant-frequency train of optical pulses, both imaged on a streak camera for accurate time-based calibration and time reference. The system includes a camera, an optic signal generator which provides an optic signal to an M-Z modulator, and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator, which modulates the reference signal to a higher-frequency optical signal that is output through a fiber-coupled link to the streak camera.

  9. National geodetic satellite program, part 2

    NASA Technical Reports Server (NTRS)

    Schmid, H.

    1977-01-01

    Satellite geodesy and the creation of worldwide geodetic reference systems are discussed. The geometric description of the surface and the analytical description of the gravity field of the Earth by means of worldwide reference systems, with the aid of satellite geodesy, are presented. A triangulation method based on photogrammetric principles is described in detail. Results are derived in the form of three-dimensional models. These mathematical models represent the frame of reference into which one can fit the existing geodetic results from the various local datums, as well as future measurements.

  10. Nationwide Multicenter Reference Interval Study for 28 Common Biochemical Analytes in China.

    PubMed

    Xia, Liangyu; Chen, Ming; Liu, Min; Tao, Zhihua; Li, Shijun; Wang, Liang; Cheng, Xinqi; Qin, Xuzhen; Han, Jianhua; Li, Pengchang; Hou, Li'an; Yu, Songlin; Ichihara, Kiyoshi; Qiu, Ling

    2016-03-01

    A nationwide multicenter study was conducted in China to explore sources of variation of reference values and establish reference intervals for 28 common biochemical analytes, as part of the International Federation of Clinical Chemistry and Laboratory Medicine, Committee on Reference Intervals and Decision Limits (IFCC/C-RIDL) global study on reference values. A total of 3148 apparently healthy volunteers were recruited in 6 cities covering a wide area in China. Blood samples were tested in 2 central laboratories using Beckman Coulter AU5800 chemistry analyzers. Certified reference materials and a value-assigned serum panel were used for standardization of test results. Multiple regression analysis was performed to explore sources of variation. The need for partition of reference intervals was evaluated based on 3-level nested ANOVA. After secondary exclusion using the latent abnormal values exclusion method, reference intervals were derived by a parametric method using the modified Box-Cox formula. Test results of 20 analytes were made traceable to reference measurement procedures. By the ANOVA, significant sex-related and age-related differences were observed in 12 and 12 analytes, respectively. A small regional difference was observed in the results for albumin, glucose, and sodium. Multiple regression analysis revealed BMI-related changes in the results of 9 analytes for men and 6 for women. Reference intervals of 28 analytes were computed, with 17 analytes partitioned by sex and/or age. In conclusion, reference intervals of 28 common chemistry analytes applicable to the Chinese Han population were established using the latest methodology. Reference intervals of the 20 analytes traceable to reference measurement procedures can be used as common reference intervals, whereas the others can be used as assay system-specific reference intervals in China.
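
    A hedged sketch of the parametric derivation step named above, on simulated data (the paper's modified Box-Cox formula and the latent abnormal values exclusion step are not reproduced here): transform toward normality, take the central 95% range, and back-transform.

    ```python
    # Sketch of a parametric reference interval via a Box-Cox transform.
    import numpy as np
    from scipy import stats, special

    rng = np.random.default_rng(1)
    values = rng.lognormal(mean=3.0, sigma=0.25, size=500)  # mock analyte results

    transformed, lmbda = stats.boxcox(values)               # ML estimate of lambda
    lo_t = transformed.mean() - 1.96 * transformed.std(ddof=1)
    hi_t = transformed.mean() + 1.96 * transformed.std(ddof=1)

    lo, hi = special.inv_boxcox(np.array([lo_t, hi_t]), lmbda)
    print(f"reference interval: {lo:.1f} - {hi:.1f}")
    ```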

  11. Nationwide Multicenter Reference Interval Study for 28 Common Biochemical Analytes in China

    PubMed Central

    Xia, Liangyu; Chen, Ming; Liu, Min; Tao, Zhihua; Li, Shijun; Wang, Liang; Cheng, Xinqi; Qin, Xuzhen; Han, Jianhua; Li, Pengchang; Hou, Li’an; Yu, Songlin; Ichihara, Kiyoshi; Qiu, Ling

    2016-01-01

    Abstract A nationwide multicenter study was conducted in China to explore sources of variation of reference values and establish reference intervals for 28 common biochemical analytes, as part of the International Federation of Clinical Chemistry and Laboratory Medicine, Committee on Reference Intervals and Decision Limits (IFCC/C-RIDL) global study on reference values. A total of 3148 apparently healthy volunteers were recruited in 6 cities covering a wide area in China. Blood samples were tested in 2 central laboratories using Beckman Coulter AU5800 chemistry analyzers. Certified reference materials and a value-assigned serum panel were used for standardization of test results. Multiple regression analysis was performed to explore sources of variation. The need for partition of reference intervals was evaluated based on 3-level nested ANOVA. After secondary exclusion using the latent abnormal values exclusion method, reference intervals were derived by a parametric method using the modified Box–Cox formula. Test results of 20 analytes were made traceable to reference measurement procedures. By the ANOVA, significant sex-related and age-related differences were observed in 12 and 12 analytes, respectively. A small regional difference was observed in the results for albumin, glucose, and sodium. Multiple regression analysis revealed BMI-related changes in the results of 9 analytes for men and 6 for women. Reference intervals of 28 analytes were computed, with 17 analytes partitioned by sex and/or age. In conclusion, reference intervals of 28 common chemistry analytes applicable to the Chinese Han population were established using the latest methodology. Reference intervals of the 20 analytes traceable to reference measurement procedures can be used as common reference intervals, whereas the others can be used as assay system-specific reference intervals in China. PMID:26945390

  12. [Development of the certified reference material of mercury in lyophilized human urine].

    PubMed

    Zhao, Wei; Zhang, Fu-gang; DU, Hui-fang; Pan, Ya-juan; Yan, Hui-fang

    2011-02-01

    To develop a certified reference material of mercury in lyophilized human urine, human urine samples from districts with normal mercury levels were filtered, homogenized, dispensed, lyophilized, and radio-sterilized. Homogeneity testing, stability inspection, and certification were conducted using an atomic fluorescence spectrophotometric method. The physical and chemical stability of the certified reference material was assessed over 18 months. The certified values are based on analyses performed by three independent laboratories and are as follows: low level, (35.6 ± 2.1) µg/L; high level, (50.5 ± 3.0) µg/L. The certified reference material of mercury in lyophilized human urine developed in this research meets the national certified reference material requirements and can be used for quality control.

  13. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model.

    PubMed

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., the Ogata-Banks solution) is found to be most representative of the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where part of the earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimation of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management, or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.
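
    For reference, a minimal implementation of the Ogata-Banks solution named above; the parameter values and units are illustrative, not the site's:

    ```python
    # One-dimensional Ogata-Banks solution for steady advection-dispersion
    # with a constant inlet concentration c0.
    import numpy as np
    from scipy.special import erfc

    def ogata_banks(x, t, v, D, c0):
        """Concentration at distance x and time t for velocity v, dispersion D."""
        a = erfc((x - v * t) / (2.0 * np.sqrt(D * t)))
        b = np.exp(v * x / D) * erfc((x + v * t) / (2.0 * np.sqrt(D * t)))
        return 0.5 * c0 * (a + b)

    t = np.linspace(1.0, 200.0, 100)          # time grid (assumed units: days)
    c = ogata_banks(x=10.0, t=t, v=0.5, D=2.0, c0=1.0)
    ```

    In the paper's framework, a curve of this form plays the role of the physically-based data model that the ensemble of data-driven estimates is trained against.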

  14. Temperature compensated and self-calibrated current sensor using reference magnetic field

    DOEpatents

    Yakymyshyn, Christopher Paul; Brubaker, Michael Allen; Yakymyshyn, Pamela Jane

    2007-10-09

    A method is described to provide temperature compensation and self-calibration of a current sensor based on a plurality of magnetic field sensors positioned around a current carrying conductor. A reference magnetic field generated within the current sensor housing is detected by the magnetic field sensors and is used to correct variations in the output signal due to temperature variations and aging.

  15. Temperature compensated current sensor using reference magnetic field

    DOEpatents

    Yakymyshyn, Christopher Paul; Brubaker, Michael Allen; Yakymyshyn, Pamela Jane

    2007-10-09

    A method is described to provide temperature compensation and self-calibration of a current sensor based on a plurality of magnetic field sensors positioned around a current carrying conductor. A reference magnetic field generated within the current sensor housing is detected by a separate but identical magnetic field sensor and is used to correct variations in the output signal due to temperature variations and aging.
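
    Both patent abstracts describe the same correction principle without formulas, so the following is a hypothetical sketch of one plausible reading: the sensor's response to a known injected reference field yields a gain factor that rescales the conductor-field reading. The reference amplitude, drift, and readings are all invented.

    ```python
    # Hypothetical sketch of reference-field self-calibration.
    def corrected_field(measured_field, measured_ref, expected_ref=1.0e-3):
        """Rescale a drifted sensor reading using the reference-field response."""
        gain = expected_ref / measured_ref   # compensates temperature/aging drift
        return measured_field * gain

    # A sensor whose gain has drifted 5% low reads the 1 mT reference as 0.95 mT;
    # the same correction factor restores the conductor-field estimate.
    print(corrected_field(measured_field=0.0475, measured_ref=0.95e-3))  # ~0.05
    ```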

  16. Report on New Methods for Representing and Interacting with Qualitative Geographic Information, Stage 2: Task Group 1 Core Re-engineering and Place-based Use Case

    DTIC Science & Technology

    2013-06-30

    "Void," "Behind you," "Quantum Leaping," "The space between spaces": when a geographic reference is provided, we identified several categories of challenges to... consciously opt-in to include geolocation in times of crisis. In addition, 10% of tweets in our sample included place references in the text...

  17. Effect of defuzzification method of fuzzy modeling

    NASA Astrophysics Data System (ADS)

    Lapohos, Tibor; Buchal, Ralph O.

    1994-10-01

    Imprecision can arise in fuzzy relational modeling as a result of fuzzification, inference, and defuzzification. These three sources of imprecision are difficult to separate. We have determined through numerical studies that an important source of imprecision is the defuzzification stage, and this imprecision adversely affects the quality of the model output. The most widely used defuzzification algorithm is known as 'center of area' (COA) or 'center of gravity' (COG). In this paper, we show that this algorithm not only maps the near-limit values of the variables improperly but also introduces errors for middle-domain values of the same variables. Furthermore, the behavior of this algorithm is a function of the shape of the reference sets. We compare the COA method to the weighted average of cluster centers (WACC) procedure, in which the transformation is carried out based on the values of the cluster centers belonging to each of the reference membership functions instead of using the functions themselves. We show that this procedure is more effective and computationally much faster than the COA. The method is tested for a family of reference sets satisfying certain constraints: for any support value the sum of the reference membership function values equals one, and the peak values of the two marginal membership functions project to the boundaries of the universe of discourse. For all member sets of this family of reference sets, the defuzzification errors do not grow as the linguistic variables tend to their extreme values. In addition, the more reference sets are defined for a certain linguistic variable, the smaller the average defuzzification error becomes. In the case of triangle-shaped reference sets there is no defuzzification error at all. Finally, an alternative solution is provided that improves the performance of the COA method.
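
    A small numerical sketch of the two defuzzifiers being compared, under assumptions consistent with the set family described above (triangular sets whose peaks tile the universe and sum to one at every support value); the inferred membership values are arbitrary:

    ```python
    # COA (centroid of the aggregated output set) vs. WACC (weighted average
    # of cluster centers) defuzzification.
    import numpy as np

    x = np.linspace(0.0, 1.0, 1001)                 # universe of discourse
    centers = np.array([0.0, 0.25, 0.5, 0.75, 1.0]) # reference-set peaks

    def tri(x, c, w=0.25):
        """Triangular membership with peak c; marginal peaks sit on the
        boundaries of the universe, matching the paper's set family."""
        return np.clip(1.0 - np.abs(x - c) / w, 0.0, 1.0)

    mu = np.array([0.0, 0.2, 0.8, 0.0, 0.0])        # example inferred memberships

    # COA: centroid of the clipped, aggregated output fuzzy set.
    agg = np.max([np.minimum(m, tri(x, c)) for m, c in zip(mu, centers)], axis=0)
    coa = (agg * x).sum() / agg.sum()               # centroid on a uniform grid

    # WACC: weighted average of the cluster centers themselves.
    wacc = np.dot(mu, centers) / mu.sum()

    print(f"COA = {coa:.4f}, WACC = {wacc:.4f}")
    ```

    WACC skips building and integrating the aggregated set entirely, which is the source of the speed advantage the paper reports.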

  18. Inter-University Collaboration for Online Teaching Innovation: An Emerging Model

    ERIC Educational Resources Information Center

    Nerlich, Andrea Perkins; Soldner, James L.; Millington, Michael J.

    2012-01-01

    Distance education is constantly evolving and improving. To stay current, effective online instructors must utilize the most innovative, evidence-based teaching methods available to promote student learning and satisfaction in their courses. One emerging teaching method, referred to as blended online learning (BOL), involves collaborative…

  19. The Analysis of Seawater: A Laboratory-Centered Learning Project in General Chemistry.

    ERIC Educational Resources Information Center

    Selco, Jodye I.; Roberts, Julian L., Jr.; Wacks, Daniel B.

    2003-01-01

    Describes a sea-water analysis project that introduces qualitative and quantitative analysis methods and laboratory methods such as gravimetric analysis, potentiometric titration, ion-selective electrodes, and the use of calibration curves. Uses a problem-based cooperative teaching approach. (Contains 24 references.) (YDS)

  20. Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.

    PubMed

    Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B

    2015-09-01

    Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image into different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal CXRs and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, normalization significantly increased the average Jaccard overlap over both reference methods (0.72±0.30 and 0.87±0.11). The second experiment was aimed at segmentation of the clavicles; the reference methods had average Jaccard indices of 0.57±0.26 and 0.53±0.26, which normalization significantly increased. The third experiment was detection of tuberculosis-related abnormalities in the lung fields, where normalization significantly increased the average area under the receiver operating characteristic curve over the reference methods (0.72±0.14 and 0.79±0.06). We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
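
    A minimal sketch of the band-energy idea, applied globally rather than in the paper's localized, lung-field-restricted and iterative form; the band scales and reference energies are invented:

    ```python
    # Band-energy normalization: split into difference-of-Gaussian bands,
    # scale each band's energy to a reference value, then reconstruct.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def normalize_bands(img, sigmas=(1, 2, 4, 8), ref_energy=(1.0, 1.0, 1.0, 1.0)):
        bands, residual = [], img.astype(float)
        for s in sigmas:
            smooth = gaussian_filter(residual, s)
            bands.append(residual - smooth)         # band-pass component
            residual = smooth                       # low-pass remainder
        out = residual
        for band, e_ref in zip(bands, ref_energy):
            energy = band.std() + 1e-12             # RMS energy of the band
            out = out + band * (e_ref / energy)     # rescale band, then recombine
        return out

    cxr = np.random.rand(256, 256)                  # stand-in for a radiograph
    normalized = normalize_bands(cxr)
    ```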

  1. 77 FR 46447 - Proposed Fair Market Rents for the Housing Choice Voucher Program and Moderate Rehabilitation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-03

    ... ratios is very similar to the method used when the bedroom ratios were based on 2000 decennial census... section. There are two methods for submitting public comments. 1. Submission of Comments by Mail. Comments... submitted through one of the two methods specified above. Again, all submissions must refer to the docket...

  2. Job Search Methods: Consequences for Gender-based Earnings Inequality.

    ERIC Educational Resources Information Center

    Huffman, Matt L.; Torres, Lisa

    2001-01-01

    Data from adults in Atlanta, Boston, and Los Angeles (n=1,942) who searched for work using formal (ads, agencies) or informal (networks) methods indicated that type of method used did not contribute to the gender gap in earnings. Results do not support formal job search as a way to reduce gender inequality. (Contains 55 references.) (SK)

  3. EVALUATION OF IODINE BASED IMPINGER SOLUTIONS FOR THE EFFICIENT CAPTURE OF HG USING DIRECT INJECTION NEBULIZATION INDUCTIVELY COUPLED PLASMA MASS SPECTROMETRY (DIN-ICP/MS) ANALYSIS

    EPA Science Inventory

    Currently there are no EPA reference sampling methods that have been promulgated for measuring stack emissions of Hg from coal combustion sources; however, EPA Method 29 is most commonly applied. The draft ASTM Ontario Hydro Method for measuring oxidized, elemental, particulate-b...

  4. Three Methods of Estimating a Model of Group Effects: A Comparison with Reference to School Effect Studies.

    ERIC Educational Resources Information Center

    Igra, Amnon

    1980-01-01

    Three methods of estimating a model of school effects are compared: ordinary least squares; an approach based on the analysis of covariance; and a residualized input-output approach. Results are presented using a matrix algebra formulation, and advantages of the first two methods are considered. (Author/GK)

  5. Regional Lung Ventilation Analysis Using Temporally Resolved Magnetic Resonance Imaging.

    PubMed

    Kolb, Christoph; Wetscherek, Andreas; Buzan, Maria Teodora; Werner, René; Rank, Christopher M; Kachelrieß, Marc; Kreuter, Michael; Dinkel, Julien; Heußel, Claus Peter; Maier-Hein, Klaus

    We propose a computer-aided method for regional ventilation analysis and observation of lung diseases in temporally resolved magnetic resonance imaging (4D MRI). A shape model-based segmentation and registration workflow was used to create an atlas-derived reference system in which regional tissue motion can be quantified and multimodal image data can be compared regionally. Model-based temporal registration of the lung surfaces in 4D MRI data was compared with the registration of 4D computed tomography (CT) images. A ventilation analysis was performed on 4D MR images of patients with lung fibrosis; 4D MR ventilation maps were compared with corresponding diagnostic 3D CT images of the patients and 4D CT maps of subjects without impaired lung function (serving as reference). Comparison between the computed patient-specific 4D MR regional ventilation maps and diagnostic CT images shows good correlation in conspicuous regions. Comparison to 4D CT-derived ventilation maps supports the plausibility of the 4D MR maps. Dynamic MRI-based flow-volume loops and spirograms further visualize the free-breathing behavior. The proposed methods allow for 4D MR-based regional analysis of tissue dynamics and ventilation in spontaneous breathing and comparison of patient data. The proposed atlas-based reference coordinate system provides an automated manner of annotating and comparing multimodal lung image data.

  6. Assessing Graduate Attributes: Building a Criteria-Based Competency Model

    ERIC Educational Resources Information Center

    Ipperciel, Donald; ElAtia, Samira

    2014-01-01

    Graduate attributes (GAs) have become a necessary framework of reference for the 21st century competency-based model of higher education. However, the issue of evaluating and assessing GAs still remains unchartered territory. In this article, we present a criteria-based method of assessment that allows for an institution-wide comparison of the…

  7. A Qualitative Study of Technology-Based Training in Organizations that Hire Agriculture and Life Sciences Students

    ERIC Educational Resources Information Center

    Bedgood, Leslie; Murphrey, Theresa Pesl; Dooley, Kim E.

    2008-01-01

    Technological advances have created unlimited opportunities in education. Training and technology have merged to create new methods referred to as technology-based training. The purpose of this study was to identify organizations that hire agriculture and life sciences students for positions involving technology-based training and identify…

  8. Automated navigation assessment for earth survey sensors using island targets

    NASA Technical Reports Server (NTRS)

    Patt, Frederick S.; Woodward, Robert H.; Gregg, Watson W.

    1997-01-01

    An automated method has been developed for performing navigation assessment on satellite-based Earth sensor data. The method utilizes islands as targets which can be readily located in the sensor data and identified with reference locations. The essential elements are an algorithm for classifying the sensor data according to source, a reference catalog of island locations, and a robust pattern-matching algorithm for island identification. The algorithms were developed and tested for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), an ocean color sensor. This method will allow navigation error statistics to be automatically generated for large numbers of points, supporting analysis over large spatial and temporal ranges.

  9. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    PubMed

    Tanabe, Akifumi S; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archaeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: in one, query sequences were available in the corresponding reference sequence databases; in the other, they were not. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequence in a reference database (i.e., the BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus- or species-level identification for all the barcode loci examined. Therefore, we need to accelerate the registration of reference barcode sequences to apply high-throughput DNA barcoding to genus- or species-level identification in biodiversity research.
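
    A toy sketch of the 1-NN assignment idea discussed above: a query takes the taxon of the most similar reference sequence. A simple shared 8-mer fraction stands in for the BLAST top-hit search, and the sequences are fabricated.

    ```python
    # 1-NN taxonomic assignment with a k-mer similarity stand-in for BLAST.
    def kmers(seq, k=8):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def one_nn(query, reference_db, k=8):
        """Return the taxon of the most similar reference sequence."""
        q = kmers(query, k)
        score = lambda ref: len(q & kmers(ref, k)) / max(len(q), 1)
        best = max(reference_db, key=lambda taxon: score(reference_db[taxon]))
        return best, score(reference_db[best])

    reference_db = {                      # taxon -> barcode sequence (made up)
        "Taxon A": "ATGCGTACGTTAGCCTAGGCTAACGGTTACGATCGA",
        "Taxon B": "ATGCGAACGTTAGCCTTGGCTAACGGATACGTTCGA",
    }
    print(one_nn("ATGCGTACGTTAGCCTAGGCTAACGGTTACGTTCGA", reference_db))
    ```

    Note the failure mode the benchmark exposes: when the query's true taxon is absent from reference_db, this procedure still returns the nearest entry, which is exactly the misidentification behavior that motivates the QCauto method.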

  10. Two New Computational Methods for Universal DNA Barcoding: A Benchmark Using Barcode Sequences of Bacteria, Archaea, Animals, Fungi, and Land Plants

    PubMed Central

    Tanabe, Akifumi S.; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archaeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: in one, query sequences were available in the corresponding reference sequence databases; in the other, they were not. In the former situation, the commonly used “1-nearest-neighbor” (1-NN) method, which assigns the taxonomic information of the most similar sequence in a reference database (i.e., the BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus- or species-level identification for all the barcode loci examined. Therefore, we need to accelerate the registration of reference barcode sequences to apply high-throughput DNA barcoding to genus- or species-level identification in biodiversity research. PMID:24204702

  11. Estimation of low back moments from video analysis: a validation study.

    PubMed

    Coenen, Pieter; Kingma, Idsart; Boot, Cécile R L; Faber, Gert S; Xu, Xu; Bongers, Paulien M; van Dieën, Jaap H

    2011-09-02

    This study aimed to develop, compare, and validate two versions of a video analysis method for assessing low back moments during occupational lifting tasks, since epidemiological studies and ergonomic practice need relatively cheap and easily applicable methods for assessing low back loads. Ten healthy subjects participated in a protocol comprising 12 lifting conditions. Low back moments were assessed using two variants of a video analysis method and a lab-based reference method. Repeated measures ANOVAs showed no overall differences in peak moments between the two versions of the video analysis method and the reference method. However, two conditions showed a minor overestimation of the moments by one of the video analysis methods. Standard deviations were considerable, suggesting that errors in the video analysis were random. Furthermore, there was a small underestimation of the dynamic components and overestimation of the static components of the moments. Intraclass correlation coefficients for peak moments showed high correspondence (>0.85) of the video analyses with the reference method. It is concluded that, when a sufficient number of measurements can be taken, the video analysis method for assessment of low back loads during lifting tasks provides valid estimates of low back moments in ergonomic practice and epidemiological studies for lifts up to a moderate level of asymmetry. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. An Innovative Procedure for Calibration of Strapdown Electro-Optical Sensors Onboard Unmanned Air Vehicles

    PubMed Central

    Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio; Rispoli, Attilio

    2010-01-01

    This paper presents an innovative method for estimating the attitude of airborne electro-optical cameras with respect to the onboard autonomous navigation unit. The procedure is based on attitude measurements taken under static conditions by an inertial unit, together with carrier-phase differential Global Positioning System data, to obtain accurate camera position estimates in the aircraft body reference frame, while image analysis allows line-of-sight unit vectors in the camera-based reference frame to be computed. The method has been applied to the alignment of the visible and infrared cameras installed onboard the experimental aircraft of the Italian Aerospace Research Center and adopted for in-flight obstacle detection and collision avoidance. Results show an angular uncertainty on the order of 0.1° (rms). PMID:22315559
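
    The abstract does not spell out the estimation algorithm, so purely as an illustration, here is the standard SVD solution of Wahba's problem for recovering the rotation that aligns paired unit vectors observed in two frames; all vectors below are simulated.

    ```python
    # Illustrative (not the authors' exact procedure): camera-to-body rotation
    # from paired line-of-sight unit vectors via the SVD solution of Wahba's
    # problem.
    import numpy as np

    def wahba_svd(v_body, v_cam):
        """Rotation R such that v_body ~= R @ v_cam, from N paired unit vectors."""
        B = v_body.T @ v_cam                      # 3x3 attitude profile matrix
        U, _, Vt = np.linalg.svd(B)
        d = np.sign(np.linalg.det(U @ Vt))        # enforce a proper rotation
        return U @ np.diag([1.0, 1.0, d]) @ Vt

    rng = np.random.default_rng(2)
    v_cam = rng.normal(size=(10, 3))
    v_cam /= np.linalg.norm(v_cam, axis=1, keepdims=True)

    true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(true_R) < 0:                 # QR may return a reflection
        true_R = -true_R
    v_body = v_cam @ true_R.T                     # noise-free measurements

    print(np.allclose(wahba_svd(v_body, v_cam), true_R))  # True
    ```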

  13. Reference intervals for 24 laboratory parameters determined in 24-hour urine collections.

    PubMed

    Curcio, Raffaele; Stettler, Helen; Suter, Paolo M; Aksözen, Jasmin Barman; Saleh, Lanja; Spanaus, Katharina; Bochud, Murielle; Minder, Elisabeth; von Eckardstein, Arnold

    2016-01-01

    Reference intervals for many laboratory parameters determined in 24-h urine collections are either not publicly available or are based on small numbers, not sex-specific, or not drawn from a representative sample. Osmolality and concentrations or enzymatic activities of sodium, potassium, chloride, glucose, creatinine, citrate, cortisol, pancreatic α-amylase, total protein, albumin, transferrin, immunoglobulin G, α1-microglobulin, α2-macroglobulin, as well as porphyrins and their precursors (δ-aminolevulinic acid and porphobilinogen), were determined in 241 24-h urine samples from a population-based cohort of asymptomatic adults (121 men and 120 women). For 16 of these 24 parameters, creatinine-normalized ratios were calculated based on 24-h urine creatinine. The reference intervals for these parameters were calculated according to the CLSI C28-A3 statistical guidelines. In contrast to most published reference intervals, which are not stratified by sex, the reference intervals of 12 of the 24 laboratory parameters in 24-h urine collections, and of eight of the 16 parameters expressed as creatinine-normalized ratios, differed significantly between men and women. For six parameters calculated as 24-h urine excretion and four parameters calculated as creatinine-normalized ratios, no reference intervals had been published before. For some parameters we found significant and relevant deviations from previously reported reference intervals, most notably for 24-h urine cortisol in women. Ten 24-h urine parameters showed weak or moderate sex-specific correlations with age. By applying up-to-date analytical methods and clinical chemistry analyzers to 24-h urine collections from a large population-based cohort, we provide the most comprehensive set to date of sex-specific reference intervals calculated according to CLSI guidelines for parameters determined in 24-h urine collections.

  14. Individualized adjustments to reference phantom internal organ dosimetry—scaling factors given knowledge of patient external anatomy

    NASA Astrophysics Data System (ADS)

    Wayson, Michael B.; Bolch, Wesley E.

    2018-04-01

    Internal radiation dose estimates for diagnostic nuclear medicine procedures are typically calculated for a reference individual. As a result, there is uncertainty in the organ doses for patients who are not at the 50th percentile in either height or weight. This study aims to better personalize internal radiation dose estimates by modifying the dose estimates calculated for reference individuals based on easily obtainable morphometric characteristics of the patient. Phantoms of different sitting heights and waist circumferences were constructed based on computational reference phantoms for the newborn, 10 year-old, and adult. Monoenergetic photons and electrons were then simulated separately at 15 energies. Photon and electron specific absorbed fractions (SAFs) were computed for the newly constructed non-reference phantoms and compared to SAFs previously generated for the age-matched reference phantoms. Differences in SAFs were correlated to changes in sitting height and waist circumference to develop scaling factors that can be applied to reference SAFs as morphometry corrections. A further set of arbitrary non-reference phantoms was then constructed and used in validation studies for the SAF scaling factors. Both photon and electron dose scaling methods were found to increase average accuracy when sitting height was used as the scaling parameter (~11%). Photon waist-circumference-based scaling factors showed modest increases in average accuracy (~7%) for underweight individuals, but not for overweight individuals. Electron waist-circumference-based scaling factors did not show increases in average accuracy. When sitting height and waist circumference scaling factors were combined, modest average gains in accuracy were observed for photons (~6%), but not for electrons. Both photon and electron absorbed doses are more reliably scaled using the scaling factors computed in this study, and they can be effectively scaled using sitting height alone as the patient-specific morphometric parameter.

  15. Individualized adjustments to reference phantom internal organ dosimetry-scaling factors given knowledge of patient external anatomy.

    PubMed

    Wayson, Michael B; Bolch, Wesley E

    2018-04-13

    Internal radiation dose estimates for diagnostic nuclear medicine procedures are typically calculated for a reference individual. As a result, there is uncertainty in the organ doses for patients who are not at the 50th percentile in either height or weight. This study aims to better personalize internal radiation dose estimates by modifying the dose estimates calculated for reference individuals based on easily obtainable morphometric characteristics of the patient. Phantoms of different sitting heights and waist circumferences were constructed based on computational reference phantoms for the newborn, 10 year-old, and adult. Monoenergetic photons and electrons were then simulated separately at 15 energies. Photon and electron specific absorbed fractions (SAFs) were computed for the newly constructed non-reference phantoms and compared to SAFs previously generated for the age-matched reference phantoms. Differences in SAFs were correlated to changes in sitting height and waist circumference to develop scaling factors that can be applied to reference SAFs as morphometry corrections. A further set of arbitrary non-reference phantoms was then constructed and used in validation studies for the SAF scaling factors. Both photon and electron dose scaling methods were found to increase average accuracy when sitting height was used as the scaling parameter (~11%). Photon waist-circumference-based scaling factors showed modest increases in average accuracy (~7%) for underweight individuals, but not for overweight individuals. Electron waist-circumference-based scaling factors did not show increases in average accuracy. When sitting height and waist circumference scaling factors were combined, modest average gains in accuracy were observed for photons (~6%), but not for electrons. Both photon and electron absorbed doses are more reliably scaled using the scaling factors computed in this study, and they can be effectively scaled using sitting height alone as the patient-specific morphometric parameter.

  16. Assessing HTS Performance Using BioAssay Ontology: Screening and Analysis of a Bacterial Phospho-N-Acetylmuramoyl-Pentapeptide Translocase Campaign

    PubMed Central

    Moberg, Andreas; Hansson, Eva; Boyd, Helen

    2014-01-01

    Abstract With the public availability of biochemical assays and screening data constantly increasing, new applications for data mining and method analysis are evolving in parallel. One example is the BioAssay Ontology (BAO) for systematic classification of assays based on screening setup and metadata annotations. In this article we report a high-throughput screen (HTS) against phospho-N-acetylmuramoyl-pentapeptide translocase (MraY), an attractive antibacterial drug target involved in peptidoglycan synthesis. The screen resulted in the identification of novel chemistry using a fluorescence resonance energy transfer assay. To address a subset of the false positive hits, a frequent-hitter analysis was performed using an approach in which MraY hits were compared with hits from similar assays previously used for HTS. The MraY assay was annotated according to BAO, and three internal reference assays using a similar assay design and detection technology were identified. Analyzing the assays retrospectively, it was clear that the MraY assay and the three reference assays all showed a high false positive rate in the primary HTS assays. In the case of MraY, false positives were efficiently identified by applying a method to correct for compound interference at the hit-confirmation stage. Frequent-hitter analysis based on the three reference assays with a similar assay method identified additional false actives in the primary MraY assay as frequent hitters. This article demonstrates how assays annotated using BAO terms can be used to identify closely related reference assays, and that analysis based on these assays can clearly provide useful data to influence assay design, technology, and screening strategy. PMID:25415593

  17. An Optical Flow-Based Full Reference Video Quality Assessment Algorithm.

    PubMed

    K, Manasa; Channappayya, Sumohana S

    2016-06-01

    We present a simple yet effective optical flow-based full-reference video quality assessment (FR-VQA) algorithm for assessing the perceptual quality of natural videos. Our algorithm is based on the premise that local optical flow statistics are affected by distortions and that the deviation from pristine flow statistics is proportional to the amount of distortion. We characterize the local flow statistics using the mean, the standard deviation, the coefficient of variation (CV), and the minimum eigenvalue (λmin) of the local flow patches. Temporal distortion is estimated as the change in the CV of the distorted flow with respect to the reference flow, and the correlation between λmin of the reference and of the distorted patches. We rely on the robust multi-scale structural similarity index for spatial quality estimation. The computed temporal and spatial distortions are then pooled using a perceptually motivated heuristic to generate a spatio-temporal quality score. The proposed method is shown to be competitive with the state-of-the-art when evaluated on the LIVE SD database, the EPFL-PoliMI SD database, and the LIVE Mobile HD database. The distortions considered in these databases include those due to compression, packet loss, wireless channel errors, and rate adaptation. Our algorithm is flexible enough to allow for any robust FR spatial distortion metric for spatial distortion estimation. In addition, the proposed method is not only parameter-free but also independent of the choice of the optical flow algorithm. Finally, we show that replacing the optical flow vectors in our proposed method with much coarser block motion vectors also results in an acceptable FR-VQA algorithm. Our algorithm is called the flow similarity index.
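
    A sketch of the per-patch flow statistics listed above, assuming λmin refers to the smallest eigenvalue of the 2x2 covariance of the flow vectors within a patch (the paper's exact definition may differ); the flow field would come from any optical flow algorithm, with random data standing in here:

    ```python
    # Local optical-flow statistics over non-overlapping patches.
    import numpy as np

    def patch_stats(u, v, size=8):
        """Mean, std, coefficient of variation, and minimum eigenvalue of the
        2x2 flow covariance for each size x size patch."""
        stats = []
        for i in range(0, u.shape[0] - size + 1, size):
            for j in range(0, u.shape[1] - size + 1, size):
                pu = u[i:i + size, j:j + size].ravel()
                pv = v[i:i + size, j:j + size].ravel()
                mag = np.hypot(pu, pv)
                cov = np.cov(np.vstack([pu, pv]))
                lam_min = np.linalg.eigvalsh(cov)[0]   # ascending eigenvalues
                cv = mag.std() / (mag.mean() + 1e-12)
                stats.append((mag.mean(), mag.std(), cv, lam_min))
        return np.array(stats)

    rng = np.random.default_rng(3)
    u, v = rng.normal(size=(2, 64, 64))          # stand-in flow components
    features = patch_stats(u, v)                  # one row per patch
    ```

    Comparing these per-patch features between reference and distorted flow fields is what drives the temporal distortion estimate described in the abstract.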

  18. Ordering of the O-O stretching vibrational frequencies in ozone

    NASA Technical Reports Server (NTRS)

    Scuseria, Gustavo E.; Lee, Timothy J.; Scheiner, Andrew C.; Schaefer, Henry F., III

    1989-01-01

    The ordering of nu1 and nu3 for O3 is incorrectly predicted by most theoretical methods, including some very high-level methods. The first systematic electron correlation method based on a single reference configuration to solve this problem is the coupled cluster single and double excitation method. However, a relatively large basis set, triple zeta plus double polarization, is required. Comparison with other theoretical methods is made.

  19. Statistical tools for transgene copy number estimation based on real-time PCR.

    PubMed

    Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal

    2007-11-01

    As compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive, and high-throughput alternative. However, real-time PCR-based transgene copy number estimation tends to be ambiguous and subjective, owing to the lack of proper statistical analysis and data quality control needed to render a reliable copy number estimate with a prediction value. Despite recent progress in the statistical analysis of real-time PCR, few publications have integrated these advancements into real-time PCR-based transgene copy number determination. Three experimental designs and four statistical models with integrated data quality control are presented. In the first method, external calibration curves are established for the transgene based on serially diluted templates. The Ct numbers from a control transgenic event and a putative transgenic event are compared to derive the transgene copy number or zygosity estimate. Simple linear regression and two-group t-test procedures were combined to model the data from this design. In the second experimental design, standard curves were generated for both an internal reference gene and the transgene, and the copy number of the transgene was compared with that of the internal reference gene. Multiple regression models and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third experimental design, transgene copy number is compared with the reference gene without a standard curve, based directly on fluorescence data. Two different multiple regression models were proposed to analyze the data, based on two different approaches to amplification efficiency integration. Our results highlight the importance of proper statistical treatment and quality control integration in real-time PCR-based transgene copy number determination. These statistical methods allow real-time PCR-based transgene copy number estimation to be more reliable and precise with a proper statistical estimation, and proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four statistical methods are compared for their advantages and disadvantages. Moreover, the statistical methods can also be applied to other real-time PCR-based quantification assays, including transfection efficiency analysis and pathogen quantification.
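
    A hedged sketch of the first design described above, with all Ct values invented: fit the external calibration curve by simple linear regression, then compare control and putative-event Ct values with a t-test.

    ```python
    # Standard-curve regression plus a two-group t-test on Ct values.
    import numpy as np
    from scipy import stats

    # Standard curve: Ct vs log10(template amount) should be linear.
    log_template = np.log10([1, 10, 100, 1000, 10000])
    ct_curve = np.array([33.1, 29.8, 26.4, 23.1, 19.7])
    slope, intercept, r, p, se = stats.linregress(log_template, ct_curve)
    efficiency = 10 ** (-1.0 / slope) - 1.0        # amplification efficiency

    # Replicate Ct values for a single-copy calibrator and a putative event.
    ct_control = np.array([24.1, 24.0, 24.2, 24.1])
    ct_sample = np.array([23.1, 23.0, 23.2, 23.1])

    t, p_val = stats.ttest_ind(ct_control, ct_sample)
    delta_ct = ct_control.mean() - ct_sample.mean()
    copies = 2 ** delta_ct                          # assumes ~100% efficiency
    print(f"efficiency={efficiency:.2f}, est. copies={copies:.1f}, p={p_val:.4f}")
    ```

    The t-test p-value and the regression fit quality act as the kind of quality control the abstract argues for: a copy number call is only trustworthy when the calibration is linear and the Ct difference is statistically distinguishable from replicate noise.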

  20. The importance of group-wise registration in tract based spatial statistics study of neurodegeneration: a simulation study in Alzheimer's disease.

    PubMed

    Keihaninejad, Shiva; Ryan, Natalie S; Malone, Ian B; Modat, Marc; Cash, David; Ridgway, Gerard R; Zhang, Hui; Fox, Nick C; Ourselin, Sebastien

    2012-01-01

    Tract-based spatial statistics (TBSS) is a popular method for the analysis of diffusion tensor imaging data. TBSS focuses on differences in white matter voxels with high fractional anisotropy (FA), representing the major fibre tracts, through registering all subjects to a common reference and the creation of an FA skeleton. This work considers the effect of choice of reference in the TBSS pipeline, which can be a standard template, an individual subject from the study, a study-specific template, or a group-wise average. While TBSS attempts to overcome registration error by searching the neighbourhood perpendicular to the FA skeleton for the voxel with maximum FA, this projection step may not compensate for large registration errors that might occur in the presence of pathology, such as atrophy in neurodegenerative diseases. This makes registration performance and choice of reference an important issue. Substantial work in the field of computational anatomy has shown the use of group-wise averages to reduce biases while avoiding the arbitrary selection of a single individual. Here, we demonstrate the impact of the choice of reference on (a) specificity and (b) sensitivity in a simulation study, and (c) in a real-world comparison of Alzheimer's disease patients to controls. In (a) and (b), simulated deformations and decreases in FA were applied to control subjects to simulate the changes of shape and WM integrity that would be seen in AD patients, in order to provide a "ground truth" for evaluating the various choices of TBSS reference. Using a group-wise average atlas as the reference outperformed the other references in the TBSS pipeline in all evaluations.

  1. The Importance of Group-Wise Registration in Tract Based Spatial Statistics Study of Neurodegeneration: A Simulation Study in Alzheimer's Disease

    PubMed Central

    Keihaninejad, Shiva; Ryan, Natalie S.; Malone, Ian B.; Modat, Marc; Cash, David; Ridgway, Gerard R.; Zhang, Hui; Fox, Nick C.; Ourselin, Sebastien

    2012-01-01

    Tract-based spatial statistics (TBSS) is a popular method for the analysis of diffusion tensor imaging data. TBSS focuses on differences in white matter voxels with high fractional anisotropy (FA), representing the major fibre tracts, through registering all subjects to a common reference and the creation of an FA skeleton. This work considers the effect of choice of reference in the TBSS pipeline, which can be a standard template, an individual subject from the study, a study-specific template, or a group-wise average. While TBSS attempts to overcome registration error by searching the neighbourhood perpendicular to the FA skeleton for the voxel with maximum FA, this projection step may not compensate for large registration errors that might occur in the presence of pathology, such as atrophy in neurodegenerative diseases. This makes registration performance and choice of reference an important issue. Substantial work in the field of computational anatomy has shown the use of group-wise averages to reduce biases while avoiding the arbitrary selection of a single individual. Here, we demonstrate the impact of the choice of reference on (a) specificity and (b) sensitivity in a simulation study, and (c) in a real-world comparison of Alzheimer's disease patients to controls. In (a) and (b), simulated deformations and decreases in FA were applied to control subjects to simulate the changes of shape and WM integrity that would be seen in AD patients, in order to provide a “ground truth” for evaluating the various choices of TBSS reference. Using a group-wise average atlas as the reference outperformed the other references in the TBSS pipeline in all evaluations. PMID:23139736

  2. Impact of the choice of reference genome on the ability of the core genome SNV methodology to distinguish strains of Salmonella enterica serovar Heidelberg.

    PubMed

    Usongo, Valentine; Berry, Chrystal; Yousfi, Khadidja; Doualla-Bell, Florence; Labbé, Genevieve; Johnson, Roger; Fournier, Eric; Nadon, Celine; Goodridge, Lawrence; Bekal, Sadjia

    2018-01-01

    Salmonella enterica serovar Heidelberg (S. Heidelberg) is one of the top serovars causing human salmonellosis. The core genome single nucleotide variant pipeline (cgSNV) is one of several whole genome based sequence typing methods used for the laboratory investigation of foodborne pathogens. SNV detection using this method requires a reference genome. The purpose of this study was to investigate the impact of the choice of the reference genome on the cgSNV-informed phylogenetic clustering and inferred isolate relationships. We found that using a draft or closed genome of S. Heidelberg as reference did not impact the ability of the cgSNV methodology to differentiate among 145 S. Heidelberg isolates involved in foodborne outbreaks. We also found that using a distantly related genome such as S. Dublin as choice of reference led to a loss in resolution since some sporadic isolates were found to cluster together with outbreak isolates. In addition, the genetic distances between outbreak isolates as well as between outbreak and sporadic isolates were overall reduced when S. Dublin was used as the reference genome as opposed to S. Heidelberg.

  3. Visually testing the dynamic character of a blazed-angle adjustable grating by digital holographic microscopy.

    PubMed

    Qin, Chuan; Zhao, Jianlin; Di, Jianglei; Wang, Le; Yu, Yiting; Yuan, Weizheng

    2009-02-10

    We employed digital holographic microscopy to visually test microoptoelectromechanical systems (MOEMS). The sample is a blazed-angle adjustable grating. Considering the periodic structure of the sample, a local-area unwrapping method based on a binary template was adopted to demodulate the fringes obtained by comparison with a reference hologram. A series of holograms at different deformation states, corresponding to different drive voltages, were captured to analyze the dynamic character of the MOEMS, and the uniformity of the different microcantilever beams was also inspected. The results show that this testing method is effective for periodic structures.

  4. Optical calculation of correlation filters for a robotic vision system

    NASA Technical Reports Server (NTRS)

    Knopp, Jerome

    1989-01-01

    A method is presented for designing optical correlation filters based on measuring three intensity patterns: the Fourier transform of a filter object, a reference wave, and the interference pattern produced by the sum of the object transform and the reference. The method can produce a filter that is well matched to the object, its transforming optical system, and the spatial light modulator used in the correlator input plane. A computer simulation is presented to demonstrate the approach for the special case of a conventional binary phase-only filter. The simulation produced a workable filter with a sharp correlation peak.
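
    A toy sketch of the binary phase-only filter concept mentioned above, computed digitally rather than optically; the object pattern and the convention of thresholding on the real part of the spectrum are illustrative assumptions.

    ```python
    # Binary phase-only filter (BPOF) and correlation with a shifted scene.
    import numpy as np

    obj = np.zeros((64, 64))
    obj[28:36, 20:44] = 1.0                        # simple bar-shaped object

    F = np.fft.fft2(obj)
    bpof = np.where(F.real >= 0.0, 1.0, -1.0)      # binarized phase (0 or pi)

    scene = np.roll(obj, (5, -7), axis=(0, 1))     # object shifted in the scene
    corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(bpof))
    peak = np.unravel_index(np.abs(corr).argmax(), corr.shape)
    print(peak)                                    # peak index reveals the shift
    ```

    The appeal of the binary filter is practical: a two-level phase pattern is far easier to realize on a spatial light modulator than a full complex-valued matched filter, at the cost of some correlation-peak energy.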

  5. Apparatus and method for identification and recognition of an item with ultrasonic patterns from item subsurface micro-features

    DOEpatents

    Perkins, Richard W.; Fuller, James L.; Doctor, Steven R.; Good, Morris S.; Heasler, Patrick G.; Skorpik, James R.; Hansen, Norman H.

    1995-01-01

    The present invention is a means and method for identification and recognition of an item by ultrasonic imaging of material microfeatures and/or macrofeatures within the bulk volume of a material. The invention is based upon ultrasonic interrogation and imaging of material microfeatures within the body of material by accepting only reflected ultrasonic energy from a preselected plane or volume within the material. An initial interrogation produces an identification reference. Subsequent new scans are statistically compared to the identification reference for making a match/non-match decision.

  6. Apparatus and method for identification and recognition of an item with ultrasonic patterns from item subsurface micro-features

    DOEpatents

    Perkins, R.W.; Fuller, J.L.; Doctor, S.R.; Good, M.S.; Heasler, P.G.; Skorpik, J.R.; Hansen, N.H.

    1995-09-26

    The present invention is a means and method for identification and recognition of an item by ultrasonic imaging of material microfeatures and/or macrofeatures within the bulk volume of a material. The invention is based upon ultrasonic interrogation and imaging of material microfeatures within the body of material by accepting only reflected ultrasonic energy from a preselected plane or volume within the material. An initial interrogation produces an identification reference. Subsequent new scans are statistically compared to the identification reference for making a match/non-match decision. 15 figs.
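
    The patents do not disclose the statistical comparison in detail; one hypothetical reading is a normalized correlation between the stored reference image of the subsurface plane and a new registered scan, thresholded for the match/non-match decision. All data and the threshold below are invented.

    ```python
    # Hypothetical match/non-match decision on ultrasonic signatures.
    import numpy as np

    def match_score(reference, scan):
        """Normalized cross-correlation at zero lag (scans assumed registered)."""
        a = (reference - reference.mean()) / reference.std()
        b = (scan - scan.mean()) / scan.std()
        return float((a * b).mean())

    rng = np.random.default_rng(4)
    reference = rng.normal(size=(128, 128))        # stored microstructure signature
    same_item = reference + 0.2 * rng.normal(size=reference.shape)
    other_item = rng.normal(size=reference.shape)

    THRESHOLD = 0.8                                # decision threshold (assumed)
    print(match_score(reference, same_item) > THRESHOLD)    # True  -> match
    print(match_score(reference, other_item) > THRESHOLD)   # False -> non-match
    ```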

  7. [Study on blood pressure standard in children using the automatic sphygmomanometer].

    PubMed

    Niida, Mami; Hataya, Hiroshi; Honda, Masataka

    2015-01-01

    In Japan, two treatment guidelines exist for pediatric patients with hypertension. The Guidelines for Drug Therapy in Pediatric Patients with Cardiovascular Diseases (JCS2012), by the Japanese Circulation Society, cite the stethoscopy-based American guidelines. The Guidelines for the Management of Hypertension (JSH2009), by the Japanese Society of Hypertension, focus on Japanese data obtained from automated sphygmomanometry. The frequent use of automated sphygmomanometers in clinical practice implies that the JSH2009 guidelines might be preferable; however, with their strict low reference values for the diastolic phase, overtreatment may result. Only the Japanese Circulation Society's guidelines include a therapeutic strategy, and the Chronic Kidney Disease (CKD) Guide, the CKD Guidelines, and school urinary screening tests all cite these guidelines on stethoscopy-based blood pressure determination. Stethoscopy should be conducted during a medical examination; however, due to limited time in clinical practice, most physicians use automated sphygmomanometers while nevertheless relying on the Japanese Circulation Society reference values, which are stethoscopy-based. To find a compromise, we compared reference values in Japan with those from South Korea (automated sphygmomanometer-based) and those from the United States (stethoscopy-based). Moreover, we examined the results of recent accuracy tests for automated sphygmomanometers. Although the JSH2009 reference values for the systolic phase were consistent with those in the United States (stethoscopy-based), the reference values for the diastolic phase were lower. We observed the same tendency when comparing the JSH2009 reference values with those in South Korea (automated sphygmomanometer-based). Conversely, there were only small differences between automated sphygmomanometry and mercury measurement, and we found that values from automated sphygmomanometry could substitute for stethoscopic values. A large-scale study that takes into account patient height, measurement method, and treatment criteria is required to establish appropriate reference values. Even if automated sphygmomanometry is used until appropriate values are established, we consider the criteria provided in the American guidelines to be appropriate.

  8. MaCH-Admix: Genotype Imputation for Admixed Populations

    PubMed Central

    Liu, Eric Yi; Li, Mingyao; Wang, Wei; Li, Yun

    2012-01-01

    Imputation in admixed populations is an important problem but challenging due to the complex linkage disequilibrium (LD) pattern. The emergence of large reference panels such as that from the 1,000 Genomes Project enables more accurate imputation in general, and in particular for admixed populations and for uncommon variants. To efficiently benefit from these large reference panels, one key issue to consider in a modern genotype imputation framework is the selection of effective reference panels. In this work, we consider a number of methods for effective reference panel construction inside a hidden Markov model and specific to each target individual. These methods fall into two categories: identity-by-state (IBS) based and ancestry-weighted approaches. We evaluated the performance on individuals from recently admixed populations. Our target samples include 8,421 African Americans and 3,587 Hispanic Americans from the Women’s Health Initiative, which allow assessment of imputation quality for uncommon variants. Our experiments include both large and small reference panels; large, medium, and small target samples; and genome regions of varying levels of LD. We also include BEAGLE and IMPUTE2 for comparison. Experimental results with the large reference panel suggest that our novel piecewise IBS method yields consistently higher imputation quality than other methods/software. The advantage is particularly noteworthy among uncommon variants, where we observe up to 5.1% information gain with the difference being highly significant (Wilcoxon signed rank test P-value < 0.0001). Our work is the first that considers various sensible approaches for imputation in admixed populations and presents a comprehensive comparison. PMID:23074066
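
    To make the piecewise IBS idea concrete, the sketch below selects, for each genomic segment, the reference haplotypes most similar to the target by identity-by-state. It is a minimal illustration of the selection step only, not the MaCH-Admix implementation; the equal-size segmentation and the parameters are our assumptions.

        import numpy as np

        def piecewise_ibs_select(target, references, n_pieces=4, k=10):
            """For each genomic piece, pick the k reference haplotypes most
            similar to the target by identity-by-state (allele matching).
            target: (n_sites,) 0/1; references: (n_refs, n_sites) 0/1."""
            chosen = []
            for idx in np.array_split(np.arange(target.size), n_pieces):
                # IBS score = fraction of sites with identical alleles
                ibs = (references[:, idx] == target[idx]).mean(axis=1)
                chosen.append(np.argsort(ibs)[::-1][:k])
            return chosen

        # toy usage
        rng = np.random.default_rng(0)
        refs = rng.integers(0, 2, size=(100, 200))   # toy reference panel
        panel_per_piece = piecewise_ibs_select(rng.integers(0, 2, 200), refs)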

  9. Analysis of street drugs in seized material without primary reference standards.

    PubMed

    Laks, Suvi; Pelander, Anna; Vuori, Erkki; Ali-Tolppa, Elisa; Sippola, Erkki; Ojanperä, Ilkka

    2004-12-15

    A novel approach was used to analyze street drugs in seized material without primary reference standards. Identification was performed by liquid chromatography/time-of-flight mass spectrometry (LC/TOFMS), essentially based on accurate mass determination using a target library of 735 exact monoisotopic masses. Quantification was carried out by liquid chromatography/chemiluminescence nitrogen detection (LC/CLND) with a single secondary standard (caffeine), utilizing the detector's equimolar response to nitrogen. Sample preparation comprised dilution, first with methanol and further with the LC mobile phase. Altogether 21 seized drug samples were analyzed blind by the present method, and results were compared to accredited reference methods utilizing identification by gas chromatography/mass spectrometry and quantification by gas chromatography or liquid chromatography. The 31 drug findings by LC/TOFMS comprised 19 different drugs-of-abuse, byproducts, and adulterants, including amphetamine and tryptamine designer drugs, with one unresolved pair of compounds having an identical mass. By the reference methods, 27 findings could be confirmed, and among the four unconfirmed findings, only 1 apparent false positive was found. In the quantitative analysis of 11 amphetamine, heroin, and cocaine findings, mean relative difference between the results of LC/CLND and the reference methods was 11% (range 4.2-21%), without any observable bias. Mean relative standard deviation for three parallel LC/CLND results was 6%. Results suggest that the present combination of LC/TOFMS and LC/CLND offers a simple solution for the analysis of scheduled and designer drugs in seized material, independent of the availability of primary reference standards.
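
    The single-standard quantification rests on the CLND's (approximately) equimolar response to nitrogen: peak areas are proportional to moles of nitrogen in the peak, so one caffeine calibrant can quantify any nitrogen-containing analyte of known formula. A minimal sketch of that arithmetic; the molar masses and nitrogen counts are supplied by the user, and the amphetamine example values are ours, not from the paper.

        def clnd_concentration(area_analyte, area_std, conc_std_mg_l,
                               m_std=194.19, n_std=4,
                               m_analyte=135.21, n_analyte=1):
            """Single-standard CLND quantification: peak area is taken as
            proportional to moles of nitrogen in the peak (equimolar
            response). Defaults: caffeine standard (M = 194.19, 4 N) and,
            as an illustrative analyte, amphetamine (M = 135.21, 1 N)."""
            n_molar_std = conc_std_mg_l / m_std * n_std        # "mol" N per litre
            n_molar_analyte = n_molar_std * area_analyte / area_std
            return n_molar_analyte / n_analyte * m_analyte     # mg/L of analyte

        print(clnd_concentration(area_analyte=5.2e4, area_std=1.0e5,
                                 conc_std_mg_l=10.0))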

  10. Applicability of the polynomial chaos expansion method for personalization of a cardiovascular pulse wave propagation model.

    PubMed

    Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N

    2014-12-01

    Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method uses generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with those of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields similar conclusions as the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessarily high-order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
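
    The regression route to the expansion coefficients (gPCE-R) amounts to ordinary least squares on an orthonormal polynomial basis, after which variance-based sensitivity indices fall out of the coefficients. A minimal sketch under the assumption of independent inputs scaled to [-1, 1] (hence a Legendre basis); this is an illustration of the technique, not the authors' implementation.

        import numpy as np
        from numpy.polynomial import legendre as L
        from itertools import product

        def legendre_norm(n, x):
            # Legendre polynomial, orthonormal w.r.t. the uniform density on [-1, 1]
            c = np.zeros(n + 1); c[n] = 1.0
            return np.sqrt(2 * n + 1) * L.legval(x, c)

        def fit_gpce(X, y, degree=2):
            """Least-squares gPCE (the gPCE-R route) for inputs in [-1, 1].
            Returns multi-indices and expansion coefficients."""
            d = X.shape[1]
            midx = [m for m in product(range(degree + 1), repeat=d) if sum(m) <= degree]
            A = np.column_stack([np.prod([legendre_norm(m[j], X[:, j])
                                          for j in range(d)], axis=0) for m in midx])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            return midx, coef

        def first_order_sobol(midx, coef):
            var = np.sum(coef[1:] ** 2)        # total variance (orthonormal basis)
            S = np.zeros(len(midx[0]))
            for m, c in zip(midx, coef):
                active = [j for j in range(len(m)) if m[j] > 0]
                if len(active) == 1:           # term depends on one input only
                    S[active[0]] += c ** 2
            return S / var

        # toy model: y depends on x0 and x1 only; x2 should get S ~ 0
        rng = np.random.default_rng(1)
        X = rng.uniform(-1, 1, size=(500, 3))
        y = X[:, 0] + 0.5 * X[:, 1] ** 2
        midx, coef = fit_gpce(X, y)
        print(first_order_sobol(midx, coef))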

  11. Development and application of absolute quantitative detection by duplex chamber-based digital PCR of genetically modified maize events without pretreatment steps.

    PubMed

    Zhu, Pengyu; Fu, Wei; Wang, Chenguang; Du, Zhixin; Huang, Kunlun; Zhu, Shuifang; Xu, Wentao

    2016-04-15

    The possibility of absolute quantitation of GMO events by digital PCR was recently reported. However, most absolute quantitation methods based on digital PCR require pretreatment steps. Meanwhile, singleplex detection cannot meet the demand of absolute quantitation of GMO events, which is based on the ratio of foreign fragments to reference genes. Thus, to promote the absolute quantitative detection of different GMO events by digital PCR, we developed a quantitative detection method based on duplex digital PCR without pretreatment. Moreover, we tested 7 GMO events in our study to evaluate the fitness of our method. The optimized combination of foreign and reference primers, the limit of quantitation (LOQ), the limit of detection (LOD) and the specificity were validated. The results showed that the LOQ of our method for different GMO events was 0.5%, while the LOD was 0.1%. Additionally, we found that duplex digital PCR could achieve detection results with lower RSD compared with singleplex digital PCR. In summary, the duplex digital PCR detection system is a simple and stable way to achieve the absolute quantitation of different GMO events. Moreover, the LOQ and LOD indicate that this method is suitable for the daily detection and quantitation of GMO events. Copyright © 2016 Elsevier B.V. All rights reserved.
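
    Absolute quantitation in digital PCR reduces to Poisson-correcting the fraction of positive partitions in each channel and taking the ratio of event-specific to reference-gene copies. A minimal sketch of that calculation; the partition counts in the example are illustrative, not from the study.

        import numpy as np

        def copies_per_partition(n_positive, n_total):
            """Poisson correction: mean target copies per partition."""
            return -np.log(1.0 - n_positive / n_total)

        def gmo_percent(pos_event, pos_ref, n_partitions):
            """GMO content as the ratio of event-specific to reference-gene
            copies measured in the same duplex dPCR reaction."""
            return 100.0 * (copies_per_partition(pos_event, n_partitions)
                            / copies_per_partition(pos_ref, n_partitions))

        # illustrative counts: 312 event-positive and 15000 reference-positive
        # partitions out of 20000
        print(gmo_percent(312, 15000, 20000))    # ~1.1 %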

  12. Tracer Kinetic Analysis of (S)-¹⁸F-THK5117 as a PET Tracer for Assessing Tau Pathology.

    PubMed

    Jonasson, My; Wall, Anders; Chiotis, Konstantinos; Saint-Aubert, Laure; Wilking, Helena; Sprycha, Margareta; Borg, Beatrice; Thibblin, Alf; Eriksson, Jonas; Sörensen, Jens; Antoni, Gunnar; Nordberg, Agneta; Lubberink, Mark

    2016-04-01

    Because a correlation between tau pathology and the clinical symptoms of Alzheimer disease (AD) has been hypothesized, there is increasing interest in developing PET tracers that bind specifically to tau protein. The aim of this study was to evaluate tracer kinetic models for quantitative analysis and generation of parametric images for the novel tau ligand (S)-¹⁸F-THK5117. Nine subjects (5 with AD, 4 with mild cognitive impairment) received a 90-min dynamic (S)-¹⁸F-THK5117 PET scan. Arterial blood was sampled for measurement of blood radioactivity and metabolite analysis. Volume-of-interest (VOI)-based analysis was performed using plasma-input models (single-tissue and 2-tissue (2TCM) compartment models and plasma-input Logan) and reference tissue models (simplified reference tissue model (SRTM), reference Logan, and SUV ratio (SUVr)). Cerebellum gray matter was used as the reference region. Voxel-level analysis was performed using basis function implementations of SRTM, reference Logan, and SUVr. Regionally averaged voxel values were compared with VOI-based values from the optimal reference tissue model, and simulations were made to assess accuracy and precision. In addition to 90 min, initial 40- and 60-min data were analyzed. Plasma-input Logan distribution volume ratio (DVR)-1 values agreed well with 2TCM DVR-1 values (R² = 0.99, slope = 0.96). SRTM binding potential (BP(ND)) and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 (R² = 1.00, slope ≈ 1.00), whereas SUVr(70-90)-1 values correlated less well and overestimated binding. Agreement between parametric methods and SRTM was best for reference Logan (R² = 0.99, slope = 1.03). SUVr(70-90)-1 values were almost 3 times higher than BP(ND) values in white matter and 1.5 times higher in gray matter. Simulations showed poorer accuracy and precision for SUVr(70-90)-1 values than for the other reference methods. SRTM BP(ND) and reference Logan DVR-1 values were not affected by a shorter scan duration of 60 min. SRTM BP(ND) and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 values. VOI-based data analyses indicated robust results for scan durations of 60 min. Reference Logan generated quantitative (S)-¹⁸F-THK5117 DVR-1 parametric images with the greatest accuracy and precision and with a much lower white-matter signal than seen with SUVr(70-90)-1 images. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
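
    The reference-Logan estimate of DVR used above is simply the late-time slope of one integral ratio against another. A minimal sketch under the usual assumptions (t* chosen after the plot linearizes; the Cref/k2' term optional), not the authors' pipeline:

        import numpy as np

        def reference_logan_dvr(t, ct, cref, t_star, k2p=None):
            """DVR from the reference-Logan plot: for t >= t*, the slope of
            int(Ct)/Ct versus [int(Cref) + Cref/k2']/Ct approaches DVR
            (BPnd = DVR - 1). The Cref/k2' term is skipped when k2p is None."""
            def cumtrapz0(y):
                return np.concatenate(([0.0],
                    np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(t))))
            num = cumtrapz0(ct)
            den = cumtrapz0(cref)
            if k2p is not None:
                den = den + cref / k2p
            m = t >= t_star
            slope, _ = np.polyfit(den[m] / ct[m], num[m] / ct[m], 1)
            return slope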

  13. Bias Assessment of General Chemistry Analytes using Commutable Samples.

    PubMed

    Koerbin, Gus; Tate, Jillian R; Ryan, Julie; Jones, Graham Rd; Sikaris, Ken A; Kanowski, David; Reed, Maxine; Gill, Janice; Koumantakis, George; Yen, Tina; St John, Andrew; Hickman, Peter E; Simpson, Aaron; Graham, Peter

    2014-11-01

    Harmonisation of reference intervals for routine general chemistry analytes has been a goal for many years. Analytical bias may prevent this harmonisation. To determine if analytical bias is present when comparing methods, commutable samples, i.e., samples that have the same properties as the clinical samples routinely analysed, should be used as reference samples to eliminate the possibility of matrix effects. The use of commutable samples has improved the identification of unacceptable analytical performance in the Netherlands and Spain. The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has undertaken a pilot study using commutable samples in an attempt not only to determine country-specific reference intervals but to make them comparable between countries. Australia and New Zealand, through the Australasian Association of Clinical Biochemists (AACB), have also undertaken an assessment of analytical bias using commutable samples and determined that, of the 27 general chemistry analytes studied, 19 showed sufficiently small between-method biases as not to prevent harmonisation of reference intervals. Application of evidence-based approaches, including the determination of analytical bias using commutable material, is necessary when seeking to harmonise reference intervals.

  14. Reduced-Reference Quality Assessment Based on the Entropy of DWT Coefficients of Locally Weighted Gradient Magnitudes.

    PubMed

    Golestaneh, S Alireza; Karam, Lina

    2016-08-24

    Perceptual image quality assessment (IQA) attempts to use computational models to estimate the image quality in accordance with subjective evaluations. Reduced-reference (RR) IQA methods make use of partial information or features extracted from the reference image for estimating the quality of distorted images. Finding a balance between the number of RR features and the accuracy of the estimated image quality is essential and important in IQA. In this paper we propose a training-free low-cost RRIQA method that requires a very small number of RR features (6 RR features). The proposed RRIQA algorithm is based on the discrete wavelet transform (DWT) of locally weighted gradient magnitudes. We apply the human visual system's contrast sensitivity and neighborhood gradient information to weight the gradient magnitudes in a locally adaptive manner. The RR features are computed by measuring the entropy of each DWT subband, for each scale, and pooling the subband entropies along all orientations, resulting in L RR features (one average entropy per scale) for an L-level DWT. Extensive experiments performed on seven large-scale benchmark databases demonstrate that the proposed RRIQA method delivers highly competitive performance as compared to state-of-the-art RRIQA models as well as full-reference ones for both natural and texture images. The MATLAB source code of REDLOG and the evaluation results are publicly available online at http://lab.engineering.asu.edu/ivulab/software/redlog/.
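
    A minimal sketch of the feature computation as described (gradient magnitude, L-level DWT, entropy per detail subband, pooled over orientations), assuming the PyWavelets package is available; the paper's contrast-sensitivity and neighbourhood weighting of the gradients is omitted here for brevity.

        import numpy as np
        import pywt  # PyWavelets, assumed available

        def dwt_subband_entropies(image, wavelet='db2', levels=4, bins=256):
            """One RR feature per scale: entropy of each DWT detail subband
            of the gradient magnitude, pooled across orientations."""
            gy, gx = np.gradient(image.astype(float))
            gmag = np.hypot(gx, gy)   # CSF/neighbourhood weighting omitted here
            coeffs = pywt.wavedec2(gmag, wavelet, level=levels)
            features = []
            for detail in coeffs[1:]:                 # (cH, cV, cD) per scale
                entropies = []
                for band in detail:
                    hist, _ = np.histogram(band, bins=bins)
                    p = hist[hist > 0] / hist.sum()
                    entropies.append(float(-np.sum(p * np.log2(p))))
                features.append(np.mean(entropies))   # pool orientations
            return features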

  15. Reference Charts for Fetal Cerebellar Vermis Height: A Prospective Cross-Sectional Study of 10605 Fetuses

    PubMed Central

    Cignini, Pietro; Giorlandino, Maurizio; Brutti, Pierpaolo; Mangiafico, Lucia; Aloisi, Alessia; Giorlandino, Claudio

    2016-01-01

    Objective To establish reference charts for fetal cerebellar vermis height in an unselected population. Methods A prospective cross-sectional study between September 2009 and December 2014 was carried out at ALTAMEDICA Fetal–Maternal Medical Centre, Rome, Italy. Of 25203 fetal biometric measurements, 12167 (48%) measurements of the cerebellar vermis were available. After excluding 1562 (12.8%) measurements, a total of 10605 (87.2%) fetuses were considered and analyzed once only. Parametric and nonparametric quantile regression models were used for the statistical analysis. In order to evaluate the robustness of the proposed reference charts with respect to various distributional assumptions on the ultrasound measurements at hand, we compared the gestational age-specific reference curves produced by the different statistical methods used. Normal mean height based on parametric and nonparametric methods was defined for each week of gestation, and the regression equation expressing the height of the cerebellar vermis as a function of gestational age was calculated. Finally, the correlation between dimension and gestation was measured. Results The mean height of the cerebellar vermis was 12.7 mm (SD, 1.6 mm; 95% confidence interval, 12.7–12.8 mm). The regression equation expressing the height of the CV as a function of the gestational age was: height (mm) = -4.85 + 0.78 x gestational age. The correlation between dimension and gestation was expressed by the coefficient r = 0.87. Conclusion This is the first prospective cross-sectional study of fetal cerebellar vermis biometry with such a large sample size reported in the literature. It is a detailed statistical survey and contains new centile-based reference charts for fetal cerebellar vermis height measurements. PMID:26812238
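
    The reported regression can be applied directly; for example, at 22.5 weeks it returns roughly the study's overall mean of 12.7 mm (the gestational age in the example is ours, chosen for illustration):

        def vermis_height_mm(ga_weeks):
            """Reported regression: height (mm) = -4.85 + 0.78 x GA (weeks)."""
            return -4.85 + 0.78 * ga_weeks

        print(vermis_height_mm(22.5))   # ~12.7 mm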

  16. Automatic reference selection for quantitative EEG interpretation: identification of diffuse/localised activity and the active earlobe reference, iterative detection of the distribution of EEG rhythms.

    PubMed

    Wang, Bei; Wang, Xingyu; Ikeda, Akio; Nagamine, Takashi; Shibasaki, Hiroshi; Nakamura, Masatoshi

    2014-01-01

    EEG (electroencephalogram) interpretation is important for the diagnosis of neurological disorders. The proper adjustment of the montage can highlight the EEG rhythm of interest and avoid false interpretation. The aim of this study was to develop an automatic reference selection method to identify a suitable reference. The results may contribute to the accurate inspection of the distribution of EEG rhythms for quantitative EEG interpretation. The method includes two pre-judgements and one iterative detection module. The diffuse case is initially identified by pre-judgement 1 when intermittent rhythmic waveforms occur over large areas of the scalp. For the diffuse case, either the earlobe reference or the averaged reference is adopted, depending on whether pre-judgement 2 finds the earlobe reference to be active. An iterative detection algorithm is developed for the localised case, when the signal is distributed in a small area of the brain. The suitable averaged reference is finally determined based on the detected focal and distributed electrodes. The presented technique was applied to the pathological EEG recordings of nine patients. One example of the diffuse case is introduced by illustrating the results of the pre-judgements: the diffusely intermittent rhythmic slow wave is identified, and the effect of an active earlobe reference is analysed. Two examples of the localised case are presented, indicating the results of the iterative detection module; the focal and distributed electrodes are detected automatically during the iterations of the algorithm. The identification of diffuse and localised activity was satisfactory compared with visual inspection. The EEG rhythm of interest can be highlighted using a suitably selected reference. The implementation of an automatic reference selection method is helpful to detect the distribution of an EEG rhythm, which can improve the accuracy of EEG interpretation during both visual inspection and automatic interpretation. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.

  17. Comparison of a novel strategy for the detection and isolation of Salmonella in shell eggs with the Food and Drug Administration Bacteriological Analytical Manual method.

    PubMed

    Zhang, Guodong; Thau, Eve; Brown, Eric W; Hammack, Thomas S

    2013-12-01

    The current FDA Bacteriological Analytical Manual (BAM) method for the detection of Salmonella in eggs requires 2 wk to complete. The objective of this project was to improve the BAM method for the detection and isolation of Salmonella in whole shell eggs. A novel protocol, using 1,000 g of liquid eggs for direct preenrichment with 2 L of tryptic soy broth (TSB) followed by enrichment using Rappaport-Vassiliadis and Tetrathionate broths, was compared with the standard BAM method, which requires 96 h of room-temperature incubation of whole shell egg samples followed by preenrichment in TSB supplemented with FeSO4. Four Salmonella ser. Enteritidis isolates (4 phage types) and one Salmonella ser. Heidelberg isolate were used in the study. Bulk-inoculated pooled liquid eggs, weighing 52 or 56 kg (approximately 1,100 eggs), were used in each trial. Twenty 1,000-g test portions were withdrawn from the pooled eggs for both the alternative and the reference methods. Test portions were inoculated with Salmonella at 1 to 5 cfu/1,000 g eggs. Two replicates were performed for each isolate. In the 8 trials conducted with Salmonella ser. Enteritidis, the alternative method was significantly (P < 0.05) more productive than the reference method in 3 trials, and significantly (P < 0.05) less productive than the reference method in 1 trial. There were no significant (P < 0.05) differences between the 2 methods in the other 4 trials. For Salmonella ser. Heidelberg, combined data from 2 trials showed the alternative method was significantly (P < 0.05) more efficient than the BAM method. We have concluded that the alternative method, described herein, has the potential to replace the current BAM culture method for detection and isolation of Salmonella from shell eggs based on the following factors: 1) the alternative method is 4 d shorter than the reference method; 2) it uses regular TSB instead of the more complicated TSB supplemented with FeSO4; and 3) it was equivalent or superior to the reference method in 9 out of 10 trials for the detection of Salmonella in shell eggs.

  18. School Finance Adequacy: What Is It and How Do We Measure It?

    ERIC Educational Resources Information Center

    Picus, Lawrence O.

    2001-01-01

    Discusses legal definition of school-finance "adequacy" and four methods for determining the cost of an adequate system: Cost function, observational methods, professional judgment, and costs of a comprehensive school design. Draws implications for school districts' resource-allocation decisions based on adequacy. (Contains 21 references.) (PKP)

  19. PERFORMANCE AND SENSITIVITY ANALYSIS OF THE USEPA WINS FRACTIONATOR FOR THE PM 2.5 FEDERAL REFERENCE METHOD

    EPA Science Inventory

    In response to growing health concerns related to atmospheric fine particles, EPA promulgated in 1997 a new particulate matter standard accompanied by new sampling methodology. Based on a review of pertinent literature, a new metric (PM2.5) was adopted and its measurement method...

  20. Electrothermal atomic absorption spectrometric determination of copper in nickel-base alloys with various chemical modifiers

    NASA Astrophysics Data System (ADS)

    Tsai, Suh-Jen Jane; Shiue, Chia-Chann; Chang, Shiow-Ing

    1997-07-01

    The analytical characteristics of copper in nickel-base alloys have been investigated with electrothermal atomic absorption spectrometry. Deuterium background correction was employed. The effects of various chemical modifiers on the analysis of copper were investigated. Organic modifiers, which included 2-(5-bromo-2-pyridylazo)-5-(diethylamino-phenol) (Br-PADAP), ammonium citrate, 1-(2-pyridylazo)-naphthol, 4-(2-pyridylazo)resorcinol, ethylenediaminetetraacetic acid and Triton X-100, were studied. The inorganic modifiers palladium nitrate, magnesium nitrate, aluminum chloride, ammonium dihydrogen phosphate, hydrogen peroxide and potassium nitrate were also applied in this work. In addition, zirconium hydroxide and ammonium hydroxide precipitation methods were studied. Interference effects were effectively reduced with the Br-PADAP modifier. Aqueous standards were used to construct the calibration curves. The detection limit was 1.9 pg. Standard reference materials of nickel-base alloys were used to evaluate the accuracy of the proposed method. The copper contents determined with the proposed method agreed closely with the certified values of the reference materials. The recoveries were within the range 90-100%, with relative standard deviations of less than 10%. Good precision was obtained.

  1. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    PubMed Central

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires huge amounts of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowdsourced information provided by a large number of users as they walk through the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
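
    A minimal sketch of the clustering step, assuming that "AP-Cluster" denotes affinity-propagation clustering of the RSSI fingerprints (our reading of the abbreviation, not confirmed by the abstract); the exemplars then serve as the representative fingerprints to be anchored to the door positions.

        import numpy as np
        from sklearn.cluster import AffinityPropagation

        def representative_fingerprints(rssi):
            """Cluster crowdsourced RSSI fingerprints; the exemplars become
            the representative fingerprints of the radio-map.
            rssi: (n_samples, n_access_points) signal strengths in dBm."""
            ap = AffinityPropagation(random_state=0).fit(rssi)
            return rssi[ap.cluster_centers_indices_], ap.labels_

        # toy data: two rooms with distinct signatures over three APs
        rng = np.random.default_rng(2)
        room_a = rng.normal([-40, -70, -80], 2.0, size=(50, 3))
        room_b = rng.normal([-75, -45, -60], 2.0, size=(50, 3))
        reps, labels = representative_fingerprints(np.vstack([room_a, room_b]))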

  2. Quality control for quantitative PCR based on amplification compatibility test.

    PubMed

    Tichopad, Ales; Bar, Tzachi; Pecen, Ladislav; Kitchen, Robert R; Kubista, Mikael; Pfaffl, Michael W

    2010-04-01

    Quantitative PCR (qPCR) is a routinely used method for the accurate quantification of nucleic acids. Yet it may generate erroneous results if the amplification process is obscured by inhibition or by generation of aberrant side-products such as primer dimers. Several methods have been established to control for pre-processing performance that rely on the introduction of a co-amplified reference sequence; however, there is currently no method that allows reliable control of the amplification process without directly modifying the sample mix. Herein we present a statistical approach based on multivariate analysis of the amplification response data generated in real time. The amplification trajectory in its most resolved and dynamic phase is fitted with a suitable model. Two parameters of this model, related to amplification efficiency, are then used for calculation of Z-score statistics. Each studied sample is compared to a predefined reference set of reactions, typically calibration reactions. A probabilistic decision for each individual Z-score is then used to identify the majority of inhibited reactions in our experiments. We compare this approach to univariate methods using only the sample-specific amplification efficiency as a reporter of compatibility, and demonstrate improved identification performance using the multivariate approach. Finally, we stress that the performance of the amplification compatibility test as a quality control procedure depends on the quality of the reference set. Copyright 2010 Elsevier Inc. All rights reserved.
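
    A minimal sketch of the decision step: Z-score each reaction's fitted efficiency-related parameters against a calibration (reference) set and flag reactions whose parameters are improbable. The curve fitting itself and the exact probabilistic rule of the paper are simplified here.

        import numpy as np
        from scipy import stats

        def flag_incompatible(params, ref_params, alpha=0.01):
            """Flag reactions whose fitted parameters deviate from the
            reference set. params: (n, 2); ref_params: (m, 2), where the two
            columns are the efficiency-related model parameters."""
            mu = ref_params.mean(axis=0)
            sd = ref_params.std(axis=0, ddof=1)
            z = (params - mu) / sd
            p = 2.0 * stats.norm.sf(np.abs(z))   # two-sided tail probability
            return (p < alpha).any(axis=1)       # True = likely inhibited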

  3. Pulmonary vessel segmentation utilizing curved planar reformation and optimal path finding (CROP) in computed tomographic pulmonary angiography (CTPA) for CAD applications

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Kuriakose, Jean W.; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Guo, Yanhui; Patel, Smita; Kazerooni, Ella A.

    2012-03-01

    Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or lung diseases, or crossing other vessels. In this study, we developed a new vessel refinement method utilizing the curved planar reformation (CPR) technique combined with an optimal path finding method (MHES-CROP). The MHES-segmented vessels, straightened in the CPR volume, were refined using adaptive gray-level thresholding, where the local threshold was obtained from a least-squares estimation of a spline curve fitted to the gray levels of the vessel along the straightened volume. An optimal path finding method based on Dijkstra's algorithm was finally used to trace the correct path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in CTPA interpretation and used as the reference standard. The results show that, for the 32 test VOIs, the average percentage volume error relative to the reference standard was improved from 32.9±10.2% using the MHES method to 9.9±7.9% using the MHES-CROP method. The accuracy of vessel segmentation was improved significantly (p<0.05). The intraclass correlation coefficient (ICC) of the segmented vessel volume between the automated segmentation and the reference standard was improved from 0.919 to 0.988. Quantitative comparison of the MHES method and the MHES-CROP method with the reference standard was also evaluated with Bland-Altman plots. This preliminary study indicates that the MHES-CROP method has the potential to improve PE detection.
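
    The path-tracing ingredient is standard Dijkstra over a cost image (for instance, an inverted vesselness measure along the straightened CPR volume). A self-contained 4-connected sketch of that generic step, not the authors' implementation:

        import heapq
        import numpy as np

        def dijkstra_path(cost, start, goal):
            """Minimum-cost 4-connected path through a 2-D cost image."""
            h, w = cost.shape
            dist = np.full((h, w), np.inf)
            prev = {}
            dist[start] = cost[start]
            heap = [(cost[start], start)]
            while heap:
                d, (r, c) = heapq.heappop(heap)
                if (r, c) == goal:
                    break
                if d > dist[r, c]:
                    continue                      # stale heap entry
                for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                    if 0 <= nr < h and 0 <= nc < w:
                        nd = d + cost[nr, nc]
                        if nd < dist[nr, nc]:
                            dist[nr, nc] = nd
                            prev[(nr, nc)] = (r, c)
                            heapq.heappush(heap, (nd, (nr, nc)))
            path, node = [goal], goal             # walk back goal -> start
            while node != start:
                node = prev[node]
                path.append(node)
            return path[::-1]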

  4. Missing portion sizes in FFQ--alternatives to use of standard portions.

    PubMed

    Køster-Rasmussen, Rasmus; Siersma, Volkert; Halldorsson, Thorhallur I; de Fine Olivarius, Niels; Henriksen, Jan E; Heitmann, Berit L

    2015-08-01

    Standard portions or substitution of missing portion sizes with medians may generate bias when quantifying the dietary intake from FFQs. The present study compared four different methods to include portion sizes in FFQs. We evaluated three stochastic methods for imputation of portion sizes based on information about anthropometry, sex, physical activity and age. Energy intakes computed with standard portion sizes, defined as sex-specific medians (median), or with portion sizes estimated with multinomial logistic regression (MLR), 'comparable categories' (Coca) or k-nearest neighbours (KNN) were compared with a reference based on self-reported portion sizes (quantified by a photographic food atlas embedded in the FFQ). The setting was the Danish Health Examination Survey 2007-2008. The study included 3728 adults with complete portion size data. Compared with the reference, the root-mean-square errors of the mean daily total energy intake (in kJ) computed with portion sizes estimated by the four methods were (men; women): median (1118; 1061), MLR (1060; 1051), Coca (1230; 1146), KNN (1281; 1181). The equivalent biases (mean errors) were (in kJ): median (579; 469), MLR (248; 178), Coca (234; 188), KNN (-340; 218). The MLR and Coca methods provided the best agreement with the reference. The stochastic methods allowed for estimation of meaningful portion sizes by conditioning on information about physiology, and they were suitable for multiple imputation. We propose to use MLR or Coca to substitute missing portion size values, or when portion sizes need to be included in FFQs without portion size data.
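
    A minimal sketch of the KNN variant, assuming portion sizes are coded as ordered categories and covariates such as sex, age, BMI and physical activity are available; for multiple imputation one would draw a random neighbour instead of the majority vote used here.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.preprocessing import StandardScaler

        def impute_portion_knn(X_obs, portion_cat_obs, X_missing, k=5):
            """Impute missing portion-size categories from covariates
            (columns such as [sex, age, BMI, activity]) via k-NN."""
            scaler = StandardScaler().fit(X_obs)
            knn = KNeighborsClassifier(n_neighbors=k)
            knn.fit(scaler.transform(X_obs), portion_cat_obs)
            return knn.predict(scaler.transform(X_missing))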

  5. Construction of measurement uncertainty profiles for quantitative analysis of genetically modified organisms based on interlaboratory validation data.

    PubMed

    Macarthur, Roy; Feinberg, Max; Bertheau, Yves

    2010-01-01

    A method is presented for estimating the size of the uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called an accuracy profile that was developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods collected through a number of interlaboratory studies followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with European Network of GM Laboratories and European Union (EU) Community Reference Laboratory validation criteria and can be used as a fitness-for-purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be < 0.45% to demonstrate compliance, and > 1.8% to demonstrate noncompliance, with a labeling threshold of 0.9%: with a roughly factor-of-two expanded uncertainty, a result below 0.45% stays below the 0.9% threshold even if doubled, and a result above 1.8% stays above it even if halved.

  6. Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV

    NASA Astrophysics Data System (ADS)

    Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.

    2011-04-01

    When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.

  7. A localization algorithm of adaptively determining the ROI of the reference circle in image

    NASA Astrophysics Data System (ADS)

    Xu, Zeen; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen

    2018-03-01

    Aiming at solving the problem of accurately positioning detection probes underwater, this paper proposes a method based on computer vision that can effectively solve this problem. The idea is as follows: first, because the shape of a heat tube in the image is similar to a circle, we find a circle in the image whose physical location is well known and set it as the reference circle. Second, we calculate the pixel offset between the reference circle and the probes in the picture and adjust the steering gear according to the offset. As a result, we can accurately measure the physical distance between the probes and the heat tubes under test, and thus know the precise location of the probes underwater. However, how to choose the reference circle in the image is a difficult problem. In this paper, we propose an algorithm that can adaptively determine the area of the reference circle; in this area there will be only one circle, which is the reference circle. The test results show that the accuracy of extracting the reference circle from the whole picture without using the ROI (region of interest) of the reference circle is only 58.76%, whereas that of the proposed algorithm is 95.88%. The experimental results indicate that the proposed algorithm can effectively improve the efficiency of tube detection.
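
    A minimal sketch of the detection-and-offset step using OpenCV's Hough circle transform, assuming the adaptively determined ROI is already given (the paper's ROI-determination logic is not reproduced here, and the Hough parameters are illustrative):

        import cv2
        import numpy as np

        def reference_circle_offset(gray, roi, probe_xy):
            """Detect the reference circle inside the given ROI and return the
            pixel offset from the probe position to the circle centre.
            roi: (x, y, w, h), assumed to contain exactly one circle."""
            x, y, w, h = roi
            patch = cv2.medianBlur(gray[y:y + h, x:x + w], 5)
            circles = cv2.HoughCircles(patch, cv2.HOUGH_GRADIENT, dp=1,
                                       minDist=w, param1=100, param2=30,
                                       minRadius=5, maxRadius=0)
            assert circles is not None, "no circle found in the ROI"
            cx, cy, _r = circles[0, 0]
            return x + cx - probe_xy[0], y + cy - probe_xy[1]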

  8. Microwave-assisted wet digestion with H2O2 at high temperature and pressure using single reaction chamber for elemental determination in milk powder by ICP-OES and ICP-MS.

    PubMed

    Muller, Edson I; Souza, Juliana P; Muller, Cristiano C; Muller, Aline L H; Mello, Paola A; Bizzi, Cezar A

    2016-08-15

    In this work, a green digestion method that used only H2O2 as the oxidant, at high temperature and pressure in the single reaction chamber system (SRC-UltraWave™), was applied for subsequent elemental determination by inductively coupled plasma-based techniques. Milk powder was chosen to demonstrate the feasibility and advantages of the proposed method. Sample masses up to 500 mg were efficiently digested, and the determination of Ca, Fe, K, Mg and Na was performed by inductively coupled plasma optical emission spectrometry (ICP-OES), while trace elements (B, Ba, Cd, Cu, Mn, Mo, Pb, Sr and Zn) were determined by inductively coupled plasma mass spectrometry (ICP-MS). Residual carbon (RC) lower than 918 mg L⁻¹ of C was obtained for the digests, which contributed to minimizing interferences in determination by ICP-OES and ICP-MS. Accuracy was evaluated using certified reference materials NIST 1549 (non-fat milk powder certified reference material) and NIST 8435 (whole milk powder reference material). The results obtained by the proposed method were in agreement with the certified reference values (t-test, 95% confidence level). In addition, no significant difference was observed between results obtained by the proposed method and conventional wet digestion using concentrated HNO3. As digestion was performed without using any kind of acid, the characteristics of the final digests were in agreement with green chemistry principles when compared to digests obtained using the conventional wet digestion method with concentrated HNO3. Additionally, the H2O2 digests were more suitable for subsequent analysis by ICP-based techniques because water is the main product of the oxidation of the organic matrix. The proposed method was suitable for quality control of major components and trace elements present in milk powder, in consonance with green sample preparation. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Examination of the semi-automatic calculation technique of vegetation cover rate by digital camera images.

    NASA Astrophysics Data System (ADS)

    Takemine, S.; Rikimaru, A.; Takahashi, K.

    Rice is one of the staple foods in the world. High-quality rice production requires periodically collecting rice growth data to control the growth of rice. The height of the plant, the number of stems, and the color of the leaves are well-known parameters that indicate rice growth, and a rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting these parameters by field survey needs a lot of labor and time. Recently, a laborsaving method for rice growth diagnosis was proposed which is based on the vegetation cover rate of rice. The vegetation cover rate of rice is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of rice plant areas in the image was done by automatic binarization processing. However, for a vegetation cover rate calculation method that depends on automatic binarization alone, there is a possibility that the vegetation cover rate decreases even as the rice grows. In this paper, a calculation method of vegetation cover rate is proposed which is based on automatic binarization processing and refers to growth hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and by the proposed method, respectively, and the vegetation cover rate of both methods was compared with a reference value obtained by visual interpretation. As a result of the comparison, the accuracy of discriminating rice plant areas was increased by the proposed method.

  10. Advances in spectroscopic methods for quantifying soil carbon

    USGS Publications Warehouse

    Liebig, Mark; Franzluebbers, Alan J.; Follett, Ronald F.; Hively, W. Dean; Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco

    2012-01-01

    The gold standard for soil C determination is combustion. However, this method requires expensive consumables, is limited to the determination of total carbon, and is limited in the number of samples that can be processed (~100/d). With increased interest in soil C sequestration, faster methods are needed; thus, there is interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared ranges using either proximal or remote sensing. These methods have the ability to analyze more samples (2 to 3X/d) or huge areas (imagery) and to determine multiple analytes simultaneously, but they require calibrations relating spectral and reference data and have specific problems; for example, remote sensing is capable of scanning entire watersheds, thus reducing the sampling needed, but is limited to the surface layer of tilled soils and by the difficulty of obtaining proper calibration reference values. The objective of this discussion is to present the state of spectroscopic methods for soil C determination.

  11. How Different EEG References Influence Sensor Level Functional Connectivity Graphs

    PubMed Central

    Huang, Yunzhi; Zhang, Junpeng; Cui, Yuan; Yang, Gang; He, Ling; Liu, Qi; Yin, Guangfu

    2017-01-01

    Highlights: Hamming distance is applied to distinguish differences between functional connectivity networks. The orientations of sources are shown to influence significantly the scalp functional connectivity graph (FCG) obtained with different references. REST, the reference electrode standardization technique, is shown to have an overall stable and excellent performance in variable situations. The choice of an electroencephalograph (EEG) reference is a practical issue for the study of brain functional connectivity. To study how the EEG reference influences functional connectivity estimation (FCE), this study compares the differences in FCE resulting from different references such as REST, average reference (AR), linked mastoids (LM), and left mastoid reference (LR). Simulations involve two parts. One is based on 300 dipolar pairs, which are located on the superficial cortex with a radial source direction. The other part is based on 20 dipolar pairs; in each pair, the dipoles have various orientation combinations. The relative error (RE) and Hamming distance (HD) between the functional connectivity matrices of ideal recordings and those of recordings obtained with different references are the metrics used to compare the scalp FCGs derived from those two kinds of recordings; lower RE and HD values imply more similarity between the two FCGs. Using the ideal recording (IR) as a standard, the results show that AR, LM and LR perform well only in specific conditions: AR performs stably when there is no upward component in the sources' orientations, LR achieves desirable results when the source locations are away from the left ear, and LM differs little from IR when the distribution of source locations is symmetric along the line linking the two ears. However, REST not only achieves excellent performance for superficial and radial dipolar sources, but also achieves a stable and robust performance with variable source locations and orientations. Benefitting from its stable and robust performance vs. other reference methods, REST might best recover the real FCG of EEG. Thus, a REST-based FCG may be a good candidate when comparing EEG FCGs based on different references from different labs. PMID:28725175
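
    The two comparison metrics are straightforward to compute from connectivity matrices; a minimal sketch (the binarization threshold used for the Hamming distance is an assumption on our part):

        import numpy as np

        def relative_error(fcg_ref, fcg_test):
            """RE between two functional connectivity matrices."""
            return np.linalg.norm(fcg_test - fcg_ref) / np.linalg.norm(fcg_ref)

        def hamming_distance(fcg_ref, fcg_test, threshold=0.5):
            """Binarize both FCGs and count differing (undirected) edges."""
            a, b = fcg_ref > threshold, fcg_test > threshold
            iu = np.triu_indices_from(a, k=1)   # upper triangle only
            return int(np.sum(a[iu] != b[iu]))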

  12. Kernel reconstruction methods for Doppler broadening — Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    DOE PAGES

    Ducru, Pablo; Josey, Colin; Dibert, Karia; ...

    2017-01-25

    This paper establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross-section-independent fashion by considering the kernels of the different operators that convert cross-section-related quantities from a temperature T0 to a higher temperature T, namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus performed here by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (Tj). The choice of the L² metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin, Tmax]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them upon isotope ²³⁸U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the ²³⁸U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
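
    The L² step reduces to solving a Gram system for the mixing coefficients; the same coefficients then interpolate any broadened quantity. A minimal sketch with Gaussian kernels standing in for the actual Doppler kernels (a toy substitution, purely for illustration):

        import numpy as np

        def l2_interp_coeffs(kernels_ref, kernel_target, x):
            """Optimal L2 coefficients a_j so that sum_j a_j * K_{Tj}
            best approximates K_T: solve the Gram system G a = b."""
            w = np.gradient(x)                      # simple quadrature weights
            G = (kernels_ref * w) @ kernels_ref.T   # G_ij = <K_i, K_j>
            b = (kernels_ref * w) @ kernel_target   # b_i  = <K_i, K_T>
            return np.linalg.solve(G, b)

        # toy stand-in: Gaussian kernels of increasing width play the role
        # of Doppler kernels at increasing temperature
        x = np.linspace(-5.0, 5.0, 401)
        gauss = lambda s: np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
        refs = np.array([gauss(s) for s in (0.8, 1.0, 1.2)])
        a = l2_interp_coeffs(refs, gauss(1.05), x)
        # the same a then interpolates any broadened quantity:
        # sigma(T) ~ a @ sigma_at_reference_temperatures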

  13. Verification of chemistry reference ranges using a simple method in sub-Saharan Africa

    PubMed Central

    Taylor, Douglas; Mandala, Justin; Nanda, Kavita; Van Campenhout, Christel; Agingu, Walter; Madurai, Lorna; Barsch, Eva-Maria; Deese, Jennifer; Van Damme, Lut; Crucitti, Tania

    2016-01-01

    Background Chemistry safety assessments are interpreted by using chemistry reference ranges (CRRs). Verification of CRRs is time consuming and often requires a statistical background. Objectives We report on an easy and cost-saving method to verify CRRs. Methods Using a former method introduced by Sigma Diagnostics, three study sites in sub-Saharan Africa, Bondo, Kenya, and Pretoria and Bloemfontein, South Africa, verified the CRRs for hepatic and renal biochemistry assays performed during a clinical trial of HIV antiretroviral pre-exposure prophylaxis. The aspartate aminotransferase/alanine aminotransferase, creatinine and phosphorus results from 10 clinically-healthy participants at the screening visit were used. In the event the CRRs did not pass the verification, new CRRs had to be calculated based on 40 clinically-healthy participants. Results Within a few weeks, the study sites accomplished verification of the CRRs without additional costs. The aspartate aminotransferase reference ranges for the Bondo, Kenya site and the alanine aminotransferase reference ranges for the Pretoria, South Africa site required adjustment. The phosphorus CRR passed verification and the creatinine CRR required adjustment at every site. The newly-established CRR intervals were narrower than the CRRs used previously at these study sites due to decreases in the upper limits of the reference ranges. As a result, more toxicities were detected. Conclusion To ensure the safety of clinical trial participants, verification of CRRs should be standard practice in clinical trials conducted in settings where the CRR has not been validated for the local population. This verification method is simple, inexpensive, and can be performed by any medical laboratory. PMID:28879112

  14. Validating internal controls for quantitative plant gene expression studies

    PubMed Central

    Brunner, Amy M; Yakovlev, Igor A; Strauss, Steven H

    2004-01-01

    Background Real-time reverse transcription PCR (RT-PCR) has greatly improved the ease and sensitivity of quantitative gene expression studies. However, accurate measurement of gene expression with this method relies on the choice of a valid reference for data normalization. Studies rarely verify that gene expression levels for reference genes are adequately consistent among the samples used, nor compare alternative genes to assess which are most reliable for the experimental conditions analyzed. Results Using real-time RT-PCR to study the expression of 10 poplar (genus Populus) housekeeping genes, we demonstrate a simple method for determining the degree of stability of gene expression over a set of experimental conditions. Based on a traditional method for analyzing the stability of varieties in plant breeding, it defines measures of gene expression stability from analysis of variance (ANOVA) and linear regression. We found that the potential internal control genes differed widely in their expression stability over the different tissues, developmental stages and environmental conditions studied. Conclusion Our results support that quantitative comparisons of candidate reference genes are an important part of real-time RT-PCR studies that seek to precisely evaluate variation in gene expression. The method we demonstrated facilitates statistical and graphical evaluation of gene expression stability. Selection of the best reference gene for a given set of experimental conditions should enable detection of biologically significant changes in gene expression that are too small to be revealed by less precise methods, or when highly variable reference genes are unknowingly used in real-time RT-PCR experiments. PMID:15317655
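
    A simple screen in the spirit of the paper's ANOVA-based stability measures (our simplified construction, not the authors' exact formulation): per candidate gene, compute the overall spread and a one-way ANOVA across tissues or conditions; good reference genes show small spread and no significant group effect.

        import numpy as np
        from scipy import stats

        def reference_gene_screen(expr, groups):
            """expr: (n_genes, n_samples) log-scale expression values;
            groups: (n_samples,) tissue/condition labels.
            Returns per-gene (overall SD, one-way ANOVA p across groups):
            stable reference genes have small SD and large p."""
            labels = np.unique(groups)
            results = []
            for g in expr:
                f, p = stats.f_oneway(*[g[groups == k] for k in labels])
                results.append((g.std(ddof=1), p))
            return results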

  15. Radio frequency electromagnetic field compliance assessment of multi-band and MIMO equipped radio base stations.

    PubMed

    Thors, Björn; Thielens, Arno; Fridén, Jonas; Colombi, Davide; Törnevik, Christer; Vermeeren, Günter; Martens, Luc; Joseph, Wout

    2014-05-01

    In this paper, different methods for practical numerical radio frequency exposure compliance assessments of radio base station products were investigated. Both multi-band base station antennas and antennas designed for multiple input multiple output (MIMO) transmission schemes were considered. For the multi-band case, various standardized assessment methods were evaluated in terms of resulting compliance distance with respect to the reference levels and basic restrictions of the International Commission on Non-Ionizing Radiation Protection. Both single frequency and multiple frequency (cumulative) compliance distances were determined using numerical simulations for a mobile communication base station antenna transmitting in four frequency bands between 800 and 2600 MHz. The assessments were conducted in terms of root-mean-squared electromagnetic fields, whole-body averaged specific absorption rate (SAR) and peak 10 g averaged SAR. In general, assessments based on peak field strengths were found to be less computationally intensive, but lead to larger compliance distances than spatial averaging of electromagnetic fields used in combination with localized SAR assessments. For adult exposure, the results indicated that even shorter compliance distances were obtained by using assessments based on localized and whole-body SAR. Numerical simulations, using base station products employing MIMO transmission schemes, were performed as well and were in agreement with reference measurements. The applicability of various field combination methods for correlated exposure was investigated, and best estimate methods were proposed. Our results showed that field combining methods generally considered as conservative could be used to efficiently assess compliance boundary dimensions of single- and dual-polarized multicolumn base station antennas with only minor increases in compliance distances. © 2014 Wiley Periodicals, Inc.

  16. An atlas-based multimodal registration method for 2D images with discrepancy structures.

    PubMed

    Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng

    2018-06-04

    An atlas-based multimodal registration method for 2-dimensional images with discrepancy structures was proposed in this paper. An atlas was utilized to complement the discrepancy structure information in multimodal medical images. The scheme includes three steps: floating image to atlas registration, atlas to reference image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. We measured the registration performance by the sum of squared intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures.

  17. A Technique of Two-Stage Clustering Applied to Environmental and Civil Engineering and Related Methods of Citation Analysis.

    ERIC Educational Resources Information Center

    Miyamoto, S.; Nakayama, K.

    1983-01-01

    A method of two-stage clustering of literature based on citation frequency is applied to 5,065 articles from 57 journals in environmental and civil engineering. Results of related methods of citation analysis (hierarchical graph, clustering of journals, multidimensional scaling) applied to same set of articles are compared. Ten references are…

  18. An inductance Fourier decomposition-based current-hysteresis control strategy for switched reluctance motors

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Qi, Ji; Jia, Meng

    2017-05-01

    Switched reluctance machines (SRMs) have attracted extensive attention due to their inherent advantages, including a simple and robust structure, low cost, excellent fault tolerance and a wide speed range. However, one of the bottlenecks limiting SRMs in further applications is their unfavorable torque ripple, and consequently noise and vibration, due to the unique doubly-salient structure and the pulse-current-based power supply method. In this paper, an inductance Fourier decomposition-based current-hysteresis-control (IFD-CHC) strategy is proposed to reduce the torque ripple of SRMs. After obtaining a nonlinear inductance-current-position model based on Fourier decomposition, reference currents can be calculated from the reference torque and the derived inductance model. Both simulations and experimental results confirm the effectiveness of the proposed strategy.
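
    Under the linear (unsaturated) SRM model, torque is T = 0.5 i² dL/dθ, so a Fourier inductance model yields the reference current analytically. A minimal sketch ignoring saturation and the current dependence of L (both of which the paper's nonlinear model handles); the machine numbers in the example are illustrative.

        import numpy as np

        def i_ref_from_torque(T_ref, theta, L_coeffs, Nr=8):
            """Reference phase current from demanded torque with a Fourier
            inductance model L(theta) = L0 + sum_k Lk*cos(k*Nr*theta) and
            the unsaturated torque relation T = 0.5 * i**2 * dL/dtheta."""
            k = np.arange(1, len(L_coeffs))
            dL = -np.sum(L_coeffs[1:] * k * Nr * np.sin(k * Nr * theta))
            if T_ref * dL <= 0.0:
                return 0.0   # this phase cannot produce the demanded torque here
            return np.sqrt(2.0 * T_ref / dL)

        # e.g. L(theta) = 0.05 + 0.02*cos(8*theta) henry, 8-rotor-pole machine
        print(i_ref_from_torque(T_ref=1.5, theta=0.5, L_coeffs=np.array([0.05, 0.02])))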

  19. A natural-color mapping for single-band night-time image based on FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yilun; Qian, Yunsheng

    2018-01-01

    A natural-color mapping method for single-band night-time images based on FPGA can transfer the color of a reference image to the single-band night-time image, which is consistent with human visual habits and can help observers identify targets. This paper introduces the processing of the natural-color mapping algorithm based on FPGA. Firstly, the image is transformed based on histogram equalization, and the intensity features and standard deviation features of the reference image are stored in SRAM. Then, the intensity features and standard deviation features of the real-time digital images are calculated by the FPGA. At last, the FPGA completes the color mapping by matching pixels between images using the features in the luminance channel.
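
    A software sketch of the statistics-matching idea: match the night image to the reference luminance statistics, then colour each grey level with the mean chrominance of similar reference pixels. This illustrates the mapping only, not the FPGA pipeline; 8-bit images are assumed.

        import numpy as np

        def natural_color_mapping(night, ref_rgb, bins=256):
            """Colour a single-band night image from a reference RGB image:
            match the night image to the reference luminance statistics,
            then look up the mean reference colour of each grey level."""
            ref_lum = ref_rgb.astype(float).mean(axis=2)
            z = (night - night.mean()) / (night.std() + 1e-9)
            matched = z * ref_lum.std() + ref_lum.mean()
            lvl = (np.clip(ref_lum, 0, 255).astype(int) * bins) // 256
            lut = np.zeros((bins, 3))
            for b in range(bins):                  # grey level -> mean RGB
                mask = lvl == b
                lut[b] = ref_rgb[mask].mean(axis=0) if mask.any() else lut[b - 1]
            idx = (np.clip(matched, 0, 255).astype(int) * bins) // 256
            return lut[idx].astype(np.uint8)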

  20. Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing

    NASA Astrophysics Data System (ADS)

    Ou, Meiying; Li, Shihua; Wang, Chaoli

    2013-12-01

    This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling, and the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interactions. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. Rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.

  1. DNA-COMPACT: DNA COMpression Based on a Pattern-Aware Contextual Modeling Technique

    PubMed Central

    Li, Pinghao; Wang, Shuang; Kim, Jihoon; Xiong, Hongkai; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2013-01-01

    Genome data are becoming increasingly important for modern medicine. As the rate of increase in DNA sequencing outstrips the rate of increase in disk storage capacity, the storage and transfer of large genome data sets are becoming important concerns for biomedical researchers. We propose a two-pass lossless genome compression algorithm, which highlights the synthesis of complementary contextual models, to improve compression performance. The proposed framework can handle genome compression with and without reference sequences, and demonstrated performance advantages over the best existing algorithms. The method for reference-free compression led to bit rates of 1.720 and 1.838 bits per base for bacteria and yeast, which were approximately 3.7% and 2.6% better than the state-of-the-art algorithms. Regarding performance with a reference, we tested on the first Korean personal genome sequence data set, and our proposed method demonstrated a 189-fold compression rate, reducing the raw file size from 2986.8 MB to 15.8 MB at a decompression cost comparable to existing algorithms. DNA-COMPACT is freely available at https://sourceforge.net/projects/dnacompact/ for research purposes. PMID:24282536
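
    To give a feel for contextual modeling (this is a generic order-k context model, not DNA-COMPACT's pattern-aware scheme), the sketch below estimates the bits per base an ideal coder would spend when each base is predicted from its k predecessors.

```python
import math
from collections import defaultdict

# Minimal sketch: empirical conditional entropy of a DNA string under an
# order-k context model, i.e. the bits/base an ideal arithmetic coder
# driven by these context statistics would approach.
def context_model_bits_per_base(seq, k=2):
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(k, len(seq)):
        counts[seq[i - k:i]][seq[i]] += 1
    bits, n = 0.0, 0
    for dist in counts.values():
        total = sum(dist.values())
        for c in dist.values():
            bits += -c * math.log2(c / total)
            n += c
    return bits / n

seq = "ATGCGATATATCGCGATATGCGCGATATATGCGC" * 50   # toy repetitive genome
print(round(context_model_bits_per_base(seq, k=3), 3), "bits/base")
```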

  2. Robust iterative closest point algorithm based on global reference point for rotation invariant registration.

    PubMed

    Du, Shaoyi; Xu, Yiting; Wan, Teng; Hu, Huaizhong; Zhang, Sirui; Xu, Guanglin; Zhang, Xuetao

    2017-01-01

    The iterative closest point (ICP) algorithm is efficient and accurate for rigid registration, but it needs good initial parameters and easily fails when the rotation angle between the two point sets is large. To deal with this problem, a new objective function is proposed by introducing a rotation-invariant feature based on the Euclidean distance between each point and a global reference point, where the global reference point is itself rotation invariant. This optimization problem is then solved by a variant of the ICP algorithm, which is an iterative method. Firstly, accurate correspondences are established by using the weighted rotation-invariant feature distance together with the position distance. Secondly, the rigid transformation is solved by the singular value decomposition method. Thirdly, the weight is adjusted to control the relative contribution of the positions and features. Finally, the new algorithm accomplishes registration in a coarse-to-fine way regardless of the initial rotation angle, and is demonstrated to converge monotonically. The experimental results validate that the proposed algorithm is more accurate and robust than the original ICP algorithm.
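
    A minimal sketch of the correspondence step is given below, assuming the centroid as the global reference point (the paper only requires that the point be rotation invariant); the weight w and the test data are illustrative.

```python
import numpy as np

# Distance from each point to a global reference point (here the centroid)
# is unchanged by rotation, so it can be mixed into the correspondence search.
def rotation_invariant_feature(points):
    ref = points.mean(axis=0)                  # global reference point
    return np.linalg.norm(points - ref, axis=1)

def weighted_correspondences(src, dst, w):
    # Cost = w * |feature difference| + (1 - w) * positional distance.
    f_src = rotation_invariant_feature(src)
    f_dst = rotation_invariant_feature(dst)
    feat = np.abs(f_src[:, None] - f_dst[None, :])
    pos = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    return np.argmin(w * feat + (1.0 - w) * pos, axis=1)

rng = np.random.default_rng(1)
src = rng.random((100, 3))
R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)   # 90-degree rotation
dst = src @ R.T
idx = weighted_correspondences(src, dst, w=0.9)
print((idx == np.arange(100)).mean())   # fraction matched to true counterpart
```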

  3. Robust iterative closest point algorithm based on global reference point for rotation invariant registration

    PubMed Central

    Du, Shaoyi; Xu, Yiting; Wan, Teng; Zhang, Sirui; Xu, Guanglin; Zhang, Xuetao

    2017-01-01

    The iterative closest point (ICP) algorithm is efficient and accurate for rigid registration, but it needs good initial parameters and easily fails when the rotation angle between the two point sets is large. To deal with this problem, a new objective function is proposed by introducing a rotation-invariant feature based on the Euclidean distance between each point and a global reference point, where the global reference point is itself rotation invariant. This optimization problem is then solved by a variant of the ICP algorithm, which is an iterative method. Firstly, accurate correspondences are established by using the weighted rotation-invariant feature distance together with the position distance. Secondly, the rigid transformation is solved by the singular value decomposition method. Thirdly, the weight is adjusted to control the relative contribution of the positions and features. Finally, the new algorithm accomplishes registration in a coarse-to-fine way regardless of the initial rotation angle, and is demonstrated to converge monotonically. The experimental results validate that the proposed algorithm is more accurate and robust than the original ICP algorithm. PMID:29176780

  4. Development and Validation of a Rapid 13C6-Glucose Isotope Dilution UPLC-MRM Mass Spectrometry Method for Use in Determining System Accuracy and Performance of Blood Glucose Monitoring Devices

    PubMed Central

    Matsunami, Risë K.; Angelides, Kimon; Engler, David A.

    2015-01-01

    Background: There is currently considerable discussion about the accuracy of blood glucose concentrations determined by personal blood glucose monitoring systems (BGMS). To date, the FDA has allowed new BGMS to demonstrate accuracy in reference to other glucose measurement systems that use the same or similar enzymatic-based methods to determine glucose concentration. These types of reference measurement procedures are only comparative in nature and are subject to the same potential sources of error in measurement and system perturbations as the device under evaluation. It would be ideal to have a completely orthogonal primary method that could serve as a true standard reference measurement procedure for establishing the accuracy of new BGMS. Methods: An isotope-dilution liquid chromatography/mass spectrometry (ID-UPLC-MRM) assay was developed using 13C6-glucose as a stable isotope analogue to specifically measure glucose concentration in human plasma, and validated for use against NIST standard reference materials, and against fresh isolates of whole blood and plasma into which exogenous glucose had been spiked. Assay performance was quantified to NIST-traceable dry weight measures for both glucose and 13C6-glucose. Results: The newly developed assay method was shown to be rapid, highly specific, sensitive, accurate, and precise for measuring plasma glucose levels. The assay displayed sufficient dynamic range and linearity to measure across the range of both normal and diabetic blood glucose levels. Assay performance was measured to within the same uncertainty levels (<1%) as the NIST definitive method for glucose measurement in human serum. Conclusions: The newly developed ID UPLC-MRM assay can serve as a validated reference measurement procedure to which new BGMS can be assessed for glucose measurement performance. PMID:25986627
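
    The isotope-dilution principle behind the assay can be sketched as follows; the calibration values, spike concentration, and single-point read-off are made-up illustrations, not the validated procedure.

```python
import numpy as np

# Minimal sketch of isotope dilution quantitation: a known amount of
# 13C6-glucose internal standard (IS) is spiked into each sample, and the
# analyte concentration is read off a calibration of peak-area ratio
# versus concentration ratio.
ratio_cal = np.array([0.25, 0.5, 1.0, 2.0, 4.0])       # conc(analyte)/conc(IS)
area_ratio = np.array([0.26, 0.52, 0.98, 2.05, 3.96])  # measured area ratios

slope, intercept = np.polyfit(ratio_cal, area_ratio, 1)

def glucose_conc(sample_area_ratio, spike_conc_mg_dl):
    conc_ratio = (sample_area_ratio - intercept) / slope
    return conc_ratio * spike_conc_mg_dl

print(round(glucose_conc(1.10, spike_conc_mg_dl=100.0), 1), "mg/dL")
```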

  5. Recent improvements of the French liquid micro-flow reference facility

    NASA Astrophysics Data System (ADS)

    Florestan, Ogheard; Sandy, Margot; Julien, Savary

    2018-02-01

    In line with its mission as the national reference laboratory, LNE-CETIAT completed in 2012 the construction and accreditation of a modern and innovative calibration laboratory based on the gravimetric method. The measurement capabilities cover liquid flow rates from 10 kg · h⁻¹ down to 1 g · h⁻¹ with expanded relative uncertainties from 0.1% to 0.6% (k  =  2). Since 2012, several theoretical and experimental studies have provided better knowledge of and control over uncertainty sources and have decreased calibration time. When dealing with liquid micro-flow using a reference method such as the gravimetric method, several difficulties have to be overcome. The main improvements described in this paper relate to the enhancement of the evaporation trap system, the merging of the four dedicated measurement lines into one, and the implementation of a dynamic ‘flying’ gravimetric method for the calculation of the reference flow rate. The evaporation-avoiding system has been replaced by an oil layer in order to remove the possibility of condensation of water on both the weighed vessel and the immersed capillary. The article describes the experimental method used to quantify the effect of surface tension of water/oil/air interfaces on the weighed mass. The traditional static gravimetric method has been upgraded to a dynamic ‘flying’ gravimetric method; the article presents the newly implemented method, its validation, and its advantages compared to the static method. The four dedicated weighing devices, distributed over four sub-ranges of flow rate, have been merged, leading to the use of only one weighing scale with the same uncertainties on the reference flow rate. The article discusses the new uncertainty budget over the full flow rate range. Finally, it discusses the improvements still under development and the general prospects of liquid micro-flow metrology.
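
    In its simplest form, the dynamic ‘flying’ principle takes the reference flow as the slope of the balance reading versus time while flow is running, rather than a static before/after mass difference. A minimal sketch with synthetic balance data (an assumption, not the LNE-CETIAT implementation):

```python
import numpy as np

# Minimal sketch: reference mass flow from a linear fit of balance readings
# taken continuously during flow ("flying" method).
def flying_gravimetric_flow(times_s, masses_g):
    slope_g_per_s, _ = np.polyfit(times_s, masses_g, 1)
    return slope_g_per_s * 3600.0              # convert to g/h

t = np.arange(0, 600, 10, dtype=float)         # 10 min of balance readings
rng = np.random.default_rng(4)
m = 5.0 + (2.5 / 3600.0) * t + rng.normal(0, 1e-4, t.size)  # ~2.5 g/h + noise
print(round(flying_gravimetric_flow(t, m), 3), "g/h")
```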

  6. Synthesized view comparison method for no-reference 3D image quality assessment

    NASA Astrophysics Data System (ADS)

    Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun

    2018-04-01

    We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric, named Synthesized View Comparison (SVC), is designed for real-time quality monitoring at the receiver side of a 3D-TV system. The metric utilizes intermediate virtual views warped from the left and right views by a depth-image-based rendering (DIBR) algorithm, and compares the difference between the virtual views rendered from the different cameras using Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
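
    The core comparison can be sketched in a few lines, assuming the two DIBR-warped virtual views are already available (the warping itself, and the mapping from SSIM score to a quality grade, are omitted); scikit-image provides the SSIM computation.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Minimal sketch: score the agreement of two virtual views synthesized at
# the same camera position, one from the left and one from the right view.
def svc_score(virtual_from_left, virtual_from_right):
    return structural_similarity(
        virtual_from_left, virtual_from_right,
        data_range=virtual_from_left.max() - virtual_from_left.min())

rng = np.random.default_rng(2)
base = rng.random((128, 128))
left_warp = base + rng.normal(0, 0.01, base.shape)    # stand-ins for two
right_warp = base + rng.normal(0, 0.01, base.shape)   # DIBR-rendered views
print(round(svc_score(left_warp, right_warp), 3))
```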

  7. The dimensional salience solution to the expectancy-value muddle: an extension.

    PubMed

    Newton, Joshua D; Newton, Fiona J; Ewing, Michael T

    2014-01-01

    The theory of reasoned action (TRA) specifies a set of expectancy-value, belief-based frameworks that underpin attitude (behavioural beliefs × outcome evaluations) and subjective norm (normative beliefs × motivation to comply). Unfortunately, the most common method for analysing these frameworks generates statistically uninterpretable findings, resulting in what has been termed the 'expectancy-value muddle'. Recently, however, a dimensional salience approach was found to resolve this muddle for the belief-based framework underpinning attitude. An online survey of 262 participants was therefore conducted to determine whether the dimensional salience approach could also be applied to the belief-based framework underpinning subjective norm. Results revealed that motivations to comply were greater for salient, as opposed to non-salient, social referents. The belief-based framework underpinning subjective norm was therefore represented by evaluating normative belief ratings for salient social referents. This modified framework was found to predict subjective norm, although predictions were greater when participants were forced to select five salient social referents rather than being free to select any number of social referents. These findings validate the use of the dimensional salience approach for examining the belief-based frameworks underpinning subjective norm. As such, this approach provides a complete solution to addressing the expectancy-value muddle in the TRA.

  8. Resonance-Based Sparse Signal Decomposition and its Application in Mechanical Fault Diagnosis: A Review.

    PubMed

    Huang, Wentao; Sun, Hongjian; Wang, Weijie

    2017-06-03

    Mechanical equipment is the heart of industry. For this reason, mechanical fault diagnosis has drawn considerable attention. Given the rich information hidden in fault vibration signals, the processing and analysis of vibration signals have become a crucial research issue in the field of mechanical fault diagnosis. Based on the theory of sparse decomposition, Selesnick proposed a novel nonlinear signal processing method: resonance-based sparse signal decomposition (RSSD). Since being put forward, RSSD has become widely recognized, and many RSSD-based methods have been developed to guide mechanical fault diagnosis. This paper attempts to summarize and review the theoretical developments and application advances of RSSD in mechanical fault diagnosis, and to provide a comprehensive reference for those interested in RSSD and mechanical fault diagnosis. Following a brief introduction to RSSD's theoretical foundation, applications of RSSD in mechanical fault diagnosis are categorized by optimization direction into five aspects: original RSSD, parameter-optimized RSSD, subband-optimized RSSD, integrated-optimized RSSD, and RSSD combined with other methods. On this basis, outstanding issues in current RSSD research are also pointed out, together with corresponding instructional solutions. We hope this review will provide an insightful reference for researchers and readers who are interested in RSSD and mechanical fault diagnosis.

  9. Resonance-Based Sparse Signal Decomposition and Its Application in Mechanical Fault Diagnosis: A Review

    PubMed Central

    Huang, Wentao; Sun, Hongjian; Wang, Weijie

    2017-01-01

    Mechanical equipment is the heart of industry. For this reason, mechanical fault diagnosis has drawn considerable attention. Given the rich information hidden in fault vibration signals, the processing and analysis of vibration signals have become a crucial research issue in the field of mechanical fault diagnosis. Based on the theory of sparse decomposition, Selesnick proposed a novel nonlinear signal processing method: resonance-based sparse signal decomposition (RSSD). Since being put forward, RSSD has become widely recognized, and many RSSD-based methods have been developed to guide mechanical fault diagnosis. This paper attempts to summarize and review the theoretical developments and application advances of RSSD in mechanical fault diagnosis, and to provide a comprehensive reference for those interested in RSSD and mechanical fault diagnosis. Following a brief introduction to RSSD’s theoretical foundation, applications of RSSD in mechanical fault diagnosis are categorized by optimization direction into five aspects: original RSSD, parameter-optimized RSSD, subband-optimized RSSD, integrated-optimized RSSD, and RSSD combined with other methods. On this basis, outstanding issues in current RSSD research are also pointed out, together with corresponding instructional solutions. We hope this review will provide an insightful reference for researchers and readers who are interested in RSSD and mechanical fault diagnosis. PMID:28587198

  10. Current control of PMSM based on maximum torque control reference frame

    NASA Astrophysics Data System (ADS)

    Ohnuma, Takumi

    2017-07-01

    This study presents a new method of current control for PMSMs (permanent magnet synchronous motors) based on a maximum torque control reference frame, which is suitable for high-performance control of PMSMs. As environmental and energy concerns grow, PMSMs, a class of AC motors, are becoming popular because of their high efficiency and high torque density in various applications, such as electric vehicles, trains, industrial machines, and home appliances. To use PMSMs efficiently, proper current control is necessary. In general, a rotational coordinate system synchronized with the rotor is used for the current control of PMSMs. In the rotating reference frame, current control is easier because the currents appear as direct (DC) quantities in the controller. On the other hand, the torque characteristics of PMSMs are nonlinear and complex, even though PMSMs are efficient and power-dense, so a complicated control system is required to capture the relation between torque and current even when the rotating reference frame is adopted. The maximum torque control reference frame provides a simpler way to control the currents efficiently while taking the torque characteristics of the PMSMs into consideration.
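
    The rotating-frame idea the passage builds on can be illustrated with the standard Park transform (textbook form, not the proposed maximum torque control frame): balanced sinusoidal phase currents become constant d-q quantities.

```python
import numpy as np

# Park transform (amplitude-invariant form): maps three-phase currents to
# DC-like d-q quantities synchronized with the rotor electrical angle theta.
def park_transform(i_a, i_b, i_c, theta):
    i_d = (2 / 3) * (i_a * np.cos(theta)
                     + i_b * np.cos(theta - 2 * np.pi / 3)
                     + i_c * np.cos(theta + 2 * np.pi / 3))
    i_q = -(2 / 3) * (i_a * np.sin(theta)
                      + i_b * np.sin(theta - 2 * np.pi / 3)
                      + i_c * np.sin(theta + 2 * np.pi / 3))
    return i_d, i_q

t = np.linspace(0, 0.02, 5)
theta = 2 * np.pi * 50 * t                  # rotor electrical angle, 50 Hz
i_a = 10 * np.cos(theta)                    # balanced sinusoidal currents
i_b = 10 * np.cos(theta - 2 * np.pi / 3)
i_c = 10 * np.cos(theta + 2 * np.pi / 3)
print(park_transform(i_a, i_b, i_c, theta))  # i_d ~ 10 (constant), i_q ~ 0
```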

  11. An Interoperable Electronic Medical Record-Based Platform for Personalized Predictive Analytics

    ERIC Educational Resources Information Center

    Abedtash, Hamed

    2017-01-01

    Precision medicine refers to the delivery of customized treatment to patients based on their individual characteristics, and aims to reduce adverse events, improve diagnostic methods, and enhance the efficacy of therapies. Among efforts to achieve the goals of precision medicine, researchers have used observational data for developing predictive…

  12. Theoretical foundation, methods, and criteria for calibrating human vibration models using frequency response functions

    PubMed Central

    Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.

    2015-01-01

    While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726

  13. An internal reference model-based PRF temperature mapping method with Cramer-Rao lower bound noise performance analysis.

    PubMed

    Li, Cheng; Pan, Xinyi; Ying, Kui; Zhang, Qiang; An, Jing; Weng, Dehe; Qin, Wen; Li, Kuncheng

    2009-11-01

    The conventional phase difference method for MR thermometry suffers from disturbances caused by the presence of lipid protons, motion-induced error, and field drift. A signal model based on a multi-echo gradient echo (GRE) sequence, using the fat signal as an internal reference, is presented to overcome these problems. The internal reference signal model is fit to the water and fat signals by the extended Prony algorithm and the Levenberg-Marquardt algorithm to estimate the chemical shifts between water and fat, which contain the temperature information. A noise analysis of the signal model was conducted using the Cramer-Rao lower bound to evaluate the noise performance of various algorithms, the effects of imaging parameters, and the influence of the water:fat signal ratio in a sample on the temperature estimate. Comparison of the calculated temperature map with thermocouple temperature measurements shows that the maximum temperature estimation error is 0.614 degrees C, with a standard deviation of 0.06 degrees C, confirming the feasibility of this model-based temperature mapping method. The influence of the sample water:fat signal ratio on the accuracy of the temperature estimate was evaluated in a water-fat mixed phantom experiment, with an optimal ratio of approximately 0.66:1. (c) 2009 Wiley-Liss, Inc.
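
    The underlying PRF relation can be sketched with textbook constants (not the paper's fitted parameters): a change in the water-fat chemical shift maps to a temperature change because only the water resonance moves with temperature, at about -0.01 ppm/°C.

```python
# Minimal sketch of PRF-shift thermometry with a fat internal reference.
GAMMA_MHZ_PER_T = 42.58        # proton gyromagnetic ratio / (2*pi)
PRF_PPM_PER_C = -0.01          # PRF thermal coefficient of water

def temperature_change(delta_shift_hz, b0_tesla):
    # Convert the measured frequency change of water relative to fat (Hz)
    # into ppm, then into a temperature change in degrees Celsius.
    delta_ppm = delta_shift_hz / (GAMMA_MHZ_PER_T * b0_tesla)  # Hz / MHz = ppm
    return delta_ppm / PRF_PPM_PER_C

print(round(temperature_change(-6.4, 3.0), 2))   # about +5 degrees C at 3 T
```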

  14. Design verification of large time constant thermal shields for optical reference cavities.

    PubMed

    Zhang, J; Wu, W; Shi, X H; Zeng, X Y; Deng, K; Lu, Z H

    2016-02-01

    In order to achieve high frequency stability in ultra-stable lasers, the Fabry-Pérot reference cavities are placed inside vacuum chambers with large thermal time constants to reduce sensitivity to external temperature fluctuations. Currently, the determination of thermal time constants of vacuum chambers is based either on theoretical calculation or on time-consuming experiments. The first method applies only to simple systems, while the second requires much time to try out different designs. To overcome these limitations, we present thermal time constant simulation using finite element analysis (FEA) based on complete vacuum chamber models and verify the results with measured time constants. We measure the thermal time constants using ultrastable laser systems and a frequency comb. The thermal expansion coefficients of the optical reference cavities are precisely measured to reduce the measurement error of the time constants. The simulation results and the experimental results agree very well. With this knowledge, we simulate several simplified design models using FEA to obtain larger vacuum thermal time constants at room temperature, taking into account vacuum pressure, shielding layers, and support structure. We adopt the Taguchi method for shielding layer optimization and demonstrate that layer material and layer number dominate the contributions to the thermal time constant, compared with layer thickness and layer spacing.

  15. Spectra resolution for simultaneous spectrophotometric determination of lamivudine and zidovudine components in pharmaceutical formulation of human immunodeficiency virus drug based on using continuous wavelet transform and derivative transform techniques.

    PubMed

    Sohrabi, Mahmoud Reza; Tayefeh Zarkesh, Mahshid

    2014-05-01

    In the present paper, two spectrophotometric methods based on signal processing are proposed for the simultaneous determination of two components of an anti-HIV drug, lamivudine (LMV) and zidovudine (ZDV). The proposed methods are applied to synthetic binary mixtures and commercial pharmaceutical tablets without the need for any chemical separation procedures. The developed methods are based on the application of the Continuous Wavelet Transform (CWT) and Derivative Spectrophotometry (DS) combined with the zero-crossing point technique. The Daubechies (db5) wavelet family (242 nm) and the Dmey wavelet family (236 nm) were found to give the best results under optimum conditions for the simultaneous analysis of lamivudine and zidovudine, respectively. In addition, the first-derivative absorption spectra were selected for the determination of lamivudine and zidovudine at 266 nm and 248 nm, respectively. The presented methods were validated by assaying various synthetic mixtures of the components. Mean recovery values for LMV and ZDV were 100.31% and 100.2% with CWT, and 99.42% and 97.37% with DS, respectively. The results obtained from analyzing the real samples by the proposed methods were compared to the HPLC reference method. A one-way ANOVA test at the 95% confidence level was applied to the results; the statistical comparison of the proposed methods with the reference method showed no significant differences. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. An algebraic method for constructing stable and consistent autoregressive filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu

    2015-02-15

    In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods; it takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
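
    The classical stability condition referred to above is easy to check numerically: an AR(p) model is stable when all roots of its characteristic polynomial lie inside the unit circle. A minimal sketch (the consistency constraints and the algebraic construction itself are not reproduced here):

```python
import numpy as np

# Stability check for an AR(p) model
#   x_n = a_1 x_{n-1} + ... + a_p x_{n-p} + e_n :
# stable iff all roots of z^p - a_1 z^{p-1} - ... - a_p are inside |z| = 1.
def ar_is_stable(coeffs):
    poly = np.concatenate(([1.0], -np.asarray(coeffs, dtype=float)))
    return bool(np.all(np.abs(np.roots(poly)) < 1.0))

print(ar_is_stable([0.5, 0.3]))    # True: a stable AR(2)
print(ar_is_stable([1.2, 0.1]))    # False: a root lies outside the unit circle
```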

  17. Analytical performance specifications for changes in assay bias (Δbias) for data with logarithmic distributions as assessed by effects on reference change values.

    PubMed

    Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György

    2016-11-01

    Background The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision, and other characteristics. Estimation of the specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficient of variation for analytical imprecision (CVA): the original theory is based on standard deviations or coefficients of variation calculated as if the distributions were Gaussian. Methods The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for the reference change value, combining Δbias and CVA based on log-Gaussian distributions of CVI expressed as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results Analytical performance specifications for the reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion The traditional and simple-to-apply model used to generate analytical performance specifications for the reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CVI and CVA, is generally useful.
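
    A minimal sketch of a log-Gaussian reference change value (RCV) calculation in the spirit of the passage (a textbook log-normal formulation with illustrative CVs, not the authors' exact model):

```python
import math

# Log-Gaussian RCV: convert analytical (CVA) and within-subject (CVI)
# coefficients of variation to log-scale SDs, combine them, and form
# asymmetric upward/downward change limits.
def log_gaussian_rcv(cv_a, cv_i, z=1.96):
    # cv_a, cv_i as fractions (e.g. 0.05 for 5%).
    sigma = math.sqrt(math.log(cv_a**2 + 1) + math.log(cv_i**2 + 1))
    up = math.exp(z * math.sqrt(2) * sigma) - 1
    down = math.exp(-z * math.sqrt(2) * sigma) - 1
    return up * 100, down * 100        # limits in percent

up, down = log_gaussian_rcv(cv_a=0.03, cv_i=0.06)   # hypothetical CVs
print(f"RCV: +{up:.1f}% / {down:.1f}%")
```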

  18. Two phase sampling for wheat acreage estimation. [large area crop inventory experiment

    NASA Technical Reports Server (NTRS)

    Thomas, R. W.; Hay, C. M.

    1977-01-01

    A two-phase LANDSAT-based sample allocation and wheat proportion estimation method was developed. This technique employs manual, LANDSAT full-frame-based wheat or cultivated land proportion estimates from a large number of segments comprising a first sample phase to optimally allocate a smaller phase-two sample of computer- or manually-processed segments. Application to the Kansas Southwest CRD for 1974 produced a wheat acreage estimate for that CRD within 2.42 percent of the USDA SRS-based estimate, using a lower CRD inventory budget than a simulated reference LACIE system. Improvements in cost or precision by a factor of 2 or more relative to the reference system were obtained.

  19. The Comparison of Iranian Normative Reference Data with Five Countries Across Variables in Eight Rorschach Comprehensive System (CS) Clusters

    PubMed Central

    Hosseininasab, Abufazel; Mohammadi, Mohammadreza; Jouzi, Samira; Esmaeilinasab, Maryam; Delavar, Ali

    2016-01-01

    Objective: This study aimed to provide a normative study documenting how 114 five- to seven-year-old non-patient Iranian children respond to the Rorschach test. We compared this particular sample to international normative reference values for the Comprehensive System (CS). Method: One hundred fourteen 5- to 7-year-old non-patient Iranian children were recruited from public schools. Using five child and adolescent samples from five countries, we compared the Iranian normative reference data based on the reference means and standard deviations for each sample. Results: Findings revealed how the scores in each sample were distributed and how the samples compared across variables in eight Rorschach Comprehensive System (CS) clusters. We report all descriptive statistics, such as the reference mean and standard deviation, for all variables. Conclusion: Iranian clinicians could rely on country-specific or “local” norms when assessing children. We discourage Iranian clinicians from using many CS scores to make nomothetic, score-based inferences about psychopathology in children and adolescents. PMID:27928247

  20. Comparison of 16S rDNA-based PCR and checkerboard DNA-DNA hybridisation for detection of selected endodontic pathogens.

    PubMed

    Siqueira, José F; Rôças, Isabela N; De Uzeda, Milton; Colombo, Ana P; Santos, Kátia R N

    2002-12-01

    Molecular methods have recently been used to investigate the bacteria encountered in human endodontic infections. The aim of the present study was to compare the ability of a 16S rDNA-based PCR assay and checkerboard DNA-DNA hybridisation to detect Actinobacillus actinomycetemcomitans, Bacteroides forsythus, Peptostreptococcus micros, Porphyromonas endodontalis, Por. gingivalis and Treponema denticola directly in clinical samples. Specimens were obtained from 50 cases of endodontic infections, and the presence of the target species was investigated by whole genomic DNA probes with checkerboard DNA-DNA hybridisation or by taxon-specific oligonucleotides with the PCR assay. Prevalence of the target species was based on the data obtained by each method. The sensitivity and specificity of each molecular method were calculated with the data generated by the other method as the reference, a value of 1.0 representing total agreement with the chosen standard. The methods were also compared with regard to the prevalence values for each target species. Regardless of the detection method used, T. denticola, Por. gingivalis, Por. endodontalis and B. forsythus were the most prevalent species. When the checkerboard data for these four species were used as the reference, PCR detection sensitivities ranged from 0.53 to 1.0, and specificities from 0.5 to 0.88, depending on the target bacterial species. When the PCR data for the same species were used as the reference, the detection sensitivities for the checkerboard method ranged from 0.17 to 0.73, and specificities from 0.75 to 1.0. Accuracy values ranged from 0.6 to 0.74. On the whole, matching results between the two molecular methods ranged from 60% to 97.5%, depending on the target species. The major discrepancies between the methods comprised a number of PCR-positive but checkerboard-negative results. Significantly higher prevalence figures for Por. endodontalis and T. denticola were observed with the PCR assessment. There was no further significant difference between the methods with regard to detection of the other target species.
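
    The agreement statistics used here are straightforward to compute; in the sketch below one method's detection calls are scored against the other method taken as the reference, using made-up calls for a single species.

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP), where the
# second method's calls define the reference standard.
def sensitivity_specificity(test_calls, reference_calls):
    pairs = list(zip(test_calls, reference_calls))
    tp = sum(1 for t, r in pairs if t and r)
    tn = sum(1 for t, r in pairs if not t and not r)
    fp = sum(1 for t, r in pairs if t and not r)
    fn = sum(1 for t, r in pairs if not t and r)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical detection calls for one species across 10 samples
# (True = detected), with the checkerboard data as the reference.
pcr          = [True, True, True, False, True, False, True, True, False, True]
checkerboard = [True, True, False, False, True, False, True, False, False, True]
sens, spec = sensitivity_specificity(pcr, checkerboard)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```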

  1. Control of the repeatability of high frequency multibeam echosounder backscatter by using natural reference areas

    NASA Astrophysics Data System (ADS)

    Roche, Marc; Degrendele, Koen; Vrignaud, Christophe; Loyer, Sophie; Le Bas, Tim; Augustin, Jean-Marie; Lurton, Xavier

    2018-06-01

    The increased use of backscatter measurements in time series for environmental monitoring necessitates the comparability of individual results. Given the current lack of pre-calibrated multibeam echosounder systems for absolute backscatter measurement, a pragmatic solution is the use of natural reference areas for regular assessment of backscatter measurement repeatability. This method relies mainly on the assumption of a reference area that is sufficiently stable in its backscatter signature. The aptitude of a natural area to provide a stable and uniform backscatter response must be carefully considered and demonstrated by a sufficiently long time series of measurements. Furthermore, this approach requires strict control of the acquisition and processing parameters. If all these conditions are met, a stability check and relative calibration of a system are possible by comparison with the averaged backscatter values for the area. Based on a common multibeam echosounder and sampling campaign, complemented by available bathymetric and backscatter time series, the suitability of three different candidates as backscatter reference areas was evaluated. Two of them, Carré Renard and Kwinte, prove to be excellent choices, while the third, Western Solent, lacks sufficient data over time but remains a valuable candidate. The case studies and the available backscatter data for these areas prove the applicability of the method. Expanding the number of commonly used reference areas and the number of multibeam echosounders controlled thereon could greatly contribute to the further development of quantitative applications based on multibeam echosounder backscatter measurements.

  2. Study on the calibration and optimization of double theodolites baseline

    NASA Astrophysics Data System (ADS)

    Ma, Jing-yi; Ni, Jin-ping; Wu, Zhi-chao

    2018-01-01

    Because the baseline of a double-theodolite measurement system serves as the benchmark for the system's scale and affects its accuracy, this paper puts forward a method for calibrating and optimizing the double-theodolite baseline. The double theodolites measure a reference ruler of known length, and the baseline formula is then inverted. Based on the law of error propagation, the analyses show that the baseline error function is an important index of system accuracy, and that the position and posture of the reference ruler, among other factors, affect the baseline error. An optimization model is established with the baseline error function as the objective function, optimizing the position and posture of the reference ruler. The simulation results show that the height of the reference ruler has no effect on the baseline error; the influence of the posture is not uniform; and the baseline error is smallest when the reference ruler is placed at x=500mm and y=1000mm in the measurement space. The experimental results are consistent with the theoretical analyses in the measurement space. This study of the placement of the reference ruler provides a useful reference for improving the accuracy of the double-theodolite measurement system.

  3. A survey on the geographic scope of textual documents

    NASA Astrophysics Data System (ADS)

    Monteiro, Bruno R.; Davis, Clodoveu A.; Fonseca, Fred

    2016-11-01

    Recognizing references to places in texts is needed in many applications, such as search engines, location-based social media and document classification. In this paper we present a survey of methods and techniques for the recognition and identification of places referenced in texts. We discuss concepts and terminology, and propose a classification of the solutions given in the literature. We introduce a definition of the Geographic Scope Resolution (GSR) problem, dividing it in three steps: geoparsing, reference resolution, and grounding references. Solutions to the first two steps are organized according to the method used, and solutions to the third step are organized according to the type of output produced. We found that it is difficult to compare existing solutions directly to one another, because they often create their own benchmarking data, targeted to their own problem.

  4. Methods to estimate irrigated reference crop evapotranspiration - a review.

    PubMed

    Kumar, R; Jat, M K; Shankar, V

    2012-01-01

    Efficient water management of crops requires accurate irrigation scheduling which, in turn, requires accurate measurement of the crop water requirement. Irrigation is applied to replenish depleted moisture for optimum plant growth. Reference evapotranspiration plays an important role in the determination of water requirements for crops and irrigation scheduling. Various models and approaches, ranging from empirical to physically based distributed models, are available for the estimation of reference evapotranspiration. Mathematical models are useful tools to estimate the evapotranspiration and water requirement of crops, which is essential information for designing or choosing the best water management practices. In this paper the most commonly used models and approaches, which are suitable for the estimation of daily water requirements for agricultural crops grown in different agro-climatic regions, are reviewed. Further, an effort has been made to compare the accuracy of the various widely used methods under different climatic conditions.
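
    As one compact example from the empirical end of this model family, the Hargreaves equation (FAO-56 form) estimates reference evapotranspiration from temperature and extraterrestrial radiation; the input values below are illustrative.

```python
import math

# Hargreaves equation: ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
# with Ra expressed as equivalent evaporation in mm/day.
def hargreaves_et0(t_mean, t_max, t_min, ra_mm_day):
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Hypothetical mid-summer day: Tmean 25 C, range 12 C, Ra ~ 16.5 mm/day.
print(round(hargreaves_et0(25.0, 31.0, 19.0, 16.5), 2), "mm/day")
```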

  5. The Current Status and Tendency of China Millimeter Coordinate Frame Implementation and Maintenance

    NASA Astrophysics Data System (ADS)

    Cheng, P.; Cheng, Y.; Bei, J.

    2017-12-01

    China Geodetic Coordinate System 2000 (CGCS2000) was first officially declared the national standard coordinate system on July 1, 2008. This reference frame was defined in the ITRF97 frame at epoch 2000.0 and included 2600 GPS geodetic control points. The paper discusses differences between CGCS2000 and later ITRF versions, such as ITRF2014, in terms of technical implementation and maintenance. With the development of the BeiDou navigation satellite system, especially the third generation of BDS with global signal coverage in the future, and with progress in space geodetic technology, it is becoming possible to establish a global millimeter-level reference frame based on space geodetic techniques including BDS. Millimeter reference frame implementation concerns two factors: 1) the estimation of geocenter motion variation, and 2) the modeling of nonlinear site motion. In this paper, geocenter inversion methods are discussed and results derived from various technical methods are compared. Our nonlinear site movement modeling focuses on the singular spectrum analysis method, which has apparent advantages over modeling of physical Earth effects. The work presented in this paper is expected to provide a reference for future CGCS2000 maintenance.

  6. Unsupervised change detection in a particular vegetation land cover type using spectral angle mapper

    NASA Astrophysics Data System (ADS)

    Renza, Diego; Martinez, Estibaliz; Molina, Iñigo; Ballesteros L., Dora M.

    2017-04-01

    This paper presents a new unsupervised change detection methodology for multispectral images applied to specific land covers. The proposed method compares each image against a reference spectrum, where the reference spectrum is obtained from the spectral signature of the land-cover type to be detected. The method has been tested using multispectral images (SPOT5) of the Community of Madrid (Spain), and multispectral images (Quickbird) of an area of Indonesia that was impacted by the December 26, 2004 tsunami; here, the tests focused on the detection of changes in vegetation. The image comparison is obtained by applying the Spectral Angle Mapper between the reference spectrum and each multitemporal image. A threshold is then applied to produce a single change image corresponding to the vegetation zones. The results for each multitemporal image are combined through an exclusive-or (XOR) operation that selects vegetation zones that have changed over time. The derived results were compared against a supervised method based on classification with a Support Vector Machine, and the NDVI-differencing and basic Spectral Angle Mapper techniques were selected as unsupervised methods for comparison. The main novelty of the method is the detection of changes in a specific land-cover type (vegetation); it is therefore best compared against methods that target the same task, which motivated the choice of the NDVI-based method and the post-classification method (SVM implemented in a standard software tool). To evaluate the improvement obtained by using a reference spectrum vector, the results are also compared with the basic SAM method. For the SPOT5 image, the overall accuracy was 99.36% and the κ index was 90.11%; for the Quickbird image, the overall accuracy was 97.5% and the κ index was 82.16%. Finally, the precision of the method is comparable to that of a supervised method, supported by low rates of false positives and false negatives, along with high overall accuracy and a high kappa index, while the execution times were comparable to those of unsupervised methods of low computational load.
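
    The core of the method is the spectral angle between each pixel spectrum and the reference spectrum, thresholded into a coverage mask and combined across dates with XOR. A minimal sketch with synthetic data (the reference signature and threshold are illustrative):

```python
import numpy as np

# Spectral Angle Mapper: angle between each pixel spectrum and a reference
# spectrum; small angles mean spectrally similar pixels.
def spectral_angle(image, reference):
    # image: (rows, cols, bands); reference: (bands,)
    dot = np.tensordot(image, reference, axes=([2], [0]))
    norms = np.linalg.norm(image, axis=2) * np.linalg.norm(reference)
    return np.arccos(np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0))

def vegetation_change(img_t1, img_t2, ref_spectrum, threshold=0.15):
    veg_t1 = spectral_angle(img_t1, ref_spectrum) < threshold
    veg_t2 = spectral_angle(img_t2, ref_spectrum) < threshold
    return np.logical_xor(veg_t1, veg_t2)     # vegetation gained or lost

rng = np.random.default_rng(3)
ref = np.array([0.05, 0.08, 0.06, 0.45])      # hypothetical vegetation signature
img1 = rng.random((50, 50, 4))                # stand-ins for two acquisition dates
img2 = rng.random((50, 50, 4))
print(vegetation_change(img1, img2, ref, threshold=0.3).sum(), "changed pixels")
```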

  7. Interactive Reference Point Procedure Based on the Conic Scalarizing Function

    PubMed Central

    2014-01-01

    In multiobjective optimization methods, multiple conflicting objectives are typically converted into a single objective optimization problem with the help of scalarizing functions. The conic scalarizing function is a general characterization of Benson proper efficient solutions of non-convex multiobjective problems in terms of saddle points of scalar Lagrangian functions. This approach preserves convexity. The conic scalarizing function, as a part of a posteriori or a priori methods, has successfully been applied to several real-life problems. In this paper, we propose a conic scalarizing function based interactive reference point procedure where the decision maker actively takes part in the solution process and directs the search according to her or his preferences. An algorithmic framework for the interactive solution of multiple objective optimization problems is presented and is utilized for solving some illustrative examples. PMID:24723795

  8. Evaluation of Pyrrolidonyl Arylamidase Activity in Staphylococcus delphini.

    PubMed

    Compton, Samantha T; Kania, Stephen A; Robertson, Amy E; Lawhon, Sara D; Jenkins, Stephen G; Westblade, Lars F; Bemis, David A

    2017-03-01

    Clinical reference textbooks lack data for pyrrolidonyl arylamidase (PYR) activity in Staphylococcus delphini. This study evaluated the PYR activity of 21 S. delphini strains by reference broth, rapid disc, and rapid slide methods. Species and subgroup identifications were confirmed by nucleic acid-based methods and included nine group A and 12 group B strains. Testing by rapid PYR methods with products from four manufacturers was performed at two testing locations, and, with the exception of one strain tested at one location using reagents from one manufacturer, each S. delphini strain tested positive for PYR activity. Therefore, PYR may be a useful single-test adjunct for distinguishing Staphylococcus aureus from S. delphini and other members of the Staphylococcus intermedius group. Copyright © 2017 American Society for Microbiology.

  9. Validation of the ANSR(®) Listeria monocytogenes Method for Detection of Listeria monocytogenes in Selected Food and Environmental Samples.

    PubMed

    Caballero, Oscar; Alles, Susan; Le, Quynh-Nhi; Gray, R Lucas; Hosking, Edan; Pinkava, Lisa; Norton, Paul; Tolan, Jerry; Mozola, Mark; Rice, Jennifer; Chen, Yi; Ryser, Elliot; Odumeru, Joseph

    2016-01-01

    Work was conducted to validate the performance of the ANSR® Listeria monocytogenes method in selected food and environmental matrixes. This DNA-based assay involves amplification of nucleic acid via an isothermal reaction based on nicking enzyme amplification technology. Following single-step sample enrichment for 16-24 h for most matrixes, the assay is completed in 40 min using only simple instrumentation. When 50 distinct strains of L. monocytogenes were tested for inclusivity, 48 produced positive results, the exceptions being two strains confirmed by PCR to lack the assay target gene. Forty-seven nontarget strains (30 species), including multiple non-monocytogenes Listeria species as well as non-Listeria, Gram-positive bacteria, were tested, and all generated negative ANSR assay results. Performance of the ANSR method was compared with that of the U.S. Department of Agriculture, Food Safety and Inspection Service Microbiology Laboratory Guidebook reference culture procedure for detection of L. monocytogenes in hot dogs, pasteurized liquid egg, and sponge samples taken from an inoculated stainless steel surface. In addition, ANSR performance was measured against the U.S. Food and Drug Administration Bacteriological Analytical Manual reference method for detection of L. monocytogenes in Mexican-style cheese, cantaloupe, sprout irrigation water, and guacamole. With the single exception of pasteurized liquid egg at 16 h, ANSR method performance, as quantified by the number of positives obtained, was not statistically different from that of the reference methods. Robustness trials demonstrated that deliberate introduction of small deviations to the normal assay parameters did not affect ANSR method performance. Results of accelerated stability testing conducted on two manufactured lots of reagents predict stability of more than 1 year at the specified storage temperature of 4°C.

  10. 40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... moisture to aid in setting isokinetic sampling rates prior to a pollutant emission measurement run. The... simultaneously with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its...

  11. 40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... moisture to aid in setting isokinetic sampling rates prior to a pollutant emission measurement run. The... simultaneously with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its...

  12. 40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... moisture to aid in setting isokinetic sampling rates prior to a pollutant emission measurement run. The... simultaneously with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its...

  13. 40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... moisture to aid in setting isokinetic sampling rates prior to a pollutant emission measurement run. The... simultaneously with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its...

  14. 21 CFR 172.215 - Coumarone-indene resin.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...), substituted benzenes, and related compounds. (2) It contains no more than 0.25 percent tar bases. (3) 95...) Softening point, ring and ball: 126 °C minimum as determined by ASTM method E28-67 (Reapproved 1982), “Standard Test Method for Softening Point by Ring-and-Ball Apparatus,” which is incorporated by reference...

  15. 40 CFR 53.4 - Applications for reference or equivalent method determinations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Required or recommended routine, periodic, and preventative maintenance and maintenance schedules. (J) Any... methods for PM 2.5 and PM 10-2.5 must be described in sufficient detail, based on the elements described... Table A-1 to this subpart) will be met throughout the warranty period and that the applicant accepts...

  16. 40 CFR 53.4 - Applications for reference or equivalent method determinations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Required or recommended routine, periodic, and preventative maintenance and maintenance schedules. (J) Any... methods for PM 2.5 and PM 10-2.5 must be described in sufficient detail, based on the elements described... Table A-1 to this subpart) will be met throughout the warranty period and that the applicant accepts...

  17. Evaluation and Comparison of Chemiluminescence and UV Photometric Methods for Measuring Ozone Concentrations in Ambient Air

    EPA Science Inventory

    The current Federal Reference Method (FRM) for measuring concentrations of ozone in ambient air is based on the dry, gas-phase, chemiluminescence reaction between ethylene (C2H4) and any ozone (O3) that may be p...

  18. CLSI-based transference and verification of CALIPER pediatric reference intervals for 29 Ortho VITROS 5600 chemistry assays.

    PubMed

    Higgins, Victoria; Truong, Dorothy; Woroch, Amy; Chan, Man Khun; Tahmasebi, Houman; Adeli, Khosrow

    2018-03-01

    Evidence-based reference intervals (RIs) are essential to accurately interpret pediatric laboratory test results. To fill gaps in pediatric RIs, the Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER) project developed an age- and sex-specific pediatric RI database based on healthy pediatric subjects. Originally established for Abbott ARCHITECT assays, CALIPER RIs were transferred to assays on Beckman, Roche, Siemens, and Ortho analytical platforms. This study provides transferred reference intervals for 29 biochemical assays on the Ortho VITROS 5600 Chemistry System (Ortho). Based on Clinical Laboratory Standards Institute (CLSI) guidelines, a method comparison analysis was performed by measuring approximately 200 patient serum samples using Abbott and Ortho assays. The equation of the line of best fit was calculated and the appropriateness of the linear model was assessed. This equation was used to transfer RIs from Abbott to Ortho assays. Transferred RIs were verified using 84 healthy pediatric serum samples from the CALIPER cohort. RIs for most chemistry analytes transferred successfully from Abbott to Ortho assays; calcium and CO2 did not meet the statistical criteria for transference (r² < 0.70). Of the 32 transferred reference intervals, 29 were successfully verified, with approximately 90% of results from reference samples falling within the transferred confidence limits. Transferred RIs for total bilirubin, magnesium, and LDH did not meet the verification criteria and are not reported. This study broadens the utility of the CALIPER pediatric RI database to laboratories using Ortho VITROS 5600 biochemical assays. Clinical laboratories should verify CALIPER reference intervals for their specific analytical platform and local population, as recommended by CLSI. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
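
    The transference step can be sketched as an ordinary least-squares fit of paired patient results, with the established limits mapped through the fitted line (CLSI guidance also allows weighted and Deming fits; all values below are made up).

```python
import numpy as np

# Minimal sketch of reference-interval transference: fit candidate (Ortho)
# results against comparative (Abbott) results, check r^2, then map the
# established limits through the regression equation.
abbott = np.array([3.1, 4.2, 5.0, 5.8, 6.9, 7.5, 8.8, 9.6])   # comparative
ortho = np.array([3.0, 4.0, 4.9, 5.6, 6.8, 7.2, 8.5, 9.2])    # candidate

slope, intercept = np.polyfit(abbott, ortho, 1)
r2 = np.corrcoef(abbott, ortho)[0, 1] ** 2
print(f"y = {slope:.3f}x + {intercept:.3f}, r^2 = {r2:.3f}")

if r2 >= 0.70:                      # transference criterion used above
    lower, upper = 3.5, 8.0         # hypothetical Abbott reference interval
    print("transferred RI:", round(slope * lower + intercept, 2),
          "-", round(slope * upper + intercept, 2))
```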

  19. Perceptual video quality assessment in H.264 video coding standard using objective modeling.

    PubMed

    Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu

    2014-01-01

    Since the usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute a perceptual video quality metric based on a no-reference method. The encoding process, including intra- and inter-prediction in hybrid block-based coding, introduces subtle differences between the original and encoded video that degrade the quality of the encoded picture. The proposed model accounts for the artifacts introduced by these spatial and temporal activities, and an objective mapping of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using the subjective impairments blockiness, blur, and jerkiness, in contrast to the existing bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metric is compared against popular full-reference objective methods as defined by VQEG.

  20. Computational assessment of model-based wave separation using a database of virtual subjects.

    PubMed

    Hametner, Bernhard; Schneider, Magdalena; Parragh, Stephanie; Wassertheurer, Siegfried

    2017-11-07

    The quantification of arterial wave reflection is an important area of interest in arterial pulse wave analysis. It can be achieved by wave separation analysis (WSA) if both the aortic pressure waveform and the aortic flow waveform are known. For better applicability, several mathematical models have been established to estimate aortic flow solely from pressure waveforms. The aim of this study is to investigate and verify the model-based wave separation of the ARCSolver method on virtual pulse wave measurements. The study is based on an open-access virtual database generated via simulations. Seven cardiac and arterial parameters were varied within physiologically healthy ranges, leading to a total of 3325 virtual healthy subjects. To assess the model-based ARCSolver method computationally, it was used to perform WSA based on the aortic root pressure waveforms of the virtual patients. As a reference, the values of WSA using both the pressure and flow waveforms provided by the virtual database were taken. The investigated parameters showed good overall agreement between the model-based method and the reference. Mean differences and standard deviations were -0.05±0.02 AU for characteristic impedance, -3.93±1.79 mmHg for forward pressure amplitude, 1.37±1.56 mmHg for backward pressure amplitude, and 12.42±4.88% for reflection magnitude. The results indicate that the mathematical blood flow model of the ARCSolver method is a feasible surrogate for a measured flow waveform and provides a reasonable way to assess arterial wave reflection non-invasively in healthy subjects. Copyright © 2017 Elsevier Ltd. All rights reserved.
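
    Given pressure, flow, and characteristic impedance, linear wave separation itself is a two-line computation; the sketch below uses toy waveforms and an assumed Zc, not the ARCSolver flow model.

```python
import numpy as np

# Linear wave separation analysis (WSA): with aortic pressure P, flow Q and
# characteristic impedance Zc, the forward and backward pressure waves are
# (P + Zc*Q)/2 and (P - Zc*Q)/2; reflection magnitude is their amplitude ratio.
def wave_separation(pressure, flow, zc):
    p_osc = pressure - pressure.mean()        # oscillatory components
    q_osc = flow - flow.mean()
    p_forward = (p_osc + zc * q_osc) / 2.0
    p_backward = (p_osc - zc * q_osc) / 2.0
    amp_f = p_forward.max() - p_forward.min()
    amp_b = p_backward.max() - p_backward.min()
    return amp_f, amp_b, amp_b / amp_f        # reflection magnitude

t = np.linspace(0, 1, 500)                                     # one beat, 1 s
flow = np.clip(np.sin(2 * np.pi * t), 0, None) * 400           # mL/s, toy wave
pressure = 90 + 0.05 * flow + 8 * np.sin(2 * np.pi * t - 0.8)  # mmHg, toy wave
amp_f, amp_b, rm = wave_separation(pressure, flow, zc=0.05)
print(round(amp_f, 1), round(amp_b, 1), round(rm, 2))
```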

  1. Glucose Measurement: Time for a Gold Standard

    PubMed Central

    Hagvik, Joakim

    2007-01-01

    There is no internationally recognized reference method for the measurement of blood glucose. The Centers for Disease Control and Prevention (CDC) highlighted the need for standardization some years ago when a project was started. The project objectives were to (1) investigate whether there are significant differences in calibration levels among currently used glucose monitors for home use and (2) develop a reference method for glucose determination. A first study confirmed the assumption that currently used home-use monitors differ significantly and that standardization is necessary in order to minimize variability and to improve patient care. As a reference method, CDC recommended a method based on isotope dilution gas chromatography–mass spectrometry, an assay that has received support from clinical chemists worldwide. CDC initiated a preliminary study to establish the suitability of this method, but then the project came to a halt. It is hoped that CDC, with support from the industry, as well as academic and professional organizations such as the American Association for Clinical Chemistry and International Federation of Clinical Chemistry and Laboratory Medicine, will be able to finalize the project and develop the long-awaited and much needed “gold standard” for glucose measurement. PMID:19888402

  2. Methodological evaluation and comparison of five urinary albumin measurements.

    PubMed

    Liu, Rui; Li, Gang; Cui, Xiao-Fan; Zhang, Dong-Ling; Yang, Qing-Hong; Mu, Xiao-Yan; Pan, Wen-Jie

    2011-01-01

    Microalbuminuria is an indicator of kidney damage and a risk factor for the progression of kidney disease, cardiovascular disease, and so on. Therefore, accurate and precise measurement of urinary albumin is critical. However, there are no reference measurement procedures or reference materials for urinary albumin. Nephelometry, turbidimetry, the colloidal gold method, radioimmunoassay, and chemiluminescence immunoassay were evaluated methodologically, based on imprecision, recovery rate, linearity, haemoglobin interference rate, and verified reference interval. We then tested 40 urine samples from diabetic patients by each method and compared the results between assays. The results indicate that nephelometry is the method with the best analytical performance among the five, with an average intra-assay coefficient of variation (CV) of 2.6%, an average inter-assay CV of 1.7%, a mean recovery of 99.6%, linearity of R=1.00 from 2 to 250 mg/l, and an interference rate of <10% at haemoglobin concentrations of <1.82 g/l. The correlation (r) between assays ranged from 0.701 to 0.982, and the Bland-Altman plots indicated that the assays provided significantly different results from one another. Nephelometry was the clinical urinary albumin method with the best analytical performance in our study. © 2011 Wiley-Liss, Inc.
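
    As a minimal sketch of the Bland-Altman agreement analysis mentioned above: given paired results from two assays on the same samples, compute the mean bias and 95% limits of agreement. The paired data below are simulated, not the study's measurements.

      import numpy as np

      def bland_altman(a: np.ndarray, b: np.ndarray):
          """Mean bias and 95% limits of agreement between two assays."""
          diff = a - b
          bias = diff.mean()
          loa = 1.96 * diff.std(ddof=1)
          return bias, (bias - loa, bias + loa)

      # hypothetical paired results (mg/l) from two assays on 40 diabetic urine samples
      rng = np.random.default_rng(0)
      truth = rng.uniform(5, 200, 40)
      nephelometry = truth * rng.normal(1.00, 0.03, 40)
      turbidimetry = truth * rng.normal(1.05, 0.08, 40)
      bias, limits = bland_altman(nephelometry, turbidimetry)
      print(f"bias = {bias:.1f} mg/l, 95% LoA = ({limits[0]:.1f}, {limits[1]:.1f})")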

  3. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions.

    PubMed

    Belal, Tarek S; El-Kafrawy, Dina S; Mahrous, Mohamed S; Abdel-Khalek, Magdi M; Abo-Gharam, Amira H

    2016-02-15

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions

    NASA Astrophysics Data System (ADS)

    Belal, Tarek S.; El-Kafrawy, Dina S.; Mahrous, Mohamed S.; Abdel-Khalek, Magdi M.; Abo-Gharam, Amira H.

    2016-02-01

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method.
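
    The linearity validation in the two records above amounts to a Beer's-law calibration line followed by back-calculation of unknowns. A minimal sketch, with hypothetical absorbance readings for the VP/p-CA product at 524 nm (the numbers are illustrative, not the paper's data):

      import numpy as np

      # hypothetical calibration data over the reported 24-144 µg/mL range
      conc = np.array([24, 48, 72, 96, 120, 144], dtype=float)   # µg/mL
      absorbance = np.array([0.11, 0.22, 0.34, 0.45, 0.55, 0.67])

      slope, intercept = np.polyfit(conc, absorbance, 1)          # least-squares line
      r = np.corrcoef(conc, absorbance)[0, 1]                     # linearity check
      print(f"A = {slope:.4f}*C + {intercept:.4f}, r = {r:.4f}")

      # quantify an unknown tablet extract from its measured absorbance
      unknown_abs = 0.40
      print(f"estimated concentration: {(unknown_abs - intercept) / slope:.1f} µg/mL")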

  5. Striking against bioterrorism with advanced proteomics and reference methods.

    PubMed

    Armengaud, Jean

    2017-01-01

    The intentional use by terrorists of biological toxins as weapons has been of great concern for many years. Among the numerous toxins produced by plants, animals, algae, fungi, and bacteria, ricin is one of the most scrutinized by the media because it has already been used in biocrimes and acts of bioterrorism. Improving the analytical toolbox of national authorities to monitor these potential bioweapons all at once is of the utmost interest. MS/MS allows their absolute quantitation and exhibits advantageous sensitivity, discriminative power, multiplexing possibilities, and speed. In this issue of Proteomics, Gilquin et al. (Proteomics 2017, 17, 1600357) present a robust multiplex assay to quantify a set of eight toxins in the presence of a complex food matrix. This MS/MS reference method is based on scheduled SRM and high-quality standards consisting of isotopically labeled versions of these toxins. Their results demonstrate robust reliability based on rather loose scheduling of SRM transitions and good sensitivity for the eight toxins, below their oral median lethal doses. In the face of an increased threat from terrorism, relevant reference assays based on advanced proteomics and high-quality companion toxin standards provide reliable and firm answers. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. (GTG)5-PCR reference framework for acetic acid bacteria.

    PubMed

    Papalexandratou, Zoi; Cleenwerck, Ilse; De Vos, Paul; De Vuyst, Luc

    2009-11-01

    One hundred and fifty-eight strains of acetic acid bacteria (AAB) were subjected to (GTG)5-PCR fingerprinting to construct a reference framework for their rapid classification and identification. Most of them clustered according to their respective taxonomic designation; others had to be reclassified based on polyphasic data. This study shows the usefulness of the method to determine the taxonomic and phylogenetic relationships among AAB and to study the AAB diversity of complex ecosystems.

  7. 40 CFR 53.16 - Supersession of reference methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    40 CFR Part 53, Protection of Environment (continued), Ambient Air Monitoring Reference and Equivalent Methods, General Provisions, § 53.16 Supersession of reference methods. (a) This section prescribes procedures and criteria applicable to requests that...

  8. Methodological proposal for validation of the disinfecting efficacy of an automated flexible endoscope reprocessor

    PubMed Central

    Graziano, Kazuko Uchikawa; Pereira, Marta Elisa Auler; Koda, Elaine

    2016-01-01

    Objective: to elaborate and apply a method to assess the efficacy of automated flexible endoscope reprocessors at a time when there is no official method, nor trained laboratories, to comply with the requirements described in specific standards for this type of health product in Brazil. Method: the present methodological study was developed based on the following theoretical references: International Organization for Standardization (ISO) standard ISO 15883-4/2008 and Brazilian Health Surveillance Agency (Agência Nacional de Vigilância Sanitária - ANVISA) Collegiate Board Resolutions (Resolução de Diretoria Colegiada - RDC) no. 35/2010 and 15/2012. The proposed method was applied to a commercially available device using a high-level 0.2% peracetic acid-based disinfectant. Results: the proposed method of assessment was found to be robust when the recommendations made in the relevant legislation were incorporated with some adjustments to ensure their feasibility. Application of the proposed method provided evidence of the efficacy of the tested equipment for the high-level disinfection of endoscopes. Conclusion: the proposed method may serve as a reference for the assessment of flexible endoscope reprocessors, thereby providing solid ground for the purchase of this category of health products. PMID:27508915

  9. Inverse scattering theory: Inverse scattering series method for one dimensional non-compact support potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Jie, E-mail: yjie2@uh.edu; Lesage, Anne-Cécile; Hussain, Fazle

    2014-12-15

    The reversion of the Born-Neumann series of the Lippmann-Schwinger equation is one of the standard ways to solve the inverse acoustic scattering problem. One limitation of the current inversion methods based on the reversion of the Born-Neumann series is that the velocity potential should have compact support. However, this assumption cannot be satisfied in certain cases, especially in seismic inversion. Based on the idea of distorted wave scattering, we explore an inverse scattering method for velocity potentials without compact support. The strategy is to decompose the actual medium as a known single interface reference medium, which has the same asymptotic form as the actual medium, and a perturbative scattering potential with compact support. After introducing the method to calculate the Green's function for the known reference potential, the inverse scattering series and Volterra inverse scattering series are derived for the perturbative potential. Analytical and numerical examples demonstrate the feasibility and effectiveness of this method. Besides, to ensure stability of the numerical computation, the Lanczos averaging method is employed as a filter to reduce the Gibbs oscillations for the truncated discrete inverse Fourier transform of each order. Our method provides a rigorous mathematical framework for inverse acoustic scattering with a non-compact support velocity potential.
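
    The Lanczos averaging mentioned at the end is a standard smoothing device: each retained Fourier coefficient is multiplied by a sigma factor sinc(k/N) before the truncated series is summed, which damps the Gibbs overshoot near discontinuities. A self-contained demonstration on a square wave (not the paper's seismic data):

      import numpy as np

      x = np.linspace(-np.pi, np.pi, 1001)
      N = 30
      k = np.arange(1, N + 1)
      # Fourier sine coefficients of a unit square wave: 4/(pi k) for odd k, 0 for even k
      bk = 2 * (1 - (-1) ** k) / (np.pi * k)
      raw = (np.sin(np.outer(x, k)) * bk).sum(axis=1)
      # Lanczos sigma factors: np.sinc(y) = sin(pi y)/(pi y)
      smooth = (np.sin(np.outer(x, k)) * (bk * np.sinc(k / N))).sum(axis=1)
      print("max overshoot, raw vs sigma-averaged:", raw.max(), smooth.max())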

  10. Study on the evaluation method for fault displacement based on characterized source model

    NASA Astrophysics Data System (ADS)

    Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.

    2016-12-01

    IAEA Specific Safety Guide (SSG) 9 describes that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is provided to decide conclusively, by the deterministic methodology, that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX on realizing seismic hazard assessment for nuclear facilities as described in SSG-9, showing the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, it is required that important nuclear facilities be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these circumstances, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance safety in nuclear facilities. We are studying deterministic and probabilistic methods through tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, of inland crustal earthquakes in which fault displacements arose. In this study, we introduce the concept of evaluation methods for fault displacement. We then show parts of the tentative analysis results for the deterministic method, as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) that can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method that combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on a characterized source model. This research was part of the 2015 research project 'Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  11. Quality Control of Next-generation Sequencing-based In vitro Diagnostic Test for Onco-relevant Mutations Using Multiplex Reference Materials in Plasma.

    PubMed

    Liu, Donglai; Zhou, Haiwei; Shi, Dawei; Shen, Shu; Tian, Yabin; Wang, Lin; Lou, Jiatao; Cong, Rong; Lu, Juan; Zhang, Henghui; Zhao, Meiru; Zhu, Shida; Cao, Zhisheng; Jin, Ruilin; Wang, Yin; Zhang, Xiaoni; Yang, Guohua; Wang, Youchun; Zhang, Chuntao

    2018-01-01

    Background: Widespread clinical implementation of next-generation sequencing (NGS)-based cancer in vitro diagnostic tests (IVDs) highlighted the urgency to establish reference materials which could provide full control of the process from nucleic acid extraction to test report generation. The formalin-fixed, paraffin-embedded (FFPE) tissue and blood plasma containing circulating tumor deoxyribonucleic acid (ctDNA) were mostly used for clinically detecting onco-relevant mutations. Methods: We respectively developed multiplex FFPE and plasma reference materials covering three clinically onco-relevant mutations within the epidermal growth factor receptor (EGFR) gene at serial allelic frequencies. All reference materials were quantified and validated via droplet digital polymerase chain reaction (ddPCR), and then were distributed to eight domestic manufacturers for the collaborative evaluation of the performance of several domestic NGS-based cancer IVDs covering four major NGS platforms (NextSeq, HiSeq, Ion Proton and BGISEQ). Results: All expected mutations except one at extremely low allelic frequencies were detected, despite some differences in coefficient of variation (CV) which increased with the decrease of allelic frequency (CVs ranging from 18% to 106%). It was worth noting that the CV value seemed to correlate with a particular mutation as well. The repeatability of determination of different mutations was L858R>T790M>19del. Conclusions: The results indicated our reference materials would be pivotal for quality control of NGS-based cancer IVDs and would guide the further development of reference materials covering more onco-relevant mutations.

  12. [An anti-Taylor approach: the invention of a method for the cogovernance of health care institutions in order to produce freedom and compromise].

    PubMed

    Campos, G W

    1998-01-01

    This paper describes a new health care management method. A triangular confrontation system was constructed, based on a theoretical review, empirical facts observed from health services, and the researcher's knowledge, jointly analyzed. This new management model was termed 'health-team-focused collegiate management', entailing several original organizational concepts: production unity, matrix-based reference team, collegiate management system, cogovernance, and product/production interface.

  13. Absolute calibration for complex-geometry biomedical diffuse optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Mastanduno, Michael A.; Jiang, Shudong; El-Ghussein, Fadi; diFlorio-Alexander, Roberta; Pogue, Brian W.; Paulsen, Keith D.

    2013-03-01

    We present methodology to calibrate NIRS/MRI imaging data against an absolute reference phantom, with results in both phantoms and healthy volunteers. This method directly calibrates data to a diffusion-based model, takes advantage of patient-specific geometry from MRI prior information, and generates an initial guess without the need for a large data set. This method of calibration allows for more accurate quantification of total hemoglobin, oxygen saturation, water content, scattering, and lipid concentration as compared with other, slope-based methods. We found the main source of error in the method to be incorrect assignment of reference phantom optical properties rather than the initial guess in reconstruction. We also present examples of phantom and breast images from a combined frequency-domain and continuous-wave MRI-coupled NIRS system. We were able to recover phantom data within 10% of expected contrast and within 10% of the actual value using this method, and we compare these results with slope-based calibration methods. Finally, we were able to use this technique to calibrate and reconstruct images from healthy volunteers. Representative images are shown and discussion is provided for comparison with the existing literature. These methods work towards fully combining the synergistic attributes of MRI and NIRS for in-vivo imaging of breast cancer. Complete software and hardware integration in dual-modality instruments is especially important due to the complexity of the technology, and success will contribute to complex anatomical and molecular prognostic information that can be readily obtained in clinical use.

  14. Method and apparatus for frequency spectrum analysis

    NASA Technical Reports Server (NTRS)

    Cole, Steven W. (Inventor)

    1992-01-01

    A method for frequency spectrum analysis of an unknown signal in real-time is discussed. The method is based upon integration of 1-bit samples of signal voltage amplitude corresponding to sine or cosine phases of a controlled center-frequency clock, which is changed after each integration interval to sweep the frequency range of interest in steps. Integration of samples during each interval is carried out over a number of cycles of the center-frequency clock spanning a number of cycles of the input signal to be analyzed. The invention may be used to detect the frequencies of at least two signals simultaneously. A reference signal of known frequency and voltage amplitude is added to the two signals and processed in parallel in the same way, but in a separate channel sampled at the known frequency and phases of the reference signal. The sine and cosine integrals of each channel are squared and summed to obtain relative power measurements in all three channels; from the known voltage amplitude of the reference signal, an absolute voltage measurement for each of the other two signals is then obtained by multiplying the known voltage of the reference signal by the square root of the ratio of that signal's relative power to the relative power of the reference signal.
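
    A minimal numerical sketch of the scheme just described, under simplifying assumptions: the three tones are summed in software, the "clock" phases are ideal sinusoids rather than hardware square waves, and only two sweep bins are evaluated. All frequencies and amplitudes are invented; note that hard limiting (the 1-bit sampling) makes the ratio-based amplitude recovery only approximate.

      import numpy as np

      def quadrature_bin(x, f, t):
          """Integrate 1-bit samples of the input against sine/cosine phases of
          the center-frequency clock; return the relative power in this bin."""
          s = np.sign(x)                                  # 1-bit sampling
          i = np.mean(s * np.cos(2 * np.pi * f * t))
          q = np.mean(s * np.sin(2 * np.pi * f * t))
          return i * i + q * q

      fs, dur = 1.0e6, 0.05                               # 1 MHz sampling, 50 ms window
      t = np.arange(int(fs * dur)) / fs
      # two unknown tones plus the added reference tone of known amplitude (1.0 V)
      composite = (0.5 * np.sin(2 * np.pi * 12_000 * t)
                   + 0.2 * np.sin(2 * np.pi * 31_000 * t)
                   + 1.0 * np.sin(2 * np.pi * 20_000 * t))
      p_ref = quadrature_bin(composite, 20_000, t)        # reference channel
      for f in (12_000, 31_000):                          # two bins of the stepped sweep
          v = 1.0 * np.sqrt(quadrature_bin(composite, f, t) / p_ref)
          print(f"{f} Hz: ~{v:.2f} V (approximate after hard limiting)")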

  15. Spatial early warning signals in a lake manipulation

    USGS Publications Warehouse

    Butitta, Vince L.; Carpenter, Stephen R.; Loken, Luke; Pace, Michael L.; Stanley, Emily H.

    2017-01-01

    Rapid changes in state have been documented for many of Earth's ecosystems. Despite a growing toolbox of methods for detecting declining resilience or early warning indicators (EWIs) of ecosystem transitions, these methods have rarely been evaluated in whole-ecosystem trials using reference ecosystems. In this study, we experimentally tested EWIs of cyanobacteria blooms based on changes in the spatial structure of a lake. We induced a cyanobacteria bloom by adding nutrients to an experimental lake and mapped fine-resolution spatial patterning of cyanobacteria using a mobile sensor platform. Prior to the bloom, we detected theoretically predicted spatial EWIs based on variance and spatial autocorrelation, as well as a new index based on the extreme values. Changes in EWIs were not discernible in an unenriched reference lake. Despite the fluid environment of a lake where spatial heterogeneity driven by biological processes may be overwhelmed by physical mixing, spatial EWIs detected an approaching bloom suggesting the utility of spatial metrics for signaling ecological thresholds.
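
    The three spatial indicators named above (variance, spatial autocorrelation, an extreme-value index) are straightforward to compute on a gridded pigment map. A minimal sketch on synthetic data; the rook-neighbour autocorrelation below is a simplified stand-in for a full Moran's I, and the grid values are invented.

      import numpy as np

      def spatial_ewis(field: np.ndarray):
          """Spatial early-warning indicators for a gridded map: variance,
          lag-1 spatial autocorrelation (rook neighbours), 99th percentile."""
          z = field - field.mean()
          var = z.var()
          num = (z[:, :-1] * z[:, 1:]).sum() + (z[:-1, :] * z[1:, :]).sum()
          npairs = z[:, :-1].size + z[:-1, :].size
          autocorr = (num / npairs) / var
          extreme = np.percentile(field, 99)
          return var, autocorr, extreme

      rng = np.random.default_rng(1)
      patchy = rng.normal(10, 2, (50, 50))
      patchy[20:30, 20:30] += 8          # a developing cyanobacteria patch
      print(spatial_ewis(patchy))        # variance, autocorrelation, extremes all rise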

  16. Comparative Evaluation of Veriflow® Listeria Species to USDA Culture-Based Method for the Detection of Listeria spp. in Food and Environmental Samples.

    PubMed

    Joelsson, Adam C; Terkhorn, Shawn P; Brown, Ashley S; Puri, Amrita; Pascal, Benjamin J; Gaudioso, Zara E; Siciliano, Nicholas A

    2017-09-01

    Veriflow® Listeria species (Veriflow LS) is a molecular-based assay for the presumptive detection of Listeria spp. from environmental surfaces (stainless steel, sealed concrete, plastic, and ceramic tile) and ready-to-eat (RTE) food matrixes (hot dogs and deli meat). The assay utilizes a PCR detection method coupled with a rapid, visual, flow-based assay that develops in 3 min post-PCR amplification and requires only a 24 h enrichment for maximum sensitivity. The Veriflow LS system eliminates the need for sample purification, gel electrophoresis, or fluorophore-based detection of target amplification and does not require complex data analysis. This Performance Tested MethodSM validation study demonstrated the ability of the Veriflow LS assay to detect low levels of artificially inoculated Listeria spp. in six distinct environmental and food matrixes. In each unpaired reference comparison study, probability of detection analysis indicated that there was no significant difference between the Veriflow LS method and the U.S. Department of Agriculture Food Safety and Inspection Service Microbiology Laboratory Guide Chapter 8.08 reference method. Fifty-one strains of various Listeria spp. were detected in the inclusivity study, and 35 nonspecific organisms went undetected in the exclusivity study. The study results show that the Veriflow LS is a sensitive, selective, and robust assay for the presumptive detection of Listeria spp. sampled from environmental surfaces (stainless steel, sealed concrete, plastic, and ceramic tile) and RTE food matrixes (hot dogs and deli meat).

  17. Using the charge-stabilization technique in the double ionization potential equation-of-motion calculations with dianion references.

    PubMed

    Kuś, Tomasz; Krylov, Anna I

    2011-08-28

    The charge-stabilization method is applied to double ionization potential equation-of-motion (EOM-DIP) calculations to stabilize unstable dianion reference functions. The auto-ionizing character of the dianionic reference states spoils the numeric performance of EOM-DIP, limiting applications of this method. We demonstrate that reliable excitation energies can be computed by EOM-DIP using a stabilized resonance wave function instead of the lowest energy solution corresponding to the neutral + free electron(s) state of the system. The details of the charge-stabilization procedure are discussed and illustrated by examples. The choice of optimal stabilizing Coulomb potential, which is strong enough to stabilize the dianion reference yet minimally perturbs the target states of the neutral, is the crux of the approach. Two algorithms for choosing optimal parameters of the stabilization potential are presented. One is based on the orbital energies, and the other on the basis-set dependence of the total Hartree-Fock energy of the reference. Our benchmark calculations of the singlet-triplet energy gaps in several diradicals show a remarkable improvement of the EOM-DIP accuracy in problematic cases. Overall, the excitation energies in diradicals computed using the stabilized EOM-DIP are within 0.2 eV of the reference EOM spin-flip values. © 2011 American Institute of Physics.

  18. Identification of candidate reference chemicals for in vitro steroidogenesis assays.

    PubMed

    Pinto, Caroline Lucia; Markey, Kristan; Dix, David; Browne, Patience

    2018-03-01

    The Endocrine Disruptor Screening Program (EDSP) is transitioning from traditional testing methods to integrating ToxCast/Tox21 in vitro high-throughput screening assays for identifying chemicals with endocrine bioactivity. The ToxCast high-throughput H295R steroidogenesis assay may potentially replace the low-throughput assays currently used in the EDSP Tier 1 battery to detect chemicals that alter the synthesis of androgens and estrogens. Herein, we describe an approach for identifying in vitro candidate reference chemicals that affect the production of androgens and estrogens in models of steroidogenesis. Candidate reference chemicals were identified from a review of H295R and gonad-derived in vitro assays used in methods validation and published in the scientific literature. A total of 29 chemicals affecting androgen and estrogen levels satisfied all criteria for positive reference chemicals, while an additional set of 21 and 15 chemicals partially fulfilled criteria for positive reference chemicals for androgens and estrogens, respectively. The identified chemicals included pesticides, pharmaceuticals, industrial and naturally-occurring chemicals with the capability to increase or decrease the levels of the sex hormones in vitro. Additionally, 14 and 15 compounds were identified as potential negative reference chemicals for effects on androgens and estrogens, respectively. These candidate reference chemicals will be informative for performance-based validation of in vitro steroidogenesis models. Copyright © 2017. Published by Elsevier Ltd.

  19. A preliminary verification of the floating reference measurement method for non-invasive blood glucose sensing

    NASA Astrophysics Data System (ADS)

    Min, Xiaolin; Liu, Rong; Fu, Bo; Xu, Kexin

    2017-06-01

    In the non-invasive sensing of blood glucose by near-infrared diffuse reflectance spectroscopy, the spectrum is highly susceptible to the unstable and complicated background variations from the human body and the environment. In in vitro analyses, background variations are usually corrected by the spectrum of a standard reference sample that has similar optical properties to the analyte of interest. However, it is hard to find a standard sample for in vivo measurement. Therefore, the floating reference measurement method is proposed to enable relative measurements in vivo, where the spectra at a special source-detector distance, defined as the floating reference position, are insensitive to the changes in glucose concentration caused by the absorption and scattering effects. Because the diffuse reflectance signals at the floating reference positions reflect only the background variations during the measurement, they can be used as an internal reference. In this paper, the theoretical basis of the floating reference positions in a semi-infinite turbid medium was discussed based on the steady-state diffusion equation and its analytical solutions in a semi-infinite turbid medium (under the extrapolated boundary conditions). Then, Monte-Carlo (MC) simulations and in vitro experiments, based on a custom-built continuously moving, spatially resolving double-fiber NIR measurement system configured with two types of light source, a superluminescent diode (SLD) and a supercontinuum laser, were carried out to verify the existence of the floating reference position in 5%, 10% and 20% Intralipid solutions. The results showed that the simulated values of the floating reference positions are close to the theoretical results, with a maximum deviation of approximately 0.3 mm in 1100-1320 nm. Great differences can be observed in 1340-1400 nm because the optical properties of Intralipid in this region do not satisfy the conditions of the steady-state diffusion equation. For the in vitro experiments, floating reference positions exist at 1220 nm and 1320 nm under the two types of light source, and the results are quite close. However, the reference positions obtained from experiments are further from the light source compared with those obtained in the MC simulation. For the turbid media and the wavelengths investigated, the difference is up to 1 mm. This study is important for the design of optical fibers to be applied in the floating reference measurement.
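
    The underlying idea can be illustrated with the standard steady-state diffusion-approximation reflectance for a semi-infinite medium with an extrapolated boundary (the Farrell model): perturb the optical properties the way glucose would (absorption slightly up, reduced scattering slightly down) and look for the source-detector distance least sensitive to the change. The perturbation magnitudes below are illustrative only, not the paper's values.

      import numpy as np

      def diffuse_reflectance(rho, mua, musp, A=2.95):
          """Farrell-model R(rho) for a semi-infinite medium, extrapolated
          boundary. rho in mm, mua/musp in mm^-1; A encodes index mismatch."""
          mut = mua + musp
          D = 1.0 / (3.0 * mut)
          mueff = np.sqrt(mua / D)
          z0, zb = 1.0 / mut, 2.0 * A * D
          r1 = np.sqrt(z0**2 + rho**2)
          r2 = np.sqrt((z0 + 2 * zb)**2 + rho**2)
          term = lambda z, r: z * (mueff + 1.0 / r) * np.exp(-mueff * r) / r**2
          return (term(z0, r1) + term(z0 + 2 * zb, r2)) / (4.0 * np.pi)

      rho = np.linspace(0.2, 6.0, 600)
      base = diffuse_reflectance(rho, mua=0.0100, musp=1.000)
      # glucose raises absorption slightly and lowers reduced scattering slightly
      glc = diffuse_reflectance(rho, mua=0.0102, musp=0.998)
      dR = glc - base
      floating = rho[int(np.argmin(np.abs(dR)))]   # least glucose-sensitive distance
      print(f"floating reference position ~ {floating:.2f} mm")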

  20. Development of Extended Ray-tracing method including diffraction, polarization and wave decay effects

    NASA Astrophysics Data System (ADS)

    Yanagihara, Kota; Kubo, Shin; Dodin, Ilya; Nakamura, Hiroaki; Tsujimura, Toru

    2017-10-01

    Geometrical-optics ray tracing is a reasonable numerical approach for describing the electron cyclotron resonance wave (ECW) in slowly varying, spatially inhomogeneous plasma. It is well known that results with this conventional method are adequate in most cases. However, in helical fusion plasmas, which have a complicated magnetic structure, strong magnetic shear combined with a large density scale length can cause mode coupling of waves outside the last closed flux surface, and the complicated absorption structure requires a strongly focused wave for ECH. Since the conventional ray equations describing the ECW contain no terms for diffraction, polarization, or wave decay effects, they cannot accurately describe mode coupling of waves, strongly focused waves, the behavior of waves in an inhomogeneous absorption region, and so on. As a fundamental solution to these problems, we consider an extension of the ray-tracing method. The specific procedure is planned as follows. First, calculate the reference ray by the conventional method and define a local ray-based coordinate system along the reference ray. Then, calculate the evolution of the amplitude and phase distributions on the ray-based coordinates step by step. The progress of our extended method will be presented.

  1. Fluorescence photon migration techniques for the on-farm measurement of somatic cell count in fresh cow's milk

    NASA Astrophysics Data System (ADS)

    Khoo, Geoffrey; Kuennemeyer, Rainer; Claycomb, Rod W.

    2005-04-01

    Currently, the state of the art of mastitis detection in dairy cows is the laboratory-based measurement of somatic cell count (SCC), which is time consuming and expensive. Alternative, rapid, and reliable on-farm measurement methods are required for effective farm management. We have investigated whether fluorescence lifetime measurements can determine SCC in fresh, unprocessed milk. The method is based on the change in fluorescence lifetime of ethidium bromide when it binds to DNA from the somatic cells. Milk samples were obtained from a Fullwood Merlin Automated Milking System and analysed within a twenty-four hour period, over which the SCC does not change appreciably. For reference, the milk samples were also sent to a testing laboratory where the SCC was determined by traditional methods. The results show that we can quantify SCC using the fluorescence photon migration method from a lower bound of 4×10^5 cells mL^-1 to an upper bound of 1×10^7 cells mL^-1. The upper bound is due to the reference method used, while the cause of the lower bound is as yet unknown.

  2. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    PubMed

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with a slightly additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
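
    The core ideas of BMAP (a modeled background used as a long-term reference, and per-block selection of the prediction source) can be sketched compactly. The running-average background model and the residual-energy block decision below are simplified stand-ins for the paper's scheme; thresholds and names are hypothetical.

      import numpy as np

      def running_background(frames, alpha=0.05):
          """Running-average background model, a simple stand-in for the
          background modeled from original input frames in BMAP."""
          bg = frames[0].astype(float)
          for f in frames[1:]:
              bg = (1 - alpha) * bg + alpha * f.astype(float)
          return bg

      def best_reference(block, prev_block, bg_block):
          """Pick the cheaper reference for one block: the long-term background
          reference (as in BRP) or the previous frame, by residual energy."""
          e_bg = np.mean((block - bg_block) ** 2)
          e_prev = np.mean((block - prev_block) ** 2)
          return ("background-reference", e_bg) if e_bg <= e_prev else ("previous-frame", e_prev)

      # static background blocks favor the background reference; moving
      # objects fall back to ordinary previous-frame (inter) prediction.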

  3. Publications on dementia in Medline 1974-2009: a quantitative bibliometric study.

    PubMed

    Theander, Sten S; Gustafson, Lars

    2013-05-01

    The aim is to describe the development of the scientific literature on dementia. We present a quantitative, bibliometric study of the literature on dementia, based on Medline, covering 36 years (1974-2009). Two samples of references to dementia papers were retrieved: the main sample, based on the MeSH term Dementia, holds more than 88,500 references. We have compared the annual additions of references on dementia with the additions to total Medline. Changes in 'the Dementia to Medline ratio' (%) give the best information on the development. Publications on dementia increased 5.6 times faster than Medline. Most of this relative acceleration took place during 1980-1997, when the references on dementia increased from 0.17 to 0.78%. During the most recent 12 years, publications on dementia have kept pace with Medline and have stabilized around 0.8%. We have shown a large increase in the literature on dementia, relative both to the development of all medical research and to all psychiatric research. The bibliometric approach may be questioned, as quantitative methods treat articles as being of equal value, which is not true. If, for example, during a certain period, the research output is 'inflated' by a great number of repetitive papers, the quantitative method will give an unfair picture of the development. Our relative method, however, will give relevant results because, at each point in time, the proportion of 'valuable research' ought to be about the same in the dementia group as in total Medline. Copyright © 2012 John Wiley & Sons, Ltd.

  4. Communication: Density functional theory model for multi-reference systems based on the exact-exchange hole normalization

    NASA Astrophysics Data System (ADS)

    Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian

    2018-03-01

    The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximative density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description for multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method at the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.

  5. Optical profile determining apparatus and associated methods including the use of a plurality of wavelengths in the reference beam and a plurality of wavelengths in a reflective transit beam

    NASA Technical Reports Server (NTRS)

    Montgomery, Robert M. (Inventor)

    2006-01-01

    An optical profile determining apparatus includes an optical detector and an optical source. The optical source generates a transmit beam including a plurality of wavelengths, and generates a reference beam including the plurality of wavelengths. Optical elements direct the transmit beam to a target, direct a resulting reflected transmit beam back from the target to the optical detector, and combine the reference beam with the reflected transmit beam so that a profile of the target is based upon fringe contrast produced by the plurality of wavelengths in the reference beam and the plurality of wavelengths in the reflected transmit beam.

  6. Communication: Density functional theory model for multi-reference systems based on the exact-exchange hole normalization.

    PubMed

    Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian

    2018-03-28

    The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximative density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description for multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method at the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.

  7. Investigation of Aerosol Surface Area Estimation from Number and Mass Concentration Measurements: Particle Density Effect.

    PubMed

    Ku, Bon Ki; Evans, Douglas E

    2012-04-01

    For nanoparticles with nonspherical morphologies, e.g., open agglomerates or fibrous particles, it is expected that the actual density of agglomerates may be significantly different from the bulk material density. It is further expected that using the material density may upset the relationship between surface area and mass when a method for estimating aerosol surface area from number and mass concentrations (referred to as "Maynard's estimation method") is used. Therefore, it is necessary to quantitatively investigate how much the Maynard's estimation method depends on particle morphology and density. In this study, aerosol surface area estimated from number and mass concentration measurements was evaluated and compared with values from two reference methods: a method proposed by Lall and Friedlander for agglomerates and a mobility based method for compact nonspherical particles using well-defined polydisperse aerosols with known particle densities. Polydisperse silver aerosol particles were generated by an aerosol generation facility. Generated aerosols had a range of morphologies, count median diameters (CMD) between 25 and 50 nm, and geometric standard deviations (GSD) between 1.5 and 1.8. The surface area estimates from number and mass concentration measurements correlated well with the two reference values when gravimetric mass was used. The aerosol surface area estimates from the Maynard's estimation method were comparable to the reference method for all particle morphologies within the surface area ratios of 3.31 and 0.19 for assumed GSDs 1.5 and 1.8, respectively, when the bulk material density of silver was used. The difference between the Maynard's estimation method and surface area measured by the reference method for fractal-like agglomerates decreased from 79% to 23% when the measured effective particle density was used, while the difference for nearly spherical particles decreased from 30% to 24%. The results indicate that the use of particle density of agglomerates improves the accuracy of the Maynard's estimation method and that an effective density should be taken into account, when known, when estimating aerosol surface area of nonspherical aerosol such as open agglomerates and fibrous particles.
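
    The estimation method evaluated above follows from the Hatch-Choate relations for a lognormal size distribution: the mass concentration fixes the count median diameter (third moment), which then yields the surface area (second moment). A minimal sketch showing the density effect the abstract describes; the input concentrations and the "effective" density value are illustrative.

      import numpy as np

      def lognormal_surface_area(number_conc, mass_conc, density, gsd=1.8):
          """Surface area [cm^2/cm^3] from number [#/cm^3] and mass [g/cm^3]
          concentrations, assuming a lognormal distribution with the given GSD."""
          ln2 = np.log(gsd) ** 2
          # count median diameter from the third (mass) moment:
          cmd = (6 * mass_conc
                 / (np.pi * density * number_conc * np.exp(4.5 * ln2))) ** (1 / 3)
          # surface area from the second moment:
          return number_conc * np.pi * cmd**2 * np.exp(2 * ln2)

      # bulk density of silver vs a hypothetical lower effective agglomerate density:
      for rho in (10.5, 4.0):   # g/cm^3
          s = lognormal_surface_area(1e5, 1e-9, rho)
          print(f"density {rho} g/cm^3 -> surface area {s:.2e} cm^2/cm^3")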

  8. Adapting the ISO 20462 softcopy ruler method for online image quality studies

    NASA Astrophysics Data System (ADS)

    Burns, Peter D.; Phillips, Jonathan B.; Williams, Don

    2013-01-01

    In this paper we address the problem of image quality assessment with no-reference metrics, focusing on JPEG-corrupted images. In general, no-reference metrics cannot measure distortions with uniform performance across their full distortion range and across different image contents. The crosstalk between content and distortion signals influences human perception. We propose two strategies to improve the correlation between subjective and objective quality data. The first strategy is based on grouping the images according to their spatial complexity. The second is based on a frequency analysis. Both strategies are tested on two databases available in the literature. The results show an improvement in the correlations between no-reference metrics and psycho-visual data, evaluated in terms of the Pearson correlation coefficient.

  9. School-Based Assessment of ADHD: Purpose, Alignment with Best Practice Guidelines, and Training

    ERIC Educational Resources Information Center

    Ogg, Julia; Fefer, Sarah; Sundman-Wheat, Ashley; McMahan, Melanie; Stewart, Tiffany; Chappel, Ashley; Bateman, Lisa

    2013-01-01

    Youth exhibiting symptoms of attention deficit hyperactivity disorder are frequently referred to school psychologists because of academic, social, and behavioral difficulties that they face. To address these difficulties, evidence-based assessment methods have been outlined for multiple purposes of assessment. The goals of this study were to…

  10. Influence in Action in "Catch Me if You Can"

    ERIC Educational Resources Information Center

    Meyer, Gary; Roberto, Anthony J.

    2005-01-01

    For decades, scholars have worked to understand the precise manner in which messages affect attitudes and ultimately behaviors. The dominant paradigm suggests that there are two methods or routes to attitude change, one based on careful consideration of the messages and the other based on simple decision rules, often referred to as heuristics…

  11. The Effect of Web-Based Collaborative Learning Methods to the Accounting Courses in Technical Education

    ERIC Educational Resources Information Center

    Cheng, K. W. Kevin

    2009-01-01

    This study mainly explored the effect of applying web-based collaborative learning instruction to the accounting curriculum on student's problem-solving attitudes in Technical Education. The research findings and proposed suggestions would serve as a reference for the development of accounting-related curricula and teaching strategies. To achieve…

  12. Inquiry Based Teaching in Turkey: A Content Analysis of Research Reports

    ERIC Educational Resources Information Center

    Kizilaslan, Aydin; Sozbilir, Mustafa; Yasar, M. Diyaddin

    2012-01-01

    Inquiry-based learning [IBL] enhances students' critical thinking abilities and help students to act as a scientist through using scientific method while learning. Specifically, inquiry as a teaching approach has been defined in many ways, the most important one is referred to nature of constructing knowledge while the individuals possess a…

  13. Common and Specific Factors Approaches to Home-Based Treatment: I-FAST and MST

    ERIC Educational Resources Information Center

    Lee, Mo Yee; Greene, Gilbert J.; Fraser, J. Scott; Edwards, Shivani G.; Grove, David; Solovey, Andrew D.; Scott, Pamela

    2013-01-01

    Objectives: This study examined the treatment outcomes of integrated families and systems treatment (I-FAST), a moderated common factors approach, in reference to multisystemic therapy (MST), an established specific factor approach, for treating at risk children and adolescents and their families in an intensive community-based setting. Method:…

  14. Education Quality in Kazakhstan in the Context of Competence-Based Approach

    ERIC Educational Resources Information Center

    Nabi, Yskak; Zhaxylykova, Nuriya Ermuhametovna; Kenbaeva, Gulmira Kaparbaevna; Tolbayev, Abdikerim; Bekbaeva, Zeinep Nusipovna

    2016-01-01

    The background of this paper is to present how education system of Kazakhstan evolved during the last 24 years of independence, highlighting the contemporary transformational processes. We defined the aim to identify the education quality in the context of competence-based approach. Methods: Analysis of references, interviewing, experimental work.…

  15. Curriculum-based Measurement in Assessing Bilingual Students: A Promising New Direction.

    ERIC Educational Resources Information Center

    Bentz, Johnell; Pavri, Shireen

    2000-01-01

    This article discusses the problems with traditional methods of assessing bilingual students and describes curriculum-based measurement (CBM) for use with bilingual Hispanic students. Additional information about the features of CBM is presented along with issues related to the use of CBM with bilingual Hispanic students. (Contains references.)…

  16. Control of a HexaPOD treatment couch for robot-assisted radiotherapy.

    PubMed

    Hermann, Christian; Ma, Lei; Wilbert, Jürgen; Baier, Kurt; Schilling, Klaus

    2012-10-01

    Moving tumors, for example in the vicinity of the lungs, pose a challenging problem in radiotherapy, as healthy tissue should not be irradiated. Apart from gating approaches, one standard method is to irradiate the complete volume within which a tumor moves plus a safety margin containing a considerable volume of healthy tissue. This work deals with a system for tumor motion compensation using the HexaPOD® robotic treatment couch (Medical Intelligence GmbH, Schwabmünchen, Germany). The HexaPOD, carrying the patient during treatment, is instructed to perform translational movements such that the tumor motion, from the beams-eye view of the linear accelerator, is eliminated. The dynamics of the HexaPOD are characterized by time delays, saturations, and other non-linearities that make the design of control a challenging task. The focus of this work lies on two control methods for the HexaPOD that can be used for reference tracking. The first method uses a model predictive controller based on a model gained through system identification methods, and the second method uses a position control scheme useful for reference tracking. We compared the tracking performance of both methods in various experiments with real hardware using ideal reference trajectories, prerecorded patient trajectories, and human volunteers whose breathing motion was compensated by the system.
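
    The paper compares a model-predictive controller and a position-control scheme; neither is reproduced here. As a much simpler stand-in, the sketch below tracks a breathing-like couch reference with a discrete PID loop and a saturated velocity command (a nod to the saturations the abstract mentions; the time delays that motivate the MPC are ignored). All gains and limits are hypothetical.

      import numpy as np

      def track(reference, dt=0.02, kp=8.0, ki=2.0, kd=0.5, v_max=8.0):
          """Discrete PID position loop; the couch axis is modeled as a pure
          integrator and the velocity command is clipped to mimic actuator limits."""
          pos, integ, prev_e = 0.0, 0.0, 0.0
          out = []
          for r in reference:
              e = r - pos
              integ += e * dt
              v = kp * e + ki * integ + kd * (e - prev_e) / dt
              v = np.clip(v, -v_max, v_max)    # saturation nonlinearity
              pos += v * dt
              prev_e = e
              out.append(pos)
          return np.array(out)

      t = np.arange(0, 20, 0.02)
      breathing = 6.0 * np.sin(2 * np.pi * t / 4.0)   # 12 mm peak-to-peak, 4 s period
      tracked = track(breathing)
      print("RMS tracking error [mm]:", np.sqrt(np.mean((breathing - tracked) ** 2)))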

  17. Development and validation of a method for mercury determination in seawater for the process control of a candidate certified reference material.

    PubMed

    Sánchez, Raquel; Snell, James; Held, Andrea; Emons, Hendrik

    2015-08-01

    A simple, robust and reliable method for mercury determination in seawater matrices based on the combination of cold vapour generation and inductively coupled plasma mass spectrometry (CV-ICP-MS) and its complete in-house validation are described. The method validation covers parameters such as linearity, limit of detection (LOD), limit of quantification (LOQ), trueness, repeatability, intermediate precision and robustness. A calibration curve covering the whole working range was achieved with coefficients of determination typically higher than 0.9992. The repeatability of the method (RSDrep) was 0.5 %, and the intermediate precision was 2.3 % at the target mass fraction of 20 ng/kg. Moreover, the method was robust with respect to the salinity of the seawater. The limit of quantification was 2.7 ng/kg, which corresponds to 13.5 % of the target mass fraction in the future certified reference material (20 ng/kg). An uncertainty budget for the measurement of mercury in seawater has been established. The relative expanded (k = 2) combined uncertainty is 6 %. The performance of the validated method was demonstrated by generating results for process control and a homogeneity study for the production of a candidate certified reference material.

  18. Development and Validation of a Rapid (13)C6-Glucose Isotope Dilution UPLC-MRM Mass Spectrometry Method for Use in Determining System Accuracy and Performance of Blood Glucose Monitoring Devices.

    PubMed

    Matsunami, Risë K; Angelides, Kimon; Engler, David A

    2015-05-18

    There is currently considerable discussion about the accuracy of blood glucose concentrations determined by personal blood glucose monitoring systems (BGMS). To date, the FDA has allowed new BGMS to demonstrate accuracy in reference to other glucose measurement systems that use the same or similar enzymatic-based methods to determine glucose concentration. These types of reference measurement procedures are only comparative in nature and are subject to the same potential sources of error in measurement and system perturbations as the device under evaluation. It would be ideal to have a completely orthogonal primary method that could serve as a true standard reference measurement procedure for establishing the accuracy of new BGMS. An isotope-dilution liquid chromatography/mass spectrometry (ID-UPLC-MRM) assay was developed using (13)C6-glucose as a stable isotope analogue to specifically measure glucose concentration in human plasma, and validated for use against NIST standard reference materials, and against fresh isolates of whole blood and plasma into which exogenous glucose had been spiked. Assay performance was quantified to NIST-traceable dry weight measures for both glucose and (13)C6-glucose. The newly developed assay method was shown to be rapid, highly specific, sensitive, accurate, and precise for measuring plasma glucose levels. The assay displayed sufficient dynamic range and linearity to measure across the range of both normal and diabetic blood glucose levels. Assay performance was measured to within the same uncertainty levels (<1%) as the NIST definitive method for glucose measurement in human serum. The newly developed ID UPLC-MRM assay can serve as a validated reference measurement procedure to which new BGMS can be assessed for glucose measurement performance. © 2015 Diabetes Technology Society.
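
    The quantitative core of an isotope-dilution assay like this is a ratio calibration: every standard and sample receives the same spike of the labeled analogue, and concentration is read from the analyte-to-spike peak-area ratio, so matrix and ionization effects largely cancel. A generic sketch with invented numbers, not the authors' validated procedure:

      import numpy as np

      # hypothetical calibration: glucose standards of known concentration,
      # each spiked with a fixed amount of 13C6-glucose internal standard
      std_conc = np.array([2.0, 5.0, 10.0, 15.0, 20.0])       # mmol/L
      area_ratio = np.array([0.21, 0.52, 1.01, 1.49, 2.03])   # area(12C6)/area(13C6)

      slope, intercept = np.polyfit(area_ratio, std_conc, 1)

      def glucose_conc(sample_ratio):
          """Plasma glucose from the measured 12C/13C peak-area ratio."""
          return slope * sample_ratio + intercept

      print(f"{glucose_conc(0.83):.1f} mmol/L")   # hypothetical patient sample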

  19. A candidate reference method using ICP-MS for sweat chloride quantification.

    PubMed

    Collie, Jake T; Massie, R John; Jones, Oliver A H; Morrison, Paul D; Greaves, Ronda F

    2016-04-01

    The aim of the study was to develop a method for sweat chloride (Cl) quantification using Inductively Coupled Plasma Mass Spectrometry (ICP-MS) to present to the Joint Committee for Traceability in Laboratory Medicine (JCTLM) as a candidate reference method for the diagnosis of cystic fibrosis (CF). Calibration standards were prepared from sodium chloride (NaCl) to cover the expected range of sweat Cl values. Germanium (Ge) and scandium (Sc) were selected as on-line (instrument based) internal standards (IS) and gallium (Ga) as the off-line (sample based) IS. The method was validated through linearity, accuracy and imprecision studies as well as enrolment into the Royal College of Pathologists of Australasia Quality Assurance Program (RCPAQAP) for sweat electrolyte testing. Two variations of the ICP-MS method were developed, an on-line and off-line IS, and compared. Linearity was determined up to 225 mmol/L with a limit of quantitation of 7.4 mmol/L. The off-line IS demonstrated increased accuracy through the RCPAQAP performance assessment (CV of 1.9%, bias of 1.5 mmol/L) in comparison to the on-line IS (CV of 8.0%, bias of 3.8 mmol/L). Paired t-tests confirmed no significant differences between sample means of the two IS methods (p=0.53) or from each method against the RCPAQAP target values (p=0.08 and p=0.29). Both on and off-line IS methods generated highly reproducible results and excellent linear comparison to the RCPAQAP target results. ICP-MS is a highly accurate method with a low limit of quantitation for sweat Cl analysis and should be recognised as a candidate reference method for the monitoring and diagnosis of CF. Laboratories that currently practice sweat Cl analysis using ICP-MS should include an off-line IS to help negate any pre-analytical errors.

  20. Validation of amino-acids measurement in dried blood spot by FIA-MS/MS for PKU management.

    PubMed

    Bruno, C; Dufour-Rainfray, D; Patin, F; Vourc'h, P; Guilloteau, D; Maillot, F; Labarthe, F; Tardieu, M; Andres, C R; Emond, P; Blasco, H

    2016-09-01

    Phenylketonuria (PKU) is a metabolic disorder leading to high concentrations of phenylalanine (Phe) and low concentrations of tyrosine (Tyr) in blood and brain that may be neurotoxic. This disease requires regular monitoring of plasma Phe and Tyr as well as branched-chain amino-acid concentrations to adapt the Phe-restricted diet and other therapy that may be prescribed in PKU. We validated a Flow Injection Analysis tandem Mass Spectrometry (FIA-MS/MS) method to replace the enzymatic method routinely used for neonatal screening, in order to monitor, in parallel with Phe, the Tyr and branched-chain amino-acids not detected by the enzymatic method. We ascertained the performance of the method: linearity, detection and quantification limits, contamination index, accuracy. We cross-validated the FIA-MS/MS and enzymatic methods and evaluated our own reference ranges for Phe, Tyr, Leu and Val on 59 dried blood spots from normal controls. We also evaluated Tyr, Leu and Val concentrations in PKU patients to detect potential abnormalities not evaluated by the enzymatic method. We developed a rapid method with excellent performance, including precision and accuracy <15%. We noted an excellent correlation of Phe concentrations between the FIA-MS/MS and enzymatic methods (p<0.0001) based on our database, which are similar to published reference ranges. We observed that 50% of PKU patients had lower concentrations of Tyr, Leu and/or Val that could not be detected by the enzymatic method. Based on laboratory accreditation recommendations, we validated a robust, rapid and reliable FIA-MS/MS method to monitor plasma Phe concentrations, as well as Tyr, Leu and Val concentrations, suitable for PKU management. We evaluated our own reference ranges of concentration for routine application of this method. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  1. Trends in chemical ecology revealed with a personal computer program for searching data bases of scientific references and abstracts.

    PubMed

    Byers, J A

    1992-09-01

    A compiled program, JCE-REFS.EXE (coded in the QuickBASIC language), for use on IBM-compatible personal computers is described. The program converts a DOS text file of current B-I-T-S (BIOSIS Information Transfer System) or BIOSIS Previews references into a DOS file of citations, including abstracts, in a general style used by scientific journals. The latter file can be imported directly into a word processor or the program can convert the file into a random access data base of the references. The program can search the data base for up to 40 text strings with Boolean logic. Selected references in the data base can be exported as a DOS text file of citations. Using the search facility, articles in the Journal of Chemical Ecology from 1975 to 1991 were searched for certain key words in regard to semiochemicals, taxa, methods, chemical classes, and biological terms to determine trends in usage over the period. Positive trends were statistically significant in the use of the words: semiochemical, allomone, allelochemic, deterrent, repellent, plants, angiosperms, dicots, wind tunnel, olfactometer, electrophysiology, mass spectrometry, ketone, evolution, physiology, herbivore, defense, and receptor. Significant negative trends were found for: pheromone, vertebrates, mammals, Coleoptera, Scolytidae, Dendroctonus, lactone, isomer, and calling.
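
    A modern analogue of the program's Boolean text-string search is easy to sketch. The snippet below assumes a plain-text export with records separated by blank lines (the filename and search terms are hypothetical); it is an illustration of the same idea, not a port of JCE-REFS.EXE.

      import re

      def matches(record: str, require=(), exclude=(), any_of=()):
          """Boolean keyword search over one reference record: all `require`
          terms must occur, no `exclude` term may occur, and at least one
          `any_of` term must occur (if given). Case-insensitive."""
          text = record.lower()
          if any(term.lower() not in text for term in require):
              return False
          if any(term.lower() in text for term in exclude):
              return False
          return not any_of or any(term.lower() in text for term in any_of)

      with open("references.txt") as fh:               # hypothetical export file
          records = re.split(r"\n\s*\n", fh.read())
      hits = [r for r in records
              if matches(r, require=("pheromone",), any_of=("Coleoptera", "Scolytidae"))]
      print(len(hits), "matching references")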

  2. Establishment of reference intervals of clinical chemistry analytes for the adult population in Saudi Arabia: a study conducted as a part of the IFCC global study on reference values.

    PubMed

    Borai, Anwar; Ichihara, Kiyoshi; Al Masaud, Abdulaziz; Tamimi, Waleed; Bahijri, Suhad; Armbuster, David; Bawazeer, Ali; Nawajha, Mustafa; Otaibi, Nawaf; Khalil, Haitham; Kawano, Reo; Kaddam, Ibrahim; Abdelaal, Mohamed

    2016-05-01

    This study is a part of the IFCC global study to derive reference intervals (RIs) for 28 chemistry analytes in Saudis. Healthy individuals (n=826) aged ≥18 years were recruited using the global study protocol. All specimens were measured using an Architect analyzer. RIs were derived by both parametric and non-parametric methods for comparison. The need for secondary exclusion of reference values based on the latent abnormal values exclusion (LAVE) method was examined. The magnitude of variation attributable to gender, age and region was calculated as the standard deviation ratio (SDR). Sources of variation (age, BMI, physical exercise and smoking level) were investigated using multiple regression analysis. SDRs for gender, age and regional differences were significant for 14, 8 and 2 analytes, respectively. BMI-related changes in test results were noted conspicuously for CRP. For some metabolism-related parameters the RIs obtained by the non-parametric method were wider than those by the parametric method, and RIs derived using the LAVE method differed significantly from those derived without it. RIs were derived with and without gender partition (BMI, drugs and supplements were considered). RIs applicable to Saudis were established for the majority of chemistry analytes, whereas gender, regional and age partitioning of RIs was required for some analytes. The elevated upper limits of metabolic analytes reflect the high prevalence of metabolic syndrome in the Saudi population.
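
    The two RI derivations compared above can be sketched as follows. The SDR here is computed from one-way ANOVA components under the simplifying assumption of equal group sizes; it follows the general idea of a between-group-to-within-group SD ratio and is not necessarily the exact formula of the IFCC protocol. All data are simulated.

    import numpy as np

    def ri_parametric(x):
        # Parametric RI assuming normality: mean +/- 1.96 SD.
        m, s = np.mean(x), np.std(x, ddof=1)
        return m - 1.96 * s, m + 1.96 * s

    def ri_nonparametric(x):
        # Non-parametric RI: 2.5th and 97.5th percentiles.
        return tuple(np.percentile(x, [2.5, 97.5]))

    def sdr(groups):
        # Between-group SD over within-group SD from one-way ANOVA
        # components, assuming equal group sizes (a simplification).
        k, n = len(groups), len(groups[0])
        grand = np.mean(np.concatenate(groups))
        ms_between = n * sum((np.mean(g) - grand) ** 2 for g in groups) / (k - 1)
        ms_within = np.mean([np.var(g, ddof=1) for g in groups])
        sd_between = np.sqrt(max(0.0, (ms_between - ms_within) / n))
        return sd_between / np.sqrt(ms_within)

    rng = np.random.default_rng(6)
    men, women = rng.normal(1.0, 0.2, 200), rng.normal(0.8, 0.2, 200)
    combined = np.concatenate([men, women])
    print(ri_parametric(combined), ri_nonparametric(combined))
    print(f"SDR for gender: {sdr([men, women]):.2f}")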

  3. Physically consistent data assimilation method based on feedback control for patient-specific blood flow analysis.

    PubMed

    Ii, Satoshi; Adib, Mohd Azrul Hisham Mohd; Watanabe, Yoshiyuki; Wada, Shigeo

    2018-01-01

    This paper presents a novel data assimilation method for patient-specific blood flow analysis based on feedback control theory, called the physically consistent feedback-control-based data assimilation (PFC-DA) method. In the PFC-DA method, the signal, which is the residual error term of the velocity when comparing the numerical and reference measurement data, is cast as a source term in a Poisson equation for the scalar potential field that induces flow in a closed system. The pressure values at the inlet and outlet boundaries are recursively calculated from this scalar potential field. Hence, the flow field is physically consistent because it is driven by the calculated inlet and outlet pressures, without any artificial body forces. Compared with existing variational approaches, although the PFC-DA method does not guarantee the optimal solution, it requires only one additional Poisson equation for the scalar potential field, a remarkable improvement for such a small additional computational cost at every iteration. Through numerical examples for 2D and 3D exact flow fields, with both noise-free and noisy reference data, as well as a blood flow analysis on a cerebral aneurysm using actual patient data, the robustness and accuracy of this approach are shown. Moreover, the feasibility of a patient-specific practical blood flow analysis is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
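
    The core numerical ingredient described above, a Poisson solve driven by the velocity mismatch, can be illustrated schematically. The grid, source term, and zero boundary conditions below are simplifying assumptions for a toy 2D case, not the authors' formulation.

    import numpy as np

    def solve_poisson(source, h, iters=5000):
        # Jacobi iteration for lap(phi) = source, with phi = 0 on the boundary.
        phi = np.zeros_like(source)
        for _ in range(iters):
            phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                      + phi[1:-1, 2:] + phi[1:-1, :-2]
                                      - h * h * source[1:-1, 1:-1])
        return phi

    # Stand-in source: a localized mismatch between measured and computed
    # velocity fields on a toy 64x64 grid.
    h = 1.0 / 63
    residual = np.zeros((64, 64))
    residual[32, 32] = 1.0
    phi = solve_poisson(residual, h)
    print(phi[32, 32])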

  4. Automatic parquet block sorting using real-time spectral classification

    NASA Astrophysics Data System (ADS)

    Astrom, Anders; Astrand, Erik; Johansson, Magnus

    1999-03-01

    This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects the information onto an image sensor, a method often referred to as Imaging Spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features of the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards with very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with high spectral discrimination ability. Using a powerful illuminator, the system can run with a line frequency exceeding 2000 lines/s. This opens up the possibility of maintaining high production speed while still measuring with good resolution.
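
    Conceptually, per-pixel linear spectral classification of the kind described reduces to scoring each pixel's spectrum against one linear model per class and taking the argmax. The sketch below uses random stand-ins for trained class weights and scanned spectra; names and sizes are invented.

    import numpy as np

    n_bands, n_pixels = 16, 512
    rng = np.random.default_rng(1)
    line_spectra = rng.random((n_pixels, n_bands))     # one scanned line

    classes = ["sound", "blue_stain", "white_sapwood", "yellow_decay", "red_decay"]
    W = rng.normal(size=(len(classes), n_bands))       # one linear model per class
    b = np.zeros(len(classes))

    scores = line_spectra @ W.T + b                    # (n_pixels, n_classes)
    labels = scores.argmax(axis=1)                     # winning class per pixel
    defect_pct = 100.0 * np.mean(labels != classes.index("sound"))
    print(f"defect pixels on this line: {defect_pct:.1f}%")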

  5. Rapid identification of oral Actinomyces species cultivated from subgingival biofilm by MALDI-TOF-MS

    PubMed Central

    Stingu, Catalina S.; Borgmann, Toralf; Rodloff, Arne C.; Vielkind, Paul; Jentsch, Holger; Schellenberger, Wolfgang; Eschrich, Klaus

    2015-01-01

    Background Actinomyces are a common part of the resident flora of the human intestinal tract, genitourinary system and skin. Isolation and identification of Actinomyces by conventional methods is often difficult and time consuming. In recent years, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) has become a rapid and simple method to identify bacteria. Objective The present study evaluated a new in-house algorithm using MALDI-TOF-MS for rapid identification of different species of oral Actinomyces cultivated from subgingival biofilm. Design Eleven reference strains and 674 clinical strains were used in this study. All the strains were preliminarily identified using biochemical methods and then subjected to MALDI-TOF-MS analysis using both similarity-based analysis and classification methods (support vector machine [SVM]). The genotype of the reference strains and of 232 clinical strains was identified by sequence analysis of the 16S ribosomal RNA (rRNA) gene. Results The sequence analysis of the 16S rRNA gene of all reference strains confirmed their previous identification. The MALDI-TOF-MS spectra obtained from the reference strains and the other clinical strains unambiguously identified as Actinomyces by 16S rRNA sequencing were used to create the mass spectra reference database. Even a visual inspection of the mass spectra of different species reveals both similarities and differences. However, the differences between them are not large enough to allow a reliable differentiation by similarity analysis. Therefore, classification methods were applied as an alternative approach for differentiation and identification of Actinomyces at the species level. A cross-validation of the reference database representing 14 Actinomyces species yielded correct results for all species represented by more than two strains in the database. Conclusions Our results suggest that the combination of MALDI-TOF-MS with powerful classification algorithms, such as SVMs, provides a useful tool for the differentiation and identification of oral Actinomyces. PMID:25597306
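
    The SVM classification step can be sketched with scikit-learn as below. The spectra, species labels, and feature binning are synthetic placeholders, not the study's data or its in-house algorithm.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    X = rng.random((120, 300))          # 120 spectra, 300 m/z intensity bins
    y = rng.integers(0, 3, size=120)    # 3 hypothetical Actinomyces species

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f}")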

  6. Validation of no-reference image quality index for the assessment of digital mammographic images

    NASA Astrophysics Data System (ADS)

    de Oliveira, Helder C. R.; Barufaldi, Bruno; Borges, Lucas R.; Gabarda, Salvador; Bakic, Predrag R.; Maidment, Andrew D. A.; Schiabel, Homero; Vieira, Marcelo A. C.

    2016-03-01

    To ensure optimal clinical performance of digital mammography, it is necessary to obtain images with high spatial resolution and low noise while keeping radiation exposure as low as possible. These requirements directly affect the interpretation by radiologists. The quality of a digital image should be assessed using objective measurements. In general, such methods measure the similarity between a degraded image and an ideal image without degradation (ground truth) used as a reference, and are called Full-Reference Image Quality Assessment (FR-IQA) methods. However, for digital mammography, an image without degradation is not available in clinical practice; thus, an objective method to assess the quality of mammograms must work without a reference. The purpose of this study is to present a Normalized Anisotropic Quality Index (NAQI), based on the Rényi entropy in the pseudo-Wigner domain, to assess mammography images in terms of spatial resolution and noise without any reference. The method was validated using synthetic images generated with an anthropomorphic breast software phantom, as well as clinical exposures of anthropomorphic physical breast phantoms and patients' mammograms. The results reported by this no-reference index follow the same behavior as well-established full-reference metrics, e.g., the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). A 50% reduction of the radiation dose in phantom images translated into decreases of 4 dB in PSNR, 25% in SSIM and 33% in NAQI, evidencing that the proposed metric is sensitive to the noise resulting from dose reduction. The clinical results showed that images reduced to 53% and 30% of the standard radiation dose showed reductions of 15% and 25% in NAQI, respectively. Thus, this index may be used in clinical practice as an image quality indicator to improve quality assurance programs in mammography; the proposed method also reduces inter-observer subjectivity in the reporting of image quality assessment.
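
    For orientation, the PSNR used above as a full-reference baseline is straightforward to compute; NAQI itself (Rényi entropy in the pseudo-Wigner domain) is more involved and is not reproduced here. A minimal sketch with invented images:

    import numpy as np

    def psnr(reference, test, peak=255.0):
        # Peak signal-to-noise ratio in dB between two same-sized images.
        mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
        return np.inf if mse == 0 else 10.0 * np.log10(peak * peak / mse)

    rng = np.random.default_rng(7)
    img = rng.integers(0, 256, size=(128, 128))
    noisy = np.clip(img + rng.normal(0, 10, img.shape), 0, 255)
    print(f"PSNR: {psnr(img, noisy):.1f} dB")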

  7. Applying Suffix Rules to Organization Name Recognition

    NASA Astrophysics Data System (ADS)

    Inui, Takashi; Murakami, Koji; Hashimoto, Taiichi; Utsumi, Kazuo; Ishikawa, Masamichi

    This paper presents a method for boosting the performance of organization name recognition, a part of named entity recognition (NER). Although gazetteers (lists of NEs) are known to be one of the effective features for supervised machine learning approaches to the NER task, previous methods applied the gazetteers to NER in a very simple way: the gazetteers were used just to search for exact matches between the input text and the NEs they contain. The proposed method generates regular-expression rules from gazetteers, and with these rules it can realize high-coverage searches based on looser matches between input text and NEs. To generate these rules, we focus on two well-known characteristics of NE expressions: 1) most NE expressions can be divided into two parts, a class-reference part and an instance-reference part, and 2) for most NE expressions the class-reference part is located at the suffix position. A pattern mining algorithm runs on the set of NEs in the gazetteers and finds frequent word sequences from which NEs are constructed. Then, we employ only the word sequences that have the class-reference part at the suffix position as suffix rules. Experimental results showed that the proposed method improved the performance of organization name recognition, achieving an F-value of 84.58 on the evaluation data.
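
    The suffix-rule idea can be sketched as follows: mine frequent final tokens (class-reference parts) from a gazetteer, then compile them into a regular expression that also matches unlisted names. The gazetteer entries and the frequency threshold are invented for illustration.

    import re
    from collections import Counter

    gazetteer = ["Acme Bank", "Tokyo Bank", "Hoshi University",
                 "Kyoto University", "Global Trading Corp."]

    # Count final tokens, which tend to be the class-reference part.
    suffixes = Counter(name.split()[-1] for name in gazetteer)
    frequent = [s for s, c in suffixes.items() if c >= 2]

    rule = re.compile(r".+\s(?:%s)$" % "|".join(map(re.escape, frequent)))
    print(rule.match("Nippon Bank"))   # matches despite not being in the gazetteer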

  8. Body fatness or anthropometry for assessment of unhealthy weight status? Comparison between methods in South African children and adolescents.

    PubMed

    Craig, Eva; Reilly, John; Bland, Ruth

    2013-11-01

    A variety of methods are available for defining undernutrition (thinness/underweight/under-fat) and overnutrition (overweight/obesity/over-fat). The extent to which these definitions agree is unclear. The present cross-sectional study aimed to assess agreement between widely used methods of assessing nutritional status in children and adolescents, and to examine the benefit of body composition estimates. The main objective was to assess underweight, overweight and obesity using four methods: (i) BMI-for-age using WHO (2007) reference data; (ii) BMI-for-age using Cole et al. and International Obesity Taskforce cut-offs; (iii) weight-for-age using the National Centre for Health Statistics/WHO growth reference 1977; and (iv) body fat percentage estimated by bio-impedance (body fat reference curves for children of McCarthy et al., 2006). Comparisons were made between methods using weighted kappa analyses. The setting was rural South Africa, with individuals (n=1519) in three age groups (school grade 1, mean age 7 years; grade 5, mean age 11 years; grade 9, mean age 15 years). In boys, prevalence of unhealthy weight status (both under- and overnutrition) was much higher at all ages with body fatness measures than with simple anthropometric proxies for body fatness; agreement between fatness and weight-based measures was fair or slight using the Landis and Koch categories. In girls, prevalence of unhealthy weight status was also higher with body fatness than with proxies, although agreement between measures ranged from fair to substantial. Methods for defining under- and overnutrition should not be considered equivalent. Weight-based measures provide highly conservative estimates of unhealthy weight status, possibly more conservative in boys. Simple body composition measures may be more informative than anthropometry for nutritional surveillance of children and adolescents.
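
    The weighted kappa comparison used above is readily computed with scikit-learn; the two label sequences below are invented stand-ins for classifications of the same children by BMI-for-age and by body fatness.

    from sklearn.metrics import cohen_kappa_score

    # 0 = under, 1 = normal, 2 = over, 3 = obese, for the same children
    by_bmi = [1, 1, 2, 0, 3, 1, 2, 2, 1, 0]
    by_fat = [1, 2, 2, 0, 3, 1, 1, 2, 0, 0]

    kappa = cohen_kappa_score(by_bmi, by_fat, weights="linear")
    print(f"linearly weighted kappa: {kappa:.2f}")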

  9. Accuracy of metric sex analysis of skeletal remains using Fordisc based on a recent skull collection.

    PubMed

    Ramsthaler, F; Kreutz, K; Verhoff, M A

    2007-11-01

    It has been generally accepted in skeletal sex determination that the use of metric methods is limited by the population dependence of the multivariate algorithms. The aim of the study was to verify the applicability of software-based sex estimation outside the reference population group for which the discriminant equations were developed. We examined 98 skulls from recent forensic cases of known age, sex, and Caucasian ancestry from cranium collections in Frankfurt and Mainz (Germany) to determine the accuracy of sex determination using the statistical software Fordisc, which derives its database and functions from the US American Forensic Database. In a comparison between metric analysis using Fordisc and morphological determination of sex, average accuracy for both sexes was 86% vs 94%, respectively, and males were identified more accurately than females. The ratio of the true test result rate to the false test result rate was not statistically different for the two methodological approaches at a significance level of 0.05 but was statistically different at a level of 0.10 (p=0.06). Possible explanations for this difference comprise differences in ancestry, age distribution, and socio-economic status compared to the Fordisc reference sample. It is likely that a discriminant function analysis based on more similar European reference samples would lead to more valid and reliable sexing results. The use of Fordisc as a single method for the estimation of sex of recent skeletal remains in Europe cannot be recommended without additional morphological assessment and without a built-in software update based on modern European reference samples.
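
    A generic sketch of the discriminant-function approach that Fordisc embodies is shown below, using scikit-learn's linear discriminant analysis on synthetic stand-ins for cranial measurements; it is not Fordisc's actual database or functions.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n = 98                                  # same sample size as the study
    X = rng.normal(size=(n, 6))             # 6 hypothetical cranial measurements
    y = rng.integers(0, 2, size=n)          # 0 = female, 1 = male (invented)

    lda = LinearDiscriminantAnalysis()
    print(f"cross-validated accuracy: {cross_val_score(lda, X, y, cv=5).mean():.2f}")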

  10. Understanding a reference-free impedance method using collocated piezoelectric transducers

    NASA Astrophysics Data System (ADS)

    Kim, Eun Jin; Kim, Min Koo; Sohn, Hoon; Park, Hyun Woo

    2010-03-01

    A new concept of a reference-free impedance method, which does not require direct comparison with a baseline impedance signal, is proposed for damage detection in a plate-like structure. A single pair of piezoelectric (PZT) wafers collocated on both surfaces of a plate is utilized for extracting electro-mechanical signatures (EMS) associated with mode conversion due to damage. A numerical simulation is conducted to investigate the EMS of collocated PZT wafers in the frequency domain in the presence of damage through spectral element analysis. Then, the EMS due to mode conversion induced by damage are extracted using a signal decomposition technique based on the polarization characteristics of the collocated PZT wafers. The effects of the size and the location of damage on the decomposed EMS are investigated as well. Finally, the applicability of the decomposed EMS to reference-free damage diagnosis is discussed.
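
    The polarization idea the method relies on can be illustrated simply: signals from wafers collocated on opposite surfaces are added and subtracted to separate in-phase (symmetric) from out-of-phase (antisymmetric) components, in which mode-converted contributions appear. The signals below are synthetic stand-ins, not spectral-element results.

    import numpy as np

    t = np.linspace(0.0, 1e-3, 1000)
    top = np.sin(2 * np.pi * 5e3 * t) + 0.3 * np.sin(2 * np.pi * 12e3 * t)
    bottom = np.sin(2 * np.pi * 5e3 * t) - 0.3 * np.sin(2 * np.pi * 12e3 * t)

    s_mode = 0.5 * (top + bottom)   # in-phase (symmetric) component
    a_mode = 0.5 * (top - bottom)   # out-of-phase (antisymmetric) component
    print(np.mean(a_mode ** 2) / np.mean(s_mode ** 2))  # relative out-of-phase energy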

  11. SCUD: fast structure clustering of decoys using reference state to remove overall rotation.

    PubMed

    Li, Hongzhi; Zhou, Yaoqi

    2005-08-01

    We developed a method for fast decoy clustering that uses the reference root-mean-squared distance (rRMSD) rather than the commonly used pairwise RMSD (pRMSD). For 41 proteins with 2000 decoys each, the computing efficiency increases ninefold without a significant change in the accuracy of near-native selections. Tests on additional protein decoys based on different reference conformations confirmed this result. Further analysis indicates that the pRMSD and rRMSD values are highly correlated (with an average correlation coefficient of 0.82) and the clusters obtained from pRMSD and rRMSD values are highly similar (the representative structures of the top five largest clusters from the two methods are 74% identical). SCUD (Structure ClUstering of Decoys) with an automatic cutoff value is available at http://theory.med.buffalo.edu. (c) 2005 Wiley Periodicals, Inc.
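
    A toy version of the rRMSD idea: compute one RMSD per decoy against a fixed reference and cluster in that 1D space, avoiding the quadratic cost of all pairwise RMSDs. Structure superposition, which real use would require, is omitted, and the decoys, cutoff, and sizes are invented.

    import numpy as np

    def rmsd(a, b):
        return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

    rng = np.random.default_rng(4)
    reference = rng.normal(size=(50, 3))                  # 50 atoms
    decoys = [reference + rng.normal(scale=s, size=(50, 3))
              for s in rng.uniform(0.5, 3.0, size=200)]

    r = np.array([rmsd(d, reference) for d in decoys])    # one value per decoy
    order = np.argsort(r)
    clusters, cutoff = [], 0.5
    for i in order:                                       # greedy 1D clustering
        if clusters and r[i] - r[clusters[-1][0]] <= cutoff:
            clusters[-1].append(i)
        else:
            clusters.append([i])
    print(f"{len(clusters)} clusters from {len(decoys)} decoys")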

  12. Accelerated Enveloping Distribution Sampling: Enabling Sampling of Multiple End States while Preserving Local Energy Minima.

    PubMed

    Perthold, Jan Walther; Oostenbrink, Chris

    2018-05-17

    Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure to smooth energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves the local energy minima of the combined end-states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune the EDS Hamiltonian parameters. We demonstrate the proposed method with established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method to compute free-energy differences based on solid statistical mechanics.
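
    For orientation, the conventional EDS reference-state Hamiltonian that such approaches start from is commonly written (in LaTeX notation; symbols follow the general EDS literature, not necessarily this paper's notation) as

        H_R(\mathbf{r}) = -\frac{1}{\beta s}\,\ln \sum_{i=1}^{N} \exp\!\left[-\beta s \left(H_i(\mathbf{r}) - E_i^{R}\right)\right],

    where \beta = 1/k_B T, s is a smoothness parameter, and the E_i^{R} are per-end-state energy offsets. Small s smooths the barriers between the N end-states, which is what can erase local energy minima and what the accelerated variant described above aims to avoid.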

  13. Laboratory twinning to build capacity for rabies diagnosis.

    PubMed

    Fooks, Anthony R; Drew, Trevor W; Tu, Changchun

    2016-03-05

    In 2009, the UK's OIE Reference Laboratory for rabies, based at the APHA in Weybridge, was awarded a project to twin with the Changchun Veterinary Research Institute in the People's Republic of China to help the institute develop the skills and methods necessary to become an OIE Reference Laboratory itself. Here, Tony Fooks, Trevor Drew and Changchun Tu describe the OIE's twinning project and the success that has since been realised in China. British Veterinary Association.

  14. DISCUSSION AND EVALUATION OF THE VOLATILITY TEST FOR EQUIVALENCY OF OTHER METHODS TO THE FEDERAL REFERENCE METHOD FOR FINE PARTICULATE MATTER

    EPA Science Inventory

    In July 1997, EPA promulgated a new National Ambient Air Quality Standard (NAAQS) for fine particulate matter (PM2.5). This new standard was based on collection of an integrated mass sample on a filter. Field studies have demonstrated that the collection of semivolatile compoun...

  15. A Machine Learning Approach to Measurement of Text Readability for EFL Learners Using Various Linguistic Features

    ERIC Educational Resources Information Center

    Kotani, Katsunori; Yoshimi, Takehiko; Isahara, Hitoshi

    2011-01-01

    The present paper introduces and evaluates a readability measurement method designed for learners of EFL (English as a foreign language). The proposed readability measurement method (a regression model) estimates the text readability based on linguistic features, such as lexical, syntactic and discourse features. Text readability refers to the…
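
    A minimal sketch of such a regression model, assuming invented texts, readability ratings, and three simple lexical/syntactic features (the abstract's actual feature set is richer):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def features(text):
        words = text.split()
        sentences = max(1, text.count("."))
        return [len(words) / sentences,                 # words per sentence
                np.mean([len(w) for w in words]),       # mean word length
                len(set(words)) / len(words)]           # type-token ratio

    texts = ["The cat sat on the mat.",
             "Dogs run fast.",
             "Comprehensive readability estimation requires numerous heterogeneous features."]
    scores = [1.0, 1.5, 4.0]                            # hypothetical difficulty ratings

    model = LinearRegression().fit([features(t) for t in texts], scores)
    print(model.predict([features("A short simple sentence.")]))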

  16. Using the Metropolis Algorithm to Calculate Thermodynamic Quantities: An Undergraduate Computational Experiment

    ERIC Educational Resources Information Center

    Beddard, Godfrey S.

    2011-01-01

    Thermodynamic quantities such as the average energy, heat capacity, and entropy are calculated using a Monte Carlo method based on the Metropolis algorithm. This method is illustrated with reference to the harmonic oscillator but is particularly useful when the partition function cannot be evaluated; an example using a one-dimensional spin system…
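
    A minimal sketch of the kind of exercise described, assuming a classical 1D harmonic oscillator with illustrative parameters; by equipartition the average potential energy should come out near k_B*T/2:

    import numpy as np

    rng = np.random.default_rng(5)
    beta, k = 1.0, 1.0                      # 1/(kB*T) and spring constant
    x, step, energies = 0.0, 1.0, []

    for i in range(200_000):
        x_new = x + rng.uniform(-step, step)
        dE = 0.5 * k * (x_new ** 2 - x ** 2)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            x = x_new                       # Metropolis acceptance
        if i > 10_000:                      # discard equilibration steps
            energies.append(0.5 * k * x * x)

    print(np.mean(energies))                # ~0.5, i.e. kB*T/2 by equipartition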

  17. Critical review of analytical techniques for safeguarding the thorium-uranium fuel cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hakkila, E.A.

    1978-10-01

    Conventional analytical methods applicable to the determination of thorium, uranium, and plutonium in feed, product, and waste streams from reprocessing thorium-based nuclear reactor fuels are reviewed. Separations methods of interest for these analyses are discussed. Recommendations concerning the applicability of various techniques to reprocessing samples are included. 15 tables, 218 references.

  18. 40 CFR 53.4 - Applications for reference or equivalent method determinations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Required or recommended routine, periodic, and preventative maintenance and maintenance schedules. (J) Any... methods for PM2.5 and PM10−2.5 must be described in sufficient detail, based on the elements described in... Table A-1 to this subpart) will be met throughout the warranty period and that the applicant accepts...

  19. 40 CFR 53.4 - Applications for reference or equivalent method determinations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Required or recommended routine, periodic, and preventative maintenance and maintenance schedules. (J) Any... methods for PM2.5 and PM10−2.5 must be described in sufficient detail, based on the elements described in... Table A-1 to this subpart) will be met throughout the warranty period and that the applicant accepts...

  20. Deviation of landmarks in accordance with methods of establishing reference planes in three-dimensional facial CT evaluation.

    PubMed

    Yoon, Kaeng Won; Yoon, Suk-Ja; Kang, Byung-Cheol; Kim, Young-Hee; Kook, Min Suk; Lee, Jae-Seo; Palomo, Juan Martin

    2014-09-01

    This study aimed to investigate how landmarks deviate from horizontal or midsagittal reference planes depending on the method used to establish the reference planes. Computed tomography (CT) scans of 18 patients who received orthodontic and orthognathic surgical treatment were reviewed. Each CT scan was reconstructed by three methods for establishing three orthogonal reference planes (namely, the horizontal, midsagittal, and coronal reference planes). The horizontal (bilateral porions and bilateral orbitales) and midsagittal (crista galli, nasion, prechiasmatic point, opisthion, and anterior nasal spine) landmarks were identified on each CT scan. Vertical deviation of the horizontal landmarks and horizontal deviation of the midsagittal landmarks were measured. The porion and orbitale, when not involved in establishing the horizontal reference plane, were found to deviate vertically from the horizontal reference plane in all three methods. Likewise, the midsagittal landmarks not used for the midsagittal reference plane deviated horizontally from the midsagittal reference plane in all three methods. In a three-dimensional facial analysis, the vertical and horizontal deviations of the landmarks from the horizontal and midsagittal reference planes can therefore vary depending on the method used to establish the reference planes.
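
    The underlying measurement, a landmark's deviation from a plane defined by other landmarks, reduces to a signed point-to-plane distance. The coordinates below are invented for illustration:

    import numpy as np

    def plane_deviation(p, a, b, c):
        # Signed distance of point p from the plane through a, b, c.
        n = np.cross(b - a, c - a)
        n = n / np.linalg.norm(n)
        return float(np.dot(p - a, n))

    porion_r = np.array([65.0, 10.0, 0.0])
    porion_l = np.array([-65.0, 10.0, 0.5])
    orbitale = np.array([30.0, 80.0, -0.2])
    landmark = np.array([0.0, 95.0, 1.3])    # e.g. a landmark not in the plane

    print(f"deviation: {plane_deviation(landmark, porion_r, porion_l, orbitale):.2f} mm")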
