Yamagata, Tetsuo; Zanelli, Ugo; Gallemann, Dieter; Perrin, Dominique; Dolgos, Hugues; Petersson, Carl
2017-09-01
1. We compared the direct scaling, regression model equation and so-called "Poulin et al." methods to scale clearance (CL) from in vitro intrinsic clearance (CLint) measured in human hepatocytes using two sets of compounds. One reference set comprised 20 compounds with known elimination pathways; an external evaluation set was based on 17 compounds in development at Merck (MS). 2. A 90% prospective confidence interval was calculated using the reference set. This interval was found relevant for the regression equation method. The three outliers identified were justified on the basis of their elimination mechanism. 3. The direct scaling method showed a systematic underestimation of clearance in both the reference and evaluation sets. The "Poulin et al." and regression equation methods showed no obvious bias in either set. 4. The regression model equation was slightly superior to the "Poulin et al." method in the reference set, with a better absolute average fold error (AAFE) of 1.3 compared to 1.6. A larger difference was observed in the evaluation set, where the regression method and the "Poulin et al." method resulted in AAFEs of 1.7 and 2.6, respectively (removing the three compounds with known issues mentioned above). A similar pattern was observed for the correlation coefficient. Based on these data, we suggest the regression equation method combined with a prospective confidence interval as the first choice for the extrapolation of human in vivo hepatic metabolic clearance from in vitro systems.
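The AAFE figures quoted above follow the standard definition of absolute average fold error; a minimal sketch (the clearance values below are made up for illustration, not data from the study):

```python
import math

def aafe(predicted, observed):
    # Absolute average fold error: 10 ** mean(|log10(pred/obs)|).
    # 1.0 means perfect prediction; 2.0 means off by two-fold on average.
    logs = [abs(math.log10(p / o)) for p, o in zip(predicted, observed)]
    return 10 ** (sum(logs) / len(logs))

# Hypothetical clearance values (mL/min/kg)
pred = [10.0, 5.0, 20.0]
obs = [10.0, 10.0, 10.0]
print(round(aafe(pred, obs), 2))  # -> 1.59
```

Because the metric is built on absolute log ratios, under- and over-predictions cannot cancel each other, which is why it is preferred over a simple mean fold error for bias-free comparisons like the one above.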
A reference estimator based on composite sensor pattern noise for source device identification
NASA Astrophysics Data System (ADS)
Li, Ruizhe; Li, Chang-Tsun; Guan, Yu
2014-02-01
It has been shown that Sensor Pattern Noise (SPN) can serve as an imaging device fingerprint for source camera identification. Reference SPN estimation is a very important procedure within this framework. Most previous works built the reference SPN by averaging the SPNs extracted from 50 images of blue sky. However, this approach can be problematic. Firstly, in practice we may face the problem of source camera identification in the absence of the imaging cameras and reference SPNs, meaning that only natural images with scene details, rather than blue-sky images, are available for reference SPN estimation. This is challenging because the reference SPN can be severely contaminated by image content. Secondly, the number of available reference images is sometimes too small for existing methods to estimate a reliable reference SPN. In fact, existing methods lack consideration of the number of available reference images, as they were designed for datasets with abundant images. To deal with these problems, a novel reference estimator is proposed in this work. Experimental results show that the proposed method achieves better performance than methods based on the averaged reference SPN, especially when few reference images are used.
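The conventional averaged-reference estimator that the paper compares against can be sketched as follows; the global-mean "denoiser" here is a deliberately crude stand-in for the wavelet-based filters used in practice, and all numbers are simulated:

```python
import numpy as np

def extract_spn(image, denoise):
    # SPN estimate = noise residual: image minus its denoised version
    return image - denoise(image)

def reference_spn(images, denoise):
    # Conventional estimator: average the residuals of many images
    return np.mean([extract_spn(img, denoise) for img in images], axis=0)

# Toy simulation: a fixed sensor pattern buried in per-image random noise
rng = np.random.default_rng(0)
pattern = rng.normal(0.0, 1.0, (8, 8))                # simulated SPN
images = [100.0 + pattern + rng.normal(0.0, 5.0, (8, 8)) for _ in range(50)]
crude_denoise = lambda im: np.full_like(im, im.mean())
ref = reference_spn(images, crude_denoise)
# Averaging suppresses the random noise, so ref correlates with the pattern
corr = np.corrcoef(ref.ravel(), pattern.ravel())[0, 1]
```

With many clean images the random component averages out; the paper's point is precisely that this breaks down when only a few, content-rich images are available.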
Intra prediction using face continuity in 360-degree video coding
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; He, Yuwen; Ye, Yan
2017-09-01
This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.
Yu, Hui; Qi, Dan; Li, Heng-da; Xu, Ke-xin; Yuan, Wei-jie
2012-03-01
Weak signal, low instrument signal-to-noise ratio, continuous variation of the human physiological environment and interference from other blood components make it difficult to extract blood glucose information from the near-infrared spectrum in noninvasive blood glucose measurement. The floating-reference method analyses the effect of glucose concentration variation on the absorption and scattering coefficients, and acquires spectra at a reference point, where the light intensity variations from absorption and scattering cancel each other, and at a measurement point, where they are largest. By using the spectrum from the reference point as a reference, the floating-reference method can reduce interference from variations in the physiological environment and experimental circumstances. In the present paper, the effectiveness of the floating-reference method in improving prediction precision and stability was assessed through application experiments. A comparison was made between models whose data were processed with and without the floating-reference method. The results showed that the root mean square error of prediction (RMSEP) decreased by up to 34.7%. The floating-reference method could reduce the influence of changes in the samples' state, instrument noise and drift, and effectively improve the models' prediction precision and stability.
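RMSEP, the figure of merit used above, can be computed as follows (the values below are illustrative, not data from the study):

```python
import math

def rmsep(predicted, reference):
    # Root mean square error of prediction
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

# A 34.7% decrease, as reported, corresponds to e.g.:
before, after = 1.00, 0.653   # hypothetical RMSEP values
reduction = 100.0 * (before - after) / before
```

A lower RMSEP on an independent prediction set directly reflects the improved precision the floating-reference preprocessing is claimed to provide.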
Comparison of analytical methods for the determination of histamine in reference canned fish samples
NASA Astrophysics Data System (ADS)
Jakšić, S.; Baloš, M. Ž.; Mihaljev, Ž.; Prodanov Radulović, J.; Nešić, K.
2017-09-01
Two screening methods for histamine in canned fish, an enzymatic test and a competitive direct enzyme-linked immunosorbent assay (CD-ELISA), were compared with the reversed-phase liquid chromatography (RP-HPLC) standard method. For the enzymatic and CD-ELISA methods, determination was conducted according to the manufacturers' manuals. For RP-HPLC, histamine was derivatized with dansyl chloride, followed by RP-HPLC and diode array detection. Results of analysis of canned fish, supplied as reference samples for proficiency testing, showed good agreement when histamine was present at higher concentrations (above 100 mg kg-1). At a lower level (16.95 mg kg-1), the enzymatic test produced somewhat higher results. Generally, analysis of four reference samples by CD-ELISA and RP-HPLC showed good agreement for histamine determination (r=0.977 in the concentration range 16.95-216 mg kg-1). The results show that the enzymatic test and CD-ELISA are suitable screening methods for the determination of histamine in canned fish.
A self-reference PRF-shift MR thermometry method utilizing the phase gradient
NASA Astrophysics Data System (ADS)
Langley, Jason; Potter, William; Phipps, Corey; Huang, Feng; Zhao, Qun
2011-12-01
In magnetic resonance (MR) imaging, the most widely used and accurate method for measuring temperature is based on the shift in proton resonance frequency (PRF). However, inter-scan motion and bulk magnetic field shifts can lead to inaccurate temperature measurements in the PRF-shift MR thermometry method. The self-reference PRF-shift MR thermometry method was introduced to overcome such problems by deriving a reference image from the heated or treated image, and approximates the reference phase map with low-order polynomial functions. In this note, a new approach is presented to calculate the baseline phase map in self-reference PRF-shift MR thermometry. The proposed method utilizes the phase gradient to remove the phase unwrapping step inherent to other self-reference PRF-shift MR thermometry methods. The performance of the proposed method was evaluated using numerical simulations with temperature distributions following a two-dimensional Gaussian function as well as phantom and in vivo experimental data sets. The results from both the numerical simulations and experimental data show that the proposed method is a promising technique for measuring temperature.
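For context, the underlying PRF-shift relation (standard in MR thermometry generally, not specific to this paper's phase-gradient variant) maps a measured phase difference to a temperature change; the parameter values below are textbook conventions:

```python
import math

def prf_delta_temperature(delta_phi, te, b0,
                          alpha=-0.01e-6,                # PRF coefficient, 1/degC
                          gamma=2 * math.pi * 42.58e6):  # proton gyromagnetic ratio, rad/s/T
    # Standard PRF-shift relation: dT = dphi / (gamma * alpha * B0 * TE)
    # delta_phi in radians, te (echo time) in seconds, b0 in tesla.
    return delta_phi / (gamma * alpha * b0 * te)

# At 3 T with TE = 10 ms, roughly -0.08 rad of phase shift maps to about +1 degC
dT = prf_delta_temperature(-0.0803, 0.010, 3.0)
```

Because the formula differences two phase maps, any error in the baseline (reference) phase map propagates directly into the temperature estimate, which is what motivates the self-reference approaches discussed above.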
Decentralized model reference adaptive control of large flexible structures
NASA Technical Reports Server (NTRS)
Lee, Fu-Ming; Fong, I-Kong; Lin, Yu-Hwan
1988-01-01
A decentralized model reference adaptive control (DMRAC) method is developed for large flexible structures (LFS). The development follows that of a centralized model reference adaptive control for LFS, which has been shown to be feasible. The proposed method is illustrated using a simply supported beam with collocated actuators and sensors. Results show that the DMRAC can achieve either output regulation or output tracking with adequate convergence, provided the reference model inputs and their time derivatives are integrable, bounded, and approach zero as t approaches infinity.
NASA Astrophysics Data System (ADS)
Kosnicki, Ely; Sefick, Stephen A.; Paller, Michael H.; Jarrell, Miller S.; Prusha, Blair A.; Sterrett, Sean C.; Tuberville, Tracey D.; Feminella, Jack W.
2014-09-01
The Sand Hills subdivision of the Southeastern Plains ecoregion has been impacted by historical land uses over the past two centuries and, with the additive effects of contemporary land use, determining reference condition for streams in this region is a challenge. We identified reference condition based on the combined use of 3 independent selection methods. Method 1 involved use of a multivariate disturbance gradient derived from several stressors, method 2 was based on variation in channel morphology, and method 3 was based on passing 6 of 7 environmental criteria. Sites selected as reference from all 3 methods were considered primary reference, whereas those selected by 2 or 1 methods were considered secondary or tertiary reference, respectively. Sites not selected by any of the methods were considered non-reference. In addition, best professional judgment (BPJ) was used to exclude some sites from any reference class, and comparisons were made to examine the utility of BPJ. Non-metric multidimensional scaling indicated that use of BPJ may help designate non-reference sites when unidentified stressors are present. The macroinvertebrate community measures Ephemeroptera, Plecoptera, Trichoptera richness and North Carolina Biotic Index showed no differences between primary and secondary reference sites when BPJ was ignored. However, there was no significant difference among primary, secondary, and tertiary reference sites when BPJ was used. We underscore the importance of classifying reference conditions, especially in regions that have endured significant anthropogenic activity. We suggest that the use of secondary reference sites may enable construction of models that target a broader set of management interests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reda, Ibrahim M.; Andreas, Afshin M.
2017-08-01
Accurate pyranometer calibrations, traceable to internationally recognized standards, are critical for solar irradiance measurements. One calibration method is the component summation method, where the pyranometers are calibrated outdoors under clear sky conditions, and the reference global solar irradiance is calculated as the sum of two reference components, the diffuse horizontal and subtended beam solar irradiances. The beam component is measured with pyrheliometers traceable to the World Radiometric Reference, while there is no internationally recognized reference for the diffuse component. In the absence of such a reference, we present a method to consistently calibrate pyranometers for measuring the diffuse component. The method is based on using a modified shade/unshade method and a pyranometer with less than 0.5 W/m2 thermal offset. The calibration result shows that the responsivity of the Hukseflux SR25 pyranometer equals 10.98 uV/(W/m2) with +/-0.86 percent uncertainty.
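The component-summation reference described above combines the two measured components into a reference global irradiance; a minimal sketch (the readings below are illustrative, not calibration data from the study):

```python
import math

def reference_global(diffuse, beam, zenith_deg):
    # Reference global irradiance = diffuse horizontal + beam * cos(solar zenith)
    return diffuse + beam * math.cos(math.radians(zenith_deg))

def responsivity_uV_per_Wm2(output_uV, diffuse, beam, zenith_deg):
    # Pyranometer responsivity: output voltage over the reference irradiance
    return output_uV / reference_global(diffuse, beam, zenith_deg)

# Hypothetical clear-sky reading: 9000 uV output, 100 W/m2 diffuse,
# 900 W/m2 direct beam at a 30-degree solar zenith angle
rv = responsivity_uV_per_Wm2(9000.0, 100.0, 900.0, 30.0)
```

The division makes clear why the traceability of both components matters: any bias in the diffuse reference propagates directly into the derived responsivity.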
Overgaard, Martin; Pedersen, Susanne Møller
2017-10-26
Hyperprolactinemia diagnosis and treatment is often compromised by the presence of biologically inactive and clinically irrelevant higher-molecular-weight complexes of prolactin, macroprolactin. The objective of this study was to evaluate the performance of two macroprolactin screening regimes across commonly used automated immunoassay platforms. Parametric total and monomeric gender-specific reference intervals were determined for six immunoassay methods using female (n=96) and male sera (n=127) from healthy donors. The reference intervals were validated using 27 hyperprolactinemic and macroprolactinemic sera, in which the monomeric and macroforms of prolactin were determined using gel filtration chromatography (GFC). Normative data for six prolactin assays included the range of values (2.5th-97.5th percentiles). Validation sera (hyperprolactinemic and macroprolactinemic; n=27) showed more discordant classifications [mean=2.8; 95% confidence interval (CI) 1.2-4.4] for the monomer reference interval method than for the post-polyethylene glycol (PEG) recovery cutoff method (mean=1.8; 95% CI 0.8-2.8). The two monomer/macroprolactin discrimination methods did not differ significantly (p=0.089). Among macroprolactinemic sera evaluated by both discrimination methods, the Cobas and Architect/Kryptor prolactin assays showed the lowest and the highest numbers of misclassifications, respectively. Current automated immunoassays for prolactin testing require macroprolactin screening methods based on PEG precipitation in order to discriminate truly elevated from falsely elevated serum prolactin. While the recovery cutoff and monomeric reference interval macroprolactin screening methods demonstrate similar discriminative ability, the latter also provides the clinician with an easily interpretable monomeric prolactin concentration along with a monomeric reference interval.
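The post-PEG recovery cutoff method referenced above works on the percent recovery of prolactin after precipitation; a sketch, in which the 40% cutoff is a commonly cited convention in the macroprolactin literature rather than a value taken from this study:

```python
def peg_recovery(post_peg_prolactin, total_prolactin):
    # Percent of measured prolactin remaining after PEG precipitates
    # the high-molecular-weight macroprolactin complexes
    return 100.0 * post_peg_prolactin / total_prolactin

def screen_macroprolactin(post_peg_prolactin, total_prolactin, cutoff=40.0):
    # Low recovery -> most of the measured prolactin was macroprolactin
    # (cutoff is an assumed, conventional value, not from this paper)
    if peg_recovery(post_peg_prolactin, total_prolactin) < cutoff:
        return "macroprolactinemia suspected"
    return "predominantly monomeric"

print(screen_macroprolactin(120.0, 600.0))  # 20% recovery
```

The monomeric reference interval method instead compares the post-PEG concentration directly against a post-PEG reference interval, which is what gives the clinician an interpretable monomeric value.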
Cross-reference identification within a PDF document
NASA Astrophysics Data System (ADS)
Li, Sida; Gao, Liangcai; Tang, Zhi; Yu, Yinyan
2015-01-01
Cross-references, such as footnotes, endnotes, figure/table captions, and bibliographic references, are a common and useful type of page element used to further explain their corresponding entities in a document. In this paper, we focus on cross-reference identification in a PDF document, and present a robust method as a case study of identifying footnotes and figure references. The proposed method first extracts footnotes and figure captions, and then matches them with their corresponding references within a document. A number of novel features within a PDF document, i.e., page layout, font information, and lexical and linguistic features of cross-references, are utilized for the task. Clustering is adopted to handle features that are stable within one document but vary across document types, so that the identification process adapts to the document type. In addition, the method leverages results from the matching process as feedback to the identification process, further improving accuracy. Preliminary experiments on real document sets show that the proposed method is promising for identifying cross-references in PDF documents.
Heinrich, Andreas; Teichgräber, Ulf K; Güttler, Felix V
2015-12-01
The standard ASTM F2119 describes a test method for measuring the size of a susceptibility artifact based on the example of a passive implant. A pixel in an image is considered to be a part of an image artifact if the intensity is changed by at least 30% in the presence of a test object, compared to a reference image in which the test object is absent (reference value). The aim of this paper is to simplify and accelerate the test method using a histogram-based reference value. Four test objects were scanned parallel and perpendicular to the main magnetic field, and the largest susceptibility artifacts were measured using two methods of reference value determination (reference image-based and histogram-based reference value). The results between both methods were compared using the Mann-Whitney U-test. The difference between both reference values was 42.35 ± 23.66. The difference of artifact size was 0.64 ± 0.69 mm. The artifact sizes of both methods did not show significant differences; the p-value of the Mann-Whitney U-test was between 0.710 and 0.521. A standard-conform method for a rapid, objective, and reproducible evaluation of susceptibility artifacts could be implemented. The result of the histogram-based method does not significantly differ from the ASTM-conform method.
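The 30% intensity criterion from ASTM F2119 can be applied per pixel once a reference value is fixed, whether that value comes from a reference image or, as proposed, from the image histogram; a minimal sketch:

```python
import numpy as np

def artifact_mask(image, reference_value, threshold=0.30):
    # ASTM F2119 criterion: a pixel is part of the artifact if its
    # intensity differs from the reference value by at least 30%
    return np.abs(image - reference_value) >= threshold * reference_value

img = np.array([[100.0, 135.0],
                [60.0, 101.0]])
mask = artifact_mask(img, reference_value=100.0)
# Only the pixels deviating by >= 30 units (135 and 60) are flagged
```

The histogram-based variant changes only where `reference_value` comes from, which is why the two methods can yield nearly identical artifact sizes despite skipping the second reference scan.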
[Selection of reference genes of Siraitia grosvenorii by real-time PCR].
Tu, Dong-ping; Mo, Chang-ming; Ma, Xiao-jun; Zhao, Huan; Tang, Qi; Huang, Jie; Pan, Li-mei; Wei, Rong-chang
2015-01-01
Siraitia grosvenorii is a traditional Chinese medicine that is also used as food. In this study, six candidate reference genes were evaluated by real-time quantitative PCR; the expression stability of the candidate reference genes across different samples was analyzed using geNorm, NormFinder, BestKeeper, the delta-Ct method and RefFinder, and reference genes for S. grosvenorii were selected for the first time. The results showed that 18S rRNA was the most stably expressed gene in all samples and was the best reference gene for expression analysis. This study provides guidance for gene expression analysis by qRT-PCR, supplying suitable reference genes to ensure reliable results in studies of differentially expressed genes in biosynthetic and other biological pathways of S. grosvenorii.
Estimation of low back moments from video analysis: a validation study.
Coenen, Pieter; Kingma, Idsart; Boot, Cécile R L; Faber, Gert S; Xu, Xu; Bongers, Paulien M; van Dieën, Jaap H
2011-09-02
This study aimed to develop, compare and validate two versions of a video analysis method for assessing low back moments during occupational lifting tasks, since epidemiological studies and ergonomic practice need relatively cheap and easily applicable methods to assess low back loads. Ten healthy subjects participated in a protocol comprising 12 lifting conditions. Low back moments were assessed using two variants of a video analysis method and a lab-based reference method. Repeated measures ANOVAs showed no overall differences in peak moments between the two versions of the video analysis method and the reference method. However, two conditions showed a minor overestimation of moments by one of the video analysis methods. Standard deviations were considerable, suggesting that errors in the video analysis were random. Furthermore, there was a small underestimation of the dynamic components and overestimation of the static components of the moments. Intraclass correlation coefficients for peak moments showed high correspondence (>0.85) of the video analyses with the reference method. It is concluded that, when a sufficient number of measurements can be taken, the video analysis method provides valid estimates of low back moments in ergonomic practice and epidemiological studies for lifts up to a moderate level of asymmetry. Copyright © 2011 Elsevier Ltd. All rights reserved.
Zan, Ke; Cui, Gan; Guo, Li-Nong; Ma, Shuang-Cheng; Zheng, Jian
2018-05-01
The high price and limited availability of reference substances have become obstacles to HPLC assays of ethnic medicines. A new method based on a quantitative reference herb (QRH) was proposed. Specific chromatograms of the fruits of Capsicum frutescens were employed to determine peak positions, and an HPLC quantitative reference herb was prepared from fruits of C. frutescens. The content of capsaicin and dihydrocapsaicin in the quantitative reference herb was determined by HPLC. Eleven batches of fruits of C. frutescens were analyzed with the quantitative reference herb and with reference substances, respectively. The results showed no difference. The present method is feasible for quality control of ethnic medicines, and a quantitative reference herb is suitable to replace reference substances in assays. Copyright© by the Chinese Pharmaceutical Association.
Selection of reference standard during method development using the analytical hierarchy process.
Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun
2015-03-25
A reference standard is critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria for a reference standard are often not directly measurable. The aim of this paper is to recommend a quantitative approach for the selection of a reference standard during method development, based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed for the quantitative analysis of six phenolic acids from Salvia miltiorrhiza and its preparations by ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance: feasibility to obtain, abundance in samples, chemical stability, accuracy, precision and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, with rosmarinic acid, at about 79.8% of its priority score, as the second choice. The determination results verified the evaluation ability of this model. The AHP allowed comprehensive consideration of the benefits and risks of the alternatives, and proved an effective and practical tool for the selection of reference standards during method development. Copyright © 2015 Elsevier B.V. All rights reserved.
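A standard AHP priority calculation of the kind described uses the principal eigenvector of a pairwise-comparison matrix (Saaty's eigenvector method); the matrix below is illustrative, not the paper's data:

```python
import numpy as np

def ahp_priorities(pairwise):
    # Priorities = normalized principal eigenvector of the
    # pairwise-comparison matrix (standard AHP eigenvector method)
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    k = np.argmax(vals.real)            # principal (largest) eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Three hypothetical alternatives: A is 3x preferred to B and 5x to C;
# B is 2x preferred to C (reciprocals fill the lower triangle)
M = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
w = ahp_priorities(M)
```

In the full method, one such matrix is built per criterion and the criterion weights themselves come from another pairwise matrix; the final score per alternative is the weighted sum of its per-criterion priorities.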
Wang, Zhu-lou; Zhang, Wan-jie; Li, Chen-xi; Chen, Wen-liang; Xu, Ke-xin
2015-02-01
There are several challenges in near-infrared non-invasive blood glucose measurement, such as the low signal-to-noise ratio of the instrument, unstable measurement conditions, and unpredictable, irregular changes in the measured object. It is therefore difficult to extract blood glucose concentration information accurately from the complicated signals. Reference measurement methods are usually considered for eliminating the effect of background changes, but no reference substance changes synchronously with the analyte. After many years of research, our group proposed the floating reference method, which succeeded in eliminating spectral effects induced by instrument drift and background variations of the measured object. However, our studies indicate that the reference point changes with measurement location and wavelength, so the effectiveness of the floating reference method should be verified comprehensively. In this paper, for simplicity, Monte Carlo simulations employing Intralipid solutions with concentrations of 5% and 10% are performed to verify the effect of the floating reference method in eliminating the consequences of light source drift, which is introduced by varying the number of incident photons. The effectiveness of the floating reference method, with corresponding reference points at different wavelengths, in eliminating variations from light source drift is estimated. A comparison of the prediction abilities of calibration models with and without the method shows that the RMSEPs are decreased by about 98.57% (5% Intralipid) and 99.36% (10% Intralipid). The results indicate that the floating reference method is clearly effective in eliminating background changes.
Takei, Izumi; Hoshino, Tadao; Tominaga, Makoto; Ishibashi, Midori; Kuwa, Katsuhiko; Umemoto, Masao; Tani, Wataru; Okahashi, Mikiko; Yasukawa, Keiko; Kohzuma, Takuji; Sato, Asako
2016-01-01
Glycated albumin is an intermediate glycaemic control marker for which there are several measurement procedures with entirely different reference intervals. We have developed a reference measurement procedure for the purpose of standardizing glycated albumin measurements. The isotope dilution liquid chromatography/tandem mass spectrometry method was developed as a reference measurement procedure for glycated albumin. The stable isotopes of lysine and fructosyl-lysine, which serve as an internal standard, were added to albumin isolated from serum, followed by hydrogenation. After hydrolysis of albumin with hot hydrochloric acid, the liberated lysine and fructosyl-lysine were measured by liquid chromatography/tandem mass spectrometry, and their concentrations were determined from each isotope ratio. The reference materials (JCCRM611) for determining of glycated albumin were prepared from pooled patient blood samples. The isotope dilution-tandem mass spectrometry calibration curve of fructosyl-lysine and lysine showed good linearity (r = 0.999). The inter-assay and intra-assay coefficient of variation values of glycated albumin measurement were 1.2 and 1.4%, respectively. The glycated albumin values of serum in patients with diabetes assessed through the use of this method showed a good relationship with routine measurement procedures (r = 0.997). The relationship of glycated albumin values of the reference material (JCCRM611) between these two methods was the same as the relationship with the patient serum samples. The Committee on Diabetes Mellitus Indices of the Japan Society of Clinical Chemistry recommends the isotope dilution liquid chromatography/tandem mass spectrometry method as a reference measurement procedure, and JCCRM611 as a certified reference material for glycated albumin measurement. In addition, we recommend the traceability system for glycated albumin measurement. © The Author(s) 2015.
Using pseudoalignment and base quality to accurately quantify microbial community composition
Novembre, John
2018-01-01
Pooled DNA from multiple unknown organisms arises in a variety of contexts, for example microbial samples from ecological or human health research. Determining the composition of pooled samples can be difficult, especially at the scale of modern sequencing data and reference databases. Here we propose a novel method for taxonomic profiling in pooled DNA that combines the speed and low-memory requirements of k-mer based pseudoalignment with a likelihood framework that uses base quality information to better resolve multiply mapped reads. We apply the method to the problem of classifying 16S rRNA reads using a reference database of known organisms, a common challenge in microbiome research. Using simulations, we show the method is accurate across a variety of read lengths, with different length reference sequences, at different sample depths, and when samples contain reads originating from organisms absent from the reference. We also assess performance in real 16S data, where we reanalyze previous genetic association data to show our method discovers a larger number of quantitative trait associations than other widely used methods. We implement our method in the software Karp, for k-mer based analysis of read pools, to provide a novel combination of speed and accuracy that is uniquely suited for enhancing discoveries in microbial studies. PMID:29659582
Ender, Andreas; Mehl, Albert
2015-01-01
To investigate the accuracy of conventional and digital impression methods used to obtain full-arch impressions by using an in-vitro reference model. Eight different conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; and irreversible hydrocolloid, ALG) and digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; and Lava COS, LAV) full-arch impressions were obtained from a reference model with a known morphology, using a highly accurate reference scanner. The impressions obtained were then compared with the original geometry of the reference model and within each test group. A point-to-point measurement of the surface of the model using the signed nearest neighbour method resulted in a mean (10%-90%)/2 percentile value for the difference between the impression and original model (trueness) as well as the difference between impressions within a test group (precision). Trueness values ranged from 11.5 μm (VSE) to 60.2 μm (POE), and precision ranged from 12.3 μm (VSE) to 66.7 μm (POE). Among the test groups, VSE, VSES, and CER showed the highest trueness and precision. The deviation pattern varied with the impression method. Conventional impressions showed high accuracy across the full dental arch in all groups, except POE and ALG. Conventional and digital impression methods show differences regarding full-arch accuracy. Digital impression systems reveal higher local deviations of the full-arch model. Digital intraoral impression systems do not show superior accuracy compared to highly accurate conventional impression techniques. However, they provide excellent clinical results within their indications applying the correct scanning technique.
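The "(10%-90%)/2 percentile" statistic used above for trueness and precision can be read as half the spread between the 10th and 90th percentiles of the signed point-to-point deviations; a sketch under that reading, with synthetic deviations:

```python
import numpy as np

def percentile_spread_metric(signed_deviations_um):
    # Half the spread between the 10th and 90th percentiles of the
    # signed nearest-neighbour deviations (one plausible reading of
    # the "(10%-90%)/2" statistic in the abstract)
    p10, p90 = np.percentile(signed_deviations_um, [10, 90])
    return (p90 - p10) / 2.0

# Synthetic deviations in micrometres, uniformly spread from -1 to 1
devs = np.linspace(-1.0, 1.0, 21)
metric = percentile_spread_metric(devs)
```

Trimming the outer 10% tails makes the statistic robust to the isolated outlier points that surface-matching of full arches inevitably produces at the distal ends.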
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lalonde, A; Bouchard, H
Purpose: To develop a general method for human tissue characterization with dual- and multi-energy CT and evaluate its performance in determining elemental compositions and the associated proton stopping power relative to water (SPR) and photon mass absorption coefficients (EAC). Methods: Principal component analysis is used to extract an optimal basis of virtual materials from a reference dataset of tissues. These principal components (PC) are used to perform two-material decomposition using simulated DECT data. The elemental mass fraction and the electron density in each tissue is retrieved by measuring the fraction of each PC. A stoichiometric calibration method is adapted to the technique to make it suitable for clinical use. The present approach is compared with two others: parametrization and three-material decomposition using the water-lipid-protein (WLP) triplet. Results: Monte Carlo simulations using TOPAS for four reference tissues show that characterizing them with only two PC is enough to get submillimetric precision on proton range prediction. Based on the simulated DECT data of 43 reference tissues, the proposed method is in agreement with theoretical values of proton SPR and low-kV EAC with RMS errors of 0.11% and 0.35%, respectively. In comparison, parametrization and WLP respectively yield RMS errors of 0.13% and 0.29% on SPR, and 2.72% and 2.19% on EAC. Furthermore, the proposed approach shows potential applications for spectral CT. Using five PC and five energy bins reduces the SPR RMS error to 0.03%. Conclusion: The proposed method shows good performance in determining elemental compositions from DECT data and physical quantities relevant to radiotherapy dose calculation, and generally shows better accuracy and unbiased results compared to reference methods. The proposed method is particularly suitable for Monte Carlo calculations and shows promise in using more than two energies to characterize human tissue with CT.
X-ray Moiré deflectometry using synthetic reference images
Stutman, Dan; Valdivia, Maria Pia; Finkenthal, Michael
2015-06-25
Moiré fringe deflectometry with grating interferometers is a technique that enables refraction-based x-ray imaging using a single exposure of an object. To obtain the refraction image, the method requires a reference fringe pattern (without the object). Our study shows that, in order to avoid artifacts, the reference pattern must be exactly matched in phase with the object fringe pattern. In experiments, however, it is difficult to produce a perfectly matched reference pattern due to unavoidable interferometer drifts. We present a simple method to obtain matched reference patterns using a phase-scan procedure to generate synthetic Moiré images. As a result, the method will enable deflectometric diagnostics of transient phenomena such as laser-produced plasmas and could improve the sensitivity and accuracy of medical phase-contrast imaging.
NASA Astrophysics Data System (ADS)
Ettahir, Aziz; Boned, Christian; Lagourette, Bernard; Kettani, Kamal; Amarrayi, Khaoula; Garoumi, Mohammed
2017-10-01
The predictive viscosimetric model studied is that of K.A. Petersen [1]. The central idea of this method is to characterize the viscosity of a fluid from two reference fluids via a reduced pressure; it is a corresponding-states method with two references. This study shows that the method depends on the choice of references, examined here for the C10/C6H6 and C1/C10 reference pairs. The results were investigated for four different weight ratios. They show that introducing an adjustment coefficient does not significantly improve the results over those obtained without it, so the unadjusted form appears to be the better choice. Regarding the influence of the choice of references, both pairs generally appear suitable, and we noted that the choice is not critical. For mixtures containing at least one aromatic, the results are correct, especially when comparing our ratios with and without adjustment against those of K.A. Petersen [1]. The experimental viscosity values are in good agreement with the calculated ones. The relative improvement can be attributed to the introduction of a second reference fluid (C10) into the authors' single-reference (C1) corresponding-states model.
Models of Reference Services in Australian Academic Libraries
ERIC Educational Resources Information Center
Burke, Liz
2008-01-01
This article reports on a project which was undertaken in 2006 to investigate the current modes and methods for delivering reference services in Australian academic libraries. The project included a literature review to assist in providing a definition of reference services as well as a snapshot of statistics showing staff and patron numbers from…
Chen, Xing; Pavan, Matteo; Heinzer-Schweizer, Susanne; Boesiger, Peter; Henning, Anke
2012-01-01
This report describes our efforts on quantification of tissue metabolite concentrations in mM by nuclear Overhauser enhanced and proton decoupled (13)C magnetic resonance spectroscopy and the Electric Reference To access In vivo Concentrations (ERETIC) method. Previous work showed that a calibrated synthetic magnetic resonance spectroscopy-like signal transmitted through an optical fiber and inductively coupled into a transmit/receive coil represents a reliable reference standard for in vivo (1)H magnetic resonance spectroscopy quantification on a clinical platform. In this work, we introduce a related implementation that enables simultaneous proton decoupling and ERETIC-based metabolite quantification and hence extends the applicability of the ERETIC method to nuclear Overhauser enhanced and proton decoupled in vivo (13)C magnetic resonance spectroscopy. In addition, ERETIC signal stability under the influence of simultaneous proton decoupling is investigated. The proposed quantification method was cross-validated against internal and external reference standards on human skeletal muscle. The ERETIC signal intensity stability was 100.65 ± 4.18% over 3 months, including measurements with and without proton decoupling. Glycogen and unsaturated fatty acid concentrations measured with the ERETIC method were in excellent agreement with internal creatine and external phantom reference methods, showing differences of 1.85 ± 1.21% for glycogen and 1.84 ± 1.00% for unsaturated fatty acids between ERETIC and creatine-based quantification, whereas the deviations between external reference and creatine-based quantification were 6.95 ± 9.52% and 3.19 ± 2.60%, respectively. Copyright © 2011 Wiley Periodicals, Inc.
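ERETIC quantification ultimately reduces to scaling a metabolite peak area by the calibrated equivalent concentration of the synthetic reference signal. A minimal sketch, with hypothetical names and ignoring the corrections (coil loading, relaxation, decoupling effects) a real implementation needs:

```python
def eretic_concentration(area_metabolite, area_eretic, eretic_equiv_mM):
    """Metabolite concentration in mM from the ratio of the metabolite
    peak area to the ERETIC peak area, scaled by the ERETIC signal's
    calibrated equivalent concentration."""
    return area_metabolite / area_eretic * eretic_equiv_mM
```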
NASA Astrophysics Data System (ADS)
Kuś, Tomasz; Bartlett, Rodney J.
2008-09-01
The doublet and quartet excited states of the formyl radical have been studied by the equation-of-motion (EOM) coupled cluster (CC) method. The Sz spin-conserving singles and doubles (EOM-EE-CCSD) and singles, doubles, and triples (EOM-EE-CCSDT) approaches, as well as the spin-flipped singles and doubles (EOM-SF-CCSD) method have been applied, subject to unrestricted Hartree-Fock (HF), restricted open-shell HF, and quasirestricted HF references. The structural parameters, vertical and adiabatic excitation energies, and harmonic vibrational frequencies have been calculated. The issue of the reference function choice for the spin-flipped (SF) method and its impact on the results has been discussed using the experimental data and theoretical results available. The results show that if the appropriate reference function is chosen so that target states differ from the reference by only single excitations, then EOM-EE-CCSD and EOM-SF-CCSD methods give a very good description of the excited states. For the states that have a non-negligible contribution of the doubly excited configurations one is able to use the SF method with such a reference function, that in most cases the performance of the EOM-SF-CCSD method is better than that of the EOM-EE-CCSD approach.
Gillard, Jonathan
2015-12-01
This article re-examines parametric methods for the calculation of time specific reference intervals where there is measurement error present in the time covariate. Previous published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. In fact, this is an incorrect method when there are measurement errors present, and in this article, we show that the use of this approach may, in certain cases, lead to referral patterns that may vary with different values of the covariate. Thus, it would not be the case that all patients are treated equally; some subjects would be more likely to be referred than others, hence violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement for equal treatment for all subjects. © The Author(s) 2011.
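A standard way to fit a regression when the covariate itself carries measurement error is Deming regression; the sketch below shows the slope estimate (delta is the assumed ratio of the error variances), as one concrete example of the measurement error models the article advocates:

```python
import numpy as np

def deming_slope(x, y, delta=1.0):
    """Deming regression slope, accounting for error in x as well as y.
    Ordinary least squares is attenuated (biased toward zero) when x is
    measured with error; this estimator is not."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    term = syy - delta * sxx
    return (term + np.sqrt(term ** 2 + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
```

The intercept follows as mean(y) - slope * mean(x), and time-specific reference intervals are then built around the fitted line.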
Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu
2017-01-01
The specific binding ratio (SBR) was first reported by Tossici-Bolt et al. as a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration in the striatum to the non-specific binding concentration in the whole brain other than the striatum. The non-specific binding concentration is calculated from a region of interest (ROI) set 20 mm inside the outer contour, which is defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but with a 50% threshold we sometimes could not define the ROI for the non-specific binding concentration (the reference region) and calculate the SBR appropriately. Therefore, we sought a new method for determining the reference region when calculating the SBR. Using data from 20 patients who had undergone DAT imaging in our hospital, we calculated the non-specific binding concentration by two methods: fixing the threshold that defines the reference region at specific values (the fixing method), and having an examiner visually optimize the reference region at every examination (the visual optimization method). First, we assessed the reference region of each method visually; afterward, we quantitatively compared the SBR calculated by each method. In the visual assessment, the scores of the fixing method at 30% and of the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as a baseline (the standard method). The values of SBR showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.
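A much simplified stand-in for the SBR computation is sketched below; the actual Tossici-Bolt method works with volumes of interest and a contour eroded 20 mm inward, whereas this sketch only illustrates the threshold-defined reference region and the ratio itself (all names are illustrative):

```python
import numpy as np

def simple_sbr(image, striatal_mask, threshold_fraction=0.3):
    """Simplified specific binding ratio:
    (mean striatal - mean reference) / mean reference, with the reference
    region taken as voxels above a fraction of the image maximum,
    excluding the striatum."""
    reference = (image >= threshold_fraction * image.max()) & ~striatal_mask
    background = image[reference].mean()
    return (image[striatal_mask].mean() - background) / background
```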
Coordinates and intervals in graph-based reference genomes.
Rand, Knut D; Grytten, Ivar; Nederbragt, Alexander J; Storvik, Geir O; Glad, Ingrid K; Sandve, Geir K
2017-05-18
It has been proposed that future reference genomes should be graph structures in order to better represent the sequence diversity present in a species. However, there is currently no standard method to represent genomic intervals, such as the positions of genes or transcription factor binding sites, on graph-based reference genomes. We formalize offset-based coordinate systems on graph-based reference genomes and introduce methods for representing intervals on these reference structures. We show the advantage of our methods by representing genes on a graph-based representation of the newest assembly of the human genome (GRCh38) and its alternative loci for regions that are highly variable. More complex reference genomes, containing alternative loci, require methods to represent genomic data on these structures. Our proposed notation for genomic intervals makes it possible to fully utilize the alternative loci of the GRCh38 assembly and potential future graph-based reference genomes. We have made a Python package for representing such intervals on offset-based coordinate systems, available at https://github.com/uio-cels/offsetbasedgraph . An interactive web-tool using this Python package to visualize genes on a graph created from GRCh38 is available at https://github.com/uio-cels/genomicgraphcoords .
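An offset-based coordinate, as formalized in the paper, is essentially a (node, offset) pair, and an interval adds the path of nodes it traverses. A minimal sketch (the offsetbasedgraph package linked above is the real implementation; the class names here are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GraphPosition:
    """A position on a graph-based reference genome: a node (sequence
    block) identifier plus an offset into that node's sequence."""
    node_id: str
    offset: int

@dataclass(frozen=True)
class GraphInterval:
    """An interval given by start/end positions and the ordered tuple of
    nodes it passes through (needed when it spans several nodes)."""
    start: GraphPosition
    end: GraphPosition
    node_path: tuple

def interval_length(interval, node_lengths):
    """Sum the traversed node lengths, trimmed at both ends."""
    total = sum(node_lengths[n] for n in interval.node_path)
    total -= interval.start.offset
    total -= node_lengths[interval.end.node_id] - interval.end.offset
    return total
```

Alternative loci then become alternative node paths through the same coordinate system.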
Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min
2016-07-22
One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods differ in their theoretical bases, means of measurement, and causes of measurement error. Here, we present a set of porosities measured in Berea Sandstone samples by multiple methods, in particular with the adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlation among the different methods, suggesting that they are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when accompanying merits such as experimental convenience are taken into account.
Noise-free recovery of optodigital encrypted and multiplexed images.
Henao, Rodrigo; Rueda, Edgar; Barrera, John F; Torroba, Roberto
2010-02-01
We present a method that allows storing multiple encrypted data using digital holography and a joint transform correlator architecture with a controllable angle reference wave. In this method, the information is multiplexed by using a key and a different reference wave angle for each object. In the recovering process, the use of different reference wave angles prevents noise produced by the nonrecovered objects from being superimposed on the recovered object; moreover, the position of the recovered object in the exit plane can be fully controlled. We present the theoretical analysis and the experimental results that show the potential and applicability of the method.
NASA Astrophysics Data System (ADS)
Darbandi, Masoud; Abrar, Bagher
2018-01-01
The spectral-line weighted-sum-of-gray-gases (SLW) model is considered a modern global model that can be used to predict thermal radiation heat transfer within combustion fields. Past SLW model users have mostly employed the reference approach to calculate the local values of the gray gases' absorption coefficients. This classical reference approach assumes that the absorption spectra of gases at different thermodynamic conditions are scalable with the absorption spectrum of the gas at a reference thermodynamic state in the domain. However, this assumption is not reasonable in combustion fields where the gas temperature is very different from the reference temperature. Consequently, the results of the SLW model incorporating the classical reference approach, i.e., the classical SLW method, are highly sensitive to the reference temperature magnitude in non-isothermal combustion fields. To lessen this sensitivity, the current work combines the SLW model with a modified reference approach, which is a particular one among the eight possible reference approach forms reported recently by Solovjov et al. [DOI: 10.1016/j.jqsrt.2017.01.034, 2017]. The combination is called the "modified SLW method". This work shows that the modified reference approach, coupled with the SLW method, can provide more accurate total emissivity calculations than the classical reference approach, which is particularly helpful for more accurate calculation of radiation transfer in highly non-isothermal combustion fields. To demonstrate this, we use both the classical and modified SLW methods to calculate the radiation transfer in such fields and show that the modified SLW method almost eliminates the sensitivity of the results to the chosen reference temperature.
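At its core, any weighted-sum-of-gray-gases model (the SLW model included) computes total emissivity as a weighted sum over gray gases. A minimal sketch with illustrative coefficients (real SLW weights and absorption coefficients come from spectral databases and depend on the local and reference thermodynamic states):

```python
import math

def wsgg_emissivity(weights, kappas, path_length):
    """Total emissivity eps = sum_i a_i * (1 - exp(-kappa_i * L)):
    each gray gas i has weight a_i and absorption coefficient kappa_i."""
    return sum(a * (1.0 - math.exp(-k * path_length))
               for a, k in zip(weights, kappas))
```

The reference-approach question in the abstract is precisely how the kappa_i at local conditions are obtained from those at the reference state.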
Kerr, Kathleen F; Serikawa, Kyle A; Wei, Caimiao; Peters, Mette A; Bumgarner, Roger E
2007-01-01
The reference design is a practical and popular choice for microarray studies using two-color platforms. In the reference design, the reference RNA uses half of all array resources, leading investigators to ask: What is the best reference RNA? We propose a novel method for evaluating reference RNAs and present the results of an experiment that was specially designed to evaluate three common choices of reference RNA. We found no compelling evidence in favor of any particular reference. In particular, a commercial reference showed no advantage in our data. Our experimental design also enabled a new way to test the effectiveness of pre-processing methods for two-color arrays. Our results favor using intensity normalization and foregoing background subtraction. Finally, we evaluate the sensitivity and specificity of data quality filters, and we propose a new filter that can be applied to any experimental design and does not rely on replicate hybridizations.
Effect of defuzzification method of fuzzy modeling
NASA Astrophysics Data System (ADS)
Lapohos, Tibor; Buchal, Ralph O.
1994-10-01
Imprecision can arise in fuzzy relational modeling as a result of fuzzification, inference and defuzzification, and these three sources of imprecision are difficult to separate. We have determined through numerical studies that an important source of imprecision is the defuzzification stage, and this imprecision adversely affects the quality of the model output. The most widely used defuzzification algorithm is known as "center of area" (COA) or "center of gravity" (COG). In this paper, we show that this algorithm not only maps the near-limit values of the variables improperly but also introduces errors for middle-domain values of the same variables; furthermore, its behavior depends on the shape of the reference sets. We compare the COA method to the weighted average of cluster centers (WACC) procedure, in which the transformation is carried out based on the values of the cluster centers belonging to each of the reference membership functions instead of using the functions themselves. We show that this procedure is more effective and computationally much faster than the COA. The method is tested for a family of reference sets satisfying certain constraints: for any support value the sum of the reference membership function values equals one, and the peak values of the two marginal membership functions project to the boundaries of the universe of discourse. For all member sets of this family of reference sets, the defuzzification errors do not grow as the linguistic variables tend to their extreme values. In addition, the more reference sets are defined for a certain linguistic variable, the smaller the average defuzzification error becomes; in the case of triangle-shaped reference sets there is no defuzzification error at all. Finally, an alternative solution is provided that improves the performance of the COA method.
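The two defuzzification schemes compared above can be stated compactly: COA integrates over the aggregated membership function, while WACC needs only the cluster centers (reference-set peaks) and their activation degrees, which is why it is much cheaper. A sketch:

```python
import numpy as np

def centre_of_area(x, mu):
    """COA/COG defuzzification: centroid of the aggregated membership
    function mu sampled at the support points x."""
    x = np.asarray(x, float)
    mu = np.asarray(mu, float)
    return np.sum(x * mu) / np.sum(mu)

def wacc(cluster_centres, weights):
    """Weighted average of cluster centres: defuzzify using only the
    peak positions of the reference sets and their activation weights."""
    c = np.asarray(cluster_centres, float)
    w = np.asarray(weights, float)
    return np.sum(c * w) / np.sum(w)
```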
Method modification of the Legipid® Legionella fast detection test kit.
Albalat, Guillermo Rodríguez; Broch, Begoña Bedrina; Bono, Marisa Jiménez
2014-01-01
Legipid(®) Legionella Fast Detection is a test based on combined magnetic immunocapture and enzyme immunoassay (CEIA) for the detection of Legionella in water. The test is based on anti-Legionella antibodies immobilized on magnetic microspheres. The target microorganism is preconcentrated by filtration, and immunomagnetic analysis is applied to the preconcentrated water samples in a final test portion of 9 mL. The test kit was certified by the AOAC Research Institute as Performance Tested Method(SM) (PTM) No. 111101 in a PTM validation that certified the performance claims of the test method in comparison to the ISO reference method 11731-1998 and the revision 11731-2004, "Water Quality: Detection and Enumeration of Legionella pneumophila", in potable water, industrial water, and waste water. A modification of this test kit has been approved; it broadens the target analyte from L. pneumophila to Legionella species and adds an optical reader to the test method. In this study, 71 strains of Legionella spp. other than L. pneumophila were tested to determine their reactivity with the CEIA-based kit. All the strains of Legionella spp. tested by the CEIA test were confirmed positive by the reference standard method ISO 11731. This test (PTM 111101) has been modified to include a final optical reading, and a method comparison study was conducted to demonstrate the equivalence of this modification to the reference culture method. Two water matrices were analyzed, and the results show no statistically detectable difference between the test method and the reference culture method for the enumeration of Legionella spp. The relative level of detection was 93 CFU/volume examined (LOD50). For optical reading, the LOD was 40 CFU/volume examined and the LOQ was 60 CFU/volume examined. The results show that the Legipid Legionella Fast Detection test is equivalent to the reference culture method for the enumeration of Legionella spp.
Application of solid/liquid extraction for the gravimetric determination of lipids in royal jelly.
Antinelli, Jean-François; Davico, Renée; Rognone, Catherine; Faucon, Jean-Paul; Lizzani-Cuvelier, Louisette
2002-04-10
Gravimetric lipid determination is a major parameter for the characterization and the authentication of royal jelly quality. A solid/liquid extraction was compared to the reference method, which is based on liquid/liquid extraction. The amount of royal jelly and the time of the extraction were optimized in comparison to the reference method. Boiling/rinsing ratio and spread of royal jelly onto the extraction thimble were identified as critical parameters, resulting in good accuracy and precision for the alternative method. Comparison of reproducibility and repeatability of both methods associated with gas chromatographic analysis of the composition of the extracted lipids showed no differences between the two methods. As the intra-laboratory validation tests were comparable to the reference method, while offering rapidity and a decrease in amount of solvent used, it was concluded that the proposed method should be used with no modification of quality criteria and norms established for royal jelly characterization.
Alles, Susan; Peng, Linda X; Mozola, Mark A
2009-01-01
A modification to Performance-Tested Method (PTM) 070601, Reveal Listeria Test (Reveal), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there was a statistically significant difference in performance between the Reveal and reference culture [U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA/BAM) or U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS)] methods for only a single food in one trial (pasteurized crab meat) at the 27 h enrichment time point, with more positive results obtained with the FDA/BAM reference method. No foods showed statistically significant differences in method performance at the 30 h time point. Independent laboratory testing of 3 foods again produced a statistically significant difference in results for crab meat at the 27 h time point; otherwise results of the Reveal and reference methods were statistically equivalent. Overall, considering both internal and independent laboratory trials, sensitivity of the Reveal method relative to the reference culture procedures in testing of foods was 85.9% at 27 h and 97.1% at 30 h. Results from 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the Reveal method was more productive than the reference USDA-FSIS culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the Reveal method at the 24 h time point. Overall, sensitivity of the Reveal method at 24 h relative to that of the USDA-FSIS method was 153%. The Reveal method exhibited extremely high specificity, with only a single false-positive result in all trials combined for overall specificity of 99.5%.
Wang, Wenguang; Ma, Xiaoli; Guo, Xiaoyu; Zhao, Mingbo; Tu, Pengfei; Jiang, Yong
2015-09-18
To address the bottleneck caused by the shortage of reference standards for comprehensive quality control of traditional Chinese medicines (TCMs), a series of strategies was proposed, including using one single reference standard to determine multiple compounds (SSDMC), quantitative analysis by standardized reference extract (QASRE), and quantitative nuclear magnetic resonance spectroscopy (qNMR). Mahoniae Caulis was selected as an example to develop and validate these methods for the simultaneous determination of four alkaloids: columbamine, jatrorrhizine, palmatine, and berberine. Comprehensive comparisons among these methods and with the conventional external standard method (ESM) were carried out, and the relative expanded uncertainty of measurement was used for the first time to compare their credibility. The results showed that all three newly developed methods can accomplish the quantification accurately using only one purified reference standard, but each has its own advantages and disadvantages as well as a specific application scope, which are discussed in detail in this paper. Copyright © 2015 Elsevier B.V. All rights reserved.
[Standard sample preparation method for quick determination of trace elements in plastic].
Yao, Wen-Qing; Zong, Rui-Long; Zhu, Yong-Fa
2011-08-01
A reference sample of electronic-information-product plastic containing heavy metals at known concentrations was prepared by the masterbatch method; its repeatability and precision were determined, and reference sample preparation procedures were established. X-ray fluorescence (XRF) spectroscopy was used to determine the repeatability and uncertainty in the analysis of the heavy metals and bromine in the sample, and the working curve and measurement methods for the reference sample were worked out. The results showed that the method exhibits a very good linear relationship in the 200-2000 mg/kg concentration range for Hg, Pb, Cr and Br, and in the 20-200 mg/kg range for Cd, and the repeatability over six replicate analyses is good. In testing the circuit boards ICB288G and ICB288 from the Mitsubishi Heavy Industry Company, the results agreed with the recommended values.
Wang, Li; Ren, Yi; Gao, Yaozong; Tang, Zhen; Chen, Ken-Chung; Li, Jianfu; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Xia, James J.; Shen, Dinggang
2015-01-01
Purpose: A significant number of patients suffer from craniomaxillofacial (CMF) deformity and require CMF surgery in the United States. The success of CMF surgery depends not only on the surgical techniques but also on accurate surgical planning. However, surgical planning for CMF surgery is challenging due to the absence of a patient-specific reference model. Currently, the outcome of the surgery is often subjective and highly dependent on the surgeon’s experience. In this paper, the authors present an automatic method to estimate an anatomically correct reference shape of jaws for orthognathic surgery, a common type of CMF surgery. Methods: To estimate a patient-specific jaw reference model, the authors use a data-driven method based on sparse shape composition. Given a dictionary of normal subjects, the authors first use the sparse representation to represent the midface of a patient by the midfaces of the normal subjects in the dictionary. Then, the derived sparse coefficients are used to reconstruct a patient-specific reference jaw shape. Results: The authors have validated the proposed method on both synthetic and real patient data. Experimental results show that the authors’ method can effectively reconstruct the normal shape of the jaw for patients. Conclusions: The authors have presented a novel method to automatically estimate a patient-specific reference model for patients suffering from CMF deformity. PMID:26429255
Solving multi-objective optimization problems in conservation with the reference point method
Dujardin, Yann; Chadès, Iadine
2018-01-01
Managing the biodiversity extinction crisis requires wise decision-making processes able to account for the limited resources available. In most decision problems in conservation biology, several conflicting objectives have to be taken into account. Most methods used in conservation either provide suboptimal solutions or use strong assumptions about the decision-maker’s preferences. Our paper reviews some of the existing approaches to solve multi-objective decision problems and presents new multi-objective linear programming formulations of two multi-objective optimization problems in conservation, allowing the use of a reference point approach. Reference point approaches solve multi-objective optimization problems by interactively representing the preferences of the decision-maker with a point in the criteria (objectives) space, called the reference point. We modelled and solved the following two problems in conservation: a dynamic multi-species management problem under uncertainty and a spatial allocation resource management problem. Results show that the reference point method outperforms classic methods while illustrating the use of an interactive methodology for solving combinatorial problems with multiple objectives. The method is general and can be adapted to a wide range of ecological combinatorial problems. PMID:29293650
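The interactive step of a reference point method can be illustrated with an augmented Chebyshev achievement scalarizing function: the decision-maker supplies a reference point in objective space, and the solution minimizing the scalarized deviation from it is preferred. A sketch for maximization problems over a finite candidate set (the paper solves linear programs instead; the names here are illustrative):

```python
import numpy as np

def achievement(objectives, reference, weights, rho=1e-3):
    """Augmented Chebyshev achievement function (smaller is better):
    max_i w_i * (r_i - f_i), plus a small sum term to break ties
    between weakly dominated solutions."""
    d = (np.asarray(reference, float) - np.asarray(objectives, float)) \
        * np.asarray(weights, float)
    return d.max(axis=-1) + rho * d.sum(axis=-1)

def best_by_reference_point(candidates, reference, weights):
    """Index of the candidate objective vector preferred under the
    given reference point."""
    return int(np.argmin(achievement(candidates, reference, weights)))
```

Moving the reference point and re-solving is what makes the approach interactive.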
Evaluation of home allergen sampling devices.
Sercombe, J K; Liu-Brennan, D; Garcia, M L; Tovey, E R
2005-04-01
Simple, inexpensive methods of sampling from allergen reservoirs are necessary for large-scale studies or low-cost householder-operated allergen measurement. We tested two commercial devices: the Indoor Biotechnologies Mitest Dust Collector and the Drager Bio-Check Allergen Control; two devices of our own design: the Electrostatic Cloth Sampler (ECS) and the Press Tape Sampler (PTS); and a Vacuum Sampler as used in many allergen studies (our Reference Method). Devices were used to collect dust mite allergen samples from 16 domestic carpets. Results were examined for correlations between the sampling methods. With mite allergen concentration expressed as microg/g, the Mitest, the ECS and the PTS correlated with the Reference Method but not with each other. When mite allergen concentration was expressed as microg/m2 the Mitest and the ECS correlated with the Reference Method but the PTS did not. In the high allergen conditions of this study, the Drager Bio-Check did not relate to any methods. The Mitest Dust Collector, the ECS and the PTS show performance consistent with the Reference Method. Many techniques can be used to collect dust mite allergen samples. More investigation is needed to prove any method as superior for estimating allergen exposure.
Zhang, Ting-Ting; Wu, Yi; Hang, Tai-Jun
2009-05-01
To establish a stable and repeatable HPLC fingerprint standard and evaluate the flavonoids from Houttuynia cordata qualitatively and quantitatively. HPLC separation was performed on a C18 column with a methanol-0.1% phosphoric acid mixture as mobile phase in gradient elution mode. The fingerprint reference was chosen as one of the most typical chromatograms and compared with the other samples by the Cosine and Relative Euclid Distance methods, so that the chromatographic fingerprints of flavonoids from Houttuynia cordata were evaluated by constituents and contents, respectively. Fourteen mutual peaks were fixed in the HPLC fingerprint of flavonoids from Houttuynia cordata. Validation tests gave good results when the quercitrin peak was set as the reference peak for calculating the relative retention times and areas of the other peaks in the chromatograms, with RSD less than 0.2% and 5.0%, respectively. The linear range for quercitrin was 1.07-83.4 microg/mL (r=0.9999) and the average recovery was 100.3%. The method shows good repeatability, ruggedness and reliability. Together with the established reference fingerprint, the evaluation system comprising the Cosine and Relative Euclid Distance methods lays a dependable foundation for controlling the quality of Houttuynia cordata.
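The Cosine similarity between a sample chromatogram and the reference fingerprint is straightforward once both are represented as peak-area vectors over the mutual peaks. The "Relative Euclid Distance" below is one plausible normalized form, since the abstract does not give its exact definition:

```python
import numpy as np

def cosine_similarity(fp, ref):
    """Cosine of the angle between two fingerprint vectors
    (1.0 means identical peak-area pattern)."""
    fp = np.asarray(fp, float)
    ref = np.asarray(ref, float)
    return fp @ ref / (np.linalg.norm(fp) * np.linalg.norm(ref))

def relative_euclid_similarity(fp, ref):
    """One plausible 'relative Euclid distance' similarity:
    1 - ||fp - ref|| / (||fp|| + ||ref||), ranging over [0, 1]."""
    fp = np.asarray(fp, float)
    ref = np.asarray(ref, float)
    return 1.0 - np.linalg.norm(fp - ref) / (np.linalg.norm(fp) + np.linalg.norm(ref))
```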
NASA Astrophysics Data System (ADS)
Wang, Jian; Meng, Xiaohong; Zheng, Wanqiu
2017-10-01
Elastic-wave reverse-time migration of inhomogeneous anisotropic media is becoming a hotspot of current research. To ensure the accuracy of the migration, it is necessary to separate the wavefield into P- and S-wave modes before migration. For inhomogeneous media, the Kelvin-Christoffel equation can be solved in the wave-number domain using the anisotropic parameters of the mesh nodes, and the polarization vectors of the P- and S-waves at each node can be calculated and transformed into the space domain to obtain quasi-differential operators. However, this method is computationally expensive, especially the construction of the quasi-differential operators. To reduce the computational cost, wave-mode separation can be realized in the mixed domain on the basis of reference models in the wave-number domain, but conventional interpolation methods and reference model selection methods reduce the separation accuracy. To further improve the separation, this paper introduces an inverse-distance interpolation method involving position shading and uses a random-points scheme for reference model selection. This method adds a spatial weight coefficient K, which reflects the orientation of the reference point, to the conventional IDW algorithm, so that the interpolation takes into account the combined effects of the distance and azimuth of the reference points. Numerical simulation shows that the proposed method separates the wave modes more accurately using fewer reference models and has good practical value.
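Conventional inverse-distance weighting (IDW) uses distance alone; the abstract's azimuth-aware variant adds a directional factor. Since the exact form of the spatial weight coefficient K is not given in the abstract, the sketch below implements one plausible variant that down-weights reference points sitting in a crowded angular sector:

```python
import numpy as np

def idw_with_azimuth(points, values, target, k_azimuth=0.5, power=2.0):
    """Inverse-distance weighting with a directional factor so that
    azimuthal coverage matters as well as distance. This is an assumed
    form, not the paper's; it assumes the target does not coincide with
    any reference point (which would make a distance zero)."""
    points = np.asarray(points, float)
    values = np.asarray(values, float)
    target = np.asarray(target, float)
    diff = points - target
    dist = np.linalg.norm(diff, axis=1)
    w = 1.0 / dist ** power
    # Angular spread relative to the mean direction of the references:
    # points aligned with the crowd get a smaller directional factor
    ang = np.arctan2(diff[:, 1], diff[:, 0])
    spread = np.abs(np.sin(ang - ang.mean()))
    w *= (1.0 - k_azimuth) + k_azimuth * spread
    return float((w * values).sum() / w.sum())
```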
Garrido, M; Larrechi, M S; Rius, F X
2007-03-07
This paper reports the validation of the results obtained by combining near infrared spectroscopy and multivariate curve resolution-alternating least squares (MCR-ALS) and using high performance liquid chromatography as a reference method, for the model reaction of phenylglycidylether (PGE) and aniline. The results are obtained as concentration profiles over the reaction time. The trueness of the proposed method has been evaluated in terms of lack of bias. The joint test for the intercept and the slope showed that there were no significant differences between the profiles calculated spectroscopically and the ones obtained experimentally by means of the chromatographic reference method at an overall level of confidence of 5%. The uncertainty of the results was estimated by using information derived from the process of assessment of trueness. Such operational aspects as the cost and availability of instrumentation and the length and cost of the analysis were evaluated. The method proposed is a good way of monitoring the reactions of epoxy resins, and it adequately shows how the species concentration varies over time.
Transport equations of electrodiffusion processes in the laboratory reference frame.
Garrido, Javier
2006-02-23
The transport equations of electrodiffusion processes use three reference frames for defining the fluxes: Fick's reference in diffusion, solvent-fixed reference in transference numbers, and laboratory fluxes in electric conductivity. The convenience of using only one reference frame is analyzed here from the point of view of the thermodynamics of irreversible processes. A relation between the fluxes of ions and solvent and the electric current density is deduced first from a mass and volume balance. This is then used to show that (i) the laboratory and Fick's diffusion coefficients are identical and (ii) the transference numbers of both the solvent and the ion in the laboratory reference frame are related. Finally, four experimental methods for the measurement of ion transference numbers are analyzed critically. New expressions for evaluating transference numbers for the moving boundary method and the chronopotentiometry technique are deduced. It is concluded that the ion transport equation in the laboratory reference frame plays a key role in the description of electrodiffusion processes.
Modelling of Vortex-Induced Loading on a Single-Blade Installation Setup
NASA Astrophysics Data System (ADS)
Skrzypiński, Witold; Gaunaa, Mac; Heinz, Joachim
2016-09-01
Vortex-induced integral loading fluctuations on a single suspended blade at various inflow angles were modelled in the present work by means of stochastic modelling methods. The reference time series were obtained from 3D DES CFD computations carried out on the DTU 10 MW reference wind turbine blade. In the reference time series, the flapwise force component, Fx, showed both higher absolute values and higher variation than the chordwise force component, Fz, for every inflow angle considered. For this reason, the present paper focused on modelling Fx rather than Fz, although Fz could be modelled using exactly the same procedure. The reference time series differed significantly depending on the inflow angle, which made modelling all of them with a single, relatively simple engineering model challenging. To find the model parameters, optimizations were carried out based on the root-mean-square error between the single-sided amplitude spectra of the reference and modelled time series. To model the well-defined frequency peaks present at certain inflow angles, optimized sine functions were superposed on the stochastically modelled time series. The results showed that the modelling accuracy varied with the inflow angle. Nonetheless, the modelled and reference time series showed satisfactory general agreement in terms of their visual and frequency characteristics, indicating that the proposed method is suitable for modelling loading fluctuations on suspended blades.
Pérez de Isla, Leopoldo; Casanova, Carlos; Almería, Carlos; Rodrigo, José Luis; Cordeiro, Pedro; Mataix, Luis; Aubele, Ada Lia; Lang, Roberto; Zamorano, José Luis
2007-12-01
Several studies have shown wide variability among different methods for determining the valve area in patients with rheumatic mitral stenosis. Our aim was to evaluate whether 3D-echo planimetry is more accurate than the Gorlin method for measuring the valve area. Twenty-six patients with mitral stenosis underwent 2D and 3D echocardiographic examinations and catheterization. The valve area was estimated by different methods: the median value of the mitral valve area, obtained from the measurements of three classical non-invasive methods (2D planimetry, pressure half-time, and the PISA method), was used as the reference and compared with 3D-echo planimetry and Gorlin's method. Our results showed that the accuracy of 3D-echo planimetry is superior to that of the Gorlin method for the assessment of mitral valve area. 3D-echo planimetry may thus be a better reference method than the Gorlin method for assessing the severity of rheumatic mitral stenosis.
Khan, Muhammad Khalid; Khan, Muhammad Farid; Mustafa, Ghulam; Sualah, Mohammed
2012-01-01
Ciprofloxacin was given orally to 28 healthy male volunteers as a single 500 mg dose. Plasma samples were collected at intervals between 0 and 12 h and analyzed both by high-pressure liquid chromatography (HPLC) and by a microbiological assay. The detection limits (LOD) were 0.02 μg/ml and 0.1 μg/ml for the two methods, respectively; the coefficients of determination (R²) were 0.9995 and 0.9918 in plasma, and the limits of quantitation (LOQ) were 0.02 and 0.5 μg/ml. By HPLC, the mean maximum concentrations were 2.68 μg/ml at 1.5 h for the test formulation and 2.43 μg/ml at 2 h for the reference. The plasma concentrations measured by microbiological assay were 3.95 μg/ml (mean ± SE) at 1 h and 3.80 μg/ml (mean ± SE) at 1 h for the test and reference tablets, respectively. The concentrations measured by the microbiological method were markedly higher than the HPLC values, indicating the presence of antimicrobially active metabolites. By HPLC, the mean ± SE values of the total area under the curve (AUC 0-∞) were 13.11 and 11.91 h.mg/l for the test and reference tablets, respectively; clearance was 44.91 and 48.42 l/h, respectively; the elimination rate constant Kel was 0.17 for the test and 0.15 for the reference; the absorption half-life was 0.67 h for the test and 1.04 h for the reference; and the mean residence time was 5.48 h for the test and 5.49 h for the reference.
By microbiological assay, the mean ± SE values of AUC 0-∞ were 22.11 and 19.33 h.mg/l for the test and reference tablets, respectively; clearance was 29.02 and 31.63 l/h, respectively; Kel was 0.21 for the test and 0.20 for the reference; the absorption half-life was 0.86 h for the test and 0.56 h for the reference; and the mean residence time was 5.27 h for the test and 4.67 h for the reference. A significant difference was observed between the two methods.
Reveal Listeria 2.0 test for detection of Listeria spp. in foods and environmental samples.
Alles, Susan; Curry, Stephanie; Almy, David; Jagadeesan, Balamurugan; Rice, Jennifer; Mozola, Mark
2012-01-01
A Performance Tested Method validation study was conducted for a new lateral flow immunoassay (Reveal Listeria 2.0) for detection of Listeria spp. in foods and environmental samples. Results of inclusivity testing showed that the test detects all species of Listeria, with the exception of L. grayi. In exclusivity testing conducted under nonselective growth conditions, all non-listeriae tested produced negative Reveal assay results, except for three strains of Lactobacillus spp. However, these lactobacilli are inhibited by the selective Listeria Enrichment Single Step broth enrichment medium used with the Reveal method. Six foods were tested in parallel by the Reveal method and the U.S. Food and Drug Administration/Bacteriological Analytical Manual (FDA/BAM) reference culture procedure. Considering data from both internal and independent laboratory trials, overall sensitivity of the Reveal method relative to that of the FDA/BAM procedure was 101%. Four foods were tested in parallel by the Reveal method and the U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) reference culture procedure. Overall sensitivity of the Reveal method relative to that of the USDA-FSIS procedure was 98.2%. There were no statistically significant differences in the number of positives obtained by the Reveal and reference culture procedures in any food trials. In testing of swab or sponge samples from four types of environmental surfaces, sensitivity of Reveal relative to that of the USDA-FSIS reference culture procedure was 127%. For two surface types, differences in the number of positives obtained by the Reveal and reference methods were statistically significant, with more positives by the Reveal method in both cases. Specificity of the Reveal assay was 100%, as there were no unconfirmed positive results obtained in any phase of the testing. 
Results of ruggedness experiments showed that the Reveal assay is tolerant of modest deviations in test sample volume and device incubation time.
Hua, Yang; Kaplan, Shannon; Reshatoff, Michael; Hu, Ernie; Zukowski, Alexis; Schweis, Franz; Gin, Cristal; Maroni, Brett; Becker, Michael; Wisniewski, Michele
2012-01-01
The Roka Listeria Detection Assay was compared to the reference culture methods for nine select foods and three select surfaces. The Roka method used Half-Fraser Broth for enrichment at 35 +/- 2 degrees C for 24-28 h. Comparison of Roka's method to the reference methods requires an unpaired approach. Each method had a total of 545 samples inoculated with a Listeria strain; each food and surface was inoculated with a different strain of Listeria at two different levels per method. For the dairy products (Brie cheese, whole milk, and ice cream), our method was compared to AOAC Official Method(SM) 993.12. The ready-to-eat meats (deli chicken, cured ham, chicken salad, and hot dogs) and environmental surfaces (sealed concrete, stainless steel, and plastic) were compared to the U.S. Department of Agriculture/Food Safety and Inspection Service-Microbiology Laboratory Guidebook (USDA/FSIS-MLG) method MLG 8.07. Cold-smoked salmon and romaine lettuce were compared to the U.S. Food and Drug Administration/Bacteriological Analytical Manual, Chapter 10 (FDA/BAM) method. Roka's method had 358 positives out of 545 total inoculated samples, compared to 332 positives for the reference methods. Overall, probability-of-detection analysis of the results showed better than or equivalent performance compared to the reference methods.
Intercomparison of Lab-Based Soil Water Extraction Methods for Stable Water Isotope Analysis
NASA Astrophysics Data System (ADS)
Pratt, D.; Orlowski, N.; McDonnell, J.
2016-12-01
The effect of pore-water extraction technique on the resultant isotopic signature is poorly understood. Here we present results of an intercomparison of five common lab-based soil-water extraction techniques: high-pressure mechanical squeezing, centrifugation, direct vapor equilibration, microwave extraction, and cryogenic extraction. We applied the five extraction methods to two physicochemically different standard soil types (silty sand and clayey loam) that were oven-dried and rewetted with water of known isotopic composition at three different gravimetric water contents (8, 20, and 30%). We tested the null hypothesis that all extraction techniques would provide the same isotopic result, independent of soil type and water content. Our results showed that the extraction technique had a significant effect on the soil-water isotopic composition. Each method exhibited deviations from the spiked reference water, with soil type and water content showing a secondary effect. Cryogenic extraction showed the largest deviations from the reference water, whereas mechanical squeezing and centrifugation provided the closest match for both soil types. We also compared results for each extraction technique that produced liquid water on both OA-ICOS and IRMS instruments; the differences between them were negligible.
Postural stabilization after single-leg vertical jump in individuals with chronic ankle instability.
Nunes, Guilherme S; de Noronha, Marcos
2016-11-01
To investigate the impact that different ways of defining reference balance can have on the analysis of time to stabilization (TTS), and secondarily, to investigate the difference in TTS between people with chronic ankle instability (CAI) and healthy controls. Cross-sectional study. Laboratory. Fifty recreational athletes (25 CAI, 25 controls). TTS of the center of pressure (CoP) after a maximal single-leg vertical jump, using the single-leg stance, the pre-jump period, and the post-jump period as reference periods; and the CoP variability during the reference periods. The post-jump reference period gave lower TTS values in the anterior-posterior (AP) direction than single-leg stance (P = 0.001) and the pre-jump period (P = 0.002). For TTS in the medio-lateral (ML) direction, the post-jump reference period showed lower TTS than single-leg stance (P = 0.01). We found no difference between the CAI and control groups in TTS for any direction. The CAI group showed more CoP variability than the control group in the single-leg stance reference period for both directions. Different reference periods produce different TTS results. There is no difference in TTS after a maximum vertical jump between groups. People with CAI have more CoP variability in both directions during single-leg stance. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Hua; Zeng, Luan
2017-11-01
Binocular stereoscopic vision can be used for close-range observation of space targets from space-based platforms. To solve the problem that a traditional binocular vision system cannot work normally after being disturbed, an online calibration method for a binocular stereo measuring camera with a self-reference is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object at the edge of the main optical path, imaged with the target on the same focal plane, which is equivalent to placing a standard reference inside the binocular imaging optical system. When the position of the system or the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the imaging plane while the physical position of the standard reference object does not change, so the camera's external parameters can be re-calibrated from the visual relationship to the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in elevation. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-interference ability of the system.
Browning, Brian L.; Browning, Sharon R.
2009-01-01
We present methods for imputing data for ungenotyped markers and for inferring haplotype phase in large data sets of unrelated individuals and parent-offspring trios. Our methods make use of known haplotype phase when it is available, and our methods are computationally efficient so that the full information in large reference panels with thousands of individuals is utilized. We demonstrate that substantial gains in imputation accuracy accrue with increasingly large reference panel sizes, particularly when imputing low-frequency variants, and that unphased reference panels can provide highly accurate genotype imputation. We place our methodology in a unified framework that enables the simultaneous use of unphased and phased data from trios and unrelated individuals in a single analysis. For unrelated individuals, our imputation methods produce well-calibrated posterior genotype probabilities and highly accurate allele-frequency estimates. For trios, our haplotype-inference method is four orders of magnitude faster than the gold-standard PHASE program and has excellent accuracy. Our methods enable genotype imputation to be performed with unphased trio or unrelated reference panels, thus accounting for haplotype-phase uncertainty in the reference panel. We present a useful measure of imputation accuracy, allelic R2, and show that this measure can be estimated accurately from posterior genotype probabilities. Our methods are implemented in version 3.0 of the BEAGLE software package. PMID:19200528
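The allelic R² accuracy measure described above can be sketched as the squared correlation between true allele counts and the expected allele dosages computed from posterior genotype probabilities. This is a simplified illustration; BEAGLE's exact estimator (which works without observed truth) may differ, and `dosage_r2` is a hypothetical helper name.

```python
import numpy as np

def dosage_r2(true_genotypes, posterior_probs):
    """Squared correlation between true genotypes (0/1/2 minor-allele
    counts) and expected dosages from posterior genotype probabilities."""
    # expected allele dosage: sum_g g * P(genotype = g)
    dosage = np.asarray(posterior_probs, float) @ np.array([0.0, 1.0, 2.0])
    r = np.corrcoef(np.asarray(true_genotypes, float), dosage)[0, 1]
    return r * r
```

A value of 1 indicates perfectly informative posteriors; values near 0 indicate that imputed dosages carry little information about the true genotypes.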
Alles, Susan; Peng, Linda X; Mozola, Mark A
2009-01-01
A modification to Performance-Tested Method 010403, GeneQuence Listeria Test (DNAH method), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C, and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there were statistically significant differences in method performance between the DNAH method and reference culture procedures for only 2 foods (pasteurized crab meat and lettuce) at the 27 h enrichment time point and for only a single food (pasteurized crab meat) in one trial at the 30 h enrichment time point. Independent laboratory testing with 3 foods showed statistical equivalence between the methods for all foods, and results support the findings of the internal trials. Overall, considering both internal and independent laboratory trials, sensitivity of the DNAH method relative to the reference culture procedures was 90.5%. Results of testing 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the DNAH method was more productive than the reference U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the DNAH method at the 24 h time point. Overall, sensitivity of the DNAH method at 24 h relative to that of the USDA-FSIS method was 152%. 
The DNAH method exhibited extremely high specificity, with only 1% false-positive reactions overall.
Howie, Bryan N.; Donnelly, Peter; Marchini, Jonathan
2009-01-01
Genotype imputation methods are now being widely used in the analysis of genome-wide association studies. Most imputation analyses to date have used the HapMap as a reference dataset, but new reference panels (such as controls genotyped on multiple SNP chips and densely typed samples from the 1,000 Genomes Project) will soon allow a broader range of SNPs to be imputed with higher accuracy, thereby increasing power. We describe a genotype imputation method (IMPUTE version 2) that is designed to address the challenges presented by these new datasets. The main innovation of our approach is a flexible modelling framework that increases accuracy and combines information across multiple reference panels while remaining computationally feasible. We find that IMPUTE v2 attains higher accuracy than other methods when the HapMap provides the sole reference panel, but that the size of the panel constrains the improvements that can be made. We also find that imputation accuracy can be greatly enhanced by expanding the reference panel to contain thousands of chromosomes and that IMPUTE v2 outperforms other methods in this setting at both rare and common SNPs, with overall error rates that are 15%–20% lower than those of the closest competing method. One particularly challenging aspect of next-generation association studies is to integrate information across multiple reference panels genotyped on different sets of SNPs; we show that our approach to this problem has practical advantages over other suggested solutions. PMID:19543373
Quality evaluation of no-reference MR images using multidirectional filters and image statistics.
Jang, Jinseong; Bang, Kihun; Jang, Hanbyol; Hwang, Dosik
2018-09-01
This study aimed to develop a fully automatic, no-reference image-quality assessment (IQA) method for MR images. New quality-aware features were obtained by applying multidirectional filters to MR images and examining the feature statistics. A histogram of these features was then fitted to a generalized Gaussian distribution function, whose shape parameters take different values depending on the type of distortion in the MR image. Standard feature statistics were established through a training process based on high-quality MR images without distortion. Subsequently, the feature statistics of a test MR image were calculated and compared with the standards; the quality score was calculated as the difference between the shape parameters of the test image and the undistorted standard images. The proposed IQA method showed a >0.99 correlation with conventional full-reference assessment methods and yielded the best performance among no-reference IQA methods for images containing six types of synthetic, MR-specific distortions. In addition, for authentically distorted images, the proposed method yielded the highest correlation with subjective assessments by human observers, demonstrating its superior performance over other no-reference IQAs. The proposed IQA was designed to consider MR-specific features and outperformed other no-reference IQAs designed mainly for photographic images. Magn Reson Med 80:914-924, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
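Fitting a feature histogram to a generalized Gaussian distribution (GGD) and reading off its shape parameter, as done above, is commonly implemented with a moment-ratio estimator. A minimal sketch under that assumption (the paper's exact fitting procedure is not specified in the abstract): the ratio E[x²]/(E|x|)² equals Γ(1/γ)Γ(3/γ)/Γ(2/γ)² for a zero-mean GGD with shape γ, so γ can be recovered by inverting that relation on a grid.

```python
import numpy as np
from math import gamma

def ggd_shape(x):
    """Moment-ratio estimate of the generalized Gaussian shape parameter.
    Gaussian data gives ~2, Laplacian data gives ~1."""
    x = np.asarray(x, float)
    rho = np.mean(x ** 2) / np.mean(np.abs(x)) ** 2   # sample E[x^2]/(E|x|)^2
    grid = np.arange(0.2, 10.0, 0.001)
    # theoretical ratio r(gamma) = G(1/g) * G(3/g) / G(2/g)^2
    r = np.array([gamma(1 / g) * gamma(3 / g) / gamma(2 / g) ** 2 for g in grid])
    return float(grid[np.argmin(np.abs(r - rho))])
```

A sharper (e.g. Laplacian-like) feature histogram yields a smaller shape parameter than a Gaussian one, which is the kind of shift a distortion-dependent quality score can exploit.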
NASA Astrophysics Data System (ADS)
Wu, Jing; Huang, Junbing; Wu, Hanping; Gu, Hongcan; Tang, Bo
2014-12-01
To verify the validity of the regional reference grating method for solving the strain/temperature cross-sensitivity problem in an actual ship structural health monitoring system, and to meet engineering requirements for the sensitivity coefficients of the method, national standard measurement equipment was used to calibrate the temperature sensitivity coefficient of the selected FBG temperature sensor and the strain sensitivity coefficient of the FBG strain sensor; the thermal expansion sensitivity coefficient of shipbuilding steel was calibrated with a water bath method. The calibration results show that the temperature sensitivity coefficient of the FBG temperature sensor is 28.16 pm/°C within -10 to 30°C with a linearity greater than 0.999; the strain sensitivity coefficient of the FBG strain sensor is 1.32 pm/με within -2900 to 2900 με with a linearity of nearly 1; and the thermal expansion sensitivity coefficient of the shipbuilding steel is 23.438 pm/°C within 30 to 90°C with a linearity greater than 0.998. Finally, the calibration parameters were used for temperature compensation in an actual ship structural health monitoring system. The results show that the temperature compensation is effective and that the calibration parameters meet the engineering requirements, providing an important reference for the wide engineering application of fiber Bragg grating sensors.
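As a rough illustration of how the calibrated coefficients above could be used for temperature compensation, the following sketch applies a generic reference-grating scheme: the unstrained reference FBG infers the temperature change, whose contribution is then subtracted from the strain grating's wavelength shift. The exact compensation formula used in the paper is an assumption, and `compensated_strain` is a hypothetical helper.

```python
def compensated_strain(dlam_strain_pm, dlam_temp_pm,
                       k_temp=28.16,      # pm/degC, FBG temperature sensor (from the abstract)
                       k_strain=1.32,     # pm/microstrain, FBG strain sensor
                       k_thermal=23.438): # pm/degC, thermal expansion coefficient of the steel
    """Temperature-compensated strain (microstrain) from the wavelength
    shifts (pm) of a strain FBG and a co-located reference temperature FBG."""
    dT = dlam_temp_pm / k_temp        # temperature change inferred from the reference grating
    thermal_shift = k_thermal * dT    # strain-grating shift attributed to temperature effects
    return (dlam_strain_pm - thermal_shift) / k_strain
```

Under this assumed scheme, a pure temperature change produces zero compensated strain, which is the point of the regional reference grating method.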
Long-Term Variations of the EOP and ICRF2
NASA Technical Reports Server (NTRS)
Zharov, Vladimir; Sazhin, Mikhail; Sementsov, Valerian; Sazhina, Olga
2010-01-01
We analyzed the time series of the coordinates of the ICRF radio sources and show that some of the sources, including the defining sources, exhibit significant apparent motion. The stability of the celestial reference frame is provided by a no-net-rotation condition applied to the defining sources; in our case this condition leads to a rotation of the frame axes with time, and we calculated the effect of this rotation on the Earth orientation parameters (EOP). To improve the stability of the celestial reference frame, we suggest a new method for selecting the defining sources, consisting of two criteria: the first we call cosmological and the second kinematical. It is shown that a subset of the ICRF sources selected according to the cosmological criteria provides the most stable reference frame for the next decade.
A structural SVM approach for reference parsing.
Zhang, Xiaoli; Zou, Jie; Le, Daniel X; Thoma, George R
2011-06-09
Automated extraction of bibliographic data, such as article titles, author names, abstracts, and references, is essential to the affordable creation of large citation databases. References, typically appearing at the end of journal articles, can also provide valuable information for extracting other bibliographic data. Therefore, parsing individual references to extract author, title, journal, year, etc. is sometimes a necessary preprocessing step in building citation-indexing systems. The regular structure of references enables us to treat reference parsing as a sequence learning problem and to study the structural Support Vector Machine (structural SVM), a newly developed structured learning algorithm, for parsing references. In this study, we implemented structural SVM and used two types of contextual features to compare structural SVM with conventional SVM. Both methods achieve above 98% token classification accuracy and above 95% overall chunk-level accuracy for reference parsing. We also compared SVM and structural SVM to Conditional Random Fields (CRF). The experimental results show that structural SVM and CRF achieve similar accuracies at the token and chunk levels. When only basic observation features are used for each token, structural SVM achieves higher performance than SVM since it utilizes the contextual label features. However, when the contextual observation features from neighboring tokens are combined, SVM performance improves greatly, and it approaches that of structural SVM once the second-order contextual observation features are added. The comparison of these two methods with CRF using the same set of binary features shows that both structural SVM and CRF perform better than SVM, indicating their stronger sequence learning ability in reference parsing.
Suka, Machi; Yoshida, Katsumi; Kawai, Tadashi; Aoki, Yoshikazu; Yamane, Noriyuki; Yamauchi, Kuniaki
2005-07-01
To determine age- and sex-specific reference intervals for 10 health examination items in Japanese adults. Health examination data were accumulated from 24 prefectural health service associations affiliated with the Japan Association of Health Service. Those who were non-smokers, drank on fewer than 7 days/week, and had a body mass index of 18.5-24.9 kg/m² were sampled as a reference population (n = 737,538; 224,947 men and 512,591 women). After classification by age and sex, reference intervals for 10 health examination items (systolic blood pressure, diastolic blood pressure, total cholesterol, triglyceride, glucose, uric acid, AST, ALT, gamma-GT, and hemoglobin) were estimated using parametric and nonparametric methods. For every item except hemoglobin, men had higher reference intervals than women. Systolic blood pressure, total cholesterol, and glucose showed an upward trend with increasing age; hemoglobin showed a downward trend with increasing age; triglyceride, ALT, and gamma-GT peaked in middle age. Overall, parametric estimates gave narrower reference intervals than nonparametric estimates. Reference intervals vary with age and sex, and age- and sex-specific reference intervals may contribute to better assessment of health examination data.
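The nonparametric estimation mentioned above is conventionally the central-percentile method: a 95% reference interval runs from the 2.5th to the 97.5th percentile of the reference population. A minimal sketch (the study's exact estimator and any smoothing are not specified in the abstract):

```python
import numpy as np

def reference_interval(values, coverage=0.95):
    """Nonparametric reference interval: the central `coverage` fraction
    of the reference population, by the percentile method."""
    tail = (1.0 - coverage) / 2.0
    values = np.asarray(values, float)
    return float(np.quantile(values, tail)), float(np.quantile(values, 1.0 - tail))
```

In practice this would be applied separately within each age-sex stratum, which is what makes the resulting intervals age- and sex-specific.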
Torque ripple reduction of brushless DC motor based on adaptive input-output feedback linearization.
Shirvani Boroujeni, M; Markadeh, G R Arab; Soltani, J
2017-09-01
Torque ripple reduction in Brushless DC Motors (BLDCs) is an interesting subject in variable-speed AC drives. In this paper, a mathematical expression for the torque ripple harmonics is first obtained. Then, for a non-ideal BLDC motor with known harmonic content of the back-EMF, the calculation of the reference current amplitudes required to eliminate selected torque ripple harmonics is reviewed. To inject the reference harmonic currents into the motor windings, an Adaptive Input-Output Feedback Linearization (AIOFBL) control is proposed, which generates the reference voltages for the three-phase voltage source inverter in the stationary reference frame. Experimental results are presented to show the capability and validity of the proposed control method and are compared with the results of vector control in the Multi-Reference Frame (MRF) and the Pseudo-Vector Control (P-VC) method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yost, Shane R.; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720
2016-08-07
In this paper we introduce two size-consistent forms of the non-orthogonal configuration interaction with second-order Møller-Plesset perturbation theory method, NOCI-MP2. We show that the original NOCI-MP2 formulation [S. R. Yost, T. Kowalczyk, and T. Van Voorhis, J. Chem. Phys. 139, 174104 (2013)], which is a perturb-then-diagonalize multi-reference method, is not size consistent, and that this causes significant errors in large systems such as the linear acenes. By contrast, the size-consistent versions of the method give satisfactory results for singlet and triplet excited states when compared to other multi-reference methods that include dynamic correlation, while requiring a significantly smaller number of determinants to yield similar levels of accuracy. These results show the promise of the NOCI-MP2 method, though work still needs to be done to create a more consistent black-box approach to computing the determinants that comprise the many-electron NOCI basis.
Automatic dynamic range adjustment for ultrasound B-mode imaging.
Lee, Yeonhwa; Kang, Jinbum; Yoo, Yangmo
2015-02-01
In medical ultrasound imaging, dynamic range (DR) is defined as the difference between the maximum and minimum values of the displayed signal, and it is one of the most essential parameters determining image quality. Typically, DR is given a fixed value and adjusted manually by operators, which lowers clinical productivity and introduces user dependency; furthermore, in 3D ultrasound imaging, DR values cannot be adjusted during 3D data acquisition. A histogram matching method, which equalizes the histogram of an input image to that of a reference image, can be applied to determine the DR value, but it can lead to an over-contrasted image. In this paper, a new Automatic Dynamic Range Adjustment (ADRA) method is presented that adaptively adjusts the DR value so that input images resemble a reference image. The proposed ADRA method uses the distance ratio between the log average and each extreme value of a reference image. To evaluate the performance of the ADRA method, the similarity between the reference and input images was measured by computing a correlation coefficient (CC). In in vivo experiments, applying the ADRA method increased the CC values from 0.6872 to 0.9870 and from 0.9274 to 0.9939 for kidney and liver data, respectively, compared with the fixed-DR case. In addition, the proposed ADRA method outperformed the histogram matching method on in vivo liver and kidney data. With 3D abdominal data of 70 frames, the CC value from the ADRA method was only slightly increased (i.e., 0.6%), but the proposed method showed improved image quality in the c-plane compared with its fixed counterpart, which suffered from a shadow artifact. These results indicate that the proposed method can enhance image quality in 2D and 3D ultrasound B-mode imaging by improving the similarity between the reference and input images while eliminating unnecessary manual interaction by the user.
Copyright © 2014 Elsevier B.V. All rights reserved.
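The evaluation metric used throughout the study above is a correlation coefficient between reference and input images. As a minimal standard-library sketch of that metric (not of the ADRA adjustment itself, whose distance-ratio details the abstract does not give), a Pearson CC over flattened pixel values:

```python
from math import sqrt

def pearson_cc(x, y):
    """Pearson correlation coefficient between two equal-length pixel lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical flattened pixel values of a reference and an adjusted image
reference = [10, 20, 30, 40, 50]
adjusted = [12, 19, 33, 41, 48]
print(round(pearson_cc(reference, adjusted), 3))  # 0.992
```

A CC close to 1 indicates the adjusted image closely tracks the reference, which is exactly how the abstract's 0.9870 and 0.9939 figures should be read.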
Flip-avoiding interpolating surface registration for skull reconstruction.
Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye
2018-03-30
Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.
Ethnic identity salience improves recognition memory in Tibetan students via priming.
Li, Hongxia; Wang, Echo Xue; Jin, Shenghua; Wu, Song
2016-04-01
Social identity salience affects the group-reference effect in memory. However, few studies have examined the influence of ethnic identity salience on the group-reference effect among minority-group people in conditions where the minority group dominates. In the present research, we aimed to investigate, in a Tibetan-dominant context, whether making ethnic identity salient among Tibetan students influences their group-reference effect via a priming method. We recruited 50 Tibetan and 62 Han Chinese students from Tibetan University in Lhasa, the capital of the Tibet Autonomous Region, where Tibetans are the majority. A month before the experiment, we tested the baseline ethnic identity salience of both Tibetan and Han Chinese students using the Twenty Statements Test. In the formal experiment, we first assessed the effectiveness of the priming method and then conducted a recognition memory test 2 weeks later using the priming approach. The results showed that the ethnic identity of both Tibetan and Han Chinese participants was not salient in the baseline assessment. However, it was successfully induced via priming among Tibetan students. Tibetan students showed a significant group-reference effect in the recognition memory task when their ethnic identity was induced via priming. In contrast, Han Chinese students did not show increased ethnic awareness or superior ethnic in-group reference memory after being primed. The current research provides new evidence for the influence of ethnic identity salience on the group-reference effect, contributing to the application and extension of social identity theory among minority-group people. (c) 2016 APA, all rights reserved.
Effect of genotyped cows in the reference population on the genomic evaluation of Holstein cattle.
Uemoto, Y; Osawa, T; Saburi, J
2017-03-01
This study evaluated how the reliability and prediction bias of genomic evaluation depend on the prediction method, the contribution of the animals included (bulls or cows), and genetic relatedness, when genotyped cows are added to a progeny-tested bull reference population. We performed genomic evaluation using a Japanese Holstein population and assessed the accuracy of the genomic enhanced breeding value (GEBV) for three production traits and 13 linear conformation traits. A total of 4564 animals for production traits and 4172 animals for conformation traits were genotyped using the Illumina BovineSNP50 array. Single- and multi-step methods were compared for predicting GEBV in genotyped bull-only and genotyped bull-cow reference populations. No large differences in realized reliability and regression coefficient were found between the two reference populations; however, a slight difference was found between the two methods for production traits. The accuracy of GEBV determined by the single-step method increased slightly when genotyped cows were included in the bull reference population, but decreased slightly with the multi-step method. A validation study was used to evaluate the accuracy of GEBV when 800 additional genotyped bulls (POPbull) or cows (POPcow) were included in the base reference population composed of 2000 genotyped bulls. The realized reliabilities of POPbull were higher than those of POPcow for all traits. For the gain in realized reliability over the base reference population, the average ratios of POPbull gain to POPcow gain for production traits and conformation traits were 2.6 and 7.2, respectively, and the ratios depended on the heritabilities of the traits. For the regression coefficient, no large differences were found between the results for POPbull and POPcow. Another validation study was performed to investigate the effect of genetic relatedness between cows and bulls in the reference and test populations.
The effect of genetic relationship among bulls in the reference population was also assessed. The results showed that it is important to account for relatedness among bulls in the reference population. Our studies indicate that the prediction method, the contribution ratio of including animals, and genetic relatedness could affect the prediction accuracy in genomic evaluation of Holstein cattle, when including genotyped cows in the reference population.
Perich, C; Ricós, C; Alvarez, V; Biosca, C; Boned, B; Cava, F; Doménech, M V; Fernández-Calle, P; Fernández-Fernández, P; García-Lario, J V; Minchinela, J; Simón, M; Jansen, R
2014-05-15
Current external quality assurance schemes have been classified into six categories, according to their ability to verify the degree of standardization of the participating measurement procedures. SKML (Netherlands) is a Category 1 EQA scheme (commutable EQA materials with values assigned by reference methods), whereas SEQC (Spain) is a Category 5 scheme (replicate analyses of non-commutable materials with no values assigned by reference methods). The results obtained by a group of Spanish laboratories participating in a pilot study organized by SKML are examined, with the aim of pointing out the improvements over our current scheme that a Category 1 program could provide. Imprecision and bias are calculated for each analyte and laboratory, and compared with quality specifications derived from biological variation. Of the 26 analytes studied, 9 had results comparable with those from reference methods, and 10 analytes did not have comparable results. The remaining 7 analytes measured did not have available reference method values, and in these cases, comparison with the peer group showed comparable results. The reasons for disagreement in the second group can be summarized as: use of non-standard methods (IFCC without exogenous pyridoxal phosphate for AST and ALT, Jaffé kinetic at low-normal creatinine concentrations and with eGFR); non-commutability of the reference material used to assign values to the routine calibrator (calcium, magnesium and sodium); use of reference materials without established commutability instead of reference methods for AST and GGT, and lack of a systematic effort by manufacturers to harmonize results. Results obtained in this work demonstrate the important role of external quality assurance programs using commutable materials with values assigned by reference methods to correctly monitor the standardization of laboratory tests with consequent minimization of risk to patients. Copyright © 2013 Elsevier B.V. All rights reserved.
Kim, Min-A; Sim, Hye-Min; Lee, Hye-Seong
2016-11-01
As reformulations and processing changes are increasingly needed in the food industry to produce healthier, more sustainable, and cost-effective products while maintaining superior quality, reliable measurements of consumers' sensory perception and discrimination are becoming more critical. Consumer discrimination methods using a preferred-reference duo-trio test design have been shown to improve discrimination performance by customizing sample presentation sequences. However, this design can add complexity to the discrimination task for some consumers, resulting in more errors in sensory discrimination. The objective of the present study was to investigate the effects of different types of test instructions in the preferred-reference duo-trio test design, where a paired-preference test is followed by 6 repeated preferred-reference duo-trio tests, in comparison to the analytical method using the balanced-reference duo-trio. Analyses of d' estimates (a product-related measure) and of probabilistic sensory discriminators in momentary numbers of subjects showing statistical significance (a subject-related measure) revealed that only the preferred-reference duo-trio test using affective reference framing, either providing no information about the reference or identifying it as a previously preferred sample, improved sensory discrimination more than the analytical method. No decrease in discrimination performance was observed with any type of instruction, confirming that consumers could handle the test methods. These results suggest that when repeated tests are feasible, using the affective discrimination method would be operationally more efficient as well as ecologically more reliable for measuring consumers' sensory discrimination ability. Copyright © 2016 Elsevier Ltd. All rights reserved.
Rigourd, V; Barnier, J P; Ferroni, A; Nicloux, M; Hachem, T; Magny, J F; Lapillonne, A; Frange, P; Nassif, X; Bille, E
2018-05-03
Three cases of Bacillus cereus infection or colonization occurred in the same region in France, and milk from the milk bank was suspected as a possible common source of contamination. All batches delivered to the three cases complied with the requirements of the bacteriological reference method recommended by good practice guidelines. Still, a retrospective analysis with a more sensitive method showed one batch to contain B. cereus; however, strain comparison revealed no epidemiological link between isolates from the patients and those from the milk. Consequently, in accordance with the precautionary principle, we developed a new sensitive method for screening pasteurized milk for pathogenic bacteria. From January 1 to August 31, 2017, 2526 samples of pasteurized milk were prospectively included in the study. We showed that a 20 mL sample of pasteurized milk incubated for 18 h at 37 °C under aerobic conditions favored the detection of B. cereus. The nonconformity rate was 6.3% for the reference method and 12.6% for the improved method (p < 0.0001). Nonconformity was due to the presence of B. cereus in 88.5% of cases for the improved method and 53% of cases for the reference method (p < 0.0001). Thus our new method improves the microbiological safety of the distributed product while only moderately increasing the rate of bacteriological nonconformity.
2009-10-01
… the acquisition process of military equipment is the interpretation of incomparabilities. We show how this was done in the PROMETHEE methods by the "GAIA" plane representation … refer to [11]. In this paper we will concentrate on the well-known PROMETHEE methods.
Trijsburg, Laura; de Vries, Jeanne Hm; Hollman, Peter Ch; Hulshof, Paul Jm; van 't Veer, Pieter; Boshuizen, Hendriek C; Geelen, Anouk
2018-05-08
To compare the performance of the commonly used 24 h recall (24hR) with the more distinctive duplicate portion (DP) as the reference method for validating fatty acid intake estimated with an FFQ. Intakes of SFA, MUFA, n-3 fatty acids and linoleic acid (LA) were estimated by chemical analysis of two DPs and by, on average, five 24hRs and two FFQs. Plasma n-3 fatty acids and LA were used to objectively compare the ranking of individuals based on the DP and the 24hR. Multivariate measurement error models were used to estimate validity coefficients and attenuation factors for the FFQ with the DP and the 24hR as reference methods. Wageningen, the Netherlands. Ninety-two men and 106 women (aged 20-70 years). Validity coefficients for the fatty acid estimates by the FFQ tended to be lower when using the DP as the reference method compared with the 24hR. Attenuation factors for the FFQ tended to be slightly higher based on the DP than based on the 24hR as the reference method. Furthermore, when using plasma fatty acids as the reference, the DP showed comparable to slightly better ranking of participants according to their intake of n-3 fatty acids (0·33) and n-3:LA (0·34) than the 24hR (0·22 and 0·24, respectively). The 24hR gives only slightly different results compared with the distinctive but less feasible DP; therefore, use of the 24hR seems appropriate as the reference method for FFQ validation of fatty acid intake.
NASA Astrophysics Data System (ADS)
Zhang, Yufeng; Long, Man; Luo, Sida; Bao, Yu; Shen, Hanxia
2015-12-01
The transit route choice model is a key technology for public transit system planning and management. Traditional route choice models are mostly based on expected utility theory, which has an evident shortcoming: it cannot accurately portray travelers' subjective route choice behavior because their risk preferences are not taken into consideration. Cumulative prospect theory (CPT) can be used to describe travelers' decision-making process under uncertainty of transit supply and the risk preferences of multiple types of travelers. The method used to calibrate the reference point, a key parameter of a CPT-based transit route choice model, determines the precision of the model to a great extent. In this paper, a new method is put forward to obtain the value of the reference point, combining theoretical calculation with field investigation results. Comparing the proposed method with the traditional method shows that the new method improves the quality of the CPT-based model by more accurately simulating travelers' route choice behavior, based on a transit trip investigation from Nanjing City, China. The proposed method is of great significance for sound transit planning and management, and to some extent remedies the defect that obtaining the reference point has so far been based solely on qualitative analysis.
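The paper's calibration formulas are not given in the abstract, but the CPT machinery it builds on can be sketched: outcomes are valued relative to a reference point, with losses weighted more heavily than gains. The exponents and loss-aversion coefficient below are Tversky and Kahneman's classic estimates, and the reference trip time is a hypothetical placeholder:

```python
def cpt_value(outcome, reference, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of an outcome relative to a reference point.
    Gains are valued concavely; losses convexly and scaled by the
    loss-aversion coefficient lam. For travel time, a trip shorter than
    the reference is a gain."""
    gain = reference - outcome  # minutes saved vs. the reference trip time
    if gain >= 0:
        return gain ** alpha
    return -lam * (-gain) ** beta

t_ref = 30.0  # hypothetical reference trip time (minutes)
print(cpt_value(25.0, t_ref) > 0)  # 5 min faster than reference: a gain
print(cpt_value(35.0, t_ref) < 0)  # 5 min slower: a loss, amplified by lam
```

The asymmetry this produces (a 5-minute delay hurts more than a 5-minute saving helps) is exactly why the choice of reference point dominates the model's predictions, which is the calibration problem the paper addresses.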
Passive field reflectance measurements
NASA Astrophysics Data System (ADS)
Weber, Christian; Schinca, Daniel C.; Tocho, Jorge O.; Videla, Fabian
2008-10-01
The results of reflectance measurements performed with a three-band passive radiometer with independent channels for solar irradiance reference are presented. Comparative operation between the traditional method, which uses downward-looking field and reference white panel measurements, and the new approach, involving duplicated downward- and upward-looking spectral channels (each of the latter with its own diffuser), is analyzed. The results indicate that the latter method agrees very well with the standard method and is more suitable for passive sensors under rapidly changing atmospheric conditions (such as clouds, dust, mist, smog and other scatterers), since a more reliable synchronous recording of reference and incident light is achieved. Besides, having separate channels for the reference and the signal allows better balancing of amplifier gains for each spectral channel. We show the results obtained in determining the normalized difference vegetation index (NDVI) for the 2004-2007 field experiments concerning weed detection in soybean stubble and fertilizer level assessment in wheat. The method may be used to refine sensor-based nitrogen fertilizer rate recommendations and to determine suitable zones for herbicide application.
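For reference, the NDVI reported in these experiments is the standard normalized band ratio of near-infrared and red reflectances (the reflectance values below are illustrative, not the paper's data):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

print(round(ndvi(0.45, 0.05), 2))  # dense green vegetation: high NDVI
print(round(ndvi(0.30, 0.25), 2))  # bare soil / stubble: NDVI near zero
```

Because it is a ratio, NDVI partially cancels illumination changes, but the cancellation is exact only if reference and incident light are recorded synchronously, which is the advantage the dual-channel design claims.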
Kawai, Y; Nagai, Y; Ogawa, E; Kondo, H
2017-04-01
To provide target values for the manufacturers' survey of the Japanese Society for Laboratory Hematology (JSLH), accurate standard data from healthy volunteers were needed for the five-part differential leukocyte count. To obtain such data, JSLH required an antibody panel that achieved high specificity (particularly for mononuclear cells) using simple gating procedures. We developed a flow cytometric method for determining the differential leukocyte count (JSLH-Diff) and validated it by comparison with the flow cytometric differential leukocyte count of the International Council for Standardization in Haematology (ICSH-Diff) and the manual differential count obtained by microscopy (Manual-Diff). First, the reference laboratory performed an imprecision study of JSLH-Diff and ICSH-Diff, as well as comparisons among JSLH-Diff, Manual-Diff, and ICSH-Diff. Then two reference laboratories and seven participating laboratories performed imprecision and accuracy studies of JSLH-Diff, Manual-Diff, and ICSH-Diff. Simultaneously, six manufacturers' laboratories provided their own representative values using automated hematology analyzers. The precision of both the JSLH-Diff and ICSH-Diff methods was adequate. Comparison by the reference laboratory showed that all correlation coefficients, slopes and intercepts obtained by the JSLH-Diff, ICSH-Diff, and Manual-Diff methods conformed to the criteria. When the imprecision and accuracy of JSLH-Diff were assessed at seven laboratories, the CV% for lymphocytes, neutrophils, monocytes, eosinophils, and basophils was 0.5~0.9%, 0.3~0.7%, 1.7~2.6%, 3.0~7.9%, and 3.8~10.4%, respectively. More than 99% of CD45-positive leukocytes were identified as normal leukocytes by JSLH-Diff. When the JSLH-Diff method was validated by comparison with Manual-Diff and ICSH-Diff, it showed good performance as a reference method. © 2016 John Wiley & Sons Ltd.
Reassessment of the Access Testosterone chemiluminescence assay and comparison with LC-MS method.
Dittadi, Ruggero; Matteucci, Mara; Meneghetti, Elisa; Ndreu, Rudina
2018-03-01
To reassess the imprecision and limit of quantitation, to evaluate the cross-reaction with dehydroepiandrosterone sulfate (DHEAS), the accuracy against liquid chromatography-mass spectrometry (LC-MS) and the reference interval of the Access Testosterone method, performed on the DxI immunoassay platform (Beckman Coulter). Imprecision was evaluated by testing six pool samples assayed in 20 different runs using two reagent lots. The cross-reaction with DHEAS was studied both by a displacement curve and by spiking DHEAS standard into two serum samples with known amounts of testosterone. The comparison with LC-MS was evaluated by Passing-Bablok analysis in 21 routine serum samples and 19 control samples from an External Quality Assurance (EQA) scheme. The reference interval was verified by an indirect estimation on 2445 male and 2838 female outpatients. The imprecision study showed a coefficient of variation (CV) between 2.7% and 34.7% for serum pools ranging from 16.3 down to 0.27 nmol/L. The limit of quantitation at 20% CV was 0.53 nmol/L. DHEAS showed a cross-reaction of 0.0074%. The comparison with LC-MS showed a trend toward a slight underestimation by the immunoassay vs LC-MS (Passing-Bablok equations: DxI=-0.24+0.906 LCMS in serum samples and DxI=-0.299+0.981 LCMS in EQA samples). The verification of the reference interval showed a 2.5th-97.5th percentile distribution of 6.6-24.3 nmol/L for males over 14 years and <0.5-2.78 nmol/L for females, in accordance with the reference intervals reported by the manufacturer. The Access Testosterone method can be considered an adequately reliable tool for testosterone measurement. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying
2010-04-01
In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. In practice, however, a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors in the reconstructed object wavefront. Usually the least-squares method with iterations, which is time-consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple and effective algorithm is suggested to detect and then correct this kind of error. In this method only simple mathematical operations are used, avoiding the least-squares equations needed in most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames, for both smooth and diffusing objects, and its excellent performance has been verified by computer simulations. The numerical simulations show that the wave reconstruction errors can be reduced by two orders of magnitude.
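The abstract does not disclose the paper's formulas. As a hedged illustration of the general idea that a linear phase tilt can be recovered with simple arithmetic rather than iterative least squares, the average phase increment between adjacent samples of a complex field yields the tilt slope directly (this is a standard estimator, not necessarily the paper's algorithm):

```python
import cmath

def tilt_slope(field):
    """Estimate the linear phase slope (radians per sample) of a 1-D
    complex field: sum products of adjacent samples, then take the
    argument of the sum, which is the average phase increment."""
    acc = sum(b * a.conjugate() for a, b in zip(field, field[1:]))
    return cmath.phase(acc)

slope = 0.3  # radians per sample, simulating a tilted reference wave
tilted = [cmath.exp(1j * slope * n) for n in range(50)]
print(round(tilt_slope(tilted), 6))  # recovers the 0.3 rad/sample tilt
```

Once the slope along each axis is known, multiplying the reconstructed field by the conjugate phase ramp removes the tilt, which is the spirit of the non-iterative correction described above.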
Li, Ya; Wang, Yongchun; Zhang, Baoqiang; Wang, Yonghui; Zhou, Xiaolin
2018-01-01
Dynamically evaluating the outcomes of our actions and thoughts is a fundamental cognitive ability. Given its excellent temporal resolution, the event-related potential (ERP) technique has been used to address this issue. The feedback-related negativity (FRN) component of ERPs has been studied intensively with the averaged linked-mastoid reference method (LM). However, it is unknown whether FRN can be induced by an expectancy violation in an antonym relations context and whether LM is the most suitable reference approach. To address these issues, the current research directly compared the ERP components induced by expectancy violations in antonym expectation and gambling tasks with a within-subjects design and investigated the effect of the reference approach on the experimental effects. Specifically, we systematically compared the influence of the LM, reference electrode standardization technique (REST) and average reference (AVE) approaches on the amplitude, scalp distribution and magnitude of ERP effects as a function of expectancy violation type. The expectancy violation in the antonym expectation task elicited an N400 effect that differed from the FRN effect induced in the gambling task; this difference was confirmed by all three reference methods. Both the amplitudes of the ERP effects (N400 and FRN) and their increase with the degree of expectancy violation were greater under the LM approach than under the REST approach, followed by the AVE approach. Based on the statistical results, the electrode sites that showed the N400 and FRN effects depended critically on the reference method, and the results of the REST analysis were consistent with previous ERP studies. Combined with evidence from simulation studies, we suggest REST as the reference method of choice in future ERP data analysis. PMID:29615858
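Of the three reference schemes compared above, LM and AVE are simple linear re-referencings of the recorded channels; REST additionally requires a head model, so it is omitted here. A toy sketch with hypothetical channel names and voltages:

```python
def rereference_lm(data, m1, m2):
    """Linked-mastoid (LM) reference: subtract the mean of the two
    mastoid channels from every channel. data maps channel -> microvolts."""
    ref = (data[m1] + data[m2]) / 2.0
    return {ch: v - ref for ch, v in data.items()}

def rereference_avg(data):
    """Average reference (AVE): subtract the mean of all channels."""
    ref = sum(data.values()) / len(data)
    return {ch: v - ref for ch, v in data.items()}

sample = {"Fz": 4.0, "Cz": 6.0, "Pz": 2.0, "M1": 1.0, "M2": 3.0}
print(rereference_lm(sample, "M1", "M2")["Cz"])  # 6.0 - 2.0 = 4.0
print(round(rereference_avg(sample)["Cz"], 1))   # 6.0 - 3.2 = 2.8
```

Because each scheme subtracts a different signal, the same epoch yields different effect amplitudes and scalp distributions, which is why the FRN/N400 magnitudes above vary with the reference choice.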
2012-01-01
Background Haemophilus parasuis is the causative agent of Glässer's disease and is a pathogen of swine in high-health-status herds. Reports on serotyping of field strains from outbreaks describe that approximately 30% of them are nontypeable and therefore cannot be traced. Molecular typing methods have been used as alternatives to serotyping. This study was done to compare random amplified polymorphic DNA (RAPD) profiles and whole-cell protein (WCP) lysate profiles as methods for distinguishing H. parasuis reference strains and field isolates. Results The DNA and WCP lysate profiles of 15 reference strains and 31 field isolates of H. parasuis were analyzed using the Dice and neighbor-joining algorithms. The results revealed unique and reproducible DNA and protein profiles among the reference strains and field isolates studied. Simpson's index of diversity showed significant discrimination between isolates when three 10-mer primers were combined for the RAPD method and also when both the RAPD and WCP lysate typing methods were combined. Conclusions The RAPD profiles seen among the reference strains and field isolates did not appear to change over time, which may reflect a lack of DNA mutations in the genes of the samples. The recent field isolates had different WCP lysate profiles than the reference strains, possibly because the number of passages of the type strains may affect their protein expression. PMID:22703293
Study on the calibration and optimization of double theodolites baseline
NASA Astrophysics Data System (ADS)
Ma, Jing-yi; Ni, Jin-ping; Wu, Zhi-chao
2018-01-01
The baseline of a double-theodolite measurement system serves as the benchmark of the system's scale and affects its accuracy; this paper therefore puts forward a method for calibrating and optimizing the double-theodolite baseline. The two theodolites measure a reference ruler of known length, and the baseline formula is then derived by inversion. Based on the law of error propagation, the analyses show that the baseline error function is an important index of the system's accuracy, and that the position, posture and other properties of the reference ruler affect the baseline error. An optimization model is established with the baseline error function as the objective function, and the position and posture of the reference ruler are optimized. The simulation results show that the height of the reference ruler has no effect on the baseline error, that the effect of posture is not uniform, and that the baseline error is smallest when the reference ruler is placed at x=500 mm and y=1000 mm in the measurement space. The experimental results are consistent with the theoretical analyses in the measurement space. The study of reference ruler placement in this paper thus provides a reference for improving the accuracy of double-theodolite measurement systems.
A fast and automatic mosaic method for high-resolution satellite images
NASA Astrophysics Data System (ADS)
Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing
2015-12-01
We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted from the overlapped region of both images by the scale-invariant feature transform (SIFT) algorithm. Then, the RANSAC method is used to match the feature points of the two images. Finally, the two images are fused into a seamless panoramic image by the simple linear weighted fusion method or another method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on Worldview-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
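The matching step can be sketched generically: RANSAC repeatedly fits a model to a random minimal sample of correspondences and keeps the model supported by the most inliers. The sketch below uses a translation-only model in pure Python, whereas the paper works with full SIFT matches through OpenCV:

```python
import random

def ransac_translation(matches, tol=2.0, iters=200, seed=0):
    """matches: list of ((x1, y1), (x2, y2)) point correspondences.
    Returns the translation (dx, dy) supported by the most inliers."""
    rng = random.Random(seed)
    best, best_inliers = (0.0, 0.0), -1
    for _ in range(iters):
        # minimal sample for a translation model: one correspondence
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        inliers = sum(
            1 for (a, b), (c, d) in matches
            if abs((c - a) - dx) < tol and abs((d - b) - dy) < tol
        )
        if inliers > best_inliers:
            best, best_inliers = (dx, dy), inliers
    return best

# three correct matches with shift (10, 5) plus one gross outlier
pts = [((0, 0), (10, 5)), ((3, 4), (13, 9)), ((7, 1), (17, 6)), ((2, 2), (40, 40))]
print(ransac_translation(pts))  # (10, 5)
```

The outlier cannot gather more inliers than the true shift, so it is rejected; with full homographies the minimal sample is four correspondences instead of one, but the loop is the same.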
Weykamp, C W; Penders, T J; Miedema, K; Muskiet, F A; van der Slik, W
1995-01-01
We investigated the effect of calibration with lyophilized calibrators on whole-blood glycohemoglobin (glyHb) results. One hundred three laboratories, using 20 different methods, determined glyHb in two lyophilized calibrators and two whole-blood samples. For whole-blood samples with low (5%) and high (9%) glyHb percentages, respectively, calibration decreased overall interlaboratory variation (CV) from 16% to 9% and from 11% to 6% and decreased intermethod variation from 14% to 6% and from 12% to 5%. Forty-seven laboratories, using 14 different methods, determined mean glyHb percentages in self-selected groups of 10 nondiabetic volunteers each. With calibration their overall mean (2SD) was 5.0% (0.5%), very close to the 5.0% (0.3%) derived from the reference method used in the Diabetes Control and Complications Trial. In both experiments the Abbott IMx and Vision showed deviating results. We conclude that, irrespective of the analytical method used, calibration enables standardization of glyHb results, reference values, and interpretation criteria.
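The calibration step in the study above can be illustrated with a simple two-point linear recalibration: the measured values of the low and high calibrators are mapped onto their assigned values, and the resulting line corrects all routine results. The numbers here are illustrative, not taken from the study:

```python
def two_point_calibration(measured_lo, measured_hi, assigned_lo, assigned_hi):
    """Return a function that maps raw results onto the calibrated scale
    defined by two calibrators with assigned values."""
    slope = (assigned_hi - assigned_lo) / (measured_hi - measured_lo)
    intercept = assigned_lo - slope * measured_lo
    return lambda raw: slope * raw + intercept

# a hypothetical method reading ~10% high on both lyophilized calibrators
correct = two_point_calibration(5.5, 9.9, 5.0, 9.0)
print(round(correct(6.05), 2))  # raw 6.05% glyHb -> calibrated 5.5%
```

Applying one such line per laboratory removes the method-specific offset and scale, which is why calibration cut the interlaboratory CV roughly in half in the study above.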
Diagnostic Performance of a Molecular Test versus Clinician Assessment of Vaginitis
Gaydos, Charlotte A.; Nyirjesy, Paul; Paradis, Sonia; Kodsi, Salma; Cooper, Charles K.
2018-01-01
Vaginitis is a common complaint, diagnosed either empirically or using Amsel's criteria and wet mount microscopy. This study sought to determine the characteristics of an investigational molecular test for vaginitis, compared to reference methods, for detection of bacterial vaginosis, Candida spp., and Trichomonas vaginalis. Vaginal specimens from a cross-sectional study were obtained from 1,740 women (≥18 years old) with vaginitis symptoms during routine clinic visits (across 10 sites in the United States). Specimens were analyzed using a commercial PCR/fluorogenic probe-based investigational test that detects bacterial vaginosis, Candida spp., and Trichomonas vaginalis. Clinician diagnosis and in-clinic testing (Amsel's test, potassium hydroxide preparation, and wet mount) were also employed to detect the three causes of vaginitis. All testing methods were compared to the respective reference methods (Nugent Gram stain for bacterial vaginosis, detection of the Candida ITS2 gene, and Trichomonas vaginalis culture). The investigational test resulted in significantly higher sensitivity and negative predictive value than clinician diagnosis or in-clinic testing. In addition, the investigational test showed a statistically higher overall percent agreement with each of the three reference methods than did clinician diagnosis or in-clinic testing. The investigational test also showed significantly higher sensitivity than clinician diagnosis for detecting vaginitis involving more than one cause. Taken together, these results suggest that a molecular investigational test can facilitate accurate detection of vaginitis. PMID:29643195
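Sensitivity and negative predictive value, the headline comparisons above, are simple ratios over a 2x2 table of test results versus the reference method. The counts below are hypothetical, not the study's data:

```python
def sensitivity(tp, fn):
    """Fraction of reference-positive specimens the test detects."""
    return tp / (tp + fn)

def npv(tn, fn):
    """Fraction of test-negative results that are truly negative."""
    return tn / (tn + fn)

# hypothetical counts against the Nugent reference for bacterial vaginosis
tp, fp, tn, fn = 470, 60, 1180, 30
print(round(sensitivity(tp, fn), 3))  # 0.94
print(round(npv(tn, fn), 3))          # 0.975
```

A high NPV is what makes a negative molecular result clinically useful: it means a symptomatic patient with a negative test is unlikely to have the condition by the reference standard.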
Production and Comprehension of Time Reference in Korean Nonfluent Aphasia
Lee, Jiyeon; Kwon, Miseon; Na, Hae Ri; Bastiaanse, Roelien; Thompson, Cynthia K.
2015-01-01
Objectives Individuals with nonfluent agrammatic aphasia show impaired production and comprehension of time reference via verbal morphology. However, cross-linguistic findings to date provide inconsistent evidence as to whether tense processing in general is impaired or whether time reference to the past is selectively difficult in this population. This study examined production and comprehension of time reference via verb morphology in Korean-speaking individuals with nonfluent aphasia. Methods A group of 9 healthy controls and 8 individuals with nonfluent aphasia (5 for the production task) participated in the study. Sentence-priming production and auditory sentence-to-picture matching tasks were used, paralleling previous cross-linguistic experiments in English, Chinese, Turkish, and other languages. Results The participants with nonfluent aphasia showed different patterns of impairment in production and comprehension. In production, they were impaired in all time references, with errors dominated by substitution of incorrect time references and other morpho-phonologically well-formed errors, indicating a largely intact morphological affixation process. In comprehension, they showed selective impairment of the past, consistent with cross-linguistic evidence from English, Chinese, Turkish, and other languages. Conclusion The findings suggest that interpretation of past time reference poses particular difficulty in nonfluent aphasia irrespective of the typological characteristics of languages; however, in production, language-specific morpho-semantic functions of verbal morphology may play a significant role in selective breakdowns of time reference. PMID:26290861
Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images
NASA Astrophysics Data System (ADS)
Kamble, V. M.; Bhurchandi, K.
2018-03-01
Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of wavelet transform coefficients and then obtain a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of +/-4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield both the noise estimate and an image quality score. Images from the Laboratory for Image and Video Processing (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian-noise-corrupted images.
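The initial wavelet-median estimate described above is commonly implemented as the robust median estimator applied to the diagonal (HH) subband of a one-level Haar transform; the sketch below shows that first stage only (the paper's curve-fitting refinement and DMOS mapping are not reproduced):

```python
import numpy as np

def estimate_noise_sigma(img):
    """Robust median estimator on the diagonal (HH) Haar subband:
    sigma ~= median(|HH|) / 0.6745 for zero-mean Gaussian noise.
    Assumes even image dimensions."""
    img = np.asarray(img, dtype=float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    hh = (a - b - c + d) / 2.0          # diagonal detail coefficients
    return float(np.median(np.abs(hh)) / 0.6745)

# Sanity check on pure synthetic Gaussian noise of known sigma
noisy = np.random.default_rng(0).normal(0.0, 10.0, (256, 256))
sigma_hat = estimate_noise_sigma(noisy)
```

Because image content mainly affects the mean of the HH coefficients' magnitude rather than the median, this estimator stays reasonably robust on natural images, which is why it serves well as an initial estimate.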
NASA Astrophysics Data System (ADS)
Rafhay, Quentin; Beug, M. Florian; Duane, Russell
2007-04-01
This paper presents an experimental comparison of dummy cell extraction methods of the gate capacitance coupling coefficient for floating gate non-volatile memory structures of different geometries and technologies. The results show the significant influence of mismatch between floating gate devices and reference transistors on the extraction of the gate capacitance coupling coefficient. In addition, the comparison demonstrates the accuracy of the new bulk bias dummy cell extraction method and the importance of the β function, introduced recently in [Duane R, Beug F, Mathewson A. Novel capacitance coupling coefficient measurement methodology for floating gate non-volatile memory devices. IEEE Electr Dev Lett 2005;26(7):507-9], for determining matching pairs of floating gate memory and reference transistors.
Validation of powder X-ray diffraction following EN ISO/IEC 17025.
Eckardt, Regina; Krupicka, Erik; Hofmeister, Wolfgang
2012-05-01
Powder X-ray diffraction (PXRD) is used widely in forensic science laboratories, with the main focus on qualitative phase identification. Little is found in the literature on the validation of PXRD in the field of forensic sciences. According to EN ISO/IEC 17025, the method has to be tested for several parameters. Trueness, specificity, and selectivity of PXRD were tested using certified reference materials or a combination thereof. All three tested parameters confirmed the reliable performance of the method. Sample preparation errors were simulated to evaluate the robustness of the method. These errors were either easily detected by the operator or nonsignificant for phase identification. In the case of the detection limit, a statistical evaluation of the signal-to-noise ratio showed that a peak criterion of three sigma is inadequate, and recommendations for a more realistic peak criterion are given. Finally, the results of an international proficiency test confirmed the reliable performance of PXRD. © 2012 American Academy of Forensic Sciences.
Sports Training Support Method by Self-Coaching with Humanoid Robot
NASA Astrophysics Data System (ADS)
Toyama, S.; Ikeda, F.; Yasaka, T.
2016-09-01
This paper proposes a new training support method called self-coaching with humanoid robots. In the proposed method, two small, inexpensive humanoid robots are used because of their ready availability. One robot, called the target robot, reproduces the motion of a target player, and the other, called the reference robot, reproduces the motion of an expert player. The target player can recognize the target technique from the reference robot and his/her inadequate skill from the target robot. By modifying the motion of the target robot as a form of self-coaching, the target player can gain deeper insight into the technique. Experimental results show the potential of the new training method; improving the self-coaching interface program remains future work.
Krimmel, R.M.
1999-01-01
Net mass balance has been measured since 1958 at South Cascade Glacier using the 'direct method,' i.e., area averages of snow gain and firn and ice loss at stakes. Analysis of cartographic vertical photography has allowed measurement of mass balance using the 'geodetic method' in 1970, 1975, 1977, 1979-80, and 1985-97. Water equivalent change as measured by these nearly independent methods should give similar results. During 1970-97, the direct method shows a cumulative balance of about -15 m, and the geodetic method shows a cumulative balance of about -22 m. The deviation between the two methods is fairly consistent, suggesting no gross errors in either, but rather a cumulative systematic error. It is suspected that the cumulative error is in the direct method, because the geodetic method is based on a non-changing reference, the bedrock control, whereas the direct method is measured with reference to only the previous year's summer surface. Possible sources of mass loss that are missing from the direct method are basal melt, internal melt, and ablation on crevasse walls. Possible systematic measurement errors include underestimation of the density of lost material, sinking stakes, or poorly represented areas.
In vitro testing of Nd:YAG laser processed calcium phosphate coatings.
De Carlos, A; Lusquiños, F; Pou, J; León, B; Pérez-Amor, M; Driessens, F C M; Hing, K; Best, S; Bonfield, W
2006-11-01
Nd:YAG laser cladding is a new method for deposition of a calcium phosphate onto metallic surfaces of interest in implantology. The aim of this study was to compare the biologic response of MG-63 human osteoblast-like cells grown on Ti-6Al-4V substrates coated with a calcium phosphate layer applied using different methods: plasma spraying as reference material and Nd:YAG laser cladding as test material. Tissue culture polystyrene was used as negative control. The Nd:YAG laser clad material showed behaviour similar to the reference material, plasma spray, with respect to cell morphology (SEM observations), cell proliferation (AlamarBlue assay) and cytotoxicity of extracts (MTT assay). Proliferation, as measured by the AlamarBlue assay, showed little difference in the metabolic activity of the cells on the materials over an 18 day culture period. There were no significant differences in the cellular growth response on the test material when compared to that exhibited by the reference material. In the solvent extraction test all the extracts had some detrimental effect on cellular activity at 100% concentration, although cells incubated in the test material extract showed a proliferation rate similar to that of the reference material. To better understand the scope of these results it should be taken into account that the Nd:YAG clad coating has only recently been developed. The fact that its in vitro performance is comparable to that of plasma spray, a material commercially available for more than ten years, indicates that this new laser based method could be of commercial interest in the near future.
Collaborative derivation of reference intervals for major clinical laboratory tests in Japan.
Ichihara, Kiyoshi; Yomamoto, Yoshikazu; Hotta, Taeko; Hosogaya, Shigemi; Miyachi, Hayato; Itoh, Yoshihisa; Ishibashi, Midori; Kang, Dongchon
2016-05-01
Three multicentre studies of reference intervals were conducted recently in Japan. The Committee on Common Reference Intervals of the Japan Society of Clinical Chemistry sought to establish common reference intervals for 40 laboratory tests which were measured in common in the three studies and regarded as well harmonized in Japan. The study protocols were comparable, with recruitment mostly of hospital workers with body mass index ≤28 and no medications. Age and sex distributions were made equal to obtain a final data size of 6345 individuals. Between-subgroup differences were expressed as the SD ratio (between-subgroup SD divided by the SD representing the reference interval). Between-study differences were all within acceptable levels, and thus the three datasets were merged. Adopting an SD ratio ≥0.50 as a guide, sex-specific reference intervals were necessary for 12 assays. Age-specific reference intervals for females, partitioned at age 45, were required for five analytes. The reference intervals derived by the parametric method were appreciably narrowed by applying the latent abnormal values exclusion method for 10 items closely associated with disorders prevalent among healthy individuals. Sex- and age-related profiles of reference values, derived from individuals with no abnormal results in major tests, showed patterns specific to each analyte. Common reference intervals for nationwide use were developed for 40 major tests, based on three multicentre studies, by advanced statistical methods. Sex- and age-related profiles of reference values are of great relevance not only for interpreting test results, but also for applying the clinical decision limits specified in various clinical guidelines. © The Author(s) 2015.
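The SD-ratio criterion described above (between-subgroup SD divided by the SD representing the reference interval, with partitioning when SDR ≥ 0.50) can be sketched as follows; the ANOVA-style weighting and the synthetic male/female values are illustrative assumptions, not the study's exact formula or data:

```python
import numpy as np

def sd_ratio(groups):
    """SDR = between-subgroup SD / SD representing the reference
    interval. The ANOVA-style weighting below is a sketch, not
    necessarily the exact formula used in the study."""
    values = np.concatenate(groups)
    n = np.array([len(g) for g in groups])
    means = np.array([np.mean(g) for g in groups])
    sd_between = np.sqrt(np.sum(n * (means - values.mean()) ** 2) / n.sum())
    return sd_between / values.std(ddof=1)

# Hypothetical male/female values of one analyte
rng = np.random.default_rng(1)
males = rng.normal(50.0, 10.0, 300)
females = rng.normal(35.0, 10.0, 300)
sdr = sd_ratio([males, females])
needs_sex_specific_ri = bool(sdr >= 0.50)
```

With a mean difference of 1.5 within-group SDs, the SDR lands near 0.6, above the 0.50 partitioning guide.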
Gong, Zu-Kang; Wang, Shuang-Jie; Huang, Yong-Qi; Zhao, Rui-Qiang; Zhu, Qi-Fang; Lin, Wen-Zhen
2014-12-01
RT-qPCR is a commonly used method for evaluating gene expression; however, its accuracy and reliability depend upon the choice of appropriate reference gene(s), and there is limited information available on suitable reference genes for mouse testis at different developmental stages. In this study, using the RT-qPCR method, we investigated the expression variation of six reference genes representing different functional classes (Actb, Gapdh, Ppia, Tbp, Rps29, Hprt1) in mouse testis during embryonic and postnatal development. The expression stabilities of the putative reference genes were evaluated using five algorithms: geNorm, NormFinder, BestKeeper, the comparative ΔCt method and the integrated tool RefFinder. Analysis of the results showed that Ppia, Gapdh and Actb were the most stable genes, and the geometric mean of Ppia, Gapdh and Actb constitutes an appropriate normalization factor for gene expression studies. The mRNA expression of AT1, as a test gene of interest, varied depending upon which reference gene(s) were used as internal control(s). This study suggests that Ppia, Gapdh and Actb are suitable reference genes among the six genes tested for RT-qPCR normalization and provides crucial information for transcriptional analyses in future studies of gene expression in the developing mouse testis.
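A normalization factor built as the geometric mean of the selected reference genes, as suggested above, can be sketched from Ct values; the amplification efficiency of 2.0 and the Ct values below are illustrative assumptions:

```python
import numpy as np

def normalization_factor(ct_rows, efficiency=2.0):
    """Geometric mean of per-gene relative quantities computed from
    Ct values (rows = reference genes, columns = samples), assuming
    a common amplification efficiency for all genes."""
    ct = np.asarray(ct_rows, dtype=float)
    # Relative quantity per gene: E^(Ct_min_of_gene - Ct_sample)
    rq = efficiency ** (ct.min(axis=1, keepdims=True) - ct)
    return np.exp(np.log(rq).mean(axis=0))   # geometric mean per sample

# Illustrative Ct values: rows Ppia, Gapdh, Actb; three samples
ct_values = [[18.0, 18.5, 19.0],
             [20.0, 20.4, 21.1],
             [17.5, 18.1, 18.4]]
nf = normalization_factor(ct_values)
```

Dividing a target gene's relative quantity by `nf` for the matching sample gives its normalized expression.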
Wohlsen, T; Bates, J; Vesey, G; Robinson, W A; Katouli, M
2006-04-01
To use BioBall cultures as a precise reference standard to evaluate methods for enumeration of Escherichia coli and other coliform bacteria in water samples. Eight methods were evaluated including membrane filtration, standard plate count (pour and spread plate methods), defined substrate technology methods (Colilert and Colisure), the most probable number method and the Petrifilm disposable plate method. Escherichia coli and Enterobacter aerogenes BioBall cultures containing 30 organisms each were used. All tests were performed using 10 replicates. The mean recovery of both bacteria varied with the different methods employed. The best and most consistent results were obtained with Petrifilm and the pour plate method. Other methods either yielded a low recovery or showed significantly higher variability between replicates. The BioBall is thus a very suitable quality control tool for evaluating the efficiency of methods for bacterial enumeration in water samples.
Adaptive nonlinear control for autonomous ground vehicles
NASA Astrophysics Data System (ADS)
Black, William S.
We present the background and motivation for ground vehicle autonomy, with a focus on uses for space exploration. Using a simple design example of an autonomous ground vehicle, we derive the equations of motion. After providing the mathematical background for nonlinear systems and control, we present two common methods for exactly linearizing nonlinear systems, feedback linearization and backstepping. We use these in combination with three adaptive control methods: model reference adaptive control, adaptive sliding mode control, and extremum-seeking model reference adaptive control. We show the performance of each combination through several simulation results. We then consider disturbances in the system, and design nonlinear disturbance observers for both single-input-single-output and multi-input-multi-output systems. Finally, we show the performance of these observers with simulation results.
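As one concrete instance of model reference adaptive control mentioned above, a minimal Lyapunov-rule MRAC for a scalar first-order plant can be sketched as follows; the plant, reference model, and gains are illustrative, not taken from the dissertation:

```python
import numpy as np

def simulate_mrac(a=-1.0, b=2.0, am=4.0, gamma=0.5, dt=1e-3, steps=20000):
    """Lyapunov-rule MRAC for the scalar plant x' = a*x + b*u with
    unknown a, b, tracking the reference model xm' = -am*(xm - r).
    Adaptive laws: th1' = -gamma*e*r, th2' = gamma*e*x, with e = x - xm,
    which make V = e^2/2 + b*(th1_err^2 + th2_err^2)/(2*gamma) decrease."""
    x = xm = 0.0
    th1 = th2 = 0.0              # adaptive feedforward / feedback gains
    abs_err = []
    for k in range(steps):
        r = 1.0 if (k * dt) % 4.0 < 2.0 else -1.0   # square-wave reference
        u = th1 * r - th2 * x
        e = x - xm
        th1 += -gamma * e * r * dt                   # adaptation updates
        th2 += gamma * e * x * dt
        x += (a * x + b * u) * dt                    # Euler integration
        xm += am * (r - xm) * dt
        abs_err.append(abs(e))
    return float(np.mean(abs_err[:2000])), float(np.mean(abs_err[-2000:]))

early_err, late_err = simulate_mrac()
```

The square-wave reference provides the excitation needed for the gains to approach their ideal values th1* = am/b and th2* = (a + am)/b, so the tracking error shrinks over the run.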
2013-01-01
Background The measurement of the Erythrocyte Sedimentation Rate (ESR) value is a standard procedure performed during a typical blood test. In order to formulate a unified standard of establishing reference ESR values, this paper presents a novel prediction model in which local normal ESR values and corresponding geographical factors are used to predict reference ESR values using multi-layer feed-forward artificial neural networks (ANN). Methods and findings Local normal ESR values were obtained from hospital data, while geographical factors that include altitude, sunshine hours, relative humidity, temperature and precipitation were obtained from the National Geographical Data Information Centre in China. The results show that predicted values are statistically in agreement with measured values. Model results exhibit significant agreement between training data and test data. Consequently, the model is used to predict the unseen local reference ESR values. Conclusions Reference ESR values can be established with geographical factors by using artificial intelligence techniques. ANN is an effective method for simulating and predicting reference ESR values because of its ability to model nonlinear and complex relationships. PMID:23497145
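A multi-layer feed-forward network of the kind described above can be sketched from scratch; the synthetic "geographical factors", network size, and training settings below are illustrative assumptions, not the study's data or architecture:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))            # synthetic geographical factors
w_true = np.array([2.0, -1.0, 0.5, 1.5, -0.5])
y = np.tanh(X @ w_true)[:, None]         # synthetic nonlinear ESR target

W1 = rng.normal(scale=0.5, size=(5, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr, losses = 0.05, []
for _ in range(500):
    h = np.tanh(X @ W1 + b1)             # hidden layer (tanh)
    pred = h @ W2 + b2                   # linear output layer
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    g2 = 2.0 * err / len(X)              # dLoss/dpred
    g1 = (g2 @ W2.T) * (1.0 - h ** 2)    # backprop through tanh
    W2 -= lr * (h.T @ g2); b2 -= lr * g2.sum(axis=0)
    W1 -= lr * (X.T @ g1); b1 -= lr * g1.sum(axis=0)
```

The nonlinear hidden layer is what lets the model capture relationships between geographical factors and reference ESR values that a linear regression would miss.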
Ice-Accretion Scaling Using Water-Film Thickness Parameters
NASA Technical Reports Server (NTRS)
Anderson, David N.; Feo, Alejandro
2003-01-01
Studies were performed at INTA in Spain to determine water-film thickness on a stagnation-point probe inserted in a simulated cloud. The measurements were correlated with non-dimensional parameters describing the flow and the cloud conditions. Icing scaling tests in the NASA Glenn Icing Research Tunnel were then conducted using the Ruff scaling method with the scale velocity found by matching scale and reference values of either the INTA non-dimensional water-film thickness or a Weber number based on that film thickness. For comparison, tests were also performed using the constant drop-size Weber number and the average-velocity methods. The reference and scale models were both aluminum, 61-cm-span, NACA 0012 airfoil sections at 0 deg. AOA. The reference had a 53-cm-chord and the scale, 27 cm (1/2 size). Both models were mounted vertically in the center of the IRT test section. Tests covered a freezing fraction range of 0.28 to 1.0. Rime ice (n = 1.0) tests showed the consistency of the IRT calibration over a range of velocities. At a freezing fraction of 0.76, there was no significant difference in the scale ice shapes produced by the different methods. For freezing fractions of 0.40, 0.52 and 0.61, somewhat better agreement with the reference horn angles was typically achieved with the average-velocity and constant-film thickness methods than when either of the two Weber numbers was matched to the reference value. At a freezing fraction of 0.28, the four methods were judged equal in providing simulations of the reference shape.
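Matching a Weber number between reference and scale conditions, as done above for the half-size model, fixes the scale velocity once the other quantities are held constant; the Weber-number form and the numeric values below are illustrative assumptions, not the INTA film-thickness correlation:

```python
import math

def scale_velocity(v_ref, l_ref, l_scale):
    """Velocity keeping We = rho * V**2 * L / sigma constant when the
    length scale changes and rho (water density) and sigma (surface
    tension) are held fixed: V_s = V_r * sqrt(L_r / L_s)."""
    return v_ref * math.sqrt(l_ref / l_scale)

# Half-size model: 53 cm reference chord scaled to 27 cm
# (reference velocity of 67 m/s is a hypothetical value)
v_scale = scale_velocity(v_ref=67.0, l_ref=0.53, l_scale=0.27)
```

A Weber number based on film thickness rather than chord or drop size uses the same matching logic, just with a different length scale in the formula.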
NASA Technical Reports Server (NTRS)
McCorkel, Joel; Thome, Kurtis; Lockwood, Ronald
2012-01-01
An inter-calibration method is developed to provide absolute radiometric calibration of narrow-swath imaging sensors with reference to non-coincident wide-swath sensors. The method predicts at-sensor radiance using non-coincident imagery from the reference sensor and knowledge of the spectral reflectance of the test site. The imagery of the reference sensor is restricted to acquisitions that provide similar view and solar illumination geometry to reduce uncertainties due to directional reflectance effects. Spectral reflectance of the test site is found with a simple iterative radiative transfer method using radiance values of a well-understood wide-swath sensor and spectral shape information based on historical ground-based measurements. At-sensor radiance is calculated for the narrow-swath sensor using this spectral reflectance and atmospheric parameters that are also based on historical in situ measurements. Results of the inter-calibration method show agreement at the 2-5 percent level in most spectral regions with the vicarious calibration technique relying on coincident ground-based measurements, referred to as the reflectance-based approach. While the variability of the inter-calibration method based on non-coincident image pairs is significantly larger, results are consistent with techniques relying on in situ measurements. The method is also insensitive to spectral differences between the sensors because it transfers to surface spectral reflectance prior to prediction of at-sensor radiance. The utility of this inter-calibration method is made clear by its flexibility to utilize image pairings with acquisition dates differing in excess of 30 days, allowing frequent absolute calibration comparisons between wide- and narrow-swath sensors.
Ichihara, Kiyoshi; Ozarda, Yesim; Barth, Julian H; Klee, George; Qiu, Ling; Erasmus, Rajiv; Borai, Anwar; Evgina, Svetlana; Ashavaid, Tester; Khan, Dilshad; Schreier, Laura; Rolle, Reynan; Shimizu, Yoshihisa; Kimura, Shogo; Kawano, Reo; Armbruster, David; Mori, Kazuo; Yadav, Binod K
2017-04-01
The IFCC Committee on Reference Intervals and Decision Limits coordinated a global multicenter study on reference values (RVs) to explore rational and harmonizable procedures for derivation of reference intervals (RIs) and investigate the feasibility of sharing RIs through evaluation of sources of variation of RVs on a global scale. For the common protocol, rather lenient criteria for reference individuals were adopted to facilitate harmonized recruitment with planned use of the latent abnormal values exclusion (LAVE) method. As of July 2015, 12 countries had completed their study with total recruitment of 13,386 healthy adults. 25 analytes were measured chemically and 25 immunologically. A serum panel with assigned values was measured by all laboratories. RIs were derived by parametric and nonparametric methods. The effect of LAVE methods is prominent in analytes which reflect nutritional status, inflammation and muscular exertion, indicating that inappropriate results are frequent in any country. The validity of the parametric method was confirmed by the presence of analyte-specific distribution patterns and successful Gaussian transformation using the modified Box-Cox formula in all countries. After successful alignment of RVs based on the panel test results, nearly half the analytes showed variable degrees of between-country differences. This finding, however, requires confirmation after adjusting for BMI and other sources of variation. The results are reported in the second part of this paper. The collaborative study enabled us to evaluate rational methods for deriving RIs and comparing the RVs based on real-world datasets obtained in a harmonized manner. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Accuracy in Dental Medicine, A New Way to Measure Trueness and Precision
Ender, Andreas; Mehl, Albert
2014-01-01
Reference scanners are used in dental medicine to verify a wide range of procedures. The main interest is in verifying impression methods, as they serve as a base for dental restorations. The current limitation of many reference scanners is the lack of accuracy when scanning large objects such as full dental arches, or the limited ability to assess detailed tooth surfaces. A new reference scanner, based on the focus variation scanning technique, was evaluated with regard to local and general accuracy. A specific scanning protocol was tested to scan original tooth surfaces from dental impressions. Different model materials were also verified. The results showed a high scanning accuracy of the reference scanner, with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision in the case of full arch scans. Current dental impression methods showed much higher deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects such as single tooth surfaces can be scanned with even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences in many fields of dental research. The different magnification levels, combined with high local and general accuracy, can be used to assess changes from single teeth or restorations up to full arch changes. PMID:24836007
Rapid Antimicrobial Susceptibility Testing Using Forward Laser Light Scatter Technology
Clinton, Lani K.; Hewitt, Carolyn; Koyamatsu, Terri; Sun, Yilun; Jamison, Ginger; Perkins, Rosalie; Tang, Li; Pounds, Stanley; Bankowski, Matthew J.
2016-01-01
The delayed reporting of antimicrobial susceptibility testing remains a limiting factor in clinical decision-making in the treatment of bacterial infection. This study evaluates the use of forward laser light scatter (FLLS) to measure bacterial growth for the early determination of antimicrobial susceptibility. Three isolates each (two clinical isolates and one reference strain) of Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa were tested in triplicate using two commercial antimicrobial testing systems, the Vitek2 and the MicroScan MIC panel, to challenge the BacterioScan FLLS. The BacterioScan FLLS showed a high degree of categorical concordance with the commercial methods. Pairwise comparison with each commercial system serving as a reference standard showed 88.9% agreement with MicroScan (two minor errors) and 72.2% agreement with Vitek (five minor errors). FLLS using the BacterioScan system shows promise as a novel method for the rapid and accurate determination of antimicrobial susceptibility. PMID:27558176
INAA Application for Trace Element Determination in Biological Reference Material
NASA Astrophysics Data System (ADS)
Atmodjo, D. P. D.; Kurniawati, S.; Lestiani, D. D.; Adventini, N.
2017-06-01
Trace element determination in biological samples is often used in studies of health and toxicology. Owing to the essentiality and toxicity of trace elements, an accurate determination method is required, which implies that a good Quality Control (QC) procedure should be performed. In this study, QC for trace element determination in biological samples was applied by analyzing the Standard Reference Material (SRM) NIST 8414 Bovine Muscle using Instrumental Neutron Activation Analysis (INAA). Three selected trace elements, Fe, Zn, and Se, were determined. Accuracy is reported as %recovery and precision as the percent coefficient of variation (%CV). The results showed that the %recovery of Fe, Zn, and Se was in the range of 99.4-107%, 92.7-103%, and 91.9-112%, respectively, whereas the %CV values were 2.92, 3.70, and 5.37%, respectively. These results show that the INAA method is precise and accurate for trace element determination in biological matrices.
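The QC figures of merit used above, percent recovery against the certified value and the percent coefficient of variation of replicates, can be computed as follows; the replicate values are hypothetical:

```python
import statistics

def recovery_percent(measured, certified):
    """Percent recovery of a single measurement vs. the certified value."""
    return 100.0 * measured / certified

def cv_percent(values):
    """Percent coefficient of variation (sample SD / mean * 100)."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate Fe results (mg/kg) vs. a certified value
fe_replicates = [69.8, 71.2, 73.5, 70.4, 72.1]
fe_certified = 71.2
recoveries = [recovery_percent(m, fe_certified) for m in fe_replicates]
fe_cv = cv_percent(fe_replicates)
```

Recoveries near 100% indicate accuracy, while a small %CV across replicates indicates precision, mirroring the two criteria reported in the abstract.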
Accuracy of complete-arch dental impressions: a new method of measuring trueness and precision.
Ender, Andreas; Mehl, Albert
2013-02-01
A new approach to both 3-dimensional (3D) trueness and precision is necessary to assess the accuracy of intraoral digital impressions and compare them to conventionally acquired impressions. The purpose of this in vitro study was to evaluate whether a new reference scanner is capable of measuring conventional and digital intraoral complete-arch impressions for 3D accuracy. A steel reference dentate model was fabricated and measured with a reference scanner (digital reference model). Conventional impressions were made from the reference model, poured with Type IV dental stone, scanned with the reference scanner, and exported as digital models. Additionally, digital impressions of the reference model were made and the digital models were exported. Precision was measured by superimposing the digital models within each group. Superimposing the digital models on the digital reference model assessed the trueness of each impression method. Statistical significance was assessed with an independent sample t test (α=.05). The reference scanner delivered high accuracy over the entire dental arch with a precision of 1.6 ±0.6 µm and a trueness of 5.3 ±1.1 µm. Conventional impressions showed significantly higher precision (12.5 ±2.5 µm) and trueness values (20.4 ±2.2 µm) with small deviations in the second molar region (P<.001). Digital impressions were significantly less accurate with a precision of 32.4 ±9.6 µm and a trueness of 58.6 ±15.8 µm (P<.001). More systematic deviations of the digital models were visible across the entire dental arch. The new reference scanner is capable of measuring the precision and trueness of both digital and conventional complete-arch impressions. The digital impression is less accurate and shows a different pattern of deviation than the conventional impression. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
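The trueness and precision definitions used above (deviation from the reference model, and pairwise deviation within a group of repeated scans, both after superimposition) can be sketched on per-point deviation data; the synthetic deviation fields below are illustrative, not scan data:

```python
import numpy as np

def trueness(dev_to_reference):
    """Mean absolute per-point deviation of a scan from the reference
    model (best-fit superimposition is assumed to be done already)."""
    return float(np.mean(np.abs(dev_to_reference)))

def precision(scans):
    """Mean absolute pairwise deviation within a group of repeated
    scans sampled on a common point grid."""
    pair_means = [np.mean(np.abs(scans[i] - scans[j]))
                  for i in range(len(scans))
                  for j in range(i + 1, len(scans))]
    return float(np.mean(pair_means))

# Synthetic per-point deviation fields in micrometres
rng = np.random.default_rng(3)
scans = [rng.normal(0.0, 1.6, 5000) for _ in range(5)]
group_precision = precision(scans)
scan_trueness = trueness(scans[0] + 5.3)   # offset scan vs. reference
```

Trueness captures systematic offset from the reference, while precision captures repeatability within a group, which is why the two values can differ substantially for the same impression method.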
NASA Technical Reports Server (NTRS)
Lee, Sam; Addy, Harold E. Jr.; Broeren, Andy P.; Orchard, David M.
2017-01-01
A test was conducted at the NASA Icing Research Tunnel (IRT) to evaluate altitude scaling methods for thermal ice protection systems. Two new scaling methods based on the Weber number were compared against a method based on the Reynolds number. The results generally agreed with a previous set of tests conducted in the NRCC Altitude Icing Wind Tunnel (AIWT), where the three methods of scaling were also tested and compared along with reference (altitude) icing conditions. In those tests, the Weber number-based scaling methods yielded results much closer to those observed at the reference icing conditions than did the Reynolds number-based method. The test in the NASA IRT used a much larger, asymmetric airfoil with an ice protection system that more closely resembled designs used in commercial aircraft. Following the trends observed during the AIWT tests, the Weber number-based scaling methods resulted in smaller runback ice than the Reynolds number-based scaling, and the ice formed farther upstream. The results show that the new Weber number-based scaling methods, particularly the Weber number with water loading scaling, continue to show promise for ice protection system development and evaluation in atmospheric icing tunnels.
Wen, Shuxiang; Chen, Xiaoling; Xu, Fuzhou; Sun, Huiling
2016-01-01
Real-time quantitative reverse transcription PCR (qRT-PCR) offers a robust method for measuring gene expression levels. Selection of reliable reference gene(s) for gene expression studies helps reduce variation derived from differing amounts of RNA and cDNA and from the efficiency of the reverse transcriptase or polymerase enzymes. Until now, reference genes identified for other members of the family Pasteurellaceae have not been validated for Avibacterium paragallinarum. The aim of this study was to validate nine reference genes of serovar A, B, and C strains of A. paragallinarum in different growth phases by qRT-PCR. Three widely used statistical algorithms, geNorm, NormFinder and the ΔCt method, were used to evaluate the expression stability of the reference genes. Overall rankings showed that in the exponential and stationary phases of serovar A, the most stable reference genes were gyrA and atpD, respectively; in the exponential and stationary phases of serovar B, the most stable reference genes were atpD and recN, respectively; and in the exponential and stationary phases of serovar C, the most stable reference genes were rpoB and recN, respectively. This study provides recommendations for stable endogenous control genes for use in further studies involving measurement of gene expression levels.
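The comparative ΔCt method used above ranks candidates by the mean standard deviation of their pairwise Ct differences across samples; the Ct values below are illustrative, though the gene names follow the abstract:

```python
import statistics

def delta_ct_stability(ct_by_gene):
    """Comparative delta-Ct method: each gene's stability value is the
    mean SD of its Ct differences against every other candidate gene
    across samples; lower values indicate more stable expression."""
    stability = {}
    for gene, ct in ct_by_gene.items():
        sds = [statistics.stdev([a - b for a, b in zip(ct, other_ct)])
               for other, other_ct in ct_by_gene.items() if other != gene]
        stability[gene] = statistics.mean(sds)
    return stability

# Illustrative Ct values across four samples (not measured data)
ct_data = {
    "gyrA": [18.1, 18.3, 18.2, 18.4],
    "atpD": [20.0, 20.3, 20.1, 20.4],
    "recN": [22.5, 23.9, 21.8, 24.2],
}
stability = delta_ct_stability(ct_data)
ranked = sorted(stability, key=stability.get)   # most stable first
```

A gene whose Ct tracks the others closely gets small pairwise SDs and ranks as stable; a gene that varies independently (here `recN`) ranks last.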
NASA Astrophysics Data System (ADS)
Ding, Liang; Wang, Shui; Cai, Bingjie; Zhang, Mancheng; Qu, Changsheng
2018-02-01
In this study, portable X-ray fluorescence spectrometry (pXRF) was used to measure the heavy metal contents of As, Cu, Cr, Ni, Pb and Zn in the soils of heavy metal-contaminated sites. The precision, accuracy and system errors of pXRF were evaluated and compared with traditional laboratory methods to examine the suitability of in situ pXRF. The results show that the pXRF analysis achieved satisfactory accuracy and precision in measuring As, Cr, Cu, Ni, Pb, and Zn in soils, and meets the requirements of the relevant detection technology specifications. For the certified reference soil samples, the pXRF results for As, Cr, Cu, Ni, Pb, and Zn show good linear relationships and coefficients of determination with the values measured using the reference analysis methods; with the exception of Ni, all the measured values were within the 95% confidence interval. In the soil samples, the coefficients of determination between the Cu, Zn, Pb, and Ni concentrations measured by in situ pXRF and the values measured with laboratory analysis all reach 0.9, showing a good linear relationship; however, there were large deviations between methods for Cr and As. This study provides reference data and scientific support for rapid detection of heavy metals in soils using pXRF in site investigation, which can better guide the practical application of pXRF.
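The coefficients of determination reported above can be computed as the squared Pearson correlation for a simple linear fit between pXRF and laboratory values; the concentration pairs below are hypothetical:

```python
import statistics

def r_squared(x, y):
    """Coefficient of determination for a simple least-squares line,
    equal to the squared Pearson correlation of x and y."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical Pb concentrations (mg/kg): in situ pXRF vs. laboratory
pxrf = [12.0, 25.5, 40.2, 81.0, 150.3, 310.0]
lab = [11.2, 27.0, 38.5, 85.4, 146.0, 318.8]
r2 = r_squared(pxrf, lab)
```

An R² at or above 0.9, as reported for Cu, Zn, Pb and Ni, indicates that field pXRF readings track the laboratory values closely enough for screening purposes.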
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.
Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed systematic comparison of the behavior of different models under a consistent implementation of the WTG method and the DGW method and systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce different signed circulation. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.
Marker-free motion correction in weight-bearing cone-beam CT of the knee joint
Berger, M.; Müller, K.; Aichert, A.; Unberath, M.; Thies, J.; Choi, J.-H.; Fahrig, R.; Maier, A.
2016-01-01
Purpose: To allow for a purely image-based motion estimation and compensation in weight-bearing cone-beam computed tomography of the knee joint. Methods: Weight-bearing imaging of the knee joint in a standing position poses additional requirements for the image reconstruction algorithm. In contrast to supine scans, patient motion needs to be estimated and compensated. The authors propose a method that is based on 2D/3D registration of left and right femur and tibia segmented from a prior, motion-free reconstruction acquired in supine position. Each segmented bone is first roughly aligned to the motion-corrupted reconstruction of a scan in standing or squatting position. Subsequently, a rigid 2D/3D registration is performed for each bone to each of K projection images, estimating 6 × 4 × K motion parameters. The motion of individual bones is combined into global motion fields using thin-plate-spline extrapolation. These can be incorporated into a motion-compensated reconstruction in the backprojection step. The authors performed visual and quantitative comparisons between a state-of-the-art marker-based (MB) method and two variants of the proposed method using gradient correlation (GC) and normalized gradient information (NGI) as similarity measure for the 2D/3D registration. Results: The authors evaluated their method on four acquisitions under different squatting positions of the same patient. All methods showed substantial improvement in image quality compared to the uncorrected reconstructions. Compared to NGI and MB, the GC method showed increased streaking artifacts due to misregistrations in lateral projection images. NGI and MB showed comparable image quality at the bone regions. Because the markers are attached to the skin, the MB method performed better at the surface of the legs where the authors observed slight streaking of the NGI and GC methods. 
For a quantitative evaluation, the authors computed the universal quality index (UQI) for all bone regions with respect to the motion-free reconstruction. The authors' quantitative evaluation over regions around the bones yielded a mean UQI of 18.4 for no correction, 53.3 and 56.1 for the proposed method using GC and NGI, respectively, and 53.7 for the MB reference approach. In contrast to the authors' registration-based corrections, the MB reference method caused slight nonrigid deformations at bone outlines when compared to a motion-free reference scan. Conclusions: The authors showed that their method based on the NGI similarity measure yields reconstruction quality close to that of the MB reference method. In contrast to the MB method, the proposed method does not require any preparation prior to the examination, which will improve the clinical workflow and patient comfort. Further, the authors found that the MB method causes small, nonrigid deformations at the bone outline, which indicates that markers may not accurately reflect the internal motion close to the knee joint. Therefore, the authors believe that the proposed method is a promising alternative to MB motion management. PMID:26936708
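The universal quality index used in the evaluation above has a closed form: for images x and y, UQI = 4·σ_xy·x̄·ȳ / ((σ_x² + σ_y²)(x̄² + ȳ²)), equal to 1 for identical images. A minimal sketch of the global version (the paper evaluates it over bone regions; region selection is omitted here):

```python
import numpy as np

def uqi(x, y):
    """Universal quality index of Wang & Bovik (global form, not windowed)."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    # Combines loss of correlation, luminance distortion and contrast distortion.
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

For a perfect reconstruction the index is 1; any intensity shift or decorrelation pulls it below 1.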
NASA Astrophysics Data System (ADS)
Ahooghalandari, Matin; Khiadani, Mehdi; Jahromi, Mina Esmi
2017-05-01
Reference evapotranspiration (ET0) is a critical component of water resources management and planning. Different methods, with different data requirements, have been developed to estimate ET0. In this study, the Hargreaves, Turc, Oudin, Copais, and Abtew methods and three forms of Valiantzas' formulas, developed in recent years, were used to estimate ET0 for the Pilbara region of Western Australia. The estimated ET0 values from these methods were compared with those from the FAO-56 Penman-Monteith (PM) method. The results showed that the Copais method and two of Valiantzas' equations, in their original forms, are suitable for estimating ET0 for the study area. A modified Honey-Bee Mating Optimization (MHBMO) algorithm was then applied to calibrate three of Valiantzas' equations for this southern-hemisphere region.
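Of the methods compared above, Hargreaves is the simplest to reproduce, since it needs only temperature data plus extraterrestrial radiation: ET0 = 0.0023 · Ra · (Tmean + 17.8) · √(Tmax − Tmin). A sketch (the inputs in the test are illustrative values, not Pilbara data):

```python
def hargreaves_et0(t_mean, t_max, t_min, ra):
    """Hargreaves (1985) reference evapotranspiration, in mm/day.

    t_mean, t_max, t_min: daily mean/max/min air temperature (deg C).
    ra: extraterrestrial radiation expressed in mm/day of evaporation equivalent.
    """
    return 0.0023 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5
```

Calibration approaches such as the MHBMO step in the study adjust the empirical coefficients (here 0.0023 and 17.8) to the local climate.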
Diagnostic Performance of a Molecular Test versus Clinician Assessment of Vaginitis.
Schwebke, Jane R; Gaydos, Charlotte A; Nyirjesy, Paul; Paradis, Sonia; Kodsi, Salma; Cooper, Charles K
2018-06-01
Vaginitis is a common complaint, diagnosed either empirically or using Amsel's criteria and wet mount microscopy. This study sought to determine the characteristics of an investigational test (a molecular test for vaginitis), compared to reference methods, for detection of bacterial vaginosis, Candida spp., and Trichomonas vaginalis. Vaginal specimens from a cross-sectional study were obtained from 1,740 women (≥18 years old) with vaginitis symptoms during routine clinic visits (across 10 sites in the United States). Specimens were analyzed using a commercial PCR/fluorogenic probe-based investigational test that detects bacterial vaginosis, Candida spp., and Trichomonas vaginalis. Clinician diagnosis and in-clinic testing (Amsel's test, potassium hydroxide preparation, and wet mount) were also employed to detect the three vaginitis causes. All testing methods were compared to the respective reference methods (Nugent Gram stain for bacterial vaginosis, detection of the Candida gene ITS2, and Trichomonas vaginalis culture). The investigational test, clinician diagnosis, and in-clinic testing were compared to reference methods for bacterial vaginosis, Candida spp., and Trichomonas vaginalis. The investigational test resulted in significantly higher sensitivity and negative predictive value than clinician diagnosis or in-clinic testing. In addition, the investigational test showed a statistically higher overall percent agreement with each of the three reference methods than did clinician diagnosis or in-clinic testing. The investigational test showed significantly higher sensitivity for detecting vaginitis involving more than one cause than did clinician diagnosis. Taken together, these results suggest that a molecular investigational test can facilitate accurate detection of vaginitis. Copyright © 2018 Schwebke et al.
Johansen, Ilona; Andreassen, Rune
2014-12-23
MicroRNAs (miRNAs) are an abundant class of endogenous small RNA molecules that downregulate gene expression at the post-transcriptional level. They play important roles by regulating genes that control multiple biological processes, and in recent years there has been an increased interest in studying miRNA genes and miRNA gene expression. The most common method applied to study the expression of single genes is quantitative PCR (qPCR). However, before the expression of mature miRNAs can be studied, robust qPCR methods (miRNA-qPCR) must be developed. This includes the identification and validation of suitable reference genes. We are particularly interested in Atlantic salmon (Salmo salar). This is an economically important aquaculture species, but no reference genes dedicated for use in miRNA-qPCR methods have been validated for this species. Our aim was, therefore, to identify suitable reference genes for miRNA-qPCR methods in Salmo salar. We used a systematic approach in which we utilized similar studies in other species, some biological criteria, results from deep sequencing of small RNAs and, finally, experimental validation of candidate reference genes by qPCR to identify the most suitable reference genes. Ssa-miR-25-3p was identified as the most suitable single reference gene. The best combination of two reference genes was ssa-miR-25-3p and ssa-miR-455-5p. These two genes were constitutively and stably expressed across many different tissues. Furthermore, infectious salmon anaemia did not seem to affect their expression levels. These genes were amplified with high specificity and good efficiency, and the qPCR assays showed good linearity when applying a simple SYBR Green miRNA-qPCR method using miRNA gene-specific forward primers. We have identified suitable reference genes for miRNA-qPCR in Atlantic salmon. These results will greatly facilitate further studies on miRNA genes in this species.
The reference genes identified are conserved genes that are identical in their mature sequence in many aquaculture species. Therefore, they may also be suitable as reference genes in other teleosts. Finally, the systematic approach used in our study successfully identified suitable reference genes, suggesting that this may be a useful strategy to apply in similar validation studies in other aquaculture species.
Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki
2013-06-17
We have previously proposed a method for detecting vehicle positions and their movements (henceforth referred to as "our previous method") using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions, including poor visibility in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. However, some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of their exteriors. In this paper, we propose a new vehicle detection method (henceforth referred to as "our new method") that detects vehicles based on the thermal energy reflection of tires. We have carried out experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, with 52 false detections in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to automatic traffic flow monitoring and show the effectiveness of our proposal.
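The tire-reflection idea boils down to finding hot, compact regions in each thermal frame. The abstract gives no implementation details, so the following is only an assumed illustration: threshold the frame and keep connected components above a minimum size (both `temp_threshold` and `min_pixels` are hypothetical tuning parameters, not values from the paper):

```python
import numpy as np
from scipy import ndimage

def detect_hot_regions(frame, temp_threshold, min_pixels=4):
    """Toy sketch: threshold a thermal frame (temperatures in deg C) and
    return the pixel coordinates of each sufficiently large hot region."""
    mask = frame > temp_threshold
    labels, n = ndimage.label(mask)  # 4-connected components by default
    regions = [np.argwhere(labels == i + 1) for i in range(n)]
    return [r for r in regions if len(r) >= min_pixels]
```

A real detector would add per-frame tracking and the Viola-Jones stage of the authors' previous method for conditions where tire reflections are weak.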
Wang, L; Rokhlin, S I
2002-09-01
An inversion method based on Floquet wave velocity in a periodic medium has been introduced to determine the single-ply elastic moduli of a multi-ply composite. The stability of this algorithm is demonstrated by numerical simulation. The applicability of the plane wave approximation to the velocity measurement in the double-through-transmission self-reference method has been analyzed using a time-domain beam model. The analysis shows that the finite width of the transmitter affects only the amplitudes of the signals and has almost no effect on the time delay. Using this method, the ply moduli of a multi-ply composite have been experimentally determined. While the paper focuses on elastic constant reconstruction from phase velocity measurements by the self-reference double-through-transmission method, the reconstruction methodology is also applicable to data collected by other methods.
The impacts of speed cameras on road accidents: an application of propensity score matching methods.
Li, Haojie; Graham, Daniel J; Majumdar, Arnab
2013-11-01
This paper aims to evaluate the impact of speed limit enforcement cameras on reducing road accidents in the UK by accounting for both confounding factors and the selection of proper reference groups. The propensity score matching (PSM) method is employed to do this. A naïve before-and-after approach and the empirical Bayes (EB) method are compared with the PSM method. A total of 771 sites and 4,787 sites for the treatment and the potential reference groups, respectively, are observed for a period of 9 years in England. Both the PSM and the EB methods show similar results: there are significant reductions in the number of accidents of all severities at speed camera sites. It is suggested that the propensity score can be used as the criterion for selecting the reference group in before-after control studies. Speed cameras were found to be most effective in reducing accidents up to 200 meters from camera sites, and no evidence of accident migration was found. Copyright © 2013 Elsevier Ltd. All rights reserved.
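The core of the PSM approach is matching each treated site to the reference site with the closest propensity score. A greedy 1:1 nearest-neighbour matcher can be sketched as follows; the score estimation step (typically a logistic regression of treatment on site characteristics) is omitted, and the caliper value is an assumed illustration, not the paper's setting:

```python
def nn_match(treated_ps, control_ps, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching without replacement on
    precomputed propensity scores. Returns (treated_index, control_index) pairs."""
    available = dict(enumerate(control_ps))
    pairs = []
    for i, ps in enumerate(treated_ps):
        if not available:
            break
        j = min(available, key=lambda k: abs(available[k] - ps))
        if abs(available[j] - ps) <= caliper:  # reject poor matches
            pairs.append((i, j))
            del available[j]  # without replacement
    return pairs
```

After matching, the treatment effect is estimated by comparing accident counts between the matched treated and reference sites, which is what makes the matched group a defensible counterfactual.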
Fully automated motion correction in first-pass myocardial perfusion MR image sequences.
Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2008-11-01
This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical-quality, short-axis perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricle (LV) motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with the average error between registered data and the manual gold standard reduced from 2.65+/-7.89% to 0.87+/-3.88%. Comparison of clinically relevant parameters computed using registered data and the manual gold standard shows good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows an accuracy, a robustness and a computation speed adequate for use in a clinical environment.
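The key idea is a reference image that changes over time like the contrast-enhanced data itself, so registration never compares frames with very different intensity. The paper derives this reference from ICA; the sketch below substitutes a truncated SVD (PCA) purely for illustration, keeping the same structure of spatial maps weighted by time-intensity curves:

```python
import numpy as np

def time_varying_reference(frames, n_components=2):
    """Build a time-varying reference sequence from a perfusion series.

    frames: array of shape (time, height, width).
    Note: the paper uses ICA for this decomposition; a truncated SVD is
    used here only to keep the sketch dependency-free.
    """
    t, h, w = frames.shape
    X = frames.reshape(t, h * w)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    # Leading spatial maps (rows of Vt) weighted by their time-intensity
    # curves (columns of U scaled by s) mimic the contrast dynamics.
    approx = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mean
    return approx.reshape(t, h, w)
```

Each original frame is then registered to the reference frame of the same time point, which is what makes the two-pass framework robust to contrast wash-in and wash-out.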
Accuracy of Referring Provider and Endoscopist Impressions of Colonoscopy Indication.
Naveed, Mariam; Clary, Meredith; Ahn, Chul; Kubiliun, Nisa; Agrawal, Deepak; Cryer, Byron; Murphy, Caitlin; Singal, Amit G
2017-07-01
Background: Referring provider and endoscopist impressions of colonoscopy indication are used for clinical care, reimbursement, and quality reporting decisions; however, the accuracy of these impressions is unknown. This study assessed the sensitivity, specificity, positive and negative predictive value, and overall accuracy of methods to classify colonoscopy indication, including referring provider impression, endoscopist impression, and administrative algorithm compared with gold standard chart review. Methods: We randomly sampled 400 patients undergoing a colonoscopy at a Veterans Affairs health system between January 2010 and December 2010. Referring provider and endoscopist impressions of colonoscopy indication were compared with gold-standard chart review. Indications were classified into 4 mutually exclusive categories: diagnostic, surveillance, high-risk screening, or average-risk screening. Results: Of 400 colonoscopies, 26% were performed for average-risk screening, 7% for high-risk screening, 26% for surveillance, and 41% for diagnostic indications. Accuracy of referring provider and endoscopist impressions of colonoscopy indication were 87% and 84%, respectively, which were significantly higher than that of the administrative algorithm (45%; P <.001 for both). There was substantial agreement between endoscopist and referring provider impressions (κ=0.76). All 3 methods showed high sensitivity (>90%) for determining screening (vs nonscreening) indication, but specificity of the administrative algorithm was lower (40.3%) compared with referring provider (93.7%) and endoscopist (84.0%) impressions. Accuracy of endoscopist, but not referring provider, impression was lower in patients with a family history of colon cancer than in those without (65% vs 84%; P =.001). Conclusions: Referring provider and endoscopist impressions of colonoscopy indication are both accurate and may be useful data to incorporate into algorithms classifying colonoscopy indication. 
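The metrics reported above all derive from a 2x2 table of classified indication versus gold-standard chart review. For reference, a minimal computation:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and overall accuracy from the
    counts of a 2x2 confusion table (true/false positives/negatives)."""
    return {
        "sensitivity": tp / (tp + fn),   # of true screening cases, fraction flagged
        "specificity": tn / (tn + fp),   # of non-screening cases, fraction cleared
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

The pattern in the study (high sensitivity but low specificity for the administrative algorithm) corresponds to a table with few false negatives but many false positives.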
Copyright © 2017 by the National Comprehensive Cancer Network.
Han, Bingqing; Ge, Menglei; Zhao, Haijian; Yan, Ying; Zeng, Jie; Zhang, Tianjiao; Zhou, Weiyan; Zhang, Jiangtao; Wang, Jing; Zhang, Chuanbao
2017-11-27
Serum calcium level is an important clinical index that reflects pathophysiological states. However, the accuracy of routine laboratory tests is not ideal; as such, a high-accuracy method is needed. We developed a reference method for measuring serum calcium levels by isotope dilution inductively coupled plasma mass spectrometry (ID ICP-MS), using 42Ca as the enriched isotope. Serum was digested with 69% ultrapure nitric acid and diluted to a suitable concentration. The 44Ca/42Ca ratio was detected in H2 mode; the spike concentration was calibrated by reverse IDMS using standard reference material (SRM) 3109a, and the sample concentration was measured by a bracketing procedure. We compared the performance of ID ICP-MS with those of three other reference methods in China using the same serum and aqueous samples. The relative expanded uncertainty of the sample concentration was 0.414% (k=2). The repeatability (within-run imprecision), intermediate imprecision (between-run imprecision), and intra-laboratory imprecision were 0.12%-0.19%, 0.07%-0.09%, and 0.16%-0.17%, respectively, for two of the serum samples. SRM909bI, SRM909bII, SRM909c, and GBW09152 were found to be within the certified value intervals, with mean relative bias values of 0.29%, -0.02%, 0.10%, and -0.19%, respectively. The range of recovery was 99.87%-100.37%. Results obtained by ID ICP-MS showed better accuracy than, and were highly correlated with, those of the other reference methods. ID ICP-MS is a simple and accurate candidate reference method for serum calcium measurement and can be used to establish and improve the serum calcium reference system in China.
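Isotope dilution rests on a simple mass balance: mixing a natural-abundance sample with an enriched spike and measuring the blend's isotope ratio fixes the amount of analyte in the sample. A simplified single-IDMS sketch in amount terms (mass-bias correction and the reverse-IDMS spike calibration described in the abstract are omitted):

```python
def idms_sample_amount(n42_spike, r_spike, r_sample, r_blend):
    """Amount of 42Ca contributed by the sample, from the single isotope
    dilution mass balance.

    n42_spike: amount of 42Ca added with the enriched spike (mol).
    r_spike, r_sample, r_blend: n(44Ca)/n(42Ca) ratios in the spike,
    the natural-abundance sample, and the measured blend.
    Derivation: r_blend * (n42_sample + n42_spike)
                = r_sample * n42_sample + r_spike * n42_spike.
    """
    return n42_spike * (r_spike - r_blend) / (r_blend - r_sample)
```

The total calcium amount then follows from the sample's natural isotopic abundances; in practice the measured ratios are first corrected for instrumental mass bias.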
Mauté, Carole; Nibourel, Olivier; Réa, Delphine; Coiteux, Valérie; Grardel, Nathalie; Preudhomme, Claude; Cayuela, Jean-Michel
2014-09-01
Until recently, diagnostic laboratories that wanted to report on the international scale had limited options: they had to align their BCR-ABL1 quantification methods through a sample exchange with a reference laboratory to derive a conversion factor. However, commercial methods calibrated on the World Health Organization genetic reference panel are now available. We report results from a study designed to assess the comparability of the two alignment strategies. Sixty follow-up samples from chronic myeloid leukemia patients were included. Two commercial methods calibrated on the genetic reference panel were compared to two conversion factor methods routinely used at Saint-Louis Hospital, Paris, and at Lille University Hospital. Results were matched against concordance criteria (i.e., obtaining at least two of the three following landmarks: 50, 75 and 90% of the patient samples within a 2-fold, 3-fold and 5-fold range, respectively). Out of the 60 samples, more than 32 were available for comparison. Compared to the conversion factor method, the two commercial methods were within a 2-fold, 3-fold and 5-fold range for 53 and 59%, 89 and 88%, and 100 and 97%, respectively, of the samples analyzed at Saint-Louis. At Lille, the results were 45 and 85%, 76 and 97%, and 100 and 100%, respectively. Agreement between methods was observed in all four comparisons performed. Our data show that the two commercial methods selected are concordant with the conversion factor methods. This study brings the proof of principle that alignment on the international scale using the genetic reference panel is compatible with the patient sample exchange procedure. We believe that these results are particularly important for diagnostic laboratories wishing to adopt commercial methods. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
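The concordance criteria used in the study can be checked mechanically: a method pair passes if at least two of the three fold-range landmarks are met. A sketch, assuming the input is the per-sample ratio between the two methods' BCR-ABL1 values:

```python
import math

def concordance(sample_ratios):
    """True if at least two of the study's three landmarks hold:
    >=50% of samples within 2-fold, >=75% within 3-fold, >=90% within 5-fold.

    sample_ratios: per-sample ratios between the two methods being compared
    (a ratio of 1.0 means perfect agreement on that sample).
    """
    folds = [abs(math.log(r)) for r in sample_ratios]  # symmetric fold difference
    frac = lambda lim: sum(f <= math.log(lim) for f in folds) / len(folds)
    landmarks = [frac(2) >= 0.50, frac(3) >= 0.75, frac(5) >= 0.90]
    return sum(landmarks) >= 2
```

Taking the absolute log makes a 2-fold overestimate and a 2-fold underestimate count the same, which is the intended reading of "within a 2-fold range".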
Qin, Chuan; Zhao, Jianlin; Di, Jianglei; Wang, Le; Yu, Yiting; Yuan, Weizheng
2009-02-10
We employed digital holographic microscopy to visually test microoptoelectromechanical systems (MOEMS). The sample is a blazed-angle adjustable grating. Considering the periodic structure of the sample, a local area unwrapping method based on a binary template was adopted to demodulate the fringes obtained by referring to a reference hologram. A series of holograms at different deformation states due to different drive voltages were captured to analyze the dynamic character of the MOEMS, and the uniformity of different microcantilever beams was also inspected. The results show this testing method is effective for a periodic structure.
Comparative study of minutiae selection algorithms for ISO fingerprint templates
NASA Astrophysics Data System (ADS)
Vibert, B.; Charrier, C.; Le Bars, J.-M.; Rosenberger, C.
2015-03-01
We address the selection of fingerprint minutiae given a fingerprint ISO template. Minutiae selection plays a very important role when a secure element (i.e., a smart card) is used. Because of its limited computation and memory capabilities, the number of minutiae of a stored reference in the secure element is limited. We propose in this paper a comparative study of six minutiae selection methods, including two methods from the literature and a no-selection baseline used as reference. Experimental results on three fingerprint databases from the Fingerprint Verification Competition show their relative efficiency in terms of performance and computation time.
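Minutiae selection for a constrained secure element can be as simple as keeping the best-quality minutiae until the template fits. The paper's six methods are not detailed in the abstract, so the baseline below is only a hypothetical illustration, with an assumed (x, y, angle, quality) tuple layout:

```python
def truncate_minutiae(minutiae, max_count):
    """Naive selection baseline: keep the max_count highest-quality minutiae.

    minutiae: iterable of (x, y, angle, quality) tuples -- an assumed layout,
    not the binary ISO/IEC 19794-2 record format itself.
    """
    return sorted(minutiae, key=lambda m: m[3], reverse=True)[:max_count]
```

More elaborate schemes also consider spatial coverage, since clustering all retained minutiae in one region of the fingerprint hurts matching performance.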
A computerized procedure for teaching the relationship between graphic symbols and their referents.
Isaacson, Mick; Lloyd, Lyle L
2013-01-01
Many individuals with little or no functional speech communicate through graphic symbols. Communication is enhanced when the relationship between symbols and their referents is learned to such a degree that retrieval is effortless, resulting in fluent communication. Developing fluency is a time-consuming endeavor for special educators and speech-language pathologists (SLPs). It would be beneficial for these professionals to have an automated procedure based on the most efficacious method for teaching the relationship between symbols and referents. Hence, this study investigated whether a procedure based on the generation effect would promote learning of the association between symbols and their referents. Results show that referent generation produces the best long-term retention of this relationship. These findings provide evidence that software based on referent generation would provide special educators and SLPs with an efficacious automated procedure, requiring minimal direct supervision, to facilitate symbol/referent learning and the development of communicative fluency.
A Comparative Study of Different EEG Reference Choices for Diagnosing Unipolar Depression.
Mumtaz, Wajid; Malik, Aamir Saeed
2018-06-02
The choice of an electroencephalogram (EEG) reference has fundamental importance and could be critical during clinical decision-making, because an impure EEG reference could falsify the clinical measurements and subsequent inferences. In this research, the suitability of three EEG references was compared while classifying depressed and healthy brains using a machine-learning (ML)-based validation method. EEG data of 30 unipolar depressed subjects and 30 age-matched healthy controls were recorded. The EEG data were analyzed under three different EEG references: the linked-ear reference (LE), the average reference (AR), and the reference electrode standardization technique (REST). The EEG-based functional connectivity (FC) was computed. In addition, graph-theoretic measures, such as the distances between nodes, the minimum spanning tree, and the maximum flow between the nodes for each channel pair, were calculated. An ML scheme provided a mechanism to compare the performances of the extracted features within a general framework comprising feature extraction (graph-theoretic measures), feature selection, classification, and validation. For comparison purposes, performance metrics such as classification accuracy, sensitivity, specificity, and F score were computed. When comparing the three references, diagnostic accuracy was best under REST, while the LE and AR showed less discrimination between the two groups. Based on these results, it can be concluded that the choice of an appropriate reference is critical in the clinical scenario. The REST reference is recommended for future applications of EEG-based diagnosis of mental illnesses.
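Of the three references, the average reference is the easiest to reproduce: subtract the instantaneous mean across channels from every channel. (REST, by contrast, requires a head-model lead field and is beyond a short sketch.)

```python
import numpy as np

def to_average_reference(eeg):
    """Re-reference an EEG array of shape (channels, samples) to the
    average reference (AR) by removing the instantaneous across-channel mean."""
    return eeg - eeg.mean(axis=0, keepdims=True)
```

After this transform, the channel values sum to zero at every time sample, which is exactly the property that defines the average reference.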
Li, Xiuying; Yang, Qiwei; Bai, Jinping; Xuan, Yali; Wang, Yimin
2015-01-01
Normalization to a reference gene is the method of choice for quantitative reverse transcription-PCR (RT-qPCR) analysis. The stability of reference genes is critical for accurate experimental results and conclusions. We have evaluated the expression stability of eight commonly used reference genes in four different types of human mesenchymal stem cells (MSC). Using the geNorm, NormFinder and BestKeeper algorithms, we show that beta-2-microglobulin and peptidyl-prolyl isomerase A were the optimal reference genes for normalizing RT-qPCR data obtained from MSC, whereas the TATA box binding protein was not suitable due to its extensive variability in expression. Our findings emphasize the significance of validating reference genes for qPCR analyses. We offer a short list of reference genes to use for normalization and recommend some commercially available software programs as a rapid approach to validate reference genes. We also demonstrate that two frequently used reference genes, β-actin and glyceraldehyde-3-phosphate dehydrogenase, are not always suitable for normalization.
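geNorm, mentioned above, ranks candidates by a stability measure M: for each gene, the average standard deviation of its log-ratios against every other candidate across samples, with lower M meaning more stable. A simplified sketch (the full algorithm also iteratively drops the least stable gene; that loop is omitted here):

```python
import math
import statistics

def genorm_m(expr):
    """geNorm-style stability value M for each candidate reference gene.

    expr: dict mapping gene name -> list of relative expression values,
    one per sample (same sample order for every gene). Lower M = more stable.
    """
    genes = list(expr)
    m = {}
    for g in genes:
        sds = []
        for h in genes:
            if h == g:
                continue
            ratios = [math.log2(a / b) for a, b in zip(expr[g], expr[h])]
            sds.append(statistics.stdev(ratios))  # pairwise variation V(g, h)
        m[g] = sum(sds) / len(sds)
    return m
```

A gene whose log-ratio to every other candidate is constant across samples gets M = 0, which is why co-regulated stable pairs score best.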
Cho, Yun Sung; Kim, Hyunho; Kim, Hak-Min; Jho, Sungwoong; Jun, JeHoon; Lee, Yong Joo; Chae, Kyun Shik; Kim, Chang Geun; Kim, Sangsoo; Eriksson, Anders; Edwards, Jeremy S.; Lee, Semin; Kim, Byung Chul; Manica, Andrea; Oh, Tae-Kwang; Church, George M.; Bhak, Jong
2016-01-01
Human genomes are routinely compared against a universal reference. However, this strategy could miss population-specific and personal genomic variations, which may be detected more efficiently using an ethnically relevant or personal reference. Here we report a hybrid assembly of a Korean reference genome (KOREF) for constructing personal and ethnic references by combining sequencing and mapping methods. We also build its consensus variome reference, providing information on millions of variants from 40 additional ethnically homogeneous genomes from the Korean Personal Genome Project. We find that the ethnically relevant consensus reference can be beneficial for efficient variant detection. Systematic comparison of human assemblies shows the importance of assembly quality, suggesting the necessity of new technologies to comprehensively map ethnic and personal genomic structure variations. In the era of large-scale population genome projects, the leveraging of ethnicity-specific genome assemblies as well as the human reference genome will accelerate mapping all human genome diversity. PMID:27882922
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
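In WTG implementations of this kind, the large-scale vertical velocity is typically diagnosed by relaxing potential-temperature anomalies toward the reference profile: w ≈ (θ − θ_ref) / (τ ∂θ_ref/∂z). A sketch with an illustrative relaxation timescale (the intercomparison's actual settings and boundary treatment are not given in the abstract):

```python
import numpy as np

def wtg_velocity(theta, theta_ref, z, tau=3 * 3600.0):
    """Diagnose the WTG large-scale vertical velocity (m/s).

    theta, theta_ref: potential temperature profiles (K) of the simulated
    column and the RCE reference state, on heights z (m).
    tau: relaxation timescale (s); 3 h here is illustrative only.
    """
    dtheta_dz = np.gradient(theta_ref, z)  # static stability of the reference
    return (theta - theta_ref) / (tau * dtheta_dz)
```

A warm anomaly over a stable reference profile yields w > 0, i.e. large-scale ascent that adiabatically cools the column back toward the reference state, which is the two-way convection-circulation feedback the abstract describes.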
Tanabe, Akifumi S; Toju, Hirokazu
2013-01-01
Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archaeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequence in a reference database (i.e., the BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined.
Therefore, we need to accelerate the registration of reference barcode sequences to apply high-throughput DNA barcoding to genus or species level identification in biodiversity research. PMID:24204702
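The 1-NN assignment described above can be sketched in a few lines. The similarity metric below (fraction of matching positions between pre-aligned, equal-length sequences) and the reference entries are illustrative stand-ins for a real BLAST-based pipeline, not the authors' implementation:

```python
def one_nn_assign(query, references):
    """Assign the taxon of the most similar reference sequence (1-NN).

    references: dict mapping taxon name -> reference sequence.
    Similarity here is the fraction of matching positions between
    equal-length sequences, a toy stand-in for a BLAST score.
    """
    def similarity(a, b):
        if len(a) != len(b):
            raise ValueError("toy metric assumes pre-aligned, equal-length sequences")
        return sum(x == y for x, y in zip(a, b)) / len(a)

    return max(references, key=lambda taxon: similarity(query, references[taxon]))

# Hypothetical mini reference database (not real barcode sequences)
refs = {
    "Taxon A": "ACGTACGTAC",
    "Taxon B": "ACGTTCGTAG",
    "Taxon C": "TTGTACGAAC",
}
print(one_nn_assign("ACGTACGTAA", refs))  # prints Taxon A
```

The benchmark's key caveat is visible even in this sketch: if the query's true taxon is absent from `refs`, the function still returns the nearest entry, which is exactly the misidentification mode the QCauto method is designed to avoid.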
Single point aerosol sampling: evaluation of mixing and probe performance in a nuclear stack.
Rodgers, J C; Fairchild, C I; Wood, G O; Ortiz, C A; Muyshondt, A; McFarland, A R
1996-01-01
Alternative reference methodologies have been developed for sampling of radionuclides from stacks and ducts, which differ from the methods previously required by the United States Environmental Protection Agency. These alternative reference methodologies have recently been approved by the U.S. EPA for use in lieu of the current standard techniques. The standard EPA methods are prescriptive in the selection of sampling locations and in the design of sampling probes, whereas the alternative reference methodologies are performance driven. Tests were conducted in a stack at Los Alamos National Laboratory to demonstrate the efficacy of some aspects of the alternative reference methodologies. Coefficients of variation of velocity, tracer gas, and aerosol particle profiles were determined at three sampling locations. Results showed that the numerical criteria placed upon the coefficients of variation by the alternative reference methodologies were met at sampling stations located 9 and 14 stack diameters from the flow entrance, but not at a location 1.5 diameters downstream from the inlet. Experiments were conducted to characterize the transmission of 10-μm aerodynamic diameter liquid aerosol particles through three types of sampling probes. The transmission ratio (ratio of aerosol concentration at the probe exit plane to the concentration in the free stream) was 107% for a 113 L min-1 (4-cfm) anisokinetic shrouded probe, but only 20% for an isokinetic probe that follows the existing EPA standard requirements. A specially designed isokinetic probe showed a transmission ratio of 63%. The shrouded probe performance would conform to the alternative reference methodologies criteria; however, the isokinetic probes would not.
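The profile-uniformity checks above rely on the coefficient of variation (COV): the standard deviation of the traverse-point readings expressed as a percentage of their mean. A minimal sketch follows; the velocity readings are hypothetical, and the numeric acceptance limits themselves come from the alternative reference methodologies and are not reproduced here:

```python
from statistics import mean, pstdev

def coefficient_of_variation(values):
    """COV (%) = population standard deviation / mean * 100."""
    return pstdev(values) / mean(values) * 100.0

# Hypothetical velocity readings (m/s) at traverse points of one sampling station
velocities = [10.2, 9.8, 10.1, 10.4, 9.9]
print(round(coefficient_of_variation(velocities), 1))  # prints 2.1
```

The same computation applies to the tracer gas and aerosol concentration profiles; a station passes when the COV of each profile is below the methodology's limit.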
Bias Assessment of General Chemistry Analytes using Commutable Samples.
Koerbin, Gus; Tate, Jillian R; Ryan, Julie; Jones, Graham Rd; Sikaris, Ken A; Kanowski, David; Reed, Maxine; Gill, Janice; Koumantakis, George; Yen, Tina; St John, Andrew; Hickman, Peter E; Simpson, Aaron; Graham, Peter
2014-11-01
Harmonisation of reference intervals for routine general chemistry analytes has been a goal for many years, but analytical bias may prevent it. To determine whether analytical bias is present when comparing methods, commutable samples (samples that have the same properties as the clinical samples routinely analysed) should be used as reference samples to eliminate the possibility of matrix effects. The use of commutable samples has improved the identification of unacceptable analytical performance in the Netherlands and Spain. The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has undertaken a pilot study using commutable samples in an attempt not only to determine country-specific reference intervals but to make them comparable between countries. Australia and New Zealand, through the Australasian Association of Clinical Biochemists (AACB), have also assessed analytical bias using commutable samples and determined that, of the 27 general chemistry analytes studied, 19 showed between-method biases sufficiently small not to prevent harmonisation of reference intervals. Evidence-based approaches, including the determination of analytical bias using commutable material, are necessary when seeking to harmonise reference intervals.
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while still describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performance of three different but closely related model reduction approaches: (1) PCA with geometric sampling (referred to as 'Method 1'), (2) PCA with MCMC sampling (referred to as 'Method 2'), and (3) PCA with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing its inversion results with the corresponding analytical solutions. We generated synthetic data with added noise and inverted them under two different situations: (1) the noisy data and the covariance matrix for the PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient.
In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters; Methods 1 and 2 both provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix used for the PCA analysis is inconsistent with the true models, PCA with either geometric or MCMC sampling will provide incorrect estimates.
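The PCA reduction common to all three methods can be illustrated with a small sketch: models are expressed as the prior mean plus a few leading eigenvectors of a prior covariance, so the search space shrinks from the full parameter count to a handful of PCA coefficients. The five-parameter ensemble below is arbitrary synthetic data, not the convolution model of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of prior model realizations (100 models, 5 parameters),
# scaled so a few directions dominate the variance
prior_models = rng.normal(size=(100, 5)) @ np.diag([3.0, 2.0, 1.0, 0.1, 0.05])

# PCA: eigendecomposition of the prior covariance matrix
prior_mean = prior_models.mean(axis=0)
cov = np.cov(prior_models, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]
basis = eigvecs[:, order[:3]]                  # keep the 3 leading components

# A reduced model is parameterized by 3 coefficients instead of 5 parameters
coeffs = np.array([1.0, -0.5, 0.2])
reduced_model = prior_mean + basis @ coeffs

# Projecting a full model into the reduced space and back loses only the
# variance carried by the discarded components
full = prior_models[0]
recon = prior_mean + basis @ (basis.T @ (full - prior_mean))
print(np.linalg.norm(full - recon) < np.linalg.norm(full - prior_mean))  # prints True
```

The biased case of the study corresponds to building `cov` from an ensemble whose statistics do not match the true model: the reduced basis then cannot represent the truth, whatever sampler explores the coefficients.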
NASA Astrophysics Data System (ADS)
Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad
2017-10-01
The reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran, which makes the regional analysis of this parameter important. The ET0 process is affected by several meteorological parameters, including wind speed, solar radiation, temperature, and relative humidity; therefore, the effect of the distribution type of these effective variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of annual ET0 and its effective parameters were selected. The data used in this research were recorded at 30 synoptic stations in Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting annual ET0 and its four effective parameters. The RMSE results showed that the PPCC test and the L-moment method performed similarly for the regional analysis of reference evapotranspiration and its effective parameters. The results also showed that the distribution type of the parameters affecting ET0 can in turn affect the distribution of reference evapotranspiration.
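The PPCC test measures how linear a probability plot is: it is the Pearson correlation between the ordered observations and the theoretical quantiles of a candidate distribution at chosen plotting positions. A minimal sketch for a normal candidate follows (the study fits Pearson type III, whose quantile function is more involved; the Blom plotting positions and the sample values here are illustrative):

```python
from statistics import NormalDist, mean

def ppcc_normal(sample):
    """Probability plot correlation coefficient against a normal candidate."""
    x = sorted(sample)
    n = len(x)
    # Blom plotting positions and the corresponding theoretical quantiles
    q = [NormalDist().inv_cdf((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]
    mx, mq = mean(x), mean(q)
    num = sum((a - mx) * (b - mq) for a, b in zip(x, q))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - mq) ** 2 for b in q)) ** 0.5
    return num / den

# A sample with a roughly symmetric shape gives a PPCC close to 1
print(ppcc_normal([4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 5.05]) > 0.95)  # prints True
```

Selecting the best of several candidate distributions then amounts to computing a PPCC for each (with that distribution's quantile function) and keeping the one closest to 1.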
Sieslack, Anne K; Dziallas, Peter; Nolte, Ingo; Wefstaedt, Patrick; Hungerbühler, Stephan O
2014-10-12
Right ventricular (RV) volume and function are important diagnostic and prognostic factors in dogs with primary or secondary right-sided heart failure. The complex shape of the right ventricle and its retrosternal position make the quantification of its volume difficult; for that reason, only a few studies exist which deal with the determination of RV volume parameters. In human medicine, cardiac magnetic resonance imaging (CMRI) is considered the reference technique for RV volumetric measurement (Nat Rev Cardiol 7(10):551-563, 2010), but cardiac computed tomography (CCT) and three-dimensional echocardiography (3DE) are other non-invasive methods feasible for RV volume quantification. The purpose of this study was the comparison of 3DE and CCT with CMRI, the gold standard for RV volumetric quantification. 3DE showed significantly lower and CCT significantly higher right ventricular volumes than CMRI. Both techniques showed very good correlations (R > 0.8) with CMRI for the volumetric parameters end-diastolic volume (EDV) and end-systolic volume (ESV). Ejection fraction (EF) and stroke volume (SV) did not differ between CCT and CMRI, whereas 3DE showed a significantly higher EF and lower SV than CMRI. The 3DE values showed excellent intra-observer variability (<3%) and still acceptable inter-observer variability (<13%). CCT provides accurate images of the right ventricle with results comparable to the reference method CMRI. However, CCT overestimates the RV volumes and is therefore not an interchangeable method, with the additional disadvantage of requiring general anaesthesia. 3DE underestimated the RV volumes, which could be explained by its poorer image resolution. The excellent correlation between the methods indicates a close relationship between 3DE and CMRI, although the values are not directly comparable.
3DE is a promising technique for RV volumetric quantification, but further studies in awake dogs and dogs with heart disease are necessary to evaluate its usefulness in veterinary cardiology.
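The derived indices compared across modalities follow directly from the two measured volumes; a minimal sketch with hypothetical values (not figures from the study):

```python
def rv_function(edv_ml, esv_ml):
    """Stroke volume (mL) and ejection fraction (%) from RV volumes."""
    sv = edv_ml - esv_ml
    ef = sv / edv_ml * 100.0
    return sv, ef

# Hypothetical canine RV volumes (mL)
sv, ef = rv_function(edv_ml=50.0, esv_ml=25.0)
print(sv, ef)  # prints 25.0 50.0
```

Because EF and SV are differences and ratios of EDV and ESV, a modality that over- or underestimates both volumes by a similar proportion can still agree with the reference on EF, which is consistent with the CCT results above.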
Retinal status analysis method based on feature extraction and quantitative grading in OCT images.
Fu, Dongmei; Tong, Hejun; Zheng, Shuang; Luo, Ling; Gao, Fulin; Minar, Jiri
2016-07-22
Optical coherence tomography (OCT) is widely used in ophthalmology for viewing the morphology of the retina, which is important for disease detection and for assessing therapeutic effect. The diagnosis of retinal diseases is based primarily on the subjective analysis of OCT images by trained ophthalmologists. This paper describes an automatic OCT image analysis method for computer-aided disease diagnosis, a critical part of eye fundus diagnosis. The study analyzed 300 OCT images acquired by an Optovue Avanti RTVue XR (Optovue Corp., Fremont, CA). First, a normal retinal reference model based on retinal boundaries was presented. Subsequently, two kinds of quantitative methods, based on geometric features and on morphological features, were proposed. The paper puts forward a retinal-abnormality grading decision-making method, applied in the analysis and evaluation of multiple OCT images, and shows the detailed analysis process for four retinal OCT images with different degrees of abnormality. The final grading results verified that the analysis method can distinguish abnormal severity and lesion regions. In a simulation on 150 test images, the analysis of retinal status achieved a sensitivity of 0.94 and a specificity of 0.92. The proposed method can speed up the diagnostic process and objectively evaluate retinal status: it obtains the parameters and features associated with retinal morphology, and quantitative analysis and evaluation of these features, combined with the reference model, enables abnormality judgment of the target image and provides a reference for disease diagnosis.
Assessing the accuracy of TDR-based water leak detection system
NASA Astrophysics Data System (ADS)
Fatemi Aghda, S. M.; GanjaliPour, K.; Nabiollahi, K.
2018-03-01
The use of TDR systems to detect leakage locations in underground pipes has developed in recent years. In this system, a bi-wire installed in parallel with the underground pipes serves as a TDR sensor. This approach largely overcomes the limitations of the traditional acoustic method of leak positioning. TDR-based leak detection is relatively accurate when the TDR sensor is in contact with water at just one point, and researchers have been working to improve its accuracy in recent years. In this study, the ability of the TDR method was evaluated when multiple leakage points appear simultaneously. For this purpose, several laboratory tests were conducted: to simulate leakage points, the TDR sensor was put in contact with water at several points, and then the number and size of the simulated leakage points were gradually increased. The results showed that as the number and size of the leakage points increase, the error rate of the TDR-based water leak detection system increases. Based on the laboratory results, the authors developed a method to improve the accuracy of TDR-based leak detection systems. They defined a few reference points on the TDR sensor, created by increasing the distance between the two conductors of the sensor, which makes them easily identifiable in the TDR waveform. The tests were then repeated using the TDR sensor with reference points, and an equation based on the reference points was developed to calculate the exact distance of the leakage point. A comparison between the results of both tests (with and without reference points) showed that the developed method and equation can significantly improve the accuracy of positioning the leakage points.
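The correction idea, mapping an apparent TDR position onto a true cable distance by interpolating between reference points of known physical location, can be sketched as follows. The reference-point values and the linear form are illustrative; the authors' actual equation is given in the paper:

```python
def leak_distance(apparent, ref_points):
    """Map an apparent TDR distance to a physical distance by linear
    interpolation between bracketing reference points.

    ref_points: list of (apparent_distance, physical_distance) pairs,
    sorted by apparent distance.
    """
    for (a0, p0), (a1, p1) in zip(ref_points, ref_points[1:]):
        if a0 <= apparent <= a1:
            frac = (apparent - a0) / (a1 - a0)
            return p0 + frac * (p1 - p0)
    raise ValueError("apparent distance outside the calibrated range")

# Hypothetical calibration: apparent positions drift from the true positions
refs = [(0.0, 0.0), (48.0, 50.0), (97.0, 100.0)]
print(leak_distance(72.5, refs))  # prints 75.0
```

Because each reference point re-anchors the mapping locally, errors accumulated along the cable (for example from earlier leakage points slowing the pulse) are confined to a single segment instead of growing with distance.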
Detector-unit-dependent calibration for polychromatic projections of rock core CT.
Li, Mengfei; Zhao, Yunsong; Zhang, Peng
2017-01-01
Computed tomography (CT) plays an important role in digital rock analysis, a promising new technique for the oil and gas industry. However, artifacts in CT images reduce the accuracy of the digital rock model. In this study, we proposed and demonstrated a novel method to recover detector-unit-dependent functions for polychromatic projection calibration by scanning simple-shaped reference samples. As long as the attenuation coefficients of the reference samples are similar to those of the scanned object, their size and position need not be exactly known. Both simulated and real data were used to verify the proposed method. The results showed that the new method effectively reduces both beam-hardening artifacts and ring artifacts. Moreover, the method appears to be quite robust.
Visualization of the IMIA Yearbook of Medical Informatics Publications over the Last 25 Years
Tam-Tham, H.; Minty, E. P.
2016-01-01
Background: The last 25 years have been a period of innovation in the area of medical informatics. The International Medical Informatics Association (IMIA) has published, every year for the last quarter century, the Yearbook of Medical Informatics, collating selected papers from various journals in an attempt to provide a summary of the academic medical informatics literature. The objective of this paper is to visualize the evolution of the medical informatics field over the last 25 years according to the frequency of word occurrences in the papers published in the IMIA Yearbook of Medical Informatics. Methods: A literature review was conducted examining the IMIA Yearbook of Medical Informatics between 1992 and 2015. These references were collated into a reference manager application to examine the literature using keyword searches, word clouds, and topic clustering. The data were considered in their entirety, as well as segregated into 3 time periods to examine the evolution of main trends over time. Several methods were used, including word clouds, cluster maps, and custom-developed web-based information dashboards. Results: The literature search resulted in a total of 1210 references published in the Yearbook, of which 213 references were excluded, resulting in 997 references for visualization. Overall, we found that publications were more technical and methods-oriented between 1992 and 1999, and more clinically and patient-oriented between 2000 and 2009; we noted the emergence of “big data”, decision support, and global health in the past decade between 2010 and 2015. Dashboards were additionally created to show individual reference data as well as aggregated information. Conclusion: Medical informatics is a vast and expanding area with new methods and technologies being researched, implemented, and evaluated.
Determining visualization approaches that enhance our understanding of literature is an active area of research, and like medical informatics, is constantly evolving as new software and algorithms are developed. This paper examined several approaches for visualizing the medical informatics literature to show historical trends, associations, and aggregated summarized information to illustrate the state and changes in the IMIA Yearbook publications over the last quarter century. PMID:27362591
Zhang, Guodong; Thau, Eve; Brown, Eric W; Hammack, Thomas S
2013-12-01
The current FDA Bacteriological Analytical Manual (BAM) method for the detection of Salmonella in eggs requires 2 wk to complete. The objective of this project was to improve the BAM method for the detection and isolation of Salmonella in whole shell eggs. A novel protocol, using 1,000 g of liquid eggs for direct preenrichment with 2 L of tryptic soy broth (TSB) followed by enrichment in Rappaport-Vassiliadis and Tetrathionate broths, was compared with the standard BAM method, which requires 96 h room temperature incubation of whole shell egg samples followed by preenrichment in TSB supplemented with FeSO4. Four Salmonella ser. Enteritidis isolates (4 phage types) and one Salmonella ser. Heidelberg isolate were used in the study. Bulk-inoculated pooled liquid eggs, weighing 52 or 56 kg (approximately 1,100 eggs), were used in each trial. Twenty 1,000-g test portions were withdrawn from the pooled eggs for both the alternative and the reference methods. Test portions were inoculated with Salmonella at 1 to 5 cfu/1,000 g eggs, and two replicates were performed for each isolate. In the 8 trials conducted with Salmonella ser. Enteritidis, the alternative method was significantly (P < 0.05) more productive than the reference method in 3 trials and significantly (P < 0.05) less productive in 1 trial; there were no significant (P < 0.05) differences between the 2 methods in the other 4 trials. For Salmonella ser. Heidelberg, combined data from 2 trials showed that the alternative method was significantly (P < 0.05) more efficient than the BAM method.
We have concluded that the alternative method, described herein, has the potential to replace the current BAM culture method for detection and isolation of Salmonella from shell eggs based on the following factors: 1) the alternative method is 4 d shorter than the reference method; 2) it uses regular TSB instead of the more complicated TSB supplemented with FeSO4; and 3) it was equivalent or superior to the reference method in 9 out of 10 trials for the detection of Salmonella in shell eggs.
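Each per-trial comparison above reduces to comparing detection proportions out of 20 test portions per method. As an illustration only (the counts are hypothetical and the study's own statistical protocol may differ), a pure-Python two-proportion z-test:

```python
from math import erf, sqrt

def two_proportion_z(pos1, n1, pos2, n2):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = pos1 / n1, pos2 / n2
    pooled = (pos1 + pos2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical trial: 15/20 positives (alternative) vs 7/20 (reference)
z, p = two_proportion_z(15, 20, 7, 20)
print(p < 0.05)  # prints True
```

With only 20 portions per arm, the normal approximation is rough; an exact test would be the more careful choice for counts this small, which is why this is offered only as a sketch of the comparison logic.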
A localization algorithm of adaptively determining the ROI of the reference circle in image
NASA Astrophysics Data System (ADS)
Xu, Zeen; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen
2018-03-01
Aiming to accurately position detection probes underwater, this paper proposes a method based on computer vision that effectively solves this problem. The idea is as follows. First, because the shape of a heat tube appears similar to a circle in the image, we find a circle whose physical location is well known in the image and set it as the reference circle. Second, we calculate the pixel offset between the reference circle and the probes in the picture, and adjust the steering gear according to that offset. As a result, we can accurately measure the physical distance between the probes and the heat tubes under test, and thus know the precise location of the probes underwater. However, choosing the reference circle in the image is a difficult problem. In this paper, we propose an algorithm that adaptively determines the region of the reference circle: within this region there is only one circle, and that circle is the reference circle. The test results show that the accuracy of extracting the reference circle from the whole picture without using an ROI (region of interest) is only 58.76%, whereas the proposed algorithm achieves 95.88%. The experimental results indicate that the proposed algorithm can effectively improve the efficiency of tube detection.
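The pixel-to-physical conversion that the positioning step relies on can be sketched simply: since the reference circle's physical diameter is known, its apparent diameter in pixels gives a scale factor for converting the probe-to-tube pixel offset. All numbers below are hypothetical:

```python
def physical_offset_mm(pixel_offset, ref_diameter_px, ref_diameter_mm):
    """Convert a pixel offset to millimetres, using the reference circle
    (known physical diameter) as the scale calibration."""
    mm_per_px = ref_diameter_mm / ref_diameter_px
    return pixel_offset * mm_per_px

# Reference circle: 40 mm across, imaged as 160 px; probe offset of 24 px
print(physical_offset_mm(24, 160, 40.0))  # prints 6.0
```

This is why a uniquely identified reference circle matters: if circle detection locks onto the wrong circle, both the scale factor and the measured offset are wrong, and the steering-gear correction moves the probe to the wrong place.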
Factors interfering with the accuracy of five blood glucose meters used in Chinese hospitals.
Lv, Hong; Zhang, Guo-jun; Kang, Xi-xiong; Yuan, Hui; Lv, Yan-wei; Wang, Wen-wen; Randall, Rollins
2013-09-01
The prevalence of diabetes is increasing in China, and glucose control is very important in diabetic patients. The aim of this study was to compare the accuracy of five glucose meters used in Chinese hospitals with a reference method, in the absence and presence of various factors that may interfere with the meters. Within-run precision was evaluated for the Roche Accu-Chek Inform®, Abbott Precision PCx FreeStyle®, Bayer Contour®, J&J LifeScan SureStep Flexx®, and Nova Biomedical StatStrip®. Interference by hematocrit level, maltose, ascorbic acid, acetaminophen, galactose, dopamine, and uric acid was tested at three blood glucose levels: low, medium, and high. Accuracy (bias) of the meters and analytical interference by the various factors were evaluated by comparing results obtained in whole blood specimens with those in plasma samples from the same specimens run on the reference method. The impact of oxygen tension on the five meters was also assessed. Precision was acceptable and differed slightly between meters. There were no significant differences in the measurements between the meters and the reference method. The hematocrit level significantly interfered with all meters except the StatStrip. Measurements were affected to varying degrees by different substances at different glucose levels, e.g. acetaminophen and ascorbic acid (FreeStyle), maltose and galactose (FreeStyle, Accu-Chek), uric acid (FreeStyle, Bayer Contour), and dopamine (Bayer Contour). The measurements with the five meters showed a good correlation with the plasma hexokinase reference method, but most were affected by the hematocrit level, and some meters also showed marked interference by other substances. © 2013 Wiley Periodicals, Inc.
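A bias evaluation of this kind amounts to comparing each meter reading with its paired plasma reference value. A minimal sketch computing mean bias and mean absolute percent difference follows; the paired readings are hypothetical, not data from the study:

```python
from statistics import mean

def bias_stats(meter, reference):
    """Mean bias and mean absolute percent difference vs the reference."""
    diffs = [m - r for m, r in zip(meter, reference)]
    pct = [abs(m - r) / r * 100.0 for m, r in zip(meter, reference)]
    return mean(diffs), mean(pct)

# Hypothetical paired glucose values (mmol/L): meter vs hexokinase reference
meter = [4.9, 7.8, 11.9, 3.1, 15.2]
ref = [5.0, 7.5, 12.3, 3.0, 15.0]
bias, mapd = bias_stats(meter, ref)
print(round(bias, 2), round(mapd, 1))  # prints 0.02 2.8
```

Interference testing repeats this comparison with the candidate substance (e.g. maltose) added, attributing any shift in bias to the interferent.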
Medical Students’ Attitudes and Beliefs towards Psychotherapy: A Mixed Research Methods Study
Constantinou, Costas S.; Georgiou, Maria; Perdikogianni, Maria
2017-01-01
Background: Research findings suggest that attitudes towards psychotherapy predict willingness to seek therapy. However, understanding of how medical students think about using psychotherapy and referring their patients for it is limited. Aims: The aims of this study are to measure medical students’ attitudes towards professional help seeking, and to investigate the reasons why they would or would not refer their patients to psychotherapy in their future role as doctors. Method: The participants were 127 medical students in their first and second year of the MBBS4 programme at the Cyprus campus of St George’s University of London, who completed a self-report measure of attitudes towards psychotherapy and a semi-structured interview. Findings: Participants showed generally positive attitudes towards psychotherapy but were reluctant to use it or refer their patients, largely due to perceived stigma and accessibility. Conclusions: Medical students should be further trained in order to become more confident in using psychotherapy and referring their patients. PMID:28820440
Bailey, Timothy S.; Klaff, Leslie J.; Wallace, Jane F.; Greene, Carmine; Pardo, Scott; Harrison, Bern; Simmons, David A.
2016-01-01
Background: As blood glucose monitoring system (BGMS) accuracy is based on comparison of BGMS and laboratory reference glucose analyzer results, reference instrument accuracy is important to discriminate small differences between BGMS and reference glucose analyzer results. Here, we demonstrate the important role of reference glucose analyzer accuracy in BGMS accuracy evaluations. Methods: Two clinical studies assessed the performance of a new BGMS, using different reference instrument procedures. BGMS and YSI analyzer results were compared for fingertip blood that was obtained by untrained subjects’ self-testing and study staff testing, respectively. YSI analyzer accuracy was monitored using traceable serum controls. Results: In study 1 (N = 136), 94.1% of BGMS results were within International Organization for Standardization (ISO) 15197:2013 accuracy criteria; YSI analyzer serum control results showed a negative bias (−0.64% to −2.48%) at the first site and a positive bias (3.36% to 6.91%) at the other site. In study 2 (N = 329), 97.8% of BGMS results were within accuracy criteria; serum controls showed minimal bias (<0.92%) at both sites. Conclusions: These findings suggest that the ability to demonstrate that a BGMS meets accuracy guidelines is influenced by reference instrument accuracy. PMID:26902794
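System accuracy under ISO 15197:2013 is assessed by the fraction of meter results falling within defined limits of the reference value; the commonly cited limits are ±15 mg/dL for reference values below 100 mg/dL and ±15% at or above, for at least 95% of results. A sketch under that reading of the criteria, with hypothetical paired values (consult the standard itself for the authoritative limits):

```python
def within_iso15197(meter_mgdl, ref_mgdl):
    """True if a single result meets the assumed ISO 15197:2013 limit."""
    if ref_mgdl < 100.0:
        return abs(meter_mgdl - ref_mgdl) <= 15.0
    return abs(meter_mgdl - ref_mgdl) <= 0.15 * ref_mgdl

def pct_within(pairs):
    """Percentage of (meter, reference) pairs meeting the limit."""
    hits = sum(within_iso15197(m, r) for m, r in pairs)
    return hits / len(pairs) * 100.0

# Hypothetical BGMS/reference pairs (mg/dL)
pairs = [(92, 88), (145, 160), (250, 240), (70, 90), (110, 100)]
print(pct_within(pairs))  # prints 80.0
```

The studies' point is visible here: a reference analyzer bias of a few percent shifts every `ref_mgdl`, moving borderline pairs across the limit and changing the reported pass rate even though the meter itself has not changed.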
Using Vision Metrology System for Quality Control in Automotive Industries
NASA Astrophysics Data System (ADS)
Mostofi, N.; Samadzadegan, F.; Roohy, Sh.; Nozari, M.
2012-07-01
The need for more accurate measurements at different stages of industrial applications, such as design, production, and installation, is the main reason industry has been encouraged to adopt industrial photogrammetry (vision metrology systems). Given the main advantages of photogrammetric methods, such as greater economy, a high level of automation, the capability of non-contact measurement, more flexibility, and high accuracy, the approach competes well with other, traditional industrial methods. For industries that manufacture objects from a physical reference model without any mathematical model of it, the producers' main problem is evaluating the production line. This is especially complicated when both the reference and the product are available only as physical objects and can be compared only by direct measurement. In such cases, producers build fixtures fitted to the reference with limited accuracy; in practical reports, the available precision is sometimes no better than millimetres. We used a non-metric high-resolution digital camera for this investigation, and the case study in this paper is an automobile chassis. In this research, a stable photogrammetric network was designed for measuring the industrial object (both reference and product), and then, using bundle adjustment and self-calibration methods, the differences between the reference and product objects were obtained. These differences are useful for the producer to improve the production workflow and deliver more accurate products. The results of this research demonstrate the high potential of the proposed method in industrial fields and prove its efficiency and reliability using the RMSE criterion. The RMSE achieved for this case study is smaller than 200 μm, which shows the high capability of the implemented approach.
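The RMSE criterion behind the sub-200-μm result is the root-mean-square of the distances between corresponding reference and product points after the two point sets have been aligned. A minimal sketch with hypothetical 3D point pairs (in mm; alignment/registration is assumed already done):

```python
from math import sqrt

def rmse(points_a, points_b):
    """RMSE of Euclidean distances between corresponding 3D points."""
    sq = [sum((a - b) ** 2 for a, b in zip(pa, pb))
          for pa, pb in zip(points_a, points_b)]
    return sqrt(sum(sq) / len(sq))

# Hypothetical corresponding points on the reference and product chassis
reference = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0), (100.0, 50.0, 10.0)]
product = [(0.05, -0.03, 0.02), (100.1, 0.04, -0.02), (99.95, 50.06, 10.03)]
print(round(rmse(reference, product), 3))  # prints 0.087
```

In the paper's workflow the point coordinates themselves come out of the bundle adjustment with self-calibration, so the achievable RMSE is bounded by the accuracy of that network solution.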
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...
2015-10-24
Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed systematic comparison of the behavior of different models under a consistent implementation of the WTG method and the DGW method and systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce different signed circulation. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.
Generalized Cross Entropy Method for estimating joint distribution from incomplete information
NASA Astrophysics Data System (ADS)
Xu, Hai-Yan; Kuo, Shyh-Hao; Li, Guoqi; Legara, Erika Fille T.; Zhao, Daxuan; Monterola, Christopher P.
2016-07-01
Obtaining a full joint distribution from individual marginal distributions with incomplete information is a non-trivial task that continues to challenge researchers from various domains including economics, demography, and statistics. In this work, we develop a new methodology, referred to as the "Generalized Cross Entropy Method" (GCEM), aimed at addressing this issue. The objective function is a weighted sum of divergences between joint distributions and various references. We show that the solution of the GCEM is unique and globally optimal. Furthermore, we illustrate the applicability and validity of the method by utilizing it to recover the joint distribution of household profiles in a given administrative region. In particular, we estimate the joint distribution of household size, household dwelling type, and household home ownership in Singapore. Results show a high-accuracy estimation of the full joint distribution of the household profile under study. Finally, the impact of the constraints and weights on the estimation of the joint distribution is explored.
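The GCEM minimizes a weighted sum of divergences; a classical special case, minimizing the KL divergence to a reference joint subject to fixed marginals, is solved by iterative proportional fitting (IPF). A minimal two-attribute sketch (the seed and marginals below are illustrative, not the paper's Singapore data, and this is the textbook baseline rather than the authors' GCEM):

```python
import numpy as np

def ipf(seed, row_marg, col_marg, iters=200, tol=1e-10):
    """Iterative proportional fitting: rescale a seed joint distribution so its
    row/column sums match the target marginals. Among all joints with those
    marginals, the result minimizes the KL divergence to the seed."""
    p = np.asarray(seed, dtype=float)
    row_marg = np.asarray(row_marg, dtype=float)
    col_marg = np.asarray(col_marg, dtype=float)
    for _ in range(iters):
        p *= (row_marg / p.sum(axis=1))[:, None]   # match row sums
        p *= (col_marg / p.sum(axis=0))[None, :]   # match column sums
        if np.allclose(p.sum(axis=1), row_marg, atol=tol):
            break
    return p

# Illustrative: household size (rows) x dwelling type (cols)
seed = np.ones((3, 2)) / 6.0                  # uniform reference joint
p = ipf(seed, [0.5, 0.3, 0.2], [0.6, 0.4])
print(p.sum(axis=1), p.sum(axis=0))           # marginals now match the targets
```

With a uniform seed, IPF converges to the independent joint (the outer product of the marginals); a non-uniform seed lets prior interaction structure survive into the estimate.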
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...
2016-03-18
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
Plant, R. S.; Woolnough, S. J.; Sessions, S.; Herman, M. J.; Sobel, A.; Wang, S.; Kim, D.; Cheng, A.; Bellon, G.; Peyrille, P.; Ferry, F.; Siebesma, P.; van Ulft, L.
2016-01-01
Abstract As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large‐scale dynamics in a set of cloud‐resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative‐convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large‐scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column‐relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large‐scale velocity profiles which are smoother and less top‐heavy compared to those produced by the WTG simulations. These large‐scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two‐way feedback between convection and the large‐scale circulation. PMID:27642501
Shu, Y Y; Lao, R C; Chiu, C H; Turle, R
2000-12-01
The microwave-assisted extraction (MAE) of polycyclic aromatic hydrocarbons (PAHs) from harbor sediment reference material EC-1, marine sediment reference material HS-2 and PAH-spiked river-bed soil was conducted. Extractions of EC-1 were carried out at 70 °C and 100 °C under pressure in closed vessels with cyclohexane-acetone (1:1), cyclohexane-water (3:1), hexane-acetone (1:1), and hexane-water (3:1) for 10 min. A comparison between MAE and a 16-h Soxhlet extraction (SX) showed that both techniques gave results comparable with the certified values. MAE has advantages over the currently used Soxhlet technique due to its faster extraction time and the lower quantity of solvent used: the organic solvent consumption of the microwave method was less than one-tenth that of the Soxhlet method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konstantinidis, Anastasios C.; Olivo, Alessandro; Speller, Robert D.
2011-12-15
Purpose: The x-ray performance evaluation of digital x-ray detectors is based on the calculation of the modulation transfer function (MTF), the noise power spectrum (NPS), and the resultant detective quantum efficiency (DQE). The flat images used for the extraction of the NPS should not contain any fixed pattern noise (FPN) to avoid contamination from nonstochastic processes. The "gold standard" method used for the reduction of the FPN (i.e., the different gain between pixels) in linear x-ray detectors is based on normalization with an average reference flat-field. However, the noise in the corrected image depends on the number of flat frames used for the average flat image. The aim of this study is to modify the standard gain correction algorithm to make it independent of the number of reference flat frames used. Methods: Many publications suggest the use of 10-16 reference flat frames, while other studies use higher numbers (e.g., 48 frames) to reduce the propagated noise from the average flat image. This study quantifies experimentally the effect of the number of reference flat frames used on the NPS and DQE values and modifies the gain correction algorithm accordingly to compensate for this effect. Results: It is shown that, using the suggested gain correction algorithm, a minimum number of reference flat frames (down to one frame) can be used to eliminate the FPN from the raw flat image. This saves computer memory and time during the x-ray performance evaluation. Conclusions: The authors show that the method presented in this study (a) leads to the maximum DQE value that one would obtain with the conventional method and a very large number of frames, and (b) yields DQE values identical to those of an independent gain correction method based on the subtraction of flat-field images. They believe this provides robust validation of the proposed method.
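The standard gain (flat-field) correction that the authors modify can be sketched as follows; the arrays are tiny illustrative examples, and the paper's actual modification (compensating for the number of flats) is not reproduced here:

```python
import numpy as np

def gain_correct(raw, flats):
    """Standard flat-field gain correction: divide the raw image by the
    average reference flat, then restore the global mean signal level."""
    flat = np.mean(np.asarray(flats, dtype=float), axis=0)  # average reference flat
    return raw / flat * flat.mean()

# Illustrative 2x2 detector with pixel-to-pixel gain differences (the FPN)
gain = np.array([[1.0, 1.2], [0.9, 1.1]])
flats = [100.0 * gain, 102.0 * gain]        # two reference flat frames
raw = 50.0 * gain                           # uniform exposure seen through the gains
corrected = gain_correct(raw, flats)
print(corrected)                            # gain pattern removed: all pixels equal
```

In this noise-free toy case the correction removes the fixed pattern exactly; with real, noisy flats, the residual noise of the average flat propagates into the corrected image, which is precisely the dependence the paper's modified algorithm eliminates.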
Feature weighting using particle swarm optimization for learning vector quantization classifier
NASA Astrophysics Data System (ADS)
Dongoran, A.; Rahmadani, S.; Zarlis, M.; Zakarias
2018-03-01
This paper proposes a method of feature weighting for classification tasks with the competitive-learning artificial neural network LVQ. The method searches for the weight of each attribute using particle swarm optimization (PSO), so that each attribute's influence on the output is tuned. The method is applied to an LVQ classifier and tested on three datasets obtained from the UCI Machine Learning Repository. Accuracy is then analyzed for two approaches: the first, using LVQ1, is referred to as LVQ-Classifier, and the second, the proposed model, is referred to as PSOFW-LVQ. The results show that the PSO algorithm is capable of finding attribute weights that increase the accuracy of the LVQ classifier.
Analytic Guidance for the First Entry in a Skip Atmospheric Entry
NASA Technical Reports Server (NTRS)
Garcia-Llama, Eduardo
2007-01-01
This paper presents an analytic method to generate a reference drag trajectory for the first entry portion of a skip atmospheric entry. The drag reference, expressed as a polynomial function of the velocity, will meet the conditions necessary to fit the requirements of the complete entry phase. The generic method proposed to generate the drag reference profile is further simplified by thinking of the drag and the velocity as density and cumulative distribution functions respectively. With this notion it will be shown that the reference drag profile can be obtained by solving a linear algebraic system of equations. The resulting drag profile is flown using the feedback linearization method of differential geometric control as guidance law with the error dynamics of a second order homogeneous equation in the form of a damped oscillator. This approach was first proposed as a revisited version of the Space Shuttle Orbiter entry guidance. However, this paper will show that it can be used to fly the first entry in a skip entry trajectory. In doing so, the gains in the error dynamics will be changed at a certain point along the trajectory to improve the tracking performance.
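The step of obtaining a polynomial drag reference from a linear algebraic system can be sketched generically: impose boundary conditions on D(V) = a0 + a1 V + a2 V² + a3 V³ and solve the resulting Vandermonde-type system. The velocities and drag values below are illustrative placeholders, not flight data from the paper:

```python
import numpy as np

# Cubic drag profile constrained by drag values and slopes at the entry and
# exit velocities of the first-entry arc (all numbers hypothetical).
V0, Vf = 11.0, 7.5          # km/s, assumed entry/exit velocities
D0, Df = 0.5, 2.0           # m/s^2, assumed drag boundary values
dD0, dDf = 0.0, 0.0         # zero slope imposed at both ends

A = np.array([[1, V, V**2, V**3] for V in (V0, Vf)] +
             [[0, 1, 2*V, 3*V**2] for V in (V0, Vf)])
b = np.array([D0, Df, dD0, dDf])
a = np.linalg.solve(A, b)    # polynomial coefficients a0..a3

D = lambda V: float(np.polyval(a[::-1], V))   # polyval wants descending order
print(round(D(V0), 6), round(D(Vf), 6))
```

Four conditions determine the cubic uniquely; adding more conditions (e.g., on range, as the paper's density/CDF analogy allows) simply enlarges the linear system.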
(GTG)5-PCR reference framework for acetic acid bacteria.
Papalexandratou, Zoi; Cleenwerck, Ilse; De Vos, Paul; De Vuyst, Luc
2009-11-01
One hundred and fifty-eight strains of acetic acid bacteria (AAB) were subjected to (GTG)5-PCR fingerprinting to construct a reference framework for their rapid classification and identification. Most of them clustered according to their respective taxonomic designation; others had to be reclassified based on polyphasic data. This study shows the usefulness of the method to determine the taxonomic and phylogenetic relationships among AAB and to study the AAB diversity of complex ecosystems.
Rapid Antimicrobial Susceptibility Testing Using Forward Laser Light Scatter Technology.
Hayden, Randall T; Clinton, Lani K; Hewitt, Carolyn; Koyamatsu, Terri; Sun, Yilun; Jamison, Ginger; Perkins, Rosalie; Tang, Li; Pounds, Stanley; Bankowski, Matthew J
2016-11-01
The delayed reporting of antimicrobial susceptibility testing remains a limiting factor in clinical decision-making in the treatment of bacterial infection. This study evaluates the use of forward laser light scatter (FLLS) to measure bacterial growth for the early determination of antimicrobial susceptibility. Three isolates each (two clinical isolates and one reference strain) of Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa were tested in triplicate using two commercial antimicrobial testing systems, the Vitek2 and the MicroScan MIC panel, to challenge the BacterioScan FLLS. The BacterioScan FLLS showed a high degree of categorical concordance with the commercial methods. Pairwise comparison with each commercial system serving as a reference standard showed 88.9% agreement with MicroScan (two minor errors) and 72.2% agreement with Vitek (five minor errors). FLLS using the BacterioScan system shows promise as a novel method for the rapid and accurate determination of antimicrobial susceptibility. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Performance Evaluation and Community Application of Low-Cost Sensors for Ozone and Nitrogen Dioxide.
Duvall, Rachelle M; Long, Russell W; Beaver, Melinda R; Kronmiller, Keith G; Wheeler, Michael L; Szykman, James J
2016-10-13
This study reports on the performance of electrochemical-based low-cost sensors and their use in a community application. CairClip sensors were collocated with federal reference and equivalent methods and operated in a network of sites by citizen scientists (community members) in Houston, Texas and Denver, Colorado, under the umbrella of the NASA-led DISCOVER-AQ Earth Venture Mission. Measurements were focused on ozone (O₃) and nitrogen dioxide (NO₂). The performance evaluation showed that the CairClip O₃/NO₂ sensor provided a consistent measurement response to that of reference monitors (r² = 0.79 in Houston; r² = 0.72 in Denver) whereas the CairClip NO₂ sensor measurements showed no agreement to reference measurements. The CairClip O₃/NO₂ sensor data from the citizen science sites compared favorably to measurements at nearby reference monitoring sites. This study provides important information on data quality from low-cost sensor technologies and is one of few studies that reports sensor data collected directly by citizen scientists.
Reference value sensitivity of measures of unfair health inequality
García-Gómez, Pilar; Schokkaert, Erik; Van Ourti, Tom
2014-01-01
Most politicians and ethical observers are not interested in pure health inequalities, as they want to distinguish between different causes of health differences. Measures of “unfair” inequality - direct unfairness and the fairness gap, but also the popular standardized concentration index - therefore neutralize the effects of what are considered to be “legitimate” causes of inequality. This neutralization is performed by putting a subset of the explanatory variables at reference values, e.g. their means. We analyze how the inequality ranking of different policies depends on the specific choice of reference values. We show with mortality data from the Netherlands that the problem is empirically relevant and we suggest a statistical method for fixing the reference values. PMID:24954998
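The neutralization step the authors analyze can be sketched with a simple linear health model: fit health on "legitimate" and "illegitimate" sources of variation, then predict with the legitimate variables fixed at a chosen reference value, so the remaining spread reflects only unfair variation. The variable names, coefficients and data below are hypothetical, and this is a simplified rendering of direct unfairness, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(20, 80, n)        # "legitimate" source of health variation
income = rng.normal(0, 1, n)        # "illegitimate" source
health = 80 - 0.3 * age + 2.0 * income + rng.normal(0, 1, n)

# Fit the linear model health ~ age + income
X = np.column_stack([np.ones(n), age, income])
beta, *_ = np.linalg.lstsq(X, health, rcond=None)

# Neutralize: set the legitimate variable to a reference value (here its mean),
# so the predicted distribution varies only through income.
X_ref = X.copy()
X_ref[:, 1] = age.mean()
neutralized = X_ref @ beta
print(round(float(np.std(neutralized)), 2))   # spread due to "unfair" variation
```

The paper's point is that this spread, and hence the inequality ranking of policies, can depend on which reference value (mean, minimum, some norm) is plugged in.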
2013-01-01
Background Apomixis is a naturally occurring asexual mode of seed reproduction resulting in offspring genetically identical to the maternal plant. Identifying differential gene expression patterns between apomictic and sexual plants is valuable to help deconstruct the trait. Quantitative RT-PCR (qRT-PCR) is a popular method for analyzing gene expression. Normalizing gene expression data using proper reference genes which show stable expression under investigated conditions is critical in qRT-PCR analysis. We used qRT-PCR to validate expression and stability of six potential reference genes (EF1alpha, EIF4A, UBCE, GAPDH, ACT2 and TUBA) in vegetative and reproductive tissues of B-2S and B-12-9 accessions of C. ciliaris. Findings Among tissue types evaluated, EF1alpha showed the highest level of expression while TUBA showed the lowest. When all tissue types were evaluated and compared between genotypes, EIF4A was the most stable reference gene. Gene expression stability for specific ovary stages of B-2S and B-12-9 was also determined. Except for TUBA, all other tested reference genes could be used for any stage-specific ovary tissue normalization, irrespective of the mode of reproduction. Conclusion Our gene expression stability assay using six reference genes, in sexual and apomictic accessions of C. ciliaris, suggests that EIF4A is the most stable gene across all tissue types analyzed. All other tested reference genes, with the exception of TUBA, could be used for gene expression comparison studies between sexual and apomictic ovaries over multiple developmental stages. This reference gene validation data in C. ciliaris will serve as an important base for future apomixis-related transcriptome data validation. PMID:24083672
Graphical Technique to Support the Teaching/Learning Process of Software Process Reference Models
NASA Astrophysics Data System (ADS)
Espinosa-Curiel, Ismael Edrein; Rodríguez-Jacobo, Josefina; Fernández-Zepeda, José Alberto
In this paper, we propose a set of diagrams to visualize software process reference models (PRMs). The diagrams, called dimods, combine visual and process modeling techniques such as rich pictures, mind maps, IDEF and RAD diagrams. We show the use of this technique by designing a set of dimods for the Mexican Software Industry Process Model (MoProSoft). Additionally, we evaluate the usefulness of dimods. The results show that dimods may be a support tool that facilitates the understanding, memorization, and learning of software PRMs in both software development organizations and universities. The results also show that dimods may have advantages over traditional description methods for these types of models.
Ravindranath, Jayasurya; Pillai, Priyamvada P Sivan; Parameswaran, Sreejith; Kamalanathan, Sadish Kumar; Pal, Gopal Krushna
2016-09-01
Body composition analysis is required for accurate assessment of nutritional status in patients with predialysis chronic kidney disease (CKD). The reference method for assessing body fat is dual-energy X-ray absorptiometry (DXA), but it is relatively expensive and often not available for widespread clinical use. There is only limited data on the utility of less expensive and easily available alternatives such as multifrequency bioimpedance assay (BIA) and skinfold thickness (SFT) measurements for assessing body fat in predialysis CKD. This study assesses the utility of BIA and SFT in measuring body fat compared to the reference method DXA in subjects with predialysis CKD. Body composition analysis was done in 50 subjects with predialysis CKD using multifrequency BIA, SFT, and DXA. The agreement between the body fat percentages measured by the reference method DXA and by BIA/SFT was assessed by paired t-test, intraclass correlation coefficients (ICCs), regression, and Bland-Altman plots. The percentage of body fat measured by BIA was higher than that measured by DXA, but the difference was not significant (30.44 ± 9.34 vs. 28.62 ± 9.00; P = .071). The ICC between DXA and BIA was 0.822 (confidence interval: 0.688, 0.899; P = .000). The mean body fat percentage measured by anthropometry (SFT) was considerably lower than that measured by DXA (23.62 ± 8.18 vs. 28.62 ± 9.00; P = .000). The ICC between DXA and SFT was 0.851 (confidence interval: 0.739, 0.915; P = .000). Bland-Altman plots showed that BIA overestimated body fat by a mean of 1.8% (standard deviation, 6.98), whereas SFT underestimated body fat by 5% (standard deviation, 4.01). Regression plots showed better agreement between SFT and DXA (R² = 0.79) than between BIA and DXA (R² = 0.50). Overall, SFT showed better agreement with DXA. Body mass index (BMI) showed a moderate positive correlation with body fat measured by DXA, whereas serum albumin failed to show good correlation.
SFT showed relatively better agreement with the reference method DXA, compared to BIA. SFT can be used as a tool for assessing nutritional status in predialysis patients with CKD. Copyright © 2016 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
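The Bland-Altman analysis used above (bias and 95% limits of agreement between a candidate method and the reference) is straightforward to compute; the body-fat percentages below are hypothetical, not the study's data:

```python
import numpy as np

def bland_altman(method, reference):
    """Bland-Altman bias and 95% limits of agreement between two methods."""
    m, r = np.asarray(method, dtype=float), np.asarray(reference, dtype=float)
    diff = m - r
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical body-fat percentages from five subjects
dxa = np.array([25.0, 30.0, 28.0, 35.0, 22.0])   # reference method
bia = np.array([27.0, 31.5, 30.0, 37.0, 24.5])   # candidate method
bias, (lo, hi) = bland_altman(bia, dxa)
print(round(bias, 2))
```

A positive bias means the candidate method overestimates relative to the reference, which is the pattern the study reports for BIA versus DXA.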
Kuś, Tomasz; Krylov, Anna I
2011-08-28
The charge-stabilization method is applied to double ionization potential equation-of-motion (EOM-DIP) calculations to stabilize unstable dianion reference functions. The auto-ionizing character of the dianionic reference states spoils the numeric performance of EOM-DIP, limiting applications of this method. We demonstrate that reliable excitation energies can be computed by EOM-DIP using a stabilized resonance wave function instead of the lowest-energy solution corresponding to the neutral + free electron(s) state of the system. The details of the charge-stabilization procedure are discussed and illustrated by examples. The choice of an optimal stabilizing Coulomb potential, strong enough to stabilize the dianion reference yet minimally perturbing the target states of the neutral, is the crux of the approach. Two algorithms for choosing optimal parameters of the stabilization potential are presented: one based on the orbital energies, and the other on the basis-set dependence of the total Hartree-Fock energy of the reference. Our benchmark calculations of the singlet-triplet energy gaps in several diradicals show a remarkable improvement of the EOM-DIP accuracy in problematic cases. Overall, the excitation energies in diradicals computed using the stabilized EOM-DIP are within 0.2 eV of the reference EOM spin-flip values. © 2011 American Institute of Physics.
Zhu, Yuanyuan; Yang, Chao; Weng, Mingjiao; Zhang, Yan; Yang, Chunhui; Jin, Yinji; Yang, Weiwei; He, Yan; Wu, Yiqi; Zhang, Yuhua; Wang, Guangyu; RajkumarEzakiel Redpath, Riju James; Zhang, Lei; Jin, Xiaoming; Liu, Ying; Sun, Yuchun; Ning, Ning; Qiao, Yu; Zhang, Fengmin; Li, Zhiwei; Wang, Tianzhen; Zhang, Yanqiao; Li, Xiaobo
2017-01-01
Numerous lines of evidence indicate that aspirin usage causes a significant reduction in colorectal cancer. However, the molecular mechanisms by which aspirin prevents colon cancer are largely unknown. Quantitative reverse transcription polymerase chain reaction (qRT-PCR) is one of the most frequently used methods to identify the target molecules regulated by a given compound. However, this method needs stable internal reference genes to analyze expression changes of the targets. In this study, the transcriptional stability of several traditional reference genes was evaluated in colon cancer cells treated with aspirin; suitable internal reference genes were screened using a microarray, further identified using the geNorm and NormFinder software, and then validated in additional cell lines and xenografts. We show that three traditional internal reference genes, β-actin, GAPDH and α-tubulin, are not suitable for studying gene transcription in colon cancer cells treated with aspirin, and we identify and validate TMEM208 and PQLC2 as ideal internal reference genes for detecting the molecular targets of aspirin in colon cancer in vitro and in vivo. This study reveals stable internal reference genes for studying the target genes of aspirin in colon cancer, which will contribute to identifying the molecular mechanism behind aspirin's prevention of colon cancer. PMID:28184026
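The geNorm stability measure mentioned above ranks candidate reference genes by the average pairwise variation of their log expression ratios across samples (lower M means more stable). A minimal sketch with hypothetical relative-expression values, not the study's qRT-PCR data:

```python
import numpy as np

def genorm_m(expr):
    """geNorm-style stability M per gene: for each gene, the mean over all
    other genes of the standard deviation (across samples) of the log2
    expression ratio. expr: samples x genes matrix of relative expression."""
    e = np.log2(np.asarray(expr, dtype=float))
    n_genes = e.shape[1]
    M = np.empty(n_genes)
    for j in range(n_genes):
        sds = [np.std(e[:, j] - e[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        M[j] = np.mean(sds)
    return M

# Hypothetical data: genes 0 and 1 track each other; gene 2 fluctuates wildly
expr = [[1.0, 2.0, 1.0],
        [1.1, 2.1, 4.0],
        [0.9, 1.9, 0.3]]
M = genorm_m(expr)
print(M.round(2))   # the unstable gene gets the largest M
```

In a study like the one above, genes with large M (here gene 2) would be rejected as internal references, while the mutually consistent pair would be retained.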
Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki
2013-01-01
We have already proposed a method for detecting vehicle positions and their movements (henceforth referred to as "our previous method") using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions, including poor visibility in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Some experiments in winter show that its vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the windshield exteriors. In this paper, we propose a new vehicle detection method (henceforth referred to as "our new method") that detects vehicles based on the thermal energy reflection of tires. We conducted experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, with 52 false detections in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to automatic traffic flow monitoring and show the effectiveness of our proposal. PMID:23774988
34 CFR 655.31 - What general selection criteria does the Secretary use?
Code of Federal Regulations, 2012 CFR
2012-07-01
... the actual teaching and supervision of students; and (iii) The time that each person referred to in... project. (2) The Secretary looks for information that shows methods of evaluation that are appropriate for...
34 CFR 655.31 - What general selection criteria does the Secretary use?
Code of Federal Regulations, 2011 CFR
2011-07-01
... the actual teaching and supervision of students; and (iii) The time that each person referred to in... project. (2) The Secretary looks for information that shows methods of evaluation that are appropriate for...
34 CFR 655.31 - What general selection criteria does the Secretary use?
Code of Federal Regulations, 2014 CFR
2014-07-01
... the actual teaching and supervision of students; and (iii) The time that each person referred to in... project. (2) The Secretary looks for information that shows methods of evaluation that are appropriate for...
34 CFR 655.31 - What general selection criteria does the Secretary use?
Code of Federal Regulations, 2010 CFR
2010-07-01
... the actual teaching and supervision of students; and (iii) The time that each person referred to in... project. (2) The Secretary looks for information that shows methods of evaluation that are appropriate for...
34 CFR 655.31 - What general selection criteria does the Secretary use?
Code of Federal Regulations, 2013 CFR
2013-07-01
... the actual teaching and supervision of students; and (iii) The time that each person referred to in... project. (2) The Secretary looks for information that shows methods of evaluation that are appropriate for...
Quantification of mixed chimerism by real time PCR on whole blood-impregnated FTA cards.
Pezzoli, N; Silvy, M; Woronko, A; Le Treut, T; Lévy-Mozziconacci, A; Reviron, D; Gabert, J; Picard, C
2007-09-01
This study investigated the quantification of chimerism in sex-mismatched transplantations by quantitative real-time PCR (RQ-PCR) using FTA paper for blood sampling. First, we demonstrate that the quantification of DNA from EDTA-blood deposited on an FTA card is accurate and reproducible. Secondly, we show that the fraction of recipient cells detected by RQ-PCR was concordant between the FTA method and the salting-out method, the reference DNA extraction method. Furthermore, the sensitivity of detection of recipient cells is similar for the two methods. Our results show that this innovative method can be used for mixed chimerism (MC) assessment by RQ-PCR.
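RQ-PCR quantification of this kind typically relies on a standard curve relating the cycle threshold (Ct) to the log of the input quantity. A generic sketch (the dilution series and Ct values are illustrative, assuming ~100% amplification efficiency; this is not the paper's specific protocol):

```python
import numpy as np

def fit_standard_curve(ct, log10_input):
    """Linear fit Ct = slope * log10(input) + intercept."""
    slope, intercept = np.polyfit(log10_input, ct, 1)
    return slope, intercept

def quantify(ct, slope, intercept):
    """Invert the standard curve to recover the input quantity."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical 10-fold dilution series; slope -3.32 corresponds to ~100% efficiency
log10_in = np.array([0.0, 1.0, 2.0, 3.0])
cts = np.array([33.2, 29.88, 26.56, 23.24])
slope, intercept = fit_standard_curve(cts, log10_in)
print(round(quantify(26.56, slope, intercept), 1))
```

The recipient fraction would then follow as the recipient-specific quantity divided by the total DNA quantity, each read off its own standard curve.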
Kwon, Dohyeon; Jeon, Chan-Gi; Shin, Junho; Heo, Myoung-Sun; Park, Sang Eon; Song, Youjian; Kim, Jungwon
2017-01-01
Timing jitter is one of the most important properties of femtosecond mode-locked lasers and optical frequency combs. Accurate measurement of timing jitter power spectral density (PSD) is a critical prerequisite for optimizing overall noise performance and further advancing comb applications both in the time and frequency domains. Commonly used jitter measurement methods require a reference mode-locked laser with timing jitter similar to or lower than that of the laser-under-test, which is a demanding requirement for many laser laboratories, and/or have limited measurement resolution. Here we show a high-resolution and reference-source-free measurement method of timing jitter spectra of optical frequency combs using an optical fibre delay line and optical carrier interference. The demonstrated method works well for both mode-locked oscillators and supercontinua, with a 2 × 10⁻⁹ fs²/Hz (equivalent to −174 dBc/Hz at 10-GHz carrier frequency) measurement noise floor. The demonstrated method can serve as a simple and powerful characterization tool for timing jitter PSDs of various comb sources including mode-locked oscillators, supercontinua and recently emerging Kerr-frequency combs; the jitter measurement results enabled by our method will provide new insights for understanding and optimizing timing noise in such comb sources. PMID:28102352
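Once a timing-error record is available, its PSD (in fs²/Hz) is commonly estimated with Welch's method, and integrating the PSD over frequency recovers the rms jitter. A generic sketch with a synthetic white-noise record, assuming `scipy` is available; the sampling rate and jitter level are hypothetical:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 1.0e4                  # sampling rate of the timing-error record, Hz
sigma = 0.1                 # rms timing error in femtoseconds (hypothetical)
x = rng.normal(0.0, sigma, 1 << 16)   # synthetic white timing-error record, fs

f, psd = welch(x, fs=fs, nperseg=4096)             # one-sided PSD, fs^2/Hz
rms = float(np.sqrt(np.sum(psd) * (f[1] - f[0])))  # integrate PSD -> rms jitter
print(round(rms, 2))        # recovers sigma for a white record
```

Real comb jitter spectra are far from white, so the same integration is usually reported over stated frequency bands (e.g., 10 kHz to the Nyquist frequency).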
Bisi, Maria Cristina; Stagni, Rita; Caroselli, Alessio; Cappello, Angelo
2015-08-01
Inertial sensors are becoming widely used for the assessment of human movement in both clinical and research applications, thanks to their usability outside the laboratory. This work proposes a method for calibrating anatomical landmark positions in the wearable sensor reference frame with an easy-to-use, portable and low-cost device composed of an off-the-shelf camera, a stick and a pattern attached to the inertial sensor. The proposed technique is referred to as the video Calibrated Anatomical System Technique (vCAST). The absolute orientation of a synthetic femur was tracked both using vCAST together with an inertial sensor and using stereo-photogrammetry as the reference. Anatomical landmark calibration showed a mean absolute error of 0.6±0.5 mm; these errors are smaller than those affecting the in-vivo identification of anatomical landmarks. The roll, pitch and yaw anatomical frame orientations showed root mean square errors close to the accuracy limit of the wearable sensor used (1°), highlighting the reliability of the proposed technique. In conclusion, this paper proposes and preliminarily verifies the performance of a method (vCAST) for calibrating anatomical landmark positions in the wearable sensor reference frame: the technique is fast, highly portable, easy to implement and usable outside the laboratory. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
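The core geometric step in this kind of calibration is a rigid transform: a landmark digitized in the camera frame is re-expressed in the sensor frame through the pose of the pattern rigidly attached to the sensor. A minimal sketch (the pose and point below are illustrative, not the vCAST pipeline):

```python
import numpy as np

def to_sensor_frame(p_cam, R_cs, t_cs):
    """Express a point given in camera coordinates in the sensor frame.
    R_cs, t_cs define the sensor pose in the camera frame, i.e.
    p_cam = R_cs @ p_sensor + t_cs."""
    return R_cs.T @ (np.asarray(p_cam, dtype=float) - t_cs)

# Illustrative pose: sensor rotated 90 deg about z, offset 100 mm along camera x
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
R_cs = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_cs = np.array([100.0, 0.0, 0.0])

p_cam = np.array([100.0, 50.0, 0.0])   # landmark digitized by the camera, mm
print(to_sensor_frame(p_cam, R_cs, t_cs).round(6))
```

In vCAST the pose (R_cs, t_cs) would come from detecting the printed pattern in the camera image; once the landmark is in the sensor frame, it stays valid as the sensor moves.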
NASA Astrophysics Data System (ADS)
Cui, Ying; Dy, Jennifer G.; Sharp, Greg C.; Alexander, Brian; Jiang, Steve B.
2007-02-01
For gated lung cancer radiotherapy, it is difficult to generate accurate gating signals due to the large uncertainties when using external surrogates and the risk of pneumothorax when using implanted fiducial markers. We have previously investigated and demonstrated the feasibility of generating gating signals using the correlation scores between the reference template image and the fluoroscopic images acquired during the treatment. In this paper, we present an in-depth study, aiming at the improvement of robustness of the algorithm and its validation using multiple sets of patient data. Three different template generating and matching methods have been developed and evaluated: (1) single template method, (2) multiple template method, and (3) template clustering method. Using the fluoroscopic data acquired during patient setup before each fraction of treatment, reference templates are built that represent the tumour position and shape in the gating window, which is assumed to be at the end-of-exhale phase. For the single template method, all the setup images within the gating window are averaged to generate a composite template. For the multiple template method, each setup image in the gating window is considered as a reference template and used to generate an ensemble of correlation scores. All the scores are then combined to generate the gating signal. For the template clustering method, clustering (grouping of similar objects together) is performed to reduce the large number of reference templates into a few representative ones. Each of these methods has been evaluated against the reference gating signal as manually determined by a radiation oncologist. Five patient datasets were used for evaluation. In each case, gated treatments were simulated at both 35% and 50% duty cycles. False positive, negative and total error rates were computed. 
Experiments show that the single template method is sensitive to noise; the multiple template and clustering methods are more robust to noise due to the smoothing effect of aggregation of correlation scores; and the clustering method results in the best performance in terms of computational efficiency and accuracy.
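The correlation-score gating described above can be sketched in a few lines. This is an illustrative toy only: the function names, the aggregation by simple averaging of scores, and the 0.9 threshold are assumptions, not the paper's exact procedure.

```python
import numpy as np

def ncc(template, frame):
    """Normalized cross-correlation between two same-size image patches."""
    t = template - template.mean()
    f = frame - frame.mean()
    return float((t * f).sum() / (np.linalg.norm(t) * np.linalg.norm(f) + 1e-12))

def gating_signal(templates, frames, threshold=0.9):
    """Beam-on decision per fluoroscopic frame: aggregate (here, average)
    the correlation scores against all reference templates, then threshold."""
    scores = [float(np.mean([ncc(t, f) for t in templates])) for f in frames]
    return [s >= threshold for s in scores], scores
```

Aggregating scores from multiple templates before thresholding is what gives the multiple-template and clustering variants their robustness to noise, as noted above.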
Rueda, Oscar M; Diaz-Uriarte, Ramon
2007-10-16
Yu et al. (BMC Bioinformatics 2007, 8:145) have recently compared the performance of several methods for the detection of genomic amplification and deletion breakpoints using data from high-density single nucleotide polymorphism arrays. One of the methods compared is our non-homogeneous Hidden Markov Model approach. Our approach uses Markov Chain Monte Carlo for inference, but Yu et al. ran the sampler for a severely insufficient number of iterations for a Markov Chain Monte Carlo-based method. Moreover, they did not use the appropriate reference level for the non-altered state. We reran the analysis in Yu et al. using appropriate settings for both the Markov Chain Monte Carlo iterations and the reference level. Additionally, to show how easy it is to obtain answers to additional specific questions, we added a new analysis targeted specifically at the detection of breakpoints. The reanalysis shows that the performance of our method is comparable to that of the other methods analyzed. In addition, we can provide probabilities of a given spot being a breakpoint, something unique among the methods examined. Markov Chain Monte Carlo methods require a sufficient number of iterations before they can be assumed to yield samples from the distribution of interest. Running our method with too small a number of iterations cannot be representative of its performance. Moreover, our analysis shows how our original approach can be easily adapted to answer specific additional questions (e.g., identify edges).
Chen, Hsin-Chen; Jou, I-Ming; Wang, Chien-Kuo; Su, Fong-Chin; Sun, Yung-Nien
2010-06-01
The quantitative measurement of hand bones, including volume, surface, orientation, and position, is essential in investigating hand kinematics. Within the measurement stage, bone segmentation is the most important step because of its direct influence on measurement accuracy. Since hand bones are small and tubular in shape, magnetic resonance (MR) imaging is prone to artifacts such as nonuniform intensity and fuzzy boundaries, so greater effort is required to improve segmentation accuracy. The authors therefore propose a novel registration-based method built on an articulated hand model to segment hand bones from multipostural MR images. The proposed method consists of a model construction stage and a registration-based segmentation stage. Given a reference postural image, the first stage constructs a drivable reference model characterized by hand bone shapes, intensity patterns, and an articulated joint mechanism. Applying the reference model in the second stage, the authors first design a model-based registration that uses intensity distribution similarity, MR bone intensity properties, and constraints of model geometry to align the reference model to the target bone regions of the given postural image. The authors then refine the resulting surface to improve the superimposition between the registered reference model and the target bone boundaries. For each subject, given a reference postural image, the proposed method can automatically segment all hand bones from all other postural images. Compared to the ground truth from two experts, the resulting surface image had an average margin of error within 1 mm only. In addition, the proposed method showed good agreement on the overlap of bone segmentations by dice similarity coefficient and also demonstrated better segmentation results than conventional methods. 
The proposed registration-based segmentation method can successfully overcome drawbacks caused by inherent artifacts in MR images and obtain more accurate segmentation results automatically. Moreover, realistic hand motion animations can be generated based on the bone segmentation results. The proposed method is found helpful for understanding hand bone geometries in dynamic postures that can be used in simulating 3D hand motion through multipostural MR images.
Applying Suffix Rules to Organization Name Recognition
NASA Astrophysics Data System (ADS)
Inui, Takashi; Murakami, Koji; Hashimoto, Taiichi; Utsumi, Kazuo; Ishikawa, Masamichi
This paper presents a method for boosting the performance of organization name recognition, a part of named entity recognition (NER). Although gazetteers (lists of NEs) are known to be effective features for supervised machine learning approaches to the NER task, previous methods applied gazetteers to NER in a very simple way: the gazetteers were used only to search for exact matches between the input text and the NEs included in them. The proposed method generates regular expression rules from gazetteers and, with these rules, realizes high-coverage searches based on looser matches between the input text and the NEs. To generate these rules, we focus on two well-known characteristics of NE expressions: 1) most NE expressions can be divided into two parts, a class-reference part and an instance-reference part, and 2) in most NE expressions the class-reference part is located at the suffix position. A pattern mining algorithm runs on the set of NEs in the gazetteers and finds frequent word sequences from which NEs are constructed. We then employ as suffix rules only those word sequences that have the class-reference part at the suffix position. Experimental results showed that our proposed method improved the performance of organization name recognition, achieving an F-value of 84.58 on the evaluation data.
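As a rough sketch of the idea, the snippet below mines frequent final tokens from a toy gazetteer and compiles them into looser-match regular expressions. The gazetteer entries, the frequency threshold, and the capitalized-word pattern are all assumptions for illustration; the paper mines frequent word sequences, not just single suffix tokens.

```python
import re
from collections import Counter

def suffix_rules(gazetteer, min_freq=2):
    """Treat frequent last tokens as class-reference suffixes and compile
    loose-match rules: any run of capitalized words ending in the suffix."""
    suffixes = Counter(name.split()[-1] for name in gazetteer)
    return [re.compile(r"(?:[A-Z][\w&.-]*\s)+" + re.escape(s))
            for s, c in suffixes.items() if c >= min_freq]

gazetteer = ["Acme Corporation", "Globex Corporation", "Initech Inc",
             "Umbrella Corporation", "Hooli Inc"]
rules = suffix_rules(gazetteer)

# "Wayne Corporation" is absent from the gazetteer, but the suffix rule
# still recognizes it via its class-reference suffix:
text = "Funds raised by Wayne Corporation rose sharply"
hits = [r.search(text).group(0) for r in rules if r.search(text)]
```

This is the looser matching the abstract describes: exact-match lookup would miss any organization not listed verbatim, while the suffix rule generalizes across instance-reference parts.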
Makretsov, Nikita; Gilks, C Blake; Alaghehbandan, Reza; Garratt, John; Quenneville, Louise; Mercer, Joel; Palavdzic, Dragana; Torlakovic, Emina E
2011-07-01
External quality assurance and proficiency testing programs for breast cancer predictive biomarkers are based largely on traditional ad hoc design; at present there is no universal consensus on definition of a standard reference value for samples used in external quality assurance programs. To explore reference values for estrogen receptor and progesterone receptor immunohistochemistry in order to develop an evidence-based analytic platform for external quality assurance. There were 31 participating laboratories, 4 of which were previously designated as "expert" laboratories. Each participant tested a tissue microarray slide with 44 breast carcinomas for estrogen receptor and progesterone receptor and submitted it to the Canadian Immunohistochemistry Quality Control Program for analysis. Nuclear staining in 1% or more of the tumor cells was a positive score. Five methods for determining reference values were compared. All reference values showed 100% agreement for estrogen receptor and progesterone receptor scores, when indeterminate results were excluded. Individual laboratory performance (agreement rates, test sensitivity, test specificity, positive predictive value, negative predictive value, and κ value) was very similar for all reference values. Identification of suboptimal performance by all methods was identical for 30 of 31 laboratories. Estrogen receptor assessment of 1 laboratory was discordant: agreement was less than 90% for 3 of 5 reference values and greater than 90% with the use of 2 other reference values. Various reference values provide equivalent laboratory rating. In addition to descriptive feedback, our approach allows calculation of technical test sensitivity and specificity, positive and negative predictive values, agreement rates, and κ values to guide corrective actions.
Validation of no-reference image quality index for the assessment of digital mammographic images
NASA Astrophysics Data System (ADS)
de Oliveira, Helder C. R.; Barufaldi, Bruno; Borges, Lucas R.; Gabarda, Salvador; Bakic, Predrag R.; Maidment, Andrew D. A.; Schiabel, Homero; Vieira, Marcelo A. C.
2016-03-01
To ensure optimal clinical performance of digital mammography, it is necessary to obtain images with high spatial resolution and low noise, while keeping radiation exposure as low as possible. These requirements directly affect the interpretation of radiologists. The quality of a digital image should be assessed using objective measurements. In general, these methods measure the similarity between a degraded image and an ideal image without degradation (ground truth), used as a reference; they are called Full-Reference Image Quality Assessment (FR-IQA) methods. However, for digital mammography, an image without degradation is not available in clinical practice; thus, an objective assessment of mammogram quality must be performed without a reference. The purpose of this study is to present a Normalized Anisotropic Quality Index (NAQI), based on the Rényi entropy in the pseudo-Wigner domain, to assess mammography images in terms of spatial resolution and noise without any reference. The method was validated using synthetic images acquired with an anthropomorphic breast software phantom, as well as clinical exposures of anthropomorphic breast physical phantoms and patients' mammograms. The results reported by this no-reference index follow the same behavior as well-established full-reference metrics, e.g., the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Reductions of 50% in the radiation dose in phantom images translated into decreases of 4 dB in PSNR, 25% in SSIM and 33% in NAQI, showing that the proposed metric is sensitive to the noise resulting from dose reduction. The clinical results showed that images reduced to 53% and 30% of the standard radiation dose yielded NAQI reductions of 15% and 25%, respectively. 
Thus, this index may be used in clinical practice as an image quality indicator to improve quality assurance programs in mammography; the proposed method reduces inter-observer subjectivity in the reporting of image quality assessment.
40 CFR 53.16 - Supersession of reference methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40 (Protection of Environment), Part 53 (continued), Ambient Air Monitoring Reference and Equivalent Methods, General Provisions, § 53.16 Supersession of reference methods. (a) This section prescribes procedures and criteria applicable to requests that...
Labots, M Maaike; Laarakker, M C Marijke; Schetters, D Dustin; Arndt, S S Saskia; van Lith, H A Hein
2018-01-01
Guilloux et al. introduced integrated behavioral z-scoring, a method for behavioral phenotyping of mice. Using this method, multiple ethological variables can be combined to give an overall description of a certain behavioral dimension or motivational system. However, a problem may occur when the control group used for the calculation has a standard deviation of zero, or when no control group is present to act as a reference group. To solve these problems, an improved procedure is suggested: taking the pooled data as the reference. For this purpose a behavioral study with male mice from three inbred strains was carried out. The integrated behavioral z-scoring methodology was applied using five different reference-group options, and the outcomes were compared with regard to statistical significance and practical importance. Significant effects and effect sizes were influenced by the choice of the reference group. In some cases it was impossible to use a certain population and condition, because one or more of the behavioral variables in question had a standard deviation of zero. Based on the improved method, male mice from the three inbred strains differed with regard to activity and anxiety. Taking the method described by Guilloux et al. as a basis, the present procedure improves generalizability to all types of experimental designs in animal behavioral research. To solve the aforementioned problems and to avoid the appearance of data manipulation, the pooled data (combining the data from all experimental groups in a study) is recommended as the reference option. Copyright © 2017 Elsevier B.V. All rights reserved.
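A minimal sketch of the pooled-reference variant follows. The variable names are invented, and in a real analysis each variable's sign is first aligned so that a higher z consistently means, e.g., higher anxiety; that step is omitted here.

```python
import numpy as np

def integrated_z(data, variables):
    """Integrated behavioral z-score per animal, using the pooled data of
    all experimental groups as the reference population.
    `data` maps variable name -> 1-D sequence of one value per animal."""
    n = len(next(iter(data.values())))
    z = np.zeros(n)
    for v in variables:
        x = np.asarray(data[v], dtype=float)
        z += (x - x.mean()) / x.std(ddof=1)  # pooled mean and SD as reference
    return z / len(variables)                # average across variables
```

Because the reference is the pooled sample, a zero standard deviation can only occur when a variable is constant across all animals, which sidesteps the zero-SD problem of a constant control group.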
Parallel But Not Equivalent: Challenges and Solutions for Repeated Assessment of Cognition over Time
Gross, Alden L.; Inouye, Sharon K.; Rebok, George W.; Brandt, Jason; Crane, Paul K.; Parisi, Jeanine M.; Tommet, Doug; Bandeen-Roche, Karen; Carlson, Michelle C.; Jones, Richard N.
2013-01-01
Objective Analyses of individual differences in change may be unintentionally biased when versions of a neuropsychological test used at different follow-ups are not of equivalent difficulty. This study’s objective was to compare mean, linear, and equipercentile equating methods and demonstrate their utility in longitudinal research. Study Design and Setting The Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE, N=1,401) study is a longitudinal randomized trial of cognitive training. The Alzheimer’s Disease Neuroimaging Initiative (ADNI, n=819) is an observational cohort study. Nonequivalent alternate versions of the Auditory Verbal Learning Test (AVLT) were administered in both studies. Results Using visual displays, raw and mean-equated AVLT scores in both studies showed obvious nonlinear trajectories in reference groups that should show minimal change, poor equivalence over time (ps≤0.001), and raw scores demonstrated poor fits in models of within-person change (RMSEAs>0.12). Linear and equipercentile equating produced more similar means in reference groups (ps≥0.09) and performed better in growth models (RMSEAs<0.05). Conclusion Equipercentile equating is the preferred equating method because it accommodates tests more difficult than a reference test at different percentiles of performance and performs well in models of within-person trajectory. The method has broad applications in both clinical and research settings to enhance the ability to use nonequivalent test forms. PMID:22540849
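The core of equipercentile equating can be sketched as follows. This is a simplification: operational equating adds presmoothing and careful handling of the distribution tails, which this toy version omits, and the rank convention used here is an assumption.

```python
import numpy as np

def equipercentile_equate(scores_x, scores_y, x_new):
    """Map a score on form X to the form-Y scale by matching percentile
    ranks (linear interpolation between observed quantiles)."""
    xs = np.sort(np.asarray(scores_x, dtype=float))
    ys = np.sort(np.asarray(scores_y, dtype=float))
    # percentile rank of the new X score within the X distribution
    rank = np.searchsorted(xs, x_new, side="right") / len(xs)
    # the Y score occupying the same percentile rank
    return float(np.quantile(ys, np.clip(rank, 0.0, 1.0)))
```

Because the mapping is built percentile by percentile, it accommodates a form that is harder than the reference form by different amounts at different points of the score scale, which is why the abstract prefers it over mean or linear equating.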
Riond, B; Steffen, F; Schmied, O; Hofmann-Lehmann, R; Lutz, H
2014-03-01
In veterinary clinical laboratories, qualitative tests for total protein measurement in canine cerebrospinal fluid (CSF) have been replaced by quantitative methods, which can be divided into dye-binding assays and turbidimetric methods. There is a lack of validation data and reference intervals (RIs) for these assays. The aim of the present study was to assess agreement between the turbidimetric benzethonium chloride method and 2 dye-binding methods (Pyrogallol Red-Molybdate method [PRM], Coomassie Brilliant Blue [CBB] technique) for measurement of total protein concentration in canine CSF. Furthermore, RIs were determined for all 3 methods using an indirect a posteriori method. For assay comparison, a total of 118 canine CSF specimens were analyzed. For RI calculation, clinical records of 401 canine patients with normal CSF analysis were studied and classified according to their final diagnosis into pathologic and nonpathologic values. The turbidimetric assay showed excellent agreement with the PRM assay (mean bias 0.003 g/L [-0.26-0.27]). The CBB method generally showed higher total protein values than the turbidimetric assay and the PRM assay (mean bias -0.14 g/L for both). From 90 of the 401 canine patients, nonparametric reference intervals (2.5%, 97.5% quantiles) were calculated (turbidimetric assay and PRM method: 0.08-0.35 g/L (90% CI: 0.07-0.08/0.33-0.39); CBB method: 0.17-0.55 g/L (90% CI: 0.16-0.18/0.52-0.61)). Total protein concentration in canine CSF specimens remained stable for up to 6 months of storage at -80°C. Due to variations among methods, RIs for total protein concentration in canine CSF have to be calculated for each method. The a posteriori method of RI calculation described here should encourage other veterinary laboratories to establish laboratory-specific RIs. ©2014 American Society for Veterinary Clinical Pathology and European Society for Veterinary Clinical Pathology.
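At its core, the nonparametric a posteriori calculation reduces to taking sample quantiles of the non-pathologic values, with a bootstrap confidence interval for each limit. The sketch below is illustrative only; the simulated values are not the study's data.

```python
import numpy as np

def reference_interval(values, low=2.5, high=97.5):
    """Nonparametric reference interval: the 2.5% and 97.5% quantiles of
    measurements from the reference (non-pathologic) population."""
    v = np.asarray(values, dtype=float)
    return float(np.percentile(v, low)), float(np.percentile(v, high))

def limit_ci(values, q, n_boot=2000, level=90, seed=0):
    """Bootstrap confidence interval for one reference limit (quantile q)."""
    rng = np.random.default_rng(seed)
    v = np.asarray(values, dtype=float)
    boots = [np.percentile(rng.choice(v, size=v.size, replace=True), q)
             for _ in range(n_boot)]
    half = (100 - level) / 2
    return float(np.percentile(boots, half)), float(np.percentile(boots, 100 - half))
```

With only 90 reference patients, as here, the bootstrap intervals for the limits are correspondingly wide, which the abstract's reported 90% CIs reflect.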
Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location
NASA Astrophysics Data System (ADS)
Zhao, A. H.
2014-12-01
Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method for calculating them numerically is based on a minimum-traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely, the initial point) to low-residual points (referred to as reference points of the focal locus). The method has no restrictions on the complexity of the velocity model, but it still lacks the ability to deal correctly with multi-segment loci, and it is rather laborious to set calculation parameters that yield loci with satisfying completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from the nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all locus segments are then calculated with the minimum-traveltime tree ray-tracing algorithm by repeatedly assigning, as an initial point, the minimum-residual reference point among those not yet traced. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.
Who cites non-English-language pharmaceutical articles?
Edouard, Bruno
2009-01-01
PURPOSE The objective was to determine a link between the number of non-English-language references in the bibliographies of publications in international pharmaceutical journals and the geographic origin of these publications. METHODS A systematic prospective analysis of 7 international pharmaceutical journals in 2005–2006 was conducted. All research articles whose corresponding author was a pharmacist were included. For each article, we recorded: the geographic origin of the corresponding author (classified as: North America, Latin America, Oceania, Europe, Asia, others); the title of the journal; and the number of non-English-language references in the bibliography (classified as: Spanish, German, French, Portuguese, Dutch, Russian, Japanese, Chinese, others). RESULTS 1,568 articles were included, corresponding to 45,949 bibliographic references, of which 542 were non-English references. North America is the geographic zone with the lowest rate of non-English-language references in the bibliographies of published articles; significant differences appear between North America and Europe, Latin America and Asia. A sub-analysis by country shows that the United States, the United Kingdom, Australia and China present a notably low rate of non-English-language references. The two journals with the lowest rate of non-English-language references in the bibliographies of published articles are edited in the USA. CONCLUSIONS Despite some limitations, this study shows that pharmacists from regions where English is the only or predominant language are reluctant to include non-English-language references in the bibliographies of their publications. The fundamental reasons for this restriction are not clear. PMID:19240258
NASA Astrophysics Data System (ADS)
Goh, C. P.; Ismail, H.; Yen, K. S.; Ratnam, M. M.
2017-01-01
The incremental digital image correlation (DIC) method has been applied in the past to determine strain in large-deformation materials like rubber. This method is, however, prone to cumulative errors, since the total displacement is determined by combining the displacements from numerous stages of the deformation. In this work, a method of mapping large strains in rubber using DIC in a single step, without the need for a series of deformation images, is proposed. The reference subsets were deformed using deformation factors obtained from the experimentally fitted mean stress-axial stretch ratio curve and the theoretical Poisson function. The deformed reference subsets were then correlated with the deformed image after loading. The recently developed scanner-based digital image correlation (SB-DIC) method was applied to dumbbell rubber specimens to obtain the in-plane displacement fields up to 350% axial strain. Comparison of the mean axial strains determined from the single-step SB-DIC method with those from the incremental SB-DIC method showed an average difference of 4.7%. Two rectangular rubber specimens containing circular and square holes were deformed and analysed using the proposed method. The resultant strain maps from the single-step SB-DIC method were compared with the results of finite element modeling (FEM). The comparison shows that the proposed single-step SB-DIC method can be used to map the strain distribution accurately in large-deformation materials like rubber, in a much shorter time than the incremental DIC method.
Development of certified reference materials for electrolytes in human serum (GBW09124-09126).
Feng, Liuxing; Wang, Jun; Cui, Yanjie; Shi, Naijie; Li, Haifeng; Li, Hongmei
2017-05-01
Three reference materials, at relatively low, middle, and high concentrations, were developed for analysis of the mass fractions of electrolytes (K, Ca, Na, Mg, Cl, and Li) in human serum. The reference materials were prepared by adding high-purity chloride salts to normal human serum. The concentration range of the three levels is within ±20% of normal human serum. It was shown that 14 units with duplicate analysis are enough to demonstrate the homogeneity of these candidate reference materials. The statistical results also showed no significant trends in either the short-term stability test (1 week at 40 °C) or the long-term stability test (14 months). The certification methods for the six elements include isotope dilution inductively coupled plasma mass spectrometry (ID-ICP-MS), inductively coupled plasma optical emission spectroscopy (ICP-OES), atomic absorption spectroscopy (AAS), ion chromatography (IC), and ion-selective electrode (ISE). The certification methods were validated by international comparisons among a number of national metrology institutes (NMIs). The combined relative standard uncertainties of the property values were estimated by considering the uncertainties of the analytical methods, homogeneity, and stability. The expanded uncertainties of all the elements range from 2.2% to 3.9%. The certified reference materials (CRMs) are primarily intended for use in the calibration and validation of procedures in clinical analysis for the determination of electrolytes in human serum or plasma. Graphical Abstract Certified reference materials for K, Ca, Mg, Na, Cl and Li in human serum (GBW09124-09126).
NASA Astrophysics Data System (ADS)
Ohara, Masaki; Noguchi, Toshihiko
This paper describes a new method for rotor-position sensorless control of a surface permanent magnet synchronous motor based on a model reference adaptive system (MRAS). The method places the MRAS in the current control loop to estimate the rotor speed and position using only current sensors. Like almost all conventional methods, it incorporates a mathematical model of the motor, which consists of parameters such as winding resistances, inductances, and an induced-voltage constant. Hence, it is important to investigate how deviations of these parameters affect the estimated rotor position. First, this paper proposes a structure for the sensorless control applied in the current control loop. Next, it proves the stability of the proposed method when motor parameters deviate from their nominal values, and derives the relationship between the estimated position and the parameter deviations in steady state. Finally, experimental results are presented to show the performance and effectiveness of the proposed method.
Image quality evaluation of full reference algorithm
NASA Astrophysics Data System (ADS)
He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan
2018-03-01
Image quality evaluation is a classic research topic; the goal is to design algorithms whose evaluation values are consistent with subjective human judgment. This paper mainly introduces several typical full-reference objective evaluation methods: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Metric (SSIM) and Feature Similarity (FSIM). The different evaluation methods are tested in MATLAB, and their advantages and disadvantages are obtained by analysis and comparison. MSE and PSNR are simple, but they do not incorporate characteristics of the human visual system (HVS) into the evaluation, so their results are not ideal. SSIM correlates well with subjective judgment and is simple to compute, because it brings the human visual response into the evaluation; however, it rests on a modeling hypothesis that limits its results. The FSIM method can be applied to both grayscale and color images, with better results. Experimental results show that the image quality evaluation algorithm based on FSIM is the most accurate.
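For reference, the two simplest metrics mentioned above can be written out directly (SSIM and FSIM require windowed local statistics and phase congruency maps, and are omitted here):

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference image and a test image."""
    return float(np.mean((ref.astype(float) - img.astype(float)) ** 2))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference.
    `peak` is the maximum possible pixel value (255 for 8-bit images)."""
    m = mse(ref, img)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

Both operate pixel by pixel, which is exactly why they ignore HVS characteristics: two distortions with the same per-pixel error can look very different to a human observer.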
Uehara, Takashi; Sartori, Matteo; Tanaka, Toshihisa; Fiori, Simone
2017-06-01
The estimation of covariance matrices is of prime importance to analyze the distribution of multivariate signals. In motor imagery-based brain-computer interfaces (MI-BCI), covariance matrices play a central role in the extraction of features from recorded electroencephalograms (EEGs); therefore, correctly estimating covariance is crucial for EEG classification. This letter discusses algorithms to average sample covariance matrices (SCMs) for the selection of the reference matrix in tangent space mapping (TSM)-based MI-BCI. Tangent space mapping is a powerful method of feature extraction and strongly depends on the selection of a reference covariance matrix. In general, the observed signals may include outliers; therefore, taking the geometric mean of SCMs as the reference matrix may not be the best choice. In order to deal with the effects of outliers, robust estimators have to be used. In particular, we discuss and test the use of geometric medians and trimmed averages (defined on the basis of several metrics) as robust estimators. The main idea behind trimmed averages is to eliminate the data that exhibit the largest distance from the average covariance calculated on the basis of all available data. The results of the experiments show that while the geometric medians show little difference from conventional methods in terms of classification accuracy on the electroencephalographic recordings, the trimmed averages show significant improvement for all subjects.
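The trimmed-average idea can be sketched as below. Note that the Frobenius (Euclidean) distance used here is a simplifying assumption; the letter also defines trimmed averages with respect to Riemannian metrics on the manifold of covariance matrices.

```python
import numpy as np

def trimmed_mean_scm(scms, trim=0.2):
    """Trimmed Euclidean average of sample covariance matrices: drop the
    fraction `trim` of matrices farthest (Frobenius norm) from the plain
    average, then re-average the remaining matrices."""
    scms = np.asarray(scms, dtype=float)
    center = scms.mean(axis=0)
    dist = np.linalg.norm(scms - center, axis=(1, 2))
    n_keep = max(1, int(round(len(scms) * (1 - trim))))
    keep = np.argsort(dist)[:n_keep]
    return scms[keep].mean(axis=0)
```

A single outlier trial can pull the plain mean far from the bulk of the SCMs; trimming removes it before the reference matrix for tangent space mapping is formed.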
Bayesian SEM for Specification Search Problems in Testing Factorial Invariance.
Shi, Dexin; Song, Hairong; Liao, Xiaolan; Terry, Robert; Snyder, Lori A
2017-01-01
Specification search problems refer to two important but under-addressed issues in testing for factorial invariance: how to select proper reference indicators and how to locate specific non-invariant parameters. In this study, we propose a two-step procedure to solve these issues. Step 1 is to identify a proper reference indicator using the Bayesian structural equation modeling approach. An item is selected if it is associated with the highest likelihood to be invariant across groups. Step 2 is to locate specific non-invariant parameters, given that a proper reference indicator has already been selected in Step 1. A series of simulation analyses show that the proposed method performs well under a variety of data conditions, and optimal performance is observed under conditions of large magnitude of non-invariance, low proportion of non-invariance, and large sample sizes. We also provide an empirical example to demonstrate the specific procedures to implement the proposed method in applied research. The importance and influences are discussed regarding the choices of informative priors with zero mean and small variances. Extensions and limitations are also pointed out.
NASA Astrophysics Data System (ADS)
Zhang, Jia-shi; Yang, Xi-xiang
2017-11-01
The stratospheric airship is characterized by large inertia, long time delays and large wind-field disturbances, which make trajectory control very difficult. We build a lateral three-degree-of-freedom dynamic model that accounts for wind interference, linearize the dynamics equations using small-perturbation theory, propose a trajectory control method combining sliding mode control with prediction, design the trajectory controller, and take the HAA airship as the reference vehicle for simulation analysis. Results show that the improved sliding mode control method with feedforward not only solves the airship trajectory control problem in a wind field, but also effectively improves the control accuracy of the traditional sliding mode control method. It provides a useful reference for dynamic modeling and trajectory control of stratospheric airships.
Fu, Hongbo; Wang, Huadong; Jia, Junwei; Ni, Zhibo; Dong, Fengzhong
2018-01-01
Because of the self-absorption of major elements, the scarcity of observable spectral lines of trace elements, and the relative efficiency correction of the experimental system, accurate quantitative analysis with calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is in fact not easy. To overcome these difficulties, the standard reference line (SRL) method combined with one-point calibration (OPC) is used to analyze six elements in three stainless-steel and five heat-resistant steel samples. The Stark broadening and the Saha-Boltzmann plot of Fe are used to calculate the electron density and the plasma temperature, respectively. In the present work, we tested the original SRL method, the SRL with OPC method, and the intercept with OPC method. The final results show that the latter two methods can effectively improve the overall accuracy of the quantitative analysis and the detection limits of trace elements.
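As a schematic illustration of the temperature step, a Boltzmann plot fits ln(I·λ/(g·A)) against the upper-level energy of each line; the slope is −1/(k_B·T). The synthetic line data below are invented for the demonstration, and real CF-LIBS work must also correct for spectral efficiency and self-absorption, which this toy fit ignores.

```python
import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_plot_temperature(intensity, wavelength_nm, g_upper, a_ki, e_upper_ev):
    """Excitation temperature from a Boltzmann plot: for lines of one
    species, ln(I*lambda/(g*A)) is linear in the upper-level energy E_k
    with slope -1/(k_B*T); fit the slope and invert it."""
    y = np.log(intensity * wavelength_nm / (g_upper * a_ki))
    slope, _ = np.polyfit(e_upper_ev, y, 1)
    return -1.0 / (K_B_EV * slope)
```

The Saha-Boltzmann variant used in the abstract extends this plot with lines from two ionization stages, which widens the energy span and stabilizes the fitted slope.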
Determination of Vitamin E in Cereal Products and Biscuits by GC-FID.
Pasias, Ioannis N; Kiriakou, Ioannis K; Papakonstantinou, Lila; Proestos, Charalampos
2018-01-01
A rapid, precise and accurate method for the determination of vitamin E (α-tocopherol) in cereal products and biscuits has been developed. The uncertainty was calculated for the first time, and the methods were performed for different cereal products and biscuits, characterized as "superfoods". The limits of detection and quantification were calculated. The accuracy and precision were estimated using the certified reference material FAPAS T10112QC, and the determined values were in good accordance with the certified values. The health claims according to the daily reference values for vitamin E were calculated, and the results proved that the majority of the samples examined showed a percentage daily value higher than 15%.
García-Álvarez, Lara; Busto, Jesús H.; Avenoza, Alberto; Sáenz, Yolanda; Peregrina, Jesús Manuel
2015-01-01
Antimicrobial drug susceptibility tests involving multiple time-consuming steps are still used as reference methods. Today, there is a need for the development of new automated instruments that can provide faster results and reduce operating time, reagent costs, and labor requirements. Nuclear magnetic resonance (NMR) spectroscopy meets those requirements. The metabolism and antimicrobial susceptibility of Escherichia coli ATCC 25922 in the presence of gentamicin have been analyzed using NMR and compared with a reference method. Direct incubation of the bacteria (with and without gentamicin) into the NMR tube has also been performed, and differences in the NMR spectra were obtained. The MIC, determined by the reference method found in this study, would correspond with the termination of the bacterial metabolism observed with NMR. Experiments carried out directly into the NMR tube enabled the development of antimicrobial drug susceptibility tests to assess the effectiveness of the antibiotic. NMR is an objective and reproducible method for showing the effects of a drug on the subject bacterium and can emerge as an excellent tool for studying bacterial activity in the presence of different antibiotic concentrations. PMID:25972417
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrel, J.E.; Kucera, C.L.; Johannsen, C.J.
1980-12-01
During this contract period, research continued on finding suitable methods and criteria for determining the success of revegetation on Midwestern prime agricultural lands strip-mined for coal. Particularly important to the experimental design was the concept of reference areas, nearby fields from which the performance standards for reclaimed areas were derived. Direct and remote sensing techniques for measuring plant ground cover, production, and species composition were tested. Work was carried out at 15 mine sites permitted under interim permanent surface mine regulations and at 4 adjoining reference sites, and studies at 9 pre-law sites were continued. All sites were in either Missouri or Illinois. Data gathered in the 1980 growing season showed that 13 unmanaged or young mineland pastures generally had lower average ground cover and production than 2 reference pastures. In contrast, yields at approximately 40% of 11 recently reclaimed mine sites planted with winter wheat, soybeans, or milo were statistically similar to 3 reference values. Digital computer image analysis of color infrared aerial photographs, when compared to ground-level measurements, was a fast, accurate, and inexpensive way to determine plant ground cover and areas, but the remote sensing approach was inferior to standard surface methods for detailing plant species abundance and composition.
NASA Astrophysics Data System (ADS)
Chen, Jian-bo; Sun, Su-qin; Tang, Xu-dong; Zhang, Jing-zhao; Zhou, Qun
2016-08-01
Herbal powder preparation is a kind of widely-used herbal product in the form of powder mixture of herbal ingredients. Identification of herbal ingredients is the first and foremost step in assuring the quality, safety and efficacy of herbal powder preparations. In this research, Fourier transform infrared (FT-IR) microspectroscopic identification method is proposed for the direct and simultaneous recognition of multiple organic and inorganic ingredients in herbal powder preparations. First, the reference spectrum of characteristic particles of each herbal ingredient is assigned according to FT-IR results and other available information. Next, a statistical correlation threshold is determined as the lower limit of correlation coefficients between the reference spectrum and a larger number of calibration characteristic particles. After validation, the reference spectrum and correlation threshold can be used to identify herbal ingredient in mixture preparations. A herbal ingredient is supposed to be present if correlation coefficients between the reference spectrum and some sample particles are above the threshold. Using this method, all kinds of herbal materials in powder preparation Kouqiang Kuiyang San are identified successfully. This research shows the potential of FT-IR microspectroscopic identification method for the accurate and quick identification of ingredients in herbal powder preparations.
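The presence test described above is essentially a correlation check of particle spectra against a reference spectrum with a calibrated threshold. A minimal, self-contained sketch with toy "spectra" (all numbers and the threshold are illustrative):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length spectra."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ingredient_present(reference, particles, threshold):
    """Flag the ingredient if any sample-particle spectrum correlates with the
    reference spectrum above the calibrated threshold."""
    return any(pearson(reference, p) >= threshold for p in particles)
```

In the paper the threshold is set statistically from a large number of calibration particles; here it is simply a parameter.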
Automated color classification of urine dipstick image in urine examination
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Royananda; Muchtar, M. A.; Taqiuddin, R.; Adnan, S.; Anugrahwaty, R.; Budiarto, R.
2018-03-01
Urine examination using a urine dipstick has long been used to determine a person's health status. The economy and convenience of the urine dipstick are among the reasons it is still used to check people's health status. In real-life practice, urine dipstick reading is generally done manually, by visually comparing the dipstick with the reference color. This results in perception differences in the color reading of the examination results. In this research, the authors used a scanner to obtain the urine dipstick color image. The use of a scanner can be one solution for reading the result of a urine dipstick because the light produced is consistent. A method is required to overcome the problems of matching the urine dipstick colors against the test reference colors, which has been done manually. The method proposed by the authors combines Euclidean distance and Otsu thresholding with RGB color feature extraction to match the colors on the urine dipstick with the standard reference colors of urine examination. The result shows that the proposed approach was able to classify the colors on a urine dipstick with an accuracy of 95.45%. The accuracy of color classification on the urine dipstick against the standard reference colors is influenced by the resolution of the scanner used: the higher the scanner resolution, the higher the accuracy.
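The color-matching step can be sketched as nearest-reference-color classification by Euclidean distance in RGB space; the reference RGB values and labels below are illustrative placeholders, not the actual dipstick chart used in the study:

```python
import math

# Hypothetical reference chart: pad labels mapped to RGB values
# (values are illustrative, not the actual urine-dipstick chart).
REFERENCE_COLORS = {
    "negative": (255, 244, 201),
    "trace":    (244, 222, 137),
    "positive": (190, 140, 60),
}

def classify_pad(rgb):
    """Return the reference label whose RGB point is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(REFERENCE_COLORS, key=lambda label: dist(rgb, REFERENCE_COLORS[label]))
```

In the paper this nearest-color step operates on pad regions first isolated by Otsu thresholding; the sketch classifies a single averaged pad color.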
The Effectiveness of Circular Equating as a Criterion for Evaluating Equating.
ERIC Educational Resources Information Center
Wang, Tianyou; Hanson, Bradley A.; Harris, Deborah J.
Equating a test form to itself through a chain of equatings, commonly referred to as circular equating, has been widely used as a criterion to evaluate the adequacy of equating. This paper uses both analytical methods and simulation methods to show that this criterion is in general invalid in serving this purpose. For the random groups design done…
Flow Injection Technique for Biochemical Analysis with Chemiluminescence Detection in Acidic Media
Chen, Jing; Fang, Yanjun
2007-01-01
A review with 90 references is presented to show the development of acidic chemiluminescence methods for biochemical analysis using the flow injection technique over the last 10 years. A brief discussion of both chemiluminescence and the flow injection technique is given. The proposed methods for biochemical analysis are described and compared according to the chemiluminescence system used.
26 CFR 1.668(b)-3A - Computation of the beneficiary's income and tax for a prior taxable year.
Code of Federal Regulations, 2010 CFR
2010-04-01
... either the exact method or the short-cut method shall be determined by reference to the information... under section 6501 has expired, and such return shows a mathematical error on its face which resulted in... after the correction of such mathematical errors, and the beneficiary shall be credited for the correct...
Code of Federal Regulations, 2012 CFR
2012-07-01
... rigorous statistical experimental design and interpretation (Reference 16.4). 14.0Pollution Prevention 14... fluids. 1.4This method has been designed to show positive contamination for 5% of representative crude....1Sample collection bottles/jars—New, pre-cleaned bottles/jars, lot-certified to be free of artifacts...
Code of Federal Regulations, 2011 CFR
2011-07-01
... rigorous statistical experimental design and interpretation (Reference 16.4). 14.0Pollution Prevention 14... oil contamination in drilling fluids. 1.4This method has been designed to show positive contamination....1Sample collection bottles/jars—New, pre-cleaned bottles/jars, lot-certified to be free of artifacts...
NASA Astrophysics Data System (ADS)
Guerrout, EL-Hachemi; Ait-Aoudia, Samy; Michelucci, Dominique; Mahiou, Ramdane
2018-05-01
Many routine medical examinations produce images of patients suffering from various pathologies. Given the huge number of medical images, manual analysis and interpretation has become a tedious task, so automatic image segmentation has become essential for diagnosis assistance. Segmentation consists in dividing the image into homogeneous and significant regions. We focus on hidden Markov random fields (HMRF) to model the segmentation problem. This modelling leads to a classical function minimisation problem. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is one of the most powerful methods for solving unconstrained optimisation problems. In this paper, we investigate the combination of HMRF and the BFGS algorithm to perform the segmentation operation. The proposed method shows very good segmentation results compared with well-known approaches. The tests are conducted on brain magnetic resonance image databases (BrainWeb and IBSR) largely used to objectively confront the results obtained. The well-known Dice coefficient (DC) was used as the similarity metric. The experimental results show that, in many cases, our proposed method approaches the perfect segmentation, with a Dice coefficient above 0.9. Moreover, it generally outperforms other methods in the tests conducted.
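As a toy illustration of the minimisation step only (the energy below is an assumption for demonstration, not the authors' HMRF objective), BFGS as implemented in SciPy can be applied to a small clustering-style energy over two class means:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for an HMRF energy: squared distance of each pixel intensity
# to the nearest of two class means mu[0], mu[1] (illustrative only).
PIXELS = np.array([0.9, 1.1, 3.0, 3.2])

def energy(mu):
    return np.minimum((PIXELS - mu[0]) ** 2, (PIXELS - mu[1]) ** 2).sum()

# BFGS minimisation; SciPy estimates gradients numerically by default.
result = minimize(energy, x0=np.array([0.0, 5.0]), method="BFGS")
```

For this toy energy the minimiser converges to the per-class means (about 1.0 and 3.1); the real HMRF energy additionally couples neighbouring pixels through the Markov prior.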
Cubical Mass-Spring Model design based on a tensile deformation test and nonlinear material model.
San-Vicente, Gaizka; Aguinaga, Iker; Tomás Celigüeta, Juan
2012-02-01
Mass-Spring Models (MSMs) are used to simulate the mechanical behavior of deformable bodies such as soft tissues in medical applications. Although they are fast to compute, they lack accuracy and their design still remains a great challenge. The major difficulties in building realistic MSMs lie in the spring stiffness estimation and the topology identification. In this work, the mechanical behavior of MSMs under tensile loads is analyzed before studying the spring stiffness estimation. In particular, the qualitative and quantitative analysis of the behavior of cubical MSMs shows that they have a nonlinear response similar to hyperelastic material models. According to this behavior, a new method for spring stiffness estimation valid for linear and nonlinear material models is proposed. This method adjusts the stress-strain and compressibility curves to a given reference behavior. The accuracy of the MSMs designed with this method is tested taking as reference soft-tissue simulations based on the nonlinear Finite Element Method (FEM). The obtained results show that MSMs can be designed to realistically model the behavior of hyperelastic materials such as soft tissues and can become an interesting alternative to other approaches such as nonlinear FEM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berndt, B; Wuerl, M; Dedes, G
Purpose: To improve agreement of predicted and measured positron emitter yields in patients, after proton irradiation for PET-based treatment verification, using a novel dual energy CT (DECT) tissue segmentation approach, overcoming known deficiencies from single energy CT (SECT). Methods: DECT head scans of 5 trauma patients were segmented and compared to existing decomposition methods with a first focus on the brain. For validation purposes, three brain equivalent solutions [water, white matter (WM) and grey matter (GM) – equivalent with respect to their reference carbon and oxygen contents and CT numbers at 90kVp and 150kVp] were prepared from water, ethanol, sucrose and salt. The activities of all brain solutions, measured during a PET scan after uniform proton irradiation, were compared to Monte Carlo simulations. Simulation inputs were various solution compositions obtained from different segmentation approaches from DECT, SECT scans, and known reference composition. Virtual GM solution salt concentration corrections were applied based on DECT measurements of solutions with varying salt concentration. Results: The novel tissue segmentation showed qualitative improvements in %C for patient brain scans (ground truth unavailable). The activity simulations based on reference solution compositions agree with the measurement within 3–5% (4–8Bq/ml). These reference simulations showed an absolute activity difference between WM (20%C) and GM (10%C) to H2O (0%C) of 43 Bq/ml and 22 Bq/ml, respectively. Activity differences between reference simulations and segmented ones varied from −6 to 1 Bq/ml for DECT and −79 to 8 Bq/ml for SECT. Conclusion: Compared to the conventionally used SECT segmentation, the DECT based segmentation indicates a qualitative and quantitative improvement. In controlled solutions, a MC input based on DECT segmentation leads to better agreement with the reference. Future work will address the anticipated improvement of quantification accuracy in patients, comparing different tissue decomposition methods with an MR brain segmentation. Acknowledgement: DFG-MAP and HIT-Heidelberg Deutsche Forschungsgemeinschaft (MAP); Bundesministerium für Bildung und Forschung (01IB13001)
Visualization of the IMIA Yearbook of Medical Informatics Publications over the Last 25 Years.
Yergens, D W; Tam-Tham, H; Minty, E P
2016-06-30
The last 25 years have been a period of innovation in the area of medical informatics. The International Medical Informatics Association (IMIA) has published, every year for the last quarter century, the Yearbook of Medical Informatics, collating selected papers from various journals in an attempt to provide a summary of the academic medical informatics literature. The objective of this paper is to visualize the evolution of the medical informatics field over the last 25 years according to the frequency of word occurrences in the papers published in the IMIA Yearbook of Medical Informatics. A literature review was conducted examining the IMIA Yearbook of Medical Informatics between 1992 and 2015. These references were collated into a reference manager application to examine the literature using keyword searches, word clouds, and topic clustering. The data were considered in their entirety, as well as segregated into 3 time periods to examine the evolution of main trends over time. Several methods were used, including word clouds, cluster maps, and custom-developed web-based information dashboards. The literature search resulted in a total of 1210 references published in the Yearbook, of which 213 references were excluded, resulting in 997 references for visualization. Overall, we found that publications were more technical and methods-oriented between 1992 and 1999; more clinically and patient-oriented between 2000 and 2009; and noted the emergence of "big data", decision support, and global health in the past decade between 2010 and 2015. Dashboards were additionally created to show individual reference data as well as aggregated information. Medical informatics is a vast and expanding area with new methods and technologies being researched, implemented, and evaluated.
Determining visualization approaches that enhance our understanding of literature is an active area of research, and like medical informatics, is constantly evolving as new software and algorithms are developed. This paper examined several approaches for visualizing the medical informatics literature to show historical trends, associations, and aggregated summarized information to illustrate the state and changes in the IMIA Yearbook publications over the last quarter century.
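Word-frequency ranking of the kind behind the word clouds described above can be sketched in a few lines; the stopword list and toy corpus are illustrative, not the study's actual preprocessing:

```python
from collections import Counter
import re

def top_terms(documents, k=3, stopwords=frozenset({"the", "of", "and", "in", "a"})):
    """Rank words by frequency across a corpus, as a word cloud would."""
    counts = Counter()
    for doc in documents:
        counts.update(w for w in re.findall(r"[a-z]+", doc.lower())
                      if w not in stopwords)
    return [w for w, _ in counts.most_common(k)]
```

Real bibliometric pipelines typically add stemming and phrase detection on top of this raw counting.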
Comparison of ozone determinations by ultraviolet photometry and gas-phase titration
NASA Technical Reports Server (NTRS)
Demore, W. B.; Patapoff, M.
1976-01-01
A comparison of ozone determinations based on ultraviolet absorption photometry and gas-phase titration (GPT) shows good agreement between the two methods. Together with other results, these findings indicate that the three candidate reference methods for ozone (UV photometry, IR photometry, and GPT) are in substantial agreement. However, the GPT method is not recommended for routine use by air pollution agencies for calibration of ozone monitors because of its susceptibility to experimental error.
Lu, Feng; Matsushita, Yasuyuki; Sato, Imari; Okabe, Takahiro; Sato, Yoichi
2015-10-01
We propose an uncalibrated photometric stereo method that works with general and unknown isotropic reflectances. Our method uses a pixel intensity profile, which is a sequence of radiance intensities recorded at a pixel under unknown varying directional illumination. We show that for general isotropic materials and uniformly distributed light directions, the geodesic distance between intensity profiles is linearly related to the angular difference of their corresponding surface normals, and that the intensity distribution of the intensity profile reveals reflectance properties. Based on these observations, we develop two methods for surface normal estimation; one for a general setting that uses only the recorded intensity profiles, the other for the case where a BRDF database is available while the exact BRDF of the target scene is still unknown. Quantitative and qualitative evaluations are conducted using both synthetic and real-world scenes, which show state-of-the-art accuracy of less than 10 degrees without using reference data and 5 degrees with reference data for all 100 materials in the MERL database.
Angelides, Kimon; Matsunami, Risë K.; Engler, David A.
2015-01-01
Background: We evaluated the accuracy, precision, and linearity of the In Touch® blood glucose monitoring system (BGMS), a new color touch screen and cellular-enabled blood glucose meter, using a new rapid, highly precise and accurate 13C6 isotope-dilution liquid chromatography-mass spectrometry method (IDLC-MS). Methods: Blood glucose measurements from the In Touch® BGMS were referenced to a validated UPLC-MRM standard reference measurement procedure previously shown to be highly accurate and precise. Readings from the In Touch® BGMS were taken over the blood glucose range of 24-640 mg/dL using 12 concentrations of blood glucose. Ten In Touch® BGMS units and 3 lots of test strips were used, with 10 replicates at each concentration. A lay user study was also performed to assess ease of use. Results: At blood glucose concentrations <75 mg/dL, 100% of the measurements are within ±8 mg/dL of the true reference standard; at blood glucose levels >75 mg/dL, 100% of the measurements are within ±15% of the true reference standard. 100% of the results are within category A of the consensus grid. Within-run precision showed CVs < 3.72% between 24 and 50 mg/dL and < 2.22% between 500 and 600 mg/dL. The results show that the In Touch® meter exceeds the minimum criteria of both the ISO 15197:2003 and ISO 15197:2013 standards. The results from a user panel show that 100% of the respondents reported that the color touch screen, with its graphic user interface (GUI), is well labeled and easy to navigate. Conclusions: To our knowledge, this is the first touch screen glucose meter and the first study in which the accuracy of a new BGMS has been measured against a true primary reference standard, namely IDLC-MS. PMID:26002836
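System-accuracy bounds of the kind cited above can be checked mechanically. A simplified sketch of the commonly quoted ISO 15197:2013 criteria (the standard's 95%-of-readings rule is reduced here to an all-readings check, and the 100 mg/dL threshold is quoted from common practice, not from this abstract):

```python
def meets_iso15197_2013(reference, meter):
    """Simplified ISO 15197:2013-style accuracy check on paired readings (mg/dL):
    within +/-15 mg/dL below 100 mg/dL, within +/-15% at or above 100 mg/dL.
    (Illustrative: the real standard allows 5% of readings outside the bounds.)"""
    for ref, m in zip(reference, meter):
        limit = 15.0 if ref < 100 else 0.15 * ref
        if abs(m - ref) > limit:
            return False
    return True
```

The abstract's own acceptance limits (±8 mg/dL below 75 mg/dL, ±15% above) are tighter than this sketch.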
WHO Melting-Point Reference Substances
Bervenmark, H.; Diding, N. Å.; Öhrner, B.
1963-01-01
Batches of 13 highly purified chemicals, intended for use as reference substances in the calibration of apparatus for melting-point determinations, have been subjected to a collaborative assay by 15 laboratories in 13 countries. All the laboratories performed melting-point determinations by the capillary methods described in the proposed text for the second edition of the Pharmacopoea Internationalis and some, in addition, carried out determinations by the microscope hot stage (Kofler) method, using both the “going-through” and the “equilibrium” technique. Statistical analysis of the data obtained by the capillary method showed that the within-laboratory variation was small and that the between-laboratory variation, though constituting the greatest part of the whole variance, was not such as to warrant the exclusion of any laboratory from the evaluation of the results. The average values of the melting-points obtained by the laboratories can therefore be used as constants for the substances in question, which have accordingly been established as WHO Melting-Point Reference Substances and included in the WHO collection of authentic chemical substances. As to the microscope hot stage method, analysis of the results indicated that the values obtained by the “going-through” technique did not differ significantly from those obtained by the capillary method, but the values obtained by the “equilibrium” technique were mostly significantly lower. PMID:20604137
Yoon, Kaeng Won; Yoon, Suk-Ja; Kang, Byung-Cheol; Kim, Young-Hee; Kook, Min Suk; Lee, Jae-Seo; Palomo, Juan Martin
2014-09-01
This study aimed to investigate the deviation of landmarks from horizontal or midsagittal reference planes according to the methods of establishing reference planes. Computed tomography (CT) scans of 18 patients who received orthodontic and orthognathic surgical treatment were reviewed. Each CT scan was reconstructed by three methods for establishing three orthogonal reference planes (namely, the horizontal, midsagittal, and coronal reference planes). The horizontal (bilateral porions and bilateral orbitales) and midsagittal (crista galli, nasion, prechiasmatic point, opisthion, and anterior nasal spine) landmarks were identified on each CT scan. Vertical deviation of the horizontal landmarks and horizontal deviation of the midsagittal landmarks were measured. The porion and orbitale, which were not involved in establishing the horizontal reference plane, were found to deviate vertically from the horizontal reference plane in the three methods. The midsagittal landmarks, which were not used for the midsagittal reference plane, deviated horizontally from the midsagittal reference plane in the three methods. In a three-dimensional facial analysis, the vertical and horizontal deviations of the landmarks from the horizontal and midsagittal reference planes could vary depending on the methods of establishing reference planes.
ERIC Educational Resources Information Center
Carson, S. R.
1998-01-01
Presents a method for using spreadsheets to model special relativistic phenomena based on the connection between electric and magnetic fields in special relativity. Uses the time dilation equation to carry out transformations between reference frames that show the connection between the fields quantitatively. (DDR)
Efficiency of personal dosimetry methods in vascular interventional radiology.
Bacchim Neto, Fernando Antonio; Alves, Allan Felipe Fattori; Mascarenhas, Yvone Maria; Giacomini, Guilherme; Maués, Nadine Helena Pelegrino Bastos; Nicolucci, Patrícia; de Freitas, Carlos Clayton Macedo; Alvarez, Matheus; Pina, Diana Rodrigues de
2017-05-01
The aim of the present study was to determine the efficiency of six methods for calculating the effective dose (E) received by health professionals during vascular interventional procedures. We evaluated the efficiency of six methods that are currently used to estimate professionals' E, based on national and international recommendations for interventional radiology. Equivalent doses on the head, neck, chest, abdomen, feet, and hands of seven professionals were monitored during 50 vascular interventional radiology procedures. Professionals' E was calculated for each procedure according to six methods that are commonly employed internationally. To determine the best method, the most efficient E calculation method was used to establish the reference value (reference E) for comparison. The highest equivalent doses were found for the hands (0.34±0.93mSv). The two methods described by Brazilian regulations overestimated E by approximately 100% and 200%. The most efficient method was the one recommended by the United States National Council on Radiation Protection and Measurements (NCRP). The mean and median differences of this method relative to the reference E were close to 0%, and its standard deviation was the lowest among the six methods. The present study showed that the most precise method was the one recommended by the NCRP, which uses two dosimeters (one over and one under protective aprons). Methods that employ at least two dosimeters are more efficient and provide better information regarding estimates of E and doses for shielded and unshielded regions. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
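A two-dosimeter algorithm of the kind the NCRP recommends is a simple weighted sum of the over-apron and under-apron readings; the 0.5/0.025 weights below are an assumption quoted from one commonly cited NCRP-style formula, not values taken from this paper:

```python
def effective_dose_two_dosimeters(h_under, h_over, w_under=0.5, w_over=0.025):
    """Weighted two-dosimeter estimate of effective dose (mSv).

    h_under: reading under the protective apron; h_over: collar reading over
    the apron. The default weights follow one commonly cited NCRP-style
    algorithm and are given here only as an illustrative assumption.
    """
    return w_under * h_under + w_over * h_over
```

Usage: with an under-apron reading of 0.1 mSv and a collar reading of 2.0 mSv, the estimate is 0.5*0.1 + 0.025*2.0 = 0.1 mSv, far below the unshielded collar reading alone.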
Performance Evaluation and Community Application of Low-Cost Sensors for Ozone and Nitrogen Dioxide
Duvall, Rachelle M.; Long, Russell W.; Beaver, Melinda R.; Kronmiller, Keith G.; Wheeler, Michael L.; Szykman, James J.
2016-01-01
This study reports on the performance of electrochemical-based low-cost sensors and their use in a community application. CairClip sensors were collocated with federal reference and equivalent methods and operated in a network of sites by citizen scientists (community members) in Houston, Texas, and Denver, Colorado, under the umbrella of the NASA-led DISCOVER-AQ Earth Venture Mission. Measurements were focused on ozone (O3) and nitrogen dioxide (NO2). The performance evaluation showed that the CairClip O3/NO2 sensor provided a consistent measurement response relative to reference monitors (r2 = 0.79 in Houston; r2 = 0.72 in Denver), whereas the CairClip NO2 sensor measurements showed no agreement with reference measurements. The CairClip O3/NO2 sensor data from the citizen science sites compared favorably to measurements at nearby reference monitoring sites. This study provides important information on data quality from low-cost sensor technologies and is one of the few studies that report sensor data collected directly by citizen scientists. PMID:27754370
Li, Qing; Fan, Cheng-Ming; Zhang, Xiao-Mei; Fu, Yong-Fu
2012-10-01
Most traditional reference genes chosen for real-time quantitative PCR normalization are assumed to be ubiquitously and constitutively expressed in vegetative tissues. However, seeds show distinct transcriptomes compared with vegetative tissues. Therefore, there is a need to re-validate reference genes in samples of seed development and germination, especially for soybean seeds. In this study, we aimed at identifying reference genes suitable for the quantification of gene expression levels in soybean seeds. In order to identify the best reference genes for soybean seeds, 18 putative reference genes were tested with various methods in different seed samples. We combined the outputs of both geNorm and NormFinder to assess the expression stability of these genes. The reference genes identified as optimal for seed development were TUA5 and UKN2, whereas for seed germination they were the novel reference genes Glyma05g37470 and Glyma08g28550. Furthermore, for total seed samples it was necessary to combine the four genes Glyma05g37470, Glyma08g28550, Glyma18g04130 and UKN2 [corrected] for normalization. Key message: We identified several reference genes that are stably expressed during soybean seed development and germination.
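Stability screening of the geNorm/NormFinder kind can be crudely approximated by ranking candidates on their coefficient of variation across samples (lower = more stable). A toy sketch, with made-up expression values rather than the paper's data:

```python
import statistics

def stability_cv(expression):
    """Coefficient of variation of a candidate reference gene across samples
    (lower = more stable); a crude stand-in for geNorm/NormFinder scores."""
    return statistics.stdev(expression) / statistics.mean(expression)

candidates = {
    "TUA5": [100, 102, 98, 101],    # illustrative expression values only
    "GeneX": [100, 150, 60, 120],   # hypothetical unstable candidate
}
best = min(candidates, key=lambda g: stability_cv(candidates[g]))
```

geNorm's actual M value is built from pairwise log-ratio variation between candidates, which is less sensitive to overall expression level than a plain CV.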
Accuracy of acoustic respiration rate monitoring in pediatric patients.
Patino, Mario; Redford, Daniel T; Quigley, Thomas W; Mahmoud, Mohamed; Kurth, C Dean; Szmuk, Peter
2013-12-01
Rainbow acoustic monitoring (RRa) utilizes acoustic technology to continuously and noninvasively determine respiratory rate from an adhesive sensor located on the neck. We sought to validate the accuracy of RRa by comparing it to capnography, impedance pneumography, and a reference method of counting breaths in postsurgical children. Continuous respiration rate data were recorded from RRa and capnography. In a subset of patients, intermittent respiration rate from thoracic impedance pneumography was also recorded. The reference method, in which respiratory rate was counted by retrospective analysis of the RRa and capnographic waveforms while listening to recorded breath sounds, was used to assess the respiration rates from both capnography and RRa. Bias, precision, and limits of agreement of RRa compared with capnography, and of RRa and capnography compared with the reference method, were calculated. Tolerance of the acoustic sensor and nasal cannula was also assessed. Thirty-nine of 40 patients (97.5%) demonstrated good tolerance of the acoustic sensor, whereas 25 of 40 patients (62.5%) demonstrated good tolerance of the nasal cannula. Intermittent thoracic impedance produced erroneous respiratory rates (differing by >50 b·min(-1) from the other methods) on 47% of occasions. The bias ± SD and limits of agreement were -0.30 ± 3.5 b·min(-1) and -7.3 to 6.6 b·min(-1) for RRa compared with capnography; -0.1 ± 2.5 b·min(-1) and -5.0 to 5.0 b·min(-1) for RRa compared with the reference method; and 0.2 ± 3.4 b·min(-1) and -6.8 to 6.7 b·min(-1) for capnography compared with the reference method. When compared to nasal capnography, RRa showed good agreement and similar accuracy and precision but was better tolerated in postsurgical pediatric patients. © 2013 John Wiley & Sons Ltd.
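Bias, precision (SD of differences), and 95% limits of agreement as reported above follow the standard Bland-Altman calculation; a minimal sketch on toy paired readings (the data below are illustrative, not the study's):

```python
import statistics

def bland_altman(method_a, method_b):
    """Bias, SD of paired differences, and 95% limits of agreement
    between two measurement methods (Bland-Altman style)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Usage: `bland_altman(rra_rates, reference_rates)` returns the three quantities quoted in the abstract for each method pair.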
NASA Astrophysics Data System (ADS)
Han, Guang; Liu, Jin; Liu, Rong; Xu, Kexin
2016-10-01
The position-based reference measurement method is regarded as one of the most promising methods for non-invasive, spectroscopy-based measurement of blood glucose. Selecting an appropriate source-detector separation as the reference position is important for removing the influence of background changes while limiting the loss of useful signal. Our group proposed a special source-detector separation, named the floating-reference position, at which the signal contains only background change; that is, the signal at this separation is uncorrelated with glucose concentration. The existence of the floating-reference position has been verified in a three-layer skin model by Monte Carlo simulation and in in vitro experiments. It is difficult, however, to verify its existence on the human body, because the interference is more complex in vivo. This paper therefore studies the determination of the best reference position on the human body by collecting signals at several source-detector separations on the palm and measuring true blood glucose levels during oral glucose tolerance test (OGTT) experiments on 3 volunteers. A partial least squares (PLS) calibration model was established between the signals at each source-detector separation and the corresponding blood glucose levels. The results show that the correlation coefficient (R) is lowest at separations between 1.32 mm and 1.88 mm, and these separations can be used as the reference for background correction. The signal at this special position is important for improving the accuracy of near-infrared non-invasive blood glucose measurement.
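In principle, the floating-reference position can be located by finding the source-detector separation whose signal is least correlated with glucose. The sketch below uses entirely synthetic signals; the separation grid, the linear sensitivity model (vanishing at 1.7 mm), and the noise level are all assumptions for illustration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
separations = np.array([0.8, 1.1, 1.4, 1.7, 2.0, 2.3])  # mm, hypothetical grid
glucose = rng.uniform(4.0, 10.0, size=40)                # mmol/L, synthetic OGTT-like values

# Synthetic signals whose glucose sensitivity vanishes at 1.7 mm, mimicking a
# floating-reference position; the noise term stands in for background drift.
sensitivity = separations - 1.7
signals = np.outer(glucose, sensitivity) + rng.normal(0.0, 0.3, (40, separations.size))

# Absolute correlation of each separation's signal with the true glucose levels
r = np.array([abs(np.corrcoef(signals[:, j], glucose)[0, 1])
              for j in range(separations.size)])
ref_idx = int(np.argmin(r))   # weakest glucose correlation -> candidate reference
print(separations[ref_idx])
```

In the study a PLS model per separation plays the role of this per-separation correlation screen.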
Shahshahani, Hayedeh J; Meraat, Nahid; Mansouri, Fatemeh
2013-07-01
Haemoglobin screening methods need to be highly sensitive to detect both low and high haemoglobin levels and avoid unnecessary rejection of potential blood donors. The aim of this study was to evaluate the accuracy of measurements by HemoCue in blood donors. Three hundred and fourteen randomly selected, prospective blood donors were studied. Single fingerstick blood samples were obtained to determine the donors' haemoglobin levels by HemoCue, while venous blood samples were drawn for measurement of the haemoglobin level by both HemoCue and an automated haematology analyser as the reference method. The sensitivity, specificity, predictive values and correlation between the reference method and HemoCue were assessed. Cases with a haemoglobin concentration in the range of 12.5-17.9 g/dL were accepted for blood donation. Analysis of paired results showed that haemoglobin levels measured by HemoCue were higher than those measured by the reference method. There was a significant correlation between the reference method and HemoCue for haemoglobin levels less than 12.5 g/dL. The correlation was less strong for increasing haemoglobin levels. Linear correlation was poor for haemoglobin levels over 18 g/dL. Thirteen percent of donors, who had haemoglobin levels close to the upper limit, were unnecessarily rejected. HemoCue is suitable for screening for anaemia in blood donors. Most donors at Yazd are males and a significant percentage of them have haemoglobin values close to the upper limit for acceptance as a blood donor; since these subjects could be unnecessarily rejected on the basis of HemoCue results and testing with this method is expensive, it is recommended that qualitative methods are used for primary screening and accurate quantitative methods used in clinically suspicious cases or when qualitative methods fail.
Stokes, Ashley M.; Semmineh, Natenael; Quarles, C. Chad
2015-01-01
Purpose A combined biophysical- and pharmacokinetic-based method is proposed to separate, quantify, and correct for both T1 and T2* leakage effects using dual-echo DSC acquisitions to provide more accurate hemodynamic measures, as validated by a reference intravascular contrast agent (CA). Methods Dual-echo DSC-MRI data were acquired in two rodent glioma models. The T1 leakage effects were removed and also quantified in order to subsequently correct for the remaining T2* leakage effects. Pharmacokinetic, biophysical, and combined biophysical and pharmacokinetic models were used to obtain corrected cerebral blood volume (CBV) and cerebral blood flow (CBF), and these were compared with CBV and CBF from an intravascular CA. Results T1-corrected CBV was significantly overestimated compared to MION CBV, while T1+T2*-correction yielded CBV values closer to the reference values. The pharmacokinetic and simplified biophysical methods showed similar results and underestimated CBV in tumors exhibiting strong T2* leakage effects. The combined method was effective for correcting T1 and T2* leakage effects across tumor types. Conclusions Correcting for both T1 and T2* leakage effects yielded more accurate measures of CBV. The combined correction method yields more reliable CBV measures than either correction method alone, but for certain brain tumor types (e.g., gliomas) the simplified biophysical method may provide a robust and computationally efficient alternative. PMID:26362714
Similarity regularized sparse group lasso for cup to disc ratio computation.
Cheng, Jun; Zhang, Zhuo; Tao, Dacheng; Wong, Damon Wing Kee; Liu, Jiang; Baskaran, Mani; Aung, Tin; Wong, Tien Yin
2017-08-01
Automatic cup to disc ratio (CDR) computation from color fundus images has been shown to be promising for glaucoma detection. Over the past decade, many algorithms have been proposed. In this paper, we first review the recent work in the area and then present a novel similarity-regularized sparse group lasso method for automated CDR estimation. The proposed method reconstructs the testing disc image from a set of reference disc images by integrating the similarity between the testing and reference disc images with the sparse group lasso constraints. The reconstruction coefficients are then used to estimate the CDR of the testing image. The proposed method has been validated using 650 images with manually annotated CDRs. Experimental results show an average CDR error of 0.0616 and a correlation coefficient of 0.7, outperforming other methods. The areas under the curve in the diagnostic test reach 0.843 and 0.837 when manually and automatically segmented discs are used, respectively, again better than other methods.
Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine
2011-03-01
International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
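The nonparametric limits that such tools compute for n ≥ 40 are simply the 2.5th and 97.5th percentiles of the reference sample; a minimal sketch of that one step (synthetic data, not the macro's actual algorithm, which also covers robust and Box-Cox parametric estimates and their confidence intervals):

```python
import numpy as np

def nonparametric_reference_interval(values, low=2.5, high=97.5):
    """Nonparametric reference limits: the 2.5th and 97.5th percentiles."""
    v = np.asarray(values, float)
    return np.percentile(v, low), np.percentile(v, high)

rng = np.random.default_rng(1)
sample = rng.normal(100.0, 10.0, size=120)   # synthetic reference sample, n = 120
lo, hi = nonparametric_reference_interval(sample)
print(round(lo, 1), round(hi, 1))
```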
``Frames of Reference'' revisited
NASA Astrophysics Data System (ADS)
Steyn-Ross, Alistair; Ivey, Donald G.
1992-12-01
The PSSC teaching film, ``Frames of Reference,'' was made in 1960, and was one of the first audio-visual attempts at showing how your physical ``point of view,'' or frame of reference, necessarily alters both your perceptions and your observations of motion. The gentle humor and original demonstrations made a lasting impact on many audiences, and with its recent re-release as part of the AAPT Cinema Classics videodisc it is timely that we should review both the message and the methods of the film. An annotated script and photographs from the film are presented, followed by extension material on rotating frames which teachers may find appropriate for use in their classrooms: constructions, demonstrations, an example, and theory.
Hoerner, Rebecca; Feldpausch, Jill; Gray, R Lucas; Curry, Stephanie; Islam, Zahidul; Goldy, Tim; Klein, Frank; Tadese, Theodros; Rice, Jennifer; Mozola, Mark
2011-01-01
Reveal Salmonella 2.0 is an improved version of the original Reveal Salmonella lateral flow immunoassay and is applicable to the detection of Salmonella enterica serogroups A-E in a variety of food and environmental samples. A Performance Tested Method validation study was conducted to compare performance of the Reveal 2.0 method with that of the U.S. Department of Agriculture-Food Safety and Inspection Service or U.S. Food and Drug Administration/Bacteriological Analytical Manual reference culture methods for detection of Salmonella spp. in chicken carcass rinse, raw ground turkey, raw ground beef, hot dogs, raw shrimp, a ready-to-eat meal product, dry pet food, ice cream, spinach, cantaloupe, peanut butter, stainless steel surface, and sprout irrigation water. In a total of 17 trials performed internally and four trials performed in an independent laboratory, there were no statistically significant differences in performance of the Reveal 2.0 and reference culture procedures as determined by Chi-square analysis, with the exception of one trial with stainless steel surface and one trial with sprout irrigation water where there were significantly more positive results by the Reveal 2.0 method. Considering all data generated in testing food samples using enrichment procedures specifically designed for the Reveal method, overall sensitivity of the Reveal method relative to the reference culture methods was 99%. In testing environmental samples, sensitivity of the Reveal method relative to the reference culture method was 164%. For select foods, use of the Reveal test in conjunction with reference method enrichment resulted in overall sensitivity of 92%. There were no unconfirmed positive results on uninoculated control samples in any trials for specificity of 100%. In inclusivity testing, 102 different Salmonella serovars belonging to serogroups A-E were tested and 99 were consistently positive in the Reveal test. 
In exclusivity testing of 33 strains of non-salmonellae representing 14 genera, 32 were negative when tested with Reveal following nonselective enrichment, and the remaining strain was found to be substantially inhibited by the enrichment media used with the Reveal method. Results of ruggedness testing showed that the Reveal test produces accurate results even with substantial deviation in sample volume or device development time.
Demura, Shinichi; Sato, Susumu; Nakada, Masakatsu; Minami, Masaki; Kitabayashi, Tamotsu
2003-07-01
This study compared the accuracy of body density (Db) estimation methods using hydrostatic weighing without complete head submersion (HW(withoutHS)), from Donnelly et al. (1988) and Donnelly and Sintek (1984), against Goldman and Buskirk's approach (1961) as the reference. Donnelly et al.'s method estimates Db from a regression equation using HW(withoutHS), whereas Donnelly and Sintek's method estimates it from HW(withoutHS) and head anthropometric variables. Fifteen Japanese males (173.8±4.5 cm, 63.6±5.4 kg, 21.2±2.8 years) and fifteen females (161.4±5.4 cm, 53.8±4.8 kg, 21.0±1.4 years) participated in this study. All subjects were measured for head length, head width, and HW under the two conditions of with and without head submersion. To examine the consistency of the Db estimates, correlation coefficients between the estimates and the reference (Goldman and Buskirk, 1961) were calculated. Standard errors of estimation (SEE) were calculated by regression analysis using the reference value as the dependent variable and the estimates as independent variables. In addition, the systematic errors of the two estimation methods were investigated with the Bland-Altman technique (Bland and Altman, 1986). Donnelly and Sintek's equation showed a strong correlation with the reference (r=0.960, p<0.01), but larger differences from the reference than Donnelly et al.'s equation. Further studies are needed to develop new prediction equations for Japanese subjects that consider sex and individual differences in head anthropometry.
Wu, Mixia; Zhang, Dianchen; Liu, Aiyi
2016-01-01
New biomarkers continue to be developed for the purpose of diagnosis, and their diagnostic performance is typically compared with an existing reference biomarker used for the same purpose. A considerable amount of research has focused on receiver operating characteristic (ROC) curve analysis when the reference biomarker is dichotomous. In the situation where the reference biomarker is measured on a continuous scale and dichotomization is not practically appealing, an index was proposed in the literature to measure the accuracy of a continuous biomarker; this index is essentially a linear function of the popular Kendall's tau. We consider the issue of estimating such an accuracy index when the continuous reference biomarker is measured with error. We first investigate the impact of measurement errors on the accuracy index, and then propose methods to correct for the bias due to measurement errors. Simulation results show the effectiveness of the proposed estimator in reducing bias. The methods are exemplified with hemoglobin A1c measurements obtained from both a central lab and a local lab, used to evaluate the accuracy of mean data from metered blood glucose monitoring against the centrally measured hemoglobin A1c, in a behavioral intervention study for families of youth with type 1 diabetes.
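The index in question is a linear function of Kendall's tau. The sketch below computes tau by direct pairwise concordance counting and maps it to [0, 1] via A = (tau + 1)/2; this particular mapping is chosen for illustration and is an assumption, not the exact form from the paper:

```python
import numpy as np
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a via pairwise concordant/discordant counts (no tie handling)."""
    n = len(x)
    s = sum(np.sign((x[i] - x[j]) * (y[i] - y[j]))
            for i, j in combinations(range(n), 2))
    return s / (n * (n - 1) / 2)

def accuracy_index(x, y):
    # One linear mapping of tau onto [0, 1]; the exact form used in the
    # literature may differ, so treat this as illustrative.
    return (kendall_tau(x, y) + 1) / 2

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # new biomarker
y = np.array([1.1, 1.9, 3.2, 3.9, 5.3])   # continuous reference, monotonically related
print(accuracy_index(x, y))  # prints 1.0
```

Adding noise to y (measurement error in the reference) attenuates tau toward zero, which is the bias the paper's correction methods address.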
NASA Astrophysics Data System (ADS)
He, Yaqian; Bo, Yanchen; Chai, Leilei; Liu, Xiaolong; Li, Aihua
2016-08-01
Leaf Area Index (LAI) is an important parameter of vegetation structure. A number of moderate-resolution LAI products have been produced to meet the urgent need for large-scale vegetation monitoring. High-resolution LAI reference maps are necessary to validate these LAI products. This study used a geostatistical regression (GR) method to estimate LAI reference maps by linking in situ LAI with Landsat TM/ETM+ and SPOT-HRV data over two cropland and two grassland sites. To explore the discrepancies arising from employing different vegetation indices (VIs) in estimating LAI reference maps, this study established GR models for different VIs, including the difference vegetation index (DVI), normalized difference vegetation index (NDVI), and ratio vegetation index (RVI). To further assess the performance of the GR model, the results from the GR and Reduced Major Axis (RMA) models were compared. The results show that the performance of the GR model varies between the cropland and grassland sites. At the cropland sites, the GR model based on DVI provides the best estimation, while at the grassland sites the GR model based on DVI performs poorly. Compared to the RMA model, the GR model improves the accuracy of reference LAI maps in terms of root mean square error (RMSE) and bias.
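The three vegetation indices compared are simple band algebra on red and near-infrared reflectance; a minimal sketch with synthetic reflectance values:

```python
import numpy as np

def vegetation_indices(nir, red):
    """DVI, NDVI and RVI from near-infrared and red reflectance bands."""
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    dvi = nir - red                      # difference vegetation index
    ndvi = (nir - red) / (nir + red)     # normalized difference vegetation index
    rvi = nir / red                      # ratio vegetation index
    return dvi, ndvi, rvi

# Synthetic per-pixel reflectances (unitless, 0-1)
nir = np.array([0.45, 0.50, 0.38])
red = np.array([0.08, 0.06, 0.12])
dvi, ndvi, rvi = vegetation_indices(nir, red)
print(np.round(ndvi, 3))
```

In the study, each VI is then regressed against in situ LAI within the geostatistical model; the indices themselves are this elementary.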
Farooqui, Javed Hussain; Sharma, Mansi; Koul, Archana; Dutta, Ranjan; Shroff, Noshir Minoo
2017-01-01
The aim of this study was to compare two different methods of analyzing preoperative reference marking for toric intraocular lenses (IOLs) after marking with an electronic marker, at the Cataract and IOL Implantation Service, Shroff Eye Centre, New Delhi, India. Fifty-two eyes of thirty patients planned for toric IOL implantation were included in the study. All patients had preoperative marking performed with an electronic preoperative two-step toric IOL reference marker (ASICO AE-2929). Reference marks were placed at the 3- and 9-o'clock positions and analyzed with two systems. First, slit-lamp photographs were taken and analyzed using Adobe Photoshop (version 7.0). Second, the Tracey iTrace Visual Function Analyzer (version 5.1.1) was used to capture a corneal topography examination, and the position of the marks was noted. The amount of alignment error was then calculated. The mean absolute rotation error was 2.38 ± 1.78° by Photoshop and 2.87 ± 2.03° by iTrace, a difference that was not statistically significant (P = 0.215). By Photoshop, 72.7% of eyes had a rotation error ≤3°, versus 61.4% by iTrace (P = 0.359); 90.9% of eyes by Photoshop and 81.8% by iTrace had a rotation error ≤5° (P = 0.344). There was no significant difference in the absolute amount of rotation between eyes analyzed by either method. The difference in reference mark positions between the two systems suggests the presence of varying cyclotorsion at different points in time. Both analysis methods showed approximately 3° of alignment error, which could contribute a 10% loss of the astigmatic correction of a toric IOL; this can be further compounded by intraoperative marking errors and the final placement of the IOL in the bag.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanyi, James A.; Nitzling, Kevin D.; Lodwick, Camille J.
2011-02-15
Purpose: Assessment of the fundamental dosimetric characteristics of a novel gated fiber-optic-coupled dosimetry system for clinical electron beam irradiation. Methods: The response of the fiber-optic-coupled dosimetry system to clinical electron beams, with nominal energies in the range of 6-20 MeV, was evaluated for reproducibility, linearity, and output dependence on dose rate, dose per pulse, energy, and field size. The validity of the detector system's response was assessed against a reference ionization chamber. Results: The fiber-optic-coupled dosimetry system showed little dependence on dose rate variations (coefficient of variation ±0.37%) and on dose per pulse changes (within 0.54% of reference chamber measurements). The reproducibility of the system was ±0.55% for dose fractions of ~100 cGy. Energy dependence was within ±1.67% relative to the reference ionization chamber over the 6-20 MeV nominal electron beam energy range. The system exhibited excellent linear response (R²=1.000) compared to the reference ionization chamber in the dose range of 1-1000 cGy. The output factors were within ±0.54% of the corresponding reference ionization chamber measurements. Conclusions: The dosimetric properties of the gated fiber-optic-coupled dosimetry system compare favorably to the corresponding reference ionization chamber measurements and show considerable potential for applications in clinical electron beam radiotherapy.
NASA Astrophysics Data System (ADS)
Valizadeh, Maryam; Sohrabi, Mahmoud Reza
2018-03-01
In the present study, artificial neural networks (ANNs) and support vector regression (SVR), as intelligent methods coupled with UV spectroscopy, were applied to the simultaneous quantitative determination of Dorzolamide (DOR) and Timolol (TIM) in eye drops. Several synthetic mixtures were analyzed to validate the proposed methods. First, a neural network time series model, one type of artificial neural network, was employed and its efficiency evaluated. Afterwards, a radial basis network was applied as another type of neural network; the results showed that its performance is suitable for prediction. Finally, support vector regression was proposed to construct the Zilomole prediction model, and the root mean square error (RMSE) and mean recovery (%) were calculated for the SVR method. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the comparison of the suggested and reference methods, showed no significant differences between them. The effect of interferences was also investigated in spiked solutions.
Evaluation of a CLEIA automated assay system for the detection of a panel of tumor markers.
Falzarano, Renato; Viggiani, Valentina; Michienzi, Simona; Longo, Flavia; Tudini, Silvestra; Frati, Luigi; Anastasi, Emanuela
2013-10-01
Tumor markers are commonly used to detect a relapse of disease in oncologic patients during follow-up. It is important to evaluate new assay systems for a better and more precise assessment, as a standardized method is currently lacking. The aim of this study was to assess the concordance between an automated chemiluminescent enzyme immunoassay system (LUMIPULSE® G1200) and our reference methods using seven tumor markers. Serum samples from 787 subjects representing a variety of diagnoses, including oncologic, were analyzed using LUMIPULSE® G1200 and our reference methods. Serum values were measured for the following analytes: prostate-specific antigen (PSA), alpha-fetoprotein (AFP), carcinoembryonic antigen (CEA), cancer antigen 125 (CA125), carbohydrate antigen 15-3 (CA15-3), carbohydrate antigen 19-9 (CA19-9), and cytokeratin 19 fragment (CYFRA 21-1). For the determination of CEA, AFP, and PSA, an automatic analyzer based on chemiluminescence was used as the reference method. To assess CYFRA 21-1, CA125, CA19-9, and CA15-3, an immunoradiometric manual system was employed. Method comparison by Passing-Bablok analysis resulted in slopes ranging from 0.9728 to 1.9089 and correlation coefficients from 0.9335 to 0.9977. The precision of each assay was assessed by testing six serum samples; each sample was analyzed for all tumor biomarkers in duplicate and in three different runs. The coefficients of variation were less than 6.3% and 6.2% for within-run and between-run variation, respectively. Our data suggest an overall good interassay agreement for all markers. The comparison with our reference methods showed good precision and reliability, highlighting the system's usefulness in the clinical laboratory routine.
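Within-run and between-run coefficients of variation of the kind quoted here can be computed from duplicate measurements across runs. The sketch below uses one simple convention (mean of per-run CVs, and CV of the run means) on illustrative numbers, not the study's data; laboratory protocols often use a pooled-SD variant instead:

```python
import numpy as np

def cv_percent(values):
    """Coefficient of variation (%) = 100 * SD / mean."""
    v = np.asarray(values, float)
    return 100.0 * v.std(ddof=1) / v.mean()

# Illustrative duplicate measurements of one sample across three runs (arbitrary units)
runs = np.array([[10.1, 10.3],
                 [10.0, 10.4],
                 [10.2, 10.1]])

within_run_cv = np.mean([cv_percent(r) for r in runs])   # average CV within each run
between_run_cv = cv_percent(runs.mean(axis=1))           # CV of the run means
print(round(within_run_cv, 2), round(between_run_cv, 2))
```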
Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.
Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael
2015-08-01
In this paper, we present and evaluate an automatic unsupervised segmentation method, hierarchical segmentation approach (HSA)-Bayesian-based adaptive mean shift (BAMS), for use in the construction of a patient-specific head conductivity model for electroencephalography (EEG) source localization. It is based on a HSA and BAMS for segmenting the tissues from multi-modal magnetic resonance (MR) head images. The evaluation of the proposed method was done both directly in terms of segmentation accuracy and indirectly in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method brain extraction tool (BET)-FMRIB's automated segmentation tool (FAST) and four variants of the HSA using both synthetic data and real data from ten subjects. The synthetic data includes multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias field level. The Dice index and Hausdorff distance were used to measure the segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multimodal magnetic resonance (MR) data with 3% noise and synthetic EEG (generated for a prescribed source). The source localization accuracy was determined in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS, its robustness to noise and the bias field, and that it provides better segmentation accuracy than the reference method and variants of the HSA. They also show that it leads to a more accurate localization accuracy than the commonly used reference method and suggest that it has potential as a surrogate for expert manual segmentation for the EEG source localization problem.
SU-E-P-10: Establishment of Local Diagnostic Reference Levels of Routine Exam in Computed Tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, M; Wang, Y; Weng, H
Introduction: National diagnostic reference levels (NDRLs) can be used as reference doses for radiological examinations and provide a basis for optimizing patient dose. Local diagnostic reference levels (LDRLs), maintained by periodically reviewing and checking doses, are a more efficient way to improve examination practice. The important first step is therefore establishing a diagnostic reference level. Radiation dose limit values for computed tomography have already been established in Taiwan, and many studies report that CT contributes most of the radiation dose in medical imaging. This study was therefore intended to clarify the international status of DRLs and to establish diagnostic reference levels for computed tomography in our hospital. Methods and Materials: Two clinical CT scanners (a Toshiba Aquilion and a Siemens Sensation) were used in this study. For CT examinations, the basic recommended dosimetric quantity is the Computed Tomography Dose Index (CTDI). For each examination and each body part, we collected at least 10 patients. The routine examinations were carried out, all exposure parameters were collected, and the corresponding CTDIvol and DLP values were determined. Results: The majority of patients (75%) were between 60 and 70 kg of body weight. There were 25 examinations in this study. Table 1 shows the LDRL of each routine CT examination. Conclusions: This study clarifies the international status of DRLs and establishes local computed tomography reference levels for our hospital, providing a radiation reference as a basis for optimizing patient dose.
Resampling methods in Microsoft Excel® for estimating reference intervals
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5th and 97.5th percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to using Microsoft Excel® 2010 for estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distributions of observations in the reference samples are Gaussian, or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is of the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples. PMID:26527366
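The resampling approach can be sketched as a percentile bootstrap: repeatedly resample the reference data with replacement, take the 2.5th and 97.5th percentiles of each resample, and summarize the resulting distributions. A minimal Python version follows (not the Excel implementation described in the paper; the 90% CI convention on each limit is an assumption):

```python
import numpy as np

def bootstrap_reference_limits(values, n_boot=1000, seed=0):
    """Percentile-bootstrap estimates of the 2.5th and 97.5th reference limits,
    each with a 90% bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    v = np.asarray(values, float)
    lows, highs = [], []
    for _ in range(n_boot):
        resample = rng.choice(v, size=v.size, replace=True)
        lows.append(np.percentile(resample, 2.5))
        highs.append(np.percentile(resample, 97.5))
    return (np.mean(lows), np.percentile(lows, [5, 95]),
            np.mean(highs), np.percentile(highs, [5, 95]))

rng = np.random.default_rng(42)
sample = rng.normal(50.0, 5.0, size=40)    # small reference sample, n = 40
lo, lo_ci, hi, hi_ci = bootstrap_reference_limits(sample)
print(round(lo, 1), round(hi, 1))
```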
ERIC Educational Resources Information Center
Hahn, William G.; Bart, Barbara D.
2003-01-01
Business students were taught a total quality management-based outlining process for course readings and a tally method to measure learning efficiency. Comparison of 233 who used the process and 99 who did not showed that the group means of users' test scores were 12.4 points higher than those of nonusers. (Contains 25 references.) (SK)
ERIC Educational Resources Information Center
Bhattacharyya, Pratip; Chakrabarti, Bikas K.
2008-01-01
We study different ways of determining the mean distance (r[subscript n]) between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…
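The quantity studied can be checked numerically: scatter points uniformly around a reference point, sort their distances, and average the nth one over many trials. A Monte Carlo sketch follows (point counts and trial counts are arbitrary choices; this is a numerical check of the idea, not the paper's analytic methods):

```python
import numpy as np

def mean_nth_neighbour_distance(n, D, num_points=200, trials=500, seed=0):
    """Monte Carlo mean distance from the centre of a unit D-cube to the
    nth-nearest of num_points uniformly scattered points."""
    rng = np.random.default_rng(seed)
    ref = np.full(D, 0.5)                        # reference point at the centre
    dists = []
    for _ in range(trials):
        pts = rng.uniform(0.0, 1.0, size=(num_points, D))
        r = np.sort(np.linalg.norm(pts - ref, axis=1))
        dists.append(r[n - 1])                   # nth neighbour (1-indexed)
    return float(np.mean(dists))

r1 = mean_nth_neighbour_distance(1, D=2)
r2 = mean_nth_neighbour_distance(2, D=2)
print(round(r1, 3), round(r2, 3))
```

For D = 2 at density 200 points per unit area, the exact mean nearest-neighbour distance is 1/(2*sqrt(200)) ≈ 0.0354, which the simulation should approach.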
26 CFR 1.669(c)-2A - Computation of the beneficiary's income and tax for a prior taxable year.
Code of Federal Regulations, 2010 CFR
2010-04-01
... either the exact method or the short-cut method shall be determined by reference to the information... shows a mathematical error on its face which resulted in the wrong amount of tax being paid for such... amounts in such gross income, shall be based upon the return after the correction of such mathematical...
46 CFR 160.176-4 - Incorporation by reference.
Code of Federal Regulations, 2014 CFR
2014-10-01
... and Elongation, Breaking of Woven Cloth; Grab Method, incorporation by reference approved for § 160.176-13. (ii) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method, incorporation by reference approved for § 160.176-13. (iii) Method 5134, Strength of Cloth, Tearing; Tongue Method...
46 CFR 160.176-4 - Incorporation by reference.
Code of Federal Regulations, 2013 CFR
2013-10-01
... and Elongation, Breaking of Woven Cloth; Grab Method, incorporation by reference approved for § 160.176-13. (ii) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method, incorporation by reference approved for § 160.176-13. (iii) Method 5134, Strength of Cloth, Tearing; Tongue Method...
Deformation effect simulation and optimization for double front axle steering mechanism
NASA Astrophysics Data System (ADS)
Wu, Jungang; Zhang, Siqin; Yang, Qinglong
2013-03-01
This paper investigates the tire wear problem of heavy vehicles with a double front axle steering mechanism from the perspective of the mechanism's flexibility, and proposes a structural optimization method that combines traditional static structural theory with a dynamic structural approach, the Equivalent Static Load (ESL) method, to optimize key parts. Good simulation and test results show that this method has high engineering practicality and reference value for addressing tire wear in the design of double front axle steering mechanisms.
GFAAS determination of selenium in infant formulas using a microwave digestion method.
Alegria, A; Barbera, R; Farré, R; Moreno, A
1994-01-01
A method for determining the selenium content of infant formulas is proposed. It includes wet digestion with nitric acid and hydrogen peroxide in medium-pressure Teflon bombs in a microwave oven, followed by determination by graphite furnace atomic absorption spectrometry (GFAAS). The absence of interferences was checked. The values obtained for the limit of detection (19.4 ng/g), precision (RSD = 2.2%) and accuracy, verified by analysis of a reference material, show that the method is reliable.
Ionization chamber-based reference dosimetry of intensity modulated radiation beams.
Bouchard, Hugo; Seuntjens, Jan
2004-09-01
The present paper addresses reference dose measurements using thimble ionization chambers for quality assurance in IMRT fields. In these radiation fields, detector fluence perturbation effects invalidate the application of open-field dosimetry protocol data for the derivation of absorbed dose to water from ionization chamber measurements. We define a correction factor C_Q^IMRT to correct the absorbed-dose-to-water calibration coefficient N_{D,w}^Q for fluence perturbation effects in individual segments of an IMRT delivery, and we developed a calculation method to evaluate this factor. The method consists of precalculating, using accurate Monte Carlo techniques, the chamber-type-dependent cavity air dose and the in-phantom dose to water at the reference point for zero-width pencil beams, as a function of the position of the pencil beams impinging on the phantom surface. These precalculated kernels are convolved with the IMRT fluence distribution to arrive at the dose-to-water to dose-to-cavity-air ratio (D_w/D_a)^IMRT for IMRT fields, and with a 10x10 cm2 open-field fluence to arrive at the same ratio (D_w/D_a)^Q for the 10x10 cm2 reference field. The correction factor C_Q^IMRT is then calculated as the ratio of (D_w/D_a)^IMRT and (D_w/D_a)^Q. The calculation method was experimentally validated, and the magnitude of the chamber correction factors in reference dose measurements in single static and dynamic IMRT fields was studied. The results show that, for thimble-type ionization chambers, the correction factor in a single, realistic dynamic IMRT field can be of the order of 10% or more. We therefore propose that, for accurate reference dosimetry of complete n-beam IMRT deliveries, ionization chamber fluence perturbation correction factors be explicitly taken into account.
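The kernel-convolution step described in this abstract can be sketched as follows. This is a minimal toy illustration, not the paper's Monte Carlo pipeline: the kernels, grid size, and field shapes are invented for demonstration, and the correction factor is simply the ratio of dose-to-water/dose-to-cavity-air ratios for a modulated field versus a uniform reference field.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical precalculated pencil-beam kernels (illustrative values only):
# dose to water and cavity-air dose at the reference point per unit fluence,
# as a function of pencil-beam position on a coarse 5x5 grid.
kernel_w = np.exp(-((np.arange(5) - 2) ** 2)[:, None] / 4.0
                  - ((np.arange(5) - 2) ** 2)[None, :] / 4.0)
kernel_a = 0.95 * kernel_w + 0.01  # cavity response differs slightly from water

def dose_ratio(fluence, kw, ka):
    """(dose to water)/(dose to cavity air) for a given fluence map."""
    return np.sum(fluence * kw) / np.sum(fluence * ka)

open_field = np.ones((5, 5))                 # uniform reference-field fluence
imrt_field = rng.uniform(0.0, 1.0, (5, 5))   # modulated IMRT segment fluence

ratio_ref = dose_ratio(open_field, kernel_w, kernel_a)
ratio_imrt = dose_ratio(imrt_field, kernel_w, kernel_a)
C_Q_IMRT = ratio_imrt / ratio_ref  # correction applied on top of N_{D,w}^Q
```

With identical water and cavity kernels the ratio is exactly 1, so any deviation of C_Q_IMRT from unity here reflects the differing spatial response of the cavity, which is the effect the paper's correction targets.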
NASA Astrophysics Data System (ADS)
Khaidah Syed Sahab, Sharifah; Manap, Mahayuddin; Hamzah, Fadzilah
2017-05-01
The therapeutic potential of cisplatin, one of the most effective anticancer treatments for solid tumors, is limited by its potential nephrotoxicity. This study analysed the incidence of cisplatin-induced nephrotoxicity in oncology patients through GFR estimation using 99mTc-DTPA plasma sampling (the reference method), compared with predicted creatinine clearance and Tc-99m renal scintigraphy. A prospective study of 33 oncology patients referred for GFR estimation at Penang Hospital was conducted. The incidence of cisplatin-induced nephrotoxicity was analysed via radionuclide- and creatinine-based methods. Of the 33 patients, only 21 were selected for the study. The dose of cisplatin given was 75 mg/m2 per cycle. The mean difference in GFR pre- and post-chemotherapy (PSC 2) was 13.38 (-4.60, 31.36) ml/min/1.73 m2 (p = 0.136). Of the 21 patients, 3 developed severe nephrotoxicity (GFR < 50 ml/min/1.73 m2), an incidence of 14.3%. A Bland-Altman plot showed that only PSC 1 was in agreement with the PSC 2 technique. Intraclass correlation coefficients (ICC) also showed that PSC 1 had a high degree of reliability in comparison to PSC 2 (p < 0.001). The other methods did not show reliability or agreement in comparison to PSC 2 (p < 0.05). Thus 3 of 21 patients (14.3%) developed severe nephrotoxicity after cisplatin chemotherapy. This percentage is much lower than the 20-25% of cases reported in other studies, probably owing to the small sample size and a study population biased by strict exclusion criteria. The radionuclide method for evaluating GFR was the most sensitive method for the detection of cisplatin-induced nephrotoxicity, identifying the 3 of 21 patients who developed severe nephrotoxicity. PSC 1 was found to be a reliable substitute for PSC 2. The other methods are not reliable for the detection of early nephrotoxicity. We recommend the use of the single plasma sampling method (PSC 1) for GFR estimation when monitoring patients after cisplatin chemotherapy.
Increasing life expectancy of water resources literature
NASA Astrophysics Data System (ADS)
Heistermann, M.; Francke, T.; Georgi, C.; Bronstert, A.
2014-06-01
In a study from 2008, Larivière and colleagues showed, for the field of natural sciences and engineering, that the median age of cited references is increasing over time. This result was considered counterintuitive: with the advent of electronic search engines, online journal issues and open access publications, one could have expected that cited literature is becoming younger. That study has motivated us to take a closer look at the changes in the age distribution of references that have been cited in water resources journals since 1965. Not only could we confirm the findings of Larivière and colleagues. We were also able to show that the aging is mainly happening in the oldest 10-25% of an average reference list. This is consistent with our analysis of top-cited papers in the field of water resources. Rankings based on total citations since 1965 consistently show the dominance of old literature, including text books and research papers in equal shares. For most top-cited old-timers, citations are still growing exponentially. There is strong evidence that most citations are attracted by publications that introduced methods which meanwhile belong to the standard toolset of researchers and practitioners in the field of water resources. Although we think that this trend should not be overinterpreted as a sign of stagnancy, there might be cause for concern regarding how authors select their references. We question the increasing citation of textbook knowledge as it holds the risk that reference lists become overcrowded, and that the readability of papers deteriorates.
US Fish and Wildlife Service biomonitoring operations manual, Appendices A--K
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gianotto, D.F.; Rope, R.C.; Mondecar, M.
1993-04-01
Volume 2 contains appendices and summary sheets for the following areas: A-Legislative Background and Key to Relevant Legislation, B-Biomonitoring Operations Workbook, C-Air Monitoring, D-Introduction to the Flora and Fauna for Biomonitoring, E-Decontamination Guidance Reference Field Methods, F-Documentation Guidance, Sample Handling, and Quality Assurance/Quality Control Standard Operating Procedures, G-Field Instrument Measurements Reference Field Methods, H-Ground Water Sampling Reference Field Methods, I-Sediment Sampling Reference Field Methods, J-Soil Sampling Reference Field Methods, K-Surface Water Reference Field Methods. Appendix B explains how to set up a strategy to enter information in the "disk workbook". Appendix B is enhanced by DE97006389, an on-line workbook for users to make revisions to their own biomonitoring data.
Lindsey, David A.; Tysdal, Russell G.; Taggart, Joseph E.
2002-01-01
The principal purpose of this report is to provide a reference archive for results of a statistical analysis of geochemical data for metasedimentary rocks of Mesoproterozoic age of the Salmon River Mountains and Lemhi Range, central Idaho. Descriptions of geochemical data sets, statistical methods, rationale for interpretations, and references to the literature are provided. Three methods of analysis are used: R-mode factor analysis of major oxide and trace element data for identifying petrochemical processes, analysis of variance for effects of rock type and stratigraphic position on chemical composition, and major-oxide ratio plots for comparison with the chemical composition of common clastic sedimentary rocks.
[What do we know about participation in cultural activities and health?].
Knudtsen, Margunn Skjei; Holmen, Jostein; Håpnes, Odd
2005-12-15
Knowledge of the association between health status and lifestyle factors, such as food habits, smoking and physical activity, is abundant. Other lifestyle factors, such as participation in cultural activities, have received less attention. The article is based on studies of the literature. Reference lists in key articles have been used, as well as references given by research colleagues. The survey shows an association between participation in cultural activities, cultural experiences and health status, also when measured by differing methods. Further population studies, longitudinal studies and controlled studies are needed in order to expand our knowledge of the relationship between participation in cultural activities and health status. There is a need for multidisciplinary cooperation and increased use of combined quantitative and qualitative methods.
Stalder, J; Costanzo, A; Daas, A; Rautmann, G; Buchheit, K-H
2010-04-01
A reference standard calibrated in International Units (IU) is needed for the in vitro potency assay of hepatitis A vaccines prepared by formalin-inactivation of purified hepatitis A virus grown in cell cultures. Thus, a project was launched by the European Directorate for the Quality of Medicines & HealthCare (EDQM) to establish one or more non-adsorbed inactivated hepatitis A vaccine reference preparation(s) as working standard(s), calibrated against the 1st International Standard (IS), for the in vitro potency assay (ELISA) of all vaccines present on the European market. Four non-adsorbed liquid preparations of formalin-inactivated hepatitis A antigen with a known antigen content were obtained from 3 manufacturers as candidate Biological Reference Preparations (BRPs). Thirteen laboratories participated in the collaborative study. They were asked to use an in vitro ELISA method adapted from a commercially available kit for the detection of antibodies to hepatitis A virus. In-house validated assays were to be run in parallel, where available. Some participants also included commercially available hepatitis A vaccines in the assays, after appropriate desorption. During the collaborative study, several participants using the standard method were faced with problems with some of the most recent lots of the test kits. Due to these problems, the standard method did not perform satisfactorily and a high number of assays were invalid, whereas the in-house methods appeared to perform better. Despite this, the overall mean results of the valid assays using both methods were in agreement. Nonetheless, it was decided to base the assignment of the potency values on the in-house methods only. The results showed that all candidate BRPs were suitable for the intended purpose. However, based on availability of the material and on the results of end-product testing, 2 candidate reference preparations, Samples C and D, were selected. 
Both were from the same batch but filled on different days; no statistically significant difference in potency was observed. They were thus combined into a single batch. The candidate preparation (Sample C/D) was adopted at the June 2009 session of the European Pharmacopoeia (Ph. Eur.) Commission as the Ph. Eur. BRP batch 1 for hepatitis A vaccine (inactivated, non-adsorbed), with an assigned potency of 12 IU/ml for in vitro antigen content assays. Accelerated degradation studies have been initiated. The preliminary data show that the BRP is stable at the recommended storage temperature (below -50 °C). The BRP will be monitored at regular intervals throughout its lifetime.
Talbot, Clifford B; Lagarto, João; Warren, Sean; Neil, Mark A A; French, Paul M W; Dunsby, Chris
2015-09-01
A correction is proposed to the delta function convolution method (DFCM) for fitting a multiexponential decay model to time-resolved fluorescence decay data using a monoexponential reference fluorophore. A theoretical analysis of the discretised DFCM multiexponential decay function shows the presence of an extra exponential decay term with the same lifetime as the reference fluorophore, which we denote the residual reference component. This extra decay component arises as a result of the discretised convolution of one of the two terms in the modified model function required by the DFCM. The effect of the residual reference component becomes more pronounced when the fluorescence lifetime of the reference is longer than all of the individual components of the specimen under inspection and when the temporal sampling interval is not negligible compared to the quantity (τ_R^{-1} - τ^{-1})^{-1}, where τ_R and τ are the fluorescence lifetimes of the reference and the specimen, respectively. It is shown that the unwanted residual reference component results in systematic errors when fitting simulated data and that these errors are not present when the proposed correction is applied. The correction is also verified using real data obtained from experiment.
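The timescale criterion quoted above can be checked numerically. A minimal sketch (the lifetimes, sampling interval, and the 1% threshold are illustrative assumptions, not values from the paper):

```python
def critical_timescale(tau_ref, tau):
    """(tau_R^-1 - tau^-1)^-1 for reference lifetime tau_ref and specimen lifetime tau.
    Negative when the reference lifetime exceeds the specimen lifetime."""
    return 1.0 / (1.0 / tau_ref - 1.0 / tau)

tau_ref, tau, dt = 4.0, 2.0, 0.05   # lifetimes and sampling interval in ns (illustrative)
t_crit = critical_timescale(tau_ref, tau)   # (1/4 - 1/2)^-1 = -4.0 ns
negligible = dt < 0.01 * abs(t_crit)        # arbitrary 1% rule of thumb
```

Here dt = 0.05 ns is not negligible against |t_crit| = 4 ns under the 1% rule, so the residual reference component would be expected to bias an uncorrected DFCM fit.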
Benner, Christian; Havulinna, Aki S; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ripatti, Samuli; Pirinen, Matti
2017-10-05
During the past few years, various novel statistical methods have been developed for fine-mapping with the use of summary statistics from genome-wide association studies (GWASs). Although these approaches require information about the linkage disequilibrium (LD) between variants, there has not been a comprehensive evaluation of how estimation of the LD structure from reference genotype panels performs in comparison with that from the original individual-level GWAS data. Using population genotype data from Finland and the UK Biobank, we show here that a reference panel of 1,000 individuals from the target population is adequate for a GWAS cohort of up to 10,000 individuals, whereas smaller panels, such as those from the 1000 Genomes Project, should be avoided. We also show, both theoretically and empirically, that the size of the reference panel needs to scale with the GWAS sample size; this has important consequences for the application of these methods in ongoing GWAS meta-analyses and large biobank studies. We conclude by providing software tools and by recommending practices for sharing LD information to more efficiently exploit summary statistics in genetics research. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Application of Holland's Theory to a Nonprofessional Occupation.
ERIC Educational Resources Information Center
Miller, Mark J.; Bass, Connie
2003-01-01
The authors, using the recently developed Position Classification Inventory, examined male and female perceptions of a nonprofessional occupation. Results suggest that the PCI shows promise as a method of classifying occupations according to J. L. Holland's (1997) theory. (Contains 20 references and 2 tables.) (Author)
A spectral measurement method for determining white OLED average junction temperatures
NASA Astrophysics Data System (ADS)
Zhu, Yiting; Narendran, Nadarajah
2016-09-01
The objective of this study was to investigate an indirect method of measuring the average junction temperature of a white organic light-emitting diode (OLED) based on temperature sensitivity differences in the radiant power emitted by individual emitter materials (i.e., "blue," "green," and "red"). The measured spectral power distributions (SPDs) of the white OLED as a function of temperature showed amplitude decrease as a function of temperature in the different spectral bands, red, green, and blue. Analyzed data showed a good linear correlation between the integrated radiance for each spectral band and the OLED panel temperature, measured at a reference point on the back surface of the panel. The integrated radiance ratio of the spectral band green compared to red, (G/R), correlates linearly with panel temperature. Assuming that the panel reference point temperature is proportional to the average junction temperature of the OLED panel, the G/R ratio can be used for estimating the average junction temperature of an OLED panel.
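The linear G/R-versus-temperature relation reported above lends itself to a simple calibrate-then-invert procedure. The sketch below uses invented calibration data (the numbers are illustrative, not the paper's measurements) to show the idea:

```python
import numpy as np

# Hypothetical calibration data (illustrative only): panel reference-point
# temperature (degrees C) and measured green/red integrated radiance ratio.
temps = np.array([25.0, 35.0, 45.0, 55.0, 65.0])
g_over_r = np.array([1.00, 0.97, 0.94, 0.91, 0.88])  # decreases linearly with T

# Fit the linear relation G/R = a*T + b, then invert it to estimate temperature.
a, b = np.polyfit(temps, g_over_r, 1)

def estimate_temperature(gr_ratio):
    """Estimate panel temperature (a proxy for average junction temperature,
    per the paper's assumption) from a measured G/R radiance ratio."""
    return (gr_ratio - b) / a

t_est = estimate_temperature(0.94)   # recovers 45 degrees C for this toy data
```

The inversion is only as good as the linearity of the calibration and the assumption that the back-surface reference temperature tracks the average junction temperature.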
He, Yan; Caporaso, J Gregory; Jiang, Xiao-Tao; Sheng, Hua-Fang; Huse, Susan M; Rideout, Jai Ram; Edgar, Robert C; Kopylova, Evguenia; Walters, William A; Knight, Rob; Zhou, Hong-Wei
2015-01-01
The operational taxonomic unit (OTU) is widely used in microbial ecology. Reproducibility in microbial ecology research depends on the reliability of OTU-based 16S ribosomal subunit RNA (rRNA) analyses. Here, we report that many hierarchical and greedy clustering methods produce unstable OTUs, with membership that depends on the number of sequences clustered. If OTUs are regenerated with additional sequences or samples, sequences originally assigned to a given OTU can be split into different OTUs. Alternatively, sequences assigned to different OTUs can be merged into a single OTU. This OTU instability affects alpha-diversity analyses such as rarefaction curves, beta-diversity analyses such as distance-based ordination (for example, Principal Coordinate Analysis (PCoA)), and the identification of differentially represented OTUs. Our results show that the proportion of unstable OTUs varies for different clustering methods. We found that the closed-reference method is the only one that produces completely stable OTUs, with the caveat that sequences that do not match a pre-existing reference sequence collection are discarded. As a compromise to the factors listed above, we propose using an open-reference method to enhance OTU stability. This type of method clusters sequences against a database and includes unmatched sequences by clustering them via a relatively stable de novo clustering method. OTU stability is an important consideration when analyzing microbial diversity and is a feature that should be taken into account during the development of novel OTU clustering methods.
2014-01-01
Automatic reconstruction of metabolic pathways for an organism from genomics and transcriptomics data has been a challenging and important problem in bioinformatics. Traditionally, known reference pathways can be mapped into organism-specific ones based on the organism's genome annotation and protein homology. However, this simple knowledge-based mapping method can produce incomplete pathways and generally cannot predict unknown new relations and reactions. In contrast, ab initio metabolic network construction methods can predict novel reactions and interactions, but their accuracy tends to be low, leading to many false positives. Here we combine existing pathway knowledge and a new ab initio Bayesian probabilistic graphical model in a novel fashion to improve automatic reconstruction of metabolic networks. Specifically, we built a knowledge database containing known individual gene/protein interactions and metabolic reactions extracted from existing reference pathways. Known reactions and interactions were then used as constraints for Bayesian network learning methods to predict metabolic pathways. Using individual reactions and interactions extracted from different pathways of many organisms to guide pathway construction is new and improves both the coverage and the accuracy of metabolic pathway construction. We applied this probabilistic knowledge-based approach to construct metabolic networks from yeast gene expression data and compared its results with 62 known metabolic networks in the KEGG database. The experiment showed that the method improved the coverage of metabolic network construction over the traditional reference pathway mapping method and was more accurate than pure ab initio methods. PMID:25374614
Mavridou, A; Smeti, E; Mandilara, G; Mandilara, G; Boufa, P; Vagiona-Arvanitidou, M; Vantarakis, A; Vassilandonopoulou, G; Pappa, O; Roussia, V; Tzouanopoulos, A; Livadara, M; Aisopou, I; Maraka, V; Nikolaou, E; Mandilara, G
2010-01-01
In this study, ten laboratories in Greece compared the performance of the reference method, TTC Tergitol 7 agar (with the additional test of beta-glucuronidase production), with five alternative methods for detecting E. coli in water, in line with European Water Directive recommendations. The samples were prepared by spiking drinking water with sewage effluent following a standard protocol. Chlorinated and non-chlorinated samples were used. The statistical analysis was based on the mean relative difference of confirmed counts and was performed in line with ISO 17994. The results showed that, in total, three of the alternative methods (Chromocult Coliform agar, Membrane Lauryl Sulfate agar and Tryptone Bile X-glucuronide (TBX) medium) were not different from TTC Tergitol 7 agar (TTC Tergitol 7 agar vs Chromocult Coliform agar, 294 samples, mean RD% 5.55; vs MLSA, 302 samples, mean RD% 1; vs TBX, 297 samples, mean RD% -2.78). The other two alternative methods (Membrane Faecal coliform medium and Colilert-18/Quanti-Tray) gave significantly higher counts than TTC Tergitol 7 agar (TTC Tergitol 7 agar vs MFc, 303 samples, mean RD% 8.81; vs Colilert-18/Quanti-Tray, 76 samples, mean RD% 18.91). In other words, the alternative methods performed as reliably as, or even better than, the reference method. This study will help laboratories in Greece overcome culture and counting problems deriving from the EU reference method for E. coli counts in water samples.
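The mean relative difference (RD%) statistic used above can be sketched as follows. This is a simplified illustration of the log-ratio comparison underlying ISO 17994-style method comparisons; the full standard also prescribes uncertainty limits and trueness criteria that are not reproduced here, and the example counts are invented.

```python
import math

def mean_relative_difference(counts_alt, counts_ref):
    """Mean relative difference (in %) of paired confirmed counts:
    100 * mean(ln(alt) - ln(ref)), over sample pairs with non-zero counts.
    Positive values mean the alternative method counts higher on average."""
    diffs = [100.0 * (math.log(a) - math.log(b))
             for a, b in zip(counts_alt, counts_ref)
             if a > 0 and b > 0]
    return sum(diffs) / len(diffs)

# Illustrative paired counts (alternative vs reference method):
alt = [52, 48, 60, 55]
ref = [50, 50, 58, 54]
rd = mean_relative_difference(alt, ref)
```

Whether an observed RD% indicates a real difference depends on its confidence interval relative to the standard's acceptance limits, not on the point estimate alone.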
An integrated pan-tropical biomass map using multiple reference datasets.
Avitabile, Valerio; Herold, Martin; Heuvelink, Gerard B M; Lewis, Simon L; Phillips, Oliver L; Asner, Gregory P; Armston, John; Ashton, Peter S; Banin, Lindsay; Bayol, Nicolas; Berry, Nicholas J; Boeckx, Pascal; de Jong, Bernardus H J; DeVries, Ben; Girardin, Cecile A J; Kearsley, Elizabeth; Lindsell, Jeremy A; Lopez-Gonzalez, Gabriela; Lucas, Richard; Malhi, Yadvinder; Morel, Alexandra; Mitchard, Edward T A; Nagy, Laszlo; Qie, Lan; Quinones, Marcela J; Ryan, Casey M; Ferry, Slik J W; Sunderland, Terry; Laurin, Gaia Vaglio; Gatti, Roberto Cazzolla; Valentini, Riccardo; Verbeeck, Hans; Wijaya, Arief; Willcock, Simon
2016-04-01
We combined two existing datasets of vegetation aboveground biomass (AGB) (Proceedings of the National Academy of Sciences of the United States of America, 108, 2011, 9899; Nature Climate Change, 2, 2012, 182) into a pan-tropical AGB map at 1-km resolution using an independent reference dataset of field observations and locally calibrated high-resolution biomass maps, harmonized and upscaled to 14 477 1-km AGB estimates. Our data fusion approach uses bias removal and weighted linear averaging that incorporates and spatializes the biomass patterns indicated by the reference data. The method was applied independently in areas (strata) with homogeneous error patterns of the input (Saatchi and Baccini) maps, which were estimated from the reference data and additional covariates. Based on the fused map, we estimated AGB stock for the tropics (23.4 N-23.4 S) of 375 Pg dry mass, 9-18% lower than the Saatchi and Baccini estimates. The fused map also showed differing spatial patterns of AGB over large areas, with higher AGB density in the dense forest areas in the Congo basin, Eastern Amazon and South-East Asia, and lower values in Central America and in most dry vegetation areas of Africa than either of the input maps. The validation exercise, based on 2118 estimates from the reference dataset not used in the fusion process, showed that the fused map had a RMSE 15-21% lower than that of the input maps and, most importantly, nearly unbiased estimates (mean bias 5 Mg dry mass ha(-1) vs. 21 and 28 Mg ha(-1) for the input maps). The fusion method can be applied at any scale including the policy-relevant national level, where it can provide improved biomass estimates by integrating existing regional biomass maps as input maps and additional, country-specific reference datasets. © 2015 John Wiley & Sons Ltd.
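The "bias removal and weighted linear averaging" step can be sketched in a few lines. This is a simplified, single-stratum illustration under assumed inverse-variance weights; the paper estimates biases and weights per stratum with additional covariates, and all data below are synthetic.

```python
import numpy as np

def fuse_maps(map_a, map_b, ref_idx, ref_vals):
    """Fuse two biomass maps: remove each map's mean bias estimated at
    reference locations, then average with weights inversely proportional
    to each map's residual variance against the reference data."""
    bias_a = np.mean(map_a[ref_idx] - ref_vals)
    bias_b = np.mean(map_b[ref_idx] - ref_vals)
    a, b = map_a - bias_a, map_b - bias_b
    var_a = np.var(a[ref_idx] - ref_vals)
    var_b = np.var(b[ref_idx] - ref_vals)
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    return w_a * a + (1.0 - w_a) * b

# Illustrative 1-D "maps" and reference observations at known pixels:
rng = np.random.default_rng(1)
truth = rng.uniform(50, 400, 100)            # Mg dry mass per ha
map_a = truth + 20 + rng.normal(0, 30, 100)  # biased, noisy input map A
map_b = truth - 10 + rng.normal(0, 60, 100)  # biased, noisier input map B
ref_idx = np.arange(0, 100, 5)               # pixels with field reference data
fused = fuse_maps(map_a, map_b, ref_idx, truth[ref_idx])
```

By construction the fused map is unbiased at the reference locations, mirroring the near-zero mean bias the paper reports in its validation.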
Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib
2016-04-15
In quantitative PET/MR imaging, attenuation correction (AC) of PET data is markedly challenged by the need of deriving accurate attenuation maps from MR images. A number of strategies have been developed for MRI-guided attenuation correction with different degrees of success. In this work, we compare the quantitative performance of three generic AC methods, including standard 3-class MR segmentation-based, advanced atlas-registration-based and emission-based approaches in the context of brain time-of-flight (TOF) PET/MRI. Fourteen patients referred for diagnostic MRI and (18)F-FDG PET/CT brain scans were included in this comparative study. For each study, PET images were reconstructed using four different attenuation maps derived from CT-based AC (CTAC) serving as reference, standard 3-class MR-segmentation, atlas-registration and emission-based AC methods. To generate 3-class attenuation maps, T1-weighted MRI images were segmented into background air, fat and soft-tissue classes followed by assignment of constant linear attenuation coefficients of 0, 0.0864 and 0.0975 cm(-1) to each class, respectively. A robust atlas-registration based AC method was developed for pseudo-CT generation using local weighted fusion of atlases based on their morphological similarity to target MR images. Our recently proposed MRI-guided maximum likelihood reconstruction of activity and attenuation (MLAA) algorithm was employed to estimate the attenuation map from TOF emission data. The performance of the different AC algorithms in terms of prediction of bones and quantification of PET tracer uptake was objectively evaluated with respect to reference CTAC maps and CTAC-PET images. Qualitative evaluation showed that the MLAA-AC method could sparsely estimate bones and accurately differentiate them from air cavities. It was found that the atlas-AC method can accurately predict bones with variable errors in defining air cavities. 
Quantitative assessment of bone extraction accuracy based on the Dice similarity coefficient (DSC) showed that MLAA-AC and atlas-AC resulted in mean DSC values of 0.79 and 0.92, respectively, across all patients. The MLAA-AC and atlas-AC methods predicted mean linear attenuation coefficients of 0.107 and 0.134 cm(-1), respectively, for the skull, compared to the reference CTAC mean value of 0.138 cm(-1). The evaluation of the relative change in tracer uptake within 32 distinct regions of the brain with respect to CTAC PET images showed that the 3-class MRAC, MLAA-AC and atlas-AC methods resulted in quantification errors of -16.2 ± 3.6%, -13.3 ± 3.3% and 1.0 ± 3.4%, respectively. Linear regression and Bland-Altman concordance plots showed that both the 3-class MRAC and MLAA-AC methods result in a significant systematic bias in PET tracer uptake, while the atlas-AC method results in a negligible bias. The standard 3-class MRAC method significantly underestimated cerebral PET tracer uptake. While current state-of-the-art MLAA-AC methods look promising, they were unable to noticeably reduce quantification errors in the context of brain imaging. Conversely, the proposed atlas-AC method provided the most accurate attenuation maps, and thus the lowest quantification bias. Copyright © 2016 Elsevier Inc. All rights reserved.
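The Dice similarity coefficient used for the bone-extraction comparison is a standard overlap measure between two segmentation masks. A minimal sketch with toy masks (the arrays are illustrative, not the study's attenuation maps):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two boolean segmentation masks:
    2*|A intersect B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Illustrative "bone" masks from a predicted and a reference attenuation map:
pred = np.array([[1, 1, 0], [0, 1, 0]])
ref  = np.array([[1, 1, 0], [0, 0, 1]])
dsc = dice_coefficient(pred, ref)   # 2*2/(3+3) = 0.666...
```

A DSC of 1 means perfect overlap with the CT-derived bone mask; the study's atlas-AC value of 0.92 versus 0.79 for MLAA-AC reflects substantially better bone delineation.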
NASA Astrophysics Data System (ADS)
Wayson, Michael B.; Bolch, Wesley E.
2018-04-01
Various computational tools are currently available that facilitate patient organ dosimetry in diagnostic nuclear medicine, yet they are typically restricted to reporting organ doses to ICRP-defined reference phantoms. The present study, while remaining computational phantom based, provides straightforward tools to adjust reference phantom organ dose for both internal photon and electron sources. A wide variety of monoenergetic specific absorbed fractions were computed using radiation transport simulations for tissue spheres of varying size and separation distance. Scaling methods were then constructed for both photon and electron self-dose and cross-dose, with data validation provided from patient-specific voxel phantom simulations, as well as via comparison to the scaling methodology given in MIRD Pamphlet No. 11. Photon and electron self-dose was found to be dependent on both radiation energy and sphere size. Photon cross-dose was found to be mostly independent of sphere size. Electron cross-dose was found to be dependent on sphere size when the spheres were in close proximity, owing to differences in electron range. The validation studies showed that this dataset was more effective than the MIRD 11 method at predicting patient-specific photon doses at both high and low energies, but gave similar results at photon energies between 100 keV and 1 MeV. The MIRD 11 method for electron self-dose scaling was accurate for lower energies but began to break down at higher energies. The photon cross-dose scaling methodology developed in this study showed gains in accuracy of up to 9% for actual patient studies, and the electron cross-dose scaling methodology also showed gains in accuracy of up to 9% when only the bremsstrahlung component of the cross-dose was scaled. These dose scaling methods are readily available for incorporation into internal dosimetry software for diagnostic phantom-based organ dosimetry.
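The MIRD Pamphlet No. 11 style mass scaling that the study benchmarks against can be sketched as follows. The exponents used here (2/3 for photon self-dose, 1 for electron self-dose) are the commonly cited rules of thumb, stated as assumptions for illustration; they are not the energy-dependent scaling data derived in this study.

```python
def scale_self_dose(s_ref, m_ref, m_patient, particle="photon"):
    """Scale a reference-phantom self-dose per decay to a patient organ mass.
    Rule-of-thumb exponents (assumed, not from this study): photon self-dose
    scales roughly as mass**(-2/3); electron (non-penetrating) self-dose as
    mass**(-1), since the absorbed fraction is ~1 and dose = energy/mass."""
    exponent = 2.0 / 3.0 if particle == "photon" else 1.0
    return s_ref * (m_ref / m_patient) ** exponent

# A patient organ twice the reference mass receives a lower self-dose per decay:
s_photon = scale_self_dose(1.0, 1800.0, 3600.0, "photon")      # 2**(-2/3), ~0.63
s_electron = scale_self_dose(1.0, 1800.0, 3600.0, "electron")  # exactly 0.5
```

The study's point is precisely that such fixed-exponent rules lose accuracy at some energies, which is why it replaces them with scaling factors derived from sphere simulations across energy and size.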
A framework for combining multiple soil moisture retrievals based on maximizing temporal correlation
NASA Astrophysics Data System (ADS)
Kim, Seokhyeon; Parinussa, Robert M.; Liu, Yi Y.; Johnson, Fiona M.; Sharma, Ashish
2015-08-01
A method for combining two microwave satellite soil moisture products by maximizing the temporal correlation with a reference data set has been developed. The method was applied to two global soil moisture data sets, Japan Aerospace Exploration Agency (JAXA) and Land Parameter Retrieval Model (LPRM), retrieved from the Advanced Microwave Scanning Radiometer 2 observations for the period 2012-2014. A global comparison revealed superior results of the combined product compared to the individual products against the reference data set of ERA-Interim volumetric water content. The global mean temporal correlation coefficient of the combined product with this reference was 0.52 which outperforms the individual JAXA (0.35) as well as the LPRM (0.45) product. Additionally, the performance was evaluated against in situ observations from the International Soil Moisture Network. The combined data set showed a significant improvement in temporal correlation coefficients in the validation compared to JAXA and minor improvements for the LPRM product.
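The combination principle described above, choosing the blend of two retrievals that maximizes temporal correlation with a reference series, can be sketched with a simple grid search. The series below are synthetic and the grid search is a stand-in; the paper's actual estimation scheme may differ in detail.

```python
import numpy as np

def combine_max_corr(x1, x2, ref, weights=np.linspace(0.0, 1.0, 101)):
    """Combine two soil-moisture series as w*x1 + (1-w)*x2, picking the
    weight w that maximizes Pearson correlation with a reference series."""
    best_w, best_r = 0.0, -np.inf
    for w in weights:
        combo = w * x1 + (1.0 - w) * x2
        r = np.corrcoef(combo, ref)[0, 1]
        if r > best_r:
            best_w, best_r = w, r
    return best_w, best_r

# Illustrative series: two noisy retrievals of the same reference signal.
rng = np.random.default_rng(42)
ref = 0.25 + 0.1 * np.sin(np.linspace(0, 8 * np.pi, 300))  # volumetric water content
x1 = ref + rng.normal(0, 0.05, ref.size)   # noisier retrieval ("JAXA-like")
x2 = ref + rng.normal(0, 0.03, ref.size)   # less noisy retrieval ("LPRM-like")
w, r = combine_max_corr(x1, x2, ref)
```

Because the weight grid includes the endpoints w = 0 and w = 1, the combined series can never correlate worse with the reference than either input alone, which mirrors the improvement reported in the abstract.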
NASA Astrophysics Data System (ADS)
Didari, Shohreh; Ahmadi, Seyed Hamid
2018-05-01
Crop evapotranspiration (ET) is one of the main components in calculating the water balance in agricultural, hydrological, environmental, and climatological studies. Solar radiation (Rs) supplies the available energy for ET, and therefore, precise measurement of Rs is required for accurate ET estimation. However, measured Rs and ET are not available in many areas and must be estimated indirectly by empirical methods. The Angström-Prescott (AP) model is the most popular method for estimating Rs in areas where there are no measured data. In addition, locally calibrated AP coefficients are not yet available in many locations, and the default coefficients are used instead. In this study, we investigated different approaches for Rs and ET calculations. Daily measured Rs values at 14 stations across arid and semi-arid areas of Fars province in the south of Iran were used to calibrate the coefficients of the AP model. Results revealed that the calibrated AP coefficients were very different from, and higher than, the default values. In addition, reference ET (ETo) was estimated by the FAO56 Penman-Monteith (FAO56 PM) and FAO24-radiation methods using the measured Rs and was then compared with measured pan evaporation as an indication of the potential atmospheric demand. Interestingly, and unlike many previous studies that have suggested the FAO56 PM as the standard method for calculating ETo, the FAO24-radiation method with the measured Rs showed better agreement with the mean pan evaporation. Therefore, the FAO24-radiation method with the measured Rs was used as the reference method for the study area, which was also confirmed by previous studies based on lysimeter data. Moreover, the accuracy of the calibrated Rs in estimating ETo by the FAO56 PM and FAO24-radiation methods was investigated.
Results showed that the calibrated Rs improved the accuracy of ETo estimated by the FAO24-radiation method compared with the FAO24-radiation method using the measured Rs as the reference, whereas there was no improvement in the estimation of ETo by the FAO56 PM method. Moreover, the empirical coefficient (α) of the Priestley and Taylor (PT) ETo estimation method was calibrated against the reference method, and the results indicated α values of ca. 2 or higher, compared with the recommended α = 1.26, at all stations. An empirical equation based on yearly mean relative humidity was suggested for estimating α in the study area. Overall, this study showed that (1) the FAO24-radiation method with either the measured or calibrated Rs is more accurate than the FAO56 PM; (2) the spatially calibrated AP coefficients differ considerably from each other over an arid and semi-arid area and are higher than those proposed by the FAO56; (3) the original PT model is not applicable in arid and semi-arid areas and substantially underestimates ETo; and (4) the PT coefficient should be locally calibrated for each station over an arid and semi-arid area.
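The AP calibration step described above is a straight-line fit of Rs/Ra against relative sunshine duration n/N. A minimal sketch, assuming the standard model Rs = Ra · (a + b · n/N); the input arrays and ordinary-least-squares fit are illustrative, not the paper's exact procedure:

```python
# Hedged sketch: calibrate the Angstrom-Prescott coefficients (a, b) in
# Rs = Ra * (a + b * n/N) by ordinary least squares on daily observations.

def calibrate_ap(rs, ra, n_over_N):
    # regress y = Rs/Ra on x = n/N, i.e. fit y = a + b*x
    x = n_over_N
    y = [r / g for r, g in zip(rs, ra)]
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b
```

With the calibrated (a, b), Rs can then be estimated at stations where only sunshine duration is recorded.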
Measurement properties of gingival biotype evaluation methods.
Alves, Patrick Henry Machado; Alves, Thereza Cristina Lira Pacheco; Pegoraro, Thiago Amadei; Costa, Yuri Martins; Bonfante, Estevam Augusto; de Almeida, Ana Lúcia Pompéia Fraga
2018-06-01
There are numerous methods to measure the dimensions of the gingival tissue, but few studies have compared the effectiveness of one method over another. This study aimed to describe a new method and to estimate the validity of gingival biotype assessment with the aid of computed tomography scanning (CTS). In each patient, different methods of evaluating gingival thickness were used: transparency of the periodontal probe, transgingival probing, photography, and the new CTS method. Intrarater and interrater reliability considering the categorical classification of the gingival biotype were estimated with Cohen's kappa coefficient, intraclass correlation coefficient (ICC), and ANOVA (P < .05). The criterion validity of the CTS was determined using the transgingival method as the reference standard. Sensitivity and specificity values were computed along with their 95% CIs. Twelve patients underwent assessment of their gingival thickness. The highest agreement was found between the transgingival method and CTS (86.1%). The comparison between the categorical classifications of CTS and the transgingival method (reference standard) showed high specificity (94.92%) and low sensitivity (53.85%) for definition of a thin biotype. The new CTS assessment method for classifying gingival tissue thickness can be considered reliable and clinically useful to diagnose a thick biotype. © 2018 Wiley Periodicals, Inc.
Single-trial event-related potential extraction through one-unit ICA-with-reference
NASA Astrophysics Data System (ADS)
Lee, Wee Lih; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong
2016-12-01
Objective. In recent years, ICA has been one of the more popular methods for extracting event-related potentials (ERPs) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of an ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of a one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. Approach. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used to guide the one-unit ICA-R to extract the source signal of the desired ERP directly. Main results. Our results showed that, compared to traditional ICA, ICA-R is a more effective method for analysing ERPs because it avoids manual source selection and requires less computation, resulting in faster ERP extraction. Significance. In addition, since the method is automated, it reduces the risk of subjective bias in the ERP analysis. It is also a potential tool for extracting ERPs in online applications.
A Reference Method for Measuring Emissions of SVOCs in ...
Semivolatile organic compounds (SVOCs) are indoor air pollutants that may have significant adverse effects on human health, and emission of SVOCs from building materials and consumer products is of growing concern. Few chamber studies have been conducted due to the challenges associated with SVOC analysis and the lack of validation procedures. Thus there is an urgent need for a reliable and accurate chamber test method to verify the performance of these measurements. A reference method employing a specially-designed chamber and experimental protocol has been developed and is undergoing extensive evaluation. A pilot interlaboratory study (ILS) has been conducted with five laboratories performing chamber tests under identical conditions. Results showed inter-laboratory variation of 25% for SVOC emission rates, with greater agreement observed between intra-laboratory measurements for most of the participating laboratories. The measured concentration profiles also compared reasonably well to the mechanistic model, demonstrating the feasibility of the proposed reference method to independently assess laboratory performance and validate SVOC emission tests. There is an urgent need for improved understanding of the measurement uncertainties associated with SVOC emissions testing. The creation of specially-designed chambers and well-characterized materials serves as a critical prerequisite for improving the procedure used to measure SVOCs emitted from indoor
An Adaptive Filter for the Removal of Drifting Sinusoidal Noise Without a Reference.
Kelly, John W; Siewiorek, Daniel P; Smailagic, Asim; Wang, Wei
2016-01-01
This paper presents a method for filtering sinusoidal noise with a variable-bandwidth filter that is capable of tracking a sinusoid's drifting frequency. The method, which is based on the adaptive noise canceling (ANC) technique, will be referred to here as the adaptive sinusoid canceler (ASC). The ASC eliminates sinusoidal contamination by tracking its frequency and achieving a narrower bandwidth than typical notch filters. The detected frequency is used to digitally generate an internal reference instead of relying on an external one as ANC filters typically do. The filter's bandwidth adjusts to achieve faster and more accurate convergence. In this paper, the discussion and the data focus on physiological signals, specifically electrocorticographic (ECoG) neural data contaminated with power line noise, but the presented technique could be applicable to other recordings as well. On simulated data, the ASC was able to reliably track the noise's frequency, properly adjust its bandwidth, and outperform comparative methods including standard notch filters and an adaptive line enhancer. These results were reinforced by visual results obtained from real ECoG data. The ASC showed that it could be an effective method for increasing the signal-to-noise ratio in the presence of drifting sinusoidal noise, which is of significant interest for biomedical applications.
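The internal-reference idea at the heart of the ASC can be sketched with a fixed-frequency LMS canceler: synthesize sine/cosine references at the detected frequency, adapt their amplitudes so their sum tracks the interference, and output the residual. This is a simplified illustration under our own naming; the ASC's frequency tracking and variable-bandwidth logic are omitted.

```python
# Hedged sketch: LMS cancelation of a sinusoid using an internally generated
# reference (no external reference channel needed, as in ANC-style filters).
import math

def cancel_sinusoid(x, freq, fs, mu=0.01):
    w_s, w_c = 0.0, 0.0              # adaptive weights for sine/cosine refs
    out = []
    for k, sample in enumerate(x):
        t = k / fs
        rs = math.sin(2 * math.pi * freq * t)   # internal sine reference
        rc = math.cos(2 * math.pi * freq * t)   # internal cosine reference
        est = w_s * rs + w_c * rc               # current noise estimate
        e = sample - est                        # error = cleaned sample
        w_s += 2 * mu * e * rs                  # LMS weight updates
        w_c += 2 * mu * e * rc
        out.append(e)
    return out
```

On a pure sinusoid at the assumed frequency, the residual decays toward zero as the two weights converge to the interference's in-phase and quadrature amplitudes.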
Rigorous Combination of GNSS and VLBI: How it Improves Earth Orientation and Reference Frames
NASA Astrophysics Data System (ADS)
Lambert, S. B.; Richard, J. Y.; Bizouard, C.; Becker, O.
2017-12-01
Current reference series (C04) of the International Earth Rotation and Reference Systems Service (IERS) are produced by a weighted combination of Earth orientation parameter (EOP) time series built up by the combination centers of each technique (VLBI, GNSS, laser ranging, DORIS). In the future, we plan to derive EOP from a rigorous combination of the normal equation systems of the four techniques. We present here the results of a rigorous combination of VLBI and GNSS pre-reduced, constraint-free normal equations with the DYNAMO geodetic analysis software package developed and maintained by the French GRGS (Groupe de Recherche en Géodésie Spatiale). The normal equations used are those produced separately by the IVS and IGS combination centers, to which we apply our own minimal constraints. We address the usefulness of such a method with respect to the classical, a posteriori combination method, and we show whether EOP determinations are improved. In particular, we implement external validations of the EOP series based on comparison with geophysical excitation and examination of the covariance matrices. Finally, we address the potential of the technique for the next-generation celestial reference frames, which are currently determined by VLBI only.
USA hCG reference service, 10-year report.
Cole, Laurence A; Laidler, Laura L; Muller, Carolyn Y
2010-08-01
The USA hCG Reference Service has been dealing with cases of persistent low levels of hCG and gestational trophoblastic diseases for 10 years. Here we present the complete experience. Total hCG in serum and urine was measured using the Siemens Immulite 1000 assay. Hyperglycosylated hCG, nicked hCG, free β-subunit and β-core fragment were measured using microtiter-plate assays with antibodies B152, B151, FBT11 and B210, respectively. The USA hCG Reference Service has identified 83 cases of false-positive hCG, 71 cases of aggressive gestational trophoblastic disease (GTD), 52 cases of minimally invasive GTD, 168 cases of quiescent GTD and 22 cases of placental site trophoblastic tumor (PSTT). In addition, 103 cases of pituitary hCG have been identified, 60 cases of nontrophoblastic tumor, 4 cases of inherited hCG and 2 cases of Munchausen's syndrome, for a total of 565 cases. Multiple new methods are described and tested for diagnosing all of these disorders. The USA hCG Reference Service experience shows new methods for detecting multiple hCG-related disorders and recommends new approaches for detecting them. 2010 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Crop Field Reflectance Measurements
NASA Astrophysics Data System (ADS)
Weber, Christian; Schinca, Daniel C.; Tocho, Jorge O.; Videla, Fabian
2008-04-01
We present in this paper the results of reflectance measurements performed with a three-band passive radiometer with independent channels for solar irradiance reference. We analyze the comparative operation of the traditional method, which alternately measures the field (downward-looking) and a white reference panel (also downward-looking), and the new approach, which uses duplicated spectral channels, each with its own diffuser pointing up toward the zenith (upward-looking). The results indicated that the latter method is more suitable for use with passive sensors under rapidly changing atmospheric conditions (such as clouds, dust, mist, smog and other scatterers), since a more reliable synchronous record of reference and incident light is achieved. Besides, having separate channels for the reference and the signal allows better balancing of the amplifier gains for each spectral channel. We show the results obtained in the determination of the Normalized Difference Vegetation Index (NDVI) for the 2006 and 2007 field experiments concerning weed detection and assessment of fertilizer levels in wheat, to refine sensor-based fertilizer nitrogen rate recommendations. We also show the variation of the radiometric normalization measurements taken at noon (nadir solar position) over the whole culture cycle for two seasons (winter and spring).
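With duplicated channels, reflectance and NDVI follow directly from synchronous signal/reference ratios. A minimal sketch, assuming unit calibration factors and our own function names:

```python
# Hedged sketch: band reflectance as the ratio of the downward-looking signal
# to its synchronous upward-looking reference, and NDVI from red/NIR bands.

def reflectance(signal, reference):
    # calibration factors assumed to be 1 for simplicity
    return signal / reference

def ndvi(red_sig, red_ref, nir_sig, nir_ref):
    red = reflectance(red_sig, red_ref)
    nir = reflectance(nir_sig, nir_ref)
    return (nir - red) / (nir + red)
```

Because each band carries its own reference, a passing cloud scales signal and reference together and largely cancels out of the ratio.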
Araki, Hiromitsu; Takada, Naoki; Niwase, Hiroaki; Ikawa, Shohei; Fujiwara, Masato; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2015-12-01
We propose real-time time-division color electroholography using a single graphics processing unit (GPU) and a simple synchronization system for the reference light. To facilitate real-time time-division color electroholography, we developed a light-emitting diode (LED) controller with a universal serial bus (USB) module and the drive circuit for the reference light. A one-chip RGB LED connected to a personal computer via the LED controller was used as the reference light. A single GPU calculates three computer-generated holograms (CGHs), one each for red, green, and blue, in each frame of a three-dimensional (3D) movie. After CGH calculation on the GPU, the CPU synchronizes the CGH display with the color switching of the one-chip RGB LED via the LED controller. Consequently, we achieved real-time time-division color electroholography for a 3D object consisting of around 1000 points per color when an NVIDIA GeForce GTX TITAN was used as the GPU. Furthermore, we implemented the proposed method on various GPUs. The experimental results showed that the proposed method was effective for various GPUs.
Farooqui, Javed Hussain; Sharma, Mansi; Koul, Archana; Dutta, Ranjan; Shroff, Noshir Minoo
2017-01-01
PURPOSE: The aim of this study is to compare two different methods of analysis of preoperative reference marking for toric intraocular lens (IOL) implantation after marking with an electronic marker. SETTING/VENUE: Cataract and IOL Implantation Service, Shroff Eye Centre, New Delhi, India. PATIENTS AND METHODS: Fifty-two eyes of thirty patients planned for toric IOL implantation were included in the study. All patients had preoperative marking performed with an electronic preoperative two-step toric IOL reference marker (ASICO AE-2929). Reference marks were placed at the 3- and 9-o'clock positions. Marks were analyzed with two systems. First, slit-lamp photographs were taken and analyzed using Adobe Photoshop (version 7.0). Second, the Tracey iTrace Visual Function Analyzer (version 5.1.1) was used to capture a corneal topography examination, and the position of the marks was noted. The amount of alignment error was calculated. RESULTS: The mean absolute rotation error was 2.38 ± 1.78° by Photoshop and 2.87 ± 2.03° by iTrace, a difference that was not statistically significant (P = 0.215). Nearly 72.7% of eyes by Photoshop and 61.4% by iTrace had a rotation error ≤3° (P = 0.359), and 90.9% of eyes by Photoshop and 81.8% by iTrace had a rotation error ≤5° (P = 0.344). There was no significant difference in the absolute amount of rotation between eyes when analyzed by either method. CONCLUSIONS: The difference in reference mark positions when analyzed by the two systems suggests the presence of varying cyclotorsion at different points in time. Both analysis methods showed approximately 3° of alignment error, which could contribute to a 10% loss of the astigmatic correction of a toric IOL. This can be further compounded by intraoperative marking errors and the final placement of the IOL in the bag. PMID:28757694
van der Vorm, Lisa N; Hendriks, Jan C M; Laarakkers, Coby M; Klaver, Siem; Armitage, Andrew E; Bamberg, Alison; Geurts-Moespot, Anneke J; Girelli, Domenico; Herkert, Matthias; Itkonen, Outi; Konrad, Robert J; Tomosugi, Naohisa; Westerman, Mark; Bansal, Sukhvinder S; Campostrini, Natascia; Drakesmith, Hal; Fillet, Marianne; Olbina, Gordana; Pasricha, Sant-Rayn; Pitts, Kelly R; Sloan, John H; Tagliaro, Franco; Weykamp, Cas W; Swinkels, Dorine W
2016-07-01
Absolute plasma hepcidin concentrations measured by various procedures differ substantially, complicating interpretation of results and rendering reference intervals method dependent. We investigated the degree of equivalence achievable by harmonization and the identification of a commutable secondary reference material to accomplish this goal. We applied technical procedures to achieve harmonization developed by the Consortium for Harmonization of Clinical Laboratory Results. Eleven plasma hepcidin measurement procedures (5 mass spectrometry based and 6 immunochemical based) quantified native individual plasma samples (n = 32) and native plasma pools (n = 8) to assess analytical performance and current and achievable equivalence. In addition, 8 types of candidate reference materials (3 concentrations each, n = 24) were assessed for their suitability, most notably in terms of commutability, to serve as secondary reference material. Absolute hepcidin values and reproducibility (intrameasurement procedure CVs 2.9%-8.7%) differed substantially between measurement procedures, but all were linear and correlated well. The current equivalence (intermeasurement procedure CV 28.6%) between the methods was mainly attributable to differences in calibration and could thus be improved by harmonization with a common calibrator. Linear regression analysis and standardized residuals showed that a candidate reference material consisting of native lyophilized plasma with cryolyoprotectant was commutable for all measurement procedures. Mathematically simulated harmonization with this calibrator resulted in a maximum achievable equivalence of 7.7%. The secondary reference material identified in this study has the potential to substantially improve equivalence between hepcidin measurement procedures and contributes to the establishment of a traceability chain that will ultimately allow standardization of hepcidin measurement results. © 2016 American Association for Clinical Chemistry.
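Harmonization of the kind described above amounts to mapping each measurement procedure's readings through a calibration fit against values assigned to a common, commutable reference material. A minimal sketch; the linear fit and the numbers in the test are illustrative assumptions, not the consortium's statistical protocol:

```python
# Hedged sketch: per-method recalibration against a common calibrator.
# After mapping, calibration differences between methods shrink, which is
# the mechanism by which a commutable reference material improves equivalence.

def linear_fit(x, y):
    # ordinary least squares for y = a + b*x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def harmonize(readings, assigned):
    # map one method's readings onto the common value scale of the calibrator
    a, b = linear_fit(readings, assigned)
    return [a + b * r for r in readings]
```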
Paten, A M; Pain, S J; Peterson, S W; Blair, H T; Kenyon, P R; Dearden, P K; Duncan, E J
2014-08-01
The mammary gland is a complex tissue consisting of multiple cell types which, over the lifetime of an animal, go through repeated cycles of development associated with pregnancy, lactation and involution. The mammary gland is also known to be sensitive to maternal programming by environmental stimuli such as nutrition. The molecular basis of these adaptations is of significant interest, but requires robust methods to measure gene expression. Reverse-transcription quantitative PCR (RT-qPCR) is commonly used to measure gene expression, and is currently the method of choice for validating genome-wide expression studies. RT-qPCR requires the selection of reference genes that are stably expressed over physiological states and treatments. In this study we identify suitable reference genes to normalize RT-qPCR data for the ovine mammary gland in two physiological states: late pregnancy and lactation. Biopsies were collected from offspring of ewes that had been subjected to different nutritional paradigms during pregnancy to examine effects of maternal programming on the mammary gland of the offspring. We evaluated eight candidate reference genes and found that two reference genes (PRPF3 and CUL1) are required for normalizing RT-qPCR data from pooled RNA samples, but five reference genes are required for analyzing gene expression in individual animals (SENP2, EIF6, MRPL39, ATP1A1, CUL1). Using these stable reference genes, we showed that TET1, a key regulator of DNA methylation, is responsive to maternal programming and physiological state. The identification of these novel reference genes will be of utility to future studies of gene expression in the ovine mammary gland. Copyright © 2014 the American Physiological Society.
Purposes and methods of scoring earthquake forecasts
NASA Astrophysics Data System (ADS)
Zhuang, J.
2010-12-01
Studies of earthquake prediction and forecasting serve two purposes: one is to provide a systematic estimate of earthquake risk in a particular region and period in order to advise governments and enterprises on disaster reduction; the other is to search for reliable precursors that can improve earthquake prediction or forecasting. For the first purpose, a complete score is necessary; for the latter, a partial score, which evaluates whether the forecasts or predictions offer advantages over a well-known model, is sufficient. This study reviews different scoring methods for evaluating the performance of earthquake predictions and forecasts. In particular, the recently developed gambling scoring method shows its capacity to find strengths in an earthquake prediction algorithm or model that are absent from a reference model, even if its overall performance is no better than that of the reference model.
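Scores of this family compare a forecast against a reference model. As a minimal, standard example of a reference-relative score (related in spirit to, but not the same as, the gambling score discussed above), the Poisson log-likelihood gain per space-time bin can be sketched as:

```python
# Hedged sketch: log-likelihood gain of a forecast over a reference model,
# assuming independent Poisson counts per bin. Positive gain means the
# forecast explains the observed counts better than the reference rates.
import math

def poisson_loglik(rates, counts):
    # Poisson log-likelihood up to the count-only log(n!) term
    return sum(n * math.log(lam) - lam for lam, n in zip(rates, counts))

def information_gain(model_rates, ref_rates, counts):
    return poisson_loglik(model_rates, counts) - poisson_loglik(ref_rates, counts)
```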
NASA Astrophysics Data System (ADS)
Benítez, Hernán D.; Ibarra-Castanedo, Clemente; Bendada, AbdelHakim; Maldague, Xavier; Loaiza, Humberto; Caicedo, Eduardo
2008-01-01
It is well known that thermographic non-destructive testing methods based on the thermal contrast are strongly affected by non-uniform heating at the surface. Hence, the results obtained from these methods depend considerably on the chosen reference point. The differential absolute contrast (DAC) method was developed to eliminate the need to determine a reference point, by defining the thermal contrast with respect to an ideal sound area. Although very useful at early times, DAC accuracy decreases as the heat front approaches the sample's rear face. We propose a new DAC version that explicitly introduces the sample thickness using thermal quadrupole theory, and we show that the new DAC's range of validity extends to long times while remaining valid at short times. This new contrast is used for defect quantification in composite, Plexiglas™ and aluminum samples.
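The classical DAC that the paper extends can be sketched from the 1D semi-infinite solution: after a Dirac heat pulse, a sound area cools as 1/√t, so a sound-area reference curve can be projected from a single early time t' and subtracted. This sketch is the pre-thickness-correction version under our own naming; the quadrupole-based finite-thickness DAC of the paper is not reproduced.

```python
# Hedged sketch of classical DAC: subtract the projected sound-area decay
# (proportional to 1/sqrt(t)) anchored at an early reference time t'.
import math

def dac(thermogram, times, t_prime_idx):
    tp = times[t_prime_idx]
    T_tp = thermogram[t_prime_idx]
    # DAC(t) = T(t) - sqrt(t'/t) * T(t'); zero for an ideal sound area
    return [T - math.sqrt(tp / t) * T_tp for T, t in zip(thermogram, times)]
```

For a defect-free (sound) pixel the contrast stays near zero; a subsurface defect shows up as a late-time departure from zero, without any operator-chosen sound reference point.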
Convex formulation of multiple instance learning from positive and unlabeled bags.
Bao, Han; Sakai, Tomoya; Sato, Issei; Sugiyama, Masashi
2018-05-24
Multiple instance learning (MIL) is a variation of traditional supervised learning problems in which data (referred to as bags) are composed of sub-elements (referred to as instances) and only bag labels are available. MIL has a variety of applications such as content-based image retrieval, text categorization, and medical diagnosis. Most previous work on MIL assumes that training bags are fully labeled. However, it is often difficult to obtain a sufficient number of labeled bags in practical situations, while many unlabeled bags are available. A learning framework called PU classification (positive and unlabeled classification) can address this problem. In this paper, we propose a convex PU classification method to solve an MIL problem. We experimentally show that the proposed method achieves better performance with significantly lower computation costs than an existing method for PU-MIL. Copyright © 2018 Elsevier Ltd. All rights reserved.
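PU classification typically rests on an unbiased risk estimator built from positive and unlabeled data alone, using the class prior π: R(f) = π·E_P[ℓ(f)] + E_U[ℓ(−f)] − π·E_P[ℓ(−f)]. A minimal sketch of that estimator with the logistic loss; the paper's convex bag-level MIL formulation differs in its details, and the names here are ours.

```python
# Hedged sketch: unbiased PU risk estimate from classifier scores on
# positive and unlabeled samples, given the positive class prior.
import math

def logistic_loss(z):
    # numerically stable log(1 + exp(-z))
    return math.log1p(math.exp(-abs(z))) + max(-z, 0.0)

def pu_risk(scores_pos, scores_unl, prior):
    # pi * E_P[l(f)] + E_U[l(-f)] - pi * E_P[l(-f)]
    rp_pos = sum(logistic_loss(s) for s in scores_pos) / len(scores_pos)
    rp_neg = sum(logistic_loss(-s) for s in scores_pos) / len(scores_pos)
    ru_neg = sum(logistic_loss(-s) for s in scores_unl) / len(scores_unl)
    return prior * rp_pos + ru_neg - prior * rp_neg
```

Minimizing this quantity over a model's parameters trains a classifier without any labeled negatives; with suitable losses the objective stays convex, which is the property the paper exploits.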
Traceable Coulomb blockade thermometry
NASA Astrophysics Data System (ADS)
Hahtela, O.; Mykkänen, E.; Kemppinen, A.; Meschke, M.; Prunnila, M.; Gunnarsson, D.; Roschier, L.; Penttilä, J.; Pekola, J.
2017-02-01
We present a measurement and analysis scheme for determining traceable thermodynamic temperature at cryogenic temperatures using Coulomb blockade thermometry. The uncertainty of the electrical measurement is improved by utilizing two sampling digital voltmeters instead of the traditional lock-in technique. The remaining uncertainty is dominated by that of the numerical analysis of the measurement data. Two analysis methods are demonstrated: numerical fitting of the full conductance curve and measuring the height of the conductance dip. The complete uncertainty analysis shows that using either analysis method the relative combined standard uncertainty (k = 1) in determining the thermodynamic temperature in the temperature range from 20 mK to 200 mK is below 0.5%. In this temperature range, both analysis methods produced temperature estimates that deviated by 0.39% to 0.67% from the reference temperatures provided by a superconducting reference point device calibrated against the Provisional Low Temperature Scale of 2000.
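A third, commonly quoted CBT relation (lowest order, without the higher-order corrections precision work like this requires) links the full width at half minimum of the conductance dip to temperature: V½ ≈ 5.439·N·kB·T/e for an array of N tunnel junctions. A minimal sketch, ours rather than the paper's analysis code:

```python
# Hedged sketch: lowest-order Coulomb blockade thermometry, solving the
# half-width relation V_1/2 = 5.439 * N * kB * T / e for temperature.
KB = 1.380649e-23     # Boltzmann constant, J/K (exact SI value)
E = 1.602176634e-19   # elementary charge, C (exact SI value)

def cbt_temperature(v_half, n_junctions):
    # thermodynamic temperature from the measured dip half-width (volts)
    return E * v_half / (5.439 * n_junctions * KB)
```

Because the relation involves only fundamental constants and a junction count, CBT is a primary thermometer; the paper's full-curve fit and dip-height methods refine this with higher-order terms.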
Blair, K. S.; Otero, M.; Teng, C.; Geraci, M.; Lewis, E.; Hollon, N.; Blair, R. J. R.; Ernst, Monique; Grillon, C.; Pine, D. S.
2016-01-01
Background Social anxiety disorder involves fear of social objects or situations. Social referencing may play an important role in the acquisition of this fear and could be a key determinant in future biomarkers and treatment pathways. However, the neural underpinnings mediating such learning in social anxiety are unknown. Using event-related functional magnetic resonance imaging, we examined social reference learning in social anxiety disorder. Specifically, would patients with the disorder show increased amygdala activity during social reference learning, and further, following social reference learning, show particularly increased response to objects associated with other people’s negative reactions? Method A total of 32 unmedicated patients with social anxiety disorder and 22 age-, intelligence quotient- and gender-matched healthy individuals responded to objects that had become associated with others’ fearful, angry, happy or neutral reactions. Results During the social reference learning phase, a significant group × social context interaction revealed that, relative to the comparison group, the social anxiety group showed a significantly greater response in the amygdala, as well as rostral, dorsomedial and lateral frontal and parietal cortices during the social, relative to non-social, referencing trials. In addition, during the object test phase, relative to the comparison group, the social anxiety group showed increased bilateral amygdala activation to objects associated with others’ fearful reactions, and a trend towards decreased amygdala activation to objects associated with others’ happy and neutral reactions. Conclusions These results suggest perturbed observational learning in social anxiety disorder. In addition, they further implicate the amygdala and dorsomedial prefrontal cortex in the disorder, and underscore their importance in future biomarker developments. PMID:27476529
Peng, Rongxue; Zhang, Rui; Lin, Guigao; Yang, Xin; Li, Ziyang; Zhang, Kuo; Zhang, Jiawei; Li, Jinming
2017-09-01
The echinoderm microtubule-associated protein-like 4 and anaplastic lymphoma kinase (ALK) receptor tyrosine kinase (EML4-ALK) rearrangement is an important biomarker that plays a pivotal role in therapeutic decision making for non-small-cell lung cancer (NSCLC) patients. Ensuring accuracy and reproducibility of EML4-ALK testing by fluorescence in situ hybridization, immunohistochemistry, RT-PCR, and next-generation sequencing requires reliable reference materials for monitoring assay sensitivity and specificity. Herein, we developed novel reference materials for various kinds of EML4-ALK testing. CRISPR/Cas9 was used to edit various NSCLC cell lines containing EML4-ALK rearrangement variants 1, 2, and 3a/b. After s.c. inoculation, the formalin-fixed, paraffin-embedded (FFPE) samples from xenografts were prepared and tested for suitability as candidate reference materials by fluorescence in situ hybridization, immunohistochemistry, RT-PCR, and next-generation sequencing. Sample validation and commutability assessments showed that all types of FFPE samples derived from xenograft tumors have typical histological structures, and EML4-ALK testing results were similar to the clinical ALK-positive NSCLC specimens. Among the four methods for EML4-ALK detection, the validation test showed 100% concordance. Furthermore, these novel FFPE reference materials showed good stability and homogeneity. Without limitations on variant types and production, our novel FFPE samples based on CRISPR/Cas9 editing and xenografts are suitable as candidate reference materials for the validation, verification, internal quality control, and proficiency testing of EML4-ALK detection. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Concept for an off-line gain stabilisation method.
Pommé, S; Sibbens, G
2004-01-01
Conceptual ideas are presented for an off-line gain stabilisation method for spectrometry, in particular for alpha-particle spectrometry at low count rate. The method involves list mode storage of individual energy and time stamp data pairs. The 'Stieltjes integral' of measured spectra with respect to a reference spectrum is proposed as an indicator for gain instability. 'Exponentially moving averages' of the latter show the gain shift as a function of time. With this information, the data are relocated stochastically on a point-by-point basis.
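One way to realize this concept (an illustrative reading of the idea, not the authors' exact estimator): map each list-mode event's energy through the cumulative distribution of a reference spectrum, a probability-integral-transform view of the 'Stieltjes integral', and track an exponentially moving average of the result over time. A stable gain keeps the average near 0.5; gain drift pushes it away.

```python
# Hedged sketch: EMA gain-instability indicator from list-mode energies,
# using the reference spectrum's empirical CDF.
import bisect

def ref_cdf(sorted_ref, e):
    # fraction of reference events at or below energy e
    return bisect.bisect_right(sorted_ref, e) / len(sorted_ref)

def ema_gain_indicator(events, sorted_ref, alpha=0.05):
    # EMA of CDF-transformed energies; ~0.5 while the gain is stable
    ema, trace = 0.5, []
    for e in events:
        ema = (1 - alpha) * ema + alpha * ref_cdf(sorted_ref, e)
        trace.append(ema)
    return trace
```

The trace can then be used point-by-point, as in the abstract, to relocate events stochastically and undo the estimated gain shift.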
Distributed Combinatorial Optimization Using Privacy on Mobile Phones
NASA Astrophysics Data System (ADS)
Ono, Satoshi; Katayama, Kimihiro; Nakayama, Shigeru
This paper proposes a method for distributed combinatorial optimization that uses mobile phones as computers. In the proposed method, an ordinary computer generates solution candidates and the mobile phones evaluate them by referring to private information and preferences. Users therefore do not have to send their private data to any other computer and need not refrain from entering their preferences, and so can obtain satisfactory solutions. Experimental results showed that the proposed method solved room assignment problems without sending users' private information to a server.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5 E Table E-1 to Subpart E of Part 53... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10-2.5 Pt. 53...
Study on the evaluation method for fault displacement based on characterized source model
NASA Astrophysics Data System (ADS)
Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.
2016-12-01
IAEA Specific Safety Guide (SSG) 9 describes that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is provided to decide conclusively, by the deterministic methodology, that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX on realizing the seismic hazard assessment for nuclear facilities described in SSG-9, which shows the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, it is required that important nuclear facilities be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these circumstances, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance the safety of nuclear facilities. We are studying deterministic and probabilistic methods through tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, of inland crustal earthquakes in which fault displacements arose. In this study, we introduce the concept of the evaluation methods for fault displacement. We then show some tentative analysis results for the deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) which can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method that combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on a characterized source model. This research was part of the 2015 research project `Development of evaluating method for fault displacement` by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.
Direct-to-digital holography reduction of reference hologram noise and fourier space smearing
Voelkl, Edgar
2006-06-27
Systems and methods are described for reduction of reference hologram noise and reduction of Fourier space smearing, especially in the context of direct-to-digital holography (off-axis interferometry). A method of reducing reference hologram noise includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference image waves; and transforming the corresponding plurality of reference image waves into a reduced noise reference image wave. A method of reducing smearing in Fourier space includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference complex image waves; transforming the corresponding plurality of reference image waves into a reduced noise reference complex image wave; recording a hologram of an object; processing the hologram of the object into an object complex image wave; and dividing the complex image wave of the object by the reduced noise reference complex image wave to obtain a reduced smearing object complex image wave.
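The two operations described, averaging several reference image waves and then dividing the object wave by the averaged reference, can be illustrated with a toy NumPy sketch (all shapes, phases, and noise levels here are invented for illustration, not taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (16, 16)

# a 'true' reference image wave with a constant carrier phase
true_ref = np.exp(1j * 0.3) * np.ones(shape)

# several noisy reference holograms processed into reference image waves
noisy_refs = [
    true_ref + 0.05 * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))
    for _ in range(20)
]

# averaging yields the reduced-noise reference complex image wave
ref_avg = np.mean(noisy_refs, axis=0)

# the object wave carries the same reference carrier plus its own phase
obj = true_ref * np.exp(1j * 0.1)

# dividing by the averaged reference removes the shared carrier,
# leaving (approximately) the object phase
corrected = obj / ref_avg
mean_phase = float(np.angle(corrected).mean())
```

Averaging N independent reference waves reduces the noise standard deviation by roughly a factor of sqrt(N), which is why the recovered phase is close to the nominal object phase of 0.1 rad.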
Ramírez-Vélez, Robinson; Tordecilla-Sanders, Alejandra; Correa-Bautista, Jorge Enrique; González-Ruíz, Katherine; González-Jiménez, Emilio; Triana-Reina, Hector Reynaldo; García-Hermoso, Antonio; Schmidt-RioValle, Jacqueline
2018-01-01
To verify the validity of multi-frequency bioelectrical impedance analysis (mBCA) for predicting body fat percentage (BF%) in overweight/obese adults using dual-energy X-ray absorptiometry (DXA) as the reference method. Forty-eight adults participated (54% women, mean age = 41.0 ± 7.3 years old). The Pearson's correlation coefficient was used to evaluate the correlation between BIA and BF% assessed by DXA. The concordance between BF% measured by both methods was obtained with Lin's concordance correlation coefficient and Bland-Altman difference plots. Measures of BF% were estimated as 39.0 (SD = 6.1) and 38.3 (SD = 6.5) using DXA and mBCA, respectively. The Pearson's correlation coefficient reflected a strong correlation (r =.921, P = .001). The paired t-test showed a significant mean difference between these methods for obese men BF% of -0.6 [(SD 1.95; 95% CI = -4.0 to 3.0), P =.037]. Overall, the bias of the mBCA was -0.6 [(SD 2.2; 95% CI = -5.0 to 3.7), P =.041], which indicated that the mBCA method significantly underestimated BF% in comparison to the reference method. Finally, in both genders, Lin's concordance correlation coefficient showed a strong agreement. More specifically the DXA value was ρc = 0.943 (95% CI = 0.775 to 0.950) and the mBCA value was ρc = 0.948 (95% CI = 0.778 to 0.978). Our analysis showed a strong agreement between the two methods as reflected in the range of BF%. These results show that mBCA and DXA are comparable methods for measuring body composition with higher body fat percentages. However, due to broad limits of agreement, we can only recommend mBCA for groups of populations. © 2017 Wiley Periodicals, Inc.
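The Bland-Altman agreement statistics used above (bias and limits of agreement) can be computed as follows (a sketch; the 1.96·SD limits assume approximately normally distributed differences):

```python
import statistics

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

For example, `bland_altman(dxa_values, mbca_values)` would return the mean difference between the reference and test methods together with the interval within which ~95% of individual differences are expected to fall.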
ERIC Educational Resources Information Center
Wang, Wen-Chung; Su, Ya-Hui
2004-01-01
In this study we investigated the effects of the average signed area (ASA) between the item characteristic curves of the reference and focal groups and three test purification procedures on the uniform differential item functioning (DIF) detection via the Mantel-Haenszel (M-H) method through Monte Carlo simulations. The results showed that ASA,…
Research on a high-precision calibration method for tunable lasers
NASA Astrophysics Data System (ADS)
Xiang, Na; Li, Zhengying; Gui, Xin; Wang, Fan; Hou, Yarong; Wang, Honghai
2018-03-01
Tunable lasers are widely used in the field of optical fiber sensing, but nonlinear tuning exists even for zero external disturbance and limits the accuracy of the demodulation. In this paper, a high-precision calibration method for tunable lasers is proposed. A comb filter is introduced and the real-time output wavelength and scanning rate of the laser are calibrated by linearly fitting several time-frequency reference points obtained from it, while the beat signal generated by the auxiliary interferometer is interpolated and frequency multiplied to find more accurate zero crossing points, with these points being used as wavelength counters to resample the comb signal to correct the nonlinear effect, which ensures that the time-frequency reference points of the comb filter are linear. A stability experiment and a strain sensing experiment verify the calibration precision of this method. The experimental result shows that the stability and wavelength resolution of the FBG demodulation can reach 0.088 pm and 0.030 pm, respectively, using a tunable laser calibrated by the proposed method. We have also compared the demodulation accuracy in the presence or absence of the comb filter, with the result showing that the introduction of the comb filter results in a 15-fold wavelength resolution enhancement.
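The zero-crossing resampling idea can be illustrated with synthetic signals (all frequencies and sample counts here are invented; the real method also interpolates and frequency-multiplies the beat signal before extracting crossings):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2000)
beat = np.sin(2 * np.pi * 25 * t + 0.3)   # auxiliary-interferometer beat (toy)
comb = np.cos(2 * np.pi * 5 * t)          # comb-filter signal to be resampled

# locate sign changes, then linearly interpolate the exact crossing instants
s = np.sign(beat)
idx = np.where(np.diff(s) != 0)[0]
tz = t[idx] - beat[idx] * (t[idx + 1] - t[idx]) / (beat[idx + 1] - beat[idx])

# the zero crossings act as equally spaced frequency markers:
# resample the comb signal at those instants
resampled = np.interp(tz, t, comb)
```

Because the beat's zero crossings are equally spaced in optical frequency rather than in time, resampling the comb signal at `tz` linearizes the sweep in the same way the abstract describes.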
Validation of Modifications to the ANSR(®) Listeria Method for Improved Ease of Use and Performance.
Caballero, Oscar; Alles, Susan; Le, Quynh-Nhi; Gray, R Lucas; Hosking, Edan; Pinkava, Lisa; Norton, Paul; Tolan, Jerry; Mozola, Mark; Rice, Jennifer; Chen, Yi; Odumeru, Joseph; Ryser, Elliot
2016-01-01
A study was conducted to validate minor reagent formulation, enrichment, and procedural changes to the ANSR(®) Listeria method, Performance-Tested Method(SM) 101202. In order to improve ease of use and diminish risk of amplicon contamination, the lyophilized reagent components were reformulated for increased solubility, thus eliminating the need to mix by pipetting. In the alternative procedure, an aliquot of the lysate is added to lyophilized ANSR reagents, immediately capped, and briefly mixed by vortexing. When three foods (hot dogs, Mexican-style cheese, and cantaloupe) and sponge samples taken from a stainless steel surface were tested, significant differences in performance between the ANSR and U.S. Food and Drug Administration Bacteriological Analytical Manual or U.S. Department of Agriculture, Food Safety and Inspection Service Microbiology Laboratory Guidebook reference culture procedures were seen with hot dogs and Mexican-style cheese after 16 h enrichment, with the reference methods producing more positive results. After 24 h enrichment, however, there were no significant differences in method performance for any of the four matrixes tested. Robustness testing was also conducted, with variations to lysis buffer volume, lysis time, and sample volume having no demonstrable effect on assay results. Accelerated stability testing was carried out over a 10-week period and showed no diminishment in assay performance. A second phase of the study examined performance of the ANSR assay following enrichment in a new medium, LESS Plus broth, designed for use with all food and environmental sample types. With the alternative LESS Plus broth, there were no significant differences in performance between the ANSR method and the reference culture procedures for any of the matrixes tested after either 16 or 24 h enrichment, although 24 h enrichment is recommended for hot dogs due to higher sensitivity. 
Results of inclusivity and exclusivity testing using LESS Plus broth showed that the ANSR assay is highly specific, with 100% expected results for target and nontarget bacteria.
Zhou, Jianhua; Ren, Kangning; Zheng, Yizhe; Su, Jing; Zhao, Yihua; Ryan, Declan; Wu, Hongkai
2010-09-01
This report describes a convenient method for the fabrication of a miniaturized, reliable Ag/AgCl reference electrode with nanofluidic channels acting as a salt bridge that can be easily integrated into microfluidic chips. The Ag/AgCl reference electrode shows high stability with millivolt variations. We demonstrated the application of this reference electrode in a portable microfluidic chip that is connected to a USB-port microelectrochemical station and to a computer for data collection and analysis. The low fabrication cost of the chip with the potential for mass production makes it disposable and an excellent candidate for real-world analysis and measurement. We used the chip to quantitatively analyze the concentrations of heavy metal ions (Cd(2+) and Pb(2+)) in sea water. We believe that the Ag/AgCl reference microelectrode and the portable electrochemical system will be of interest to people in microfluidics, environmental science, clinical diagnostics, and food research.
NASA Astrophysics Data System (ADS)
Bai, Chao-ying; He, Lei-yu; Li, Xing-wang; Sun, Jia-yu
2018-05-01
To conduct forward modeling and simultaneous inversion in a complex geological model, including an irregular topography (or irregular reflector or velocity anomaly), in this paper we combined our previous multiphase arrival tracking method (referred to as the triangular shortest-path method, TSPM) in a triangular (2D) or tetrahedral (3D) cell model with a linearized inversion solver (a damped minimum-norm and constrained least squares problem solved using the conjugate gradient method, DMNCLS-CG) to formulate a simultaneous travel time inversion method for updating both velocity and reflector geometry by using multiphase arrival times. In the triangular/tetrahedral cells, we deduced the partial derivative of the velocity variation with respect to the depth change of the reflector. The numerical simulation results show that the computational accuracy can be tuned to high precision in forward modeling and that the irregular velocity anomaly and reflector geometry can be accurately captured in the simultaneous inversion, because triangular/tetrahedral cells can easily be used to stitch the irregular topography or subsurface interface.
Kang, Yeona; Mozley, P David; Verma, Ajay; Schlyer, David; Henchcliffe, Claire; Gauthier, Susan A; Chiao, Ping C; He, Bin; Nikolopoulou, Anastasia; Logan, Jean; Sullivan, Jenna M; Pryor, Kane O; Hesterman, Jacob; Kothari, Paresh J; Vallabhajosula, Shankar
2018-05-04
Neuroinflammation has been implicated in the pathophysiology of Parkinson's disease (PD), which might be influenced by successful neuroprotective drugs. The uptake of [11C](R)-PK11195 (PK) is often considered to be a proxy for neuroinflammation, and can be quantified using the Logan graphical method with an image-derived blood input function, or the Logan reference tissue model using automated reference region extraction. The purposes of this study were (1) to assess whether these noninvasive image analysis methods can discriminate between patients with PD and healthy volunteers (HVs), and (2) to establish the effect size that would be required to distinguish true drug-induced changes from system variance in longitudinal trials. The sample consisted of 20 participants with PD and 19 HVs. Two independent teams analyzed the data to compare the volume of distribution calculated using image-derived input functions (IDIFs), and binding potentials calculated using the Logan reference region model. With all methods, the higher signal-to-background in patients resulted in lower variability and better repeatability than in controls. We were able to use noninvasive techniques showing significantly increased uptake of PK in multiple brain regions of participants with PD compared to HVs. Although not necessarily reflecting absolute values, these noninvasive image analysis methods can discriminate between PD patients and HVs. We see a difference of 24% in the substantia nigra between PD and HV with a repeatability coefficient of 13%, showing that it will be possible to estimate responses in longitudinal, within subject trials of novel neuroprotective drugs. © 2018 The Authors. Journal of Neuroimaging published by Wiley Periodicals, Inc. on behalf of American Society of Neuroimaging.
Quick acquisition and recognition method for the beacon in deep space optical communications.
Wang, Qiang; Liu, Yuefei; Ma, Jing; Tan, Liying; Yu, Siyuan; Li, Changjiang
2016-12-01
In deep space optical communications, it is very difficult to acquire the beacon given the long communication distance. Acquisition efficiency is essential for establishing and holding the optical communication link. Here we proposed a quick acquisition and recognition method for the beacon in deep space optical communications based on the characteristics of the deep space optical link. To identify the beacon from the background light efficiently, we utilized the maximum similarity between the collected image and the reference image for accurate recognition and acquisition of the beacon in the area of uncertainty. First, the collected image and the reference image were processed by the Fourier-Mellin transform. Second, image sampling and image matching were applied for the accurate positioning of the beacon. Finally, a field programmable gate array (FPGA)-based system was used to verify and realize this method. The experimental results showed that the acquisition time for the beacon was as fast as 8.1 s. Future application of this method in the system design of deep space optical communication will be beneficial.
NASA Astrophysics Data System (ADS)
Ferus, Martin; Koukal, Jakub; Lenža, Libor; Srba, Jiří; Kubelík, Petr; Laitl, Vojtěch; Zanozina, Ekaterina M.; Váňa, Pavel; Kaiserová, Tereza; Knížek, Antonín; Rimmer, Paul; Chatzitheodoridis, Elias; Civiš, Svatopluk
2018-03-01
Aims: We aim to analyse real-time Perseid and Leonid meteor spectra using a novel calibration-free (CF) method, which is usually applied in the laboratory for laser-induced breakdown spectroscopic (LIBS) chemical analysis. Methods: Reference laser ablation spectra of specimens of chondritic meteorites were measured in situ simultaneously with a high-resolution laboratory echelle spectrograph and a spectral camera for meteor observation. Laboratory data were subsequently evaluated via the CF method and compared with real meteor emission spectra. Additionally, spectral features related to airglow plasma were compared with the spectra of laser-induced breakdown and electric discharge in the air. Results: We show that this method can be applied in the evaluation of meteor spectral data observed in real time. Specifically, CF analysis can be used to determine the chemical composition of meteor plasma, which, in the case of the Perseid and Leonid meteors analysed in this study, corresponds to that of the C-group of chondrites.
An inherent curvature-compensated voltage reference using non-linearity of gate coupling coefficient
NASA Astrophysics Data System (ADS)
Hande, Vinayak; Shojaei Baghini, Maryam
2015-08-01
A novel current-mode voltage reference circuit which is capable of generating a sub-1 V output voltage is presented. The proposed architecture exhibits an inherent curvature compensation ability. The curvature compensation is achieved by utilizing the non-linear behavior of the gate coupling coefficient to compensate the non-linear temperature dependence of the base-emitter voltage. We have also utilized developments in the CMOS process to reduce power and area consumption. The proposed voltage reference is analyzed theoretically and compared with other existing methods. The circuit is designed and simulated in 180 nm mixed-mode CMOS UMC technology and gives a reference level of 246 mV. The minimum required supply voltage is 1 V with a maximum current draw of 9.24 μA. A temperature coefficient of 9 ppm/°C is achieved over the -25 to 125 °C temperature range. The reference voltage varies by ±11 mV across process corners. The reference circuit shows a line sensitivity of 0.9 mV/V with an area consumption of 100 × 110 μm².
Yang, Qingsheng; Mwenda, Kevin M; Ge, Miao
2013-03-12
The measurement of the Erythrocyte Sedimentation Rate (ESR) value is a standard procedure performed during a typical blood test. In order to formulate a unified standard for establishing reference ESR values, this paper presents a novel prediction model in which local normal ESR values and corresponding geographical factors are used to predict reference ESR values using multi-layer feed-forward artificial neural networks (ANN). Local normal ESR values were obtained from hospital data, while geographical factors that include altitude, sunshine hours, relative humidity, temperature and precipitation were obtained from the National Geographical Data Information Centre in China. The results show that the predicted values are statistically in agreement with the measured values. Model results exhibit significant agreement between training data and test data. Consequently, the model is used to predict unseen local reference ESR values. Reference ESR values can thus be established from geographical factors by using artificial intelligence techniques. ANN is an effective method for simulating and predicting reference ESR values because of its ability to model nonlinear and complex relationships.
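A minimal one-hidden-layer feed-forward network of the kind described can be trained with plain NumPy (a toy sketch with synthetic stand-ins for the five geographical factors and the ESR target, not the authors' trained model):

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic stand-ins for five geographical factors and a target "ESR" value
X = rng.standard_normal((64, 5))
y = X @ np.array([0.5, -0.2, 0.1, 0.3, -0.4]) + 1.0

# one hidden layer, tanh activation, linear output
W1 = 0.1 * rng.standard_normal((8, 5)); b1 = np.zeros(8)
W2 = 0.1 * rng.standard_normal(8);      b2 = 0.0

def predict(X):
    return np.tanh(X @ W1.T + b1) @ W2 + b2

mse_before = float(((predict(X) - y) ** 2).mean())

lr = 0.05
for _ in range(500):
    H = np.tanh(X @ W1.T + b1)
    err = H @ W2 + b2 - y
    dH = np.outer(err, W2) * (1.0 - H ** 2)        # backprop through tanh
    W2 -= lr * (H.T @ err) / len(X); b2 -= lr * err.mean()
    W1 -= lr * (dH.T @ X) / len(X);  b1 -= lr * dH.mean(axis=0)

mse_after = float(((predict(X) - y) ** 2).mean())
```

Gradient descent on the mean squared error steadily reduces the training loss, which is the same mechanism (if not the same architecture or data) as the ANN described in the abstract.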
von Holst, Christoph; Robouch, Piotr; Bellorini, Stefano; de la Huebra, María José González; Ezerskis, Zigmas
2016-01-01
This paper describes the operation of the European Union Reference Laboratory for Feed Additives (EURL) and its role in the authorisation procedure of feed additives in the European Union. Feed additives are authorised according to Regulation (EC) No. 1831/2003, which introduced a completely revised authorisation procedure and also established the EURL. The regulations authorising feed additives contain conditions of use such as legal limits of the feed additives, which require the availability of a suitable method of analysis for official control purposes under real world conditions. It is the task of the EURL to evaluate the suitability of analytical methods as proposed by the industry for this purpose. Moreover, the paper shows that one of the major challenges is the huge variety of the methodology applied in feed additive analysis, thus requiring expertise in quite different analytical areas. In order to cope with this challenge, the EURL is supported by a network of national reference laboratories (NRLs) and only the merged knowledge of all NRLs allows for a scientifically sound assessment of the analytical methods. PMID:26540604
Pérez-Olmos, R; Rios, A; Fernández, J R; Lapa, R A; Lima, J L
2001-01-05
In this paper, the construction and evaluation of an electrode selective to nitrate with improved sensitivity, constructed like a conventional electrode (ISE) but using an operational amplifier to sum the potentials supplied by four membranes (ESOA), is described. The two types of electrodes, without an inner reference solution, were constructed using tetraoctylammonium bromide as the sensor, dibutylphthalate as the solvent mediator and PVC as the plastic matrix, with the membranes applied directly onto a conductive epoxy resin support. After a comparative evaluation of their working characteristics they were used in the determination of nitrate in different types of tobacco. The limit of detection of the direct potentiometric method developed was found to be 0.18 g kg(-1), and the precision and accuracy of the method, when applied to eight different samples of tobacco, expressed in terms of mean R.S.D. and average percentage of spike recovery, were 0.6 and 100.3%, respectively. The comparison of variances showed, on all occasions, that the results obtained with the ESOA were similar to those obtained with the conventional ISE, but with higher precision. Linear regression analysis showed good agreement (r=0.9994) between the results obtained by the developed potentiometric method and those of a spectrophotometric method based on brucine, adopted as the reference method, when applied simultaneously to 32 samples of different types of tobacco.
NASA Technical Reports Server (NTRS)
Seguin, B.; Petit, V.; Devillard, R.; Reich, P.; Thouy, G. (Principal Investigator)
1980-01-01
Evapotranspiration was calculated for both the dry and irrigated zones by four methods which were compared with the energy balance method serving as a reference. Two methods did not involve the surface temperature: ET(Rn) = Rn, liable to be valid under wet conditions, and ET(eq) = (delta/(delta + gamma)) Rn, i.e., the first term of Penman's equation, adapted to moderately dry conditions. The methods using surface temperature were the combined energy balance aerodynamic approach and a simplified approach proposed by Jackson et al. Tests show the surface temperature methods give relatively satisfactory results both in the dry and wet zones, with a precision of 10% to 15% compared with the reference method. As was to be expected, ET(eq) gave satisfactory results only in the dry zone and ET(Rn) only in the irrigated zone. Thermography increased the precision of the estimate of ET relative to the most suitable classical method by 5% to 8% and is equally suitable for both dry and wet conditions. The Jackson method does not require extensive ground measurements or the evaluation of the surface roughness.
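The equilibrium term is a one-liner (here Δ is the slope of the saturation vapour pressure curve and γ the psychrometric constant, roughly 0.066 kPa °C⁻¹ near sea level; the default and the example values are illustrative only):

```python
def et_equilibrium(rn, delta, gamma=0.066):
    """ET(eq) = (delta / (delta + gamma)) * Rn, the first term of Penman's equation."""
    return (delta / (delta + gamma)) * rn
```

When Δ equals γ, exactly half of the available net radiation Rn goes into equilibrium evapotranspiration; as Δ grows with temperature, the fraction approaches one.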
Optimized SIFTFlow for registration of whole-mount histology to reference optical images
Shojaii, Rushin; Martel, Anne L.
2016-01-01
The registration of two-dimensional histology images to reference images from other modalities is an important preprocessing step in the reconstruction of three-dimensional histology volumes. This is a challenging problem because of the differences in the appearances of histology images and other modalities, and the presence of large nonrigid deformations which occur during slide preparation. This paper shows the feasibility of using densely sampled scale-invariant feature transform (SIFT) features and a SIFTFlow deformable registration algorithm for coregistering whole-mount histology images with blockface optical images. We present a method for jointly optimizing the regularization parameters used by the SIFTFlow objective function and use it to determine the most appropriate values for the registration of breast lumpectomy specimens. We demonstrate that tuning the regularization parameters results in significant improvements in accuracy and we also show that SIFTFlow outperforms a previously described edge-based registration method. The accuracy of the histology images to blockface images registration using the optimized SIFTFlow method was assessed using an independent test set of images from five different lumpectomy specimens and the mean registration error was 0.32±0.22 mm. PMID:27774494
Lenters-Westra, Erna; Strunk, Annuska; Campbell, Paul; Slingerland, Robbert J
2017-02-01
Hb-variant interference when reporting HbA1c has been an ongoing challenge since HbA1c was introduced to monitor patients with diabetes mellitus. Most Hb-variants show an abnormal chromatogram when cation-exchange HPLC is used for the determination of HbA1c. Unfortunately, the Tosoh G8 generates what appears to be a normal chromatogram in the presence of Hb-Tacoma, yielding a falsely high HbA1c value. The primary aim of the study was to investigate if the Afinion HbA1c point-of-care (POC) instrument could be used as an alternative method for the Tosoh G8 when testing for HbA1c in the presence of Hb-Tacoma. Whole blood samples were collected in K2EDTA tubes from individuals homozygous for HbA (n = 40) and heterozygous for Hb-Tacoma (n = 20). Samples were then immediately analyzed with the Afinion POC instrument. After analysis, aliquots of each sample were frozen at -80 °C. The frozen samples were shipped on dry ice to the European Reference Laboratory for Glycohemoglobin (ERL) and analyzed with three International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) and National Glycohemoglobin Standardization Program (NGSP) Secondary Reference Measurement Procedures (SRMPs). The Premier Hb9210 was used as the reference method. When compared to the reference method, samples with Hb-Tacoma yielded mean relative differences of 31.8% on the Tosoh G8, 21.5% on the Roche Tina-quant Gen. 2 and 16.8% on the Afinion. The Afinion cannot be used as an alternative method for the Tosoh G8 when testing for HbA1c in the presence of Hb-Tacoma.
NASA Astrophysics Data System (ADS)
Wang, H.; Cheng, J.
2017-12-01
A method to synthesize natural electric and magnetic time series is proposed whereby the time series at a local site are derived using an impulse response and a reference (STIR). The method is based on the assumptions that the external source magnetic field is uniform and that the electric and magnetic fields acquired at the surface satisfy a time-independent linear relation in the frequency domain. According to the convolution theorem, we can synthesize natural electric and magnetic time series using the impulse responses of inter-station transfer functions with a reference. Applying this method, two impulse responses need to be estimated: the quasi-MT impulse response tensor and the horizontal magnetic impulse response tensor. These impulse response tensors relate the local horizontal electric and magnetic components with the horizontal magnetic components at a reference site, respectively. Some clean segments of time series are selected to estimate the impulse responses using the least-squares (LS) method. STIR is similar to STIN (Wang, 2017), but STIR does not need to estimate the inter-station transfer functions, and the synthesized data are more accurate at high frequency, where STIN fails when the inter-station transfer functions are severely contaminated. A test with good-quality MT data shows that the synthetic time series are similar to the natural electric and magnetic time series. For a contaminated AMT example, when this method is used to remove noise present at the local site, the scatter of the MT sounding curves is clearly reduced and the data quality is improved. *This work is funded by the National Key R&D Program of China (2017YFC0804105), the National Natural Science Foundation of China (41604064, 51574250), and the State Key Laboratory of Coal Resources and Safe Mining, China University of Mining & Technology (SKLCRSM16DC09).
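The convolution-theorem step at the heart of this kind of synthesis can be sketched as follows (a toy circular-convolution example with an invented three-tap impulse response, not the estimated inter-station tensors):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
b_ref = rng.standard_normal(n)   # horizontal magnetic series at the reference site

h = np.zeros(n)                  # assumed impulse response (three taps, illustrative)
h[:3] = [0.6, 0.3, 0.1]

# convolution theorem: multiply the spectra, transform back to the time domain
e_local = np.fft.ifft(np.fft.fft(b_ref) * np.fft.fft(h)).real
```

Multiplying the spectra and inverse-transforming is equivalent to convolving the reference series with the impulse response, which is how a local time series is synthesized from a reference recording.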
Venturelli, Gustavo L; Brod, Fábio C A; Rossi, Gabriela B; Zimmermann, Naíra F; Oliveira, Jaison P; Faria, Josias C; Arisi, Ana C M
2014-11-01
The Embrapa 5.1 genetically modified (GM) common bean was approved for commercialization in Brazil. Methods for the quantification of this new genetically modified organism (GMO) are necessary. The development of a suitable endogenous reference is essential for GMO quantification by real-time PCR. Based on this, a new taxon-specific endogenous reference quantification assay was developed for Phaseolus vulgaris L. Three genes encoding common bean proteins (phaseolin, arcelin, and lectin) were selected as candidates for endogenous reference. Primers targeting these candidate genes were designed and the detection was evaluated using the SYBR Green chemistry. The assay targeting lectin gene showed higher specificity than the remaining assays, and a hydrolysis probe was then designed. This assay showed high specificity for 50 common bean samples from two gene pools, Andean and Mesoamerican. For GM common bean varieties, the results were similar to those obtained for non-GM isogenic varieties with PCR efficiency values ranging from 92 to 101 %. Moreover, this assay presented a limit of detection of ten haploid genome copies. The primers and probe developed in this work are suitable to detect and quantify either GM or non-GM common bean.
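The 92 to 101% PCR efficiency values quoted are conventionally derived from the slope of a log10(copy number) versus Cq standard curve (the slopes below are illustrative, not taken from the study):

```python
def pcr_efficiency_percent(slope):
    """Amplification efficiency (%) from a real-time PCR standard-curve slope.

    A slope of about -3.32 corresponds to ~100% efficiency (perfect doubling
    of the amplicon each cycle).
    """
    return (10.0 ** (-1.0 / slope) - 1.0) * 100.0
```

Shallower slopes (e.g. around -3.59) correspond to efficiencies near 90%, the lower end of what is usually accepted for quantitative assays.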
Generating Artificial Reference Images for Open Loop Correlation Wavefront Sensors
NASA Astrophysics Data System (ADS)
Townson, M. J.; Love, G. D.; Saunter, C. D.
2018-05-01
Shack-Hartmann wavefront sensors for both solar and laser guide star adaptive optics (with elongated spots) need to observe extended objects. Correlation techniques have been successfully employed to measure the wavefront gradient in solar adaptive optics systems and have been proposed for laser guide star systems. In this paper we describe a method for synthesising reference images for correlation Shack-Hartmann wavefront sensors with a larger field of view than individual sub-apertures. We then show how these supersized reference images can increase the performance of correlation wavefront sensors in regimes where large relative shifts are induced between sub-apertures, such as those observed in open-loop wavefront sensors. The technique we describe requires no external knowledge outside of the wavefront-sensor images, making it available as an entirely "software" upgrade to an existing adaptive optics system. For solar adaptive optics we show the supersized reference images extend the magnitude of shifts which can be accurately measured from 12% to 50% of the field of view of a sub-aperture and in laser guide star wavefront sensors the magnitude of centroids that can be accurately measured is increased from 12% to 25% of the total field of view of the sub-aperture.
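The core correlation measurement is a cross-correlation peak search between a sub-aperture image and the (possibly supersized) reference. A toy integer-pixel version is sketched below (real systems interpolate the correlation peak to sub-pixel precision; the spot image is invented):

```python
import numpy as np

def shift_by_correlation(img, ref):
    """Integer-pixel shift of `img` relative to `ref` via FFT cross-correlation."""
    c = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    iy, ix = np.unravel_index(int(np.argmax(c)), c.shape)
    n, m = c.shape
    # map wrapped peak indices to signed shifts
    return (int((iy + n // 2) % n - n // 2), int((ix + m // 2) % m - m // 2))

ref = np.zeros((32, 32))
ref[10:14, 12:16] = 1.0                      # a compact "spot" in the reference
img = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)
measured = shift_by_correlation(img, ref)
```

A reference with a larger field of view than the sub-aperture extends the range of shifts for which this peak stays inside the valid correlation region, which is the benefit the supersized references provide.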
Optical monitoring of QSO in the framework of the Gaia space mission
NASA Astrophysics Data System (ADS)
Taris, F.; Damljanovic, G.; Andrei, A.; Klotz, A.; Vachier, F.
2015-08-01
The Gaia astrometric mission of the European Space Agency was launched on 19 December 2013. It will provide an astrometric catalogue of 500 000 extragalactic sources that could be the basis of a new optical reference frame. On the other hand, the current International Celestial Reference Frame (ICRF) is based on observations of extragalactic sources at radio wavelengths. The astrometric coordinates of sources in these two reference systems will have roughly the same uncertainty. It is then mandatory to observe a set of common targets at both optical and radio wavelengths to link the ICRF with what could be called the GCRF (Gaia Celestial Reference Frame). We show in this paper some results obtained with the Telescopi Joan Oró (TJO) at the Observatori Astronòmic del Montsec in Spain. We also present results obtained with the Lomb-Scargle and CLEAN algorithms applied to optical magnitudes obtained with the TAROT telescopes.
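A Lomb-Scargle periodicity search of the kind applied to such unevenly sampled optical magnitudes can be sketched with SciPy. The light curve below is synthetic and its 5-day period is an assumption for illustration only:

```python
import numpy as np
from scipy.signal import lombscargle

# Unevenly sampled synthetic light curve with a 5-day period.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 300))            # observation epochs (days)
mag = 0.3 * np.sin(2 * np.pi * t / 5.0) + 0.02 * rng.standard_normal(t.size)

periods = np.linspace(2.0, 20.0, 2000)               # trial periods (days)
omega = 2 * np.pi / periods                          # angular frequencies
power = lombscargle(t, mag - mag.mean(), omega)      # Lomb-Scargle periodogram

best_period = periods[np.argmax(power)]
print(round(best_period, 2))  # ≈ 5.0
```

Note that `scipy.signal.lombscargle` takes angular frequencies, not ordinary frequencies or periods.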
NASA Astrophysics Data System (ADS)
Valotto, Gabrio; Cattaruzza, Elti; Bardelli, Fabrizio
2017-02-01
The appropriate selection of representative pure compounds to be used as references is a crucial step for the successful analysis of X-ray absorption near edge spectroscopy (XANES) data, and it is often not a trivial task. This is particularly true when complex environmental matrices are investigated, since their elemental speciation is a priori unknown. In this paper, an investigation of the speciation of Cu, Zn, and Sb based on the use of conventional (stoichiometric compounds) and non-conventional (environmental samples or relevant certified materials) references is explored. This method can be useful when the effectiveness of XANES analysis is limited by the difficulty of obtaining a set of references sufficiently representative of the investigated samples. Road dust samples collected along the bridge connecting Venice to the mainland were used to show the potentialities and the limits of this approach.
A Comparative Analysis of Three Monocular Passive Ranging Methods on Real Infrared Sequences
NASA Astrophysics Data System (ADS)
Bondžulić, Boban P.; Mitrović, Srđan T.; Barbarić, Žarko P.; Andrić, Milenko S.
2013-09-01
Three monocular passive ranging methods are analyzed and tested on real infrared sequences. The first method exploits scale changes of an object in successive frames, while the other two use the Beer-Lambert law. The ranging methods are evaluated by comparison with simultaneously obtained reference data at the test site. The research addresses scenarios where multiple sensor views or active measurements are not possible. The results show that these methods for range estimation can provide the fidelity required for object tracking. Maximum values of relative distance estimation errors in near-ideal conditions are less than 8%.
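Beer-Lambert ranging inverts the attenuation law I = I0·exp(−σR) for the range R, given a known atmospheric extinction coefficient σ and source intensity I0. A minimal sketch with illustrative values (the numbers are assumptions, not the paper's data):

```python
import math

def beer_lambert_range(i_received, i_source, extinction):
    """Range from Beer-Lambert attenuation:
    I = I0 * exp(-sigma * R)  =>  R = ln(I0 / I) / sigma.
    Units of the result are the inverse of `extinction`'s units."""
    return math.log(i_source / i_received) / extinction

# Illustrative: extinction 0.2 km^-1, received intensity 1/e^2 of the source.
print(round(beer_lambert_range(math.exp(-2.0), 1.0, 0.2), 3))  # → 10.0
```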
De Pauw, Ruben; Shoykhet Choikhet, Konstantin; Desmet, Gert; Broeckhoven, Ken
2016-08-12
When using compressible mobile phases such as fluidic CO2, the density, the volumetric flow rates and the volumetric fractions are pressure dependent. The pressure and temperature definition of these volumetric parameters (referred to as the reference conditions) may differ between systems, manufacturers and operating conditions. A supercritical fluid chromatography system was modified to operate in two modes with different definitions of the eluent delivery parameters, referred to as fixed and variable mode. For the variable mode, the volumetric parameters are defined with reference to the pump operating pressure and actual pump head temperature. These conditions may vary when, e.g. changing the column length, permeability, flow rate, etc. and are thus variable reference conditions. For the fixed mode, the reference conditions were set at 150 bar and 30 °C, resulting in a mass flow rate and mass fraction of modifier whose definitions are independent of the operating conditions. For the variable mode, the mass flow rate of carbon dioxide increases with pump operating pressure, decreasing the fraction of modifier. Comparing the void times and retention factors shows that the deviation between the two modes is almost independent of the modifier percentage, but depends on the operating pressure. Recalculating the set volumetric fraction of modifier to the mass fraction results in the same retention behaviour for both modes. This shows that retention in SFC is best modelled using the mass fraction of modifier. The fixed mode also simplifies method scaling, as it only requires matching the average column pressure. Copyright © 2016 Elsevier B.V. All rights reserved.
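The recalculation from a set volumetric fraction to a mass fraction only requires the component densities at the chosen reference conditions. A sketch with illustrative densities (the numerical values below are assumptions for demonstration, not taken from the paper):

```python
def mass_fraction_modifier(v_frac_mod, rho_mod, rho_co2):
    """Convert a set volumetric modifier fraction to a mass fraction,
    given the eluent component densities (g/mL) at the reference
    conditions used to define the volumetric flow."""
    m_mod = v_frac_mod * rho_mod
    m_co2 = (1.0 - v_frac_mod) * rho_co2
    return m_mod / (m_mod + m_co2)

# Illustrative densities only: methanol ~0.79 g/mL; dense CO2 assumed
# ~0.85 g/mL near fixed reference conditions of 150 bar / 30 C.
print(round(mass_fraction_modifier(0.10, 0.79, 0.85), 4))  # → 0.0936
```

Because the CO2 density at the pump changes with operating pressure in the variable mode, the same set volumetric fraction yields a different mass fraction, which is the retention difference the study quantifies.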
NASA Astrophysics Data System (ADS)
Wayson, Michael B.; Bolch, Wesley E.
2018-04-01
Internal radiation dose estimates for diagnostic nuclear medicine procedures are typically calculated for a reference individual. As a result, there is uncertainty when determining the organ doses to patients who are not at the 50th percentile for either height or weight. This study aims to better personalize internal radiation dose estimates for individual patients by modifying the dose estimates calculated for reference individuals based on easily obtainable morphometric characteristics of the patient. Phantoms of different sitting heights and waist circumferences were constructed based on computational reference phantoms for the newborn, 10-year-old, and adult. Monoenergetic photons and electrons were then simulated separately at 15 energies. Photon and electron specific absorbed fractions (SAFs) were computed for the newly constructed non-reference phantoms and compared to SAFs previously generated for the age-matched reference phantoms. Differences in SAFs were correlated to changes in sitting height and waist circumference to develop scaling factors that could be applied to reference SAFs as morphometry corrections. A further set of arbitrary non-reference phantoms was then constructed and used in validation studies for the SAF scaling factors. Both photon and electron dose scaling methods were found to increase average accuracy when sitting height was used as the scaling parameter (~11%). Photon waist circumference-based scaling factors showed modest increases in average accuracy (~7%) for underweight individuals, but not for overweight individuals. Electron waist circumference-based scaling factors did not show increases in average accuracy. When sitting height and waist circumference scaling factors were combined, modest average gains in accuracy were observed for photons (~6%), but not for electrons. Both photon and electron absorbed doses are more reliably scaled using the scaling factors computed in this study. They can be effectively scaled using sitting height alone as a patient-specific morphometric parameter.
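Applying such a morphometry correction amounts to multiplying a reference SAF by a factor looked up against the patient-to-reference sitting-height ratio. A sketch with an entirely hypothetical scaling-factor table (the study's actual factors are not reproduced here):

```python
import numpy as np

# Hypothetical table: ratio of patient to reference sitting height vs.
# multiplicative correction applied to the reference SAF.  Illustrative
# values only -- not the factors derived in the study.
height_ratio = np.array([0.85, 0.95, 1.00, 1.05, 1.15])
saf_factor   = np.array([1.30, 1.10, 1.00, 0.92, 0.80])

def scaled_saf(reference_saf, patient_ratio):
    """Apply a sitting-height-based correction to a reference SAF by
    linear interpolation in the scaling-factor table."""
    return reference_saf * np.interp(patient_ratio, height_ratio, saf_factor)

# A patient at exactly reference stature keeps the reference SAF.
print(round(float(scaled_saf(2.0e-3, 1.00)), 6))  # → 0.002
```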
Some connections between importance sampling and enhanced sampling methods in molecular dynamics.
Lie, H C; Quer, J
2017-11-21
In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
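The importance-sampling idea underlying these methods can be illustrated with a standard toy example (not taken from the paper): estimating a Gaussian tail probability by drawing from a tilted proposal distribution and reweighting by the likelihood ratio.

```python
import numpy as np

# Estimate p = P(X > 4) for X ~ N(0, 1), a rare event that plain Monte
# Carlo with 1e5 samples would essentially never hit.  Sample instead
# from the tilted proposal N(4, 1) and reweight each sample by
# w(x) = phi(x) / phi(x - 4) = exp(8 - 4x).
rng = np.random.default_rng(42)
n = 100_000
x = rng.normal(loc=4.0, scale=1.0, size=n)   # proposal samples
weights = np.exp(8.0 - 4.0 * x)              # N(0,1) pdf / N(4,1) pdf
estimate = np.mean((x > 4.0) * weights)
print(estimate)  # close to the true value 3.167e-5
```

Enhanced sampling methods such as the two compared above play the role of the tilted proposal: they concentrate samples on the rare event, and the statistics of the reference distribution are recovered by reweighting.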
Removal of BCG artefact from concurrent fMRI-EEG recordings based on EMD and PCA.
Javed, Ehtasham; Faye, Ibrahima; Malik, Aamir Saeed; Abdullah, Jafri Malin
2017-11-01
Simultaneous electroencephalography (EEG) and functional magnetic resonance image (fMRI) acquisitions provide better insight into brain dynamics. Some artefacts due to simultaneous acquisition pose a threat to the quality of the data. One such problematic artefact is the ballistocardiogram (BCG) artefact. We developed a hybrid algorithm that combines features of empirical mode decomposition (EMD) with principal component analysis (PCA) to reduce the BCG artefact. The algorithm does not require extra electrocardiogram (ECG) or electrooculogram (EOG) recordings to extract the BCG artefact. The method was tested with both simulated and real EEG data of 11 participants. From the simulated data, the similarity index between the extracted BCG and the simulated BCG showed the effectiveness of the proposed method in BCG removal. On the other hand, real data were recorded with two conditions, i.e. resting state (eyes closed dataset) and task influenced (event-related potentials (ERPs) dataset). Using qualitative (visual inspection) and quantitative (similarity index, improved normalized power spectrum (INPS) ratio, power spectrum, sample entropy (SE)) evaluation parameters, the assessment results showed that the proposed method can efficiently reduce the BCG artefact while preserving the neuronal signals. Compared with conventional methods, namely, average artefact subtraction (AAS), optimal basis set (OBS) and combined independent component analysis and principal component analysis (ICA-PCA), the statistical analyses of the results showed that the proposed method has better performance, and the differences were significant for all quantitative parameters except for the power and sample entropy. The proposed method does not require any reference signal, prior information or assumption to extract the BCG artefact. It will be very useful in circumstances where the reference signal is not available. Copyright © 2017 Elsevier B.V. All rights reserved.
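The PCA stage of such a pipeline resembles optimal-basis-set correction: stack the artefact epochs, then remove their projection onto the leading principal components. A numpy sketch of that stage only (the EMD decomposition is omitted, and the epoch data below are synthetic):

```python
import numpy as np

def remove_top_components(epochs, n_components=3):
    """Remove the projection onto the first principal components of a set
    of artefact epochs (rows = epochs, columns = samples).  This mirrors
    the PCA step of OBS-style BCG correction, not the paper's full
    EMD+PCA pipeline."""
    centered = epochs - epochs.mean(axis=0)
    # Principal directions of the epoch ensemble via SVD.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components].T                  # samples x k
    projection = centered @ basis @ basis.T
    return centered - projection

# Synthetic epochs: a shared BCG-like waveform with varying amplitude
# plus small "neuronal" noise.
rng = np.random.default_rng(0)
artefact = np.sin(np.linspace(0.0, 2.0 * np.pi, 200))
epochs = np.outer(rng.uniform(0.5, 1.5, 50), artefact)
epochs += 0.01 * rng.standard_normal(epochs.shape)
cleaned = remove_top_components(epochs, n_components=1)
print(cleaned.std() < epochs.std())  # → True
```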
NASA Astrophysics Data System (ADS)
Wang, Qianxin; Hu, Chao; Xu, Tianhe; Chang, Guobin; Hernández Moraleda, Alberto
2017-12-01
Analysis centers (ACs) for global navigation satellite systems (GNSSs) cannot accurately obtain real-time Earth rotation parameters (ERPs). Thus, the prediction of ultra-rapid orbits in the International Terrestrial Reference System (ITRS) has to utilize the predicted ERPs issued by the International Earth Rotation and Reference Systems Service (IERS) or the International GNSS Service (IGS). In this study, the accuracy of ERPs predicted by IERS and IGS is analyzed. The error of the ERPs predicted for one day can reach 0.15 mas in polar motion and 0.053 ms in UT1-UTC. Then, the impact of ERP errors on ultra-rapid orbit prediction by GNSS is studied. The methods for orbit integration and frame transformation in orbit prediction with introduced ERP errors dominate the accuracy of the predicted orbit. Experimental results show that the transformation from the geocentric celestial reference system (GCRS) to the ITRS exerts the strongest effect on the accuracy of the predicted ultra-rapid orbit. To obtain the most accurate predicted ultra-rapid orbit, a corresponding real-time orbit correction method is developed. First, orbits without ERP-related errors are predicted on the basis of the observed part of the ultra-rapid orbit in the ITRS, for use as a reference. Then, the corresponding predicted orbit is transformed from the GCRS to the ITRS to adjust for the predicted ERPs. Finally, the corrected ERPs with error slopes are re-introduced to correct the predicted orbit in the ITRS. To validate the proposed method, three experimental schemes are designed: function extrapolation, simulation experiments, and experiments with predicted ultra-rapid orbits and International GNSS Monitoring and Assessment System (iGMAS) products. Experimental results show that using the proposed correction method with IERS products considerably improved the accuracy of ultra-rapid orbit prediction (except for the geosynchronous BeiDou orbits).
The accuracy of orbit prediction is enhanced by at least 50% (for the error related to ERPs) when a highly accurate observed orbit is used with the correction method. For iGMAS-predicted orbits, the accuracy improvement ranges from 8.5% for the inclined BeiDou orbits to 17.99% for the GPS orbits. This demonstrates that the correction method proposed in this study can optimize ultra-rapid orbit prediction.
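The sensitivity of the frame transformation to a polar-motion error can be sketched with small-angle rotation matrices. This is a simplified form of the polar-motion part of the GCRS-to-ITRS transformation: the small s' term of the IERS conventions is neglected, and the pole coordinates and orbit radius are illustrative assumptions.

```python
import numpy as np

ARCSEC = np.pi / (180.0 * 3600.0)   # arcseconds -> radians

def r1(a):
    """Rotation about the x-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def r2(a):
    """Rotation about the y-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def polar_motion_matrix(xp_arcsec, yp_arcsec):
    """Polar-motion rotation W ~ R2(x_p) R1(y_p); the tiny s' term of
    the IERS conventions is neglected in this sketch."""
    return r2(xp_arcsec * ARCSEC) @ r1(yp_arcsec * ARCSEC)

# A 0.15 mas polar-motion error displaces a transformed position; for a
# ~26 500 km GNSS orbit radius this is on the order of 2 cm.
w_true = polar_motion_matrix(0.035, 0.320)            # nominal pole (arcsec)
w_err = polar_motion_matrix(0.035 + 0.15e-3, 0.320)   # + 0.15 mas error
r = np.array([26_500e3, 0.0, 0.0])                    # metres
delta = (w_err - w_true) @ r
print(np.linalg.norm(delta))  # about 0.02 m
```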
DISTMIX: direct imputation of summary statistics for unmeasured SNPs from mixed ethnicity cohorts.
Lee, Donghyung; Bigdeli, T Bernard; Williamson, Vernell S; Vladimirov, Vladimir I; Riley, Brien P; Fanous, Ayman H; Bacanu, Silviu-Alin
2015-10-01
To increase the signal resolution for large-scale meta-analyses of genome-wide association studies, genotypes at unmeasured single nucleotide polymorphisms (SNPs) are commonly imputed using large multi-ethnic reference panels. However, the ever-increasing size and ethnic diversity of both reference panels and cohorts makes genotype imputation computationally challenging for moderately sized computer clusters. Moreover, genotype imputation requires subject-level genetic data, which, unlike the summary statistics provided by virtually all studies, are not publicly available. While there are much less demanding methods which avoid the genotype imputation step by directly imputing SNP statistics, e.g. Directly Imputing summary STatistics (DIST) proposed by our group, their implicit assumptions make them applicable only to ethnically homogeneous cohorts. To decrease the computational and access requirements for the analysis of cosmopolitan cohorts, we propose DISTMIX, which extends DIST's capabilities to the analysis of mixed-ethnicity cohorts. The method uses a relevant reference panel to directly impute unmeasured SNP statistics based only on statistics at measured SNPs and estimated/user-specified ethnic proportions. Simulations show that the proposed method adequately controls the Type I error rates. The 1000 Genomes panel imputation of summary statistics from the ethnically diverse Psychiatric Genetic Consortium Schizophrenia Phase 2 suggests that, when compared to genotype imputation methods, DISTMIX offers comparable imputation accuracy for only a fraction of the computational resources. Availability: DISTMIX software, its reference population data, and usage examples are publicly available at http://code.google.com/p/distmix. Contact: dlee4@vcu.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
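Summary-statistic imputation of this kind rests on the conditional mean of a multivariate normal: untyped z-scores are imputed as E[z_u | z_t] using linkage-disequilibrium (LD) correlations from a reference panel. A simplified numpy sketch (the ridge term and the toy LD values are assumptions for illustration, not DISTMIX's actual implementation):

```python
import numpy as np

def impute_z(z_typed, sigma_tt, sigma_ut, ridge=0.1):
    """Impute z-scores at untyped SNPs as the conditional-normal mean
    E[z_u | z_t] = S_ut (S_tt + ridge*I)^-1 z_t, where the LD correlation
    blocks S_tt (typed-typed) and S_ut (untyped-typed) come from a
    reference panel.  The ridge regularises a noisy LD estimate."""
    n = sigma_tt.shape[0]
    return sigma_ut @ np.linalg.solve(sigma_tt + ridge * np.eye(n), z_typed)

# Toy LD structure: one untyped SNP correlated 0.9 with typed SNP 1
# and 0.3 with typed SNP 2; the typed SNPs correlate 0.3 with each other.
sigma_tt = np.array([[1.0, 0.3], [0.3, 1.0]])
sigma_ut = np.array([[0.9, 0.3]])
z_typed = np.array([4.0, 1.5])
print(impute_z(z_typed, sigma_tt, sigma_ut))
```

The imputed z-score is pulled toward the strongly correlated typed SNP, which is the intuition behind "direct" imputation without subject-level genotypes.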
Willingness To Pay for Information: An Analyst's Guide.
ERIC Educational Resources Information Center
Lee, Kyung Hee; Hatcher, Charles B.
2001-01-01
Compares methods for estimating consumer willingness to pay for information: contingent valuation, experimental auction, conjoint analysis, and hedonic price equations. Shows how, in the case of food dating, measurement of willingness is complicated by the question of whether the information adds to the product's value. (Contains 31 references.)…
Defining the Good Reading Teacher.
ERIC Educational Resources Information Center
Kupersmith, Judy; And Others
In the quest for a definition of the good reading teacher, a review of the literature shows that new or copious materials, one specific teaching method, and static teaching behaviors are not responsible for effective teaching. However, observations of five reading teachers, with good references and good reputations but with widely divergent…
Hennekinne, Jacques-Antoine; Gohier, Martine; Maire, Tiphaine; Lapeyre, Christiane; Lombard, Bertrand; Dragacci, Sylviane
2003-01-01
The European Commission has designed a network of European Union National Reference Laboratories (EU-NRLs), coordinated by a Community Reference Laboratory (CRL), for the control of hygiene of milk and milk products (Council Directive 92/46/EEC). As a common contaminant of milk and milk products such as cheese, staphylococcal enterotoxins are often involved in human outbreaks and should be monitored regularly. The main tasks of the CRL were to select and transfer to the EU-NRLs a reference method for the detection of enterotoxins, and to set up proficiency testing to evaluate the competency of the European laboratory network. The first interlaboratory exercise was performed on samples of freeze-dried cheese inoculated with 2 levels of staphylococcal enterotoxins (0.1 and 0.25 ng/g) and on an uninoculated control. These levels were chosen considering the EU regulation for staphylococcal enterotoxins in milk and milk products and the limit of detection of the enzyme-linked immunosorbent assay test recommended in the reference method. The trial was conducted according to the recommendations of ISO Guide 43. Results produced by the laboratories were compiled and compared through statistical analysis. Except for data from 2 laboratories for the uninoculated control and the cheese inoculated at 0.1 ng/g, all laboratories produced satisfactory results, showing the ability of the EU-NRL network to monitor this enterotoxin contaminant.
Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio; Rispoli, Attilio
2010-01-01
This paper presents an innovative method for estimating the attitude of airborne electro-optical cameras with respect to the onboard autonomous navigation unit. The procedure is based on the use of attitude measurements under static conditions taken by an inertial unit and carrier-phase differential Global Positioning System to obtain accurate camera position estimates in the aircraft body reference frame, while image analysis allows line-of-sight unit vectors in the camera based reference frame to be computed. The method has been applied to the alignment of the visible and infrared cameras installed onboard the experimental aircraft of the Italian Aerospace Research Center and adopted for in-flight obstacle detection and collision avoidance. Results show an angular uncertainty on the order of 0.1° (rms). PMID:22315559
TECRA Unique test for rapid detection of Salmonella in food: collaborative study.
Hughes, D; Dailianis, A E; Hill, L; McIntyre, D A; Anderson, A
2001-01-01
The TECRA Unique Salmonella test uses the principle of immunoenrichment to allow rapid detection of salmonellae in food. A collaborative study was conducted to compare the TECRA Unique Salmonella test with the reference culture method given in the U.S. Food and Drug Administration's Bacteriological Analytical Manual. Three food types (milk powder, pepper, and soy flour) were analyzed in Australia and 2 food types (milk chocolate and dried egg) were analyzed in the United States. Forty-one collaborators participated in the study. For each of the 5 foods at each of the 3 levels, a comparison showed no significant differences (p ≥ 0.05) in the proportion of positive test samples for Unique and that for the reference method, using the chi-square test for independence with continuity correction.
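A chi-square comparison of positive proportions with continuity correction can be reproduced with SciPy. The counts below are hypothetical, not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: positive/negative counts for the rapid test
# and the reference culture method on the same set of inoculated samples.
table = np.array([[58, 14],    # rapid test:       positive, negative
                  [55, 17]])   # reference method: positive, negative
# Yates' continuity correction is applied by default for 2x2 tables.
chi2, p, dof, expected = chi2_contingency(table)
print(p > 0.05)  # → True: no significant difference in positive proportions
```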
Unsteady Cascade Aerodynamic Response Using a Multiphysics Simulation Code
NASA Technical Reports Server (NTRS)
Lawrence, C.; Reddy, T. S. R.; Spyropoulos, E.
2000-01-01
The multiphysics code Spectrum(TM) is applied to calculate the unsteady aerodynamic pressures of an oscillating cascade of airfoils representing a blade row of a turbomachinery component. Multiphysics simulation is based on a single computational framework for the modeling of multiple interacting physical phenomena, in the present case between fluids and structures. Interaction constraints are enforced in a fully coupled manner using the augmented-Lagrangian method. The arbitrary Lagrangian-Eulerian method is utilized to account for deformable fluid domains resulting from blade motions. Unsteady pressures are calculated for a cascade designated as the tenth standard, undergoing plunging and pitching oscillations. The predicted unsteady pressures are compared with those obtained from an unsteady Euler code referred to in the literature. The Spectrum(TM) code predictions showed good correlation for the cases considered.
Context-dependent logo matching and recognition.
Sahbi, Hichem; Ballan, Lamberto; Serra, Giuseppe; Del Bimbo, Alberto
2013-03-01
We contribute, through this paper, to the design of a novel variational framework able to match and recognize multiple instances of multiple reference logos in image archives. Reference logos and test images are seen as constellations of local features (interest points, regions, etc.) and matched by minimizing an energy function mixing: 1) a fidelity term that measures the quality of feature matching, 2) a neighborhood criterion that captures feature co-occurrence/geometry, and 3) a regularization term that controls the smoothness of the matching solution. We also introduce a detection/recognition procedure and study its theoretical consistency. Finally, we show the validity of our method through extensive experiments on the challenging MICC-Logos dataset. Our method outperforms baseline as well as state-of-the-art matching/recognition procedures by 20%.
Schmitz, E M H; Boonen, K; van den Heuvel, D J A; van Dongen, J L J; Schellings, M W M; Emmen, J M A; van der Graaf, F; Brunsveld, L; van de Kerkhof, D
2014-10-01
Three novel direct oral anticoagulants (DOACs) have recently been registered by the Food and Drug Administration and European Medicines Agency Commission: dabigatran, rivaroxaban, and apixaban. To quantify DOACs in plasma, various dedicated coagulation assays have been developed. To develop and validate a reference ultra-performance liquid chromatography - tandem mass spectrometry (UPLC-MS/MS) method and to evaluate the analytical performance of several coagulation assays for quantification of dabigatran, rivaroxaban, and apixaban. The developed UPLC-MS/MS method was validated by determination of precision, accuracy, specificity, matrix effects, lower limits of detection, carry-over, recovery, stability, and robustness. The following coagulation assays were evaluated for accuracy and precision: laboratory-developed (LD) diluted thrombin time (dTT), Hemoclot dTT, Pefakit PiCT, ECA, Liquid anti-Xa, Biophen Heparin (LRT), and Biophen DiXal anti-Xa. Agreement between the various coagulation assays and UPLC-MS/MS was determined with random samples from patients using dabigatran or rivaroxaban. The UPLC-MS/MS method was shown to be accurate, precise, sensitive, stable, and robust. The dabigatran coagulation assay showing the best precision, accuracy and agreement with the UPLC-MS/MS method was the LD dTT test. For rivaroxaban, the anti-factor Xa assays were superior to the PiCT-Xa assay with regard to precision, accuracy, and agreement with the reference method. For apixaban, the Liquid anti-Xa assay was superior to the PiCT-Xa assay. Statistically significant differences were observed between the various coagulation assays as compared with the UPLC-MS/MS reference method. It is currently unknown whether these differences are clinically relevant. When DOACs are quantified with coagulation assays, comparison with a reference method as part of proficiency testing is therefore pivotal. © 2014 International Society on Thrombosis and Haemostasis.
Stability of steady hand force production explored across spaces and methods of analysis.
de Freitas, Paulo B; Freitas, Sandra M S F; Lewis, Mechelle M; Huang, Xuemei; Latash, Mark L
2018-06-01
We used the framework of the uncontrolled manifold (UCM) hypothesis and explored the reliability of several outcome variables across different spaces of analysis during a very simple four-finger accurate force production task. Fourteen healthy, young adults performed the accurate force production task with each hand on 3 days. Small spatial finger perturbations were generated by the "inverse piano" device three times per trial (lifting the fingers 1 cm/0.5 s and lowering them). The data were analyzed using the following main methods: (1) computation of indices of the structure of inter-trial variance and motor equivalence in the space of finger forces and finger modes, and (2) analysis of referent coordinates and apparent stiffness values for the hand. Maximal voluntary force and the index of enslaving (unintentional finger force production) showed good to excellent reliability. Strong synergies stabilizing total force were reflected in both structure of variance and motor equivalence indices. Variance within the UCM and the index of motor equivalent motion dropped over the trial duration and showed good to excellent reliability. Variance orthogonal to the UCM and the index of non-motor equivalent motion dropped over the 3 days and showed poor to moderate reliability. Referent coordinate and apparent stiffness indices co-varied strongly and both showed good reliability. In contrast, the computed index of force stabilization showed poor reliability. The findings are interpreted within the scheme of neural control with referent coordinates involving the hierarchy of two basic commands, the r-command and c-command. The data suggest natural drifts in the finger force space, particularly within the UCM. We interpret these drifts as reflections of a trade-off between stability and optimization of action. The implications of these findings for the UCM framework and future clinical applications are explored in the discussion. 
Indices of the structure of variance and motor equivalence show good reliability and can be recommended for applied studies.
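For a four-finger total-force task, the UCM is the subspace of finger-force combinations that leave total force unchanged (the orthogonal complement of the force-summing direction). A numpy sketch of the standard per-degree-of-freedom variance split, with synthetic trials (this is the generic decomposition, not the authors' full analysis pipeline):

```python
import numpy as np

def ucm_variance(forces):
    """Split inter-trial variance of four finger forces (trials x 4) into
    a component within the UCM (total force unchanged, 3 DOFs) and a
    component orthogonal to it (1 DOF), each normalised per DOF."""
    e = np.ones(4) / 2.0                     # unit vector along which total force changes
    dev = forces - forces.mean(axis=0)       # deviations from the mean force pattern
    f_ort = dev @ e                          # component that changes total force
    v_ort = np.mean(f_ort ** 2)              # 1 DOF orthogonal to the UCM
    v_tot = np.mean(np.sum(dev ** 2, axis=1))
    v_ucm = (v_tot - v_ort) / 3.0            # 3 DOFs within the UCM
    return v_ucm, v_ort

# Synthetic trials: finger forces co-vary so total force stays nearly constant.
rng = np.random.default_rng(0)
e = np.ones(4) / 2.0
trials = rng.standard_normal((200, 4))
trials -= 0.95 * np.outer(trials @ e, e)     # suppress total-force variation
v_ucm, v_ort = ucm_variance(trials)
print(v_ucm > v_ort)  # → True: a force-stabilising synergy
```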
A comparative analysis of the density of the SNOMED CT conceptual content for semantic harmonization
He, Zhe; Geller, James; Chen, Yan
2015-01-01
Objectives: Medical terminologies vary in the amount of concept information (the “density”) represented, even in the same sub-domains. This causes problems in terminology mapping, semantic harmonization and terminology integration. Moreover, complex clinical scenarios need to be encoded by a medical terminology with comprehensive content. SNOMED Clinical Terms (SNOMED CT), a leading clinical terminology, was reported to lack concepts and synonyms, problems that cannot be fully alleviated by using post-coordination. Therefore, a scalable solution is needed to enrich the conceptual content of SNOMED CT. We are developing a structure-based, algorithmic method to identify potential concepts for enriching the conceptual content of SNOMED CT and to support semantic harmonization of SNOMED CT with selected other Unified Medical Language System (UMLS) terminologies. Methods: We first identified a subset of English terminologies in the UMLS that have ‘PAR’ relationship labeled with ‘IS_A’ and over 10% overlap with one or more of the 19 hierarchies of SNOMED CT. We call these “reference terminologies” and we note that our use of this name is different from the standard use. Next, we defined a set of topological patterns across pairs of terminologies, with SNOMED CT being one terminology in each pair and the other being one of the reference terminologies. We then explored how often these topological patterns appear between SNOMED CT and each reference terminology, and how to interpret them. Results: Four viable reference terminologies were identified. Large density differences between terminologies were found. Expected interpretations of these differences were indeed observed, as follows. A random sample of 299 instances of special topological patterns (“2:3 and 3:2 trapezoids”) showed that 39.1% and 59.5% of analyzed concepts in SNOMED CT and in a reference terminology, respectively, were deemed to be alternative classifications of the same conceptual content. In 30.5% and 17.6% of the cases, it was found that intermediate concepts could be imported into SNOMED CT or into the reference terminology, respectively, to enhance their conceptual content, if approved by a human curator. Other cases included synonymy and errors in one of the terminologies. Conclusion: These results show that structure-based algorithmic methods can be used to identify potential concepts to enrich SNOMED CT and the four reference terminologies. The comparative analysis has the future potential of supporting terminology authoring by suggesting new content to improve content coverage and semantic harmonization between terminologies. PMID:25890688
NASA Astrophysics Data System (ADS)
Kang, Qian; Ru, Qingguo; Liu, Yan; Xu, Lingyan; Liu, Jia; Wang, Yifei; Zhang, Yewen; Li, Hui; Zhang, Qing; Wu, Qing
2016-01-01
An on-line near infrared (NIR) spectroscopy monitoring method with an appropriate multivariate calibration method was developed for the extraction process of Fu-fang Shuanghua oral solution (FSOS). On-line NIR spectra were collected through two fiber optic probes, which were designed to transmit NIR radiation through a 2 mm flange. Partial least squares (PLS), interval PLS (iPLS) and synergy interval PLS (siPLS) algorithms were compared for building the calibration regression models. During the extraction process, NIR spectroscopy was employed to determine the chlorogenic acid (CA) content, the total phenolic acids content (TPC), the total flavonoids content (TFC) and the soluble solids content (SSC). High performance liquid chromatography (HPLC), ultraviolet spectrophotometry (UV) and loss-on-drying methods were employed as reference methods. Experimental results showed that the siPLS model performs best compared with PLS and iPLS. The calibration models for CA, TPC, TFC and SSC had high determination coefficients (R2) (0.9948, 0.9992, 0.9950 and 0.9832) and low root mean square errors of cross validation (RMSECV) (0.0113, 0.0341, 0.1787 and 1.2158), which indicate a good correlation between reference values and NIR predicted values. The overall results show that the on-line detection method could be feasible in real application and would be of great value for monitoring the mixed decoction process of FSOS and other Chinese patent medicines.
40 CFR 75.22 - Reference test methods.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Reference test methods. 75.22 Section...) CONTINUOUS EMISSION MONITORING Operation and Maintenance Requirements § 75.22 Reference test methods. (a) The owner or operator shall use the following methods, which are found in appendices A-1 through A-4 to part...
7 CFR 801.7 - Reference methods and tolerances for near-infrared spectroscopy (NIRS) analyzers.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Reference methods and tolerances for near-infrared spectroscopy (NIRS) analyzers. 801.7 Section 801.7 Agriculture Regulations of the Department of Agriculture... methods and tolerances for near-infrared spectroscopy (NIRS) analyzers. (a) Reference methods. (1) The...
78 FR 40000 - Method for the Determination of Lead in Total Suspended Particulate Matter
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-03
.... Purpose of the New Reference Method B. Rationale for Selection of the New Reference Method C. Comments on.../files/ambient/criteria/reference-equivalent-methods-list.pdf . C. Comments on the Proposed Rule On... information collection requirements beyond those imposed by the existing Pb monitoring requirements. C...
Chowdhury, Muhammad E H; Mullinger, Karen J; Glover, Paul; Bowtell, Richard
2014-01-01
Large artefacts compromise EEG data quality during simultaneous fMRI. These artefact voltages pose heavy demands on the bandwidth and dynamic range of EEG amplifiers and mean that even small fractional variations in the artefact voltages give rise to significant residual artefacts after average artefact subtraction. Any intrinsic reduction in the magnitude of the artefacts would be highly advantageous, allowing data with a higher bandwidth to be acquired without amplifier saturation, as well as reducing the residual artefacts that can easily swamp signals from brain activity measured using current methods. Since these problems currently limit the utility of simultaneous EEG-fMRI, new approaches for reducing the magnitude and variability of the artefacts are required. One such approach is the use of an EEG cap that incorporates electrodes embedded in a reference layer that has similar conductivity to tissue and is electrically isolated from the scalp. With this arrangement, the artefact voltages produced on the reference layer leads by time-varying field gradients, cardiac pulsation and subject movement are similar to those induced in the scalp leads, but neuronal signals are not detected in the reference layer. Taking the difference of the voltages in the reference and scalp channels will therefore reduce the artefacts, without affecting sensitivity to neuronal signals. Here, we test this approach by using a simple experimental realisation of the reference layer to investigate the artefacts induced on the leads attached to the reference layer and scalp and to evaluate the degree of artefact attenuation that can be achieved via reference layer artefact subtraction (RLAS). Through a series of experiments on phantoms and human subjects, we show that RLAS significantly reduces the gradient (GA), pulse (PA) and motion (MA) artefacts, while allowing accurate recording of neuronal signals. 
The results indicate that RLAS generally outperforms AAS when motion is present in the removal of the GA and PA, while the combination of AAS and RLAS always produces higher artefact attenuation than AAS. Additionally, we demonstrate that RLAS greatly attenuates the unpredictable and highly variable MAs that are very hard to remove using post-processing methods. © 2013. Published by Elsevier Inc. All rights reserved.
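At its core, RLAS is a sample-by-sample subtraction of each reference-layer channel from the corresponding scalp channel: the artefact is common to both leads, while neuronal signal appears only in the scalp lead. A toy sketch under that idealized assumption (all signal values are made up):

```python
def rlas(scalp, reference):
    """Reference layer artefact subtraction: subtract the reference-layer
    channel from the scalp channel, sample by sample."""
    return [s - r for s, r in zip(scalp, reference)]

# toy illustration: the scalp channel carries artefact + neuronal signal,
# the reference-layer channel carries (ideally) only the artefact
artefact = [10.0, -20.0, 15.0, 5.0]
neuronal = [0.1, 0.2, -0.1, 0.05]
scalp = [a + n for a, n in zip(artefact, neuronal)]
cleaned = rlas(scalp, artefact)  # recovers the neuronal signal in this ideal case
```

In practice the artefact voltages on the two layers are only similar, not identical, which is why the paper reports attenuation rather than complete removal.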
NASA Astrophysics Data System (ADS)
Hagita, Norihiro; Sawaki, Minako
1995-03-01
Most conventional methods in character recognition extract geometrical features such as stroke direction, connectivity of strokes, etc., and compare them with reference patterns in a stored dictionary. Unfortunately, geometrical features are easily degraded by blurs, stains and the graphical background designs used in Japanese newspaper headlines. This noise must be removed before recognition commences, but no preprocessing method is completely accurate. This paper proposes a method for recognizing degraded characters and characters printed on graphical background designs. This method is based on the binary image feature method and uses binary images as features. A new similarity measure, called the complementary similarity measure, is used as a discriminant function. It compares the similarity and dissimilarity of binary patterns with reference dictionary patterns. Experiments are conducted using the standard character database ETL-2, which consists of machine-printed Kanji, Hiragana, Katakana, alphanumeric, and special characters. The results show that this method is much more robust against noise than the conventional geometrical feature method. It also achieves high recognition rates of over 92% for characters with textured foregrounds, over 98% for characters with textured backgrounds, over 98% for outline fonts, and over 99% for reverse contrast characters.
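The paper's exact complementary similarity measure is not reproduced here, but the idea of weighing agreement against disagreement over a 2x2 contingency of binary pixels can be illustrated with the closely related phi coefficient (an illustrative stand-in, not the authors' normalization):

```python
def phi_similarity(pattern, template):
    """Score two equal-length binary patterns by contrasting similarity
    (matching foreground/background pixels: a, d) with dissimilarity
    (mismatches: b, c). Illustrative stand-in for the complementary
    similarity measure; the paper's normalization may differ."""
    a = sum(1 for p, t in zip(pattern, template) if p == 1 and t == 1)
    b = sum(1 for p, t in zip(pattern, template) if p == 1 and t == 0)
    c = sum(1 for p, t in zip(pattern, template) if p == 0 and t == 1)
    d = sum(1 for p, t in zip(pattern, template) if p == 0 and t == 0)
    denom = ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5
    return (a * d - b * c) / denom if denom else 0.0
```

A perfect match scores 1.0 and a fully inverted pattern scores -1.0, which is what makes such a measure robust to reverse-contrast characters.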
Unconditionally stable finite-difference time-domain methods for modeling the Sagnac effect
NASA Astrophysics Data System (ADS)
Novitski, Roman; Scheuer, Jacob; Steinberg, Ben Z.
2013-02-01
We present two unconditionally stable finite-difference time-domain (FDTD) methods for modeling the Sagnac effect in rotating optical microsensors. The methods are based on the implicit Crank-Nicolson scheme, adapted to hold in the rotating system reference frame—the rotating Crank-Nicolson (RCN) methods. The first method (RCN-2) is second order accurate in space whereas the second method (RCN-4) is fourth order accurate. Both methods are second order accurate in time. We show that the RCN-4 scheme is more accurate and has better dispersion isotropy. The numerical results show good correspondence with the expression for the classical Sagnac resonant frequency splitting when using group refractive indices of the resonant modes of a microresonator. Also we show that the numerical results are consistent with the perturbation theory for the rotating degenerate microcavities. We apply our method to simulate the effect of rotation on an entire Coupled Resonator Optical Waveguide (CROW) consisting of a set of coupled microresonators. Preliminary results validate the formation of a rotation-induced gap at the center of a transfer function of a CROW.
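The rotating-frame RCN schemes themselves are beyond a short sketch, but the unconditional stability of the underlying Crank-Nicolson discretization is easy to demonstrate on a simpler model problem. Below is a minimal 1-D Crank-Nicolson diffusion solver (fixed zero boundaries, Thomas algorithm for the tridiagonal solve); the grid size and time step are arbitrary, and the scheme stays stable even for time steps that would make an explicit scheme blow up:

```python
def crank_nicolson_diffusion(u, alpha, nsteps):
    """Crank-Nicolson for u_t = u_xx on a uniform grid with fixed ends.
    alpha = dt / (2 * dx**2); the scheme is stable for any alpha > 0."""
    n = len(u)
    for _ in range(nsteps):
        # right-hand side: (I + alpha*L) u, boundaries held fixed
        rhs = ([u[0]]
               + [u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
                  for i in range(1, n - 1)]
               + [u[-1]])
        # tridiagonal system (I - alpha*L) u_new = rhs
        a = [0.0] + [-alpha] * (n - 2) + [0.0]          # sub-diagonal
        b = [1.0] + [1 + 2 * alpha] * (n - 2) + [1.0]   # diagonal
        c = [0.0] + [-alpha] * (n - 2) + [0.0]          # super-diagonal
        # Thomas algorithm: forward sweep
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = c[0] / b[0], rhs[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (rhs[i] - a[i] * dp[i - 1]) / m
        # back substitution
        u = [0.0] * n
        u[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            u[i] = dp[i] - cp[i] * u[i + 1]
    return u
```

With zero boundary values the update is an L2 contraction, so the solution energy never grows regardless of alpha, which is the property the RCN schemes inherit from the implicit Crank-Nicolson construction.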
Alkahtani, Shaea A
2017-03-21
The aim of this study was to determine reference values for sarcopenia indices using different methods in healthy Saudi young men. Participants included 232 Saudi men aged between 20 and 35 years. The study measured anthropometric indices, blood pressure, hand grip strength, and lean muscle mass using dual-energy X-ray absorptiometry (DXA), and bioelectrical impedance analysis (BIA) was performed using Inbody 770 and Tanita 980 devices. Using DXA, the mean value of appendicular lean mass divided by the height squared (ALM/ht²) was found to be 8.97 ± 1.23 kg/m²; hand grip strength measured 42.8 ± 7.6 kg. While the differences between DXA and BIA (Tanita) were significant for all parameters, the differences between DXA and Inbody values were significant only for ALM parameters. Inbody sensitivity and specificity values were 73% and 95.9%, respectively. The kappa statistic (κ = 0.80; P < 0.001) showed good agreement between Inbody and DXA, whereas Tanita sensitivity and specificity values were 54.2% and 98.3%, respectively. Bland-Altman plots for differences in lean mass values between Tanita, Inbody, and DXA methods showed very high bias for Tanita and DXA, with significant differences (P < 0.001). The cut-off values for sarcopenia indices for Saudi young men are different from those of other ethnicities. The use of tailored cut-off reference values instead of a general cut-off for BIA devices is recommended.
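The agreement statistics reported here (sensitivity, specificity, Cohen's kappa against DXA as the reference) all derive from a 2x2 confusion table. A minimal sketch with made-up counts:

```python
def sens_spec_kappa(tp, fn, fp, tn):
    """Sensitivity, specificity and Cohen's kappa from a 2x2 confusion
    table of a test method against a reference method."""
    n = tp + fn + fp + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    p_observed = (tp + tn) / n
    # chance agreement from the marginal totals
    p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (p_observed - p_expected) / (1 - p_expected)
    return sensitivity, specificity, kappa

# illustrative counts, not the study's data
sens, spec, kappa = sens_spec_kappa(tp=45, fn=5, fp=2, tn=48)
```

A kappa around 0.8, as found for Inbody versus DXA, is conventionally read as good to very good agreement.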
Multicentre evaluation of the Premier Hb9210 HbA1c analyser
John, W. Garry; Little, Randie; Sacks, David B.; Weykamp, Cas; Lenters-Westra, Erna; Hornsby, Theresa; Zhao, Zhen; Siebelder, Carla; Tennill, Alethea; English, Emma
2017-01-01
Background The accurate and precise quantification of HbA1c is essential for the diagnosis and routine monitoring of patients with diabetes. We report an evaluation of the Trinity Biotech Premier Hb9210 analyser (Bray, Ireland/Kansas City, US), a boronate affinity chromatography-based high performance liquid chromatography (HPLC) system for the measurement of glycated haemoglobin. Methods We evaluated the analytical performance of the Hb9210 as part of a multicentre evaluation. The effect of haemoglobin variants, other potential interferences and the performance in comparison to both the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) and National Glycohemoglobin Standardization Program (NGSP) reference systems, was assessed. Most of the centres participating also act as reference laboratories for both the IFCC standardisation network for HbA1c and the NGSP. Results The combined data from all centres showed total CVs of 2.71%, 2.32% and 2.14% at low, medium and high values, respectively, for mmol/mol (SI units) and 1.62%, 1.59% and 1.68% for % (NGSP units), which are well below the recommended upper limits of 3% CV for SI (IFCC) units and 2% CV for % (NGSP). The analyser showed a good correlation to HbA1c methods currently used in clinical practice and the IFCC reference method procedure. Haemoglobin variants AC, AS, AE and AD do not affect the measurement of HbA1c. Overall the Hb9210 performs well across the whole analytical range. Conclusions The Hb9210 performs well and is suitable for clinical application in the analysis of HbA1c. PMID:25274956
Swezey, Robert; Shinn, Walter; Green, Carol; Drover, David R.; Hammer, Gregory B.; Schulman, Scott R.; Zajicek, Anne; Jett, David A.; Boss, Gerry R.
2013-01-01
Most hospital laboratories do not measure blood cyanide concentrations, and samples must be sent to reference laboratories. A simple method is needed for measuring cyanide in hospitals. The authors previously developed a method to quantify cyanide based on the high binding affinity of the vitamin B12 analog, cobinamide, for cyanide and a major spectral change observed for cyanide-bound cobinamide. This method is now validated in human blood, and the findings include a mean inter-assay accuracy of 99.1%, precision of 8.75% and a lower limit of quantification of 3.27 µM cyanide. The method was applied to blood samples from children treated with sodium nitroprusside and it yielded measurable results in 88 of 172 samples (51%), whereas the reference laboratory yielded results in only 19 samples (11%). In all 19 samples, the cobinamide-based method also yielded measurable results. The two methods showed reasonable agreement when analyzed by linear regression, but not when analyzed by a standard error of the estimate or paired t-test. Differences in results between the two methods may be because samples were assayed at different times on different sample types. The cobinamide-based method is applicable to human blood, and can be used in hospital laboratories and emergency rooms. PMID:23653045
Feldsine, Philip; Kaur, Mandeep; Shah, Khyati; Immerman, Amy; Jucker, Markus; Lienau, Andrew
2015-01-01
Assurance GDS™ for Salmonella Tq has been validated according to the AOAC INTERNATIONAL Methods Committee Guidelines for Validation of Microbiological Methods for Food and Environmental Surfaces for the detection of Salmonella in selected foods and on environmental surfaces (Official Method of Analysis℠ 2009.03, Performance Tested Method℠ No. 050602). The method also completed AFNOR validation (following the ISO 16140 standard) compared to the reference method EN ISO 6579. For AFNOR, GDS was given a scope covering all human food, animal feed stuff, and environmental surfaces (Certificate No. TRA02/12-01/09). Results showed that Assurance GDS for Salmonella (GDS) has high sensitivity and is equivalent to the reference culture methods for the detection of motile and non-motile Salmonella. As part of the aforementioned validations, inclusivity and exclusivity studies, stability, and ruggedness studies were also conducted. Assurance GDS has 100% inclusivity and exclusivity among the 100 Salmonella serovars and 35 non-Salmonella organisms analyzed. To add to the scope of the Assurance GDS for Salmonella method, a matrix extension study was conducted, following the AOAC guidelines, to validate the application of the method for selected spices, specifically curry powder, cumin powder, and chili powder, for the detection of Salmonella.
Naser, Fuad J; Mahieu, Nathaniel G; Wang, Lingjue; Spalding, Jonathan L; Johnson, Stephen L; Patti, Gary J
2018-02-01
Although it is common in untargeted metabolomics to apply reversed-phase liquid chromatography (RPLC) and hydrophilic interaction liquid chromatography (HILIC) methods that have been systematically optimized for lipids and central carbon metabolites, here we show that these established protocols provide poor coverage of semipolar metabolites because of inadequate retention. Our objective was to develop an RPLC approach that improved detection of these metabolites without sacrificing lipid coverage. We initially evaluated columns recently released by Waters under the CORTECS line by analyzing 47 small-molecule standards that evenly span the nonpolar and semipolar ranges. An RPLC method commonly used in untargeted metabolomics was considered a benchmarking reference. We found that highly nonpolar and semipolar metabolites cannot be reliably profiled with any single method because of retention and solubility limitations of the injection solvent. Instead, we optimized a multiplexed approach using the CORTECS T3 column to analyze semipolar compounds and the CORTECS C8 column to analyze lipids. Strikingly, we determined that combining these methods allowed detection of 41 of the total 47 standards, whereas our reference RPLC method detected only 10 of the 47 standards. We then applied credentialing to compare method performance at the comprehensive scale. The tandem method showed more than a fivefold increase in credentialing coverage relative to our RPLC benchmark. Our results demonstrate that comprehensive coverage of metabolites amenable to reversed-phase separation necessitates two reconstitution solvents and chromatographic methods. Thus, we suggest complementing HILIC methods with a dual T3 and C8 RPLC approach to increase coverage of semipolar metabolites and lipids for untargeted metabolomics.
Graphical abstract: Analysis of semipolar and nonpolar metabolites necessitates two reversed-phase liquid chromatography (RPLC) methods, which extend metabolome coverage more than fivefold for untargeted profiling. HILIC, hydrophilic interaction liquid chromatography.
Validity of two alternative systems for measuring vertical jump height.
Leard, John S; Cirillo, Melissa A; Katsnelson, Eugene; Kimiatek, Deena A; Miller, Tim W; Trebincevic, Kenan; Garbalosa, Juan C
2007-11-01
Vertical jump height is frequently used by coaches, health care professionals, and strength and conditioning professionals to objectively measure function. The purpose of this study is to determine the concurrent validity of the jump and reach method (Vertec) and the contact mat method (Just Jump) in assessing vertical jump height when compared with the criterion reference 3-camera motion analysis system. Thirty-nine college students, 25 females and 14 males between the ages of 18 and 25 (mean age 20.65 years), were instructed to perform the countermovement jump. Reflective markers were placed at the base of the individual's sacrum for the 3-camera motion analysis system to measure vertical jump height. The subject was then instructed to stand on the Just Jump mat beneath the Vertec and perform the jump. Measurements were recorded from each of the 3 systems simultaneously for each jump. The Pearson r statistic between the video and the jump and reach (Vertec) was 0.906. The Pearson r between the video and contact mat (Just Jump) was 0.967. Both correlations were significant at the 0.01 level. Analysis of variance showed a significant difference among the 3 means F(2,235) = 5.51, p < 0.05. The post hoc analysis showed a significant difference between the criterion reference (M = 0.4369 m) and the Vertec (M = 0.3937 m, p = 0.005) but not between the criterion reference and the Just Jump system (M = 0.4420 m, p = 0.972). The Just Jump method of measuring vertical jump height is a valid measure when compared with the 3-camera system. The Vertec was found to have a high correlation with the criterion reference, but the mean differed significantly. This study indicates that a higher degree of confidence is warranted when comparing Just Jump results with a 3-camera system study.
Yan, Cunling; Hu, Jian; Yang, Jia; Chen, Zhaoyun; Li, Huijun; Wei, Lianhua; Zhang, Wei; Xing, Hao; Sang, Guoyao; Wang, Xiaoqin; Han, Ruilin; Liu, Ping; Li, Zhihui; Li, Zhiyan; Huang, Ying; Jiang, Li; Li, Shunjun; Dai, Shuyang; Wang, Nianyue; Yang, Yongfeng; Ma, Li; Soh, Andrew; Beshiri, Agim; Shen, Feng; Yang, Tian; Fan, Zhuping; Zheng, Yijie; Chen, Wei
2018-04-01
Protein induced by vitamin K absence or antagonist-II (PIVKA-II) has been widely used as a biomarker for liver cancer diagnosis in Japan for decades. However, the reference intervals for serum ARCHITECT PIVKA-II have not been established in the Chinese population. Thus, this study aimed to measure serum PIVKA-II levels in healthy Chinese subjects. This is a sub-analysis from the prospective, cross-sectional and multicenter study (ClinicalTrials.gov Identifier: NCT03047603). A total of 892 healthy participants (777 Han and 115 Uygur) with complete health checkup results were recruited from 7 regional centers in China. Serum PIVKA-II level was measured by ARCHITECT immunoassay. All 95% reference ranges were estimated by nonparametric method. The distribution of PIVKA-II values showed significant difference with ethnicity and sex, but not age. The 95% reference range of PIVKA-II was 13.62-40.38 mAU/ml in Han Chinese subjects and 15.16-53.74 mAU/ml in Uygur subjects. PIVKA-II level was significantly higher in males than in females (P < 0.001). The 95% reference range of PIVKA-II was 15.39-42.01 mAU/ml in Han males while 11.96-39.13 mAU/ml in Han females. The reference interval of serum PIVKA-II on the Architect platform was established in healthy Chinese adults. This will be valuable for future clinical and laboratory studies performed using the Architect analyzer. Different ethnic backgrounds and analytical methods underline the need for redefining the reference interval of analytes such as PIVKA-II, in central laboratories in different countries. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
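The nonparametric 95% reference range used here is simply the central 95% of the sorted values, i.e. the 2.5th and 97.5th percentiles. A minimal sketch using linear interpolation between order statistics (the CLSI procedure adds sample-size and outlier requirements not shown here):

```python
def reference_interval(values, low=0.025, high=0.975):
    """Nonparametric reference interval: the central 95% of the
    sorted sample (2.5th and 97.5th percentiles)."""
    s = sorted(values)
    n = len(s)

    def pct(p):
        # rank-based percentile with linear interpolation
        idx = p * (n - 1)
        lo = int(idx)
        frac = idx - lo
        if lo + 1 >= n:
            return s[lo]
        return s[lo] + frac * (s[lo + 1] - s[lo])

    return pct(low), pct(high)
```

Applied per subgroup (ethnicity, sex), this is how separate intervals such as 13.62-40.38 mAU/ml for Han subjects and 15.16-53.74 mAU/ml for Uygur subjects arise from one assay.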
40 CFR 53.11 - Cancellation of reference or equivalent method designation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Cancellation of reference or equivalent method designation. 53.11 Section 53.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General...
Fast lossless compression via cascading Bloom filters
Rozov, Roye; Shamir, Ron; Halperin, Eran
2014-01-01
Background Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. Results We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Conclusions Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage.
In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space slightly. PMID:25252952
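BARCODE's central data structure is the Bloom filter: reads are hashed into a filter at encode time, and decoding re-queries the same filter with read-length windows of the reference genome. A minimal Bloom filter sketch (the size and hash scheme are illustrative, not the paper's parameters):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over a fixed-size bit array.
    Membership queries have no false negatives, only (rare) false positives."""

    def __init__(self, size, k):
        self.size, self.k = size, k
        self.bits = bytearray(size)

    def _positions(self, item):
        # derive k positions from salted SHA-256 digests (illustrative scheme)
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))
```

Because false positives are possible, a single filter cannot decode reads exactly; cascading additional filters that record the false positives of the previous level, as BARCODE does, restores losslessness.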
Standard reference materials: Thermal conductivity of electrolytic iron, SRM 734, from 4 to 300 K
NASA Technical Reports Server (NTRS)
Hust, J. G.; Sparks, L. L.
1971-01-01
Thermal conductivity data were obtained by the axial one-dimensional heat flow method for a cylindrical rod 3.6 mm in diameter and 23 cm long with an electric heater at one end and a temperature controlled sink at the other. Variability of this iron was studied by means of electrical residual resistivity ratio measurements on 63 specimens. This study showed that with a two-hour anneal at 1000 °C one can obtain a thermal conductivity Standard Reference Material that has variability of less than 1% in thermal conductivity.
Fernández-Cidón, Bárbara; Padró-Miquel, Ariadna; Alía-Ramos, Pedro; Castro-Castro, María José; Fanlo-Maresma, Marta; Dot-Bach, Dolors; Valero-Politi, José; Pintó-Sala, Xavier; Candás-Estébanez, Beatriz
2017-01-01
High serum concentrations of small dense low-density lipoprotein cholesterol (sd-LDL-c) particles are associated with risk of cardiovascular disease (CVD). Their clinical application has been hindered as a consequence of the laborious current method used for their quantification. The aim was to optimize a simple and fast precipitation method to isolate sd-LDL particles and to establish a reference interval in a Mediterranean population. Forty-five serum samples were collected, and sd-LDL particles were isolated using a modified heparin-Mg²⁺ precipitation method. sd-LDL-c concentration was calculated by subtracting high-density lipoprotein cholesterol (HDL-c) from the total cholesterol measured in the supernatant. This method was compared with the reference method (ultracentrifugation). Reference values were estimated according to the Clinical and Laboratory Standards Institute and The International Federation of Clinical Chemistry and Laboratory Medicine recommendations. sd-LDL-c concentration was measured in serum samples from 79 subjects with no lipid metabolism abnormalities. The Passing-Bablok regression equation was y = 1.52x + 0.07 (slope 95% CI 0.72 to 1.73; intercept 95% CI -0.1 to 0.13), demonstrating no statistically significant differences between the modified precipitation method and the ultracentrifugation reference method. Similarly, no differences were detected when considering only sd-LDL-c from dyslipidemic patients, since the modifications added to the precipitation method facilitated the proper sedimentation of triglycerides and other lipoproteins. The reference interval for sd-LDL-c concentration estimated in a Mediterranean population was 0.04-0.47 mmol/L. An optimization of the heparin-Mg²⁺ precipitation method for sd-LDL particle isolation was performed, and reference intervals were established in a Spanish Mediterranean population.
Measured values were equivalent to those obtained with the reference method, assuring its clinical application when tested in both normolipidemic and dyslipidemic subjects.
NASA Astrophysics Data System (ADS)
Tang, Ronglin; Li, Zhao-Liang; Sun, Xiaomin; Bi, Yuyun
2017-01-01
Surface evapotranspiration (ET) is an important component of water and energy in land and atmospheric systems. This paper investigated whether using variable surface resistances in the reference ET estimates from the full-form Penman-Monteith (PM) equation could improve the upscaled daily ET estimates in the constant reference evaporative fraction (EFr, the ratio of actual to reference grass/alfalfa ET) method on clear-sky days using ground-based measurements. Half-hourly near-surface meteorological variables and eddy covariance (EC) system-measured latent heat flux data on clear-sky days were collected at two sites with different climatic conditions, namely, the subhumid Yucheng station in northern China and the arid Yingke site in northwestern China, and were used as the model input and ground truth, respectively. The results showed that using the Food and Agriculture Organization (FAO)-PM equation, the American Society of Civil Engineers-PM equation, and the full-form PM equation to estimate the reference ET in the constant EFr method produced progressively smaller upscaled daily ET at a given time from midmorning to midafternoon. Using all three PM equations produced the best results at noon at both sites regardless of whether the energy imbalance of the EC measurements was closed. When the EC measurements were not corrected for energy imbalance, using variable surface resistance in the full-form PM equation could improve the ET upscaling in the midafternoon, but may give worse results from midmorning to noon. Site-to-site and time-to-time variations were found in the performances of a given PM equation (with fixed or variable surface resistances) before and after the energy imbalance was closed.
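The constant-EFr upscaling itself is a one-line computation: the ratio of measured to reference ET at one clear-sky instant is assumed to hold over the whole day. A sketch (the numbers are illustrative, not the paper's data):

```python
def upscale_daily_et(et_inst, et_ref_inst, et_ref_daily):
    """Constant reference evaporative fraction (EFr) upscaling:
    EFr measured at a single clear-sky instant is assumed constant
    over the day, so daily actual ET = EFr * daily reference ET."""
    efr = et_inst / et_ref_inst   # instantaneous EFr
    return efr * et_ref_daily     # upscaled daily ET estimate

# illustrative values: instantaneous ET 0.42, reference ET 0.60 (same units),
# daily reference ET 5.0 mm/day
daily_et = upscale_daily_et(0.42, 0.60, 5.0)
```

The paper's comparison amounts to swapping different PM formulations (FAO-PM, ASCE-PM, full-form PM, with fixed or variable surface resistances) into the two reference-ET terms above.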
Side Effects in Time Discounting Procedures: Fixed Alternatives Become the Reference Point
2016-01-01
Typical research on intertemporal choice utilizes a two-alternative forced choice (2AFC) paradigm requiring participants to choose between a smaller sooner and larger later payoff. In the adjusting-amount procedure (AAP) one of the alternatives is fixed and the other is adjusted according to particular choices made by the participant. Such a method makes the alternatives unequal in status and is speculated to make the fixed alternative a reference point for choices, thereby affecting the decision made. The current study shows that fixing different alternatives in the AAP influences discount rates in intertemporal choices. Specifically, individuals’ (N = 283) choices were affected to just the same extent by merely fixing an alternative as when choices were preceded by scenarios explicitly imposing reference points. PMID:27768759
Validity of two methods for estimation of vertical jump height.
Dias, Jonathan Ache; Dal Pupo, Juliano; Reis, Diogo C; Borges, Lucas; Santos, Saray G; Moro, Antônio R P; Borges, Noé G
2011-07-01
The objectives of this study were (a) to determine the concurrent validity of the flight time (FT) and double integration of vertical reaction force (DIF) methods in the estimation of vertical jump height with the video method (VID) as reference; (b) to verify the degree of agreement among the 3 methods; (c) to propose regression equations to predict the jump height using the FT and DIF. Twenty healthy male and female nonathlete college students participated in this study. The experiment involved positioning a contact mat (CTM) on the force platform (FP), with a video camera 3 m from the FP and perpendicular to the sagittal plane of the subject being assessed. Each participant performed 15 countermovement jumps with 60-second intervals between the trials. Significant differences were found between the jump height obtained by VID and the results with FT (p ≤ 0.01) and DIF (p ≤ 0.01), showing that the methods are not valid. Additionally, the DIF showed a greater degree of agreement with the reference method than the FT did, and both presented a systematic error. From the linear regression analysis, prediction equations with a high degree of linearity were determined between the methods VID vs. DIF (R = 0.988) and VID vs. FT (R = 0.979). Therefore, the prediction equations suggested may allow coaches to measure the vertical jump performance of athletes by the FT and DIF, using a CTM or an FP, which represent more practical and viable approaches in the sports field; comparisons can then be made with the results of other athletes evaluated by VID.
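The FT method rests on constant-acceleration kinematics: if takeoff and landing posture are identical, the flight is symmetric, the rise time is half the flight time t, and jump height follows as h = g·t²/8. A minimal sketch:

```python
G = 9.81  # gravitational acceleration, m/s^2

def height_from_flight_time(t):
    """Jump height (m) from flight time t (s), assuming symmetric flight:
    rise time = t/2, so h = 0.5 * G * (t/2)**2 = G * t**2 / 8."""
    return G * t ** 2 / 8
```

The systematic error reported for the FT method arises precisely where this assumption breaks down, e.g. when the landing posture (flexed knees, pointed toes) differs from takeoff.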
40 CFR 53.14 - Modification of a reference or equivalent method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Modification of a reference or equivalent method. 53.14 Section 53.14 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions...
40 CFR 53.8 - Designation of reference and equivalent methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Designation of reference and equivalent methods. 53.8 Section 53.8 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.8...
Reference-free compression of high throughput sequencing data with a probabilistic de Bruijn graph.
Benoit, Gaëtan; Lemaitre, Claire; Lavenier, Dominique; Drezen, Erwan; Dayris, Thibault; Uricaru, Raluca; Rizk, Guillaume
2015-09-14
The data volumes generated by next-generation sequencing (NGS) technologies are now a major concern for both data storage and transmission. This has triggered the need for methods more efficient than general-purpose compression tools, such as the widely used gzip. We present a novel reference-free method to compress data produced by high-throughput sequencing technologies. Our approach, implemented in the software LEON, employs techniques derived from existing assembly principles. The method is based on a reference probabilistic de Bruijn graph, built de novo from the set of reads and stored in a Bloom filter. Each read is encoded as a path in this graph, by memorizing an anchoring k-mer and a list of bifurcations. The same probabilistic de Bruijn graph is used to perform a lossy transformation of the quality scores, which allows higher compression rates to be obtained without losing information pertinent to downstream analyses. LEON was run on various real sequencing datasets (whole genome, exome, RNA-seq, and metagenomics). In all cases, LEON showed higher overall compression ratios than state-of-the-art compression software. On a C. elegans whole-genome sequencing dataset, LEON divided the original file size by more than 20. LEON is open-source software, distributed under the GNU Affero GPL license, available for download at http://gatb.inria.fr/software/leon/.
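The anchor-plus-bifurcations encoding described in this abstract can be illustrated with a toy sketch. This is not the LEON implementation: a plain Python set stands in for the Bloom filter (a real Bloom filter would admit false positives and use far less memory), and k and the function names are illustrative.

```python
K = 5  # k-mer size (illustrative)

def kmers(seq, k=K):
    """All k-mers of a read; their union over reads is the de Bruijn graph."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def encode_read(read, kmer_set, k=K):
    """Encode a read as (anchor k-mer, bifurcation list): walk the graph
    and record the base chosen only where more than one successor exists."""
    anchor = read[:k]
    choices = []
    node = anchor
    for base in read[k:]:
        succs = [b for b in "ACGT" if node[1:] + b in kmer_set]
        if len(succs) > 1:
            choices.append(base)   # ambiguity: remember the branch taken
        node = node[1:] + base
    return anchor, choices

def decode_read(anchor, choices, length, kmer_set, k=K):
    """Re-spell the read by walking the graph from the anchor, consuming
    a recorded choice only at bifurcations."""
    read = anchor
    node = anchor
    it = iter(choices)
    while len(read) < length:
        succs = [b for b in "ACGT" if node[1:] + b in kmer_set]
        base = succs[0] if len(succs) == 1 else next(it)
        read += base
        node = node[1:] + base
    return read
```

When most graph nodes have a unique successor, the bifurcation list is short, which is where the compression gain comes from.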
In situ LTE exposure of the general public: Characterization and extrapolation.
Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc
2012-09-01
In situ radiofrequency (RF) exposure from different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels, with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on the in situ RS and S-SYNC signals is less than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference levels for electric fields. Copyright © 2012 Wiley Periodicals, Inc.
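The core of such an extrapolation can be sketched as follows. This is only the generic square-root form (field scales with the square root of the summed power over all resource elements, since power, not field, adds); the exact extrapolation factor used in the paper depends on the cell configuration, and the subcarrier count below is an assumption for a fully loaded 20 MHz carrier.

```python
import math

def extrapolate_lte_field(e_rs, n_resource_elements):
    """Worst-case LTE field sketch: scale the measured per-resource-element
    reference-signal field e_rs (V/m) to a fully loaded carrier, assuming
    equal power on all n_resource_elements subcarriers. Since power adds,
    the field scales with sqrt(n)."""
    return e_rs * math.sqrt(n_resource_elements)

# Illustrative numbers: 100 resource blocks x 12 subcarriers (assumed).
worst_case = extrapolate_lte_field(0.1, 100 * 12)
```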
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Kuriakose, Jean W.; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Guo, Yanhui; Patel, Smita; Kazerooni, Ella A.
2012-03-01
Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of this study was to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We previously developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or lung disease, or crossing other vessels. In this study, we developed a new vessel refinement method utilizing the curved planar reformation (CPR) technique combined with an optimal path finding method (MHES-CROP). The MHES-segmented vessels, straightened in the CPR volume, were refined using adaptive gray-level thresholding, where the local threshold was obtained from a least-squares estimate of a spline curve fitted to the gray levels of the vessel along the straightened volume. An optimal path finding method based on Dijkstra's algorithm was finally used to trace the correct path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in CTPA interpretation and used as the reference standard. The results show that, for the 32 test VOIs, the average percentage volume error relative to the reference standard was improved from 32.9 ± 10.2% using the MHES method to 9.9 ± 7.9% using the MHES-CROP method. The accuracy of vessel segmentation was improved significantly (p < 0.05). The intraclass correlation coefficient (ICC) of the segmented vessel volume between the automated segmentation and the reference standard was improved from 0.919 to 0.988.
The MHES and MHES-CROP methods were also compared quantitatively with the reference standard using Bland-Altman plots. This preliminary study indicates that the MHES-CROP method has the potential to improve PE detection.
LinkImpute: Fast and Accurate Genotype Imputation for Nonmodel Organisms
Money, Daniel; Gardner, Kyle; Migicovsky, Zoë; Schwaninger, Heidi; Zhong, Gan-Yuan; Myles, Sean
2015-01-01
Obtaining genome-wide genotype data from a set of individuals is the first step in many genomic studies, including genome-wide association and genomic selection. All genotyping methods suffer from some level of missing data, and genotype imputation can be used to fill in the missing data and improve the power of downstream analyses. Model organisms like human and cattle benefit from high-quality reference genomes and panels of reference genotypes that aid in imputation accuracy. In nonmodel organisms, however, genetic and physical maps often are either of poor quality or are completely absent, and there are no panels of reference genotypes available. There is therefore a need for imputation methods designed specifically for nonmodel organisms in which genomic resources are poorly developed and marker order is unreliable or unknown. Here we introduce LinkImpute, a software package based on a k-nearest neighbor genotype imputation method, LD-kNNi, which is designed for unordered markers. No physical or genetic maps are required, and it is designed to work on unphased genotype data from heterozygous species. It exploits the fact that markers useful for imputation often are not physically close to the missing genotype but rather distributed throughout the genome. Using genotyping-by-sequencing data from diverse and heterozygous accessions of apples, grapes, and maize, we compare LD-kNNi with several genotype imputation methods and show that LD-kNNi is fast, comparable in accuracy to the best-existing methods, and exhibits the least bias in allele frequency estimates. PMID:26377960
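The k-nearest-neighbour imputation idea behind LD-kNNi can be illustrated with a toy version. This is a sketch in the spirit of the method, not the LinkImpute implementation: the real LD-kNNi selects the markers most in LD with the missing site before computing distances, whereas this sketch simply uses all markers both samples have called.

```python
import numpy as np

def knn_impute(genos, sample, marker, k=5):
    """Toy kNN genotype imputation. genos is a (samples x markers)
    array of 0/1/2 genotype calls, with -1 marking missing data.
    Finds the k samples most similar to `sample` over shared called
    markers and imputes the missing call at `marker` by averaging
    their genotypes."""
    target = genos[sample]
    dists = []
    for i, row in enumerate(genos):
        if i == sample or row[marker] < 0:
            continue                      # candidate must have a call here
        shared = (target >= 0) & (row >= 0)
        shared[marker] = False            # don't use the site being imputed
        if not shared.any():
            continue
        d = np.abs(target[shared] - row[shared]).mean()
        dists.append((d, row[marker]))
    dists.sort(key=lambda t: t[0])
    votes = [g for _, g in dists[:k]]
    return int(round(np.mean(votes)))
```

Note that nothing here assumes marker order, which is the property that makes this family of methods usable when no genetic or physical map exists.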
A Gauss-Seidel Iteration Scheme for Reference-Free 3-D Histological Image Reconstruction
Daum, Volker; Steidl, Stefan; Maier, Andreas; Köstler, Harald; Hornegger, Joachim
2015-01-01
Three-dimensional (3-D) reconstruction of histological slice sequences offers great benefits in the investigation of different morphologies. It features very high resolution, still unmatched by in vivo 3-D imaging modalities, and tissue staining further enhances visibility and contrast. One important step during reconstruction is the reversal of slice deformations introduced during histological slice preparation, a process also called image unwarping. Most methods use an external reference, or rely on conservative stopping criteria during the unwarping optimization to prevent straightening of naturally curved morphology. Our approach builds on the observation that the unwarping problem is a superposition of low-frequency anatomy and high-frequency errors. We present an iterative scheme that transfers the ideas of the Gauss-Seidel method to image stacks to separate the anatomy from the deformation. In particular, the scheme is universally applicable without restriction to a specific unwarping method, and uses no external reference. The deformation artifacts are effectively reduced in the resulting histology volumes, while the natural curvature of the anatomy is preserved. The validity of our method is shown on synthetic data, simulated histology data using a CT data set, and real histology data. In the case of the simulated histology, where the ground truth was known, the mean Target Registration Error (TRE) between the unwarped and original volume could be reduced to less than 1 pixel on average after 6 iterations of our proposed method. PMID:25312918
Dynamic balancing of dual-rotor system with very little rotating speed difference.
Yang, Jian; He, Shi-zheng; Wang, Le-qin
2003-01-01
Unbalanced vibration in dual-rotor rotating machinery was studied with numerical simulations and experiments. A new method is proposed to separate the vibration signals of the inner and outer rotors in a system with very little difference in rotating speeds. The magnitudes and phases of unbalance defects can be obtained directly by sampling the vibration signal synchronized with a reference signal. The balancing process is completed using the reciprocal influence coefficients of the inner and outer rotors. Results showed the advantage of this method for a dual-rotor system compared with conventional balancing.
DFTB Parameters for the Periodic Table: Part 1, Electronic Structure.
Wahiduzzaman, Mohammad; Oliveira, Augusto F; Philipsen, Pier; Zhechkov, Lyuben; van Lenthe, Erik; Witek, Henryk A; Heine, Thomas
2013-09-10
A parametrization scheme for the electronic part of the density-functional based tight-binding (DFTB) method that covers the periodic table is presented. A semiautomatic parametrization scheme has been developed that uses Kohn-Sham energies and band structure curvatures of real and fictitious homoatomic crystal structures as reference data. A confinement potential is used to tighten the Kohn-Sham orbitals, which includes two free parameters that are used to optimize the performance of the method. The method is tested on more than 100 systems and shows excellent overall performance.
Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV
NASA Astrophysics Data System (ADS)
Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.
2011-04-01
When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. 
This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.
Fenton, Tanis R; Anderson, Diane; Groh-Wargo, Sharon; Hoyos, Angela; Ehrenkranz, Richard A; Senterre, Thibault
2018-05-01
To examine how well growth velocity recommendations for preterm infants fit with current growth references: Fenton 2013, Olsen 2010, INTERGROWTH 2015, and the World Health Organization Growth Standard 2006. The Average (2-point), Exponential (2-point), and Early (1-point) method weight gains were calculated for 1-, 4-, 8-, 12-, and 16-week time periods. The growth references' weekly velocities (g/kg/d, gram/day, and cm/week) were illustrated graphically with the frequently quoted 15 g/kg/d, 10-30 gram/day, and 1 cm/week rates superimposed. The 15 g/kg/d and 1 cm/week growth velocity rates were calculated from 24-50 weeks and superimposed on the Fenton and Olsen preterm growth charts. The Average and Exponential g/kg/d estimates showed close agreement at all ages (range 5.0-18.9 g/kg/d), while the Early method yielded values as high as 41 g/kg/d. All 3 preterm growth references were similar to the 15 g/kg/d rate at 34 weeks, but rates were higher before and lower after this age. For gram/day, the growth references changed from 10 to 30 grams/day over 24-33 weeks. Head growth rates generally fit the 1 cm/week velocity for 23-30 weeks, and length growth rates fit for 37-40 weeks. The calculated g/kg/d curves deviated from the growth charts, first downward, then steeply crossing the median curves near term. Human growth is not constant through gestation and early infancy. The frequently quoted 15 g/kg/d, 10-30 gram/day, and 1 cm/week rates only fit current growth references for limited time periods. Rates of 15-20 g/kg/d (calculated using the average or exponential methods) are a reasonable goal for infants of 23-36 weeks, but not beyond. Copyright © 2017 Elsevier Inc. All rights reserved.
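The three g/kg/d calculation methods named above can be written out explicitly. These are the formulas as commonly defined in the preterm growth velocity literature (weights in grams, interval in days); the exact variants used in this study are an assumption, and the numbers in the usage note are illustrative.

```python
import math

def avg_gkd(w1_g, w2_g, days):
    """Average (2-point) method: daily gain divided by the mean of the
    start and end weights, expressed per kg."""
    return (w2_g - w1_g) / days / ((w1_g + w2_g) / 2.0) * 1000.0

def exp_gkd(w1_g, w2_g, days):
    """Exponential (2-point) method: assumes exponential growth
    between the two weights, so the rate is 1000 * ln(w2/w1) / days."""
    return 1000.0 * math.log(w2_g / w1_g) / days

def early_gkd(w1_g, w2_g, days):
    """'Early' (1-point) method: daily gain divided by the starting
    weight only, which inflates the rate over long intervals."""
    return (w2_g - w1_g) / days / w1_g * 1000.0
```

For a 160 g gain over 10 days from 1000 g, the Average and Exponential methods give ~14.8 g/kg/d while the Early method gives 16.0 g/kg/d, illustrating why the abstract reports close agreement for the first two and inflated values for the third.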
NASA Astrophysics Data System (ADS)
Dong, Fang
1999-09-01
The research described in this dissertation concerns the characterization of tissue microstructure using a system-independent spatial autocorrelation function (SAF). The function was determined using a reference phantom method, which employed a well-defined "point-scatterer" reference phantom to account for instrumental factors. SAFs were estimated for several tissue-mimicking (TM) phantoms and fresh dog livers. Both phantom tests and in vitro dog liver measurements showed that the reference phantom method is relatively simple and fairly accurate, provided the bandwidth of the measurement system is sufficient for the size of the scatterers involved in the scattering process. Implementation of this method in a clinical scanner requires that distortions from the patient's body wall be properly accounted for. SAFs were estimated for two phantoms with body-wall-like distortions. The experimental results demonstrated that body wall distortions have little effect if echo data are acquired from a large scattering volume. One interesting application of the SAF is to form a "scatterer size image". The scatterer size image may help provide diagnostic tools for diseases in which the tissue microstructure differs from normal. Another method, the BSC method, uses information contained in the frequency dependence of the backscatter coefficient to estimate the scatterer size. The SAF technique produced accurate scatterer size images of homogeneous TM phantoms, and the BSC method was capable of generating accurate size images for heterogeneous phantoms. In the scatterer size image of dog kidneys, the contrast-to-noise ratio (CNR) between renal cortex and medulla was improved dramatically compared to the gray-scale image. The effect of nonlinear propagation was investigated using a custom-designed phantom with an overlying TM fat layer. The results showed that the correlation length decreased as the transmitting power increased.
The measurement results support the assumption that nonlinear propagation generates harmonic energy and causes underestimation of scatterer diameters. Nonlinear propagation can be further enhanced by materials with a high B/A value, a parameter that characterizes the degree of nonlinearity. Nine versions of TM fat and non-fat materials were measured for their B/A values using a new measurement technique, the "simplified finite amplitude insertion substitution" (SFAIS) method.
The influence of sex difference on self-reference effects in a male-dominated culture.
Song, Xuan; Shang, Rui; Bi, Qi; Zhang, Xin; Wu, Yanhong
2012-10-01
52 secondary school students from the Chaoshan, China, area, where males are highly valued, were examined for self-reference, mother-reference, and father-reference effects. Because the father is the primary role model in Chaoshan culture, it was predicted that male participants would demonstrate a father-reference effect while females would show a mother-reference effect. The results confirmed that females showed significant self-, mother-, and father-reference effects in terms of memory performance, while males showed only a significant father-reference effect and a marginally significant self-reference effect. This study highlights the importance of researching subcultures such as the Chaoshan subculture to gain a comprehensive understanding of self-construct.
Amount of Postcue Encoding Predicts Amount of Directed Forgetting
ERIC Educational Resources Information Center
Pastotter, Bernhard; Bauml, Karl-Heinz
2010-01-01
In list-method directed forgetting, participants are cued to intentionally forget a previously studied list (List 1) before encoding a subsequently presented list (List 2). Compared with remember-cued participants, forget-cued participants typically show impaired recall of List 1 and improved recall of List 2, referred to as List 1 forgetting and…
Aravena-Román, Max; Harnett, Gerald B.; Riley, Thomas V.; Inglis, Timothy J. J.; Chang, Barbara J.
2011-01-01
Genotypic characterization of 215 Aeromonas strains (143 clinical, 52 environmental, and 20 reference strains) showed that Aeromonas aquariorum (60 strains, 30.4%) was the most frequently isolated species in clinical and water samples and could be misidentified as Aeromonas hydrophila by phenotypic methods. PMID:21697316
ERIC Educational Resources Information Center
Transler, Catherine; Eilander, Ans; Mitchell, Siobhan; van de Meer, Nelly
2010-01-01
Objectives: To review the impact of polyunsaturated fatty acids (PUFA) in reducing ADHD symptoms in children. Methods: Peer-reviewed experimental literature published from 1980 to May 2009 was consulted (Psychinfo, Medline, and resulting reference lists). Results: Placebo-controlled studies with ADHD or hyperactive children show no effects on…
Introduction: Links between Social Interaction and Executive Function
ERIC Educational Resources Information Center
Lewis, Charlie; Carpendale, Jeremy I. M.
2009-01-01
The term executive function is used increasingly within developmental psychology and is often taken to refer to unfolding brain processes. We trace the origins of research on executive function to show that the link with social interaction has a long history. We suggest that a recent frenzy of research exploring methods for studying individual…
Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing
NASA Astrophysics Data System (ADS)
Ou, Meiying; Li, Shihua; Wang, Chaoli
2013-12-01
This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interactions. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and using a finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. A rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.
Head Circumference Charts for Turkish Children Aged Five to Eighteen Years
KARA, Bülent; ETİLER, Nilay; AYDOĞAN UNCUOĞLU, Ayşen; MARAŞ GENÇ, Hülya; ULAK GÜMÜŞLÜ, Esen; GÖKÇAY, Gülbin; FURMAN, Andrezej
2016-01-01
Introduction Most head circumference growth references are useful during the first years of life, but they are also useful for older children when screening for developmental, neurological, and genetic disorders. We aimed to develop head circumference growth reference charts for age, height, and waist circumference for Turkish children aged 5–18 years. Methods Head circumference, height, and waist circumference measurements were obtained from 5079 students aged 5–18 years from İzmit, Kocaeli Province, Turkey. The LMS method was used to construct reference centile curves. Results Head circumference measurements were strongly correlated with height (r=0.74), weight (r=0.76), and waist circumference (r=0.68). The mean head circumference values for boys were larger than those for girls at all ages. Compared with data from the United States, the World Health Organization, and other studies from Turkey, our data showed a decrease in head circumference at all ages for both sexes. Conclusion Local growth charts can be used to evaluate head circumference growth in older Turkish children and adolescents. PMID:28360767
Research on filter’s parameter selection based on PROMETHEE method
NASA Astrophysics Data System (ADS)
Zhu, Hui-min; Wang, Hang-yu; Sun, Shi-yan
2018-03-01
The selection of filter parameters in target recognition was studied in this paper. The PROMETHEE method was applied to the optimization problem of Gabor filter parameter decisions, and a correspondence model relating the elements of the two methods was established. Taking the identification of a military target as an example, the filter parameter decision problem was simulated and calculated by PROMETHEE. The results showed that using the PROMETHEE method for the selection of filter parameters is more scientific, as it avoids the human disturbance introduced by expert-judgment and empirical methods. The method can provide a reference for deciding the parameter configuration scheme of the filter.
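Since the abstract does not spell out the computation, here is a minimal sketch of PROMETHEE II net-flow ranking, the outranking core such a parameter decision would rest on. The "usual" (0/1) preference function is an assumption for brevity; real applications choose a preference function per criterion.

```python
import numpy as np

def promethee_ii(scores, weights):
    """Minimal PROMETHEE II: rank alternatives by net outranking flow.
    scores is (alternatives x criteria), higher is better; weights are
    criterion weights. Uses the 'usual' preference function: preference
    is 1 if a strictly beats b on a criterion, else 0."""
    n, _ = scores.shape
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalize criterion weights
    phi = np.zeros(n)                      # net flows
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pref_ab = ((scores[a] > scores[b]) * w).sum()  # pi(a, b)
            pref_ba = ((scores[b] > scores[a]) * w).sum()  # pi(b, a)
            phi[a] += pref_ab - pref_ba
    return phi / (n - 1)                   # higher phi = preferred alternative
```

For filter parameter selection, each row of `scores` would be one candidate parameter set evaluated on recognition criteria, and the highest net flow picks the configuration.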
Comparative Accuracy of Facial Models Fabricated Using Traditional and 3D Imaging Techniques.
Lincoln, Ketu P; Sun, Albert Y T; Prihoda, Thomas J; Sutton, Alan J
2016-04-01
The purpose of this investigation was to compare the accuracy of facial models fabricated using facial moulage impression methods with three-dimensional printed (3DP) fabrication methods using soft tissue images obtained from cone beam computed tomography (CBCT) and 3D stereophotogrammetry (3D-SPG) scans. A reference phantom model was fabricated using a 3D-SPG image of a human control form with ten fiducial markers placed on common anthropometric landmarks. This image was converted into the investigation control phantom model (CPM) using 3DP methods. The CPM was attached to a camera tripod for ease of image capture. Three CBCT and three 3D-SPG images of the CPM were captured. The DICOM and STL files from the three 3dMD (3D-SPG) and three CBCT scans were imported for 3DP, and six test models were made. Reversible hydrocolloid and dental stone were used to make three facial moulages of the CPM, and the impressions/casts were poured in type IV gypsum dental stone. A coordinate measuring machine (CMM) was used to measure the distances between each of the ten fiducial markers. Each measurement was made using one point as a static reference to the other nine points. The same measuring procedures were performed on all specimens. All measurements were compared between specimens and the control. The data were analyzed using ANOVA and Tukey pairwise comparison of the raters, methods, and fiducial markers. The ANOVA multiple comparisons showed a significant difference among the three methods (p < 0.05). Further, the interaction of methods versus fiducial markers also showed a significant difference (p < 0.05). The CBCT and facial moulage methods showed the greatest accuracy. 3DP models fabricated using 3D-SPG showed a statistical difference compared with the models fabricated using the traditional method of facial moulage and the 3DP models fabricated from CBCT imaging.
3DP models fabricated using 3D-SPG were less accurate than the CPM and models fabricated using facial moulage and CBCT imaging techniques. © 2015 by the American College of Prosthodontists.
10 CFR 434.505 - Reference building method.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 3 2013-01-01 2013-01-01 false Reference building method. 434.505 Section 434.505 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.505 Reference building method. 505...
10 CFR 434.505 - Reference building method.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false Reference building method. 434.505 Section 434.505 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.505 Reference building method. 505.1...
10 CFR 434.505 - Reference building method.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 3 2012-01-01 2012-01-01 false Reference building method. 434.505 Section 434.505 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.505 Reference building method. 505...
10 CFR 434.505 - Reference building method.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Reference building method. 434.505 Section 434.505 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.505 Reference building method. 505...
10 CFR 434.505 - Reference building method.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Reference building method. 434.505 Section 434.505 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.505 Reference building method. 505...
Riffelmann, M.; Thiel, K.; Schmetz, J.; Wirsing von Koenig, C. H.
2010-01-01
Measuring antibodies to Bordetella pertussis antigens is mostly done by enzyme-linked immunosorbent assays (ELISAs). We compared the performance of ELISA kits that were commercially available in Germany. Eleven measured IgG antibodies, and nine measured IgA antibodies. An in-house ELISA with purified antigens served as a reference method. Samples included two WHO reference preparations, the former Food and Drug Administration (FDA)/Center for Biologics Evaluation and Research (CBER) reference preparations, serum samples from patients with clinically suspected pertussis, and serum samples from patients having received a combined tetanus, diphtheria, and pertussis (Tdap) vaccination. Kits using pertussis toxin (PT) as an antigen showed linearity compared to the WHO Reference preparation (r2 between 0.82 and 0.99), and these kits could quantify antibodies according to the reference preparation. ELISA kits using mixed antigens showed no linear correlation to the reference preparations. Patient results were compared to results of in-house ELISAs using a dual cutoff of either ≥100 IU/ml anti-PT IgG or ≥40 IU/ml anti-PT IgG together with ≥12 IU/ml anti-PT IgA. The sensitivities of kits measuring IgG antibodies ranged between 0.84 and 1.00. The specificities of kits using PT as an antigen were between 0.81 and 0.93. The specificities of kits using mixed antigens were between 0.51 and 0.59 and were thus not acceptable. The sensitivities of kits measuring IgA antibodies ranged between 0.53 and 0.73, and the specificities were between 0.67 and 0.94, indicating that IgA antibodies may be of limited diagnostic value. Our data suggest that ELISAs should use purified PT as an antigen and be standardized to the 1st International Reference preparation. PMID:20943873
A Flexile and High Precision Calibration Method for Binocular Structured Light Scanning System
Yuan, Jianying; Wang, Qiong; Li, Bailin
2014-01-01
3D (three-dimensional) structured light scanning systems are widely used in the fields of reverse engineering, quality inspection, and so forth. Camera calibration is the key to scanning precision. Currently, a finely processed 2D (two-dimensional) or 3D calibration reference object is usually required for high calibration precision; such objects are difficult to handle and expensive. In this paper, a novel calibration method is proposed using a scale bar and some artificial coded targets placed randomly in the measuring volume. The principle of the proposed method is based on hierarchical self-calibration and bundle adjustment. Initial intrinsic parameters are obtained from the images. Initial extrinsic parameters in projective space are estimated with the factorization method and then upgraded to Euclidean space using the orthogonality of the rotation matrix and the rank-3 constraint on the absolute quadric. Finally, all camera parameters are refined through bundle adjustment. Real experiments show that the proposed method is robust and achieves the same precision level as results obtained with a delicate artificial reference object, while the hardware cost is very low compared with current calibration methods used in 3D structured light scanning systems. PMID:25202736
[Modernized study on eye's signs of blood-stasis syndrome].
Wu, Rui; Xie, Jian-xiang; Zhao, Feng-da
2011-03-01
To derive a computerized formula for diagnosing eye signs of blood-stasis syndrome (BSS), and to improve on previous naked-eye diagnostic methods. The formula was created by quantitatively detecting and analyzing the changes in eye signs in 544 patients (261 non-BSS and 283 BSS), using the computer color-scale principle. The sensitivity, specificity, and accuracy of the formula were then verified in 382 patients (97 non-BSS and 285 BSS). The computerized integral was compared with the naked-eye integral, and the normal reference value was calculated with percentiles. The various observed indices of eye signs were positively correlated with BSS. The specificity of the computerized method was 83.5%, the diagnostic sensitivity 89.8%, the accuracy 88.2%, and the correct index 0.733. Comparisons between the computerized integral method and the naked-eye integral method showed significant differences in non-BSS patients and in BSS patients of various degrees (mild, moderate, and severe) (P < 0.01). The reference value of the naked-eye method was below 15. The computerized formula for eye signs has higher specificity and sensitivity in the diagnosis of BSS, while the naked-eye integral method also proves useful.
NASA Astrophysics Data System (ADS)
Xu, Jing; Liu, Xiaofei; Wang, Yutian
2016-08-01
Parallel factor analysis (PARAFAC) is a widely used method for extracting qualitative and quantitative information about analytes of interest from fluorescence excitation-emission matrices containing unknown components. Large-amplitude scattering influences the results of parallel factor analysis, and many methods for eliminating scattering have been proposed, each with its own advantages and disadvantages. Here, the combination of symmetrical subtraction and interpolated values is discussed, where combination refers both to combining results and to combining methods. Nine methods were compared. The results show that combining results yields better concentration predictions for all components.
Generalized equation of state for refrigerants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Y.; Sonntag, R.E.; Borgnakke, C.
1995-08-01
A new four-parameter generalized equation of state with three reference fluids has been developed for predicting thermodynamic properties of the methane- and ethane-series refrigerants. The four chosen characteristic parameters are critical temperature, critical pressure, acentric factor, and the polarity factor proposed in this work. The three selected reference fluids are argon, n-butane and 1,1-difluoroethane (R-152a). When the results of this work are compared with the refrigerant experimental data, they show significant improvement over Lee and Kesler (1975) and Wu and Stiel (1985). If the characteristic parameters of the refrigerants of interest are not available, an estimation method based on the group contribution method is given. The ideal vapor-compression refrigeration cycle was studied using the newly developed generalized equation of state to verify the accuracy of this work.
Chromý, Vratislav; Vinklárková, Bára; Šprongl, Luděk; Bittová, Miroslava
2015-01-01
We found previously that albumin-calibrated total protein in certified reference materials causes unacceptable positive bias in analysis of human sera. The simplest way to cure this defect is the use of human-based serum/plasma standards calibrated by the Kjeldahl method. Such standards, commutative with serum samples, will compensate for bias caused by lipids and bilirubin in most human sera. To find a suitable primary reference procedure for total protein in reference materials, we reviewed Kjeldahl methods adopted by laboratory medicine. We found two methods recommended for total protein in human samples: an indirect analysis based on total Kjeldahl nitrogen corrected for its nonprotein nitrogen and a direct analysis made on isolated protein precipitates. The methods found will be assessed in a subsequent article.
HEAD CIRCUMFERENCE REFERENCES FOR SCHOOL AGE CHILDREN IN WESTERN ROMANIA.
Chirita-Emandi, Adela; Doros, Gabriela; Simina, Iulia Jurca; Gafencu, Mihai; Puiu, Maria
2015-01-01
To provide head circumference references for school-aged children in western Romania and compare them with references from other European countries. A total of 2742 children aged 6-19 years from Timis county were examined by medical students between February 2010 and June 2011. Head circumference references were constructed by Cole's LMS method with LMSChartMaker software. The Romanian 3rd, 50th and 97th percentiles for head circumference were compared with recent references from Belgium and Germany. Boys generally show significantly larger head circumferences than girls at any age. Head circumference increments between 6 and 19 years are < 1 cm/year and decrease with increasing age. In girls, adult head circumference is reached at the age of 16 years, whereas in boys head circumference growth continues slowly until 18 years. The comparison of Romanian head percentiles with those from Belgium and Germany revealed a smaller head circumference in Romanian children (both girls and boys), which could be explained by the taller stature of children in Germany and Belgium compared with Romania.
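Cole's LMS method, used above to construct the references, summarizes each age group by a Box-Cox power (L), a median (M) and a coefficient of variation (S); any centile then follows from the corresponding normal z-score. A minimal sketch (the parameter values in the example are illustrative, not the Romanian data):

```python
import math

def lms_centile(L, M, S, z):
    """Centile value from Cole's LMS parameters at normal z-score z.
    Uses the limiting form M * exp(S*z) when L -> 0."""
    if abs(L) < 1e-12:
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)
```

For example, the 97th centile corresponds to z ≈ 1.881 and the 3rd to z ≈ -1.881.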
Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György
2018-01-01
Background: Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods: Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision, and Method 2 is based on the Microsoft Excel formula NORMINV, including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results: Method 2 gives the correct results, with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion: The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
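Method 2's NORMINV-style computation can be reproduced with any Gaussian CDF. The sketch below works in normalized units (reference population with zero mean and unit SD, 95% limits at ±1.96; both are illustrative assumptions, not the paper's exact setup) and returns the fraction of reference individuals falling outside each limit for a given analytical bias and added imprecision:

```python
import math

def fraction_outside(bias, imprecision, limit=1.96):
    """Fractions of results outside the upper and lower reference limits
    when analytical bias and imprecision (both in units of the reference
    population SD) are added to a standard-normal reference distribution."""
    def phi(x):  # standard normal CDF via the error function
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    sd = math.sqrt(1.0 + imprecision ** 2)
    upper = 1.0 - phi((limit - bias) / sd)
    lower = phi((-limit - bias) / sd)
    return upper, lower
```

With zero bias and imprecision, about 2.5% falls outside each limit; an analytical performance specification is then the set of (bias, imprecision) pairs keeping each fraction at or below the chosen target (4.4% in the study).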
Stochastic model search with binary outcomes for genome-wide association studies.
Russu, Alberto; Malovini, Alberto; Puca, Annibale A; Bellazzi, Riccardo
2012-06-01
The spread of case-control genome-wide association studies (GWASs) has stimulated the development of new variable selection methods and predictive models. We introduce a novel Bayesian model search algorithm, Binary Outcome Stochastic Search (BOSS), which addresses the model selection problem when the number of predictors far exceeds the number of binary responses. Our method is based on a latent variable model that links the observed outcomes to the underlying genetic variables. A Markov Chain Monte Carlo approach is used for model search and to evaluate the posterior probability of each predictor. BOSS is compared with three established methods (stepwise regression, logistic lasso, and elastic net) in a simulated benchmark. Two real case studies are also investigated: a GWAS on the genetic bases of longevity, and the type 2 diabetes study from the Wellcome Trust Case Control Consortium. Simulations show that BOSS achieves higher precisions than the reference methods while preserving good recall rates. In both experimental studies, BOSS successfully detects genetic polymorphisms previously reported to be associated with the analyzed phenotypes. BOSS outperforms the other methods in terms of F-measure on simulated data. In the two real studies, BOSS successfully detects biologically relevant features, some of which are missed by univariate analysis and the three reference techniques. The proposed algorithm is an advance in the methodology for model selection with a large number of features. Our simulated and experimental results showed that BOSS proves effective in detecting relevant markers while providing a parsimonious model.
NASA Astrophysics Data System (ADS)
Moise Famien, Adjoua; Janicot, Serge; Delfin Ochou, Abe; Vrac, Mathieu; Defrance, Dimitri; Sultan, Benjamin; Noël, Thomas
2018-03-01
The objective of this paper is to present a new dataset of bias-corrected CMIP5 global climate model (GCM) daily data over Africa. This dataset was obtained using the cumulative distribution function transform (CDF-t) method, a method that has been applied to several regions and contexts but never to Africa. Here CDF-t has been applied over the period 1950-2099 combining Historical runs and climate change scenarios for six variables: precipitation, mean near-surface air temperature, near-surface maximum air temperature, near-surface minimum air temperature, surface downwelling shortwave radiation, and wind speed, which are critical variables for agricultural purposes. WFDEI has been used as the reference dataset to correct the GCMs. Evaluation of the results over West Africa has been carried out on a list of priority user-based metrics that were discussed and selected with stakeholders. It includes simulated yield using a crop model simulating maize growth. These bias-corrected GCM data have been compared with another available dataset of bias-corrected GCMs using WATCH Forcing Data as the reference dataset. The impact of WFD, WFDEI, and also EWEMBI reference datasets has been also examined in detail. It is shown that CDF-t is very effective at removing the biases and reducing the high inter-GCM scattering. Differences with other bias-corrected GCM data are mainly due to the differences among the reference datasets. This is particularly true for surface downwelling shortwave radiation, which has a significant impact in terms of simulated maize yields. Projections of future yields over West Africa are quite different, depending on the bias-correction method used. However all these projections show a similar relative decreasing trend over the 21st century.
Comparison of three commercially available fit-test methods.
Janssen, Larry L; Luinenburg, D Michael; Mullins, Haskell E; Nelson, Thomas J
2002-01-01
American National Standards Institute (ANSI) standard Z88.10, Respirator Fit Testing Methods, includes criteria to evaluate new fit-tests. The standard allows generated aerosol, particle counting, or controlled negative pressure quantitative fit-tests to be used as the reference method to determine acceptability of a new test. This study examined (1) comparability of three Occupational Safety and Health Administration-accepted fit-test methods, all of which were validated using generated aerosol as the reference method; and (2) the effect of the reference method on the apparent performance of a fit-test method under evaluation. Sequential fit-tests were performed using the controlled negative pressure and particle counting quantitative fit-tests and the bitter aerosol qualitative fit-test. Of 75 fit-tests conducted with each method, the controlled negative pressure method identified 24 failures; bitter aerosol identified 22 failures; and the particle counting method identified 15 failures. The sensitivity of each method, that is, agreement with the reference method in identifying unacceptable fits, was calculated using each of the other two methods as the reference. None of the test methods met the ANSI sensitivity criterion of 0.95 or greater when compared with either of the other two methods. These results demonstrate that (1) the apparent performance of any fit-test depends on the reference method used, and (2) the fit-tests evaluated use different criteria to identify inadequately fitting respirators. Although "acceptable fit" cannot be defined in absolute terms at this time, the ability of existing fit-test methods to reject poor fits can be inferred from workplace protection factor studies.
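The sensitivity figure used above (agreement with the reference method in identifying unacceptable fits) is a simple proportion over the reference method's failures. A sketch with hypothetical pass/fail data, where True marks a detected fit failure:

```python
def fit_test_sensitivity(candidate, reference):
    """Fraction of fit failures flagged by the reference method that the
    candidate method also flags (ANSI Z88.10-style sensitivity)."""
    flagged = [c for c, r in zip(candidate, reference) if r]
    return sum(flagged) / len(flagged) if flagged else float('nan')
```

Swapping which method plays the role of `reference` changes the result, which is the paper's central point.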
A simple transformation independent method for outlier definition.
Johansen, Martin Berg; Christensen, Peter Astrup
2018-04-10
Definition and elimination of outliers is a key element for medical laboratories establishing or verifying reference intervals (RIs), especially as inclusion of just a few outlying observations may seriously affect the determination of the reference limits. Many methods have been developed for the definition of outliers. Several of them assume a normal distribution, and data often require transformation before outlier elimination. We have developed a non-parametric, transformation-independent outlier definition. The new method relies on drawing reproducible histograms, using defined bin sizes above and below the median. The method is compared to the method recommended by CLSI/IFCC, which uses Box-Cox transformation (BCT) and Tukey's fences for outlier definition. The comparison is done on eight simulated distributions and an indirect clinical dataset. The comparison on simulated distributions shows that, without added outliers, the recommended method generally defines fewer outliers. However, when outliers are added on one side, the proposed method often produces better results; with outliers on both sides the methods perform equally well. Furthermore, the presence of outliers affects the BCT, and subsequently the limits determined by the currently recommended method, especially in skewed distributions. The proposed outlier definition reproduced current RI limits on clinical data containing outliers. We find our simple transformation-independent outlier detection method to be as good as or better than the currently recommended methods.
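For context, the CLSI/IFCC comparator above applies Tukey's fences after Box-Cox transformation. The fences step alone can be sketched as follows (standard k = 1.5 fences; this is the comparator, not the authors' histogram-based method):

```python
import statistics

def tukey_outliers(data, k=1.5):
    """Values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    q1, _, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if x < low or x > high]
```

In the recommended procedure this would be applied to Box-Cox-transformed data, which is exactly where the paper finds outliers distorting the transformation itself.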
Ahn, Sang Hoon; Chun, Ji-Yong; Shin, Soo-Kyung; Park, Jun Yong; Yoo, Wangdon; Hong, Sun Pyo; Han, Kwang-Hyub
2013-01-01
Background/Aims: Molecular diagnostic methods have enabled the rapid diagnosis of drug-resistance mutations in hepatitis B virus (HBV) and have reduced both unnecessary therapeutic interventions and medical costs. In this study we evaluated the analytical and clinical performance of the HepB Typer-Entecavir kit (GeneMatrix, Korea) in detecting entecavir-resistance-associated mutations. Methods: The HepB Typer-Entecavir kit was evaluated for its limit of detection, interference, cross-reactivity, and precision using HBV reference standards made by diluting high-titer viral stocks in HBV-negative human serum. The performance of the HepB Typer-Entecavir kit in detecting mutations related to entecavir resistance was compared with direct sequencing for 396 clinical samples from 108 patients. Results: Using the reference standards, the detection limit of the HepB Typer-Entecavir kit was found to be as low as 500 copies/mL. No cross-reactivity was observed, and elevated levels of various interfering substances did not adversely affect its analytical performance. The precision test, conducted by repetitive analysis of 2,400 replicates with reference standards at various concentrations, showed 99.9% agreement (2398/2400). The overall concordance rate between the HepB Typer-Entecavir kit and direct sequencing assays in 396 clinical samples was 99.5%. Conclusions: The HepB Typer-Entecavir kit showed high reliability and precision, and sensitivity and specificity comparable to direct sequencing for detecting mutant virus populations in reference and clinical samples. Therefore, this assay would be clinically useful in the diagnosis of entecavir-resistance-associated mutations in chronic hepatitis B. PMID:24459645
[Comparative research on the NIR and MIR micro-imaging of two similar plastic materials].
Wang, Dong; Ma, Zhi-Hong; Zhao, Liu; Pan, Li-Gang; Li, Xiao-Ting; Wang, Ji-Hua
2011-09-01
NIR/MIR micro-imaging supplies not only spectral information but also information on the spatial distribution of the sample, making it superior to traditional NIR/MIR spectroscopic analysis. In the present paper, polyethylene and parafilm, two materials with similar appearances, were taken as the research objects, and their NIR/MIR micro-images were collected. Chemical imaging (CI) and comparative correlation imaging were carried out for the two materials to investigate suitable imaging methods. The results indicated that the differences in CI values between the two materials in the NIR and MIR chemical images were 0.0048 and 0.2548, respectively, for material II, and 0.0026 and 0.3265, respectively, for material I. Clear chemical images were acquired, and the two materials could be differentiated. The correlation imaging results indicated that images in which the NIR/MIR spectra of the two materials served as reference spectra could differentiate the two materials clearly. In the MIR correlation images, the difference between the correlation coefficients of the two materials' MIR spectra against the reference spectrum was more than 0.12, giving a good imaging result, while in NIR correlation imaging even a tiny difference in correlation coefficients was sufficient to produce a clear image and differentiate the two materials. This work provides a reference both for rapid discrimination of the safety of agri-food packaging materials and for NIR/MIR micro-imaging methods for differentiating similar materials.
NASA Astrophysics Data System (ADS)
Tasić, Viša; Jovašević-Stojanović, Milena; Vardoulakis, Sotiris; Milošević, Novica; Kovačević, Renata; Petrović, Jelena
2012-07-01
Accurate monitoring of indoor mass concentrations of particulate matter is very important for health risk assessment, as people in developed countries spend approximately 90% of their time indoors. A direct-reading aerosol monitor, the Turnkey OSIRIS Particle Monitor (Model 2315), and the European reference low-volume sampler LVS3 (Sven/Leckel LVS3) with size-selective inlets for the PM10 and PM2.5 fractions were used to assess the comparability of available optical and gravimetric methods for particulate matter characterization in indoor air. Simultaneous 24-hour samples were collected in an indoor environment over 60 sampling periods in the town of Bor, Serbia. The 24-hour mean PM10 levels from the OSIRIS monitor were well correlated with the LVS3 levels (R2 = 0.87) and did not show statistically significant bias. The 24-hour mean PM2.5 levels from the OSIRIS monitor were moderately correlated with the LVS3 levels (R2 = 0.71) but showed statistically significant bias. The results suggest that the OSIRIS monitor provides sufficiently accurate measurements for PM10, underestimating indoor PM10 concentrations by approximately 12% relative to the reference LVS3 sampler; the accuracy of PM10 measurements could be further improved through empirical adjustment. For the fine fraction, PM2.5, the OSIRIS monitor underestimated indoor concentrations by approximately 63% relative to the reference LVS3 sampler, which could lead to exposure misclassification in health-effects studies relying on PM2.5 measurements collected with this instrument in indoor environments.
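The empirical adjustment suggested above is typically an ordinary least-squares calibration of the optical readings against the gravimetric reference. A stdlib sketch (the readings in the example are hypothetical, not the Bor data):

```python
def ols_calibration(optical, reference):
    """Least-squares slope and intercept mapping optical PM readings
    onto co-located gravimetric reference values."""
    n = len(optical)
    mx = sum(optical) / n
    my = sum(reference) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(optical, reference))
    sxx = sum((x - mx) ** 2 for x in optical)
    slope = sxy / sxx
    return slope, my - slope * mx
```

A new optical reading v is then corrected as slope * v + intercept.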
Toward Automatic Georeferencing of Archival Aerial Photogrammetric Surveys
NASA Astrophysics Data System (ADS)
Giordano, S.; Le Bris, A.; Mallet, C.
2018-05-01
Images from archival aerial photogrammetric surveys are a unique and relatively unexplored means to chronicle 3D land-cover changes over the past 100 years. They provide a relatively dense temporal sampling of the territories with very high spatial resolution. Such time-series image analysis is a mandatory baseline for a large variety of long-term environmental monitoring studies. The current bottleneck for accurate comparison between epochs is the fine georeferencing step. No fully automatic method has been proposed yet, and existing studies are rather limited in terms of area and number of dates. The state of the art shows that the major challenge is the identification of ground references: cartographic coordinates and their positions in the archival images. This task is performed manually and is extremely time-consuming. This paper proposes a photogrammetric approach and argues that the 3D information that can be computed is the key to full automation. Its original idea lies in a two-step approach: (i) the computation of a coarse absolute image orientation; (ii) the use of the coarse Digital Surface Model (DSM) information for automatic absolute image orientation. It relies only on a recent orthoimage+DSM, used as master reference for all epochs. The coarse orthoimage, compared with such a reference, allows the identification of dense ground references, and the coarse DSM provides their positions in the archival images. Results on two areas and five dates show that this method is compatible with long and dense archival aerial image series. Satisfactory planimetric and altimetric accuracies are reported, with variations depending on the ground sampling distance of the images and the location of the Ground Control Points.
Spectrum-to-Spectrum Searching Using a Proteome-wide Spectral Library*
Yen, Chia-Yu; Houel, Stephane; Ahn, Natalie G.; Old, William M.
2011-01-01
The unambiguous assignment of tandem mass spectra (MS/MS) to peptide sequences remains a key unsolved problem in proteomics. Spectral library search strategies have emerged as a promising alternative for peptide identification, in which MS/MS spectra are directly compared against a reference library of confidently assigned spectra. Two problems relate to library size. First, reference spectral libraries are limited to the rediscovery of previously identified peptides and are not applicable to new peptides, because of their incomplete coverage of the human proteome. Second, problems arise when searching a spectral library the size of the entire human proteome. We observed that traditional dot product scoring methods do not scale well with spectral library size, showing reduced sensitivity as library size increases. We show that this problem can be addressed by optimizing scoring metrics for spectrum-to-spectrum searches with large spectral libraries. MS/MS spectra for the 1.3 million predicted tryptic peptides in the human proteome are simulated using a kinetic fragmentation model (MassAnalyzer version 2.1) to create a proteome-wide simulated spectral library. Searches of the simulated library increase MS/MS assignments by 24% compared with Mascot when using probabilistic and rank-based scoring methods. The proteome-wide coverage of the simulated library leads to an 11% increase in unique peptide assignments compared with parallel searches of a reference spectral library. Further improvement is attained when reference spectra and simulated spectra are combined into a hybrid spectral library, yielding 52% more MS/MS assignments than Mascot searches. Our study demonstrates the advantages of using probabilistic and rank-based scores to improve the performance of spectrum-to-spectrum search strategies. PMID:21532008
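The traditional dot product score mentioned above treats each spectrum as an intensity vector over m/z bins. A minimal sketch (square-root intensity weighting is one common stabilizing choice, not necessarily the exact transform used in the study):

```python
import math

def spectral_dot(a, b):
    """Normalized dot product between two spectra given as
    {mz_bin: intensity} dictionaries; 1.0 means identical direction."""
    keys = set(a) | set(b)
    va = [math.sqrt(a.get(k, 0.0)) for k in keys]
    vb = [math.sqrt(b.get(k, 0.0)) for k in keys]
    num = sum(x * y for x, y in zip(va, vb))
    den = math.sqrt(sum(x * x for x in va) * sum(y * y for y in vb))
    return num / den if den else 0.0
```

The paper's observation is that ranking candidates by this raw score degrades as the library grows, motivating probabilistic and rank-based alternatives.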
Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng
2017-01-01
Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires many reference substances to identify chromatographic peaks accurately, but reference substances are costly. The relative retention (RR) method has therefore been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that the RR is difficult to reproduce on different columns, owing to the error between measured and predicted retention times (t R) in some cases. It is therefore useful to develop an alternative, simple method for accurate prediction of t R. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method comprises three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The t R of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated with two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust across different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
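The two-point prediction step of LCTRS reduces to a straight line through the observed retention times of the two reference substances plotted against their standard retention times; the validation and sequential-matching steps are omitted in this sketch, and the numbers in the example are hypothetical:

```python
def lctrs_predict(std1, obs1, std2, obs2, std_query):
    """Predict the local retention time of a query compound from its
    standard retention time, via the line through two reference
    substances measured on the local column."""
    slope = (obs2 - obs1) / (std2 - std1)
    intercept = obs1 - slope * std1
    return slope * std_query + intercept
```

If the local column runs uniformly 10% slower than the standard column, a compound with a standard t R of 15 min is predicted at 16.5 min.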
Reiner, Jessica L; O'Connell, Steven G; Butt, Craig M; Mabury, Scott A; Small, Jeff M; De Silva, Amila O; Muir, Derek C G; Delinsky, Amy D; Strynar, Mark J; Lindstrom, Andrew B; Reagen, William K; Malinsky, Michelle; Schäfer, Sandra; Kwadijk, Christiaan J A F; Schantz, Michele M; Keller, Jennifer M
2012-11-01
Standard reference materials (SRMs) are homogeneous, well-characterized materials used to validate measurements and improve the quality of analytical data. The National Institute of Standards and Technology (NIST) has a wide range of SRMs that have mass fraction values assigned for legacy pollutants. These SRMs can also serve as test materials for method development, method validation, and measurement for contaminants of emerging concern. Because inter-laboratory comparison studies have revealed substantial variability of measurements of perfluoroalkyl acids (PFAAs), future analytical measurements will benefit from determination of consensus values for PFAAs in SRMs to provide a means to demonstrate method-specific performance. To that end, NIST, in collaboration with other groups, has been measuring concentrations of PFAAs in a variety of SRMs. Here we report levels of PFAAs and perfluorooctane sulfonamide (PFOSA) determined in four biological SRMs: fish tissue (SRM 1946 Lake Superior Fish Tissue, SRM 1947 Lake Michigan Fish Tissue), bovine liver (SRM 1577c), and mussel tissue (SRM 2974a). We also report concentrations for three in-house quality-control materials: beluga whale liver, pygmy sperm whale liver, and white-sided dolphin liver. Measurements in SRMs show an array of PFAAs, with perfluorooctane sulfonate (PFOS) being the most frequently detected. Reference and information values are reported for PFAAs measured in these biological SRMs.
Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering
NASA Astrophysics Data System (ADS)
Jiang, Lu; Piao, Yan
2018-04-01
The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integral imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. First, the depth information of the reference viewpoint image is quickly obtained, with SAD chosen as the similarity measure. The reference image is then layered and the parallax calculated from the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as its high-precision requirements on the depth map and its complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range, with very good rendering speed. On average, the method achieves satisfactory image quality: the SSIM value relative to real viewpoint images reaches 0.9525, the PSNR reaches 38.353, and the image histogram similarity reaches 93.77%.
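SAD (sum of absolute differences), chosen above as the similarity measure for depth estimation, can be sketched as a generic block-matching search between two views (hypothetical parameters; this is not the paper's exact pipeline):

```python
import numpy as np

def sad_disparity(left, right, x, y, block=3, max_disp=16):
    """Best disparity at pixel (x, y) by minimizing the Sum of Absolute
    Differences between a block in the left view and horizontally
    shifted blocks in the right view."""
    h = block // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.int32)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        if x - d - h < 0:
            break
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(np.int32)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

In the paper's setting this per-pixel disparity would feed the depth-based layering step rather than a dense depth map as in DIBR.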
Poisson-Box Sampling algorithms for three-dimensional Markov binary mixtures
NASA Astrophysics Data System (ADS)
Larmier, Coline; Zoia, Andrea; Malvagi, Fausto; Dumonteil, Eric; Mazzolo, Alain
2018-02-01
Particle transport in Markov mixtures can be addressed by the so-called Chord Length Sampling (CLS) methods, a family of Monte Carlo algorithms taking into account the effects of stochastic media on particle propagation by generating on-the-fly the material interfaces crossed by the random walkers during their trajectories. Such methods enable a significant reduction of computational resources as opposed to reference solutions obtained by solving the Boltzmann equation for a large number of realizations of random media. CLS solutions, which neglect correlations induced by the spatial disorder, are faster albeit approximate, and might thus show discrepancies with respect to reference solutions. In this work we propose a new family of algorithms (called 'Poisson Box Sampling', PBS) aimed at improving the accuracy of the CLS approach for transport in d-dimensional binary Markov mixtures. In order to probe the features of PBS methods, we will focus on three-dimensional Markov media and revisit the benchmark problem originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]: for these configurations we will compare reference solutions, standard CLS solutions and the new PBS solutions for scalar particle flux, transmission and reflection coefficients. PBS will be shown to perform better than CLS at the expense of a reasonable increase in computational time.
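The on-the-fly interface generation at the heart of CLS can be illustrated with a 1D, purely absorbing caricature: the distance to the next material interface is drawn from an exponential law with the local mean chord length, in competition with an exponentially sampled collision distance. This is a toy 1D sketch, not the 3D benchmark configurations of the paper:

```python
import random

def cls_transmission(L, lam, sigma, n=20000, seed=0):
    """1D Chord Length Sampling sketch for a purely absorbing binary
    Markov mixture of slab thickness L. lam[i] is the mean chord length
    and sigma[i] the absorption cross-section of material i.
    Returns the estimated transmission probability."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n):
        x, mat, alive = 0.0, rng.randint(0, 1), True
        while x < L and alive:
            d_interface = rng.expovariate(1.0 / lam[mat])
            d_collision = (rng.expovariate(sigma[mat])
                           if sigma[mat] > 0 else float('inf'))
            if d_collision < d_interface and x + d_collision < L:
                alive = False          # absorbed before the next interface
            else:
                x += d_interface       # cross into the other material
                mat = 1 - mat
        if alive:
            transmitted += 1
    return transmitted / n
```

As in CLS proper, the realization of the random medium is never built explicitly; interfaces are generated along each trajectory, which is where the neglect of spatial correlations enters.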
Li, Cheng; Pan, Xinyi; Ying, Kui; Zhang, Qiang; An, Jing; Weng, Dehe; Qin, Wen; Li, Kuncheng
2009-11-01
The conventional phase difference method for MR thermometry suffers from disturbances caused by the presence of lipid protons, motion-induced error, and field drift. A signal model is presented with multi-echo gradient echo (GRE) sequence using a fat signal as an internal reference to overcome these problems. The internal reference signal model is fit to the water and fat signals by the extended Prony algorithm and the Levenberg-Marquardt algorithm to estimate the chemical shifts between water and fat which contain temperature information. A noise analysis of the signal model was conducted using the Cramer-Rao lower bound to evaluate the noise performance of various algorithms, the effects of imaging parameters, and the influence of the water:fat signal ratio in a sample on the temperature estimate. Comparison of the calculated temperature map and thermocouple temperature measurements shows that the maximum temperature estimation error is 0.614 degrees C, with a standard deviation of 0.06 degrees C, confirming the feasibility of this model-based temperature mapping method. The influence of sample water:fat signal ratio on the accuracy of the temperature estimate is evaluated in a water-fat mixed phantom experiment with an optimal ratio of approximately 0.66:1. (c) 2009 Wiley-Liss, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petroccia, H; O'Reilly, S; Bolch, W
Purpose: Radiation-induced cancer effects are well-documented following radiotherapy. Further investigation is needed to more accurately determine a dose-response relationship for late radiation effects. Recent dosimetry studies tend to use representative patients (Taylor 2009) or anthropomorphic phantoms (Wirth 2008) for estimating organ mean doses. In this study, we compare hybrid computational phantoms to patient-specific voxel phantoms to test the accuracy of University of Florida Hybrid Phantom Library (UFHP Library) for historical dose reconstructions. Methods: A cohort of 10 patients with CT images was used to reproduce the data that was collected historically for Hodgkin's lymphoma patients (i.e. caliper measurements and photographs).more » Four types of phantoms were generated to show a range of refinement from reference hybrid-computational phantom to patient-specific phantoms. Each patient is matched to a reference phantom from the UFHP Library based on height and weight. The reference phantom is refined in the anterior/posterior direction to create a ‘caliper-scaled phantom’. A photograph is simulated using a surface rendering from segmented CT images. Further refinement in the lateral direction is performed using ratios from a simulated-photograph to create a ‘photograph and caliper-scaled phantom’; breast size and position is visually adjusted. Patient-specific hybrid phantoms, with matched organ volumes, are generated and show the capabilities of the UF Hybrid Phantom Library. Reference, caliper-scaled, photograph and caliper-scaled, and patient-specific hybrid phantoms are compared with patient-specific voxel phantoms to determine the accuracy of the study. Results: Progression from reference phantom to patient specific hybrid shows good agreement with the patient specific voxel phantoms. 
Each stage of refinement shows an overall trend of improvement in dose accuracy within the study, which suggests that computational phantoms can show improved accuracy in historical dose estimates. Conclusion: Computational hybrid phantoms show promise for improved accuracy within retrospective studies when CTs and other x-ray images are not available.
Xu, Yuanyuan; Zhu, Xianwen; Gong, Yiqin; Xu, Liang; Wang, Yan; Liu, Liwang
2012-08-03
Real-time quantitative reverse transcription PCR (RT-qPCR) is a rapid and reliable method for gene expression studies. Normalization based on reference genes can increase the reliability of this technique; however, recent studies have shown that no single reference gene is universal for all possible experimental conditions. In this study, eight frequently used reference genes were investigated, including Glyceraldehyde-3-phosphate dehydrogenase (GAPDH), Actin2/7 (ACT), Tubulin alpha-5 (TUA), Tubulin beta-1 (TUB), 18S ribosomal RNA (18SrRNA), RNA polymerase-II transcription factor (RPII), Elongation factor 1-b (EF-1b) and Translation elongation factor 2 (TEF2). Expression stability of the candidate reference genes was examined across 27 radish samples, representing a range of tissue types, cultivars, photoperiodic and vernalization treatments, and developmental stages. The eight genes in these sample pools displayed a wide range of Ct values and were variably expressed. Two statistical software packages, geNorm and NormFinder, showed that TEF2, RPII and ACT appeared to be relatively stable and therefore the most suitable for use as reference genes. These results facilitate the selection of desirable reference genes for accurate gene expression studies in radish. Copyright © 2012 Elsevier Inc. All rights reserved.
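The stability ranking produced by geNorm can be sketched with its pairwise-variation measure M: for each candidate gene, M is the average standard deviation of its log2 expression ratios against every other candidate across samples (lower M = more stable). The 4 x 27 expression matrix below is synthetic, with one gene deliberately made noisy; it is not the radish data.

```python
import numpy as np

# Synthetic relative expression: 4 candidate genes x 27 samples.
rng = np.random.default_rng(0)
genes = ["TEF2", "RPII", "ACT", "noisy"]
sigmas = np.array([[0.10], [0.12], [0.15], [0.60]])   # per-gene log-variability
expr = rng.lognormal(mean=0.0, sigma=sigmas, size=(4, 27))

def genorm_m(expr):
    """geNorm-style stability M per gene: mean SD of pairwise log2 ratios."""
    logx = np.log2(expr)
    n = expr.shape[0]
    m = np.empty(n)
    for j in range(n):
        sds = [np.std(logx[j] - logx[k], ddof=1) for k in range(n) if k != j]
        m[j] = np.mean(sds)
    return m

m_values = genorm_m(expr)
ranking = [genes[i] for i in np.argsort(m_values)]    # most stable first
```

The deliberately noisy gene receives the largest M and ranks last, which is the behavior used in the study to discard unstable candidates.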
Antibacterial activity of different honeys against pathogenic bacteria.
Voidarou, C; Alexopoulos, A; Plessas, S; Karapanou, A; Mantzourani, I; Stavropoulou, E; Fotou, K; Tzora, A; Skoufos, I; Bezirtzoglou, E
2011-12-01
To study the antimicrobial activity of honey, 60 samples of various botanical origin were evaluated for their antimicrobial activities against 16 clinical pathogens and their respective reference strains. The microbiological quality of the honeys and the antibiotic susceptibility of the various isolates were also examined. The bioassay applied for determining the antimicrobial effect employs the well-agar diffusion method and the estimation of the minimum active dilution which produces a 1 mm diameter inhibition zone. All honey samples, regardless of their origin (coniferous, citrus, thyme or polyfloral), showed antibacterial activity against the pathogens and their respective reference strains at variable levels. Coniferous and thyme honeys showed the highest activity, with an average minimum dilution of 17.4 and 19.2% (w/v), followed by citrus and polyfloral honeys with 20.8 and 23.8%, respectively. Clinical isolates of Staphylococcus aureus subsp. aureus, Escherichia coli, Salmonella enterica subsp. enterica, Streptococcus pyogenes, Bacillus cereus and Bacillus subtilis were proven to be up to 60% more resistant than their corresponding reference strains, emphasizing the variability in the antibacterial effect of honey and the need for further research. Copyright © 2011 Elsevier Ltd. All rights reserved.
Similarity to the Self Affects Memory for Impressions of Others in Younger and Older Adults
Park, Jung M.; Gutchess, Angela H.
2015-01-01
Objectives. Similarity to the self has been shown to affect memory for impressions in younger adults, suggesting a self-reference effect in person memory. Because older adults show comparable self-reference effects, but prioritize memory for positive over negative information relative to young adults, we examined age differences in self-similarity effects on memory for positive and negative impressions. Method. Younger and older adults formed positive and negative impressions of others differing in the degree of similarity to the self (high, medium, low). Results. For positive impressions, both groups showed enhanced memory for self-similar others relative to dissimilar others, whereas for negative impressions, memory was poorer for those similar to the self. When collapsed across similarity to the self, younger adults remembered negative impressions better than older adults, but interestingly, older adults exhibited a trend for better memory for the positive impressions. Discussion. Results suggest that self-reference effects in impression memory are preserved with age and that older adults exhibit positivity effects in person memory consistent with previous findings. PMID:24389124
Learning to Rank the Severity of Unrepaired Cleft Lip Nasal Deformity on 3D Mesh Data.
Wu, Jia; Tse, Raymond; Shapiro, Linda G
2014-08-01
Cleft lip is a birth defect that results in deformity of the upper lip and nose. Its severity is widely variable and the results of treatment are influenced by the initial deformity. Objective assessment of severity would help to guide prognosis and treatment. However, most assessments are subjective. The purpose of this study is to develop and test quantitative computer-based methods of measuring cleft lip severity. In this paper, a grid-patch based measurement of symmetry is introduced, with which a computer program learns to rank the severity of cleft lip on 3D meshes of human infant faces. Three computer-based methods to define the midfacial reference plane were compared to two manual methods. Four different symmetry features were calculated based upon these reference planes, and evaluated. The result shows that the rankings predicted by the proposed features were highly correlated with the ranking orders provided by experts that were used as the ground truth.
Standard setting: comparison of two methods.
George, Sanju; Haque, M Sayeed; Oyebode, Femi
2006-09-14
The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods, and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice question examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm-reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between the Angoff method and the norm-reference method was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
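The norm-reference rule used in this study (mean minus 1 SD) reduces to a two-line computation. A minimal sketch on synthetic scores, since the actual marks of the 78 students are not reported:

```python
import numpy as np

# Norm-reference standard setting: cut score = cohort mean minus 1 SD.
# The 78 scores are synthetic, so the resulting pass rate is illustrative only.
rng = np.random.default_rng(1)
scores = rng.normal(loc=65.0, scale=10.0, size=78)

cut_score = scores.mean() - scores.std(ddof=1)   # mean minus 1 SD
pass_rate = (scores >= cut_score).mean()
n_pass = int((scores >= cut_score).sum())
```

Because the cut score is tied to the cohort's own distribution, roughly 84% of a normally distributed cohort passes regardless of absolute performance, which is the key contrast with the criterion-referenced Angoff approach.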
Iodine Absorption Cells Purity Testing
Hrabina, Jan; Zucco, Massimo; Philippe, Charles; Pham, Tuan Minh; Holá, Miroslava; Acef, Ouali; Lazar, Josef; Číp, Ondřej
2017-01-06
This article deals with the evaluation of the chemical purity of iodine-filled absorption cells and the optical frequency references used for the frequency locking of laser standards. We summarize the recent trends and progress in absorption cell technology and we focus on methods for iodine cell purity testing. We compare two independent experimental systems based on the laser-induced fluorescence method, showing an improvement of measurement uncertainty by introducing a compensation system reducing unwanted influences. We show the advantages of this technique, which is relatively simple and does not require extensive hardware equipment. As an alternative to the traditionally used methods we propose an approach of hyperfine transitions' spectral linewidth measurement. The key characteristic of this method is demonstrated on a set of testing iodine cells. The relationship between laser-induced fluorescence and transition linewidth methods will be presented as well as a summary of the advantages and disadvantages of the proposed technique (in comparison with traditional measurement approaches). PMID:28067834
Evaluation of Gas-filled Ionization Chamber Method for Radon Measurement at Two Reference Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishikawa, Tetsuo; Tokonami, Shinji; Kobayashi, Yosuke
2008-08-07
For quality assurance, the gas-filled ionization chamber method was tested at two reference facilities for radon calibration: EML (USA) and PTB (Germany). The radon concentrations estimated by the ionization chamber method were in good agreement with the reference radon concentrations provided by both EML and PTB.
Debode, Frederic; Janssen, Eric; Bragard, Claude; Berben, Gilbert
2017-08-01
The presence of genetically modified organisms (GMOs) in food and feed is mainly detected by the use of targets focusing on promoters and terminators. As some genes are frequently used in genetically modified (GM) constructs, they also constitute excellent screening elements and their use is increasing. In this paper we propose a new target for the detection of cry1Ab and cry1Ac genes by real-time polymerase chain reaction (PCR) and pyrosequencing. The specificity, sensitivity and robustness of the real-time PCR method were tested following the recommendations of international guidelines, and the method met the expected performance criteria. This paper also shows how the robustness testing was assessed. This new cry1Ab/Ac method can provide a positive signal with a larger number of GM events than do the other existing methods using double-dye probes. The method permits the analysis of results with less ambiguity than the SYBRGreen method recommended by the European Reference Laboratory (EURL) for GM Food and Feed (GMFF). A pyrosequencing method was also developed to gain additional information from the sequence of the amplicon. This method of sequencing-by-synthesis can determine the sequence between the primers used for PCR. Pyrosequencing showed that the sequences internal to the primers differ depending on the GM event considered, and three different sequences were observed. The sensitivity of the pyrosequencing was tested on reference flours with a low GM content and different copy numbers. Improvements in the pyrosequencing protocol provided correct sequences with 50 copies of the target. Below this copy number, the quality of the sequence was less reliable.
Subtask 4.24 - Field Evaluation of Novel Approach for Obtaining Metal Emission Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlish, John; Laudal, Dennis; Thompson, Jeffrey
2013-12-31
Over the past two decades, emissions of mercury, nonmercury metals, and acid gases from energy generation and chemical production have increasingly become an environmental concern. On February 16, 2012, the U.S. Environmental Protection Agency (EPA) promulgated the Mercury and Air Toxics Standards (MATS) to reduce mercury, nonmercury metals, and HCl emissions from coal-fired power plants. The current reference methods for trace metals and halogens are wet-chemistry methods, EPA Method (M) 29 and M26A, respectively. As a possible alternative to EPA M29 and M26A, the Energy & Environmental Research Center (EERC) has developed a novel multielement sorbent trap (ME-ST) method to be used to sample for trace elements and/or halogens. Testing was conducted at three different power plants, and the results show that for halogens, the ME-ST halogen (ME-ST-H) method did not show any significant bias compared to EPA M26A and appears to be a potential candidate to serve as an alternative to the reference method. For metals, the ME-ST metals (ME-ST-M) method offers a lower detection limit compared to EPA M29 and generally produced comparable data for Sb, As, Be, Cd, Co, Hg, and Se. Both the ME-ST-M and M29 had problems associated with high blanks for Ni, Pb, Cr, and Mn. Although this problem has been greatly reduced through improved trap design and material selection, additional research is still needed to explore possible longer sampling durations and/or selection of lower background materials before the ME-ST-M can be considered as a potential alternative method for all the trace metals listed in MATS.
Oberson, Jean-Marie; Campos-Giménez, Esther; Rivière, Johann; Martin, Frédéric
2018-06-01
In the present manuscript, we describe a fully optimized and validated method suitable to analyse nine compounds (retinyl acetate, retinyl palmitate, retinol, α-tocopherol, α-tocopheryl acetate, cholecalciferol, ergocalciferol, phylloquinone, menaquinone-4) representing the major contributors to the fat-soluble vitamin activity of selected food products (infant formulas, adult nutritionals, infant cereals and mixed meals). Sample preparation involves direct solvent extraction using enzyme-assisted matrix disintegration and methanolic protein precipitation. Direct injection of the extract allows quantification of vitamins A, E and K in only 7 min, while vitamin D is determined after fast derivatization of the extract. Separation is achieved by supercritical fluid chromatography and detection performed by tandem mass spectrometry in positive Atmospheric Pressure Chemical Ionization mode. Results on a Standard Reference Material (SRM 1849a Infant/Adult Nutritional) were not statistically different from reference values. Full validation of the method showed excellent overall performance. Average recovery rate was between 90 and 110% for all vitamins and matrixes. The methodology shows enhanced safety and reduced cost as compared with previously published methods, together with potential for application to more complex matrixes. The full procedure can be easily applied in control laboratories dramatically increasing sample throughput and reducing solvent consumption. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hudoklin, D.; Šetina, J.; Drnovšek, J.
2012-09-01
The measurement of the water-vapor permeation rate (WVPR) through materials is very important in many industrial applications such as the development of new fabrics and construction materials, in the semiconductor industry, packaging, vacuum techniques, etc. The demand for this kind of measurement grows considerably, and thus many different methods for measuring the WVPR have been developed and standardized within numerous national and international standards. However, comparison of existing methods shows a low level of mutual agreement. The objective of this paper is to demonstrate the necessary uncertainty evaluation for WVPR measurements, so as to provide a basis for development of a corresponding reference measurement standard. This paper presents a specially developed measurement setup, which employs a precision dew-point sensor for WVPR measurements on specimens of different shapes. The paper also presents a physical model, which tries to account for both dynamic and quasi-static methods, the common types of WVPR measurements referred to in standards and scientific publications. An uncertainty evaluation carried out according to the ISO/IEC guide to the expression of uncertainty in measurement (GUM) shows the relative expanded (k = 2) uncertainty to be 3.0% for a WVPR of 6.71 mg·h(-1) (corresponding to a permeance of 30.4 mg·m(-2)·day(-1)·hPa(-1)).
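The GUM combination underlying such an expanded uncertainty is mechanical: relative standard uncertainties of independent inputs add in quadrature, and a coverage factor k = 2 gives the expanded value. The component budget below is hypothetical; only the 6.71 mg/h WVPR value comes from the abstract.

```python
import math

# GUM-style combination of independent relative standard uncertainties.
components = {
    "dew-point sensor": 0.008,   # assumed relative standard uncertainties
    "flow stability": 0.009,
    "specimen geometry": 0.007,
}
u_rel = math.sqrt(sum(u ** 2 for u in components.values()))  # combined
U_rel = 2.0 * u_rel              # expanded uncertainty, coverage factor k = 2
wvpr = 6.71                      # measured WVPR in mg/h (from the abstract)
U_abs = U_rel * wvpr             # expanded uncertainty in mg/h
```

With these assumed components the relative expanded uncertainty comes out near the 3% reported, illustrating how a 3.0% figure decomposes into sub-percent input contributions.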
Wittenmeier, Eva; Bellosevich, Sophia; Mauff, Susanne; Schmidtmann, Irene; Eli, Michael; Pestel, Gunther; Noppens, Ruediger R
2015-10-01
Collecting a blood sample is usually necessary to measure hemoglobin levels in children. Especially in small children, noninvasively measuring the hemoglobin level could be extraordinarily helpful, but its precision and accuracy in the clinical environment remain unclear. In this study, noninvasive hemoglobin measurement and blood gas analysis were compared to hemoglobin measurement in a clinical laboratory. In 60 healthy preoperative children (0.2-7.6 years old), hemoglobin was measured using a noninvasive method (SpHb; Radical-7 Pulse Co-Oximeter), a blood gas analyzer (clinical standard, BGAHb; ABL 800 Flex), and a laboratory hematology analyzer (reference method, labHb; Siemens Advia). Agreement between the results was assessed by Bland-Altman analysis and by determining the percentage of outliers. Sixty SpHb measurements, 60 labHb measurements, and 59 BGAHb measurements were evaluated. In 38% of the children, the location of the SpHb sensor had to be changed more than twice for the signal quality to be sufficient. The bias/limits of agreement between SpHb and labHb were -0.65/-3.4 to 2.1 g·dl(-1). Forty-four percent of the SpHb values differed from the reference value by more than 1 g·dl(-1). Age, difficulty of measurement, and the perfusion index (PI) had no influence on the accuracy of SpHb. The bias/limits of agreement between BGAHb and labHb were 1.14/-1.6 to 3.9 g·dl(-1). Furthermore, 66% of the BGAHb values differed from the reference values by more than 1 g·dl(-1). The absolute mean difference between SpHb and labHb (1.1 g·dl(-1)) was smaller than the absolute mean difference between BGAHb and labHb (1.5 g·dl(-1); P = 0.024). Noninvasive measurement of hemoglobin agrees more closely with the reference method than hemoglobin measurement using a blood gas analyzer. However, both methods can show clinically relevant differences from the reference method (ClinicalTrials.gov: NCT01693016). © 2015 John Wiley & Sons Ltd.
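The Bland-Altman analysis used in this study reduces to a mean difference (bias) and limits of agreement at bias ± 1.96 SD of the paired differences. The 60 paired hemoglobin values below are synthetic, seeded so the simulated bias lands near the reported -0.65 g/dl; they are not the study data.

```python
import numpy as np

# Bland-Altman agreement between a test method and a reference method.
rng = np.random.default_rng(2)
ref = rng.normal(12.0, 1.0, size=60)              # reference method (g/dl)
test = ref + rng.normal(-0.65, 0.9, size=60)      # noninvasive method (g/dl)

diff = test - ref
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)        # limits of agreement
outlier_share = (np.abs(diff) > 1.0).mean()       # share differing by > 1 g/dl
```

The `outlier_share` line mirrors the study's second criterion, the percentage of values differing from the reference by more than 1 g/dl.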
Evaluation of constant-Weber-number scaling for icing tests
NASA Technical Reports Server (NTRS)
Anderson, David N.
1996-01-01
Previous studies showed that for conditions simulating an aircraft encountering super-cooled water droplets the droplets may splash before freezing. Other surface effects dependent on the water surface tension may also influence the ice accretion process. Consequently, the Weber number appears to be important in accurately scaling ice accretion. A scaling method which uses a constant-Weber-number approach has been described previously; this study provides an evaluation of this scaling method. Tests are reported on cylinders of 2.5 to 15-cm diameter and NACA 0012 airfoils with chords of 18 to 53 cm in the NASA Lewis Icing Research Tunnel (IRT). The larger models were used to establish reference ice shapes, the scaling method was applied to determine appropriate scaled test conditions using the smaller models, and the ice shapes were compared. Icing conditions included warm glaze, horn glaze and mixed. The smallest size scaling attempted was 1/3, and scale and reference ice shapes for both cylinders and airfoils indicated that the constant-Weber-number scaling method was effective for the conditions tested.
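Constant-Weber-number scaling can be made concrete with the definition We = rho·V²·d/sigma. The specific reference condition below (53 cm chord at 67 m/s, 1/3 scale) is an illustrative assumption, not the IRT test matrix.

```python
import math

# Weber number based on water properties and a characteristic model size.
RHO = 1000.0      # kg/m^3, water density
SIGMA = 0.0756    # N/m, water surface tension near 0 degC

def weber(v, d):
    return RHO * v ** 2 * d / SIGMA

v_ref, d_ref = 67.0, 0.53        # assumed reference airspeed (m/s) and chord (m)
scale = 1.0 / 3.0                # smallest scale attempted in the study
# Holding We constant while the model shrinks by `scale` requires the
# velocity to grow by 1/sqrt(scale) in this simple sketch.
v_scaled = v_ref / math.sqrt(scale)
we_ref = weber(v_ref, d_ref)
we_scaled = weber(v_scaled, d_ref * scale)
```

Matching We this way is only one constraint of the full scaling method, which must simultaneously preserve the other icing similarity parameters.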
Construction and comparative evaluation of different activity detection methods in brain FDG-PET.
Buchholz, Hans-Georg; Wenzel, Fabian; Gartenschläger, Martin; Thiele, Frank; Young, Stewart; Reuss, Stefan; Schreckenberger, Mathias
2015-08-18
We constructed and evaluated reference brain FDG-PET databases for usage by three software programs (Computer-aided diagnosis for dementia (CAD4D), Statistical Parametric Mapping (SPM) and NEUROSTAT), which allow a user-independent detection of dementia-related hypometabolism in patients' brain FDG-PET. Thirty-seven healthy volunteers were scanned in order to construct brain FDG reference databases, which reflect the normal, age-dependent glucose consumption in human brain, using either software. Databases were compared to each other to assess the impact of different stereotactic normalization algorithms used by either software package. In addition, performance of the new reference databases in the detection of altered glucose consumption in the brains of patients was evaluated by calculating statistical maps of regional hypometabolism in FDG-PET of 20 patients with confirmed Alzheimer's dementia (AD) and of 10 non-AD patients. Extent (hypometabolic volume referred to as cluster size) and magnitude (peak z-score) of detected hypometabolism was statistically analyzed. Differences between the reference databases built by CAD4D, SPM or NEUROSTAT were observed. Due to the different normalization methods, altered spatial FDG patterns were found. When analyzing patient data with the reference databases created using CAD4D, SPM or NEUROSTAT, similar characteristic clusters of hypometabolism in the same brain regions were found in the AD group with either software. However, larger z-scores were observed with CAD4D and NEUROSTAT than those reported by SPM. Better concordance with CAD4D and NEUROSTAT was achieved using the spatially normalized images of SPM and an independent z-score calculation. The three software packages identified the peak z-scores in the same brain region in 11 of 20 AD cases, and there was concordance between CAD4D and SPM in 16 AD subjects. 
The clinical evaluation of brain FDG-PET of 20 AD patients with either CAD4D-, SPM- or NEUROSTAT-generated databases from an identical reference dataset showed similar patterns of hypometabolism in the brain regions known to be involved in AD. The extent of hypometabolism and peak z-score appeared to be influenced by the calculation method used in each software package rather than by different spatial normalization parameters.
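The z-score maps these packages produce share a generic form: each patient voxel is compared with the reference database's voxel-wise mean and SD, with positive z marking reduced uptake. The 37-control "database" and 64-voxel "image" below are toy arrays standing in for the FDG-PET data, not the study's actual images.

```python
import numpy as np

# Voxelwise z-score hypometabolism map against a reference database.
rng = np.random.default_rng(4)
ref_scans = rng.normal(100.0, 5.0, size=(37, 64))   # 37 controls x 64 voxels
patient = rng.normal(100.0, 5.0, size=64)
patient[:8] -= 25.0                                  # simulated hypometabolic cluster

mu = ref_scans.mean(axis=0)
sd = ref_scans.std(axis=0, ddof=1)
z = (mu - patient) / sd          # positive z = less uptake than controls
cluster_mask = z > 2.0           # simple threshold; packages differ here
peak_z = z.max()
```

The abstract's observation that extent and peak z differ between packages reflects choices made around exactly this computation: spatial normalization, smoothing, and the thresholding used to define clusters.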
Evaluation of the reliability of maize reference assays for GMO quantification.
Papazova, Nina; Zhang, David; Gruden, Kristina; Vojvoda, Jana; Yang, Litao; Buh Gasparic, Meti; Blejec, Andrej; Fouilloux, Stephane; De Loose, Marc; Taverniers, Isabel
2010-03-01
A reliable PCR reference assay for relative genetically modified organism (GMO) quantification must be specific for the target taxon and amplify uniformly along the commercialised varieties within the considered taxon. Different reference assays for maize (Zea mays L.) are used in official methods for GMO quantification. In this study, we evaluated the reliability of eight existing maize reference assays, four of which are used in combination with an event-specific polymerase chain reaction (PCR) assay validated and published by the Community Reference Laboratory (CRL). We analysed the nucleotide sequence variation in the target genomic regions in a broad range of transgenic and conventional varieties and lines: MON 810 varieties cultivated in Spain and conventional varieties from various geographical origins and breeding history. In addition, the reliability of the assays was evaluated based on their PCR amplification performance. A single base pair substitution, corresponding to a single nucleotide polymorphism (SNP) reported in an earlier study, was observed in the forward primer of one of the studied alcohol dehydrogenase 1 (Adh1) (70) assays in a large number of varieties. The SNP presence is consistent with a poor PCR performance observed for this assay along the tested varieties. The obtained data show that the Adh1 (70) assay used in the official CRL NK603 assay is unreliable. Based on our results from both the nucleotide stability study and the PCR performance test, we can conclude that the Adh1 (136) reference assay (T25 and Bt11 assays) as well as the tested high mobility group protein gene assay, which also form parts of CRL methods for quantification, are highly reliable. Despite the observed uniformity in the nucleotide sequence of the invertase gene assay, the PCR performance test reveals that this target sequence might occur in more than one copy. 
Finally, although currently not forming a part of official quantification methods, zein and SSIIb assays are found to be highly reliable in terms of nucleotide stability and PCR performance and are proposed as good alternative targets for a reference assay for maize.
Estimation of reference intervals from small samples: an example using canine plasma creatinine.
Geffré, A; Braun, J P; Trumel, C; Concordet, D
2009-12-01
According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which often are impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
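The estimators compared in the study can be contrasted on one skewed sample: nonparametric 2.5th/97.5th percentiles versus mean ± 2 SD on native and log-transformed values (the log standing in for the Box-Cox transform). The 120 "creatinine" values below are synthetic lognormal draws, not the canine data.

```python
import numpy as np

# Reference-interval estimates from one skewed sample of 120 values.
rng = np.random.default_rng(3)
values = rng.lognormal(mean=4.4, sigma=0.25, size=120)   # skewed, arbitrary units

nonparam = np.percentile(values, [2.5, 97.5])            # nonparametric limits
parametric = (values.mean() - 2 * values.std(ddof=1),    # mean +/- 2 SD, native
              values.mean() + 2 * values.std(ddof=1))
log_vals = np.log(values)                                # transform, then back
transformed = np.exp([log_vals.mean() - 2 * log_vals.std(ddof=1),
                      log_vals.mean() + 2 * log_vals.std(ddof=1)])
```

On skewed data the native mean ± 2 SD interval is distorted by the long tail, while the transformed interval tracks the percentile limits more closely, consistent with the study's finding that transformation-based estimates came closest to the large-sample reference interval.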
Pediatric Reference Intervals for Free Thyroxine and Free Triiodothyronine
Jang, Megan; Guo, Tiedong; Soldin, Steven J.
2009-01-01
Background The clinical value of free thyroxine (FT4) and free triiodothyronine (FT3) analysis depends on the reference intervals with which they are compared. We determined age- and sex-specific reference intervals for neonates, infants, and children 0–18 years of age for FT4 and FT3 using tandem mass spectrometry. Methods Reference intervals were calculated for serum FT4 (n = 1426) and FT3 (n = 1107) obtained from healthy children between January 1, 2008, and June 30, 2008, from Children's National Medical Center and Georgetown University Medical Center Bioanalytical Core Laboratory, Washington, DC. Serum samples were analyzed using isotope dilution liquid chromatography tandem mass spectrometry (LC/MS/MS) with deuterium-labeled internal standards. Results FT4 reference intervals were very similar for males and females of all ages and ranged between 1.3 and 2.4 ng/dL for children 1 to 18 years old. FT4 reference intervals for 1- to 12-month-old infants were 1.3–2.8 ng/dL. These 2.5th to 97.5th percentile intervals were much tighter than reference intervals obtained using immunoassay platforms (0.48–2.78 ng/dL for males and 0.85–2.09 ng/dL for females). Similarly, FT3 intervals were consistent and similar for males and females and for all ages, ranging between 1.5 pg/mL and approximately 6.0 pg/mL for children 1 month of age to 18 years old. Conclusions This is the first study to provide pediatric reference intervals of FT4 and FT3 for children from birth to 18 years of age using LC/MS/MS. Analysis using LC/MS/MS provides more specific quantification of thyroid hormones. A comparison of the ultrafiltration tandem mass spectrometric method with equilibrium dialysis showed very good correlation. PMID:19583487
Stochastic model search with binary outcomes for genome-wide association studies
Malovini, Alberto; Puca, Annibale A; Bellazzi, Riccardo
2012-01-01
Objective The spread of case–control genome-wide association studies (GWASs) has stimulated the development of new variable selection methods and predictive models. We introduce a novel Bayesian model search algorithm, Binary Outcome Stochastic Search (BOSS), which addresses the model selection problem when the number of predictors far exceeds the number of binary responses. Materials and methods Our method is based on a latent variable model that links the observed outcomes to the underlying genetic variables. A Markov Chain Monte Carlo approach is used for model search and to evaluate the posterior probability of each predictor. Results BOSS is compared with three established methods (stepwise regression, logistic lasso, and elastic net) in a simulated benchmark. Two real case studies are also investigated: a GWAS on the genetic bases of longevity, and the type 2 diabetes study from the Wellcome Trust Case Control Consortium. Simulations show that BOSS achieves higher precisions than the reference methods while preserving good recall rates. In both experimental studies, BOSS successfully detects genetic polymorphisms previously reported to be associated with the analyzed phenotypes. Discussion BOSS outperforms the other methods in terms of F-measure on simulated data. In the two real studies, BOSS successfully detects biologically relevant features, some of which are missed by univariate analysis and the three reference techniques. Conclusion The proposed algorithm is an advance in the methodology for model selection with a large number of features. Our simulated and experimental results showed that BOSS proves effective in detecting relevant markers while providing a parsimonious model. PMID:22534080
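The F-measure on which BOSS is compared with the reference methods combines precision and recall over the selected markers. The selected/true marker sets below are hypothetical, chosen only to show the arithmetic.

```python
# Precision, recall, and F-measure for a variable-selection result.
truth = {"snp3", "snp7", "snp11"}          # markers truly associated (hypothetical)
selected = {"snp3", "snp7", "snp20"}       # markers picked by a method (hypothetical)

tp = len(truth & selected)                 # true positives
precision = tp / len(selected)             # fraction of selections that are real
recall = tp / len(truth)                   # fraction of real markers recovered
f_measure = 2 * precision * recall / (precision + recall)
```

Here both precision and recall are 2/3, so the harmonic mean is also 2/3; methods trade these off differently, which is why the abstract reports both precision and recall alongside F.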
Immunoturbidimetric quantification of serum immunoglobulin G concentration in foals.
Bauer, J E; Brooks, T P
1990-08-01
Immunoturbidimetric determination of serum IgG concentration in foals was compared with the reference methods of single radial immunodiffusion and serum protein electrophoresis. High positive correlations were discovered when the technique was compared with either of these reference methods. The zinc sulfate turbidity test for serum IgG estimation was also evaluated. Although a positive correlation was discovered when the latter method was compared with reference methods, it was not as strong as the correlation between reference methods and the immunoturbidimetric method. The immunoturbidimetric method used in this study is specific and precise for equine serum IgG determination. It is rapid and, thus, is advantageous when timely evaluation of critically ill foals is necessary. The technique should be adaptable to various spectrophotometers and microcomputers for widespread application in veterinary medicine.
Peck, Michelle A; Sturk-Andreaggi, Kimberly; Thomas, Jacqueline T; Oliver, Robert S; Barritt-Ross, Suzanne; Marshall, Charla
2018-05-01
Generating mitochondrial genome (mitogenome) data from reference samples in a rapid and efficient manner is critical to harnessing the greater power of discrimination of the entire mitochondrial DNA (mtDNA) marker. The method of long-range target enrichment, Nextera XT library preparation, and Illumina sequencing on the MiSeq is a well-established technique for generating mitogenome data from high-quality samples. To this end, a validation was conducted for this mitogenome method processing up to 24 samples simultaneously along with analysis in the CLC Genomics Workbench and utilizing the AQME (AFDIL-QIAGEN mtDNA Expert) tool to generate forensic profiles. This validation followed the Federal Bureau of Investigation's Quality Assurance Standards (QAS) for forensic DNA testing laboratories and the Scientific Working Group on DNA Analysis Methods (SWGDAM) validation guidelines. The evaluation of control DNA, non-probative samples, blank controls, mixtures, and nonhuman samples demonstrated the validity of this method. Specifically, the sensitivity was established at ≥25 pg of nuclear DNA input for accurate mitogenome profile generation. Unreproducible low-level variants were observed in samples with low amplicon yields. Further, variant quality was shown to be a useful metric for identifying sequencing error and crosstalk. Success of this method was demonstrated with a variety of reference sample substrates and extract types. These studies further demonstrate the advantages of using NGS techniques by highlighting the quantitative nature of heteroplasmy detection. The results presented herein from more than 175 samples processed in ten sequencing runs, show this mitogenome sequencing method and analysis strategy to be valid for the generation of reference data. Copyright © 2018 Elsevier B.V. All rights reserved.
Evaluating Level of Specificity of Normative Referents in Relation to Personal Drinking Behavior*
Larimer, Mary E.; Kaysen, Debra L.; Lee, Christine M.; Kilmer, Jason R.; Lewis, Melissa A.; Dillworth, Tiara; Montoya, Heidi D.; Neighbors, Clayton
2009-01-01
Objective: Research has found perceived descriptive norms to be one of the strongest predictors of college student drinking, and several intervention approaches have incorporated normative feedback to correct misperceptions of peer drinking behavior. Little research has focused on the role of the reference group in normative perceptions. The current study sought to examine whether normative perceptions vary based on specificity of the reference group and whether perceived norms for more specific reference-group norms are related to individual drinking behavior. Method: Participants were first-year undergraduates (n = 1,276, 58% female) randomly selected from a university list of incoming students. Participants reported personal drinking behavior and perceived descriptive norms for eight reference groups, including typical student; same gender, ethnicity, or residence; and combinations of those reference groups (e.g., same gender and residence). Results: Findings indicated that participants distinguished among different reference groups in estimating descriptive drinking norms. Moreover, results indicated misperceptions in drinking norms were evident at all levels of specificity of the reference group. Additionally, findings showed perceived norms for more specific groups were uniquely related to participants' own drinking. Conclusions: These results suggest that providing normative feedback targeting at least one level of specificity to the participant (i.e., beyond what the “typical” student does) may be an important tool in normative feedback interventions. PMID:19538919
Leal, Mariana Ferreira; Astur, Diego Costa; Debieux, Pedro; Arliani, Gustavo Gonçalves; Silveira Franciozi, Carlos Eduardo; Loyola, Leonor Casilla; Andreoli, Carlos Vicente; Smith, Marília Cardoso; Pochini, Alberto de Castro; Ejnisman, Benno; Cohen, Moises
2015-01-01
The anterior cruciate ligament (ACL) is one of the most frequently injured structures during high-impact sporting activities. Gene expression analysis may be a useful tool for understanding ACL tears and healing failure. Reverse transcription-quantitative polymerase chain reaction (RT-qPCR) has emerged as an effective method for such studies. However, this technique requires the use of suitable reference genes for data normalization. Here, we evaluated the suitability of six reference genes (18S, ACTB, B2M, GAPDH, HPRT1, and TBP) by using ACL samples of 39 individuals with ACL tears (20 with isolated ACL tears and 19 with ACL tear and combined meniscal injury) and of 13 controls. The stability of the candidate reference genes was determined by using the NormFinder, geNorm, BestKeeper, DataAssist, and RefFinder software packages and the comparative ΔCt method. ACTB was the best single reference gene and ACTB+TBP was the best gene pair. The GenEx software showed that the accumulated standard deviation is reduced when a larger number of reference genes is used for gene expression normalization; the use of a single reference gene may therefore not be suitable. To identify the optimal combination of reference genes, we evaluated the expression of FN1 and PLOD1. We observed that at least 3 reference genes should be used. ACTB+HPRT1+18S is the best trio for analyses involving isolated ACL tears and controls. Conversely, ACTB+TBP+18S is the best trio for analyses involving (1) injured ACL tears and controls, and (2) ACL tears of patients with meniscal tears and controls. Therefore, if the gene expression study aims to compare non-injured ACL, isolated ACL tears, and ACL tears from patients with meniscal tear as three independent groups, ACTB+TBP+18S+HPRT1 should be used. In conclusion, 3 or more genes should be used as reference genes for analysis of ACL samples of individuals with and without ACL tears.
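The comparative ΔCt method mentioned above can be sketched in a few lines: for each candidate gene, take the standard deviation of its Ct difference against every other candidate across samples, then average; lower scores indicate more stable expression. The Ct values below are hypothetical, chosen only to illustrate the ranking, and are not data from the study.

```python
import statistics

def delta_ct_stability(ct):
    """Comparative delta-Ct stability: for each candidate gene, average the
    standard deviation of its Ct difference against every other candidate
    across samples. Lower scores mean more stable expression.
    `ct` maps gene name -> list of Ct values (one per sample)."""
    scores = {}
    for g in ct:
        sds = [statistics.stdev(a - b for a, b in zip(ct[g], ct[h]))
               for h in ct if h != g]
        scores[g] = sum(sds) / len(sds)
    return scores

# Hypothetical Ct values for three candidates across four samples;
# 18S is made deliberately erratic
ct = {
    "ACTB": [18.0, 18.1, 17.9, 18.0],
    "TBP":  [25.0, 25.2, 24.9, 25.1],
    "18S":  [9.0, 10.5, 8.2, 11.0],
}
scores = delta_ct_stability(ct)
best = min(scores, key=scores.get)
```

With these toy numbers the erratic 18S receives the worst (highest) score, mirroring how the method separates stable from unstable candidates.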
Arun, Alok; Baumlé, Véronique; Amelot, Gaël; Nieberding, Caroline M.
2015-01-01
Real-time quantitative reverse transcription PCR (qRT-PCR) is a technique widely used to quantify the transcriptional expression level of candidate genes. qRT-PCR requires the selection of one or several suitable reference genes, whose expression profiles remain stable across conditions, to normalize the qRT-PCR expression profiles of candidate genes. Although several butterfly species (Lepidoptera) have become important models in molecular evolutionary ecology, so far no study aimed at identifying reference genes for accurate data normalization for any butterfly is available. The African bush brown butterfly Bicyclus anynana has drawn considerable attention owing to its suitability as a model for evolutionary ecology, and we here provide a maiden extensive study to identify suitable reference genes in this species. We monitored the expression profile of twelve reference genes: eEF-1α, FK506, UBQL40, RpS8, RpS18, HSP, GAPDH, VATPase, ACT3, TBP, eIF2 and G6PD. We tested the stability of their expression profiles in three different tissues (wings, brains, antennae), two developmental stages (pupal and adult) and two sexes (male and female), all of which were subjected to two food treatments (food stress and control feeding ad libitum). The expression stability and ranking of twelve reference genes was assessed using two algorithm-based methods, NormFinder and geNorm. Both methods identified RpS8 as the best suitable reference gene for expression data normalization. We also showed that the use of two reference genes is sufficient to effectively normalize the qRT-PCR data under varying tissues and experimental conditions that we used in B. anynana. Finally, we tested the effect of choosing reference genes with different stability on the normalization of the transcript abundance of a candidate gene involved in olfactory communication in B. anynana, the Fatty Acyl Reductase 2, and we confirmed that using an unstable reference gene can drastically alter the expression profile of the target candidate genes. PMID:25793735
Valotto, Gabrio; Cattaruzza, Elti; Bardelli, Fabrizio
2017-02-15
The appropriate selection of representative pure compounds to be used as references is a crucial step for successful analysis of X-ray absorption near edge spectroscopy (XANES) data, and it is often not a trivial task. This is particularly true when complex environmental matrices are investigated, since their elemental speciation is a priori unknown. In this paper, an investigation on the speciation of Cu, Zn, and Sb based on the use of conventional (stoichiometric compounds) and non-conventional (environmental samples or relevant certified materials) references is explored. This method can be useful when the effectiveness of XANES analysis is limited by the difficulty of obtaining a set of references sufficiently representative of the investigated samples. Road dust samples collected along the bridge connecting Venice to the mainland were used to show the potential and the limits of this approach. Copyright © 2016 Elsevier B.V. All rights reserved.
Adapting the ISO 20462 softcopy ruler method for online image quality studies
NASA Astrophysics Data System (ADS)
Burns, Peter D.; Phillips, Jonathan B.; Williams, Don
2013-01-01
In this paper we address the problem of image quality assessment with no-reference metrics, focusing on JPEG-corrupted images. In general, no-reference metrics cannot measure distortions with uniform performance across their possible range and across different image contents. The crosstalk between content and distortion signals influences human perception. We propose two strategies to improve the correlation between subjective and objective quality data. The first strategy groups the images according to their spatial complexity; the second is based on a frequency analysis. Both strategies are tested on two databases available in the literature. The results show an improvement in the correlations between no-reference metrics and psycho-visual data, evaluated in terms of the Pearson correlation coefficient.
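The evaluation step described above, correlating objective scores with subjective ones within complexity groups, can be sketched as follows. The group names, scores, and mean-opinion-score (MOS) values are hypothetical placeholders, not data from the study.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical no-reference quality scores and subjective MOS values,
# with images grouped by spatial complexity before correlating
groups = {
    "low_complexity":  ([0.9, 0.7, 0.5, 0.3], [4.5, 3.8, 3.0, 2.1]),
    "high_complexity": ([0.8, 0.6, 0.4, 0.2], [4.0, 3.5, 2.6, 1.9]),
}
per_group = {name: pearson(obj, mos) for name, (obj, mos) in groups.items()}
```

Correlating per group rather than over the pooled set is what isolates the content/distortion crosstalk the abstract refers to.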
NASA Astrophysics Data System (ADS)
Swamy, N.; Prashanth, K. N.; Basavaiah, K.
2015-07-01
Three simple, rapid, inexpensive, and highly sensitive spectrophotometric methods are described for the quantification of pyrantel pamoate (PYP) in pure drug and formulations. The methods are based on the molecular charge-transfer (CT) complexation reaction involving pyrantel base (PYL) as n-donor and iodine as σ-acceptor (I₂, method A), and 2,4-dinitrophenol (DNP, method B) or picric acid (PA, method C) as π-acceptors. Spectrophotometrically, the CT complexes showed absorption maxima at 380, 420, and 430 nm for methods A, B, and C, respectively. Under optimum conditions, Beer's law was obeyed over the concentration ranges 0.12-2.9, 0.12-3.75, and 0.12-2.9 μg/ml for methods A, B, and C, respectively. The apparent molar absorptivities of the CT complexes at the respective λmax are calculated to be 2.63 × 10⁵, 6.91 × 10⁴, and 1.73 × 10⁵ l/(mol·cm), respectively, and the corresponding Sandell sensitivity values are 0.0009, 0.003, and 0.0012. The limits of detection (LOD) and quantification (LOQ) are calculated to be (0.02 and 0.07), (0.05 and 0.15), and (0.02 and 0.07) μg/ml for methods A, B, and C, respectively. The intra-day and inter-day accuracy, expressed as %RE, and precision, expressed as %RSD, are less than 3%. The methods have been applied to the determination of PYP in tablets, suspensions, and spiked human urine. Parallel assay by a reference method and statistical analysis of the results show no significant difference between the proposed methods and the reference method with respect to accuracy and precision, as evident from the Student's t and variance ratio tests. The accuracy of the methods has been further ascertained by recovery tests via the standard addition technique.
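LOD and LOQ figures like those above are conventionally derived from the blank noise and the calibration slope (LOD = 3.3σ/S, LOQ = 10σ/S, per the ICH convention). A minimal sketch, with hypothetical σ and slope values chosen only to land in the same order of magnitude as the reported limits:

```python
def lod_loq(sigma_blank, slope):
    """ICH-style limits of detection and quantification from the standard
    deviation of blank absorbance readings (sigma) and the calibration
    slope (S): LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
    return 3.3 * sigma_blank / slope, 10.0 * sigma_blank / slope

# Hypothetical numbers (absorbance units, and AU per ug/ml), not taken
# from the paper
lod, loq = lod_loq(sigma_blank=0.0015, slope=0.25)
```

With these placeholder inputs the limits come out near 0.02 and 0.06 μg/ml, i.e., the same scale as methods A and C above.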
Vinklárková, Bára; Chromý, Vratislav; Šprongl, Luděk; Bittová, Miroslava; Rikanová, Milena; Ohnútková, Ivana; Žaludová, Lenka
2015-01-01
To select a Kjeldahl procedure suitable for the determination of total protein in reference materials used in laboratory medicine, we reviewed in our previous article Kjeldahl methods adopted by clinical chemistry and found an indirect two-step analysis by total Kjeldahl nitrogen corrected for its nonprotein nitrogen and a direct analysis made on isolated protein precipitates. In this article, we compare both procedures on various reference materials. An indirect Kjeldahl method gave falsely lower results than a direct analysis. Preliminary performance parameters qualify the direct Kjeldahl analysis as a suitable primary reference procedure for the certification of total protein in reference laboratories.
A novel adaptive scoring system for segmentation validation with multiple reference masks
NASA Astrophysics Data System (ADS)
Moltz, Jan H.; Rühaak, Jan; Hahn, Horst K.; Peitgen, Heinz-Otto
2011-03-01
The development of segmentation algorithms for different anatomical structures and imaging protocols is an important task in medical image processing. The validation of these methods, however, is often treated as a subordinate task. Since manual delineations, which are widely used as a surrogate for the ground truth, exhibit an inherent uncertainty, it is preferable to use multiple reference segmentations for an objective validation. This requires a consistent framework that should fulfill three criteria: 1) it should treat all reference masks equally a priori and not demand consensus between the experts; 2) it should evaluate the algorithmic performance in relation to the inter-reference variability, i.e., be more tolerant where the experts disagree about the true segmentation; 3) it should produce results that are comparable for different test data. We show why current state-of-the-art frameworks, such as the one used at several MICCAI segmentation challenges, do not fulfill these criteria, and we propose a new validation methodology. A score is computed in an adaptive way for each individual segmentation problem, using a combination of volume- and surface-based comparison metrics. These are transformed into the score by relating them to the variability between the reference masks, which can be measured by comparing the masks with each other or with an estimated ground truth. We present examples from a study on liver tumor segmentation in CT scans where our score gives a more adequate assessment of the segmentation results than the MICCAI framework.
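The core idea, normalizing the algorithm's error by the disagreement among the references themselves, can be sketched as below. The Jaccard metric and the specific normalization are illustrative choices of ours, not the paper's exact formula, and the 1-D "masks" are hypothetical.

```python
def jaccard(a, b):
    """Jaccard overlap between two binary masks given as voxel-index sets."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def adaptive_score(algo, refs):
    """Illustrative adaptive score: the algorithm's mean Jaccard error
    against all reference masks, normalized by the mean pairwise error
    between the references, so the score is more tolerant where the
    experts themselves disagree. 100 means 'at least as consistent as
    an average expert'."""
    algo_err = sum(1 - jaccard(algo, r) for r in refs) / len(refs)
    pair_errs = [1 - jaccard(refs[i], refs[j])
                 for i in range(len(refs)) for j in range(i + 1, len(refs))]
    ref_err = sum(pair_errs) / len(pair_errs)
    if algo_err == 0:
        return 100.0
    return 100.0 * min(1.0, ref_err / algo_err)

# Hypothetical 1-D "masks" as sets of voxel indices
r1 = set(range(0, 100))      # expert 1
r2 = set(range(5, 105))      # expert 2 (disagrees at the borders)
algo = set(range(2, 102))    # algorithm result between the two experts
score = adaptive_score(algo, [r1, r2])
```

An algorithm that lands within the experts' band of disagreement scores 100, while one that strays well outside it is penalized in proportion.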
Hinojosa-Nogueira, Daniel; Muros, Joaquín; Rufián-Henares, José A; Pastoriza, Silvia
2017-05-24
Polyphenols are bioactive substances of vegetal origin with a significant impact on human health. The assessment of polyphenol intake and excretion is therefore important. The Folin-Ciocalteu (F-C) method is the reference assay to measure polyphenols in foods as well as their excretion in urine. However, many substances can interfere with the method, making it necessary to conduct a prior cleanup using solid-phase extraction (SPE) cartridges. In this paper, we demonstrate the use of the Fast Blue BB reagent (FBBB) as a new tool to measure the excretion of polyphenols in urine. Contrary to F-C, FBBB showed no interference in urine, obviating the time-consuming and costly SPE cleanup. In addition, it showed excellent linearity (r² = 0.9997), with a recovery of 96.4% and a precision of 1.86-2.11%. The FBBB method was validated to measure the excretion of polyphenols in spot urine samples from Spanish children, showing a good correlation between polyphenol intake and excretion.
Reference surfaces for bridge scour depths
Landers, Mark N.; Mueller, David S.; ,
1993-01-01
Depth of scour is measured as the vertical distance between the scoured channel geometry and a measurement reference surface. A scour depth measurement can therefore vary widely depending on the method used to establish the reference surface. A consistent method to establish reference surfaces for bridge scour measurements is needed to facilitate transferability of scour data and scour analyses. This paper describes and evaluates techniques for establishing the reference surfaces from which local and contraction scour are measured.
Kyiv UkrVO glass archives: new life
NASA Astrophysics Data System (ADS)
Pakuliak, L.; Golovnya, V.; Andruk, V.; Shatokhina, S.; Yizhakevych, O.; Kazantseva, L.; Lukianchuk, V.
In the framework of UkrVO national project the new methods of plate digital image processing are developed. The photographic material of the UkrVO Joint Digital Archive (JDA) is used for the solution of classic astrometric problem - positional and photometric determinations of objects registered on the plates. The results of tested methods show that the positional rms errors are better than ±150 mas for both coordinates and photometric ones are better than ±0.20m with the Tycho-2 catalogue as reference.
Harvested wood products : basis for future methodological development
Kenneth E. Skog
2003-01-01
The IPCC Guidelines (IPCC 1997) provide an outline of how harvested wood could be treated in national greenhouse gas (GHG) inventories. This section shows the relation of that outline to the approaches and estimation methods to be presented in this Appendix. Wood and paper products are referred to as harvested wood products (HWP). It does not include carbon in...
Illumination-based synchronization of high-speed vision sensors.
Hou, Lei; Kagami, Shingo; Hashimoto, Koichi
2010-01-01
To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition times of vision sensors should be synchronized. This paper describes an illumination-based synchronization method derived from the phase-locked loop (PLL) algorithm. Incident light reaching a vision sensor from an intensity-modulated illumination source serves as the reference signal for synchronization. Analog and digital computation within the vision sensor forms a PLL that regulates the output signal, which corresponds to the vision frame timing, to be synchronized with the reference. Simulated and experimental results show that a 1,000 Hz frame rate vision sensor was successfully synchronized with 32 μs jitter.
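The PLL principle can be illustrated with a toy discrete-time simulation: a phase detector compares each frame's timing against the phase of the modulated illumination, and a proportional-integral (PI) loop filter trims the next frame period until the sensor locks. The sinusoidal phase detector and the loop gains below are our assumptions for the sketch, not the paper's circuit.

```python
import math

def simulate_pll(n_frames=2000, nominal_period=1.0, ref_period=1.02,
                 kp=0.05, ki=0.005):
    """Toy sampled PLL: the sensor free-runs at `nominal_period` while the
    illumination reference has period `ref_period`; a PI loop driven by a
    sinusoidal phase detector adjusts the frame period until lock."""
    t = 0.0        # timestamp of the current frame
    integ = 0.0    # integrator state of the PI filter
    err = 0.0
    for _ in range(n_frames):
        ref_phase = (t / ref_period) % 1.0
        # phase-detector output: zero when a frame lands at reference phase 0,
        # positive when the frame is early, negative when late
        err = -math.sin(2 * math.pi * ref_phase) / (2 * math.pi)
        integ += err
        period = nominal_period + kp * err + ki * integ
        t += period
    return err

final_err = simulate_pll()
```

The integral term absorbs the 2% frequency offset, so the residual phase error decays toward zero, which is the mechanism behind the small jitter reported above.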
NASA Astrophysics Data System (ADS)
Oks, A.; Katashev, A.; Bernans, E.; Abolins, V.
2017-10-01
The aim of the study was to present a new DAid® Pressure Sock System for monitoring foot locomotion and to verify its temporal characteristics by comparison with data obtained by two other widely used methods serving as references. The designed system is based on sensors that can be knitted directly into garments or hosiery items. The DAid® Pressure Sock System was created for sport and medical applications. Comparison of the temporal characteristics of different types of locomotion, obtained using the designed system and the reference devices, showed good agreement between the data.
NASA Astrophysics Data System (ADS)
Khoo, Geoffrey; Kuennemeyer, Rainer; Claycomb, Rod W.
2005-04-01
Currently, the state of the art of mastitis detection in dairy cows is the laboratory-based measurement of somatic cell count (SCC), which is time consuming and expensive. Alternative, rapid, and reliable on-farm measurement methods are required for effective farm management. We have investigated whether fluorescence lifetime measurements can determine SCC in fresh, unprocessed milk. The method is based on the change in fluorescence lifetime of ethidium bromide when it binds to DNA from the somatic cells. Milk samples were obtained from a Fullwood Merlin Automated Milking System and analysed within a twenty-four hour period, over which the SCC does not change appreciably. For reference, the milk samples were also sent to a testing laboratory where the SCC was determined by traditional methods. The results show that we can quantify SCC using the fluorescence photon migration method from a lower bound of 4×10⁵ cells mL⁻¹ to an upper bound of 1×10⁷ cells mL⁻¹. The upper bound is due to the reference method used, while the cause of the lower bound is as yet unknown.
Quantitative method for gait pattern detection based on fiber Bragg grating sensors
NASA Astrophysics Data System (ADS)
Ding, Lei; Tong, Xinglin; Yu, Lie
2017-03-01
This paper presents a method that uses fiber Bragg grating (FBG) sensors to distinguish the temporal gait patterns in gait cycles. Unlike most conventional methods that rely on electronic sensors to collect physical quantities (i.e., strains, forces, pressures, displacements, velocities, and accelerations), the proposed method utilizes the backreflected peak wavelength from FBG sensors to describe the motion characteristics of human walking. Specifically, the FBG sensors are sensitive to external strain, so their backreflected peak wavelength shifts according to the magnitude of the applied strain. Therefore, when subjects walk in different gait patterns, the strains on the FBG sensors differ and the backreflected peak wavelength varies accordingly. To test the reliability of the FBG sensor platform for gait pattern detection, the gold standard method using force-sensitive resistors (FSRs) for defining gait patterns is introduced as a reference platform. The reliability of the FBG sensor platform is determined by comparing the detection results between the FBG and FSR platforms. The experimental results show that the FBG sensor platform is reliable in gait pattern detection and agrees closely with the reference platform.
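The strain-to-wavelength relation underlying the method can be sketched as follows. The Bragg wavelength (1550 nm), the photo-elastic coefficient (~0.22), and the simple stance/swing threshold are typical assumed values for illustration, not parameters from the paper.

```python
def strain_from_shift(delta_lambda_nm, lambda0_nm=1550.0, p_e=0.22):
    """Axial strain from an FBG peak-wavelength shift via the standard
    relation d(lambda)/lambda0 = (1 - p_e) * strain, with an assumed
    photo-elastic coefficient p_e."""
    return delta_lambda_nm / lambda0_nm / (1.0 - p_e)

def label_gait_frames(shifts_nm, threshold_nm=0.5):
    """Naive illustration of gait-phase labeling: frames whose peak-
    wavelength shift exceeds a threshold are labeled 'stance' (sensor
    loaded), the rest 'swing'."""
    return ["stance" if s > threshold_nm else "swing" for s in shifts_nm]

# Hypothetical per-frame peak shifts (nm) across part of one gait cycle
labels = label_gait_frames([0.05, 0.9, 1.2, 0.8, 0.1, 0.02])
```

A real detector would compare shift waveforms against FSR-defined gait events rather than a fixed threshold, but the mapping from wavelength shift to loading is the same.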
Analysis of Error Sources in STEP Astrometry
NASA Astrophysics Data System (ADS)
Liu, S. Y.; Liu, J. C.; Zhu, Z.
2017-11-01
The space telescope Search for Terrestrial Exo-Planets (STEP) employs a sub-pixel technology that ensures an astrometric accuracy on the focal plane on the order of 1 μas. This level of astrometric precision makes it promising to detect earth-like planets beyond the solar system. In this paper, we analyze the influence of some key factors, including errors in the stellar proper motions, parallaxes, the optical center of the system, and the velocities and positions of the satellite, on the detection of exo-planets. We propose a relative angular distance method to evaluate the non-linear terms in stellar distance caused by possibly existing exo-planets. This method avoids the direct influence of measurement errors in the positions and proper motions of the reference stars. Supposing that there are eight reference stars in the same field of view along with a star hosting a planetary system, we simulate five years of observational data and use the least squares method to obtain the parameters of the planet orbit. Our results show that the method is robust for detecting terrestrial planets at the 1 μas precision of STEP.
Ghost detection and removal based on super-pixel grouping in exposure fusion
NASA Astrophysics Data System (ADS)
Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun
2014-09-01
A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts will be introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect the ghost in the image sequences. We introduce the zero mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The calculation of ZNCC is implemented at the super-pixel level, and the super-pixels that have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images that can be shown on conventional display devices directly, with details preserved and ghost effects reduced. Experimental results show that the proposed method generates high quality images that have fewer ghost artifacts and provide a better visual quality than previous approaches.
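The super-pixel similarity test reduces to a plain ZNCC computation, which is robust to the gain/offset differences that exposure changes introduce. A minimal sketch; the pixel lists are hypothetical stand-ins for the intensities of one super-pixel in two exposures.

```python
import math

def zncc(x, y):
    """Zero-mean normalized cross-correlation between two equal-length
    intensity lists. Returns a value in [-1, 1]; values near 1 mean the
    patches match up to gain/offset, low values flag likely motion."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

# A super-pixel whose content matches the reference up to exposure gain
static = zncc([10, 20, 30, 40], [20, 40, 60, 80])
# A super-pixel where an object moved between exposures
moving = zncc([10, 20, 30, 40], [40, 10, 35, 15])
```

A fusion pipeline would then zero out (or down-weight) the weight-map entries of super-pixels whose ZNCC against the reference falls below a chosen threshold.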
NASA Astrophysics Data System (ADS)
Min, Xiaolin; Liu, Rong; Fu, Bo; Xu, Kexin
2017-06-01
In the non-invasive sensing of blood glucose by near-infrared diffuse reflectance spectroscopy, the spectrum is highly susceptible to the unstable and complicated background variations from the human body and the environment. In in vitro analyses, background variations are usually corrected by the spectrum of a standard reference sample that has similar optical properties to the analyte of interest. However, it is hard to find a standard sample for the in vivo measurement. Therefore, the floating reference measurement method is proposed to enable relative measurements in vivo, where the spectra under some special source-detector distance, defined as the floating reference position, are insensitive to the changes in glucose concentration due to the absorption effect and scattering effect. Because the diffuse reflectance signals at the floating reference positions only reflect the information on background variations during the measurement, they can be used as the internal reference. In this paper, the theoretical basis of the floating reference positions in a semi-infinite turbid medium was discussed based on the steady-state diffusion equation and its analytical solutions in a semi-infinite turbid medium (under the extrapolated boundary conditions). Then, Monte-Carlo (MC) simulations and in vitro experiments based on a custom-built continuous-moving spatially resolving double-fiber NIR measurement system, configured with two types of light source, a super luminescent diode (SLD) and a super-continuum laser, were carried out to verify the existence of the floating reference position in 5%, 10% and 20% Intralipid solutions. The results showed that the simulation values of the floating reference positions are close to the theoretical results, with a maximum deviation of approximately 0.3 mm in 1100-1320 nm. 
Great differences can be observed in 1340-1400 nm because the optical properties of Intralipid in this region do not satisfy the conditions of the steady-state diffusion equation. For the in vitro experiments, floating reference positions exist at 1220 nm and 1320 nm under the two types of light source, and the results are quite close. However, the reference positions obtained from the experiments are further from the light source than those obtained in the MC simulation. For the turbid media and the wavelengths investigated, the difference is up to 1 mm. This study is important for the design of optical fibers to be applied in the floating reference measurement.
Zhang, Songdou; An, Shiheng; Li, Zhen; Wu, Fengming; Yang, Qingpo; Liu, Yichen; Cao, Jinjun; Zhang, Huaijiang; Zhang, Qingwen; Liu, Xiaoxia
2015-01-25
Recent studies have focused on determining functional genes and microRNAs in the pest Helicoverpa armigera (Lepidoptera: Noctuidae). Most of these studies used quantitative real-time PCR (qRT-PCR). Suitable reference genes are necessary to normalize qRT-PCR gene expression data. However, a comprehensive study on the reference genes in H. armigera remains lacking. Twelve candidate reference genes of H. armigera were selected and evaluated for their expression stability under different biotic and abiotic conditions. The comprehensive stability ranking of candidate reference genes was recommended by RefFinder, and the optimal number of reference genes was calculated by geNorm. Two target genes, thioredoxin (TRX) and Cu/Zn superoxide dismutase (SOD), were used to validate the selection of reference genes. Results showed that the most suitable combinations of reference genes were as follows: 28S and RPS15 for developmental stages; RPS15 and RPL13 for larval tissues; EF and RPL27 for adult tissues; GAPDH, RPL27, and β-TUB for nuclear polyhedrosis virus infection; RPS15 and RPL32 for insecticide treatment; RPS15 and RPL27 for temperature treatment; and RPL32, RPS15, and RPL27 for all samples. This study not only establishes an accurate method for normalizing qRT-PCR data in H. armigera but also serves as a reference for further studies on gene transcription in H. armigera and other insects. Copyright © 2014 Elsevier B.V. All rights reserved.
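The geNorm calculation of the optimal number of reference genes rests on the pairwise variation V(n/n+1): the standard deviation of the log2 ratios of normalization factors built from the top n versus top n+1 ranked genes, with V < 0.15 as a commonly used cutoff. A sketch with hypothetical relative quantities (not data from the study):

```python
import math
import statistics

def norm_factors(expr, genes):
    """Per-sample normalization factor: geometric mean of the relative
    quantities of the chosen reference genes. `expr` maps gene name to a
    list of relative quantities, one per sample."""
    n_samples = len(next(iter(expr.values())))
    return [math.prod(expr[g][i] for g in genes) ** (1.0 / len(genes))
            for i in range(n_samples)]

def pairwise_variation(expr, ranked_genes, n):
    """geNorm-style V(n/n+1): standard deviation of log2 ratios of the
    normalization factors from the top n vs. top n+1 ranked genes."""
    nf_n = norm_factors(expr, ranked_genes[:n])
    nf_n1 = norm_factors(expr, ranked_genes[:n + 1])
    return statistics.stdev(math.log2(a / b) for a, b in zip(nf_n, nf_n1))

# Hypothetical relative quantities: two stable genes plus one unstable gene
expr = {
    "RPS15": [1.00, 1.10, 0.90, 1.00],
    "RPL13": [1.00, 1.05, 0.95, 1.02],
    "GAPDH": [1.00, 2.00, 0.50, 1.40],
}
v23 = pairwise_variation(expr, ["RPS15", "RPL13", "GAPDH"], 2)
```

Here adding the erratic third gene changes the normalization factor substantially (V above 0.15), which is geNorm's signal that the extra gene should not be included.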
Pasaniuc, Bogdan; Sankararaman, Sriram; Torgerson, Dara G.; Gignoux, Christopher; Zaitlen, Noah; Eng, Celeste; Rodriguez-Cintron, William; Chapela, Rocio; Ford, Jean G.; Avila, Pedro C.; Rodriguez-Santana, Jose; Chen, Gary K.; Le Marchand, Loic; Henderson, Brian; Reich, David; Haiman, Christopher A.; Gonzàlez Burchard, Esteban; Halperin, Eran
2013-01-01
Motivation: Local ancestry analysis of genotype data from recently admixed populations (e.g. Latinos, African Americans) provides key insights into population history and disease genetics. Although methods for local ancestry inference have been extensively validated in simulations (under many unrealistic assumptions), no empirical study of local ancestry accuracy in Latinos exists to date. Hence, interpreting findings that rely on local ancestry in Latinos is challenging. Results: Here, we use 489 nuclear families from the mainland USA, Puerto Rico and Mexico in conjunction with 3204 unrelated Latinos from the Multiethnic Cohort study to provide the first empirical characterization of local ancestry inference accuracy in Latinos. Our approach for identifying errors does not rely on simulations but on the observation that local ancestry in families follows Mendelian inheritance. We measure the rate of local ancestry assignments that lead to Mendelian inconsistencies in local ancestry in trios (MILANC), which provides a lower bound on errors in the local ancestry estimates. We show that MILANC rates observed in simulations underestimate the rate observed in real data, and that MILANC varies substantially across the genome. Second, across a wide range of methods, we observe that loci with large deviations in local ancestry also show enrichment in MILANC rates. Therefore, local ancestry estimates at such loci should be interpreted with caution. Finally, we reconstruct ancestral haplotype panels to be used as reference panels in local ancestry inference and show that ancestry inference is significantly improved by incorporating these reference panels. Availability and implementation: We provide the reconstructed reference panels together with the maps of MILANC rates as a public resource for researchers analyzing local ancestry in Latinos at http://bogdanlab.pathology.ucla.edu. 
Contact: bpasaniuc@mednet.ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23572411
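The Mendelian-consistency idea behind the MILANC rate can be illustrated with a minimal sketch (not the authors' code). Assuming two-way admixture with local ancestry coded as the number of copies (0, 1 or 2) of one ancestral population carried at a locus, a child's dosage is consistent only if it can be formed from one transmitted ancestry allele per parent:

```python
from itertools import product

def trio_consistent(mother: int, father: int, child: int) -> bool:
    """Check whether a child's local-ancestry dosage at a locus is
    Mendelian-consistent with the parents'. Each parent transmits one
    of its two haplotypes, so a dosage-0 parent transmits 0 copies,
    a dosage-2 parent transmits 1, and a dosage-1 parent either."""
    def transmissible(d):
        return {0} if d == 0 else ({1} if d == 2 else {0, 1})
    return any(m + f == child
               for m, f in product(transmissible(mother), transmissible(father)))

# Parents with dosages 2 and 0 must produce a child with dosage 1;
# any other call at this locus would count as a MILANC event.
assert trio_consistent(2, 0, 1)
assert not trio_consistent(2, 0, 0)
```

Counting the fraction of loci in trios that fail this check gives the lower bound on local-ancestry error described in the abstract.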
Measuring Broadband IR Irradiance in the Direct Solar Beam (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reda, I.; Konings, J.; Xie, Y.
Solar and atmospheric science radiometers, e.g. pyranometers, pyrheliometers, and photovoltaic cells, are calibrated with traceability to a consensus reference, which is maintained by Absolute Cavity Radiometers (ACRs). The ACR is an open cavity with no window, developed to measure extended broadband direct solar irradiance beyond the ultraviolet and infrared bands, below 0.2 micrometers and above 50 micrometers, respectively. On the other hand, pyranometers and pyrheliometers are developed to measure broadband shortwave irradiance from approximately 0.3 micrometers to 3 micrometers, while present photovoltaic cells are limited to approximately 0.3 micrometers to 1 micrometer. The broadband mismatch of ACRs versus such radiometers causes a discrepancy in radiometer calibration methods that has not been discussed or addressed in the solar and atmospheric science literature. Pyrgeometers are also used for solar and atmospheric science applications and are calibrated with traceability to a consensus reference, yet they are calibrated during nighttime only, because no consensus reference has yet been established for daytime longwave irradiance. This poster shows a method to measure the broadband IR irradiance in the direct solar beam from 3 micrometers to 50 micrometers, as a first step that might be used to help develop calibration methods addressing the mismatch between broadband ACRs and shortwave radiometers, and the lack of a daytime reference for pyrgeometers. The irradiance was measured from sunrise to sunset for 5 days when the sun disk was cloudless; the irradiance varied from approximately 1 Wm-2 to 16 Wm-2 for solar zenith angles from 80 degrees to 16 degrees, respectively; the estimated uncertainty is 1.5 Wm-2.
Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.
Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen
2014-02-01
The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher-quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos compared with AVC (MPEG-4 Advanced Video Coding) high profile, yet with only a slight additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
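The block classification driving BMAP can be sketched in a few lines. This is an illustrative stand-in, not the paper's algorithm: an exponential running average substitutes for the paper's background modeling, and the thresholds are assumed values:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average as a simple background model
    (a stand-in for the paper's modeled background; alpha is assumed)."""
    return (1 - alpha) * bg + alpha * frame

def classify_block(block, bg_block, diff_thresh=10.0, fg_ratio=0.5):
    """Label a block by the fraction of pixels deviating from the model:
    'background' blocks would use BRP, 'hybrid' blocks BDP."""
    ratio = np.mean(np.abs(block - bg_block) > diff_thresh)
    if ratio == 0:
        return "background"
    return "foreground" if ratio > fg_ratio else "hybrid"

bg = np.zeros((16, 16))
frame = np.zeros((16, 16))
frame[:4, :4] = 100.0             # a small object entering the block
print(classify_block(frame, bg))  # → hybrid
```

For a hybrid block, prediction in the background difference domain then means encoding `block - bg_block` against a similarly differenced reference, which removes the shared static pixels.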
A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4
NASA Technical Reports Server (NTRS)
Park, Young-Keun; Fahrenthold, Eric P.
2004-01-01
An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation computes the density without reference to any kernel or interpolation functions, for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three dimensional simulation.
Spectroscopy by joint spectral and time domain optical coherence tomography
NASA Astrophysics Data System (ADS)
Szkulmowski, Maciej; Tamborski, Szymon; Wojtkowski, Maciej
2015-03-01
We present a methodology for spectroscopic examination of absorbing media that combines Spectral Optical Coherence Tomography and Fourier Transform Spectroscopy. The method is based on the joint Spectral and Time OCT computational scheme and simplifies the data analysis procedure compared with the commonly used windowing-based Spectroscopic OCT methods. The proposed experimental setup is self-calibrating in terms of wavelength-to-pixel assignment. The performance of the method in measuring an absorption spectrum was verified using a reflecting phantom filled with an absorbing agent (indocyanine green). The results show quantitative agreement with the controlled exact results provided by the reference method.
2010-01-01
Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service. PMID:21034504
Reference genes for quantitative PCR in the adipose tissue of mice with metabolic disease.
Almeida-Oliveira, Fernanda; Leandro, João G B; Ausina, Priscila; Sola-Penna, Mauro; Majerowicz, David
2017-04-01
Obesity and diabetes are metabolic diseases of increasing prevalence. The dynamics of gene expression associated with these diseases is fundamental to identifying genes involved in related biological processes. qPCR is a sensitive technique for mRNA quantification and the most commonly used method in gene-expression studies. However, the reliability of these results is directly influenced by data normalization. As reference genes are the major normalization method used, this work aims to identify reference genes for qPCR in the adipose tissues of mice with type-I diabetes or obesity. We selected 12 genes that are commonly used as reference genes. The expression of these genes in the adipose tissues of mice was analyzed in the context of three different experimental protocols: 1) untreated animals; 2) high-fat-diet animals; and 3) streptozotocin-treated animals. Gene-expression stability was analyzed using four different algorithms. Our data indicate that TATA-binding protein is stably expressed across adipose tissues in control animals. This gene was also a useful reference when the brown adipose tissues of control and obese mice were analyzed. The mitochondrial ATP synthase F1 complex gene exhibits stable expression in subcutaneous and perigonadal adipose tissue from control and obese mice. Moreover, this gene is the best reference for qPCR normalization in adipose tissue from streptozotocin-treated animals. These results show that there is no perfectly stable gene suited for use under all experimental conditions. In conclusion, the selection of appropriate genes is a prerequisite to ensure qPCR reliability and must be performed separately for different experimental protocols. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
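The core of any reference-gene screen is a stability score computed across samples. A minimal sketch, using the coefficient of variation of Ct values as a simplified stand-in for the four algorithms used in the study (geNorm, NormFinder and similar tools use more elaborate pairwise-variation models), with hypothetical Ct values:

```python
import statistics

def stability_cv(ct_values):
    """Coefficient of variation of qPCR Ct values across samples;
    lower means more stably expressed. A simplified stand-in for
    geNorm/NormFinder-style stability scores."""
    return statistics.stdev(ct_values) / statistics.mean(ct_values)

# Hypothetical Ct values for two candidate reference genes:
candidates = {
    "Tbp":  [24.1, 24.3, 24.0, 24.2],   # stable across conditions
    "Actb": [18.5, 21.0, 19.7, 22.3],   # shifts under treatment
}
best = min(candidates, key=lambda g: stability_cv(candidates[g]))
print(best)  # → Tbp
```

As the abstract stresses, the ranking must be recomputed per tissue and per experimental protocol; a gene stable in controls can rank poorly in treated animals.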
Standardization of automated 25-hydroxyvitamin D assays: How successful is it?
Elsenberg, E H A M; Ten Boekel, E; Huijgen, H; Heijboer, A C
2017-12-01
Multiple 25(OH)D assays have recently been aligned to improve comparability. In this study we investigated the performance of these assays using both native single-donor sera with target values certified by a reference method and single-donor sera from a heterogeneous patient population. 25(OH)D levels were measured in twenty reference samples (Ref!25OHD; Labquality, Finland) using five automated methods (Lumipulse, Liaison, Cobas, iSYS and Access) and one aligned ID-XLC-MS/MS method (slope: 1.00; intercept: 0.00; R=0.996). Furthermore, 25(OH)D concentrations measured in 50 pregnant women and 52 random patients using the 5 automated assays were compared to the ID-XLC-MS/MS. In addition, vitamin D binding protein (DBP) was measured. Most automated assays showed significant differences in 25(OH)D levels measured in reference samples. Slopes varied from 1.00 to 1.33, intercepts from -5.48 to -15.81 nmol/L and the R from 0.971 to 0.997. This inaccuracy was even more prominent in the heterogeneous patient population: slopes varied from 0.75 to 1.35, intercepts from -9.02 to 11.51 nmol/L and the R from 0.840 to 0.949. For most assays the deviation in 25(OH)D concentration increased with rising DBP concentrations, suggesting that DBP might be one of the factors contributing to the inaccuracy of currently used automated 25(OH)D methods. Despite the use of standardized assays, we observed significant differences in 25(OH)D concentrations for some automated methods using reference material obtained from healthy single-donor sera. In sera of a patient population this inaccuracy was even worse, which is highly concerning as patient samples are being investigated in clinical laboratories. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
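The slope/intercept/R figures above come from regressing each automated assay against the reference method. A minimal sketch with made-up concentrations, using ordinary least squares for brevity (method-comparison studies more often use Passing-Bablok or Deming regression, which tolerate error in both variables):

```python
import statistics

def compare_methods(reference, assay):
    """OLS comparison of an assay against reference values:
    returns (slope, intercept, correlation R)."""
    mx, my = statistics.mean(reference), statistics.mean(assay)
    sxx = sum((x - mx) ** 2 for x in reference)
    syy = sum((y - my) ** 2 for y in assay)
    sxy = sum((x - mx) * (y - my) for x, y in zip(reference, assay))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

# Hypothetical 25(OH)D values in nmol/L (reference = ID-LC-MS/MS):
ref = [25.0, 50.0, 75.0, 100.0]
assay = [24.0, 51.5, 74.0, 101.0]
slope, intercept, r = compare_methods(ref, assay)
```

A well-aligned assay has slope near 1.00, intercept near 0 nmol/L and R near 1; systematic deviations of the kind reported (slopes up to 1.35) show up directly in these three numbers.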
Indirect methods for reference interval determination - review and recommendations.
Jones, Graham R D; Haeckel, Rainer; Loh, Tze Ping; Sikaris, Ken; Streichert, Thomas; Katayev, Alex; Barth, Julian H; Ozarda, Yesim
2018-04-19
Reference intervals are a vital part of the information supplied by clinical laboratories to support interpretation of numerical pathology results such as are produced in clinical chemistry and hematology laboratories. The traditional method for establishing reference intervals, known as the direct approach, is based on collecting samples from members of a preselected reference population, making the measurements and then determining the intervals. An alternative approach is to perform analysis of results generated as part of routine pathology testing and using appropriate statistical techniques to determine reference intervals. This is known as the indirect approach. This paper from a working group of the International Federation of Clinical Chemistry (IFCC) Committee on Reference Intervals and Decision Limits (C-RIDL) aims to summarize current thinking on indirect approaches to reference intervals. The indirect approach has some major potential advantages compared with direct methods. The processes are faster, cheaper and do not involve patient inconvenience, discomfort or the risks associated with generating new patient health information. Indirect methods also use the same preanalytical and analytical techniques used for patient management and can provide very large numbers for assessment. Limitations to the indirect methods include possible effects of diseased subpopulations on the derived interval. The IFCC C-RIDL aims to encourage the use of indirect methods to establish and verify reference intervals, to promote publication of such intervals with clear explanation of the process used and also to support the development of improved statistical techniques for these studies.
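In its simplest form, the indirect approach takes the central 95% of routine laboratory results as the reference interval. The sketch below illustrates only this naive version with simulated data; the methods the IFCC C-RIDL discusses (e.g. Bhattacharya or Hoffmann techniques) additionally model and remove the diseased subpopulations first, which is exactly the limitation noted above:

```python
import random

def indirect_reference_interval(results, lower_pct=0.025, upper_pct=0.975):
    """Naive indirect estimate: central 95% of routine results.
    Real indirect methods model out diseased subpopulations before
    taking percentiles; this sketch deliberately omits that step."""
    s = sorted(results)
    lo = s[int(lower_pct * (len(s) - 1))]
    hi = s[int(upper_pct * (len(s) - 1))]
    return lo, hi

random.seed(0)
# Simulated routine data: mostly healthy results plus a diseased tail.
routine = ([random.gauss(140, 3) for _ in range(950)] +
           [random.gauss(125, 4) for _ in range(50)])
lo, hi = indirect_reference_interval(routine)
```

Running this shows the problem the review highlights: the 5% diseased tail drags the lower limit well below the healthy distribution, so the derived interval must be interpreted (or the data filtered) with care.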
Liu, Shu-Yu; Hu, Chang-Qin
2007-10-17
This study introduces the general method of quantitative nuclear magnetic resonance (qNMR) for the calibration of reference standards of macrolide antibiotics. Several qNMR experimental conditions were optimized including delay, which is an important parameter of quantification. Three kinds of macrolide antibiotics were used to validate the accuracy of the qNMR method by comparison with the results obtained by the high performance liquid chromatography (HPLC) method. The purities of five common reference standards of macrolide antibiotics were measured by the 1H qNMR method and the mass balance method, respectively. The analysis results of the two methods were compared. The qNMR is quick and simple to use. In a new medicine research and development process, qNMR provides a new and reliable method for purity analysis of the reference standard.
Consistency-based rectification of nonrigid registrations
Gass, Tobias; Székely, Gábor; Goksel, Orcun
2015-01-01
Abstract. We present a technique to rectify nonrigid registrations by improving their group-wise consistency, which is a widely used unsupervised measure to assess pair-wise registration quality. While pair-wise registration methods cannot guarantee any group-wise consistency, group-wise approaches typically enforce perfect consistency by registering all images to a common reference. However, errors in individual registrations to the reference then propagate, distorting the mean and accumulating in the pair-wise registrations inferred via the reference. Furthermore, the assumption that perfect correspondences exist is not always true, e.g., for interpatient registration. The proposed consistency-based registration rectification (CBRR) method addresses these issues by minimizing the group-wise inconsistency of all pair-wise registrations using a regularized least-squares algorithm. The regularization controls the adherence to the original registration, which is additionally weighted by the local postregistration similarity. This allows CBRR to adaptively improve consistency while locally preserving accurate pair-wise registrations. We show that the resulting registrations are not only more consistent, but also have lower average transformation error when compared to known transformations in simulated data. On clinical data, we show improvements of up to 50% target registration error in breathing motion estimation from four-dimensional MRI and improvements in atlas-based segmentation quality of up to 65% in terms of mean surface distance in three-dimensional (3-D) CT. Such improvement was observed consistently using different registration algorithms, dimensionality (two-dimensional/3-D), and modalities (MRI/CT). PMID:26158083
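The group-wise consistency idea can be shown in a one-dimensional toy: pairwise registrations reduce to scalar shifts T[i, j], and transitivity requires T[i, j] + T[j, k] = T[i, k]. The sketch below fits one offset per image by least squares, which yields the consistent set of shifts closest to the originals; CBRR itself works on dense nonrigid deformations and adds similarity-weighted regularization, which this toy omits:

```python
import numpy as np

def rectify_shifts(T, iters=500, lr=0.1):
    """Toy 1-D consistency rectification: replace pairwise shifts T[i, j]
    by the transitively consistent set closest (least squares) to them,
    via per-image offsets p fitted by gradient descent."""
    n = T.shape[0]
    p = np.zeros(n)
    for _ in range(iters):
        R = (p[None, :] - p[:, None]) - T   # residual of each pairwise shift
        grad = R.sum(axis=0) - R.sum(axis=1)
        p -= lr * grad / n
    return p[None, :] - p[:, None]          # consistent pairwise shifts

p_true = np.array([0.0, 1.0, 2.5])
T = p_true[None, :] - p_true[:, None]
T[0, 1] += 0.2
T[1, 0] -= 0.2                              # one inconsistent registration
C = rectify_shifts(T)
# C is exactly transitive: C[0, 1] + C[1, 2] == C[0, 2]
```

Because every pair is fitted jointly rather than routed through one reference image, no single bad registration propagates into all the others, which is the failure mode of common-reference approaches described above.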
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jelinski, J.A.; Anderson, S.L.
1995-12-31
The authors' objectives were to determine the feasibility of using embryos of two fish species, Menidia beryllina and Atherinops affinis, in estuarine sediment toxicity tests at ambient temperatures and salinities, and to compare pore-water and sediment-water interface corer (SWIC) exposure techniques using these same species. The ultimate goal is to determine whether these pore-water and SWIC methods can be used in in situ exposure studies. Sediment samples were collected at both a reference and a contaminated site at the Mare Island Naval Shipyard (MINSY) in San Francisco Bay. Pore-water tests were conducted using methods developed in the laboratory, and SWIC tests were conducted using a modification of B. Anderson et al. Salinity and temperature tolerance experiments revealed that M. beryllina embryos can tolerate temperatures between 16°C and 24°C and salinities of 10 ppt to 25 ppt, whereas A. affinis has a temperature range between 16°C and 20°C. Comparisons between pore-water and SWIC exposures at a reference site within MINSY showed no significant difference in hatching success. However, hatching success in SWIC exposures was significantly lower than in pore-water exposures at a previously characterized contaminated site. In conclusion, both M. beryllina and A. affinis embryos may be useful for sediment and in situ toxicity testing in estuarine environments. Their wide temperature and salinity tolerances allow for minimal test manipulations, and M. beryllina showed excellent hatching success in reference sediments for both types of exposures.
Benschop, Corina C G; Connolly, Edward; Ansell, Ricky; Kokshoorn, Bas
2017-01-01
The interpretation of complex DNA profiles may differ between laboratories and reporting officers, which can lead to discrepancies in the final reports. In this study, we assessed the intra- and inter-laboratory variation in DNA mixture interpretation for three European ISO 17025-accredited laboratories. To this aim, 26 reporting officers analyzed five sets of DNA profiles. Three main aspects were considered: 1) whether the mixed DNA profiles met the criteria for comparison to a reference profile, 2) the actual result of the comparison between references and DNA profiling data, and 3) whether the weight of the DNA evidence could be assessed. Similarity in answers depended mostly on the complexity of the tasks. This study showed less variation within laboratories than between laboratories, which could result from differences in internal laboratory guidelines and in the methods and tools available. The results show the profile types for which the three laboratories report differently, which indirectly indicates the complexity threshold each laboratory employs. The largest differences between laboratories were caused by the methods available to assess the weight of the DNA evidence. This exercise aids in training forensic scientists, refining laboratory guidelines and explaining differences between laboratories in court. Undertaking more collaborative exercises in future may stimulate dialog and consensus regarding interpretation. For training purposes, DNA profiles of the mixed stains and questioned references are made available. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
Oftedal, O T; Eisert, R; Barrell, G K
2014-01-01
Mammalian milks may differ greatly in composition from cow milk, and these differences may affect the performance of analytical methods. High-fat, high-protein milks with a preponderance of oligosaccharides, such as those produced by many marine mammals, present a particular challenge. We compared the performance of several methods against reference procedures using Weddell seal (Leptonychotes weddellii) milk of highly varied composition (by reference methods: 27-63% water, 24-62% fat, 8-12% crude protein, 0.5-1.8% sugar). A microdrying step preparatory to carbon-hydrogen-nitrogen (CHN) gas analysis slightly underestimated water content and had a higher repeatability relative standard deviation (RSDr) than did reference oven drying at 100°C. Compared with a reference macro-Kjeldahl protein procedure, the CHN (or Dumas) combustion method had a somewhat higher RSDr (1.56 vs. 0.60%) but correlation between methods was high (0.992), means were not different (CHN: 17.2±0.46% dry matter basis; Kjeldahl 17.3±0.49% dry matter basis), there were no significant proportional or constant errors, and predictive performance was high. A carbon stoichiometric procedure based on CHN analysis failed to adequately predict fat (reference: Röse-Gottlieb method) or total sugar (reference: phenol-sulfuric acid method). Gross energy content, calculated from energetic factors and results from reference methods for fat, protein, and total sugar, accurately predicted gross energy as measured by bomb calorimetry. We conclude that the CHN (Dumas) combustion method and calculation of gross energy are acceptable analytical approaches for marine mammal milk, but fat and sugar require separate analysis by appropriate analytic methods and cannot be adequately estimated by carbon stoichiometry. 
Some other alternative methods (low-temperature drying for water determination; the Bradford, Lowry, and biuret methods for protein; the Folch and the Bligh and Dyer methods for fat; and enzymatic and reducing-sugar methods for total sugar) appear likely to produce substantial error in marine mammal milks. It is important that alternative analytical methods be properly validated against a reference method before being used, especially for mammalian milks that differ greatly from cow milk in analyte characteristics and concentrations. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
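The gross-energy calculation endorsed above is simple arithmetic over the proximate composition. A sketch with illustrative numbers: the energetic factors (kJ/g) are commonly cited values for milk fat, protein and sugar, assumed here for illustration rather than taken from the paper, and the composition is a hypothetical mid-range Weddell seal milk:

```python
def gross_energy_kj_per_g(fat, protein, sugar,
                          kf=38.12, kp=23.86, ks=16.53):
    """Gross energy of milk (kJ/g) from mass fractions of fat, protein
    and sugar. The factors kf, kp, ks are assumed illustrative values;
    the paper validates this kind of calculation against bomb calorimetry."""
    return kf * fat + kp * protein + ks * sugar

# Hypothetical composition within the reported ranges:
# 45% fat, 10% crude protein, 1% sugar (as-fed basis).
ge = gross_energy_kj_per_g(0.45, 0.10, 0.01)
print(round(ge, 1))  # → 19.7
```

The abstract's conclusion is that this additive calculation tracks bomb calorimetry well, whereas back-calculating fat or sugar from carbon stoichiometry does not.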
NASA Astrophysics Data System (ADS)
Iizumi, Toshichika; Takikawa, Hiroki; Hirabayashi, Yukiko; Hanasaki, Naota; Nishimori, Motoki
2017-08-01
The use of different bias-correction methods and global retrospective meteorological forcing data sets as the reference climatology in the bias correction of general circulation model (GCM) daily data is a known source of uncertainty in projected climate extremes and their impacts. Despite their importance, limited attention has been given to these uncertainty sources. We compare 27 projected temperature and precipitation indices over 22 regions of the world (including the global land area) in the near (2021-2060) and distant future (2061-2100), calculated using four Representative Concentration Pathways (RCPs), five GCMs, two bias-correction methods, and three reference forcing data sets. To widen the variety of forcing data sets, we developed a new forcing data set, S14FD, and incorporated it into this study. The results show that S14FD is more accurate than other forcing data sets in representing the observed temperature and precipitation extremes in recent decades (1961-2000 and 1979-2008). The use of different bias-correction methods and forcing data sets contributes more to the total uncertainty in the projected precipitation index values in both the near and distant future than the use of different GCMs and RCPs. However, GCM appears to be the most dominant uncertainty source for projected temperature index values in the near future, and RCP is the most dominant source in the distant future. Our findings encourage climate risk assessments, especially those related to precipitation extremes, to employ multiple bias-correction methods and forcing data sets in addition to using different GCMs and RCPs.
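One widely used family of bias-correction methods is empirical quantile mapping, where each GCM value is passed through the historical model-vs-reference CDFs. The abstract does not name the two methods compared, so this is a generic sketch of the technique with simulated data, not the study's implementation:

```python
import numpy as np

def quantile_map(model_hist, obs_ref, model_fut):
    """Empirical quantile mapping: map each future model value through
    the historical model CDF, then invert the observed (reference) CDF."""
    mh = np.sort(model_hist)
    ob = np.sort(obs_ref)
    q = np.searchsorted(mh, model_fut, side="right") / len(mh)
    return np.quantile(ob, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(0)
obs = rng.normal(15.0, 3.0, 1000)   # reference forcing climatology
hist = obs + 2.0                    # model runs 2 degrees too warm
fut = hist + 1.0                    # future run with +1 degree signal
corrected = quantile_map(hist, obs, fut)
# The 2-degree bias is removed while the +1 degree signal is retained.
```

Because the mapping is anchored to the reference data set, swapping the forcing data (as the study does with S14FD and two others) changes the corrected extremes, which is precisely the uncertainty source being quantified.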
LinkImpute: Fast and Accurate Genotype Imputation for Nonmodel Organisms.
Money, Daniel; Gardner, Kyle; Migicovsky, Zoë; Schwaninger, Heidi; Zhong, Gan-Yuan; Myles, Sean
2015-09-15
Obtaining genome-wide genotype data from a set of individuals is the first step in many genomic studies, including genome-wide association and genomic selection. All genotyping methods suffer from some level of missing data, and genotype imputation can be used to fill in the missing data and improve the power of downstream analyses. Model organisms like human and cattle benefit from high-quality reference genomes and panels of reference genotypes that aid in imputation accuracy. In nonmodel organisms, however, genetic and physical maps often are either of poor quality or are completely absent, and there are no panels of reference genotypes available. There is therefore a need for imputation methods designed specifically for nonmodel organisms in which genomic resources are poorly developed and marker order is unreliable or unknown. Here we introduce LinkImpute, a software package based on a k-nearest neighbor genotype imputation method, LD-kNNi, which is designed for unordered markers. No physical or genetic maps are required, and it is designed to work on unphased genotype data from heterozygous species. It exploits the fact that markers useful for imputation often are not physically close to the missing genotype but rather distributed throughout the genome. Using genotyping-by-sequencing data from diverse and heterozygous accessions of apples, grapes, and maize, we compare LD-kNNi with several genotype imputation methods and show that LD-kNNi is fast, comparable in accuracy to the best-existing methods, and exhibits the least bias in allele frequency estimates. Copyright © 2015 Money et al.
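The k-nearest-neighbor idea underlying LD-kNNi can be sketched compactly. This is an illustrative simplification, not the published algorithm: LD-kNNi restricts the distance calculation to markers in high LD with the target and weights neighbors by distance, whereas the sketch uses all markers and a plain majority vote:

```python
import numpy as np

def knn_impute(geno, sample, marker, k=3):
    """Impute one missing genotype (coded 0/1/2, -1 = missing) by majority
    vote among the k samples most similar at the other markers."""
    others = [i for i in range(geno.shape[0])
              if i != sample and geno[i, marker] >= 0]
    mask = geno[sample] >= 0          # compare only at non-missing markers
    mask[marker] = False
    dists = [(int(np.sum(geno[i, mask] != geno[sample, mask])), i)
             for i in others]
    neighbors = [i for _, i in sorted(dists)[:k]]
    votes = geno[neighbors, marker]
    return int(np.bincount(votes).argmax())

G = np.array([[0, 0, 1, 2],
              [0, 0, 1, 2],
              [2, 2, 0, 0],
              [0, 0, 1, -1]])        # last sample's last genotype missing
print(knn_impute(G, 3, 3, k=2))      # → 2
```

Note that nothing here uses marker order, which is why the approach needs no genetic or physical map, the property that makes it suitable for nonmodel organisms.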
Junge, Benjamin; Berghof-Jäger, Kornelia
2006-01-01
A method was developed for the detection of L. monocytogenes in food based on real-time polymerase chain reaction (PCR). This advanced PCR method was designed to reduce the time needed to achieve results from PCR reactions and to enable the user to monitor the amplification of the PCR product simultaneously, in real time. After DNA isolation using the Roche/BIOTECON Diagnostics ShortPrep foodproof II Kit (formerly called Listeria ShortPrep Kit), designed for the rapid preparation of L. monocytogenes DNA for direct use in PCR, the real-time detection of L. monocytogenes DNA is performed using the Roche/BIOTECON Diagnostics LightCycler foodproof L. monocytogenes Detection Kit. This kit provides primers and hybridization probes for sequence-specific detection, convenient premixed reagents, and different controls for reliable interpretation of results. For repeatability studies, 20 different foods were analyzed, covering the 15 food groups recommended by the AOAC Research Institute (AOAC RI) for L. monocytogenes detection: raw meats, fresh produce/vegetables, processed meats, seafood, egg and egg products, dairy (cultured/noncultured), spices, dry foods, fruit/juices, uncooked pasta, nuts, confectionery, pet food, food dyes and colorings, and miscellaneous. From each food, 20 samples were inoculated with a low level (1-10 colony-forming units (CFU)/25 g) and 20 samples with a high level (10-50 CFU/25 g) of L. monocytogenes. Additionally, 5 uninoculated samples were prepared from each food. The food samples were examined with the test kits and in correlation with the cultural methods according to the U.S. Food and Drug Administration (FDA) Bacteriological Analytical Manual (BAM) or the U.S. Department of Agriculture (USDA)/Food Safety and Inspection Service (FSIS) Microbiology Laboratory Guidebook. After 48 h of incubation, the PCR method in all cases showed equal or better results than the reference cultural FDA/BAM or USDA/FSIS methods.
Fifteen out of 20 tested food types gave exactly the same number of positive samples for both methods at both inoculation levels. For 5 out of 20 foodstuffs, the PCR method resulted in more positives than the reference method after 48 h of incubation. Following the AOAC RI definition, these were false positives because they were not confirmed by the reference method (false-positive rate for low-inoculated foodstuffs: 5.4%; for high-inoculated foodstuffs: 7.1%). Without counting these unconfirmed positives, the PCR method showed sensitivity equal to that of the reference method. With the unconfirmed PCR positives included in the calculations, the alternative PCR method showed a higher sensitivity than the microbiological methods (low inoculation level: 100 vs 98.0%, sensitivity rate 1; high inoculation level: 99.7 vs 97.7%, sensitivity rate 1). All in-house and independently tested uninoculated food samples were negative for L. monocytogenes. The ruggedness testing of both the ShortPrep foodproof II Kit and the Roche/BIOTECON LightCycler foodproof L. monocytogenes Detection Kit showed no noteworthy influence from variations in component concentration, apparatus, tester, or sample volume. In total, 102 L. monocytogenes isolates (cultures and pure DNA) were tested and detected in the inclusivity study, including all isolates claimed by the AOAC RI. The exclusivity study included 60 non-L. monocytogenes bacteria; none of the tested isolates gave a false-positive result, so specificity was 100%. Three different lots were tested in the lot-to-lot study, and all 3 lots gave equal results. The stability study was subdivided into 3 parts: a long-term study, a stress test, and a freeze-defrost test. Three lots were tested at 4 time intervals within a period of 13 months, and all gave comparable results for all test intervals. For the stress test, LightCycler L. monocytogenes detection mixes were stored at different temperatures and tested at different time points during 1 month. Stable results were produced at all storage temperatures. The freeze-defrost analysis showed no noteworthy deterioration of test results. The independent validation study performed by the Campden and Chorleywood Food Research Association Group (CCFRA) demonstrated again that the LightCycler L. monocytogenes detection system shows a sensitivity comparable to the reference methods. With both the LightCycler PCR and BAM methods, 19 out of 20 inoculated food samples were detected. The 24 h PCR results generated by the LightCycler system corresponded directly with the FDA/BAM culture results. However, the 48 h PCR results did not relate exactly to the FDA/BAM results, as one sample found to be positive by the 48 h PCR could not be culturally confirmed and another sample that was negative by the 48 h PCR was culturally positive.
Genetic Adaptive Control for PZT Actuators
NASA Technical Reports Server (NTRS)
Kim, Jeongwook; Stover, Shelley K.; Madisetti, Vijay K.
1995-01-01
A piezoelectric transducer (PZT) is capable of providing linear motion if controlled correctly and could provide a replacement for traditional heavy and large servo systems using motors. This paper focuses on a genetic model reference adaptive control technique (GMRAC) for a PZT moving a mirror, where the goal is to keep the mirror velocity constant. Genetic Algorithms (GAs) are an integral part of the GMRAC technique, acting as the search engine for an optimal PID controller. Two methods are suggested to control the actuator in this research: the first is to change the PID parameters, and the other is to add an additional reference input to the system. Simulated Annealing (SA) is also used to solve the problem, and the simulation results of the GA and SA approaches are compared; the GAs give the best results. The entire model is designed using the MathWorks' Simulink tool.
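The "GA as search engine for PID gains" idea can be sketched with a minimal real-coded genetic algorithm. This is a generic illustration, not the paper's GMRAC: the cost function here is a toy stand-in (distance of the gains from a known optimum) where the paper would evaluate the tracking error of the PZT model against the reference model:

```python
import random

def ga_minimize(cost, bounds, pop_size=30, gens=60, mut=0.2):
    """Minimal real-coded GA: keep the better half (elitism), breed
    children by blend crossover plus Gaussian mutation, clamp to bounds."""
    rnd = random.Random(1)
    pop = [[rnd.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rnd.sample(elite, 2)
            child = [(x + y) / 2 + rnd.gauss(0, mut) for x, y in zip(a, b)]
            children.append([min(max(v, lo), hi)
                             for v, (lo, hi) in zip(child, bounds)])
        pop = elite + children
    return min(pop, key=cost)

# Toy cost: distance of (Kp, Ki, Kd) from a hypothetical optimum.
best = ga_minimize(lambda g: sum((x - t) ** 2
                                 for x, t in zip(g, (2.0, 0.5, 0.1))),
                   bounds=[(0, 5), (0, 2), (0, 1)])
```

Simulated annealing would replace the population loop with a single candidate perturbed and accepted probabilistically; the paper's comparison is between exactly these two search strategies.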
An alternative approach to characterize nonlinear site effects
Zhang, R.R.; Hartzell, S.; Liang, J.; Hu, Y.
2005-01-01
This paper examines the rationale of a method of nonstationary processing and analysis, referred to as the Hilbert-Huang transform (HHT), for its application to a recording-based approach to quantifying the influence of soil nonlinearity on site response. In particular, this paper first summarizes symptoms of soil nonlinearity shown in earthquake recordings, reviews the Fourier-based approach to characterizing nonlinearity, and offers justifications for the HHT in addressing nonlinearity issues. This study then uses the HHT method to analyze synthetic data and recordings from the 1964 Niigata and 2001 Nisqually earthquakes. In doing so, the HHT-based site response is defined as the ratio of marginal Hilbert amplitude spectra, as an alternative to the Fourier-based response, which is the ratio of Fourier amplitude spectra. With the Fourier-based approach to site response as a reference, this study shows that the alternative HHT-based approach is effective in characterizing soil nonlinearity and nonlinear site response.
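The Fourier-based reference approach mentioned above, a site response defined as the ratio of amplitude spectra of a surface recording to a reference recording, can be sketched directly. The HHT variant replaces these spectra with marginal Hilbert spectra obtained after empirical mode decomposition, which needs considerably more machinery; the sketch below covers only the Fourier baseline, with a synthetic signal invented for the example.

```python
import cmath
import math

def amplitude_spectrum(x):
    """Naive DFT amplitude spectrum (O(N^2)); fine for short illustrative records."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
            for k in range(n // 2)]

def fourier_site_response(surface, reference, eps=1e-12):
    """Fourier-based site response: ratio of surface to reference amplitude spectra."""
    s, r = amplitude_spectrum(surface), amplitude_spectrum(reference)
    return [si / (ri + eps) for si, ri in zip(s, r)]

# Synthetic example: the 'site' doubles the amplitude of a 5-cycle sine record.
n = 64
ref = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
surf = [2.0 * v for v in ref]
ratio = fourier_site_response(surf, ref)
```

At the excited frequency bin the ratio recovers the amplification factor of 2; bins with negligible reference energy are not meaningful, which is one practical motivation for alternative spectral definitions.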
Dias, V M C; Cardoso, A S B
2006-05-01
Reference methods for determining lead in food are usually time-consuming. This paper reports a straightforward procedure using electrothermal atomic absorption spectrometry (ETAAS) to determine lead (Pb) in fat-free sweets. Several chemical modifiers were examined, and the results showed that sample digestion is unnecessary when a rhodium (Rh) modifier is used. The samples were dissolved in nitric acid and the determination of Pb was performed by ETAAS, using the Rh chemical modifier at a pyrolysis temperature of 900 degrees C and an atomization temperature of 1,500 degrees C. No ashing step was employed, and aqueous standards were used in the range 2-10 microg l(-1). The limit of quantification was 0.095 mg kg(-1), and the accuracy of the method was verified by analysing certified reference materials.
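Calibration against aqueous standards, as described above, amounts to fitting a line through standard concentrations versus instrument signal and inverting it for unknowns. A minimal sketch follows; the absorbance readings are hypothetical and not from the paper.

```python
from statistics import mean

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    mx, my = mean(x), mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical absorbance readings for aqueous Pb standards spanning 2-10 ug/l
conc = [2.0, 4.0, 6.0, 8.0, 10.0]
absorbance = [0.021, 0.041, 0.060, 0.082, 0.100]
slope, intercept = linear_fit(conc, absorbance)

def concentration(signal):
    """Invert the calibration line to read a concentration back from a signal."""
    return (signal - intercept) / slope
```

With a real instrument the same fit would be checked for linearity over the working range before quantifying samples near the limit of quantification.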
Determination of the diffusion coefficient and solubility of radon in plastics.
Pressyanov, D; Georgiev, S; Dimitrova, I; Mitev, K; Boshkova, T
2011-05-01
This paper describes a method for determining the diffusion coefficient and the solubility of radon in plastics. The method is based on the absorption and desorption of radon in plastics. First, plastic specimens are exposed for a controlled time to reference (222)Rn concentrations. After exposure, the activity of the specimens is followed by HPGe gamma spectrometry. Using the mathematical algorithm described in this report and the decrease of activity as a function of time, the diffusion coefficient can be determined. In addition, if the reference (222)Rn concentration during the exposure is known, the solubility of radon can be determined. The algorithm has been applied experimentally to different plastics. The results show that this approach allows the specified quantities to be determined with rather high accuracy; depending on the quality of the counting equipment, it can be better than 10%.
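The principle of recovering a diffusion coefficient from the post-exposure activity decline can be illustrated with a textbook model. The sketch below uses Crank's series for desorption from a uniformly loaded slab, combined with (222)Rn decay, and fits the diffusion coefficient by grid search over synthetic data; this is an illustrative stand-in, not the paper's exact algorithm, and the geometry and parameter values are invented.

```python
import math

RADON_LAMBDA = math.log(2) / (3.82 * 24 * 3600)  # 222Rn decay constant, s^-1

def remaining_fraction(t, D, L, terms=200):
    """Fraction of absorbed 222Rn remaining in a plastic slab of half-thickness L
    after desorbing for time t: Crank's series for a uniformly loaded slab,
    multiplied by radioactive decay."""
    series = sum(8.0 / ((2 * n + 1) ** 2 * math.pi ** 2)
                 * math.exp(-D * (2 * n + 1) ** 2 * math.pi ** 2 * t / (4 * L ** 2))
                 for n in range(terms))
    return math.exp(-RADON_LAMBDA * t) * series

def fit_diffusion(times, fractions, candidates, L):
    """Grid-search the diffusion coefficient best matching the measured decline."""
    return min(candidates,
               key=lambda D: sum((remaining_fraction(t, D, L) - f) ** 2
                                 for t, f in zip(times, fractions)))

# Synthetic 'measurement' generated with a known D, then recovered by the fit.
true_D, L = 1e-12, 1e-3                       # m^2/s; 2 mm thick foil
times = [3600 * k for k in range(0, 48, 6)]   # hourly gamma-counting points
data = [remaining_fraction(t, true_D, L) for t in times]
candidates = [d * 1e-13 for d in range(1, 51)]
est = fit_diffusion(times, data, candidates, L)
```

In practice the fit would be weighted by counting statistics, and the known exposure concentration would then fix the solubility from the absolute activity.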
Methylphenidate use in children with attention deficit hyperactivity disorder
Machado, Felipe Salles Neves; Caetano, Sheila Cavalcante; Hounie, Ana Gabriela; Scivoletto, Sandra; Muszkat, Mauro; Gattás, Ivete Gianfaldoni; Casella, Erasmo Barbante; de Andrade, Ênio Roberto; Polanczyk, Guilherme Vanoni; do Rosário, Maria Conceição
2015-01-01
A Brazilian Health Technology Assessment Bulletin (BRATS) article regarding scientific evidence of the efficacy and safety of methylphenidate for treating attention deficit hyperactivity disorder (ADHD) has caused much controversy about its methods. Considering the relevance of BRATS for public health in Brazil, we critically reviewed this article by repeating the BRATS search and discussing its methods and results. Two questions were addressed: did BRATS include all references available in the literature, and do its conclusions reflect the reviewed articles? The results indicate that BRATS did not include all the references from the literature on this subject and that the proposed conclusions differ from the results of the articles chosen by the BRATS authors themselves. The articles selected by the BRATS authors showed that methylphenidate use is safe and effective. However, the final BRATS conclusion does not reflect these findings and should not be used to support decisions on the use of methylphenidate. PMID:26061456
Delaneau, Olivier; Marchini, Jonathan
2014-06-13
A major use of the 1000 Genomes Project (1000 GP) data is genotype imputation in genome-wide association studies (GWAS). Here we develop a method to estimate haplotypes from low-coverage sequencing data that can take advantage of single-nucleotide polymorphism (SNP) microarray genotypes on the same samples. First the SNP array data are phased to build a backbone (or 'scaffold') of haplotypes across each chromosome. We then phase the sequence data 'onto' this haplotype scaffold. This approach can take advantage of relatedness between sequenced and non-sequenced samples to improve accuracy. We use this method to create a new 1000 GP haplotype reference set for use by the human genetic community. Using a set of validation genotypes at SNPs and bi-allelic indels, we show that these haplotypes have lower genotype discordance and improved imputation performance into downstream GWAS samples, especially at low-frequency variants.
NASA Astrophysics Data System (ADS)
Martinez Rivera, Francisco Javier
This research is aimed at investigating the corrosion durability of polyolefin fiber-reinforced fly ash-based geopolymer structural concrete (hereafter referred to as GPC, in contradistinction to unreinforced geopolymer concrete), in which cement is completely replaced by fly ash activated by alkalis (sodium hydroxide and sodium silicate). Durability in a marine environment is tested through an electrochemical method for accelerated corrosion. The GPC achieved compressive strengths in excess of 6,000 psi. Fiber-reinforced beams contained polyolefin fibers in amounts of 0.1%, 0.3%, and 0.5% by volume. After being subjected to corrosion damage, the GPC beams were analyzed by crack scoring, steel mass loss, and residual flexural strength testing. Fiber-reinforced GPC beams showed greater resistance to corrosion damage, with higher residual flexural strength. This makes GPC an attractive material for use in submerged marine structures.
[Studies on HPLC fingerprint chromatogram of Folium Fici Microcarpa].
Fang, Zhi-Jian; Dai, Zhen; Li, Shu-Yuan
2008-10-01
To establish a sensitive and specific method for quality control of Folium Fici Microcarpa, an HPLC method was applied to study its fingerprint chromatogram. Isovitexin was used as the reference substance to evaluate the chromatograms of 10 samples from different regions and 12 samples collected in different months. The results revealed that all chromatographic peaks were separated efficiently, and 17 common peaks were found in the fingerprint chromatogram. This characteristic and specific fingerprint chromatogram method can be used to assess quality and to evaluate the different origins and collection periods of Folium Fici Microcarpa.
Xu, Jing; Liu, Xiaofei; Wang, Yutian
2016-08-05
Parallel factor analysis is a widely used method to extract qualitative and quantitative information about an analyte of interest from a fluorescence excitation-emission matrix containing unknown components. Large-amplitude scattering influences the results of parallel factor analysis. Many methods of eliminating scattering have been proposed, each with its advantages and disadvantages. The combination of symmetrical subtraction and interpolated values is discussed; the combination refers to both the combination of results and the combination of methods. Nine methods were used for comparison. The results show that the combination of results yields better concentration predictions for all the components. Copyright © 2016 Elsevier B.V. All rights reserved.
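The "interpolated values" idea mentioned above replaces the cells of the excitation-emission matrix (EEM) that lie in a scatter band (emission close to excitation wavelength) with values interpolated from neighbouring cells, so that the scatter ridge does not distort the trilinear PARAFAC model. A minimal sketch follows; the band width and the toy EEM are invented for the example.

```python
def remove_scatter_by_interpolation(eem, ex_wavelengths, em_wavelengths, width=10.0):
    """Replace first-order Rayleigh scatter (em within `width` nm of ex) in an EEM
    with values linearly interpolated along each emission row. Illustrative sketch
    of the 'interpolated values' approach; the band width is an assumption."""
    cleaned = [row[:] for row in eem]
    for i, ex in enumerate(ex_wavelengths):
        band = [j for j, em in enumerate(em_wavelengths) if abs(em - ex) <= width]
        if not band:
            continue
        lo, hi = band[0] - 1, band[-1] + 1        # nearest cells outside the band
        for j in band:
            if lo < 0 and hi < len(em_wavelengths):
                cleaned[i][j] = eem[i][hi]        # band touches the left edge
            elif hi >= len(em_wavelengths) and lo >= 0:
                cleaned[i][j] = eem[i][lo]        # band touches the right edge
            elif lo < 0 and hi >= len(em_wavelengths):
                cleaned[i][j] = 0.0               # band spans the whole row
            else:
                frac = (j - lo) / (hi - lo)       # linear interpolation
                cleaned[i][j] = eem[i][lo] * (1 - frac) + eem[i][hi] * frac
    return cleaned

# Toy EEM: flat fluorescence of 1.0 plus a scatter ridge of 100.0 where em == ex
ex = [300.0, 310.0, 320.0]
em = [280.0 + 5 * k for k in range(13)]          # 280..340 nm
eem = [[1.0 for _ in em] for _ in ex]
for i, e in enumerate(ex):
    eem[i][em.index(e)] = 100.0
clean = remove_scatter_by_interpolation(eem, ex, em)
```

In the symmetrical-subtraction variant the scatter region is instead estimated from the mirrored region of the matrix and subtracted; combining the two results is the paper's subject.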
Krogh, Magnus Reinsfelt; Nghiem, Giang M; Halvorsen, Per Steinar; Elle, Ole Jakob; Grymyr, Ole-Johannes; Hoff, Lars; Remme, Espen W
2017-05-01
A miniaturized accelerometer fixed to the heart can be used for monitoring of cardiac function. However, an accelerometer cannot differentiate between acceleration caused by motion and acceleration due to gravity. The accuracy of motion measurements is therefore dependent on how well the gravity component can be estimated and filtered from the measured signal. In this study we propose a new method for estimating the gravity component, based on strapdown inertial navigation, using a combined accelerometer and gyro. The gyro was used to estimate the orientation of the gravity field so that it could be removed. We compared this method with two previously proposed gravity filtering methods in three experimental models: (1) in silico computer-simulated heart motion; (2) robot-mimicked heart motion; and (3) in vivo measured motion of the heart in an animal model. The new method correlated excellently with the reference (r² > 0.93) and deviated from the reference peak systolic displacement (6.3 ± 3.9 mm) by less than 0.2 ± 0.5 mm in the robot experiment. The new method performed significantly better than the two previously proposed methods (p < 0.001). The results show that the proposed gyro-based method can measure cardiac motion with high accuracy and performs better than existing methods for filtering the gravity component from the accelerometer signal.
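The strapdown principle behind the proposed method, integrating the gyro rate to track sensor orientation and subtracting the gravity projection from the accelerometer signal, can be shown in a simplified planar (single-axis) form. The full method tracks 3-D orientation; this 2-D sketch and its tilt profile are invented for illustration.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def remove_gravity(acc_meas, gyro_rate, dt, theta0=0.0):
    """Strapdown-style sketch for one sensing axis tilting in a vertical plane:
    integrate the gyro rate to track orientation, project gravity onto the axis,
    and subtract it from the accelerometer signal."""
    theta, motion = theta0, []
    for a, w in zip(acc_meas, gyro_rate):
        theta += w * dt                          # integrate angular rate
        motion.append(a - G * math.sin(theta))   # remove projected gravity
    return motion

# Synthetic check: a sensor that only tilts (no translation) should yield ~zero motion
dt, n = 0.001, 1000
rate = [2.0 * math.cos(2.0 * math.pi * k * dt) for k in range(n)]  # tilt rate, rad/s
theta, acc = 0.0, []
for w in rate:
    theta += w * dt
    acc.append(G * math.sin(theta))              # accelerometer sees only gravity
motion = remove_gravity(acc, rate, dt)
```

With real sensors, gyro bias drift accumulates in the integrated orientation, which is why the accuracy of the gravity estimate, and hence of the motion estimate, degrades over longer windows.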
Céspedes, Nora; Valencia, Angela; Echeverry, Carlos Alberto; Arce-Plata, Maria Isabel; Colón, Cristóbal; Castiñeiras, Daisy E; Hurtado, Paula Margarita; Cocho, Jose Angel; Herrera, Sócrates
2017-01-01
Abstract Introduction: Inborn errors of metabolism (IEM) represent an important public health problem due to current diagnosis and treatment limitations, poor life quality of affected patients, and consequent untimely child death. In contrast to classical methods, tandem mass spectrometry (MS/MS) has allowed simultaneous evaluation of multiple metabolites associated with IEM offering higher sensitivity, low false positive rates and high throughput. Aims: Determine concentration levels for amino acids and acylcarnitines in blood of newborns from Colombia, to establish reference values for further use in diagnosis of IEM. Methods: Implementation of a method to determine amino acids, acylcarnitines and succinylacetone in newborn dried blood spots using MS/MS, and its application in a cross-sectional study conducted in 891 healthy neonates from Cali and Quibdo cities is described. Results: fifty-seven analytes that allow the diagnosis of more than 40 different pathologies were tested. The method showed to be linear, precise and accurate. Healthy neonates 1-18 days of age were included, 523 from Cali and 368 from Quibdo; 52% male and 48% female. Age-related differences on the concentration levels of amino acids and acylcarnitines were observed whereas no significant differences by gender were found. Conclusion: The study has contributed to reveal the usual concentration levels of amino acids, acylcarnitines and succinylacetone that could be used as reference for the establishment of a newborn metabolic screening program in Colombia. PMID:29213153
NASA Astrophysics Data System (ADS)
Aschonitis, Vassilis G.; Papamichail, Dimitris; Demertzi, Kleoniki; Colombani, Nicolo; Mastrocicco, Micol; Ghirardini, Andrea; Castaldelli, Giuseppe; Fano, Elisa-Anna
2017-08-01
The objective of the study is to provide global grids (0.5°) of revised annual coefficients for the Priestley-Taylor (P-T) and Hargreaves-Samani (H-S) evapotranspiration methods after calibration based on the ASCE (American Society of Civil Engineers)-standardized Penman-Monteith method (the ASCE method includes two reference crops: short-clipped grass and tall alfalfa). The analysis also includes the development of a global grid of revised annual coefficients for solar radiation (Rs) estimations using the respective Rs formula of H-S. The analysis was based on global gridded climatic data of the period 1950-2000. The method for deriving annual coefficients of the P-T and H-S methods was based on partial weighted averages (PWAs) of their mean monthly values. This method estimates the annual values considering the amplitude of the parameter under investigation (ETo and Rs) giving more weight to the monthly coefficients of the months with higher ETo values (or Rs values for the case of the H-S radiation formula). The method also eliminates the effect of unreasonably high or low monthly coefficients that may occur during periods where ETo and Rs fall below a specific threshold. The new coefficients were validated based on data from 140 stations located in various climatic zones of the USA and Australia with expanded observations up to 2016. The validation procedure for ETo estimations of the short reference crop showed that the P-T and H-S methods with the new revised coefficients outperformed the standard methods reducing the estimated root mean square error (RMSE) in ETo values by 40 and 25 %, respectively. The estimations of Rs using the H-S formula with revised coefficients reduced the RMSE by 28 % in comparison to the standard H-S formula. 
Finally, a raster database was built consisting of (a) global maps for the mean monthly ETo values estimated by ASCE-standardized method for both reference crops, (b) global maps for the revised annual coefficients of the P-T and H-S evapotranspiration methods for both reference crops and a global map for the revised annual coefficient of the H-S radiation formula and (c) global maps that indicate the optimum locations for using the standard P-T and H-S methods and their possible annual errors based on reference values. The database can support estimations of ETo and solar radiation for locations where climatic data are limited and it can support studies which require such estimations on larger scales (e.g. country, continent, world). The datasets produced in this study are archived in the PANGAEA database (https://doi.org/10.1594/PANGAEA.868808) and in the ESRN database (http://www.esrn-database.org or http://esrn-database.weebly.com).
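The Hargreaves-Samani formulation whose annual coefficient the study revises is compact enough to sketch. The standard form below uses the usual 0.0023 coefficient with extraterrestrial radiation expressed in equivalent evaporation (mm/day); the revised gridded coefficient simply replaces that constant, and the example inputs are invented.

```python
import math

def hargreaves_samani_eto(t_mean, t_max, t_min, ra, c=0.0023):
    """Hargreaves-Samani reference evapotranspiration (mm/day).
    t_mean, t_max, t_min: air temperatures in deg C; ra: extraterrestrial
    radiation in equivalent evaporation (mm/day); c: the coefficient the study
    calibrates per grid cell (0.0023 is the standard value)."""
    return c * (t_mean + 17.8) * math.sqrt(max(t_max - t_min, 0.0)) * ra

# Example: a warm summer day; a revised coefficient simply rescales the estimate
standard = hargreaves_samani_eto(25.0, 32.0, 18.0, 16.0)
revised = hargreaves_samani_eto(25.0, 32.0, 18.0, 16.0, c=0.0021)
```

Because ETo is linear in the coefficient, calibrating c against the ASCE-standardized Penman-Monteith values (as the study does, with partial weighted averaging over months) is a well-posed rescaling per location.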
Thermodynamically constrained correction to ab initio equations of state
DOE Office of Scientific and Technical Information (OSTI.GOV)
French, Martin; Mattsson, Thomas R.
2014-07-07
We show how equations of state generated by density functional theory methods can be augmented to match experimental data without distorting the correct behavior in the high- and low-density limits. The technique is thermodynamically consistent and relies on knowledge of the density and bulk modulus at a reference state and an estimation of the critical density of the liquid phase. We apply the method to four materials representing different classes of solids: carbon, molybdenum, lithium, and lithium fluoride. It is demonstrated that the corrected equations of state for both the liquid and solid phases show a significantly reduced dependence on the exchange-correlation functional used.
Juck, Gregory; Gonzalez, Verapaz; Allen, Ann-Christine Olsson; Sutzko, Meredith; Seward, Kody; Muldoon, Mark T
2018-04-27
The Romer Labs RapidChek ® Listeria monocytogenes test system (Performance Tested Method ℠ 011805) was validated against the U.S. Department of Agriculture-Food Safety and Inspection Service Microbiology Laboratory Guidebook (USDA-FSIS/MLG), U.S. Food and Drug Administration Bacteriological Analytical Manual (FDA/BAM), and AOAC Official Methods of Analysis ℠ (AOAC/OMA) cultural reference methods for the detection of L. monocytogenes on selected foods including hot dogs, frozen cooked breaded chicken, frozen cooked shrimp, cured ham, and ice cream, and environmental surfaces including stainless steel and plastic in an unpaired study design. The RapidChek method uses a proprietary enrichment media system, a 44-48 h enrichment at 30 ± 1°C, and detects L. monocytogenes on an immunochromatographic lateral flow device within 10 min. Different L. monocytogenes strains were used to spike each of the matrices. Samples were confirmed based on the reference method confirmations and an alternate confirmation method. A total of 140 low-level spiked samples were tested by the RapidChek method after enrichment for 44-48 h in parallel with the cultural reference method. There were 88 RapidChek presumptive positives. One of the presumptive positives was not confirmed culturally. Additionally, one of the culturally confirmed samples did not exhibit a presumptive positive. No difference between the alternate confirmation method and reference confirmation method was observed. The respective cultural reference methods (USDA-FSIS/MLG, FDA/BAM, and AOAC/OMA) produced a total of 63 confirmed positive results. Nonspiked samples from all foods were reported as negative for L. monocytogenes by all methods. Probability of detection analysis demonstrated no significant differences in the number of positive samples detected by the RapidChek method and the respective cultural reference method.
Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S
2017-12-01
To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared using simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed best in most scenarios. The hierarchy of the performances of the three methods was affected only by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
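The parametric and non-parametric estimators compared in the study can be sketched directly: the parametric 95% reference interval is mean ± 1.96 SD (valid under a Gaussian assumption), while the non-parametric interval takes the empirical 2.5th and 97.5th percentiles. The sample data below are simulated; the percentile convention is one of several in use.

```python
import random
import statistics

def parametric_interval(values):
    """Parametric 95% reference interval: mean +/- 1.96 SD (assumes Gaussian data)."""
    m, s = statistics.mean(values), statistics.stdev(values)
    return m - 1.96 * s, m + 1.96 * s

def nonparametric_interval(values):
    """Non-parametric interval: empirical 2.5th and 97.5th percentiles."""
    v = sorted(values)
    def pct(p):
        idx = p * (len(v) - 1)               # linear-interpolation percentile
        lo = int(idx)
        hi = min(lo + 1, len(v) - 1)
        return v[lo] + (idx - lo) * (v[hi] - v[lo])
    return pct(0.025), pct(0.975)

# Simulated Gaussian analyte: mean 100, SD 10 -> both intervals near (80.4, 119.6)
rng = random.Random(42)
sample = [rng.gauss(100.0, 10.0) for _ in range(2000)]
p_lo, p_hi = parametric_interval(sample)
np_lo, np_hi = nonparametric_interval(sample)
```

On skewed data the two estimators diverge, which is exactly the behavior the simulation study quantifies.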
Wolde, Mistire; Tarekegn, Getahun; Kebede, Tedla
2018-05-01
Point-of-care glucometer (PoCG) devices play a significant role in self-monitoring of the blood sugar level, particularly in the follow-up of high blood sugar therapeutic response. The aim of this study was to evaluate blood glucose test results from four randomly selected glucometers on subjects with diabetes and control subjects against the standard wet chemistry (hexokinase) method in Addis Ababa, Ethiopia. A prospective cross-sectional study was conducted on 200 randomly selected study participants (100 participants with diabetes and 100 healthy controls). Four randomly selected PoCG devices (CareSens N, DIAVUE Prudential, On Call Extra, i-QARE DS-W) were evaluated against the hexokinase method and the ISO 15197:2003 and ISO 15197:2013 standards. The minimum and maximum blood sugar values were recorded by CareSens N (21 mg/dl) and the hexokinase method (498.8 mg/dl), respectively. The mean sugar values of all PoCG devices except On Call Extra showed significant differences compared with the reference hexokinase method. Meanwhile, all four PoCG devices had a strong positive correlation (>80%) with the reference method (hexokinase). On the other hand, none of the four PoCG devices fulfilled the minimum accuracy requirements set by the ISO 15197:2003 and ISO 15197:2013 standards. In addition, linear regression analysis revealed that all four selected PoCG devices overestimated the glucose concentrations. Overall, the measurements of the four selected PoCG devices correlated poorly with the standard reference method. Therefore, before introducing PoCG devices to the market, there should be a standardized evaluation platform for validation. Further similar large-scale studies on other PoCG devices also need to be undertaken.
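The ISO 15197:2013 system-accuracy check used above can be sketched as a simple pass-rate computation. To my understanding the 2013 limits are ±15 mg/dl for reference values below 100 mg/dl and ±15% at or above 100 mg/dl, with at least 95% of results required inside the limits; consult the standard itself for the full protocol (the paired readings below are invented).

```python
def iso15197_2013_pass_rate(meter, reference):
    """Fraction of meter readings within the ISO 15197:2013 accuracy limits:
    +/-15 mg/dl below 100 mg/dl of reference, +/-15% at or above 100 mg/dl.
    The standard requires >= 95% of results inside these limits (sketch only)."""
    ok = 0
    for m, r in zip(meter, reference):
        limit = 15.0 if r < 100.0 else 0.15 * r
        if abs(m - r) <= limit:
            ok += 1
    return ok / len(meter)

# Hypothetical paired readings (mg/dl): one high reading misses its 15% limit
reference = [80.0, 95.0, 120.0, 200.0, 300.0]
meter = [90.0, 100.0, 130.0, 240.0, 310.0]
rate = iso15197_2013_pass_rate(meter, reference)
```

A device with a pass rate below 0.95, like this toy example, would fail the criterion, which mirrors the study's finding for all four evaluated devices.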
NASA Astrophysics Data System (ADS)
Hansen, Andreas; Liakos, Dimitrios G.; Neese, Frank
2011-12-01
A production level implementation of the high-spin open-shell (spin unrestricted) single reference coupled pair, quadratic configuration interaction and coupled cluster methods with up to doubly excited determinants in the framework of the local pair natural orbital (LPNO) concept is reported. This work is an extension of the closed-shell LPNO methods developed earlier [F. Neese, F. Wennmohs, and A. Hansen, J. Chem. Phys. 130, 114108 (2009), 10.1063/1.3086717; F. Neese, A. Hansen, and D. G. Liakos, J. Chem. Phys. 131, 064103 (2009), 10.1063/1.3173827]. The internal space is spanned by localized orbitals, while the external space for each electron pair is represented by a truncated PNO expansion. The laborious integral transformation associated with the large number of PNOs becomes feasible through the extensive use of density fitting (resolution of the identity (RI)) techniques. Technical complications arising for the open-shell case and the use of quasi-restricted orbitals for the construction of the reference determinant are discussed in detail. As in the closed-shell case, only three cutoff parameters control the average number of PNOs per electron pair, the size of the significant pair list, and the number of contributing auxiliary basis functions per PNO. The chosen threshold default values ensure robustness and the results of the parent canonical methods are reproduced to high accuracy. Comprehensive numerical tests on absolute and relative energies as well as timings consistently show that the outstanding performance of the LPNO methods carries over to the open-shell case with minor modifications. Finally, hyperfine couplings calculated with the variational LPNO-CEPA/1 method, for which a well-defined expectation value type density exists, indicate the great potential of the LPNO approach for the efficient calculation of molecular properties.
Integrative missing value estimation for microarray data.
Hu, Jianjun; Li, Haifeng; Waterman, Michael S; Zhou, Xianghong Jasmine
2006-10-12
Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in the Stanford Microarray Database contain fewer than eight samples. We present the integrative Missing Value Estimation method (iMISS), which incorporates information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking the reference data sets into consideration. To determine whether the given reference data sets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Square (LLS) imputation algorithm by up to 15% in our benchmark tests. We demonstrated that the order-statistics-based integrative imputation algorithms can achieve significant improvements over state-of-the-art missing value estimation approaches such as LLS and are especially good for imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.
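The neighbor-gene idea underlying iMISS and LLS-style imputation, fill a gene's missing entries from the genes whose expression profiles correlate best with it, can be reduced to a toy averaging scheme. This is a deliberately simplified sketch, not the iMISS algorithm (which uses least-squares combinations and multiple reference datasets); all data are invented.

```python
from statistics import mean

def correlation(a, b):
    """Pearson correlation over positions where both profiles are observed."""
    pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    if len(pairs) < 2:
        return 0.0
    xs, ys = [p[0] for p in pairs], [p[1] for p in pairs]
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def impute(target, candidates, k=2):
    """Fill missing entries (None) in `target` with the average of the k most
    correlated candidate profiles -- a toy version of neighbor-gene imputation."""
    neighbors = sorted(candidates, key=lambda c: -abs(correlation(target, c)))[:k]
    filled = list(target)
    for i, v in enumerate(filled):
        if v is None:
            obs = [c[i] for c in neighbors if c[i] is not None]
            filled[i] = mean(obs) if obs else 0.0
    return filled

target = [1.0, 2.0, None, 4.0]
candidates = [[1.1, 2.1, 3.1, 4.1],    # highly correlated neighbor
              [0.9, 1.9, 2.9, 3.9],    # highly correlated neighbor
              [5.0, 1.0, 7.0, 2.0]]    # unrelated profile
filled = impute(target, candidates)
```

Real methods replace the plain average with a least-squares regression on the neighbors, and iMISS additionally requires the neighbor list to be consistent across reference datasets.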
NASA Astrophysics Data System (ADS)
Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei
This study analyzes absolute stability in P and PD type fuzzy logic control systems with both certain and uncertain linear plants. The stability analysis covers the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibria of the error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady-state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability, based on Lur'e systems, is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. Unlike previous works, our analysis addresses the absolute stability of fuzzy control systems with respect to a non-zero reference input and an uncertain linear plant using the parametric robust Popov criterion. Moreover, a fuzzy current-controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is examined from the viewpoint of various equilibrium points in the simulation example. Finally, comparisons are given to show the effectiveness of the analysis method.
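For reference, the classical Popov criterion invoked above can be stated as follows. For a Lur'e system with linear part $G(s)$ in negative feedback with a memoryless sector nonlinearity $\varphi$ satisfying $0 \le \varphi(e)\,e \le k e^2$, absolute stability of the origin is guaranteed if

```latex
\exists\, q \ge 0 \ \text{such that}\quad
\operatorname{Re}\!\bigl[(1 + j q \omega)\, G(j\omega)\bigr] + \frac{1}{k} > 0
\qquad \forall\, \omega \ge 0 .
```

This is the standard single-input statement; the paper applies a parametric robust variant of it that additionally accounts for interval plant parameters and a non-zero reference input.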
The Stayhealthy bioelectrical impedance analyzer predicts body fat in children and adults.
Erceg, David N; Dieli-Conwright, Christina M; Rossuello, Amerigo E; Jensky, Nicole E; Sun, Stephanie; Schroeder, E Todd
2010-05-01
Bioelectrical impedance analysis (BIA) is a time-efficient and cost-effective method for estimating body composition. We hypothesized that there would be no significant difference between the Stayhealthy BC1 BIA and the selected reference methods when determining body composition. Thus, the purpose of the present study was to determine the validity of estimating percent body fat (%BF) using the Stayhealthy BIA with its most recently updated algorithms, compared to the reference methods of dual-energy x-ray absorptiometry for adults and hydrostatic weighing for children. We measured %BF in 245 adults aged 18 to 80 years and 115 children aged 10 to 17 years. Body fat by BIA was determined using a single 50 kHz frequency handheld impedance device and proprietary software. Agreement between BIA and the reference methods was assessed by Bland and Altman plots. Bland and Altman analysis for men, women, and children revealed good agreement between the reference methods and BIA. There was no significant difference by t tests between mean %BF by BIA for men, women, or children when compared to the respective reference method. Significant correlation values between BIA and the reference methods for men, women, and children were 0.85, 0.88, and 0.79, respectively. Reliability (test-retest) was assessed by intraclass correlation coefficient and coefficient of variation. Intraclass correlation coefficient values were greater than 0.99 (P < .001) for men, women, and children, with coefficient of variation values of 3.3%, 1.8%, and 1.7%, respectively. The Stayhealthy BIA device demonstrated good agreement with the reference methods in Bland and Altman analyses. Copyright 2010 Elsevier Inc. All rights reserved.
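The Bland and Altman analysis used above reduces to two statistics on the paired differences: the mean bias and the 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch follows; the %BF readings are hypothetical, not the study's data.

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics: mean bias and 95% limits of agreement
    (bias +/- 1.96 SD of the paired differences)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical %BF readings: BIA versus a reference method on six subjects
bia = [22.1, 30.4, 18.9, 25.2, 33.0, 27.5]
ref = [21.5, 31.0, 19.5, 24.8, 32.1, 28.0]
bias, (loa_lo, loa_hi) = bland_altman(bia, ref)
```

Good agreement means a bias near zero and limits of agreement narrow enough to be clinically acceptable; the full plot additionally charts each difference against the pair mean to reveal proportional bias.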
Kuiper, Gerhardus J A J M; Houben, Rik; Wetzels, Rick J H; Verhezen, Paul W M; Oerle, Rene van; Ten Cate, Hugo; Henskens, Yvonne M C; Lancé, Marcus D
2017-11-01
Low platelet counts and hematocrit levels hinder whole blood point-of-care testing of platelet function. Thus far, no reference ranges for MEA (multiple electrode aggregometry) and PFA-100 (platelet function analyzer 100) devices exist for low ranges. Through dilution methods of volunteer whole blood, platelet function at low ranges of platelet count and hematocrit levels was assessed on MEA for four agonists and for PFA-100 in two cartridges. Using (multiple) regression analysis, 95% reference intervals were computed for these low ranges. Low platelet counts affected MEA in a positive correlation (all agonists showed r² ≥ 0.75) and PFA-100 in an inverse correlation (closure times were prolonged with lower platelet counts). Lowered hematocrit did not affect MEA testing, except for arachidonic acid activation (ASPI), which showed a weak positive correlation (r² = 0.14). Closure time on PFA-100 testing was inversely correlated with hematocrit for both cartridges. Regression analysis revealed different 95% reference intervals in comparison with originally established intervals for both MEA and PFA-100 in low platelet or hematocrit conditions. Multiple regression analysis of ASPI and both tests on the PFA-100 for combined low platelet and hematocrit conditions revealed that only PFA-100 testing should be adjusted for both thrombocytopenia and anemia. 95% reference intervals were calculated using multiple regression analysis. However, coefficients of determination of PFA-100 were poor, and some variance remained unexplained. Thus, in this pilot study using (multiple) regression analysis, we could establish reference intervals of platelet function in anemia and thrombocytopenia conditions on PFA-100 and in thrombocytopenia conditions on MEA.
Testing an automated method to estimate ground-water recharge from streamflow records
Rutledge, A.T.; Daniel, C.C.
1994-01-01
The computer program, RORA, allows automated analysis of streamflow hydrographs to estimate ground-water recharge. Output from the program, which is based on the recession-curve-displacement method (often referred to as the Rorabaugh method, for whom the program is named), was compared to estimates of recharge obtained from a manual analysis of 156 years of streamflow record from 15 streamflow-gaging stations in the eastern United States. Statistical tests showed that there was no significant difference between paired estimates of annual recharge by the two methods. Tests of results produced by the four workers who performed the manual method showed that results can differ significantly between workers. Twenty-two percent of the variation between manual and automated estimates could be attributed to having different workers perform the manual method. The program RORA will produce estimates of recharge equivalent to estimates produced manually, greatly increase the speed of analysis, and reduce the subjectivity inherent in manual analysis.
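For context, the recession-curve-displacement estimate that RORA automates is commonly written (in USGS documentation of the Rorabaugh method) as

```latex
R \;=\; \frac{2\,(Q_2 - Q_1)\,K}{2.3026}\,,
```

where $Q_1$ and $Q_2$ are the ground-water discharges extrapolated from the pre- and post-peak recessions to the "critical time" after the peak, and $K$ is the recession index (time per log cycle of streamflow decline). This is the commonly cited form of the equation; the program documentation should be consulted for the exact conventions and units used by RORA.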
NASA Astrophysics Data System (ADS)
Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai
2015-09-01
Integration time and reference intensity are important factors for achieving high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method for the reference intensity in an OCT setup. The reference intensity is automatically controlled by tilting the beam position using a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map with normalized intensity and variables in color spaces using false-color mapping. Then, the system increases or decreases the reference intensity according to the map data, following a given optimization algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining the spectral shape, enabled changing the integration time without manual recalibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. Also, SNR and sensitivity could be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that our findings can significantly aid in the optimization of SNR and sensitivity for optical coherence tomography systems.
An Improved Calibration Method for Hydrazine Monitors for the United States Air Force
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korsah, K
2003-07-07
This report documents the results of Phase 1 of the "Air Force Hydrazine Detector Characterization and Calibration Project". A method for calibrating model MDA 7100 hydrazine detectors in the United States Air Force (AF) inventory has been developed. The calibration system consists of a Kintek 491 reference gas generation system, a humidifier/mixer system which combines the dry reference hydrazine gas with humidified diluent or carrier gas to generate the required humidified reference for calibrations, and a gas sampling interface. The Kintek reference gas generation system itself is periodically calibrated using an ORNL-constructed coulometric titration system to verify the hydrazine concentration of the sample atmosphere in the interface module. The Kintek reference gas is then used to calibrate the hydrazine monitors. Thus, coulometric titration is only used to periodically assess the performance of the Kintek reference gas generation system, and is not required for hydrazine monitor calibrations. One advantage of using coulometric titration for verifying the concentration of the reference gas is that it is a primary standard (if used for simple solutions), thereby guaranteeing, in principle, that measurements will be traceable to SI units (i.e., to the mole). The effect of humidity of the reference gas was characterized by using the results of concentrations determined by coulometric titration to develop a humidity correction graph for the Kintek 491 reference gas generation system. Using this calibration method, calibration uncertainty has been reduced by 50% compared to the current method used to calibrate hydrazine monitors in the Air Force inventory, and calibration time has also been reduced by more than 20%. Significant findings from studies documented in this report are the following: (1) The Kintek 491 reference gas generation system (generator, humidifier and interface module) can be used to calibrate hydrazine detectors.
(2) The Kintek system output concentration is less than the calculated output of the generator alone but can be calibrated as a system by using coulometric titration of gas samples collected with impingers. (3) The calibrated Kintek system output concentration is reproducible even after having been disassembled and moved and reassembled. (4) The uncertainty of the reference gas concentration generated by the Kintek system is less than half the uncertainty of the Zellweger Analytics' (ZA) reference gas concentration and can be easily lowered to one third or less of the ZA method by using lower-uncertainty flow rate or total flow measuring instruments. (5) The largest sources of uncertainty in the current ORNL calibration system are the permeation rate of the permeation tubes and the flow rate of the impinger sampling pump used to collect gas samples for calibrating the Kintek system. Upgrading the measurement equipment, as stated in (4), can reduce both of these. (6) The coulometric titration technique can be used to periodically assess the performance of the Kintek system and determine a suitable recalibration interval. (7) The Kintek system has been used to calibrate two MDA 7100s and an Interscan 4187 in less than one workday. The system can be upgraded (e.g., by automating it) to provide more calibrations per day. (8) The humidity of both the reference gas and the environment of the Chemcassette affect the MDA 7100 hydrazine detector's readings. However, ORNL believes that the environmental effect is less significant than the effect of the reference gas humidity. (9) The ORNL calibration method based on the Kintek 491 M-B gas standard can correct for the effect of the humidity of the reference gas to produce the same calibration as that of ZA's. Zellweger Analytics calibrations are typically performed at 45%-55% relative humidity. (10) Tests using the Interscan 4187 showed that the instrument was not accurate in its lower (0-100 ppb) range. 
Subsequent discussions with Kennedy Space Center (KSC) personnel also indicated that the Interscan units were not reproducible when new sensors were used. KSC had discovered that the Interscan units read incorrectly on the low range because of the presence of carbon dioxide. ORNL did not test the carbon dioxide effect, but it was found that the units did not read zero when a test gas containing no hydrazine was sampled. According to the KSC personnel involved in these discussions, NASA is phasing out the use of these Interscan detectors.
40 CFR 75.22 - Reference test methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
... appendix A to part 60 of this chapter, except for Methods 2B and 2E, are the reference methods for... provided in appendix A to part 60 of this chapter, except for Methods 2B and 2E, for determining volumetric...
40 CFR 75.22 - Reference test methods.
Code of Federal Regulations, 2013 CFR
2013-07-01
... appendix A to part 60 of this chapter, except for Methods 2B and 2E, are the reference methods for... provided in appendix A to part 60 of this chapter, except for Methods 2B and 2E, for determining volumetric...
Surface Renewal: Micrometeorological Measurements Avoiding the Sonic Anemometer
NASA Astrophysics Data System (ADS)
Suvocarev, K.; Reba, M. L.; Runkle, B.
2016-12-01
Surface renewal (SR) is a micrometeorological technique that has been suggested as an inexpensive alternative to eddy covariance (EC). While it originally depended on a calibration coefficient (α), a recent approach by Castellví (2004) showed that SR can be used as a stand-alone method in which α is estimated using similarity theory. This "self-calibration" method is suitable for measuring different scalar fluxes under all stability conditions (Castellví et al., 2008). According to the same authors, SR does not require a sonic anemometer, as only the horizontal wind speed is necessary to arrive at α values. It is therefore more affordable and applicable in both the roughness and inertial sub-layers, which makes the method subject to less stringent fetch requirements (Castellví, 2012). The SR method has not yet been tested when the equipment is reduced to scalar measurements and a simple anemometer (RM Young 5103 Wind Monitor Sensor). Here, our objective was to test this approach on temperature, H2O, CO2 and CH4 time series. With EC taken as the reference for comparison, our initial results show that all fluxes measured by SR are higher than the corresponding reference fluxes. The degree of overestimation is within the range of typical values reported in the SR literature. Still, more research will be done to improve understanding, as the correlation between flux measurements is very high. The SR method seems promising for avoiding the use of sonic anemometry (and its associated errors) while maintaining fewer fetch requirements and the ability to yield observations from all wind directions.
Monakhova, Yulia B; Kohl-Himmelseher, Matthias; Kuballa, Thomas; Lachenmeier, Dirk W
2014-11-01
A fast and reliable nuclear magnetic resonance spectroscopic method for quantitative determination (qNMR) of targeted molecules in reference materials has been established using the ERETIC2 methodology (electronic reference to access in vivo concentrations) based on the PULCON principle (pulse length based concentration determination). The developed approach was validated for the analysis of pharmaceutical samples in the context of official medicines control, including ibandronic acid, amantadine, ambroxol and lercanidipine. The PULCON recoveries were above 94.3% and coefficients of variation (CVs) obtained by quantification of different targeted resonances ranged between 0.7% and 2.8%, demonstrating that the qNMR method is a precise tool for rapid quantification (approximately 15 min) of reference materials and medicinal products. Generally, the values were within the specification (certified values) provided by the manufacturers. The results were in agreement with NMR quantification using an internal standard and with a validated reference HPLC analysis. The PULCON method was found to be a practical alternative with competitive precision and accuracy to the classical internal reference method, and it proved to be applicable to different solvent conditions. The method can be recommended for routine use in medicines control laboratories, especially when the availability and costs of reference compounds are problematic. Copyright © 2014 Elsevier B.V. All rights reserved.
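The PULCON principle underlying ERETIC2 can be illustrated with its core proportionality. This is a simplified sketch assuming the pulse-length reciprocity relation (signal per proton scales with 1/p90) and deliberately omitting the receiver-gain, scan-count and temperature corrections that a real implementation applies; all numbers are hypothetical:

```python
def pulcon_concentration(area_t, area_ref, p90_t, p90_ref,
                         nprot_t, nprot_ref, conc_ref):
    """Simplified PULCON estimate of a target concentration from an
    externally measured reference spectrum:
        c_t = c_ref * (A_t / A_ref) * (p90_t / p90_ref) * (n_ref / n_t)
    where A is the peak integral, p90 the 90-degree pulse length and
    n the number of protons contributing to the integrated signal.
    """
    return (conc_ref * (area_t / area_ref)
            * (p90_t / p90_ref) * (nprot_ref / nprot_t))

# Hypothetical spectra: the target peak integrates to twice the reference
# peak, with the same 90-degree pulse length and proton count per signal
c_t = pulcon_concentration(area_t=2.0, area_ref=1.0, p90_t=10.0,
                           p90_ref=10.0, nprot_t=3, nprot_ref=3,
                           conc_ref=5.0)  # -> 10.0
```

The practical advantage noted in the abstract follows from this structure: the reference compound never has to be in the same tube (or even the same solvent) as the sample.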
No-reference multiscale blur detection tool for content based image retrieval
NASA Astrophysics Data System (ADS)
Ezekiel, Soundararajan; Stocker, Russell; Harrity, Kyle; Alford, Mark; Ferris, David; Blasch, Erik; Gorniak, Mark
2014-06-01
In recent years, digital cameras have been widely used for image capturing. These devices are found in cell phones, laptops, tablets, webcams, etc. Image quality is an important component of digital image analysis. To assess image quality for these mobile products, a standard image is normally required as a reference image; in that case, Root Mean Square Error and Peak Signal to Noise Ratio can be used to measure the quality of the images. However, these methods are not possible when there is no reference image. In our approach, a discrete wavelet transformation is applied to the blurred image, decomposing it into an approximation image and three detail sub-images, namely the horizontal, vertical, and diagonal images. We then assess image quality by measuring noise in the detail images and blur in the approximation image: noise mean and noise ratio are computed from the detail images, and blur mean and blur ratio from the approximation image. The Multi-scale Blur Detection (MBD) metric provides an assessment of both the noise and blur content. These values are weighted based on a linear regression against full-reference quality values. From these statistics, we can estimate image quality without needing a reference image. We then test the validity of the obtained weights by R2 analysis, as well as by using them to estimate the quality of an image with a known quality measure. The results show that our method provides acceptable results for images containing low to mid noise levels and blur content.
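The decomposition step can be sketched with a one-level 2D Haar transform in NumPy. The statistics below are a loose illustration in the spirit of the abstract, not the authors' exact MBD metric; the noise threshold and the gradient-energy blur proxy are assumptions:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: approximation image plus horizontal,
    vertical and diagonal detail sub-images."""
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(float)
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation image
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def quality_stats(img, noise_thresh=2.0):
    """No-reference noise/blur statistics: noise from the detail bands,
    blur from the approximation band's gradient energy (assumed proxy)."""
    ll, lh, hl, hh = haar2d(img)
    details = np.abs(np.concatenate([lh.ravel(), hl.ravel(), hh.ravel()]))
    noise_mean = details.mean()                    # average detail magnitude
    noise_ratio = (details > noise_thresh).mean()  # fraction of large coeffs
    gy, gx = np.gradient(ll)
    blur_mean = np.hypot(gx, gy).mean()  # low gradient energy -> blurrier
    return noise_mean, noise_ratio, blur_mean
```

A multi-scale version would simply recurse `haar2d` on the `ll` band and collect the same statistics per level before the regression weighting.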
Ejlersen, June A; May, Ole; Mortensen, Jesper; Nielsen, Gitte L; Lauridsen, Jeppe F; Allan, Johansen
2017-11-01
Patients with normal stress perfusion have an excellent prognosis. Prospective studies on the diagnostic accuracy of stress-only scans with contemporary, independent examinations as gold standards are lacking. A total of 109 patients with typical angina and no previous coronary artery disease underwent a 2-day stress (exercise)/rest, gated, and attenuation-corrected (AC), 99m-technetium-sestamibi perfusion study, followed by invasive coronary angiography. The stress datasets were evaluated twice by four physicians with two different training levels (expert and novice): familiar and unfamiliar with AC. The two experts also made a consensus reading of the integrated stress-rest datasets. The consensus reading and quantitative data from the invasive coronary angiography were applied as reference methods. The sensitivity/specificity were 0.92-1.00/0.73-0.90 (reference: expert consensus reading), 0.93-0.96/0.63-0.82 (reference: ≥1 stenosis>70%), and 0.75-0.88/0.70-0.88 (reference: ≥1 stenosis>50%). The four readers showed a high and fairly equal sensitivity independent of their familiarity with AC. The expert familiar with AC had the highest specificity independent of the reference method. The intraobserver and interobserver agreements on the stress-only readings were good (readers without AC experience) to excellent (readers with AC experience). AC stress-only images yielded a high sensitivity independent of the training level and experience with AC of the nuclear physician, whereas the specificity correlated positively with both. Interobserver and intraobserver agreements tended to be the best for physicians with AC experience.
Equation-of-motion coupled-cluster method for doubly ionized states with spin-orbit coupling.
Wang, Zhifan; Hu, Shu; Wang, Fan; Guo, Jingwei
2015-04-14
In this work, we report implementation of the equation-of-motion coupled-cluster method for doubly ionized states (EOM-DIP-CC) with spin-orbit coupling (SOC) using a closed-shell reference. Double ionization potentials (DIPs) are calculated in the space spanned by 2h and 3h1p determinants with the EOM-DIP-CC approach at the CC singles and doubles level (CCSD). Time-reversal symmetry together with spatial symmetry is exploited to reduce computational effort. To circumvent the problem of unstable dianion references when diffuse basis functions are included, nuclear charges are scaled. Effect of this stabilization potential on DIPs is estimated based on results from calculations using a small basis set without diffuse basis functions. DIPs and excitation energies of some low-lying states for a series of open-shell atoms and molecules containing heavy elements with two unpaired electrons have been calculated with the EOM-DIP-CCSD approach. Results show that this approach is able to afford a reliable description on SOC splitting. Furthermore, the EOM-DIP-CCSD approach is shown to provide reasonable excitation energies for systems with a dianion reference when diffuse basis functions are not employed.
Equation-of-motion coupled-cluster method for doubly ionized states with spin-orbit coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhifan; Hu, Shu; Guo, Jingwei
2015-04-14
In this work, we report implementation of the equation-of-motion coupled-cluster method for doubly ionized states (EOM-DIP-CC) with spin-orbit coupling (SOC) using a closed-shell reference. Double ionization potentials (DIPs) are calculated in the space spanned by 2h and 3h1p determinants with the EOM-DIP-CC approach at the CC singles and doubles level (CCSD). Time-reversal symmetry together with spatial symmetry is exploited to reduce computational effort. To circumvent the problem of unstable dianion references when diffuse basis functions are included, nuclear charges are scaled. Effect of this stabilization potential on DIPs is estimated based on results from calculations using a small basis set without diffuse basis functions. DIPs and excitation energies of some low-lying states for a series of open-shell atoms and molecules containing heavy elements with two unpaired electrons have been calculated with the EOM-DIP-CCSD approach. Results show that this approach is able to afford a reliable description on SOC splitting. Furthermore, the EOM-DIP-CCSD approach is shown to provide reasonable excitation energies for systems with a dianion reference when diffuse basis functions are not employed.
Anna, Hayton; Wallace, Anthony; Thomas, Peter
2017-03-01
The national diagnostic reference level service (NDRLS) was launched in 2011; however, no paediatric data were submitted during the first calendar year of operation. As such, Australian national diagnostic reference levels (DRLs) for paediatric multi-detector computed tomography (MDCT) were established using data obtained from a Royal Australian and New Zealand College of Radiologists (RANZCR) Quality Use of Diagnostic Imaging (QUDI) study. Paediatric data were submitted to the NDRLS in 2012 through 2015. An analysis has been made of the NDRLS paediatric data using the same method as was used to analyse the QUDI data to establish the Australian national paediatric DRLs for MDCT. An analysis of the paediatric NDRLS data has also been made using the method used to calculate the Australian national adult DRLs for MDCT. A comparison between the QUDI data and subsequent NDRLS data shows the NDRLS data to be lower on average for the Head and AbdoPelvis protocols and similar for the Chest protocol. Using an average of NDRLS data submitted between 2012 and 2015, implications for updated paediatric DRLs are considered.
Han, Aaron L-F; Wong, Derek F; Chao, Lidia S; He, Liangye; Lu, Yi
2014-01-01
With the rapid development of machine translation (MT), MT evaluation has become very important for telling us in a timely manner whether an MT system is making progress. Conventional MT evaluation methods calculate the similarity between hypothesis translations offered by automatic translation systems and reference translations offered by professional translators. There are several weaknesses in existing evaluation metrics. Firstly, incomprehensive design factors result in a language-bias problem: the metrics perform well on some language pairs but poorly on others. Secondly, they tend to use either no linguistic features or too many; using no linguistic features draws criticism from linguists, while using too many makes the model difficult to reproduce. Thirdly, the required reference translations are very expensive and sometimes unavailable in practice. In this paper, the authors propose an unsupervised MT evaluation metric that uses a universal part-of-speech tagset without relying on reference translations. The authors also explore the performance of the designed metric on traditional supervised evaluation tasks. Both the supervised and unsupervised experiments show that the designed methods yield higher correlation scores with human judgments.
A surface spherical harmonic expansion of gravity anomalies on the ellipsoid
NASA Astrophysics Data System (ADS)
Claessens, S. J.; Hirt, C.
2015-10-01
A surface spherical harmonic expansion of gravity anomalies with respect to a geodetic reference ellipsoid can be used to model the global gravity field and reveal its spectral properties. In this paper, a direct and rigorous transformation between solid spherical harmonic coefficients of the Earth's disturbing potential and surface spherical harmonic coefficients of gravity anomalies in ellipsoidal approximation with respect to a reference ellipsoid is derived. This transformation cannot rigorously be achieved by the Hotine-Jekeli transformation between spherical and ellipsoidal harmonic coefficients. The method derived here is used to create a surface spherical harmonic model of gravity anomalies with respect to the GRS80 ellipsoid from the EGM2008 global gravity model. Internal validation of the model shows a global RMS precision of 1 nGal. This is significantly more precise than previous solutions based on spherical approximation or on low-order approximations, which are shown to be insufficient for the generation of surface spherical harmonic coefficients with respect to a geodetic reference ellipsoid. Numerical results of two applications of the new method (the computation of ellipsoidal corrections to gravimetric geoid computation, and area means of gravity anomalies in ellipsoidal approximation) are provided.
Validity of flowmeter data in heterogeneous alluvial aquifers
NASA Astrophysics Data System (ADS)
Bianchi, Marco
2017-04-01
Numerical simulations are performed to evaluate the impact of medium-scale sedimentary architecture and small-scale heterogeneity on the validity of the borehole flowmeter test, a widely used method for measuring hydraulic conductivity (K) at the scale required for detailed groundwater flow and solute transport simulations. Reference data from synthetic K fields representing the range of structures and small-scale heterogeneity typically observed in alluvial systems are compared with estimated values from numerical simulations of flowmeter tests. Systematic errors inherent in the flowmeter K estimates are significant when the reference K field structure deviates from the hypothetical perfectly stratified conceptual model underlying the interpretation method of flowmeter tests. Because of these errors, the true variability of the K field is underestimated and the distributions of the reference K data and log-transformed spatial increments are also misconstrued. The presented numerical analysis shows that the validity of flowmeter-based K data depends on measurable parameters defining the architecture of the hydrofacies, the conductivity contrasts between the hydrofacies, and the sub-facies-scale K variability. A preliminary geological characterization is therefore essential for evaluating the optimal approach for accurate K field characterization.
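The perfectly stratified interpretation model whose validity the paper examines can be sketched directly. This is the standard flowmeter interpretation formula under that assumption, applied to hypothetical interval data:

```python
def flowmeter_k_profile(dq, dz, k_avg):
    """Layer hydraulic conductivities from borehole flowmeter data under
    the perfectly stratified assumption:
        K_i = k_avg * (dQ_i / dz_i) / (Q_total / B)

    dq    -- flow gained over each screened interval
    dz    -- thickness of each interval
    k_avg -- depth-averaged K, e.g. from a pumping test
    """
    mean_inflow = sum(dq) / sum(dz)  # Q_total / B
    return [k_avg * (dqi / dzi) / mean_inflow for dqi, dzi in zip(dq, dz)]

# Hypothetical two-layer profile: the upper interval takes 3x the inflow
ks = flowmeter_k_profile(dq=[3.0, 1.0], dz=[1.0, 1.0], k_avg=5.0)
# -> [7.5, 2.5]
```

Note that the thickness-weighted mean of the returned profile equals `k_avg` by construction; the paper's point is that when flow is not strictly horizontal and stratified, the individual `K_i` values become systematically biased.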
40 CFR 60.547 - Test methods and procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...
40 CFR 60.547 - Test methods and procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...
40 CFR 60.547 - Test methods and procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...
Methods for analysis of cracks in three-dimensional solids
NASA Technical Reports Server (NTRS)
Raju, I. S.; Newman, J. C., Jr.
1984-01-01
Analytical and numerical methods for evaluating the stress-intensity factors of three-dimensional cracks in solids are presented, with reference to fatigue failure in aerospace structures. The exact solutions for embedded elliptical and circular cracks in infinite solids and the approximate methods, including the finite-element, boundary-integral-equation, line-spring, and mixed methods, are discussed. Among the mixed methods, the superposition of analytical and finite-element methods, and the stress-difference, discretization-error, alternating, and finite-element-alternating methods are reviewed. Comparison of the stress-intensity-factor solutions for some three-dimensional crack configurations showed good agreement. Thus, the choice of a particular method for evaluating the stress-intensity factor is limited only by the availability of resources and computer programs.
Wilson, Richard A.; Chapman, Wendy W.; DeFries, Shawn J.; Becich, Michael J.; Chapman, Brian E.
2010-01-01
Background: Clinical records are often unstructured, free-text documents that create information extraction challenges and costs. Healthcare delivery and research organizations, such as the National Mesothelioma Virtual Bank, require the aggregation of both structured and unstructured data types. Natural language processing offers techniques for automatically extracting information from unstructured, free-text documents. Methods: Five hundred and eight history and physical reports from mesothelioma patients were split into development (208) and test (300) sets. A reference standard was developed and each report was annotated by experts with regard to the patient’s personal history of ancillary cancer and family history of any cancer. The Hx application was developed to process reports, extract relevant features, perform reference resolution and classify them with regard to cancer history. Two methods for extracting information, Dynamic-Window and ConText, were evaluated. Hx’s classification responses using each of the two methods were measured against the reference standard. The average Cohen’s weighted kappa served as the human benchmark in evaluating the system. Results: Hx had a high overall accuracy, with each method scoring 96.2%. F-measures using the Dynamic-Window and ConText methods were 91.8% and 91.6%, which were comparable to the human benchmark of 92.8%. For the personal history classification, Dynamic-Window scored highest with 89.2%, and for the family history classification, ConText scored highest with 97.6%; both methods were comparable to the human benchmarks of 88.3% and 97.2%, respectively. Conclusion: We evaluated an automated application’s performance in classifying a mesothelioma patient’s personal and family history of cancer from clinical reports.
To do so, the Hx application must process reports, identify cancer concepts, distinguish the known mesothelioma from ancillary cancers, recognize negation, perform reference resolution and determine the experiencer. Results indicated that both information extraction methods tested were dependent on the domain-specific lexicon and negation extraction. We showed that the more general method, ConText, performed as well as our task-specific method. Although Dynamic-Window could be modified to retrieve other concepts, ConText is more robust and performs better on inconclusive concepts. Hx could greatly improve and expedite the process of extracting data from free-text, clinical records for a variety of research or healthcare delivery organizations. PMID:21031012
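A toy version of a context-window rule conveys the idea of negation and experiencer detection. This is an illustration only, not the Hx application or the actual ConText algorithm; the trigger lists and window size are assumptions chosen for the example:

```python
NEGATION_TRIGGERS = {"no", "denies", "without", "negative"}
FAMILY_TRIGGERS = {"mother", "father", "sister", "brother", "grandmother"}

def classify_mention(sentence, concept, window=6):
    """Classify one concept mention: is it negated, and who experienced it?
    A trigger fires if it appears within `window` tokens before the mention."""
    tokens = sentence.lower().split()
    c_tokens = concept.lower().split()
    for i in range(len(tokens) - len(c_tokens) + 1):
        if tokens[i:i + len(c_tokens)] == c_tokens:
            scope = tokens[max(0, i - window):i]  # left context window
            return {
                "negated": any(t in NEGATION_TRIGGERS for t in scope),
                "experiencer": ("family"
                                if any(t in FAMILY_TRIGGERS for t in scope)
                                else "patient"),
            }
    return None  # concept not found

result = classify_mention("mother had breast cancer", "breast cancer")
# -> {'negated': False, 'experiencer': 'family'}
```

Real ConText additionally handles trigger termination (e.g. "but"), right-looking triggers, and historical status, which is where a fixed window like this one breaks down.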
40 CFR 63.805 - Performance test methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
... alternative method for determining the VHAP content of the coating. In the event of any inconsistency between... Collection of Coating and Ink Samples for VOC Content Analysis by Reference Method 24 and Reference Method... (see § 63.801); (iii) Use any alternative protocol and test method provided they meet either the...