Processor register error correction management
Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.
2016-12-27
Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
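A minimal sketch of the mechanism the abstract describes, assuming a software model of the register file: the compiler-generated error-correction table is represented as the set of sensitive registers, and every write to one of them is mirrored into a duplicate. All names are illustrative, not from the patent.

```python
# Illustrative model of an error-correction table plus duplicate registers.
class RegisterFile:
    def __init__(self, sensitive_registers):
        self.values = {}
        self.shadow = {}                        # duplicates of sensitive registers
        self.table = set(sensitive_registers)   # the error-correction table

    def write(self, reg, value):
        self.values[reg] = value
        if reg in self.table:
            self.shadow[reg] = value            # mirror the sensitive register

    def read(self, reg):
        value = self.values[reg]
        if reg in self.table and self.shadow[reg] != value:
            # A mismatch signals corruption; this sketch trusts the duplicate.
            value = self.shadow[reg]
            self.values[reg] = value
        return value
```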
Natural Language Processing As an Alternative to Manual Reporting of Colonoscopy Quality Metrics
Raju, Gottumukkala S.; Lum, Phillip J.; Slack, Rebecca; Thirumurthi, Selvi; Lynch, Patrick M.; Miller, Ethan; Weston, Brian R.; Davila, Marta L.; Bhutani, Manoop S.; Shafi, Mehnaz A.; Bresalier, Robert S.; Dekovich, Alexander A.; Lee, Jeffrey H.; Guha, Sushovan; Pande, Mala; Blechacz, Boris; Rashid, Asif; Routbort, Mark; Shuttlesworth, Gladis; Mishra, Lopa; Stroehlein, John R.; Ross, William A.
2015-01-01
BACKGROUND & AIMS The adenoma detection rate (ADR) is a quality metric tied to interval colon cancer occurrence. However, manual extraction of data to calculate and track the ADR in clinical practice is labor-intensive. To overcome this difficulty, we developed a natural language processing (NLP) method to identify patients who underwent their first screening colonoscopy and to identify adenomas and sessile serrated adenomas (SSAs). We compared the NLP-generated results with those of manual data extraction to test the accuracy of NLP, and report on colonoscopy quality metrics using NLP. METHODS Identification of screening colonoscopies using NLP was compared with that using the manual method for 12,748 patients who underwent colonoscopies from July 2010 to February 2013. Identification of adenomas and SSAs using NLP was also compared with that using the manual method for 2259 matched patient records. Colonoscopy ADRs using these methods were generated for each physician. RESULTS NLP correctly identified 91.3% of the screening examinations, whereas the manual method identified 87.8% of them. Both the manual method and NLP identified examinations of patients with adenomas and SSAs in the matched records almost perfectly. NLP and the manual method produced comparable ADR values for each endoscopist as well as for the group as a whole. CONCLUSIONS NLP can correctly identify screening colonoscopies, accurately identify adenomas and SSAs in a pathology database, and provide real-time quality metrics for colonoscopy. PMID:25910665
InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)
NASA Astrophysics Data System (ADS)
Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.
2018-04-01
Unwrapping errors are common in InSAR processing and can seriously degrade the accuracy of monitoring results. Based on a gross error detection method, quasi-accurate detection (QUAD), a method for automatic correction of unwrapping errors is established in this paper. The method identifies and corrects unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method on simulated data. Results show that both methods can effectively suppress unwrapping errors when the proportion of unwrapping errors is low, and that the two methods complement each other when the proportion is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that the new method can successfully correct phase unwrapping errors in practical applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altunbas, Cem, E-mail: caltunbas@gmail.com; Lai, Chao-Jen; Zhong, Yuncheng
Purpose: In using flat panel detectors (FPD) for cone beam computed tomography (CBCT), pixel gain variations may lead to structured nonuniformities in projections and ring artifacts in CBCT images. Such gain variations can be caused by changes in detector entrance exposure levels or by beam hardening, and they are not accounted for by conventional flat field correction methods. In this work, the authors present a method to identify isolated pixel clusters that exhibit gain variations and propose a pixel gain correction (PGC) method to suppress both beam hardening and exposure level dependent gain variations. Methods: To modulate both beam spectrum and entrance exposure, flood field FPD projections were acquired using beam filters with varying thicknesses. "Ideal" pixel values were estimated by performing polynomial fits in both raw and flat field corrected projections. Residuals were calculated by taking the difference between measured and ideal pixel values to identify clustered image and FPD artifacts in flat field corrected and raw images, respectively. To correct clustered image artifacts, the ratios of ideal to measured pixel values in filtered images were utilized as pixel-specific gain correction factors, referred to as the PGC method, and they were tabulated as a function of pixel value in a look-up table. Results: 0.035% of detector pixels led to clustered image artifacts in flat field corrected projections, and 80% of these pixels were traced back and linked to artifacts in the FPD. The performance of the PGC method was tested in a variety of imaging conditions and phantoms. The PGC method reduced clustered image artifacts and fixed pattern noise in projections, and ring artifacts in CBCT images. Conclusions: Clustered projection image artifacts that lead to ring artifacts in CBCT can be better identified with our artifact detection approach. When compared to the conventional flat field correction method, the proposed PGC method enables characterization of nonlinear pixel gain variations as a function of change in x-ray spectrum and intensity. Hence, it can better suppress image artifacts due to beam hardening as well as artifacts that arise from detector entrance exposure variation.
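As a rough illustration of the look-up-table idea, the sketch below builds a pixel-specific gain LUT from the filtered flood-field series and applies it by interpolation. It assumes the "ideal" values for the pixel have already been estimated from the polynomial fit described above; the names and the interpolation step are ours, not the authors'.

```python
import numpy as np

# 'measured' and 'ideal' are 1-D arrays holding one pixel's values across
# the flood fields acquired with different filter thicknesses.
def pgc_lut(measured, ideal):
    gains = ideal / measured                 # pixel-specific gain factors
    order = np.argsort(measured)             # np.interp needs sorted x values
    return measured[order], gains[order]     # LUT: pixel value -> gain

def apply_pgc(pixel_value, lut_values, lut_gains):
    gain = np.interp(pixel_value, lut_values, lut_gains)
    return pixel_value * gain
```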
A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer
NASA Astrophysics Data System (ADS)
Zheng, G.; Cheng, Y.; He, K.; Duan, F.; Ma, Y.
2014-01-01
The Sunset Semi-Continuous Carbon Analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, here we identified a new type of SCCA calculation discrepancy caused by the default multi-point baseline correction method. When a certain threshold carbon load is exceeded, multi-point correction can cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples with three temperature protocols. For ambient samples, 22%, 36% and 12% of TC was underestimated by the three protocols, respectively, with the corresponding thresholds being ~0, 20 and 25 μg C. For sucrose, however, such a discrepancy was observed with only one of these protocols, indicating the need for a more refractory SCCA calibration substance. The discrepancy was less significant for the NIOSH (National Institute for Occupational Safety and Health)-like protocol than for the two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments). Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The proposed correction method was to use multi-point corrected data below the determined threshold and single-point results beyond that threshold. The effectiveness of this correction method was supported by correlation with optical data.
A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer
NASA Astrophysics Data System (ADS)
Zheng, G. J.; Cheng, Y.; He, K. B.; Duan, F. K.; Ma, Y. L.
2014-07-01
The Sunset semi-continuous carbon analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, in this study we identified a new type of SCCA calculation discrepancy caused by the default multipoint baseline correction method. When a certain threshold carbon load is exceeded, multipoint correction can cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples, with two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments) (i.e., IMPshort and IMPlong) and one NIOSH (National Institute for Occupational Safety and Health)-like protocol (rtNIOSH). For ambient samples, the IMPshort, IMPlong and rtNIOSH protocols underestimated 22, 36 and 12% of TC, respectively, with the corresponding thresholds being ~0, 20 and 25 μgC. For sucrose, however, such a discrepancy was observed only with the IMPshort protocol, indicating the need for a more refractory SCCA calibration substance. Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The proposed correction method was to use multipoint-corrected data below the determined threshold and single-point results beyond that threshold. The effectiveness of this correction method was supported by correlation with optical data.
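The proposed hybrid rule is simple enough to state in code. The sketch below assumes the multipoint TC value itself is tested against the protocol threshold, which is our reading of the abstract rather than a documented implementation; the thresholds are the ambient-sample values reported above.

```python
# Protocol thresholds (approximate, in ug C) for ambient samples.
THRESHOLD_UGC = {"IMPshort": 0.0, "IMPlong": 20.0, "rtNIOSH": 25.0}

def corrected_tc(tc_multipoint, tc_singlepoint, protocol):
    """Use multipoint-corrected TC below the threshold, single-point beyond it."""
    return tc_multipoint if tc_multipoint < THRESHOLD_UGC[protocol] else tc_singlepoint
```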
Comparison of methods for the identification of microorganisms isolated from blood cultures.
Monteiro, Aydir Cecília Marinho; Fortaleza, Carlos Magno Castelo Branco; Ferreira, Adriano Martison; Cavalcante, Ricardo de Souza; Mondelli, Alessandro Lia; Bagagli, Eduardo; da Cunha, Maria de Lourdes Ribeiro de Souza
2016-08-05
Bloodstream infections are responsible for thousands of deaths each year. The rapid identification of the microorganisms causing these infections permits correct therapeutic management that will improve the prognosis of the patient. In an attempt to reduce the time spent on this step, microorganism identification devices have been developed, including the VITEK® 2 system, which is currently used in routine clinical microbiology laboratories. This study evaluated the accuracy of the VITEK® 2 system in the identification of 400 microorganisms isolated from blood cultures and compared the results to those obtained with conventional phenotypic and genotypic methods. In parallel to the phenotypic identification methods, the DNA of these microorganisms was extracted directly from the blood culture bottles for genotypic identification by the polymerase chain reaction (PCR) and DNA sequencing. The automated VITEK® 2 system correctly identified 94.7% (379/400) of the isolates. The YST and GN cards resulted in 100% correct identifications of yeasts (15/15) and Gram-negative bacilli (165/165), respectively. The GP card correctly identified 92.6% (199/215) of Gram-positive cocci, while the ANC card was unable to correctly identify any Gram-positive bacilli (0/5). The performance of the VITEK® 2 system was considered acceptable and statistical analysis showed that the system is a suitable option for routine clinical microbiology laboratories to identify different microorganisms.
Preferred color correction for digital LCD TVs
NASA Astrophysics Data System (ADS)
Kim, Kyoung Tae; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho
2009-01-01
Instead of colorimetric color reproduction, preferred color correction is applied in digital TVs to improve subjective image quality. The first step of preferred color correction is to survey the preferred color coordinates of memory colors, which can be achieved by off-line human visual tests. The next step is to extract pixels of memory colors representing skin, grass and sky. For the detected pixels, colors are shifted towards the desired coordinates identified in advance. This correction process may result in undesirable contours on the boundaries between corrected and uncorrected areas. For digital TV applications, the extraction and correction process must be applied in every frame of the moving images. This paper presents a preferred color correction method in LCH color space. Values of chroma and hue are corrected independently, and undesirable contours on the boundaries of correction are minimized. The proposed method changes the coordinates of memory color pixels towards the target color coordinates, with the amount of correction determined from the averaged coordinates of the extracted pixels. The proposed method maintains the relative color differences within memory color areas. Performance of the proposed method is evaluated using paired comparison. Experimental results indicate that the proposed method can reproduce perceptually pleasing images for viewers.
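A minimal sketch of the correction stage, assuming the memory-color pixels have already been extracted and converted to LCH. Applying one uniform offset computed from the region average (rather than moving each pixel individually) preserves the relative color differences within the region, matching the behavior described above; the target values and gain are illustrative.

```python
import numpy as np

def shift_memory_color(C, H, target_C, target_H, gain=0.5):
    """Shift chroma/hue arrays of an extracted memory-color region toward a
    preferred target by a fraction of the offset between the region's
    average coordinate and the target."""
    dC = gain * (target_C - C.mean())
    dH = gain * (target_H - H.mean())
    return C + dC, (H + dH) % 360.0   # hue wraps around the color circle
```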
Towards process-informed bias correction of climate change simulations
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Shepherd, Theodore G.; Widmann, Martin; Zappa, Giuseppe; Walton, Daniel; Gutiérrez, José M.; Hagemann, Stefan; Richter, Ingo; Soares, Pedro M. M.; Hall, Alex; Mearns, Linda O.
2017-11-01
Biases in climate model simulations introduce biases in subsequent impact simulations. Therefore, bias correction methods are operationally used to post-process regional climate projections. However, many problems have been identified, and some researchers question the very basis of the approach. Here we demonstrate that a typical cross-validation is unable to identify improper use of bias correction. Several examples show the limited ability of bias correction to correct and to downscale variability, and demonstrate that bias correction can cause implausible climate change signals. Bias correction cannot overcome major model errors, and naive application might result in ill-informed adaptation decisions. We conclude with a list of recommendations and suggestions for future research to reduce, post-process, and cope with climate model biases.
An improved method to detect correct protein folds using partial clustering.
Zhou, Jianjun; Wishart, David S
2013-01-16
Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient "partial" clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either Cα RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance.
An improved method to detect correct protein folds using partial clustering
2013-01-01
Background Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient "partial" clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. Results We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either Cα RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. Conclusions The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance. PMID:23323835
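A generic sketch of "partial" clustering for decoy selection, assuming a pairwise distance function (e.g., RMSD) is available. It illustrates extracting representatives without assigning every decoy to a cluster; it is a simplified stand-in, not the HS-Forest algorithm.

```python
import random

def partial_cluster(decoys, dist, threshold, n_reps):
    pool = list(decoys)
    reps = []
    while pool and len(reps) < n_reps:
        pivot = random.choice(pool)
        # Count the pivot's neighbours instead of building full clusters.
        neighbours = [d for d in pool if dist(pivot, d) < threshold]
        reps.append((pivot, len(neighbours)))
        for d in neighbours:          # remove the covered decoys (pivot included)
            pool.remove(d)
    # Rank representatives by neighbourhood size: densest region first.
    return sorted(reps, key=lambda r: -r[1])
```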
ERIC Educational Resources Information Center
Murray, Ellen R.
2016-01-01
According to the literature, identifying and treating tuberculosis (TB) in correctional facilities have been problematic for the inmates and also for the communities into which inmates are released. Training those who can identify this disease early in incarceration is vital to halting transmission. Although some training has…
ERIC Educational Resources Information Center
Croker, Robert E.; And Others
A study identified the learning style preferences and brain hemisphericity of female inmates at the Pocatello Women's Correctional Center in Pocatello, Idaho. It also identified teaching methodologies to which inmates were exposed while in a learning environment as well as preferred teaching methods. Data were gathered by the Learning Type Measure…
Correcting for Sample Contamination in Genotype Calling of DNA Sequence Data
Flickinger, Matthew; Jun, Goo; Abecasis, Gonçalo R.; Boehnke, Michael; Kang, Hyun Min
2015-01-01
DNA sample contamination is a frequent problem in DNA sequencing studies and can result in genotyping errors and reduced power for association testing. We recently described methods to identify within-species DNA sample contamination based on sequencing read data, showed that our methods can reliably detect and estimate contamination levels as low as 1%, and suggested strategies to identify and remove contaminated samples from sequencing studies. Here we propose methods to model contamination during genotype calling as an alternative to removal of contaminated samples from further analyses. We compare our contamination-adjusted calls to calls that ignore contamination and to calls based on uncontaminated data. We demonstrate that, for moderate contamination levels (5%–20%), contamination-adjusted calls eliminate 48%–77% of the genotyping errors. For lower levels of contamination, our contamination correction methods produce genotypes nearly as accurate as those based on uncontaminated data. Our contamination correction methods are useful generally, but are particularly helpful for sample contamination levels from 2% to 20%. PMID:26235984
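A hedged sketch of contamination-aware genotype calling at a single biallelic site, loosely following the mixture idea described above rather than the authors' exact model: reads are treated as a mixture of the sample's genotype and a contaminant approximated by the population allele frequency. Here alpha is the contamination fraction, f the population alt-allele frequency (assumed strictly between 0 and 1), and eps the base error rate.

```python
import numpy as np

def call_genotype(n_alt, n_reads, alpha, f, eps=0.01):
    def p_alt_given_dose(dose):          # dose = expected alt fraction for a genotype
        return dose * (1.0 - eps) + (1.0 - dose) * eps
    best_g, best_ll = None, -np.inf
    for g in (0, 1, 2):                  # alt-allele count of the sample genotype
        # Mixture: (1-alpha) of reads from the sample, alpha from the contaminant.
        p_alt = (1 - alpha) * p_alt_given_dose(g / 2.0) + alpha * p_alt_given_dose(f)
        ll = n_alt * np.log(p_alt) + (n_reads - n_alt) * np.log(1 - p_alt)
        ll += np.log([(1 - f) ** 2, 2 * f * (1 - f), f ** 2][g])   # HWE prior
        if ll > best_ll:
            best_g, best_ll = g, ll
    return best_g
```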
NASA Technical Reports Server (NTRS)
Wiseman, S.M.; Arvidson, R.E.; Wolff, M. J.; Smith, M. D.; Seelos, F. P.; Morgan, F.; Murchie, S. L.; Mustard, J. F.; Morris, R. V.; Humm, D.
2014-01-01
The empirical volcano-scan atmospheric correction is widely applied to Martian near infrared CRISM and OMEGA spectra between 1000 and 2600 nanometers to remove prominent atmospheric gas absorptions with minimal computational investment. This correction method employs division by a scaled empirically-derived atmospheric transmission spectrum that is generated from observations of the Martian surface in which different path lengths through the atmosphere were measured and transmission calculated using the Beer-Lambert Law. Identifying and characterizing both artifacts and residual atmospheric features left by the volcano-scan correction is important for robust interpretation of CRISM and OMEGA volcano scan corrected spectra. In order to identify and determine the cause of spectral artifacts introduced by the volcano-scan correction, we simulated this correction using a multiple scattering radiative transfer algorithm (DISORT). Simulated transmission spectra that are similar to actual CRISM- and OMEGA-derived transmission spectra were generated from modeled Olympus Mons base and summit spectra. Results from the simulations were used to investigate the validity of assumptions inherent in the volcano-scan correction and to identify artifacts introduced by this method of atmospheric correction. We found that the most prominent artifact, a bowl-shaped feature centered near 2000 nanometers, is caused by the inaccurate assumption that absorption coefficients of CO2 in the Martian atmosphere are independent of column density. In addition, spectral albedo and slope are modified by atmospheric aerosols. Residual atmospheric contributions that are caused by variable amounts of dust aerosols, ice aerosols, and water vapor are characterized by the analysis of CRISM volcano-scan corrected spectra from the same location acquired at different times under variable atmospheric conditions.
NASA Astrophysics Data System (ADS)
Wiseman, S. M.; Arvidson, R. E.; Wolff, M. J.; Smith, M. D.; Seelos, F. P.; Morgan, F.; Murchie, S. L.; Mustard, J. F.; Morris, R. V.; Humm, D.; McGuire, P. C.
2016-05-01
The empirical 'volcano-scan' atmospheric correction is widely applied to martian near infrared CRISM and OMEGA spectra between ∼1000 and ∼2600 nm to remove prominent atmospheric gas absorptions with minimal computational investment. This correction method employs division by a scaled empirically-derived atmospheric transmission spectrum that is generated from observations of the martian surface in which different path lengths through the atmosphere were measured and transmission calculated using the Beer-Lambert Law. Identifying and characterizing both artifacts and residual atmospheric features left by the volcano-scan correction is important for robust interpretation of CRISM and OMEGA volcano-scan corrected spectra. In order to identify and determine the cause of spectral artifacts introduced by the volcano-scan correction, we simulated this correction using a multiple scattering radiative transfer algorithm (DISORT). Simulated transmission spectra that are similar to actual CRISM- and OMEGA-derived transmission spectra were generated from modeled Olympus Mons base and summit spectra. Results from the simulations were used to investigate the validity of assumptions inherent in the volcano-scan correction and to identify artifacts introduced by this method of atmospheric correction. We found that the most prominent artifact, a bowl-shaped feature centered near 2000 nm, is caused by the inaccurate assumption that absorption coefficients of CO2 in the martian atmosphere are independent of column density. In addition, spectral albedo and slope are modified by atmospheric aerosols. Residual atmospheric contributions that are caused by variable amounts of dust aerosols, ice aerosols, and water vapor are characterized by the analysis of CRISM volcano-scan corrected spectra from the same location acquired at different times under variable atmospheric conditions.
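A schematic of the volcano-scan division, assuming co-registered base and summit observations of the same surface unit; the exponent handling and names are simplified stand-ins for the actual CRISM/OMEGA pipeline.

```python
import numpy as np

def volcano_scan_correct(spectrum, base, summit, k):
    # Under Beer-Lambert, the base observation sees the summit's atmosphere
    # plus an extra path, so base/summit estimates the transmission T of
    # that extra path.
    T = base / summit
    # k rescales the path length; in practice it is tuned per observation so
    # the ~2000-nm CO2 band cancels in the corrected spectrum.
    return spectrum / np.power(T, k)
```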
Jeverica, Samo; Nagy, Elisabeth; Mueller-Premru, Manica; Papst, Lea
2018-05-15
Rapid detection and identification of anaerobic bacteria from blood is important to adjust antimicrobial therapy by including antibiotics with activity against anaerobic bacteria. Limited data are available about direct identification of anaerobes from positive blood culture bottles using MALDI-TOF mass spectrometry (MS). In this study, we evaluated the performance of two sample preparation protocols for direct identification of anaerobes from positive blood culture bottles: the MALDI Sepsityper kit (Sepsityper) and the in-house saponin (saponin) method. Additionally, we compared two blood culture bottle types designed to support the growth of anaerobic bacteria, the BacT/ALERT-FN Plus (FN Plus) and the BACTEC-Lytic (Lytic), and their influence on direct identification. A selection of 30 anaerobe strains belonging to 22 different anaerobic species (11 reference strains and 19 clinical isolates) was inoculated into the two blood culture bottle types in duplicate. In total, 120 bottles were inoculated, and 99.2% (n = 119) signalled growth within 5 days of incubation. The Sepsityper method correctly identified 56.3% (n = 67) of anaerobes, while the saponin method correctly identified 84.9% (n = 101) of anaerobes with at least log(score) ≥ 1.6 (low-confidence correct identification) (p < 0.001). Gram-negative anaerobes were better identified with the saponin method (100% vs. 46.5%; p < 0.001), while Gram-positive anaerobes were better identified with the Sepsityper method (70.8% vs. 62.5%; p = 0.454). Average log(score) values among only those isolates that were correctly identified simultaneously by both sample preparation methods were 2.119 and 2.029, in favour of the Sepsityper method (p = 0.019). The inoculated bottle type did not influence the performance of the two sample preparation methods. We confirmed that direct identification from positive blood culture bottles with MALDI-TOF MS is reliable for anaerobic bacteria; however, the results are influenced by the sample preparation method used. Copyright © 2018 Elsevier Ltd. All rights reserved.
Extraction of memory colors for preferred color correction in digital TVs
NASA Astrophysics Data System (ADS)
Ryu, Byong Tae; Yeom, Jee Young; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho
2009-01-01
Subjective image quality is one of the most important performance indicators for digital TVs. In order to improve subjective image quality, preferred color correction is often employed. More specifically, areas of memory colors such as skin, grass and sky are modified to generate a pleasing impression for viewers. Before applying preferred color correction, the tendency of preference for memory colors should be identified, usually by off-line human visual tests. Areas containing the memory colors must then be extracted and color correction applied to the extracted areas; these processes must be performed on-line. This paper presents a new method for area extraction of three types of memory colors. Performance of the proposed method is evaluated by calculating the correct and false detection ratios. Experimental results indicate that the proposed method outperforms previous methods for memory color extraction.
Patlewicz, Grace; Casati, Silvia; Basketter, David A; Asturiol, David; Roberts, David W; Lepoittevin, Jean-Pierre; Worth, Andrew P; Aschberger, Karin
2016-12-01
Predictive testing to characterize substances for their skin sensitization potential has historically been based on animal tests such as the Local Lymph Node Assay (LLNA). In recent years, regulations in the cosmetics and chemicals sectors have provided strong impetus to develop non-animal alternatives. Three test methods have undergone OECD validation: the direct peptide reactivity assay (DPRA), the KeratinoSens™ and the human Cell Line Activation Test (h-CLAT). Whilst these methods perform relatively well in predicting LLNA results, a concern raised is their ability to predict chemicals that need activation to be sensitizing (pre- or pro-haptens). This current study reviewed an EURL ECVAM dataset of 127 substances for which information was available in the LLNA and three non-animal test methods. Twenty eight of the sensitizers needed to be activated, with the majority being pre-haptens. These were correctly identified by 1 or more of the test methods. Six substances were categorized exclusively as pro-haptens, but were correctly identified by at least one of the cell-based assays. The analysis here showed that skin metabolism was not likely to be a major consideration for assessing sensitization potential and that sensitizers requiring activation could be identified correctly using one or more of the current non-animal methods. Published by Elsevier Inc.
Sun, Weifang; Yao, Bin; He, Yuchao; Chen, Binqiang; Zeng, Nianyin; He, Wangpeng
2017-08-09
Power generation using waste-gas is an effective and green way to reduce the emission of the harmful blast furnace gas (BFG) in the pig-iron producing industry. Condition monitoring of mechanical structures in the BFG power plant is of vital importance to guarantee their safe and efficient operation. In this paper, we describe the detection of crack growth of bladed machinery in the BFG power plant via vibration measurement combined with an enhanced spectral correction technique. This technique enables high-precision identification of the amplitude, frequency, and phase information (the harmonic information) belonging to deterministic harmonic components within the vibration signals. Rather than deriving all harmonic information from neighboring spectral bins in the fast Fourier transform spectrum, the proposed active frequency shift spectral correction method makes use of interpolated Fourier spectral bins and has a better noise-resisting capacity. We demonstrate that the harmonic information identified via the proposed method has suppressed numerical error when the same level of noise is present in the vibration signal, even in comparison with a Hanning-window-based correction method. Using the proposed method, we investigated vibration signals collected from a centrifugal compressor. Spectral information of harmonic tones related to the fundamental working frequency of the centrifugal compressor is corrected. The extracted spectral information indicates the ongoing development of an impeller blade crack that occurred in the centrifugal compressor. The method proves to be a promising alternative for identifying blade cracks at early stages.
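For context, the sketch below shows a generic spectral-correction step: estimating a tone's true frequency and amplitude from FFT bins by standard parabolic interpolation on the log-magnitude spectrum. This is a well-known stand-in for the paper's active frequency shift method, which is not reproduced here.

```python
import numpy as np

def corrected_tone(signal, fs):
    n = len(signal)
    X = np.fft.rfft(signal * np.hanning(n))
    mag = np.abs(X)
    k = int(np.argmax(mag[1:-1])) + 1            # coarse peak bin
    a, b, c = np.log(mag[k - 1 : k + 2])         # log-magnitudes around the peak
    delta = 0.5 * (a - c) / (a - 2 * b + c)      # fractional bin offset
    freq = (k + delta) * fs / n                  # corrected frequency (Hz)
    amp = np.exp(b - 0.25 * (a - c) * delta)     # corrected peak magnitude
    phase = np.angle(X[k])                       # phase at nearest bin (uncorrected)
    return freq, amp, phase
```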
Enhancement of Chemical Entity Identification in Text Using Semantic Similarity Validation
Grego, Tiago; Couto, Francisco M.
2013-01-01
With the amount of chemical data being produced and reported in the literature growing at a fast pace, it is increasingly important to retrieve this information efficiently. To tackle this issue, text mining tools have been applied, but despite their good performance they still produce many errors that we believe can be filtered out using semantic similarity. Thus, this paper proposes a novel method that receives the results of chemical entity identification systems, such as Whatizit, and exploits the semantic relationships in ChEBI to measure the similarity between the entities found in the text. The method assigns a single validation score to each entity based on its similarities with the other entities also identified in the text. Then, using a given threshold, the method selects a set of validated entities and a set of outlier entities. We evaluated our method using the results of two state-of-the-art chemical entity identification tools, three semantic similarity measures and two text window sizes. The method was able to increase precision without filtering out a significant number of correctly identified entities. This means that the method can effectively discriminate the correctly identified chemical entities while discarding a significant number of identification errors. For example, selecting a validation set with 75% of all identified entities, we were able to increase the precision by 28% for one of the chemical entity identification tools (Whatizit), while maintaining 97% of the correctly identified entities in that subset. Our method can be used directly as an add-on by any state-of-the-art entity identification tool that provides mappings to a database, in order to improve its results. The proposed method is included in a freely accessible web tool at www.lasige.di.fc.ul.pt/webtools/ice/. PMID:23658791
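The scoring and thresholding step is straightforward to sketch. Here `similarity` stands for any ChEBI-based semantic similarity measure; it is an assumed callable, not part of the published tool.

```python
def validate_entities(entities, similarity, threshold):
    """Score each entity by its average similarity to the other entities
    found in the same text, then split validated entities from outliers."""
    scores = {}
    for e in entities:
        others = [x for x in entities if x is not e]
        scores[e] = sum(similarity(e, x) for x in others) / max(len(others), 1)
    validated = [e for e in entities if scores[e] >= threshold]
    outliers = [e for e in entities if scores[e] < threshold]
    return validated, outliers
```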
Dai, Guang-ming; Campbell, Charles E; Chen, Li; Zhao, Huawei; Chernyak, Dimitri
2009-01-20
In wavefront-driven vision correction, ocular aberrations are often measured on the pupil plane and the correction is applied on a different plane. The problem with this practice is that any changes undergone by the wavefront as it propagates between planes are not currently included in devising customized vision correction. With some valid approximations, we have developed an analytical foundation based on geometric optics in which Zernike polynomials are used to characterize the propagation of the wavefront from one plane to another. Both the boundary and the magnitude of the wavefront change after the propagation. Taylor monomials were used to realize the propagation because of their simple form for this purpose. The method we developed to identify changes in low-order aberrations was verified with the classical vertex correction formula. The method we developed to identify changes in high-order aberrations was verified with ZEMAX ray-tracing software. Although the method may not be valid for highly irregular wavefronts and it was only proven for wavefronts with low-order or high-order aberrations, our analysis showed that changes in the propagating wavefront are significant and should, therefore, be included in calculating vision correction. This new approach could be of major significance in calculating wavefront-driven vision correction whether by refractive surgery, contact lenses, intraocular lenses, or spectacles.
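The classical vertex correction formula the authors used to verify their low-order result is worth stating explicitly: a spherical power P measured at one plane corresponds to P' = P / (1 - d*P) at a plane a distance d away (d in metres, measured toward the eye; powers in dioptres).

```python
def vertex_corrected_power(P, d):
    """Classical vertex (lens effectivity) correction."""
    return P / (1.0 - d * P)

# Example: a -5.00 D spectacle lens worn 12 mm from the cornea is equivalent
# to about -4.72 D at the corneal plane.
print(round(vertex_corrected_power(-5.0, 0.012), 2))  # -> -4.72
```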
ERIC Educational Resources Information Center
Turner, Jill; Rafferty, Lisa A.; Sullivan, Ray; Blake, Amy
2017-01-01
In this action research case study, the researchers used a multiple baseline across two student pairs design to investigate the effects of the error self-correction method on the spelling accuracy behaviors for four fifth-grade students who were identified as being at risk for learning disabilities. The dependent variable was the participants'…
Wilson, Deborah A; Young, Stephen; Timm, Karen; Novak-Weekley, Susan; Marlowe, Elizabeth M; Madisen, Neil; Lillie, Jennifer L; Ledeboer, Nathan A; Smith, Rebecca; Hyke, Josh; Griego-Fullbright, Christen; Jim, Patricia; Granato, Paul A; Faron, Matthew L; Cumpio, Joven; Buchan, Blake W; Procop, Gary W
2017-06-01
We report a multicenter evaluation of the Bruker MALDI Biotyper CA System (MBT-CA; Bruker Daltonics, Billerica, MA) for the identification of clinically important bacteria and yeasts. In total, 4,399 isolates of medically important bacteria and yeasts were assessed in the MBT-CA. These included 2,262 aerobic gram-positive (AGP) bacteria, 792 aerobic gram-negative (AGN) bacteria, 530 anaerobic (AnA) bacteria, and 815 yeasts (YSTs). Three processing methods were assessed. Overall, 98.4% (4,329/4,399) of all bacterial and yeast isolates were correctly identified to the genus and species/species complex level, and 95.7% of isolates were identified with a high degree of confidence. The percentages correctly identified and identified correctly with a high level of confidence, respectively, were as follows: AGP bacteria (98.6%/96.5%), AGN bacteria (98.5%/96.8%), AnA bacteria (98.5%/97.4%), and YSTs (97.8%/87.6%). The extended direct transfer method was only minimally superior to the direct transfer method for bacteria (89.9% vs 86.8%, respectively) but significantly superior for yeast isolates (74.0% vs 48.9%, respectively). The Bruker MALDI Biotyper CA System accurately identifies most clinically important bacteria and yeasts and has optional processing methods to improve isolate characterization. © American Society for Clinical Pathology, 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Investigation on Beam-Blocker-Based Scatter Correction Method for Improving CT Number Accuracy
NASA Astrophysics Data System (ADS)
Lee, Hoyeon; Min, Jonghwan; Lee, Taewon; Pua, Rizza; Sabir, Sohail; Yoon, Kown-Ha; Kim, Hokyung; Cho, Seungryong
2017-03-01
Cone-beam computed tomography (CBCT) is gaining widespread use in various medical and industrial applications but suffers from a substantially larger amount of scatter than conventional diagnostic CT, resulting in relatively poor image quality. Various methods that can reduce and/or correct for the scatter in CBCT have therefore been developed. A scatter correction method that uses a beam-blocker has been considered a direct measurement-based approach, providing accurate scatter estimation from the data in the shadows of the beam-blocker. To the best of our knowledge, there has been no report on the significance of the scatter from the beam-blocker itself in such correction methods. In this paper, we identified the scatter from the beam-blocker that is detected in object-free projection data, investigated its influence on the accuracy of CBCT reconstructed images, and developed a scatter correction scheme that accounts for this scatter as well as the scatter from the scanned object.
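A generic sketch of the beam-blocker approach, under strong simplifications: signal in the blocker shadows approximates scatter, a smooth field is fitted over the detector, and the blocker's own contribution (the effect the paper identifies) is modeled as a constant offset measured from object-free projections. The low-order fit is our choice; real pipelines use finer interpolation.

```python
import numpy as np

def scatter_correct(proj, shadow_mask, blocker_offset=0.0):
    rows, cols = np.nonzero(shadow_mask)
    samples = proj[rows, cols] - blocker_offset   # scatter samples in shadows
    # Fit a first-order 2-D polynomial to the shadow samples.
    A = np.column_stack([rows, cols, np.ones_like(rows)])
    coef, *_ = np.linalg.lstsq(A, samples, rcond=None)
    rr, cc = np.mgrid[: proj.shape[0], : proj.shape[1]]
    scatter = coef[0] * rr + coef[1] * cc + coef[2]
    return proj - scatter                          # scatter-subtracted projection
```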
Sun, Weifang; Yao, Bin; He, Yuchao; Zeng, Nianyin; He, Wangpeng
2017-01-01
Power generation using waste-gas is an effective and green way to reduce the emission of the harmful blast furnace gas (BFG) in the pig-iron producing industry. Condition monitoring of mechanical structures in the BFG power plant is of vital importance to guarantee their safe and efficient operation. In this paper, we describe the detection of crack growth of bladed machinery in the BFG power plant via vibration measurement combined with an enhanced spectral correction technique. This technique enables high-precision identification of the amplitude, frequency, and phase information (the harmonic information) belonging to deterministic harmonic components within the vibration signals. Rather than deriving all harmonic information from neighboring spectral bins in the fast Fourier transform spectrum, the proposed active frequency shift spectral correction method makes use of interpolated Fourier spectral bins and has a better noise-resisting capacity. We demonstrate that the harmonic information identified via the proposed method has suppressed numerical error when the same level of noise is present in the vibration signal, even in comparison with a Hanning-window-based correction method. Using the proposed method, we investigated vibration signals collected from a centrifugal compressor. Spectral information of harmonic tones related to the fundamental working frequency of the centrifugal compressor is corrected. The extracted spectral information indicates the ongoing development of an impeller blade crack that occurred in the centrifugal compressor. The method proves to be a promising alternative for identifying blade cracks at early stages. PMID:28792453
Jahani, Sahar; Setarehdan, Seyed K; Boas, David A; Yücel, Meryem A
2018-01-01
Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent at removing motion-induced sharp spikes, baseline shifts in the signal remain after this type of filtering. Methods such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, in turn, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We compared our new approach with existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error, Pearson's correlation, and the area under the receiver operator characteristic curve. We found that the spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.
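A minimal sketch of the hybrid idea: correct baseline shifts first, then smooth residual spikes with a Savitzky-Golay filter. Shift detection here is a naive first-difference threshold standing in for the paper's more careful identification of motion segments, and the signal is assumed long enough for the chosen window.

```python
import numpy as np
from scipy.signal import savgol_filter

def hybrid_correct(x, shift_thresh, window=31, order=3):
    y = x.astype(float)
    jumps = np.where(np.abs(np.diff(y)) > shift_thresh)[0]
    for j in jumps:
        # Remove the step by re-levelling everything after the jump.
        y[j + 1 :] -= y[j + 1] - y[j]
    # Smooth the remaining high-frequency spikes (window must be odd).
    return savgol_filter(y, window_length=window, polyorder=order)
```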
40 CFR 1068.501 - How do I report emission-related defects?
Code of Federal Regulations, 2010 CFR
2010-07-01
... methods for tracking, investigating, reporting, and correcting emission-related defects. In your request... aggregate in tracking, identifying, investigating, evaluating, reporting, and correcting potential and... it is actually defective. Note that this paragraph (b)(2) does not require data-tracking or recording...
Identifying UMLS concepts from ECG Impressions using KnowledgeMap
Denny, Joshua C.; Spickard, Anderson; Miller, Randolph A; Schildcrout, Jonathan; Darbar, Dawood; Rosenbloom, S. Trent; Peterson, Josh F.
2005-01-01
Electrocardiogram (ECG) impressions represent a wealth of medical information for potential decision support and drug-effect discovery. Much of this information is inaccessible to automated methods in the free-text portion of the ECG report. We studied the application of the KnowledgeMap concept identifier (KMCI) to map Unified Medical Language System (UMLS) concepts from ECG impressions. ECGs were processed by KMCI and the results scored for accuracy by multiple raters. Reviewers also recorded unidentified concepts through the scoring interface. Overall, KMCI correctly identified 1059 out of 1171 concepts for a recall of 0.90. Precision, indicating the proportion of ECG concepts correctly identified, was 0.94. KMCI was particularly effective at identifying ECG rhythms (330/333), perfusion changes (65/66), and noncardiac medical concepts (11/11). In conclusion, KMCI is an effective method for mapping ECG impressions to UMLS concepts. PMID:16779029
Removal of ring artifacts in microtomography by characterization of scintillator variations.
Vågberg, William; Larsson, Jakob C; Hertz, Hans M
2017-09-18
Ring artifacts reduce image quality in tomography and arise from faulty detector calibration. In microtomography, we have identified that ring artifacts can arise from high-spatial-frequency variations in the scintillator thickness. Such variations are normally removed by a flat-field correction. However, as the spectrum changes, e.g. due to beam hardening, the detector response varies non-uniformly, introducing ring artifacts that persist after flat-field correction. In this paper, we present a method to correct for ring artifacts from variations in scintillator thickness by using a simple method to characterize the local scintillator response. The method addresses the actual physical cause of the ring artifacts, in contrast to many other ring artifact removal methods, which rely only on image post-processing. By applying the technique to an experimental phantom tomography, we show that ring artifacts are strongly reduced compared to only making a flat-field correction.
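For reference, the standard flat-field correction the abstract refers to is shown below; it removes fixed gain variations but, as noted above, cannot remove rings caused by the scintillator's spectrum-dependent response. This is the baseline method, not the authors' characterization technique.

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Normalize a projection by flat and dark reference frames."""
    return (raw - dark) / np.clip(flat - dark, 1e-9, None)
```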
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed for the multi-pollutant setting.
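A minimal regression-calibration sketch, the simplest of the methods listed above: replace the error-prone modeled exposure W with an estimate of E[X|W] learned from a validation subset where the true exposure X was measured, then fit the health model on the calibrated values. Purely illustrative; real implementations also adjust standard errors.

```python
import numpy as np

def regression_calibration(W_all, W_val, X_val):
    # np.polyfit with deg=1 returns [slope, intercept] for the model X ~ W.
    b1, b0 = np.polyfit(W_val, X_val, deg=1)
    return b0 + b1 * W_all               # calibrated exposures for the full cohort
```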
A Hybrid Wavelet-Based Method for the Peak Detection of Photoplethysmography Signals.
Li, Suyi; Jiang, Shanqing; Jiang, Shan; Wu, Jiang; Xiong, Wenji; Diao, Shu
2017-01-01
The noninvasive peripheral oxygen saturation (SpO2) and the pulse rate can be extracted from photoplethysmography (PPG) signals. However, the accuracy of the extraction is directly affected by the quality of the signal obtained and the peak of the signal identified; therefore, a hybrid wavelet-based method is proposed in this study. First, we suppressed the partial motion artifacts and corrected the baseline drift by using a wavelet method based on the principle of wavelet multiresolution. Then, we designed a quadratic spline wavelet modulus maximum algorithm to identify the PPG peaks automatically. To evaluate this hybrid method, a reflective pulse oximeter was used to acquire ten subjects' PPG signals under sitting, hand-raising, and gently walking postures, and the peak recognition results on the raw signal and on the corrected signal were compared, respectively. The results showed that the hybrid method not only corrected the morphologies of the signal well but also optimized the peak identification quality, subsequently elevating the measurement accuracy of SpO2 and the pulse rate. As a result, our hybrid wavelet-based method profoundly optimized the evaluation of respiratory function and heart rate variability analysis.
A Hybrid Wavelet-Based Method for the Peak Detection of Photoplethysmography Signals
Jiang, Shanqing; Jiang, Shan; Wu, Jiang; Xiong, Wenji
2017-01-01
The noninvasive peripheral oxygen saturation (SpO2) and the pulse rate can be extracted from photoplethysmography (PPG) signals. However, the accuracy of the extraction is directly affected by the quality of the signal obtained and the peak of the signal identified; therefore, a hybrid wavelet-based method is proposed in this study. First, we suppressed the partial motion artifacts and corrected the baseline drift by using a wavelet method based on the principle of wavelet multiresolution. Then, we designed a quadratic spline wavelet modulus maximum algorithm to identify the PPG peaks automatically. To evaluate this hybrid method, a reflective pulse oximeter was used to acquire ten subjects' PPG signals under sitting, hand-raising, and gently walking postures, and the peak recognition results on the raw signal and on the corrected signal were compared, respectively. The results showed that the hybrid method not only corrected the morphologies of the signal well but also optimized the peak identification quality, subsequently elevating the measurement accuracy of SpO2 and the pulse rate. As a result, our hybrid wavelet-based method profoundly optimized the evaluation of respiratory function and heart rate variability analysis. PMID:29250135
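A hedged sketch of the two wavelet stages using PyWavelets: suppress baseline drift by zeroing the coarsest approximation coefficients of a discrete wavelet decomposition, then locate pulse peaks. `find_peaks` is a generic stand-in for the paper's quadratic spline wavelet modulus-maximum detector, and the wavelet and level are illustrative choices.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def ppg_peaks(ppg, fs, level=6):
    coeffs = pywt.wavedec(ppg, "db4", level=level)
    coeffs[0] = np.zeros_like(coeffs[0])          # drop the drift component
    detrended = pywt.waverec(coeffs, "db4")[: len(ppg)]
    # Enforce a refractory period of ~0.4 s between detected pulse peaks.
    peaks, _ = find_peaks(detrended, distance=max(int(0.4 * fs), 1))
    return detrended, peaks
```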
Kangas, Michael J; Burks, Raychelle M; Atwater, Jordyn; Lukowicz, Rachel M; Garver, Billy; Holmes, Andrea E
2018-02-01
With the increasing availability of digital imaging devices, colorimetric sensor arrays are rapidly becoming a simple yet effective tool for the identification and quantification of various analytes. Colorimetric arrays combine data from many colorimetric sensors, and the multidimensional nature of the resulting data necessitates chemometric analysis. Herein, an 8-sensor colorimetric array was used to analyze selected acidic and basic samples (0.5-10 M) to determine which chemometric methods are best suited for classification and quantification of analytes within clusters. PCA, HCA, and LDA were used to visualize the data set. All three methods showed well-separated clusters for each of the acid or base analytes and moderate separation between analyte concentrations, indicating that the sensor array can be used to identify and quantify samples. Furthermore, PCA could be used to determine which sensors provided the most effective analyte identification. LDA, KNN, and HQI were used for identification of analyte and concentration. HQI and KNN correctly identified the analytes in all cases, while LDA identified 95 of 96 analytes correctly. Additional studies demonstrated that controlling for solvent and image effects was unnecessary for all chemometric methods utilized in this study.
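An illustrative chemometric pipeline for such an array, assuming X is an (n_samples, n_channels) matrix of color responses (e.g., 8 sensors x RGB = 24 channels) and y labels each row with an analyte/concentration class; the random data and counts are placeholders, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

X = np.random.rand(96, 24)              # placeholder sensor-array responses
y = np.repeat(np.arange(16), 6)         # placeholder class labels

pca = PCA(n_components=5).fit(X)        # inspect cluster separation / sensor loadings
scores = pca.transform(X)

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.score(X, y))                  # resubstitution accuracy only; use CV in practice
```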
Technique for improving solid state mosaic images
NASA Technical Reports Server (NTRS)
Saboe, J. M.
1969-01-01
Method identifies and corrects mosaic image faults in solid state visual displays and opto-electronic presentation systems. Composite video signals containing faults due to defective sensing elements are corrected by a memory unit that contains the stored fault pattern and supplies the appropriate fault word to the blanking circuit.
Comparison of stochastic optimization methods for all-atom folding of the Trp-Cage protein.
Schug, Alexander; Herges, Thomas; Verma, Abhinav; Lee, Kyu Hwan; Wenzel, Wolfgang
2005-12-09
The performances of three different stochastic optimization methods for all-atom protein structure prediction are investigated and compared. We use the recently developed all-atom free-energy force field (PFF01), which was demonstrated to correctly predict the native conformation of several proteins as the global optimum of the free energy surface. The trp-cage protein (PDB-code 1L2Y) is folded with the stochastic tunneling method, a modified parallel tempering method, and the basin-hopping technique. All the methods correctly identify the native conformation, and their relative efficiency is discussed.
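A toy illustration of one of the three compared strategies (basin hopping) on a rugged 1-D "energy landscape"; the actual study minimizes an all-atom free-energy force field (PFF01), which is far beyond this sketch.

```python
import numpy as np
from scipy.optimize import basinhopping

def energy(x):
    # Rugged test function with many local minima.
    return np.cos(14.5 * x[0] - 0.3) + (x[0] + 0.2) * x[0]

result = basinhopping(energy, x0=[1.0], niter=200, stepsize=0.5)
print(result.x, result.fun)   # global minimum of the toy landscape
```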
NASA Astrophysics Data System (ADS)
Gillen, Rebecca; Firbank, Michael J.; Lloyd, Jim; O'Brien, John T.
2015-09-01
This study investigated whether the appearance and diagnostic accuracy of HMPAO brain perfusion SPECT images could be improved by using CT-based attenuation and scatter correction compared with the uniform attenuation correction method. A cohort of subjects clinically categorized as Alzheimer's Disease (AD) (n=38), Dementia with Lewy Bodies (DLB) (n=29) or healthy normal controls (n=30) underwent SPECT imaging with Tc-99m HMPAO and a separate CT scan. The SPECT images were processed using (a) a correction map derived from the subject's CT scan, (b) the Chang uniform approximation for correction, or (c) no attenuation correction. Images were visually inspected. The ratios between key regions of interest known to be affected or spared in each condition were calculated for each correction method, and the differences between these ratios were evaluated. The images produced using the different corrections were noted to be visually different. However, ROI analysis found similar statistically significant differences between control and dementia groups and between AD and DLB groups regardless of the correction map used. We did not identify an improvement in diagnostic accuracy in images corrected using CT-based attenuation and scatter correction compared with those corrected using a uniform correction map.
Work productivity loss from depression: evidence from an employer survey.
Rost, Kathryn M; Meng, Hongdao; Xu, Stanley
2014-12-18
National working groups identify the need for return-on-investment research conducted from the purchaser perspective; however, the field has not developed standardized methods for measuring the basic components of return on investment, including costing out the value of work productivity loss due to illness. Recent literature is divided on whether the most commonly used method underestimates or overestimates this loss. The goal of this manuscript is to characterize between- and within-method variation in the cost of work productivity loss from illness estimated by the most commonly used method and its two refinements. One senior health benefit specialist from each of 325 companies employing 100+ workers completed a cross-sectional survey describing their company size, industry and policies/practices regarding work loss, which allowed the research team to derive the variables needed to estimate work productivity loss from illness using three methods. Compensation estimates were derived by multiplying lost work hours from presenteeism and absenteeism by wage/fringe. Disruption correction adjusted this estimate to account for co-worker disruption, while friction correction accounted for labor substitution. The analysis compared bootstrapped means and medians between and within these three methods. The average company realized an annual $617 (SD = $75) per capita loss from depression by compensation methods and a $649 (SD = $78) loss by disruption correction, compared to a $316 (SD = $58) loss by friction correction (p < .0001). Agreement across estimates was 0.92 (95% CI 0.90, 0.93). Although the methods identify similar companies with high costs from lost productivity, friction correction reduces the size of compensation estimates of productivity loss by one half. In analyzing the potential consequences of method selection for the dissemination of interventions to employers, intervention developers are encouraged to include friction methods in their estimates of the economic value of interventions designed to improve absenteeism and presenteeism. Business leaders in industries where labor substitution is common are encouraged to seek friction-corrected estimates of return on investment. Health policy analysts are encouraged to target the dissemination of productivity-enhancing interventions to employers with high losses rather than all employers. NCT01013220.
Methods as Tools: A Response to O'Keefe.
ERIC Educational Resources Information Center
Hewes, Dean E.
2003-01-01
Tries to distinguish the key insights from some distortions by clarifying the goals of experiment-wise error control that D. O'Keefe correctly identifies as vague and open to misuse. Concludes that a better understanding of the goal of experiment-wise error correction erases many of these "absurdities," but the clarifications necessary…
Vlek, Anneloes; Kolecka, Anna; Khayhan, Kantarawee; Theelen, Bart; Groenewald, Marizeth; Boel, Edwin
2014-01-01
An interlaboratory study using matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS) to determine the identification of clinically important yeasts (n = 35) was performed at 11 clinical centers, one company, and one reference center using the Bruker Daltonics MALDI Biotyper system. The optimal cutoff for the MALDI-TOF MS score was investigated using receiver operating characteristic (ROC) curve analyses. The percentages of correct identifications were compared for different sample preparation methods and different databases. Logistic regression analysis was performed to analyze the association between the number of spectra in the database and the percentage of strains that were correctly identified. A total of 5,460 MALDI-TOF MS results were obtained. Using all results, the area under the ROC curve was 0.95 (95% confidence interval [CI], 0.94 to 0.96). With a sensitivity of 0.84 and a specificity of 0.97, a cutoff value of 1.7 was considered optimal. The overall percentage of correct identifications (formic acid-ethanol extraction method, score ≥ 1.7) was 61.5% when the commercial Bruker Daltonics database (BDAL) was used, and it increased to 86.8% by using an extended BDAL supplemented with a Centraalbureau voor Schimmelcultures (CBS)-KNAW Fungal Biodiversity Centre in-house database (BDAL+CBS in-house). A greater number of main spectra (MSP) in the database was associated with a higher percentage of correct identifications (odds ratio [OR], 1.10; 95% CI, 1.05 to 1.15; P < 0.01). The results from the direct transfer method ranged from 0% to 82.9% correct identifications, with the results of the top four centers ranging from 71.4% to 82.9% correct identifications. This study supports the use of a cutoff value of 1.7 for the identification of yeasts using MALDI-TOF MS. The inclusion of enough isolates of the same species in the database can enhance the proportion of correctly identified strains. Further optimization of the preparation methods, especially of the direct transfer method, may contribute to improved diagnosis of yeast-related infections. PMID:24920782
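A sketch of the cutoff-selection step described above: given MALDI-TOF log(score) values and a correct/incorrect label per identification, an ROC curve is computed and the threshold maximizing Youden's J (sensitivity + specificity - 1) is chosen. The score and label arrays below are toy data, not the study's.

```python
import numpy as np
from sklearn.metrics import roc_curve

scores = np.array([2.1, 1.9, 1.4, 2.3, 1.2, 1.8, 1.1, 2.0])  # illustrative
correct = np.array([1, 1, 0, 1, 0, 1, 0, 1])

fpr, tpr, thresholds = roc_curve(correct, scores)
best = np.argmax(tpr - fpr)             # Youden's J criterion
print(thresholds[best])                 # optimal cutoff for this toy data
```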
A Horizontal Tilt Correction Method for Ship License Numbers Recognition
NASA Astrophysics Data System (ADS)
Liu, Baolong; Zhang, Sanyuan; Hong, Zhenjie; Ye, Xiuzi
2018-02-01
An automatic ship license numbers (SLNs) recognition system plays a significant role in intelligent waterway transportation systems, since it can be used to identify ships by recognizing the characters in SLNs. Tilt occurs frequently in many SLNs because the monitors and the ships usually have large vertical or horizontal angles, which significantly decreases the accuracy and robustness of an SLNs recognition system. In this paper, we present a horizontal tilt correction method for SLNs. For a tilted input SLN image, the proposed method accomplishes the correction task in three main steps. First, an MSER-based characters' center-points computation algorithm is designed to compute accurate center-points of the characters contained in the input SLN image. Second, an L1-L2 distance-based straight line is fitted to the computed center-points using an M-estimator algorithm, and the tilt angle is estimated at this stage. Finally, based on the computed tilt angle, an affine rotation is applied to correct the input SLN to horizontal. Tested on 200 tilted SLN images, the proposed method proves effective, with a tilt correction rate of 80.5%.
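The geometric core of the method (fit a line to the character center-points, read off the tilt angle, rotate) can be sketched briefly. Plain least squares stands in for the paper's L1-L2 M-estimator fit, and the MSER-based center-point detection is assumed done elsewhere; the rotation sign may need flipping depending on image coordinate conventions.

```python
# Sketch of the tilt-estimation and rotation steps, with NumPy and OpenCV.
import numpy as np
import cv2

def correct_horizontal_tilt(image, centers):
    """centers: (N, 2) array of character center-points (x, y)."""
    xs, ys = centers[:, 0], centers[:, 1]
    slope, _ = np.polyfit(xs, ys, 1)              # line fit: y = slope*x + b
    angle_deg = np.degrees(np.arctan(slope))      # estimated tilt angle
    h, w = image.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(image, rot, (w, h))     # affine rotation to horizontal
```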
NASA Astrophysics Data System (ADS)
Gao, M.; Li, J.
2018-04-01
Geometric correction is an important preprocessing step in the application of GF4 PMS imagery. Geometric correction based on the manual selection of control points is time-consuming and laborious. The more common approach, based on a reference image, is automatic image registration, which involves several steps and parameters. For the multi-spectral sensor GF4 PMS, it is necessary to identify the best combination of parameters and steps. This study mainly focuses on the following issues: the necessity of Rational Polynomial Coefficients (RPC) correction before automatic registration, the choice of base band for automatic registration, and the configuration of GF4 PMS spatial resolution.
POCS-enhanced correction of motion artifacts in parallel MRI.
Samsonov, Alexey A; Velikina, Julia; Jung, Youngkyoo; Kholmovski, Eugene G; Johnson, Chris R; Block, Walter F
2010-04-01
A new method for correction of MRI motion artifacts induced by corrupted k-space data, acquired by multiple receiver coils such as phased arrays, is presented. In our approach, a projections onto convex sets (POCS)-based method for reconstruction of sensitivity encoded MRI data (POCSENSE) is employed to identify corrupted k-space samples. After the erroneous data are discarded from the dataset, the artifact-free images are restored from the remaining data using coil sensitivity profiles. The error detection and data restoration are based on the informational redundancy of phased-array data and may be applied to full and reduced datasets. An important advantage of the new POCS-based method is that, in addition to multicoil data redundancy, it can use a priori known properties of the imaged object for improved MR image artifact correction. The use of such information was shown to significantly improve k-space error detection and image artifact correction. The method was validated on data corrupted by simulated and real motion such as head motion and pulsatile flow.
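A generic POCS iteration conveys the idea: alternately enforce consistency with the retained k-space samples and a known constraint on the image. POCSENSE itself additionally projects onto the set defined by the coil sensitivity profiles, which is omitted in this single-coil sketch.

```python
# Generic POCS restoration from partially discarded k-space data (illustration
# only; the paper's POCSENSE method adds multicoil sensitivity constraints).
import numpy as np

def pocs_restore(kspace, good_mask, support, n_iter=50):
    """good_mask: True where k-space samples were kept; support: object mask."""
    img = np.zeros_like(kspace)
    for _ in range(n_iter):
        k = np.fft.fft2(img)
        k[good_mask] = kspace[good_mask]   # project onto data-consistent set
        img = np.fft.ifft2(k)
        img *= support                     # project onto known-support set
    return img
```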
Classification and correction of the radar bright band with polarimetric radar
NASA Astrophysics Data System (ADS)
Hall, Will; Rico-Ramirez, Miguel; Kramer, Stefan
2015-04-01
The annular region of enhanced radar reflectivity, known as the Bright Band (BB), occurs when the radar beam intersects a layer of melting hydrometeors. Radar reflectivity is related to rainfall through a power-law equation, and this enhanced region can lead to overestimation of rainfall by a factor of up to 5, so it is important to correct for it. The BB region can be identified using several techniques, including hydrometeor classification and freezing-level forecasts from mesoscale meteorological models. Advances in dual-polarisation radar measurements and continued research in the field have increased the accuracy with which the melting snow region can be identified. A method proposed by Kitchen et al. (1994), a form of which is currently used operationally in the UK, utilises idealised Vertical Profiles of Reflectivity (VPR) to correct for the BB enhancement. A simpler and more computationally efficient method involves the formation of an average VPR from multiple elevations for correction, which can still yield a significant decrease in error (Vignal et al. 2000). The purpose of this research is to evaluate a method that relies only on analysis of measurements from an operational C-band polarimetric radar, without the need for computationally expensive models. Initial results show that LDR is a strong classifier of melting snow, with a high Critical Success Index of 97% compared to the other variables. An algorithm based on idealised VPRs resulted in the largest decrease in error when BB-corrected scans are compared to rain gauges and to lower-level scans, with a reduction in RMSE of 61% for rain-rate measurements. References: Kitchen, M., R. Brown, and A. G. Davies, 1994: Real-time correction of weather radar data for the effects of bright band, range and orographic growth in widespread precipitation. Q. J. R. Meteorol. Soc., 120, 1231-1254. Vignal, B., et al., 2000: Three methods to determine profiles of reflectivity from volumetric radar data to correct precipitation estimates. J. Appl. Meteor., 39(10), 1715-1726.
Carleton, W. Christopher; Campbell, David; Collard, Mark
2018-01-01
Statistical time-series analysis has the potential to improve our understanding of human-environment interaction in deep time. However, radiocarbon dating—the most common chronometric technique in archaeological and palaeoenvironmental research—creates challenges for established statistical methods. The methods assume that observations in a time-series are precisely dated, but this assumption is often violated when calibrated radiocarbon dates are used because they usually have highly irregular uncertainties. As a result, it is unclear whether the methods can be reliably used on radiocarbon-dated time-series. With this in mind, we conducted a large simulation study to investigate the impact of chronological uncertainty on a potentially useful time-series method. The method is a type of regression involving a prediction algorithm called the Poisson Exponentially Weighted Moving Average (PEWMA). It is designed for use with count time-series data, which makes it applicable to a wide range of questions about human-environment interaction in deep time. Our simulations suggest that the PEWMA method can often correctly identify relationships between time-series despite chronological uncertainty. When two time-series are correlated with a coefficient of 0.25, the method is able to identify that relationship correctly 20–30% of the time, provided the time-series contain low noise levels. With correlations of around 0.5, it is capable of correctly identifying correlations despite chronological uncertainty more than 90% of the time. While further testing is desirable, these findings indicate that the method can be used to test hypotheses about long-term human-environment interaction with a reasonable degree of confidence. PMID:29351329
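The idea behind the PEWMA algorithm can be shown in simplified form: the expected count evolves as an exponentially weighted moving average of past counts, and each observation is treated as Poisson with that mean. This omits the full PEWMA state-space likelihood, so it is only a sketch of the prediction step.

```python
# Simplified EWMA-based one-step-ahead Poisson means for a count time-series.
import numpy as np

def ewma_poisson_means(counts, w=0.3):
    """w: weight on the most recent observation (an assumed tuning value)."""
    mu = np.empty(len(counts), dtype=float)
    mu[0] = counts[0]
    for t in range(1, len(counts)):
        mu[t] = (1 - w) * mu[t - 1] + w * counts[t - 1]
    return mu  # mu[t] is the predicted Poisson mean for counts[t]
```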
Can the behavioral sciences self-correct? A social epistemic study.
Romero, Felipe
2016-12-01
Advocates of the self-corrective thesis argue that scientific method will refute false theories and find closer approximations to the truth in the long run. I discuss a contemporary interpretation of this thesis in terms of frequentist statistics in the context of the behavioral sciences. First, I identify experimental replications and systematic aggregation of evidence (meta-analysis) as the self-corrective mechanism. Then, I present a computer simulation study of scientific communities that implement this mechanism to argue that frequentist statistics may converge upon a correct estimate or not depending on the social structure of the community that uses it. Based on this study, I argue that methodological explanations of the "replicability crisis" in psychology are limited and propose an alternative explanation in terms of biases. Finally, I conclude by suggesting that scientific self-correction should be understood as an interaction effect between inference methods and social structures. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lubow, Bruce C; Ransom, Jason I
2016-01-01
Reliably estimating wildlife abundance is fundamental to effective management. Aerial surveys are one of the only spatially robust tools for estimating large mammal populations, but statistical sampling methods are required to address detection biases that affect accuracy and precision of the estimates. Although various methods for correcting aerial survey bias are employed on large mammal species around the world, these have rarely been rigorously validated. Several populations of feral horses (Equus caballus) in the western United States have been intensively studied, resulting in identification of all unique individuals. This provided a rare opportunity to test aerial survey bias correction on populations of known abundance. We hypothesized that a hybrid method combining simultaneous double-observer and sightability bias correction techniques would accurately estimate abundance. We validated this integrated technique on populations of known size and also on a pair of surveys before and after a known number was removed. Our analysis identified several covariates across the surveys that explained and corrected biases in the estimates. All six tests on known populations produced estimates with deviations from the known value ranging from -8.5% to +13.7% and <0.7 standard errors. Precision varied widely, from 6.1% CV to 25.0% CV. In contrast, the pair of surveys conducted around a known management removal produced an estimated change in population between the surveys that was significantly larger than the known reduction. Although the deviation was only 9.1%, the precision estimate (CV = 1.6%) may have been artificially low. It was apparent that use of a helicopter in those surveys perturbed the horses, introducing detection error and heterogeneity in a manner that could not be corrected by our statistical models. Our results validate the hybrid method, highlight its potentially broad applicability, identify some limitations, and provide insight and guidance for improving survey designs.
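The double-observer half of the hybrid method rests on a Lincoln-Petersen-type calculation; the study's model also folds in sightability covariates, which this minimal sketch omits.

```python
# Core double-observer estimate: detection probabilities and abundance from
# counts of groups seen by observer 1 (x1), observer 2 (x2), and both.
def double_observer_estimate(x1, x2, both):
    p1 = both / x2            # observer 1's detection probability
    p2 = both / x1            # observer 2's detection probability
    n_hat = x1 * x2 / both    # Lincoln-Petersen estimate of groups present
    return p1, p2, n_hat

print(double_observer_estimate(x1=80, x2=75, both=60))  # (0.8, 0.75, 100.0)
```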
Inserting Mastered Targets during Error Correction When Teaching Skills to Children with Autism
ERIC Educational Resources Information Center
Plaisance, Lauren; Lerman, Dorothea C.; Laudont, Courtney; Wu, Wai-Ling
2016-01-01
Research has identified a variety of effective approaches for responding to errors during discrete-trial training. In one commonly used method, the therapist delivers a prompt contingent on the occurrence of an incorrect response and then re-presents the trial so that the learner has an opportunity to perform the correct response independently.…
Han, Hyemin; Glenn, Andrea L
2018-06-01
In fMRI research, the goal of correcting for multiple comparisons is to identify areas of activity that reflect true effects, and thus would be expected to replicate in future studies. Finding an appropriate balance between trying to minimize false positives (Type I error) while not being too stringent and omitting true effects (Type II error) can be challenging. Furthermore, the advantages and disadvantages of these types of errors may differ for different areas of study. In many areas of social neuroscience that involve complex processes and considerable individual differences, such as the study of moral judgment, effects are typically smaller and statistical power weaker, leading to the suggestion that less stringent corrections that allow for more sensitivity may be beneficial and also result in more false positives. Using moral judgment fMRI data, we evaluated four commonly used methods for multiple comparison correction implemented in Statistical Parametric Mapping 12 by examining which method produced the most precise overlap with results from a meta-analysis of relevant studies and with results from nonparametric permutation analyses. We found that voxelwise thresholding with familywise error correction based on Random Field Theory provides a more precise overlap (i.e., without omitting too few regions or encompassing too many additional regions) than either clusterwise thresholding, Bonferroni correction, or false discovery rate correction methods.
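Two of the four corrections compared above are easy to state directly on a vector of voxel p-values; SPM's voxelwise familywise error correction via Random Field Theory is more involved and is not reproduced here.

```python
# Bonferroni and Benjamini-Hochberg FDR thresholding of voxel p-values.
import numpy as np

def bonferroni(p, alpha=0.05):
    return p < alpha / p.size

def benjamini_hochberg(p, alpha=0.05):
    order = np.argsort(p)
    thresh = alpha * np.arange(1, p.size + 1) / p.size
    passed = np.nonzero(p[order] <= thresh)[0]
    keep = np.zeros(p.size, dtype=bool)
    if passed.size:                      # reject all up to the largest passing rank
        keep[order[:passed[-1] + 1]] = True
    return keep
```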
Zhang, Yanbin; Lin, Guanfeng; Wang, Shengru; Zhang, Jianguo; Shen, Jianxiong; Wang, Yipeng; Guo, Jianwei; Yang, Xinyu; Zhao, Lijuan
2016-01-01
Study Design. Retrospective study. Objective. To study the behavior of the unfused thoracic curve in Lenke type 5C during the follow-up and to identify risk factors for its correction loss. Summary of Background Data. Few studies have focused on the spontaneous behaviors of the unfused thoracic curve after selective thoracolumbar or lumbar fusion during the follow-up and the risk factors for spontaneous correction loss. Methods. We retrospectively reviewed 45 patients (41 females and 4 males) with adolescent idiopathic scoliosis (AIS) who underwent selective TL/L fusion from 2006 to 2012 in a single institution. The follow-up averaged 36 months (range, 24–105 months). Patients were divided into two groups. Thoracic curves in group A improved or maintained their curve magnitude after spontaneous correction, with a negative or no correction loss during the follow-up. Thoracic curves in group B deteriorated after spontaneous correction with a positive correction loss. Univariate and multivariate analyses were performed to identify the risk factors for correction loss of the unfused thoracic curves. Results. The minor thoracic curve was 26° preoperatively. It was corrected to 13° immediately with a spontaneous correction of 48.5%. At final follow-up it was 14° with a correction loss of 1°. Thoracic curves did not deteriorate after spontaneous correction in 23 cases in group A, while the thoracic curve progressed in 22 cases in group B. In multivariate analysis, two risk factors were independently associated with thoracic correction loss: higher flexibility and better immediate spontaneous correction rate of the thoracic curve. Conclusion. Posterior selective TL/L fusion with pedicle screw constructs is an effective treatment for Lenke 5C AIS patients. Nonstructural thoracic curves with higher flexibility or better immediate correction are more likely to progress during the follow-up, and close attention must be paid to these patients in case of decompensation. Level of Evidence: 4 PMID:27831989
Evaluation of Molecular Methods for Identification of Salmonella Serovars
Gurnik, Simone; Ahmad, Aaminah; Blimkie, Travis; Murphy, Stephanie A.; Kropinski, Andrew M.; Nash, John H. E.
2016-01-01
Classification by serotyping is the essential first step in the characterization of Salmonella isolates and is important for surveillance, source tracking, and outbreak detection. To improve detection and reduce the burden of salmonellosis, several rapid and high-throughput molecular Salmonella serotyping methods have been developed. The aim of this study was to compare three commercial kits, Salm SeroGen (Salm Sero-Genotyping AS-1 kit), Check&Trace (Check-Points), and xMAP (xMAP Salmonella serotyping assay), to the Salmonella genoserotyping array (SGSA) developed by our laboratory. They were assessed using a panel of 321 isolates that represent commonly reported serovars from human and nonhuman sources globally. The four methods correctly identified 73.8% to 94.7% of the isolates tested. The methods correctly identified 85% and 98% of the clinically important Salmonella serovars Enteritidis and Typhimurium, respectively. The methods correctly identified 75% to 100% of the nontyphoidal, broad host range Salmonella serovars, including Heidelberg, Hadar, Infantis, Kentucky, Montevideo, Newport, and Virchow. The sensitivity and specificity of Salmonella serovars Typhimurium and Enteritidis ranged from 85% to 100% and 99% to 100%, respectively. It is anticipated that whole-genome sequencing will replace serotyping in public health laboratories in the future. However, at present, it is approximately three times more expensive than molecular methods. Until consistent standards and methodologies are deployed for whole-genome sequencing, data analysis and interlaboratory comparability remain a challenge. The use of molecular serotyping will provide a valuable high-throughput alternative to traditional serotyping. This comprehensive analysis provides a detailed comparison of commercial kits available for the molecular serotyping of Salmonella. PMID:27194688
Robert, Mark E; Linthicum, Fred H
2016-01-01
The profile count method for estimating cell number in sectioned tissue applies a correction factor for double counts (resulting from transection during sectioning) of the count units selected to represent the cell. For human spiral ganglion cell counts, we attempted to address apparent confusion between published correction factors for nucleus and nucleolus count units, which are identical despite the role of count unit diameter in a commonly used correction factor formula. We examined a portion of human cochlea to empirically derive correction factors for the 2 count units, using 3-dimensional reconstruction software to identify double counts. The Neurotology and House Histological Temporal Bone Laboratory at the University of California at Los Angeles. Using a fully sectioned and stained human temporal bone, we identified and generated digital images of sections of the modiolar region of the lower first turn of cochlea, identified count units with a light microscope, labeled them on corresponding digital sections, and used 3-dimensional reconstruction software to identify double-counted count units. For 25 consecutive sections, we determined that double-count correction factors for the nucleus count unit (0.91) and nucleolus count unit (0.92) matched the published factors. We discovered that nuclei and, therefore, spiral ganglion cells were undercounted by 6.3% when using nucleolus count units. We determined that correction factors for count units must include an element for undercounting spiral ganglion cells as well as the double-count element. We recommend a correction factor of 0.91 for the nucleus count unit and 0.98 for the nucleolus count unit when using 20-µm sections. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
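A commonly used correction factor formula of the kind the abstract refers to is Abercrombie's: true count = raw count × T / (T + d), with T the section thickness and d the count-unit diameter. The diameter below is an illustrative assumption, chosen only to show that 20-µm sections and a small count unit give a factor near 0.91.

```python
# Abercrombie-style double-count correction factor (illustrative diameters).
def abercrombie_factor(thickness_um, unit_diameter_um):
    return thickness_um / (thickness_um + unit_diameter_um)

print(abercrombie_factor(20.0, 2.0))  # ~0.91 for a ~2-um unit in 20-um sections
```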
NASA Technical Reports Server (NTRS)
Mullally, Fergal
2017-01-01
We present an automated method of identifying background eclipsing binaries masquerading as planet candidates in the Kepler planet candidate catalogs. We codify the manual vetting process for Kepler Objects of Interest (KOIs) described in Bryson et al. (2013) with a series of measurements and tests that can be performed algorithmically. We compare our automated results with a sample of manually vetted KOIs from the catalog of Burke et al. (2014) and find excellent agreement. We test the performance on a set of simulated transits and find our algorithm correctly identifies simulated false positives approximately 50% of the time, and correctly identifies 99% of simulated planet candidates.
Endoscopic findings following retroperitoneal pancreas transplantation.
Pinchuk, Alexey V; Dmitriev, Ilya V; Shmarina, Nonna V; Teterin, Yury S; Balkarov, Aslan G; Storozhev, Roman V; Anisimov, Yuri A; Gasanov, Ali M
2017-07-01
To evaluate the efficacy of endoscopic methods for the diagnosis and correction of surgical and immunological complications after retroperitoneal pancreas transplantation. From October 2011 to March 2015, 27 patients underwent simultaneous retroperitoneal pancreas-kidney transplantation (SPKT). Diagnostic oesophagogastroduodenoscopy (EGD) with protocol biopsy of the donor and recipient duodenal mucosa and endoscopic retrograde pancreatography (ERP) were performed to detect possible complications. Endoscopic stenting of the main pancreatic duct with plastic stents and three-stage endoscopic hemostasis were conducted to correct the identified complications. Endoscopic methods showed high efficiency in the timely diagnosis and adequate correction of complications after retroperitoneal pancreas transplantation. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Evaluation of the Vitek 2 ANC card for identification of clinical isolates of anaerobic bacteria.
Lee, E H L; Degener, J E; Welling, G W; Veloo, A C M
2011-05-01
An evaluation of the Vitek 2 ANC card (bioMérieux, Marcy l'Etoile, France) was performed with 301 anaerobic isolates. Each strain was identified by 16S rRNA gene sequencing, which is considered to be the reference method. The Vitek 2 ANC card correctly identified 239 (79.4%) of the 301 clinical isolates to the genus level, including 100 species that were not represented in the database. Correct species identification was obtained for 60.1% (181/301) of the clinical isolates. For the isolates not identified to the species level, a correct genus identification was obtained for 47.0% of them (47/100), and 16 were accurately designated not identified. Although the Vitek 2 ANC card allows the rapid and acceptable identification of the most common clinically important anaerobic bacteria within 6 h, improvement is required for the identification of members of the genera Fusobacterium, Prevotella, and Actinomyces and certain Gram-positive anaerobic cocci (GPAC).
Teng, L J; Luh, K T; Ho, S W
1985-11-01
Species identifications of 71 strains of viridans streptococci isolated from blood and 4 reference strains were made with the API 20 STREP system (API system S. A., Montalieu-Vercien, France) and by the conventional method. There was a high level of agreement between the results obtained with the two methods for determining acidification from carbohydrates, except for inulin. The API 20 STREP system correctly identified 74.7% of the viridans streptococci, with 9.3% low discrimination, 12% incorrect, and 4% unidentified. All strains of S. mitis, S. mutans, S. salivarius and S. anginosus-constellatus were correctly identified. The correct identification rates for S. sanguis I, S. sanguis II and S. MG-intermedius were 88.9%, 68% and 61%, respectively. Differences in the inulin reaction and taxonomic discrepancies may account for the differing identifications. The study indicates that the API 20 STREP system has good potential for species identification of viridans streptococci at the present time; however, further refinement is needed.
The remote diagnosis of malaria using telemedicine or e-mailed images.
Murray, Clinton K; Mody, Rupal M; Dooley, David P; Hospenthal, Duane R; Horvath, Lynn L; Moran, Kimberly A; Muntz, Ronald W
2006-12-01
We determined the ability of blinded remote expert microscopy to identify malaria parasites through transmission of malaria smear images via telemedicine and as e-mail attachments. Protocols for malaria smear transmission included: (1) transmission of sender-selected televised smears at various bandwidths (Bw), (2) transmission of remote reader-directed televised smears at various Bw, and (3) transmission of digital photomicrographs as e-mail attachments. Twenty (14%) of 147 sender-selected, and 13 (6%) of 221 reader-directed, images were deemed unreadable by slide readers. The presence or absence of malaria was correctly identified in 98% of the remaining images. Sixty-four (34%) of 190 digital microphotographs were deemed unreadable, while the presence or absence of malaria was correctly identified in 100% of the remaining images. Correct speciation ranged from 45% to 83% across various transmission methods and Bw. The use of telemedicine and e-mail technology shows promise for the remote diagnosis of malaria.
Human factors process failure modes and effects analysis (HF PFMEA) software tool
NASA Technical Reports Server (NTRS)
Chandler, Faith T. (Inventor); Relvini, Kristine M. (Inventor); Shedd, Nathaneal P. (Inventor); Valentino, William D. (Inventor); Philippart, Monica F. (Inventor); Bessette, Colette I. (Inventor)
2011-01-01
Methods, computer-readable media, and systems for automatically performing Human Factors Process Failure Modes and Effects Analysis for a process are provided. At least one task involved in a process is identified, where the task includes at least one human activity. The human activity is described using at least one verb. A human error potentially resulting from the human activity is automatically identified; the human error is related to the verb used in describing the task. A likelihood of occurrence, detection, and correction of the human error is identified. The severity of the effect of the human error is identified. From the likelihood of occurrence and the severity, the risk of potential harm is identified. The risk of potential harm is compared with a risk threshold to identify the appropriateness of corrective measures.
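A conventional FMEA-style risk computation conveys the comparison step; the patent's exact scoring scheme is not given in the abstract, so the combination rule below is an assumption.

```python
# Sketch: combine likelihoods and severity, then compare to a risk threshold.
def needs_corrective_measures(p_occur, p_undetected, severity, threshold):
    """Risk = chance the error occurs and goes uncorrected, scaled by severity."""
    risk = p_occur * p_undetected * severity
    return risk, risk > threshold

print(needs_corrective_measures(0.01, 0.5, severity=9, threshold=0.02))
```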
Sloan, N L; Rosen, D; de la Paz, T; Arita, M; Temalilwa, C; Solomons, N W
1997-02-01
The prevalence of vitamin A deficiency has traditionally been assessed through xerophthalmia or biochemical surveys. The cost and complexity of implementing these methods limit the ability of nonresearch organizations to identify vitamin A deficiency. This study examined the validity of a simple, inexpensive food frequency method to identify areas with a high prevalence of vitamin A deficiency. The validity of the method was tested in 15 communities, 5 each from the Philippines, Guatemala, and Tanzania. Serum retinol concentrations of less than 20 micrograms/dL defined vitamin A deficiency. Weighted measures of vitamin A intake six or fewer times per week and unweighted measures of consumption of animal sources of vitamin A four or fewer times per week correctly classified seven of eight communities as having a high prevalence of vitamin A deficiency (i.e., 15% or more of preschool-aged children in the community had the deficiency) (sensitivity = 87.5%) and four of seven communities as having a low prevalence (specificity = 57.1%). This method correctly classified the vitamin A deficiency status of 73.3% of the communities but demonstrated a high false-positive rate (42.9%).
Identification of Enzyme Genes Using Chemical Structure Alignments of Substrate-Product Pairs.
Moriya, Yuki; Yamada, Takuji; Okuda, Shujiro; Nakagawa, Zenichi; Kotera, Masaaki; Tokimatsu, Toshiaki; Kanehisa, Minoru; Goto, Susumu
2016-03-28
Although there are several databases that contain data on many metabolites and reactions in biochemical pathways, there is still a large gap between the numbers of experimentally identified enzymes and metabolites. It is supposed that many catalytic enzyme genes are still unknown. Although previous studies have estimated the number of candidate enzyme genes, these studies required additional information aside from the structures of metabolites, such as gene expression and order in the genome. In this study, we developed a novel method to identify a candidate enzyme gene of a reaction using the chemical structures of the substrate-product pair (reactant pair). The proposed method is based on a search for similar reactant pairs in a reference database and offers ortholog groups that possibly mediate the given reaction. We applied the proposed method to two experimentally validated reactions. As a result, we confirmed that the histidine transaminase was correctly identified. Although our method could not directly identify the asparagine oxo-acid transaminase, we successfully found the paralog gene most similar to the correct enzyme gene. We also applied our method to infer candidate enzyme genes in the mesaconate pathway. The advantage of our method lies in the prediction of possible genes for orphan enzyme reactions, where the associated gene sequences have not yet been determined. We believe that this approach will facilitate experimental identification of genes for orphan enzymes.
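The search-for-similar-reactant-pairs idea can be illustrated with fingerprints, though the paper's own method aligns chemical structures rather than comparing fingerprints; RDKit and the example molecules are assumptions for demonstration.

```python
# Score a query substrate-product pair against a reference pair by averaging
# substrate-substrate and product-product Tanimoto similarities (RDKit).
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fp(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

def pair_similarity(query, reference):
    (qs, qp), (rs, rp) = ([fp(s) for s in pair] for pair in (query, reference))
    return 0.5 * (DataStructs.TanimotoSimilarity(qs, rs) +
                  DataStructs.TanimotoSimilarity(qp, rp))

# Hypothetical transaminase pairs: L-alanine -> pyruvate vs. L-aspartate -> oxaloacetate.
print(pair_similarity(("C[C@@H](N)C(=O)O", "CC(=O)C(=O)O"),
                      ("N[C@@H](CC(=O)O)C(=O)O", "O=C(O)CC(=O)C(=O)O")))
```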
Yang, Xiaofeng; Wu, Shengyong; Sechopoulos, Ioannis; Fei, Baowei
2012-01-01
Purpose: To develop and test an automated algorithm to classify the different tissues present in dedicated breast CT images. Methods: The original CT images are first corrected to overcome cupping artifacts, and then a multiscale bilateral filter is used to reduce noise while keeping edge information on the images. As skin and glandular tissues have similar CT values on breast CT images, morphologic processing is used to identify the skin mask based on its position information. A modified fuzzy C-means (FCM) classification method is then used to classify breast tissue as fat and glandular tissue. By combining the results of the skin mask with the FCM, the breast tissue is classified as skin, fat, and glandular tissue. To evaluate the authors’ classification method, the authors use Dice overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on eight patient images. Results: The correction method was able to correct the cupping artifacts and improve the quality of the breast CT images. For glandular tissue, the overlap ratios between the authors’ automatic classification and manual segmentation were 91.6% ± 2.0%. Conclusions: A cupping artifact correction method and an automatic classification method were applied and evaluated for high-resolution dedicated breast CT images. Breast tissue classification can provide quantitative measurements regarding breast composition, density, and tissue distribution. PMID:23039675
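The classification step builds on standard fuzzy C-means; the paper uses a modified FCM, so the baseline algorithm below is only the starting point it refines.

```python
# Standard fuzzy C-means on a 1-D array of voxel intensities.
import numpy as np

def fcm(x, c=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=x.size)     # random fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)        # membership-weighted centroids
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))                # standard membership update
        u /= u.sum(axis=1, keepdims=True)
    return u, centers
```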
CULTURE-INDEPENDENT MOLECULAR METHODS FOR FECAL SOURCE IDENTIFICATION
Fecal contamination is widespread in the waterways of the United States. Both to correct the problem, and to estimate public health risk, it is necessary to identify the source of the contamination. Several culture-independent molecular methods for fecal source identification hav...
Statistical inference of static analysis rules
NASA Technical Reports Server (NTRS)
Engler, Dawson Richards (Inventor)
2009-01-01
Various apparatus and methods are disclosed for identifying errors in program code. Respective numbers of observances of at least one correctness rule by different code instances that relate to the at least one correctness rule are counted in the program code. Each code instance has an associated counted number of observances of the correctness rule by the code instance. Also counted are respective numbers of violations of the correctness rule by different code instances that relate to the correctness rule. Each code instance has an associated counted number of violations of the correctness rule by the code instance. A respective likelihood of the validity is determined for each code instance as a function of the counted number of observances and counted number of violations. The likelihood of validity indicates a relative likelihood that a related code instance is required to observe the correctness rule. The violations may be output in order of the likelihood of validity of a violated correctness rule.
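The patent leaves the ranking function open ("a function of the counted number of observances and counted number of violations"); one plausible choice, familiar from deviant-behavior bug finding, is a z-score for how strongly the counts suggest a real rule. The statistic below is therefore an assumption, not the patent's formula.

```python
# Rank candidate rules by a z-score of observed consistency vs. a chance rate.
import math

def validity_score(observances, violations, p0=0.9):
    n = observances + violations
    if n == 0:
        return 0.0
    p_hat = observances / n
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

# A rule observed 48 times with 2 violations outranks one observed 3-to-2;
# the 2 violations of the high-scoring rule are then reported as likely errors.
print(validity_score(48, 2), validity_score(3, 2))
```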
Application of wavelet multi-resolution analysis for correction of seismic acceleration records
NASA Astrophysics Data System (ADS)
Ansari, Anooshiravan; Noorzad, Assadollah; Zare, Mehdi
2007-12-01
During an earthquake, many stations record the ground motion, but only a few of the records can be corrected using conventional high-pass and low-pass filtering methods; the others are identified as highly contaminated by noise and, as a result, useless. There are two major problems associated with these noisy records. First, since the signal-to-noise ratio (S/N) is low, it is not possible to discriminate between the original signal and noise either in the frequency domain or in the time domain. Consequently, it is not possible to cancel out the noise using conventional filtering methods. The second problem is the non-stationary character of the noise: in many cases the characteristics of the noise vary over time, and in these situations it is not possible to apply frequency-domain correction schemes. When correcting acceleration signals contaminated with high-level non-stationary noise, there is an important question whether it is possible to estimate the state of the noise in different bands of time and frequency. Wavelet multi-resolution analysis decomposes a signal into different time-frequency components and, besides introducing a suitable criterion for identifying the noise in each component, provides the mathematical tools required for the correction of highly noisy acceleration records. In this paper, the characteristics of wavelet de-noising procedures are examined through the correction of selected real and synthetic acceleration time histories. It is concluded that this method provides a very flexible and efficient tool for the correction of very noisy and non-stationary records of ground acceleration. In addition, a two-step correction scheme is proposed for long-period correction of the acceleration records. This method has the advantage of stable results in the displacement time history and response spectrum.
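A minimal wavelet-thresholding pass shows the mechanics; the paper's criterion for identifying noise within each component is more elaborate, so universal soft thresholding (via PyWavelets) stands in here.

```python
# Wavelet de-noising of an acceleration record with universal soft thresholding.
import numpy as np
import pywt

def wavelet_denoise(accel, wavelet="db4", level=5):
    coeffs = pywt.wavedec(accel, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise scale from finest detail
    thresh = sigma * np.sqrt(2 * np.log(len(accel)))    # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```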
Validation of an improved abnormality insertion method for medical image perception investigations
NASA Astrophysics Data System (ADS)
Madsen, Mark T.; Durst, Gregory R.; Caldwell, Robert T.; Schartz, Kevin M.; Thompson, Brad H.; Berbaum, Kevin S.
2009-02-01
The ability to insert abnormalities in clinical tomographic images makes image perception studies with medical images practical. We describe a new insertion technique and its experimental validation that uses complementary image masks to select an abnormality from a library and place it at a desired location. The method was validated using a 4-alternative forced-choice experiment. For each case, four quadrants were simultaneously displayed consisting of 5 consecutive frames of a chest CT with a pulmonary nodule. One quadrant was unaltered, while the other 3 had the nodule from the unaltered quadrant artificially inserted. 26 different sets were generated and repeated with order scrambling for a total of 52 cases. The cases were viewed by radiology staff and residents who ranked each quadrant by realistic appearance. On average, the observers were able to correctly identify the unaltered quadrant in 42% of cases, and identify the unaltered quadrant both times it appeared in 25% of cases. Consensus, defined by a majority of readers, correctly identified the unaltered quadrant in only 29% of 52 cases. For repeats, the consensus observer successfully identified the unaltered quadrant only once. We conclude that the insertion method can be used to reliably place abnormalities in perception experiments.
Page, Brent T; Shields, Christine E; Merz, William G; Kurtzman, Cletus P
2006-09-01
This study was designed to compare the identification of ascomycetous yeasts recovered from clinical specimens by using phenotypic assays (PA) and a molecular flow cytometric (FC) method. Large-subunit rRNA domains 1 and 2 (D1/D2) gene sequence analysis was also performed and served as the reference for correct strain identification. A panel of 88 clinical isolates was tested that included representatives of nine commonly encountered species and six infrequently encountered species. The PA included germ tube production, fermentation of seven carbohydrates, morphology on corn meal agar, urease and phenoloxidase activities, and carbohydrate assimilation tests when needed. The FC method (Luminex) employed species-specific oligonucleotides attached to polystyrene beads, which were hybridized with D1/D2 amplicons from the unidentified isolates. The PA identified 81 of 88 strains correctly but misidentified 4 of Candida dubliniensis, 1 of C. bovina, 1 of C. palmioleophila, and 1 of C. bracarensis. The FC method correctly identified 79 of 88 strains and did not misidentify any isolate but did not identify nine isolates because oligonucleotide probes were not available in the current library. The FC assay takes approximately 5 h, whereas the PA takes from 2 h to 5 days for identification. In conclusion, PA did well with the commonly encountered species, was not accurate for uncommon species, and takes significantly longer than the FC method. These data strongly support the potential of FC technology for rapid and accurate identification of medically important yeasts. With the introduction of new antifungals, rapid, accurate identification of pathogenic yeasts is more important than ever for guiding antifungal chemotherapy.
Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model
Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.
2012-01-01
This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows, the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate, so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
You, J S; Park, S; Chung, S P; Park, J W
2009-03-01
A needle thoracocentesis should be performed with maximal safety and optimal efficacy in mind. Mobile video telephony (VT) could be used to facilitate instructions for the accurate performance of needle thoracocentesis in an emergency setting. This new communication method will increase the accuracy of identifying the relevant anatomical site during the decompression technique. A prospective randomised manikin study was performed to investigate the effectiveness of using VT as a method of instruction for the identification of anatomical landmarks during the performance of needle thoracocentesis. The overall success rate was significantly higher in the VT group which performed needle thoracocentesis under the guidance of VT than in the non-VT group who performed the procedure without VT-aided instruction. The instrument difficulty score and procedure satisfaction score were significantly lower in the VT group than in the non-VT group. Identification of the correct anatomical landmark for needle thoracocentesis can be performed with instructions provided via VT because a dispatcher can monitor every step and provide correct instructions. This new technology will improve critical care medicine.
Correcting Systemic Deficiencies in Our Scientific Infrastructure
Doss, Mohan
2014-01-01
Scientific method is inherently self-correcting. When different hypotheses are proposed, their study would result in the rejection of the invalid ones. If the study of a competing hypothesis is prevented because of the faith in an unverified one, scientific progress is stalled. This has happened in the study of low dose radiation. Though radiation hormesis was hypothesized to reduce cancers in 1980, it could not be studied in humans because of the faith in the unverified linear no-threshold model hypothesis, likely resulting in over 15 million preventable cancer deaths worldwide during the past two decades, since evidence has accumulated supporting the validity of the phenomenon of radiation hormesis. Since our society has been guided by scientific advisory committees that ostensibly follow the scientific method, the long duration of such large casualties is indicative of systemic deficiencies in the infrastructure that has evolved in our society for the application of science. Some of these deficiencies have been identified in a few elements of the scientific infrastructure, and remedial steps suggested. Identifying and correcting such deficiencies may prevent similar tolls in the future. PMID:24910580
Assessing Feedback in a Mobile Videogame
Brand, Leah; Beltran, Alicia; Hughes, Sheryl; O'Connor, Teresia; Baranowski, Janice; Nicklas, Theresa; Chen, Tzu-An; Dadabhoy, Hafza R.; Diep, Cassandra S.; Buday, Richard
2016-01-01
Background: Player feedback is an important part of serious games, although there is no consensus regarding its delivery or optimal content. “Mommio” is a serious game designed to help mothers motivate their preschoolers to eat vegetables. The purpose of this study was to assess optimal format and content of player feedback for use in “Mommio.” Materials and Methods: The current study posed 36 potential “Mommio” gameplay feedback statements to 20 mothers using a Web survey and interview. Mothers were asked about the meaning and helpfulness of each feedback statement. Results: Several themes emerged upon thematic analysis, including identifying an effective alternative in the case of corrective feedback, avoiding vague wording, using succinct and correct grammar, avoiding provocation of guilt, and clearly identifying why players' game choice was correct or incorrect. Conclusions: Guidelines are proposed for future feedback statements. PMID:27058403
Structured methods for identifying and correcting potential human errors in aviation operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, W.R.
1997-10-01
Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).
A fractional Fourier transform analysis of a bubble excited by an ultrasonic chirp.
Barlow, Euan; Mulholland, Anthony J
2011-11-01
The fractional Fourier transform is proposed here as a model-based signal processing technique for determining the size of a bubble in a fluid. The bubble is insonified with an ultrasonic chirp and the radiated pressure field is recorded. This experimental bubble response is then compared with a series of theoretical model responses to identify the most accurate match between experiment and theory, which allows the correct bubble size to be identified. The fractional Fourier transform is used to produce a more detailed description of each response, and two-dimensional cross correlation is then employed to identify the similarities between the experimental response and each theoretical response. In this paper the experimental bubble response is simulated by adding various levels of noise to the theoretical model output. The method is compared to the standard technique of using time-domain cross correlation. The proposed method is shown to be far more robust at correctly sizing the bubble and can cope with much lower signal-to-noise ratios.
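The matching step can be sketched as follows, assuming the fractional Fourier representations of the experimental response and of each candidate theoretical response have already been computed as 2-D arrays (SciPy has no built-in FrFT, so that stage is left abstract).

```python
# Pick the bubble radius whose model response best matches the experiment,
# scored by peak normalized 2-D cross-correlation.
import numpy as np
from scipy.signal import correlate2d

def best_matching_radius(experimental, candidates):
    """candidates: dict mapping bubble radius -> 2-D model response array."""
    def peak_corr(a, b):
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return correlate2d(a, b, mode="same").max() / a.size
    return max(candidates, key=lambda r: peak_corr(experimental, candidates[r]))
```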
An efficient empirical Bayes method for genomewide association studies.
Wang, Q; Wei, J; Pan, Y; Xu, S
2016-08-01
The linear mixed model (LMM) is one of the most popular methods for genomewide association studies (GWAS). Numerous forms of LMM have been developed; however, there are two major issues in GWAS that have not been fully addressed before. The two issues are (i) the genomic background noise and (ii) low statistical power after Bonferroni correction. We proposed an empirical Bayes (EB) method by assigning each marker effect a normal prior distribution, resulting in shrinkage estimates of marker effects. We found that such a shrinkage approach can selectively shrink marker effects and reduce the noise level to zero for the majority of non-associated markers. In the meantime, the EB method allows us to use an 'effective number of tests' to perform Bonferroni correction for multiple tests. Simulation studies for both human and pig data showed that the EB method can significantly increase statistical power compared with the widely used exact GWAS methods, such as GEMMA and FaST-LMM-Select. Real data analyses in human breast cancer identified improved detection signals for markers previously known to be associated with breast cancer. We therefore believe that the EB method is a valuable tool for identifying the genetic basis of complex traits. © 2015 Blackwell Verlag GmbH.
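The selective-shrinkage idea can be reduced to its simplest case: with a normal prior on each marker effect, the posterior mean pulls noisy estimates toward zero in proportion to their noise. The full EB procedure in the paper estimates the prior variance from the data; here it is assumed known.

```python
# Posterior-mean shrinkage of marker effects under beta ~ N(0, prior_var).
import numpy as np

def shrink_effects(beta_hat, se, prior_var):
    shrink = prior_var / (prior_var + np.asarray(se, dtype=float) ** 2)
    return shrink * np.asarray(beta_hat, dtype=float)

# A precisely estimated effect survives; an equally sized but noisy one is
# shrunk nearly to zero, suppressing genomic background noise.
print(shrink_effects([0.5, 0.5], se=[0.05, 0.5], prior_var=0.01))
```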
Schubert, Sören; Weinert, Kirsten; Wagner, Chris; Gunzl, Beatrix; Wieser, Andreas; Maier, Thomas; Kostrzewa, Markus
2011-11-01
Matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) is widely used for rapid and reliable identification of bacteria and yeast grown on agar plates. Moreover, MALDI-TOF MS also holds promise for bacterial identification from blood culture (BC) broths in hospital laboratories. The most important technical step for the identification of bacteria from positive BCs by MALDI-TOF MS is sample preparation to remove blood cells and host proteins. We present a novel, rapid sample preparation method using differential lysis of blood cells. We demonstrate the efficacy and ease of use of this sample preparation and subsequent MALDI-TOF MS identification, applying it to a total of 500 aerobic and anaerobic BCs reported to be positive by a Bactec 9240 system. In 86.5% of all BCs, the microorganism species were correctly identified. Moreover, in 18/27 mixed cultures at least one isolate was correctly identified. A novel method that adjusts the score value for MALDI-TOF MS results is proposed, further improving the proportion of correctly identified samples. The results of the present study show that the MALDI-TOF MS-based method allows rapid (<20 minutes) bacterial identification directly from positive BCs and with high accuracy. Copyright © 2011 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Bennett, Derrick A; Landry, Denise; Little, Julian; Minelli, Cosetta
2017-09-19
Several statistical approaches have been proposed to assess and correct for exposure measurement error. We aimed to provide a critical overview of the most common approaches used in nutritional epidemiology. MEDLINE, EMBASE, BIOSIS and CINAHL were searched for reports published in English up to May 2016 in order to ascertain studies that described methods aimed to quantify and/or correct for measurement error for a continuous exposure in nutritional epidemiology using a calibration study. We identified 126 studies, 43 of which described statistical methods and 83 that applied any of these methods to a real dataset. The statistical approaches in the eligible studies were grouped into: a) approaches to quantify the relationship between different dietary assessment instruments and "true intake", which were mostly based on correlation analysis and the method of triads; b) approaches to adjust point and interval estimates of diet-disease associations for measurement error, mostly based on regression calibration analysis and its extensions. Two approaches (multiple imputation and moment reconstruction) were identified that can deal with differential measurement error. For regression calibration, the most common approach to correct for measurement error used in nutritional epidemiology, it is crucial to ensure that its assumptions and requirements are fully met. Analyses that investigate the impact of departures from the classical measurement error model on regression calibration estimates can be helpful to researchers in interpreting their findings. With regard to the possible use of alternative methods when regression calibration is not appropriate, the choice of method should depend on the measurement error model assumed, the availability of suitable calibration study data and the potential for bias due to violation of the classical measurement error model assumptions. On the basis of this review, we provide some practical advice for the use of methods to assess and adjust for measurement error in nutritional epidemiology.
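Regression calibration, the most common of the reviewed approaches, is straightforward in its simplest form; the sketch below assumes the classical, non-differential measurement error model and a calibration substudy with a reference instrument.

```python
# Simple regression calibration: predict "true" intake from the error-prone
# measure using the calibration substudy, then analyze with the prediction.
import numpy as np

def calibrate(exposure_main, exposure_cal, reference_cal):
    b, a = np.polyfit(exposure_cal, reference_cal, 1)  # E[true|measured] = a + b*measured
    return a + b * np.asarray(exposure_main, dtype=float)

# The diet-disease model is then fit with the calibrated exposure, correcting
# the attenuation (regression dilution) caused by measurement error.
```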
Gillman, Ashley; Smith, Jye; Thomas, Paul; Rose, Stephen; Dowson, Nicholas
2017-12-01
Patient motion is an important consideration in modern PET image reconstruction. Advances in PET technology mean motion has an increasingly important influence on resulting image quality. Motion-induced artifacts can have adverse effects on clinical outcomes, including missed diagnoses and oversized radiotherapy treatment volumes. This review aims to summarize the wide variety of motion correction techniques available in PET and combined PET/CT and PET/MR, with a focus on the latter. A general framework for the motion correction of PET images is presented, consisting of acquisition, modeling, and correction stages. Methods for measuring, modeling, and correcting motion and associated artifacts, both in literature and commercially available, are presented, and their relative merits are contrasted. Identified limitations of current methods include modeling of aperiodic and/or unpredictable motion, attaining adequate temporal resolution for motion correction in dynamic kinetic modeling acquisitions, and maintaining availability of the MR in PET/MR scans for diagnostic acquisitions. Finally, avenues for future investigation are discussed, with a focus on improvements that could improve PET image quality, and that are practical in the clinical environment. © 2017 American Association of Physicists in Medicine.
The femoral neck-shaft angle on plain radiographs: a systematic review.
Boese, Christoph Kolja; Dargel, Jens; Oppermann, Johannes; Eysel, Peer; Scheyerer, Max Joseph; Bredow, Jan; Lechler, Philipp
2016-01-01
The femoral neck-shaft angle (NSA) is an important measure for the assessment of the anatomy of the hip and planning of operations. Despite its common use, there remains disagreement concerning the method of measurement and the correction of hip rotation and femoral version of the projected NSA on conventional radiographs. We addressed the following questions: (1) What are the reported values for NSA in normal adult subjects and in osteoarthritis? (2) Is there a difference between non-corrected and rotation-corrected measurements? (3) Which methods are used for measuring the NSA on plain radiographs? (4) What could be learned from an analysis of the intra- and interobserver reliability? A systematic literature search was performed including 26 publications reporting the measurement of the NSA on conventional radiographs. The mean NSA of healthy adults (5,089 hips) was 128.8° (98-180°) and 131.5° (115-155°) in patients with osteoarthritis (1230 hips). The mean NSA was 128.5° (127-130.5°) for the rotation-corrected and 129.5° (119.6-151°) for the non-corrected measurements. Our data showed a high variance of the reported neck-shaft angles. Notably, we identified the inconsistency of the published methods of measurement as a central issue. The reported effect of rotation-correction cannot be reliably verified.
Identification of bacteria isolated from veterinary clinical specimens using MALDI-TOF MS.
Pavlovic, Melanie; Wudy, Corinna; Zeller-Peronnet, Veronique; Maggipinto, Marzena; Zimmermann, Pia; Straubinger, Alix; Iwobi, Azuka; Märtlbauer, Erwin; Busch, Ulrich; Huber, Ingrid
2015-01-01
Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) has recently emerged as a rapid and accurate identification method for bacterial species. Although it has been successfully applied for the identification of human pathogens, it has so far not been well evaluated for routine identification of veterinary bacterial isolates. This study was performed to compare and evaluate the performance of MALDI-TOF MS based identification of veterinary bacterial isolates with commercially available conventional test systems. Discrepancies of both methods were resolved by sequencing 16S rDNA and, if necessary, the infB gene for Actinobacillus isolates. A total of 375 consecutively isolated veterinary samples were collected. Among the 357 isolates (95.2%) correctly identified at the genus level by MALDI-TOF MS, 338 of them (90.1% of the total isolates) were also correctly identified at the species level. Conventional methods offered correct species identification for 319 isolates (85.1%). MALDI-TOF identification therefore offered more accurate identification of veterinary bacterial isolates. An update of the in-house mass spectra database with additional reference spectra clearly improved the identification results. In conclusion, the presented data suggest that MALDI-TOF MS is an appropriate platform for classification and identification of veterinary bacterial isolates.
Detection and correction of false segmental duplications caused by genome mis-assembly
2010-01-01
Diploid genomes with divergent chromosomes present special problems for assembly software as two copies of especially polymorphic regions may be mistakenly constructed, creating the appearance of a recent segmental duplication. We developed a method for identifying such false duplications and applied it to four vertebrate genomes. For each genome, we corrected mis-assemblies, improved estimates of the amount of duplicated sequence, and recovered polymorphisms between the sequenced chromosomes. PMID:20219098
Statistical Selection of Biological Models for Genome-Wide Association Analyses.
Bi, Wenjian; Kang, Guolian; Pounds, Stanley B
2018-05-24
Genome-wide association studies have discovered many biologically important associations of genes with phenotypes. Typically, genome-wide association analyses formally test the association of each genetic feature (SNP, CNV, etc.) with the phenotype of interest and summarize the results with multiplicity-adjusted p-values. However, very small p-values only provide evidence against the null hypothesis of no association without indicating which biological model best explains the observed data. Correctly identifying a specific biological model may improve the scientific interpretation and can be used to more effectively select and design a follow-up validation study. Thus, statistical methodology to identify the correct biological model for a particular genotype-phenotype association can be very useful to investigators. Here, we propose a general statistical method to summarize how accurately each of five biological models (null, additive, dominant, recessive, co-dominant) represents the data observed for each variant in a GWAS. We show that the new method stringently controls the false discovery rate and asymptotically selects the correct biological model. Simulations of two-stage discovery-validation studies show that the new method has these properties and that its validation power is similar to or exceeds that of simple methods that use the same statistical model for all SNPs. Example analyses of three data sets also highlight these advantages of the new method. An R package is freely available at www.stjuderesearch.org/site/depts/biostats/maew. Copyright © 2018. Published by Elsevier Inc.
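The abstract does not give the paper's exact selection statistic, so the following sketch only illustrates the underlying idea of comparing genotype codings: each biological model is fit as a logistic regression and the codings are ranked by AIC, a simple stand-in for the paper's FDR-controlled procedure. All data are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
g = rng.choice([0, 1, 2], size=n, p=[0.49, 0.42, 0.09])   # copies of minor allele

# Simulate a truly recessive effect on a binary phenotype.
logit_p = -1.0 + 1.2 * (g == 2)
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Candidate biological models as genotype codings (co-dominant uses two dummies).
codings = {
    "additive":    g.reshape(-1, 1).astype(float),
    "dominant":    (g >= 1).astype(float).reshape(-1, 1),
    "recessive":   (g == 2).astype(float).reshape(-1, 1),
    "co-dominant": np.column_stack([(g == 1), (g == 2)]).astype(float),
}

aics = {}
for name, x in codings.items():
    fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    aics[name] = fit.aic
print(min(aics, key=aics.get), aics)   # the recessive coding should win
```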
Statistical testing and power analysis for brain-wide association study.
Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng
2018-04-05
The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, the multiple-comparison correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on the Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis tests using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and false discovery rate (FDR), it can reduce the false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need for non-parametric permutation to correct for multiple comparisons; thus, it can efficiently tackle large datasets with high-resolution fMRI images. The utility of our method is shown in a case-control study. Our approach can identify altered functional connectivities in a major depression disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.
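The paper's Gaussian random field machinery is not reproduced here; as a point of reference, this sketch implements the two baseline corrections it is compared against, Bonferroni and the Benjamini-Hochberg FDR procedure, on simulated p-values standing in for connexel-wise tests.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean rejection mask."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresh = q * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Toy mix of null and signal p-values.
rng = np.random.default_rng(2)
pvals = np.concatenate([rng.uniform(size=9000), rng.beta(0.1, 10, size=1000)])
print("Bonferroni rejections:", np.sum(pvals < 0.05 / pvals.size))
print("BH rejections:        ", benjamini_hochberg(pvals).sum())
```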
Hempler, Daniela; Schmidt, Martin U; van de Streek, Jacco
2017-08-01
More than 600 molecular crystal structures with correct, incorrect and uncertain space-group symmetry were energy-minimized with dispersion-corrected density functional theory (DFT-D, PBE-D3). For the purpose of determining the correct space-group symmetry the required tolerance on the atomic coordinates of all non-H atoms is established to be 0.2 Å. For 98.5% of 200 molecular crystal structures published with missed symmetry, the correct space group is identified; there are no false positives. Very small, very symmetrical molecules can end up in artificially high space groups upon energy minimization, although this is easily detected through visual inspection. If the space group of a crystal structure determined from powder diffraction data is ambiguous, energy minimization with DFT-D provides a fast and reliable method to select the correct space group.
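The symmetry decision reduces to checking whether every non-H atom moved by at most the 0.2 Å tolerance between the published and energy-minimized structures. A minimal sketch of that check under periodic boundary conditions, with hypothetical coordinates and a cubic cell (the DFT-D minimization itself is not shown):

```python
import numpy as np

def max_displacement(frac_published, frac_minimized, lattice):
    """Maximum Cartesian displacement (Å) between matched non-H atoms,
    accounting for periodic boundary conditions."""
    d = frac_published - frac_minimized
    d -= np.round(d)                       # wrap fractional differences into [-0.5, 0.5)
    cart = d @ lattice                     # rows of `lattice` are cell vectors in Å
    return np.linalg.norm(cart, axis=1).max()

# Hypothetical 3-atom example in a 10 Å cubic cell.
lattice = np.eye(3) * 10.0
pub = np.array([[0.10, 0.20, 0.30], [0.40, 0.50, 0.60], [0.70, 0.80, 0.95]])
mini = np.array([[0.11, 0.20, 0.30], [0.40, 0.52, 0.60], [0.70, 0.80, 0.01]])
ok = max_displacement(pub, mini, lattice) <= 0.2   # the paper's 0.2 Å tolerance
print(ok)
```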
Observed to expected or logistic regression to identify hospitals with high or low 30-day mortality?
Helgeland, Jon; Clench-Aas, Jocelyne; Laake, Petter; Veierød, Marit B.
2018-01-01
Introduction A common quality indicator for monitoring and comparing hospitals is based on death within 30 days of admission. An important use is to determine whether a hospital has higher or lower mortality than other hospitals. Thus, the ability to identify such outliers correctly is essential. Two approaches for detection are: 1) calculating the ratio of observed to expected number of deaths (OE) per hospital and 2) including all hospitals in a logistic regression (LR) comparing each hospital to a form of average over all hospitals. The aim of this study was to compare OE and LR with respect to correctly identifying 30-day mortality outliers. Modifications of the methods, i.e., variance corrected approach of OE (OE-Faris), bias corrected LR (LR-Firth), and trimmed mean variants of LR and LR-Firth were also studied. Materials and methods To study the properties of OE and LR and their variants, we performed a simulation study by generating patient data from hospitals with known outlier status (low mortality, high mortality, non-outlier). Data from simulated scenarios with varying number of hospitals, hospital volume, and mortality outlier status were analysed by the different methods and compared by level of significance (ability to falsely claim an outlier) and power (ability to reveal an outlier). Moreover, administrative data for patients with acute myocardial infarction (AMI), stroke, and hip fracture from Norwegian hospitals for 2012–2014 were analysed. Results None of the methods achieved the nominal (test) level of significance for both low and high mortality outliers. For low mortality outliers, the levels of significance were increased four- to fivefold for OE and OE-Faris. For high mortality outliers, OE and OE-Faris, LR 25% trimmed and LR-Firth 10% and 25% trimmed maintained approximately the nominal level. The methods agreed with respect to outlier status for 94.1% of the AMI hospitals, 98.0% of the stroke, and 97.8% of the hip fracture hospitals. Conclusion On balance, we recommend LR-Firth 10% or 25% trimmed for detection of both low and high mortality outliers. PMID:29652941
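A minimal sketch of the OE approach described above (the LR variants and the Firth correction are omitted): expected deaths come from a patient-level risk model, and a hospital is flagged when its observed count departs from expectation under a normal approximation. All data are simulated.

```python
import numpy as np
import pandas as pd

def oe_outliers(df):
    """Flag hospitals whose observed/expected 30-day death ratio departs from 1.
    `df` has one row per admission: hospital id, a death indicator (0/1), and an
    expected death probability from a patient-level risk model."""
    g = df.groupby("hospital").agg(
        obs=("death", "sum"),
        exp=("p_expected", "sum"),
        var=("p_expected", lambda p: np.sum(p * (1 - p))),
    )
    g["oe"] = g["obs"] / g["exp"]
    z = (g["obs"] - g["exp"]) / np.sqrt(g["var"])   # normal approximation
    g["outlier"] = np.abs(z) > 1.96                 # two-sided 5% level
    return g

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "hospital": rng.integers(0, 20, 50_000),
    "p_expected": rng.uniform(0.02, 0.15, 50_000),
})
df["death"] = rng.binomial(1, df["p_expected"])     # every hospital truly average
print(oe_outliers(df)["outlier"].sum(), "hospitals falsely flagged")
```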
Hess, Megan C; Inoue, Kentaro; Tsakiris, Eric T; Hart, Michael; Morton, Jennifer; Dudding, Jack; Robertson, Clinton R; Randklev, Charles R
2018-01-01
Correct identification of sex is an important component of wildlife management because changes in sex ratios can affect population viability. Identification of sex often relies on external morphology, which can be biased by intermediate or nondistinctive morphotypes and observer experience. For unionid mussels, research has demonstrated that species misidentification is common but less attention has been given to the reliability of sex identification. To evaluate whether this is an issue, we surveyed 117 researchers on their ability to correctly identify sex of Lampsilis teres (Yellow Sandshell), a wide-ranging, sexually dimorphic species. Personal background information of each observer was analyzed to identify factors that may contribute to misidentification of sex. We found that median misidentification rates were ~20% across males and females and that observers misidentified female specimens more often (~23%) than male specimens (~10%). Misidentification rates were partially explained by geographic region of prior mussel experience and where observers learned how to identify mussels, but there remained substantial variation among observers after controlling for these factors. We also used three morphometric methods (traditional, geometric, and Fourier) to investigate whether sex could be more correctly identified statistically and found that misidentification rates for the geometric and Fourier methods (which characterize shape) were less than 5% (on average 7% and 2% for females and males, respectively). Our results show that misidentification of sex is likely common for mussels if based solely on external morphology, which raises general questions, regardless of taxonomic group, about its reliability for conservation efforts.
Lee, Wonmok; Kim, Myungsook; Yong, Dongeun; Jeong, Seok Hoon; Lee, Kyungwon; Chong, Yunsop
2015-01-01
By conventional methods, the identification of anaerobic bacteria is more time consuming and requires more expertise than the identification of aerobic bacteria. Although the matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) systems are relatively less studied, they have been reported to be a promising method for the identification of anaerobes. We evaluated the performance of the VITEK MS in vitro diagnostic (IVD; 1.1 database; bioMérieux, France) in the identification of anaerobes. We used 274 anaerobic bacteria isolated from various clinical specimens. The results for the identification of the bacteria by VITEK MS were compared to those obtained by phenotypic methods and 16S rRNA gene sequencing. Among the 249 isolates included in the IVD database, the VITEK MS correctly identified 209 (83.9%) isolates to the species level and an additional 18 (7.2%) at the genus level. In particular, the VITEK MS correctly identified clinically relevant and frequently isolated anaerobic bacteria to the species level. The remaining 22 isolates (8.8%) were either not identified or misidentified. The VITEK MS could not identify the 25 isolates absent from the IVD database to the species level. The VITEK MS showed reliable identifications for clinically relevant anaerobic bacteria.
Adderson, Elisabeth E.; Boudreaux, Jan W.; Cummings, Jessica R.; Pounds, Stanley; Wilson, Deborah A.; Procop, Gary W.; Hayden, Randall T.
2008-01-01
We compared the relative levels of effectiveness of three commercial identification kits and three nucleic acid amplification tests for the identification of coryneform bacteria by testing 50 diverse isolates, including 12 well-characterized control strains and 38 organisms obtained from pediatric oncology patients at our institution. Between 33.3 and 75.0% of control strains were correctly identified to the species level by phenotypic systems or nucleic acid amplification assays. The most sensitive tests were the API Coryne system and amplification and sequencing of the 16S rRNA gene using primers optimized for coryneform bacteria, which correctly identified 9 of 12 control isolates to the species level, and all strains with a high-confidence call were correctly identified. Organisms not correctly identified were species not included in the test kit databases or not producing a pattern of reactions included in kit databases or which could not be differentiated among several genospecies based on reaction patterns. Nucleic acid amplification assays had limited abilities to identify some bacteria to the species level, and comparison of sequence homologies was complicated by the inclusion of allele sequences obtained from uncultivated and uncharacterized strains in databases. The utility of rpoB genotyping was limited by the small number of representative gene sequences that are currently available for comparison. The correlation between identifications produced by different classification systems was poor, particularly for clinical isolates. PMID:18160450
Smith, Emery; Janovick, Jo Ann; Bannister, Thomas D; Shumate, Justin; Scampavia, Louis; Conn, P Michael; Spicer, Timothy P
2016-09-01
Pharmacoperones correct the folding of otherwise misfolded protein mutants, restoring function (i.e., providing "rescue") by correcting their trafficking. Currently, most pharmacoperones possess intrinsic antagonist activity because they were identified using methods initially aimed at discovering such functions. Here, we describe an ultra-high-throughput homogeneous cell-based assay with a cAMP detection system, a method specifically designed to identify pharmacoperones of the vasopressin type 2 receptor (V2R), a GPCR that, when mutated, is associated with nephrogenic diabetes insipidus. Previously developed methods to identify compounds capable of altering cellular trafficking of V2R were modified and used to screen a 645,000 compound collection by measuring the ability of library compounds to rescue a mutant hV2R [L83Q], using a cell-based luminescent detection system. The campaign initially identified 3734 positive modulators of cAMP. The confirmation and counterscreen identified only 147 of the active compounds with an EC50 of ≤5 µM. Of these, 83 were reconfirmed as active through independently obtained pure samples and were also inactive in a relevant counterscreen. Active and tractable compounds within this set can be categorized into three predominant structural clusters, described here, in the first report detailing the results of a large-scale pharmacoperone high-throughput screening campaign. © 2016 Society for Laboratory Automation and Screening.
Level repulsion and band sorting in phononic crystals
NASA Astrophysics Data System (ADS)
Lu, Yan; Srivastava, Ankit
2018-02-01
In this paper we consider the problem of avoided crossings (level repulsion) in phononic crystals and suggest a computationally efficient strategy to distinguish them from normal cross points. This process is essential for the correct sorting of the phononic bands and, subsequently, for the accurate determination of mode continuation, group velocities, and emergent properties which depend on them such as thermal conductivity. Through explicit phononic calculations using generalized Rayleigh quotient, we identify exact locations of exceptional points in the complex wavenumber domain which results in level repulsion in the real domain. We show that in the vicinity of the exceptional point the relevant phononic eigenvalue surfaces resemble the surfaces of a 2 by 2 parameter-dependent matrix. Along a closed loop encircling the exceptional point we show that the phononic eigenvalues are exchanged, just as they are for the 2 by 2 matrix case. However, the behavior of the associated eigenvectors is shown to be more complex in the phononic case. Along a closed loop around an exceptional point, we show that the eigenvectors can flip signs multiple times unlike a 2 by 2 matrix where the flip of sign occurs only once. Finally, we exploit these eigenvector sign flips around exceptional points to propose a simple and efficient method of distinguishing them from normal crosses and of correctly sorting the band-structure. Our proposed method is roughly an order-of-magnitude faster than the zoom-in method and correctly identifies > 96% of the cases considered. Both its speed and accuracy can be further improved and we suggest some ways of achieving this. Our method is general and, as such, would be directly applicable to other eigenvalue problems where the eigenspectrum needs to be correctly sorted.
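The sorting strategy proposed in the paper is the authors' own; the sketch below shows the generic eigenvector-overlap band sorting it builds on, pairing bands at consecutive k-points by maximizing eigenvector overlaps with the Hungarian algorithm. The two-mode toy system is hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sort_bands(freqs, vecs):
    """Reorder bands along a k-path so each band follows its own eigenvector.
    freqs: (nk, nb) eigenvalues; vecs: (nk, ndof, nb) eigenvectors as columns."""
    for k in range(1, freqs.shape[0]):
        overlap = np.abs(vecs[k - 1].conj().T @ vecs[k])   # |<v_i(k-1)|v_j(k)>|
        _, perm = linear_sum_assignment(-overlap)          # maximize total overlap
        freqs[k] = freqs[k, perm]
        vecs[k] = vecs[k][:, perm]
    return freqs, vecs

# Toy check: two modes with fixed shapes whose frequencies cross. eigh returns
# eigenvalues in ascending order, which scrambles band identity at the crossing.
R = np.array([[0.8, -0.6], [0.6, 0.8]])                    # fixed mode shapes
ks = np.linspace(0.0, np.pi / 2, 50)
freqs, vecs = [], []
for k in ks:
    w, v = np.linalg.eigh(R @ np.diag([np.cos(k), np.sin(k)]) @ R.T)
    freqs.append(w)
    vecs.append(v)
freqs, vecs = sort_bands(np.array(freqs), np.array(vecs))
# Each column of `freqs` is now a smooth band straight through the crossing.
```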
Consistency of FMEA used in the validation of analytical procedures.
Oldenhof, M T; van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Vredenbregt, M J; Weda, M; Barends, D M
2011-02-20
In order to explore the consistency of the outcome of a Failure Mode and Effects Analysis (FMEA) in the validation of analytical procedures, an FMEA was carried out by two different teams. The two teams applied two separate FMEAs to a High Performance Liquid Chromatography-Diode Array Detection-Mass Spectrometry (HPLC-DAD-MS) analytical procedure used in the quality control of medicines. Each team was free to define their own ranking scales for the severity (S), occurrence (O), and detection (D) of failure modes. We calculated Risk Priority Numbers (RPNs) and identified the failure modes above the 90th percentile of RPN values as needing urgent corrective action; failure modes falling between the 75th and 90th percentiles of RPN values were identified as needing necessary corrective action. Team 1 and Team 2 identified five and six failure modes needing urgent corrective action, respectively, with two identified by both teams. Of the failure modes needing necessary corrective actions, about a third were identified by both teams. These results show inconsistency in the outcome of the FMEA. To improve consistency, we recommend that an FMEA is always carried out under the supervision of an experienced FMEA-facilitator and that the FMEA team has at least two members with competence in the analytical method to be validated. However, the FMEAs of both teams contained valuable information that was not identified by the other team, indicating that this inconsistency is not always a drawback. Copyright © 2010 Elsevier B.V. All rights reserved.
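A minimal sketch of the RPN bookkeeping described above, with hypothetical failure modes and scores: RPN = S × O × D, with the 90th and 75th percentiles of the RPN values as the urgent/necessary cut-offs.

```python
import numpy as np

# Hypothetical failure modes for an HPLC procedure, each scored 1-10 for
# severity (S), occurrence (O) and detection (D) by the FMEA team.
scores = {
    "wrong mobile phase":  (8, 3, 4),
    "column degradation":  (6, 5, 5),
    "detector lamp drift": (5, 4, 7),
    "sample mislabelling": (9, 2, 6),
    "integration error":   (4, 6, 3),
}
rpn = {mode: s * o * d for mode, (s, o, d) in scores.items()}

vals = np.array(list(rpn.values()))
urgent_cut, necessary_cut = np.percentile(vals, [90, 75])
for mode, value in sorted(rpn.items(), key=lambda kv: -kv[1]):
    level = ("urgent" if value >= urgent_cut
             else "necessary" if value >= necessary_cut else "-")
    print(f"{mode:22s} RPN={value:3d} {level}")
```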
DOT National Transportation Integrated Search
1985-03-01
The purpose of this report is to identify the difference, if any, in AASHTO and OSHD test procedures and results. This report addresses the effect of the size of samples taken in the field and evaluates the methods of determining the moisture content...
Selection and authentication of botanical materials for the development of analytical methods.
Applequist, Wendy L; Miller, James S
2013-05-01
Herbal products, for example botanical dietary supplements, are widely used. Analytical methods are needed to ensure that botanical ingredients used in commercial products are correctly identified and that research materials are of adequate quality and are sufficiently characterized to enable research to be interpreted and replicated. Adulteration of botanical material in commerce is common for some species. The development of analytical methods for specific botanicals, and accurate reporting of research results, depend critically on correct identification of test materials. Conscious efforts must therefore be made to ensure that the botanical identity of test materials is rigorously confirmed and documented through preservation of vouchers, and that their geographic origin and handling are appropriate. Use of material with an associated herbarium voucher that can be botanically identified is always ideal. Indirect methods of authenticating bulk material in commerce, for example use of organoleptic, anatomical, chemical, or molecular characteristics, are not always acceptable for the chemist's purposes. Familiarity with botanical and pharmacognostic literature is necessary to determine what potential adulterants exist and how they may be distinguished.
Efficient Semi-Automatic 3D Segmentation for Neuron Tracing in Electron Microscopy Images
Jones, Cory; Liu, Ting; Cohan, Nathaniel Wood; Ellisman, Mark; Tasdizen, Tolga
2015-01-01
Background: In the area of connectomics, there is a significant gap between the time required for data acquisition and dense reconstruction of the neural processes contained in the same dataset. Automatic methods are able to eliminate this timing gap, but the state-of-the-art accuracy so far is insufficient for use without user corrections. If completed naively, this process of correction can be tedious and time consuming. New Method: We present a new semi-automatic method that can be used to perform 3D segmentation of neurites in EM image stacks. It utilizes an automatic method that creates a hierarchical structure for recommended merges of superpixels. The user is then guided through each predicted region to quickly identify errors and establish correct links. Results: We tested our method on three datasets with both novice and expert users. Accuracy and timing were compared with published automatic, semi-automatic, and manual results. Comparison with Existing Methods: Post-automatic correction methods have also been used in [1] and [2]. These methods do not provide navigation or suggestions in the manner we present. Other semi-automatic methods require user input prior to the automatic segmentation, such as [3] and [4], and are inherently different from our method. Conclusion: Using this method on the three datasets, novice users achieved accuracy exceeding state-of-the-art automatic results, and expert users achieved accuracy on par with full manual labeling but with a 70% time improvement when compared with other examples in publication. PMID:25769273
Huang, Yanfei; Wang, Jinglin; Zhang, Mingxin; Zhu, Min; Wang, Mei; Sun, Yufeng; Gu, Haitong; Cao, Jingjing; Li, Xue; Zhang, Shaoya; Lu, Xinxin
2017-03-01
Filamentous fungi are among the most important pathogens causing fungal rhinosinusitis (FRS). Current laboratory diagnosis of FRS pathogens mainly relies on phenotypic identification by culture and microscopic examination, which is time consuming and expertise dependent. Although matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) MS has been employed to identify various fungi, its efficacy in the identification of FRS fungi is less clear. A total of 153 FRS isolates obtained from patients were analysed at the Clinical Laboratory at the Beijing Tongren Hospital affiliated to the Capital Medical University, between January 2014 and December 2015. They were identified by traditional phenotypic methods and Bruker MALDI-TOF MS (Bruker, Biotyper version 3.1), respectively. Discrepancies between the two methods were further validated by sequencing. Among the 153 isolates, 151 had correct species identification using MALDI-TOF MS (Bruker, Biotyper version 3.1, score ≥2.0 or 2.3). MALDI-TOF MS enabled identification of some very closely related species that were indistinguishable by conventional phenotypic methods, including 1/10 Aspergillus versicolor, 3/20 Aspergillus flavus, 2/30 Aspergillus fumigatus and 1/20 Aspergillus terreus, which were misidentified by conventional phenotypic methods as Aspergillus nidulans, Aspergillus oryzae, Aspergillus japonicus and Aspergillus nidulans, respectively. In addition, 2/2 Rhizopus oryzae and 1/1 Rhizopus stolonifer that were identified only to the genus level by the phenotypic method were correctly identified by MALDI-TOF MS. MALDI-TOF MS is a rapid and accurate technique, and could replace the conventional phenotypic method for routine identification of FRS fungi in clinical microbiology laboratories.
Vos, Janet R; Hsu, Li; Brohet, Richard M; Mourits, Marian J E; de Vries, Jakob; Malone, Kathleen E; Oosterwijk, Jan C; de Bock, Geertruida H
2015-08-10
Recommendations for treating patients who carry a BRCA1/2 mutation are mainly based on cumulative lifetime risks (CLTRs) of breast cancer determined from retrospective cohorts. These risks vary widely (27% to 88%), and it is important to understand why. We analyzed the effects of methods of risk estimation and bias correction and of population factors on CLTRs in this retrospective clinical cohort of BRCA1/2 carriers. The following methods to estimate the breast cancer risk of BRCA1/2 carriers were identified from the literature: Kaplan-Meier, frailty, and modified segregation analyses with bias correction consisting of including or excluding index patients combined with including or excluding first-degree relatives (FDRs) or different conditional likelihoods. These were applied to clinical data of BRCA1/2 families derived from our family cancer clinic, for which a simulation was also performed to evaluate the methods. CLTRs and 95% CIs were estimated and compared with the reference CLTRs. CLTRs ranged from 35% to 83% for BRCA1 and 41% to 86% for BRCA2 carriers at age 70 years (width of 95% CIs: 10% to 35% and 13% to 46%, respectively). Relative bias varied from -38% to +16%. Bias correction with inclusion of index patients and untested FDRs gave the smallest bias: +2% (SD, 2%) in BRCA1 and +0.9% (SD, 3.6%) in BRCA2. Much of the variation in breast cancer CLTRs in retrospective clinical BRCA1/2 cohorts is due to the bias-correction method, whereas a smaller part is due to population differences. Kaplan-Meier analyses with bias correction that includes index patients and a proportion of untested FDRs provide suitable CLTRs for carriers counseled in the clinic. © 2015 by American Society of Clinical Oncology.
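Of the methods compared, the plain Kaplan-Meier component is easy to sketch; the study's bias corrections (inclusion of index patients and untested FDRs) are not shown. The data below are simulated, with age as the time scale.

```python
import numpy as np

def km_cumulative_risk(time, event, horizon=70):
    """Kaplan-Meier cumulative risk by `horizon`, with age as the time scale;
    event=1 for breast cancer, 0 for censoring at last follow-up."""
    t = np.asarray(time, float)
    e = np.asarray(event, int)
    surv = 1.0
    for age in np.unique(t[e == 1]):
        if age > horizon:
            break
        at_risk = np.sum(t >= age)
        d = np.sum((t == age) & (e == 1))
        surv *= 1 - d / at_risk
    return 1 - surv

rng = np.random.default_rng(4)
onset = rng.weibull(3.0, 400) * 65          # hypothetical ages at cancer onset
censor = rng.uniform(30, 80, 400)           # hypothetical ages at last follow-up
time = np.minimum(onset, censor)
event = (onset <= censor).astype(int)
print(f"CLTR to age 70: {km_cumulative_risk(time, event):.2f}")
```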
Asymptotic-induced numerical methods for conservation laws
NASA Technical Reports Server (NTRS)
Garbey, Marc; Scroggs, Jeffrey S.
1990-01-01
Asymptotic-induced methods are presented for the numerical solution of hyperbolic conservation laws with or without viscosity. The methods consist of multiple stages. The first stage is to obtain a first approximation by using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems identified by using techniques derived via asymptotics. Finally, a residual correction increases the accuracy of the scheme. The method is derived and justified with singular perturbation techniques.
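The first stage, a first-order Godunov approximation, can be sketched for the inviscid Burgers equation; the internal-layer and residual-correction stages are not shown, and the grid and time step below are illustrative.

```python
import numpy as np

def godunov_burgers(u, dx, dt, steps):
    """First-order Godunov scheme for the inviscid Burgers equation
    u_t + (u^2/2)_x = 0 with periodic boundaries."""
    f = lambda v: 0.5 * v * v
    for _ in range(steps):
        ul, ur = u, np.roll(u, -1)                       # left/right cell states
        # Exact-Riemann (Godunov) flux for a convex flux with minimum at 0:
        flux = np.maximum(f(np.maximum(ul, 0.0)), f(np.minimum(ur, 0.0)))
        # np.roll(flux, 1) is the flux on each cell's left face.
        u = u - dt / dx * (flux - np.roll(flux, 1))
    return u

x = np.linspace(0, 1, 200, endpoint=False)
u0 = np.sin(2 * np.pi * x) + 0.5                        # smooth initial data
u = godunov_burgers(u0, dx=x[1] - x[0], dt=0.002, steps=100)
# A shock forms from the smooth data and is captured without oscillations.
```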
Identifying biologically relevant putative mechanisms in a given phenotype comparison
Hanoudi, Samer; Donato, Michele; Draghici, Sorin
2017-01-01
A major challenge in life science research is understanding the mechanism involved in a given phenotype. The ability to identify the correct mechanisms is needed in order to understand fundamental and very important phenomena such as mechanisms of disease, immune systems responses to various challenges, and mechanisms of drug action. The current data analysis methods focus on the identification of the differentially expressed (DE) genes using their fold change and/or p-values. Major shortcomings of this approach are that: i) it does not consider the interactions between genes; ii) its results are sensitive to the selection of the threshold(s) used, and iii) the set of genes produced by this approach is not always conducive to formulating mechanistic hypotheses. Here we present a method that can construct networks of genes that can be considered putative mechanisms. The putative mechanisms constructed by this approach are not limited to the set of DE genes, but also considers all known and relevant gene-gene interactions. We analyzed three real datasets for which both the causes of the phenotype, as well as the true mechanisms were known. We show that the method identified the correct mechanisms when applied on microarray datasets from mouse. We compared the results of our method with the results of the classical approach, showing that our method produces more meaningful biological insights. PMID:28486531
Artificial Intelligence Techniques for Automatic Screening of Amblyogenic Factors
Van Eenwyk, Jonathan; Agah, Arvin; Giangiacomo, Joseph; Cibis, Gerhard
2008-01-01
Purpose To develop a low-cost automated video system to effectively screen children aged 6 months to 6 years for amblyogenic factors. Methods In 1994 one of the authors (G.C.) described video vision development assessment, a digitizable analog video-based system combining Brückner pupil red reflex imaging and eccentric photorefraction to screen young children for amblyogenic factors. The images were analyzed manually with this system. We automated the capture of digital video frames and pupil images and applied computer vision and artificial intelligence to analyze and interpret results. The artificial intelligence systems were evaluated by a tenfold testing method. Results The best system was the decision tree learning approach, which had an accuracy of 77%, compared to the “gold standard” specialist examination with a “refer/do not refer” decision. Criteria for referral were strabismus, including microtropia, and refractive errors and anisometropia considered to be amblyogenic. Eighty-two percent of strabismic individuals were correctly identified. High refractive errors were also correctly identified and referred 90% of the time, as well as significant anisometropia. The program was less accurate in identifying more moderate refractive errors, below +5 and less than −7. Conclusions Although we are pursuing a variety of avenues to improve the accuracy of the automated analysis, the program in its present form provides acceptable cost benefits for detecting amblyogenic factors in children aged 6 months to 6 years. PMID:19277222
Hiroyasu, Tomoyuki; Hayashinuma, Katsutoshi; Ichikawa, Hiroshi; Yagi, Nobuaki
2015-08-01
A preprocessing method for endoscopy image analysis using texture analysis is proposed. In a previous study, we proposed a feature value that combines a co-occurrence matrix and a run-length matrix to analyze the extent of early gastric cancer from images taken with narrow-band imaging endoscopy. However, the obtained feature value does not identify lesion zones correctly due to the influence of noise and halation. Therefore, we propose a new preprocessing method with a non-local means filter for de-noising and contrast limited adaptive histogram equalization. We have confirmed that the pattern of gastric mucosa in images can be improved by the proposed method. Furthermore, the lesion zone is shown more correctly by the obtained color map.
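The two preprocessing operations named here map directly onto standard OpenCV calls. A minimal sketch follows; the file name and parameter values are illustrative, not taken from the paper.

```python
import cv2

# Load an endoscopy frame in grayscale (path is illustrative).
img = cv2.imread("nbi_frame.png", cv2.IMREAD_GRAYSCALE)

# Step 1: non-local means de-noising (h controls filter strength).
denoised = cv2.fastNlMeansDenoising(img, None, h=10,
                                    templateWindowSize=7, searchWindowSize=21)

# Step 2: contrast limited adaptive histogram equalization.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(denoised)

cv2.imwrite("nbi_frame_preprocessed.png", enhanced)
```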
Aucott, John N.; Crowder, Lauren A.; Yedlin, Victoria; Kortte, Kathleen B.
2012-01-01
Introduction. Lyme disease is an emerging worldwide infectious disease with major foci of endemicity in North America and regions of temperate Eurasia. The erythema migrans rash associated with early infection is found in approximately 80% of patients and can have a range of appearances including the classic target bull's-eye lesion and nontarget appearing lesions. Methods. A survey was designed to assess the ability of the general public to distinguish various appearances of erythema migrans from non-Lyme rashes. Participants were solicited from individuals who visited an educational website about Lyme disease. Results. Of 3,104 people who accessed a rash identification survey, 72.7% of participants correctly identified the classic target erythema migrans commonly associated with Lyme disease. A mean of 20.5% of participants was able to correctly identify the four nonclassic erythema migrans. 24.2% of participants incorrectly identified a tick bite reaction in the skin as erythema migrans. Conclusions. Participants were most familiar with the classic target erythema migrans of Lyme disease but were unlikely to correctly identify the nonclassic erythema migrans. These results identify an opportunity for educational intervention to improve early recognition of Lyme disease and to increase the patient's appropriate use of medical services for early Lyme disease diagnosis. PMID:23133445
Li, Hongmei Lisa; Fujimoto, Naoko; Sasakawa, Noriko; Shirai, Saya; Ohkame, Tokiko; Sakuma, Tetsushi; Tanaka, Michihiro; Amano, Naoki; Watanabe, Akira; Sakurai, Hidetoshi; Yamamoto, Takashi; Yamanaka, Shinya; Hotta, Akitsu
2015-01-13
Duchenne muscular dystrophy (DMD) is a severe muscle-degenerative disease caused by a mutation in the dystrophin gene. Genetic correction of patient-derived induced pluripotent stem cells (iPSCs) by TALENs or CRISPR-Cas9 holds promise for DMD gene therapy; however, the safety of such nuclease treatment must be determined. Using a unique k-mer database, we systematically identified a unique target region that reduces off-target sites. To restore the dystrophin protein, we performed three correction methods (exon skipping, frameshifting, and exon knockin) in DMD-patient-derived iPSCs, and found that exon knockin was the most effective approach. We further investigated the genomic integrity by karyotyping, copy number variation array, and exome sequencing to identify clones with a minimal mutation load. Finally, we differentiated the corrected iPSCs toward skeletal muscle cells and successfully detected the expression of full-length dystrophin protein. These results provide an important framework for developing iPSC-based gene therapy for genetic disorders using programmable nucleases. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
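The study's k-mer database itself is not described in the abstract; the toy sketch below only illustrates the underlying idea of keeping target sites whose k-mers occur exactly once in the sequence. Real nuclease-target screening would also consider near-matches genome-wide.

```python
from collections import Counter

def unique_target_sites(genome: str, k: int = 20):
    """Return positions of k-mers that occur exactly once in `genome` --
    a toy stand-in for screening nuclease targets against a k-mer database."""
    counts = Counter(genome[i:i + k] for i in range(len(genome) - k + 1))
    return [i for i in range(len(genome) - k + 1)
            if counts[genome[i:i + k]] == 1]

genome = "ACGT" * 10 + "GATTACAGATTACAGGCGTTTAAC"  # repetitive prefix, unique tail
sites = unique_target_sites(genome, k=12)
print(len(sites), "candidate unique 12-mer sites")
```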
Noble, L D; Gow, J A
1998-03-01
Bacteria belonging to the family Vibrionaceae were suspended using saline and a solution prepared from a marine-cations supplement. The effect of this on the profile of oxidized substrates obtained when using Biolog GN MicroPlates was investigated. Thirty-nine species belonging to the genera Aeromonas, Listonella, Photobacterium, and Vibrio were studied. Of the strains studied, species of Listonella, Photobacterium, and Vibrio could be expected to benefit from a marine-cations supplement that contained Na+, K+, and Mg2+. Bacteria that are not of marine origin are usually suspended in normal saline. Of the 39 species examined, 9 were not included in the Biolog data base and were not identified. Of the 30 remaining species, 50% were identified correctly using either of the suspending solutions. A further 20% were correctly identified only when suspended in saline. Three species, or 10%, were correctly identified only after suspension in the marine-cations supplemented solution. The remaining 20% of species were not correctly identified by either method. Generally, more substrates were oxidized when the bacteria had been suspended in the more complex salts solution. Usually, when identifications were incorrect, the use of the marine-cations supplemented suspending solution had resulted in many more substrates being oxidized. Based on these results, it would be preferable to use saline to suspend the cells when using Biolog for identification of species of Vibrionaceae. A salts solution containing a marine-cations supplement would be preferable for environmental studies where the objective is to determine profiles of substrates that the bacteria have the potential to oxidize. If identifications are done using marine-cations supplemented suspending solution, it would be advisable to include reference cultures to determine the effect of the supplement. Of the Vibrio and Listonella species associated with human clinical specimens, 8 out of the 11 studied were identified correctly when either of the suspending solutions was used.
Digital signal processing methods for biosequence comparison.
Benson, D C
1990-01-01
A method is discussed for DNA or protein sequence comparison using a finite field fast Fourier transform, a digital signal processing technique, and statistical methods are discussed for analyzing the output of this algorithm. This method compares two sequences of length N in computing time proportional to N log N, compared to N² for methods currently used. This makes it feasible to compare very long sequences. An example is given to show that the method correctly identifies sites of known homology. PMID:2349096
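A minimal sketch of the FFT trick for sequence comparison, here as a generic indicator-vector cross-correlation over the complex field rather than the paper's finite-field variant: it counts positional matches between two DNA strings at every relative shift in O(N log N).

```python
import numpy as np

def match_counts(a: str, b: str, alphabet="ACGT"):
    """Number of positional matches between `a` and `b` at every relative
    shift, computed with FFT cross-correlation in O(N log N)."""
    n = len(a) + len(b) - 1
    total = np.zeros(n)
    for ch in alphabet:
        ia = np.array([c == ch for c in a], float)
        ib = np.array([c == ch for c in b], float)
        # Reversing one indicator turns convolution into correlation.
        total += np.fft.irfft(np.fft.rfft(ia, n) * np.fft.rfft(ib[::-1], n), n)
    return np.rint(total).astype(int)   # index len(b)-1 corresponds to zero shift

counts = match_counts("ACGTACGTAA", "TACGTACG")
best_shift = counts.argmax() - (len("TACGTACG") - 1)
print(best_shift, counts.max())         # best alignment offset and its match count
```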
New methods for engineering site characterization using reflection and surface wave seismic survey
NASA Astrophysics Data System (ADS)
Chaiprakaikeow, Susit
This study presents two new seismic testing methods for engineering applications, a new shallow seismic reflection method and Time Filtered Analysis of Surface Waves (TFASW). Both methods are described in this dissertation. The new shallow seismic reflection method was developed to measure reflection at a single point using two to four receivers, assuming homogeneous, horizontal layering. It uses one or more shakers driven by a swept sine function as a source, and the cross-correlation technique to identify wave arrivals. The phase difference between the source forcing function and the ground motion due to the dynamic response of the shaker-ground interface was corrected by using a reference geophone. Attenuated high frequency energy was also recovered using whitening in the frequency domain. The new shallow seismic reflection testing was performed at the crest of Porcupine Dam in Paradise, Utah. The testing used two horizontal Vibroseis sources and four receivers for spacings between 6 and 300 ft. Unfortunately, the results showed no clear evidence of the reflectors despite correction of the magnitude and phase of the signals. However, an improvement in the shape of the cross-correlations was noticed after the corrections. The results showed distinct primary lobes in the corrected cross-correlated signals up to 150 ft offset. More consistent maximum peaks were observed in the corrected waveforms. TFASW is a new surface (Rayleigh) wave method to determine the shear wave velocity profile at a site. It is a time domain method as opposed to the Spectral Analysis of Surface Waves (SASW) method, which is a frequency domain method. This method uses digital filtering to optimize the bandwidth used to determine the dispersion curve. Results from tests at three different sites in Utah indicated good agreement between the dispersion curves measured using the TFASW and SASW methods. The advantage of the TFASW method is that the dispersion curves show less scatter at long wavelengths as a result of the wider bandwidth used in those tests.
ERIC Educational Resources Information Center
Rosado, Javier I.; Pfeiffer, Steven; Petscher, Yaacov
2015-01-01
The challenge of correctly identifying gifted students is a critical issue. Gifted education in Puerto Rico is marked by insufficient support and a lack of appropriate identification methods. This study examined the reliability and validity of a Spanish translation of the "Gifted Rating Scales-School Form" (GRS) with a sample of 618…
Machine-learned cluster identification in high-dimensional data.
Ultsch, Alfred; Lötsch, Jörn
2017-02-01
High-dimensional biomedical data are frequently clustered to identify subgroup structures pointing at distinct disease subtypes. It is crucial that the cluster algorithm used works correctly. However, by imposing a predefined shape on the clusters, classical algorithms occasionally suggest a cluster structure in homogeneously distributed data or assign data points to incorrect clusters. We analyzed whether this can be avoided by using emergent self-organizing feature maps (ESOM). Data sets with different degrees of complexity were submitted to ESOM analysis with large numbers of neurons, using an interactive R-based bioinformatics tool. On top of the trained ESOM the distance structure in the high-dimensional feature space was visualized in the form of a so-called U-matrix. Clustering results were compared with those provided by classical common cluster algorithms including single linkage, Ward and k-means. Ward clustering imposed cluster structures on cluster-less "golf ball", "cuboid" and "S-shaped" data sets that contained no structure at all (random data). Ward clustering also imposed structures on permuted real-world data sets. By contrast, the ESOM/U-matrix approach correctly found that these data contain no cluster structure. However, ESOM/U-matrix was correct in identifying clusters in biomedical data truly containing subgroups. It was also always correct in cluster structure identification in further canonical artificial data. Using intentionally simple data sets, it is shown that popular clustering algorithms typically used for biomedical data sets may fail to cluster data correctly, suggesting that they are also likely to perform erroneously on high-dimensional biomedical data. The present analyses emphasized that generally established classical hierarchical clustering algorithms carry a considerable tendency to produce erroneous results. By contrast, unsupervised machine-learned analysis of cluster structures, applied using the ESOM/U-matrix method, is a viable, unbiased method to identify true clusters in the high-dimensional space of complex data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
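ESOM/U-matrix tools are not part of the standard Python libraries, but the failure mode described above is easy to reproduce: k-means dutifully returns the requested number of "clusters" even on structureless data. The sketch below contrasts structureless and truly clustered data, with the silhouette score as one quick hint of whether the partition is real.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(5)

# "Golf ball"-style structureless data: uniform points in a cube.
no_structure = rng.uniform(-1, 1, size=(600, 3))

# Truly clustered data: two well-separated Gaussian blobs.
clustered = np.vstack([rng.normal(-3, 0.5, (300, 3)),
                       rng.normal(3, 0.5, (300, 3))])

for name, data in [("no structure", no_structure), ("two blobs", clustered)]:
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
    # k-means always returns 2 "clusters"; the silhouette hints whether they exist.
    print(f"{name}: silhouette = {silhouette_score(data, labels):.2f}")
```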
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-07
... reduction requirement, the method used must be suitable for the entire range of emissions since pre and post... demand response. The proposed amendments also correct minor mistakes in the pre-existing regulations...: Submit your comments, identified by Docket ID No. EPA-HQ- OAR-2008-0708, by one of the following methods...
Jain, Ram B
2017-07-01
Prevalence of smoking is needed to estimate the need for future public health resources. The objective was to compute and compare smoking prevalence rates estimated from self-reported smoking status, from two serum cotinine (SCOT) based biomarker methods, and from one urinary 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol (NNAL) based biomarker method, and to use these estimates to develop correction factors applicable to self-reported prevalences to arrive at corrected smoking prevalence rates. Data from the National Health and Nutrition Examination Survey (NHANES) for 2007-2012 for those aged ≥20 years (N = 16826) were used. The self-reported prevalence rate for the total population was 21.6% when computed as the weighted number of self-reported smokers divided by the weighted number of all participants, and 24% when computed as the weighted number of self-reported smokers divided by the weighted number of self-reported smokers and nonsmokers. The corrected prevalence rate was found to be 25.8%. A 1% underestimate in smoking prevalence is equivalent to failing to identify 2.2 million smokers in the US in a given year. This underestimation, if not corrected, could lead to a serious gap between the public health services available and those needed to provide adequate preventive and corrective treatment to smokers.
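A worked sketch of the correction arithmetic, using the prevalence figures quoted above; the multiplicative form of the correction factor is an assumption made for illustration, not necessarily the paper's exact formulation.

```python
# Figures from the abstract.
self_reported = 21.6      # % smokers by self-report, total-population denominator
corrected = 25.8          # % after biomarker-based correction

correction_factor = corrected / self_reported
print(f"correction factor = {correction_factor:.3f}")   # ~1.194

# Applying the same factor to the alternative self-reported estimate:
print(f"24.0% self-reported -> {24.0 * correction_factor:.1f}% corrected")

# Scale of the undercount: the abstract equates a 1% underestimate with
# ~2.2 million smokers, so the 4.2-point gap corresponds to roughly:
print(f"undercount = {(corrected - self_reported) * 2.2:.1f} million smokers")
```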
Estimation of Skidding Offered by Ackermann Mechanism
NASA Astrophysics Data System (ADS)
Rao, Are Padma; Venkatachalam, Rapur
2016-04-01
Steering for a four-wheeler is commonly provided by the Ackermann mechanism. Although it cannot always satisfy the correct steering condition, it is very popular because of its simplicity. Correct steering avoids skidding of the tires and thereby extends their lives, as tire wear is reduced. In this paper the Ackermann mechanism is analyzed for its performance, and a method of estimating the skidding due to improper steering is proposed. Two parameters are identified with which the length of skidding can be estimated.
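The correct steering condition being violated here is the classical Ackermann relation cot δo − cot δi = w/L (w: kingpin track, L: wheelbase). The sketch below compares a hypothetical linkage output against the ideal curve; the paper's skid-length estimation itself is not reproduced, and the perturbed mechanism curve is only a stand-in for a real four-bar linkage solution.

```python
import numpy as np

L, w = 2.5, 1.4            # wheelbase and kingpin track in metres (illustrative)

def ideal_outer(delta_inner):
    """Outer wheel angle satisfying cot(delta_o) - cot(delta_i) = w / L."""
    return np.arctan(1.0 / (1.0 / np.tan(delta_inner) + w / L))

# Hypothetical Ackermann linkage output for comparison (a real analysis would
# solve the trapezoidal four-bar linkage; here the ideal curve is perturbed).
delta_i = np.radians(np.linspace(5, 40, 8))
delta_o_mech = ideal_outer(delta_i) * (1 - 0.03 * (delta_i / delta_i.max()) ** 2)

err = np.degrees(ideal_outer(delta_i) - delta_o_mech)
for di, e in zip(np.degrees(delta_i), err):
    print(f"inner {di:5.1f} deg -> steering error {e:6.3f} deg")
# The angular error is what drives the skidding the paper sets out to estimate.
```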
Patients' misunderstanding of common orthopaedic terminology: the need for clarity
Bagley, CHM; Hunter, AR; Bacarese-Hamilton, IA
2011-01-01
INTRODUCTION Patients' understanding of their medical problems is essential to allow them to make competent decisions, comply with treatment and enable recovery. We investigated Patients' understanding of orthopaedic terms to identify those words surgeons should make the most effort to explain. METHODS This questionnaire-based study recruited patients attending the orthopaedic clinics. Qualitative and quantitative data were collected using free text boxes for the Patients' written definitions and multiple choice questions (MCQs). RESULTS A total of 133 patients took part. Of these, 74% identified English as their first language. ‘Broken bone’ was correctly defined by 71% of respondents whereas ‘fractured bone’ was only correctly defined by 33%. ‘Sprain’ was correctly defined by 17% of respondents, with 29% being almost correct, 25% wrong and 29% unsure. In the MCQs, 51% of respondents answered correctly for ‘fracture’, 55% for ‘arthroscopy’, 46% for ‘meniscus’, 35% for ‘tendon’ and 23% for ‘ligament’. ‘Sprained’ caused confusion, with only 11% of patients answering correctly. Speaking English as a second language was a significant predictive factor for patients who had difficulty with definitions. There was no significant variation among different age groups. CONCLUSIONS Care should be taken by surgeons when using basic and common orthopaedic terminology in order to avoid misunderstanding. Educating patients in clinic is a routine part of practice. PMID:21943466
Rapid identification of bacteria from bioMérieux BacT/ALERT blood culture bottles by MALDI-TOF MS.
Haigh, J D; Green, I M; Ball, D; Eydmann, M; Millar, M; Wilks, M
2013-01-01
Several studies have reported poor results when trying to identify microorganisms directly from the bioMérieux BacT/ALERT blood culture system using matrix-assisted laser desorption/ionisation-time of flight (MALDI-TOF) mass spectrometry. The aim of this study is to evaluate two new methods, Sepsityper and an enrichment method, for direct identification of microorganisms from this system. For both methods the samples were processed using the Bruker Microflex LT mass spectrometer (Biotyper) using the Microflex Control software to obtain spectra. The results from direct analysis were compared with those obtained by subculture and subsequent identification. A total of 350 positive blood cultures were processed simultaneously by the two methods. Fifty-three cultures were polymicrobial or failed to grow any organism on subculture, and these results were not included, as there was either no subculture result or, for polymicrobial cultures, it was known that the Biotyper would not be able to distinguish the constituent organisms correctly. Overall, the results showed that, contrary to previous reports, it is possible to identify bacteria directly from bioMérieux blood culture bottles, as 219/297 (74%) correct identifications were obtained using the Bruker Sepsityper method and 228/297 (77%) were obtained for the enrichment method when only one organism was present. Although the enrichment method was simpler, the reagent costs for the Sepsityper method were approximately £4.00 per sample compared to £0.50. An even simpler and cheaper method, which was less labour-intensive and did not require further reagents, was investigated. Seventy-seven specimens from positive-signalled blood cultures were analysed by inoculating prewarmed blood agar plates and analysing any growth after 1-, 2- and 4-h periods of incubation at 37°C, by either direct transfer or alcohol extraction. This method gave the highest number of correct identifications, 66/77 (86%), and was cheaper and less labour-intensive than either of the two above methods.
Assessing the Assessment Methods: Climate Change and Hydrologic Impacts
NASA Astrophysics Data System (ADS)
Brekke, L. D.; Clark, M. P.; Gutmann, E. D.; Mizukami, N.; Mendoza, P. A.; Rasmussen, R.; Ikeda, K.; Pruitt, T.; Arnold, J. R.; Rajagopalan, B.
2014-12-01
The Bureau of Reclamation, the U.S. Army Corps of Engineers, and other water management agencies have an interest in developing reliable, science-based methods for incorporating climate change information into longer-term water resources planning. Such assessments must quantify projections of future climate and hydrology, typically relying on some form of spatial downscaling and bias correction to produce watershed-scale weather information that subsequently drives hydrology and other water resource management analyses (e.g., water demands, water quality, and environmental habitat). Water agencies continue to face challenging method decisions in these endeavors: (1) which downscaling method should be applied and at what resolution; (2) what observational dataset should be used to drive downscaling and hydrologic analysis; (3) what hydrologic model(s) should be used and how should these models be configured and calibrated? There is a critical need to understand the ramifications of these method decisions, as they affect the signal and uncertainties produced by climate change assessments and, thus, adaptation planning. This presentation summarizes results from a three-year effort to identify strengths and weaknesses of widely applied methods for downscaling climate projections and assessing hydrologic conditions. Methods were evaluated from two perspectives: historical fidelity, and tendency to modulate a global climate model's climate change signal. On downscaling, four methods were applied at multiple resolutions: statistically using Bias Correction Spatial Disaggregation, Bias Correction Constructed Analogs, and Asynchronous Regression; dynamically using the Weather Research and Forecasting model. Downscaling results were then used to drive hydrologic analyses over the contiguous U.S. using multiple models (VIC, CLM, PRMS), with added focus placed on case study basins within the Colorado Headwaters. The presentation will identify which types of climate changes are expressed robustly across methods versus those that are sensitive to method choice; which method choices seem relatively more important; and where strategic investments in research and development can substantially improve guidance on climate change provided to water managers.
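Of the statistical downscaling methods listed, the shared bias-correction step is quantile-mapping-like. Below is a generic empirical quantile-mapping sketch with simulated precipitation; it is not the exact BCSD/BCCA/AR implementations evaluated in the study.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile-mapping bias correction: map each future model value
    through the historical model CDF onto the observed distribution."""
    q = np.linspace(0.01, 0.99, 99)
    mq = np.quantile(model_hist, q)          # model climatology quantiles
    oq = np.quantile(obs_hist, q)            # observed climatology quantiles
    return np.interp(model_future, mq, oq)   # transfer function, linear tails

rng = np.random.default_rng(6)
obs = rng.gamma(2.0, 3.0, 5000)              # "observed" daily precipitation
model = rng.gamma(2.0, 4.0, 5000)            # biased model climatology: too wet
future = rng.gamma(2.0, 4.4, 5000)           # model future with a +10% signal
corrected = quantile_map(model, obs, future)
print(f"raw future mean {future.mean():.2f} -> corrected {corrected.mean():.2f}")
```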
Generating quality word sense disambiguation test sets based on MeSH indexing.
Fan, Jung-Wei; Friedman, Carol
2009-11-14
Word sense disambiguation (WSD) determines the correct meaning of a word that has more than one meaning, and is a critical step in biomedical natural language processing, as interpretation of information in text can be correct only if the meanings of the component terms are correctly identified first. Quality evaluation sets are important to WSD because they can be used as representative samples for developing automatic programs and as referees for comparing different WSD programs. To help create quality test sets for WSD, we developed a MeSH-based automatic sense-tagging method that preferentially annotates terms that are topical in the text. Preliminary results were promising and revealed important issues to be addressed in biomedical WSD research. We also suggest that, by cross-validating with 2 or 3 annotators, the method should be able to efficiently generate quality WSD test sets. An online supplement is available at: http://www.dbmi.columbia.edu/~juf7002/AMIA09.
Oral and maxillofacial surgery residents have poor understanding of biostatistics.
Best, Al M; Laskin, Daniel M
2013-01-01
The purpose of this study was to evaluate residents' understanding of biostatistics and interpretation of research results. A questionnaire previously used in internal medicine residents was modified to include oral and maxillofacial surgery (OMS) examples. The survey included sections to identify demographic and educational characteristics of residents, attitudes and confidence, and the primary outcome-knowledge of biostatistics. In 2009 an invitation to the Internet survey was sent to all 106 program directors in the United States, who were requested to forward it to their residents. One hundred twelve residents responded. The percentage of residents who had taken a course in epidemiology was 53%; biostatistics, 49%; and evidence-based dentistry, 65%. Conversely, 10% of OMS residents had taken none of these classes. Across the 6-item test of knowledge of statistical methods, the mean percentage of correct answers was 38% (SD, 22%). Nearly half of the residents (42%) could not correctly identify continuous, ordinal, or nominal variables. Only 21% correctly identified a case-control study, but 79% correctly identified that the purpose of blinding was to reduce bias. Only 46% correctly interpreted a clinically unimportant and statistically nonsignificant result. None of the demographic or experience factors of OMS residents were related to statistical knowledge. Overall, OMS resident knowledge was below that of internal medicine residents (P<.0001). However, OMS residents were overconfident in their claim to understand most statistical terms. OMS residents lack knowledge in biostatistics and the interpretation of research and are thus unprepared to interpret the results of published clinical research. Residency programs should include effective biostatistical training in their curricula to prepare residents in evidence-based dentistry. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-08
... number EPA-HQ- OECA-2012-0657, to: (1) EPA online, using www.regulations.gov (our preferred method), or... estimates that are attributed to the correction of mathematical discrepancies identified in the previous ICR...
NASA Astrophysics Data System (ADS)
Liu, Xing-fa; Cen, Ming
2007-12-01
The neural network system error correction method is more precise than the least-squares and spherical harmonic function system error correction methods. The accuracy of the neural network method depends mainly on the network architecture. Analysis and simulation show that both the BP and RBF neural network system error correction methods achieve high correction accuracy; considering training speed and network size, the RBF network method is preferable to the BP network method for small training samples.
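A minimal one-dimensional sketch of RBF-based system error correction, assuming Gaussian basis functions and least-squares training on a small sample; the error model and all values are illustrative, not from the paper.

```python
import numpy as np

def rbf_fit(centers, width, x_train, err_train):
    """Fit RBF weights by least squares so the network predicts the pointing
    error, which can then be subtracted as a correction."""
    phi = np.exp(-((x_train[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(phi, err_train, rcond=None)
    return w

def rbf_predict(centers, width, w, x):
    phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    return phi @ w

# Toy system error as a function of, say, elevation angle (small sample).
rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 90, 40))
err = 0.02 * np.sin(np.radians(4 * x)) + rng.normal(0, 0.002, x.size)

centers = np.linspace(0, 90, 12)
w = rbf_fit(centers, 6.0, x, err)
resid = err - rbf_predict(centers, 6.0, w, x)
print(f"rms before {np.sqrt(np.mean(err**2)):.4f}, after {np.sqrt(np.mean(resid**2)):.4f}")
```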
Szatkiewicz, Jin P; Wang, WeiBo; Sullivan, Patrick F; Wang, Wei; Sun, Wei
2013-02-01
Structural variation is an important class of genetic variation in mammals. High-throughput sequencing (HTS) technologies promise to revolutionize copy-number variation (CNV) detection but present substantial analytic challenges. Converging evidence suggests that multiple types of CNV-informative data (e.g. read-depth, read-pair, split-read) need to be considered, and that sophisticated methods are needed for more accurate CNV detection. We observed that various sources of experimental biases in HTS confound read-depth estimation, and note that bias correction has not been adequately addressed by existing methods. We present a novel read-depth-based method, GENSENG, which uses a hidden Markov model and negative binomial regression framework to identify regions of discrete copy-number changes while simultaneously accounting for the effects of multiple confounders. Based on extensive calibration using multiple HTS data sets, we conclude that our method outperforms existing read-depth-based CNV detection algorithms. The concept of simultaneous bias correction and CNV detection can serve as a basis for combining read-depth with other types of information such as read-pair or split-read in a single analysis. A user-friendly and computationally efficient implementation of our method is freely available.
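The bias-correction half of this idea can be sketched as a negative binomial regression of window read counts on a confounder; GC content is used below as a commonly cited example covariate, and the HMM segmentation that GENSENG couples to it is omitted. Data are simulated.

```python
# Hedged sketch: fit a negative binomial GLM of read depth on GC content,
# then use the ratio of observed to expected depth as bias-corrected signal.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
gc = rng.uniform(0.3, 0.7, 2000)                 # GC fraction per genomic window
mu = np.exp(3.0 + 1.5 * (gc - 0.5))              # GC-biased expected depth
counts = rng.negative_binomial(n=10, p=10 / (10 + mu))   # overdispersed counts

X = sm.add_constant(gc)
fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.1)).fit()
expected = fit.predict(X)
ratio = counts / expected                        # relative depth after bias removal
print(fit.params, ratio[:5])
```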
Wolterink, Jelmer M; Leiner, Tim; de Vos, Bob D; Coatrieux, Jean-Louis; Kelm, B Michael; Kondo, Satoshi; Salgado, Rodrigo A; Shahzad, Rahil; Shu, Huazhong; Snoeren, Miranda; Takx, Richard A P; van Vliet, Lucas J; van Walsum, Theo; Willems, Tineke P; Yang, Guanyu; Zheng, Yefeng; Viergever, Max A; Išgum, Ivana
2016-05-01
The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular disease (CVD) events. In clinical practice, CAC is manually identified and automatically quantified in cardiac CT using commercially available software. This is a tedious and time-consuming process in large-scale studies. Therefore, a number of automatic methods that require no interaction and semiautomatic methods that require very limited interaction for the identification of CAC in cardiac CT have been proposed. Thus far, a comparison of their performance has been lacking. The objective of this study was to perform an independent evaluation of (semi)automatic methods for CAC scoring in cardiac CT using a publicly available standardized framework. Cardiac CT exams of 72 patients distributed over four CVD risk categories were provided for (semi)automatic CAC scoring. Each exam consisted of a noncontrast-enhanced calcium scoring CT (CSCT) and a corresponding coronary CT angiography (CCTA) scan. The exams were acquired in four different hospitals using state-of-the-art equipment from four major CT scanner vendors. The data were divided into 32 training exams and 40 test exams. A reference standard for CAC in CSCT was defined by consensus of two experts following a clinical protocol. The framework organizers evaluated the performance of (semi)automatic methods on test CSCT scans, per lesion, artery, and patient. Five (semi)automatic methods were evaluated. Four methods used both CSCT and CCTA to identify CAC, and one method used only CSCT. The evaluated methods correctly detected between 52% and 94% of CAC lesions with positive predictive values between 65% and 96%. Lesions in distal coronary arteries were most commonly missed and aortic calcifications close to the coronary ostia were the most common false positive errors. The majority (between 88% and 98%) of correctly identified CAC lesions were assigned to the correct artery. Linearly weighted Cohen's kappa for patient CVD risk categorization by the evaluated methods ranged from 0.80 to 1.00. A publicly available standardized framework for the evaluation of (semi)automatic methods for CAC identification in cardiac CT is described. An evaluation of five (semi)automatic methods within this framework shows that automatic per patient CVD risk categorization is feasible. CAC lesions at ambiguous locations such as the coronary ostia remain challenging, but their detection had limited impact on CVD risk determination.
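For reference, the linearly weighted Cohen's kappa reported for per-patient risk categorization can be computed as follows; the labels are made-up stand-ins for the four CVD risk classes.

```python
# Linearly weighted Cohen's kappa between reference and automatic categories.
from sklearn.metrics import cohen_kappa_score

reference = [0, 1, 2, 3, 2, 1, 0, 3, 2, 1]   # expert consensus risk categories
automatic = [0, 1, 2, 3, 1, 1, 0, 3, 2, 2]   # (semi)automatic method output
print(cohen_kappa_score(reference, automatic, weights="linear"))
```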
A survey of the accuracy of interpretation of intraoperative cholangiograms
Sanjay, Pandanaboyana; Tagolao, Sherry; Dirkzwager, Ilse; Bartlett, Adam
2012-01-01
Objectives There are few data in the literature regarding the ability of surgical trainees and surgeons to correctly interpret intraoperative cholangiograms (IOCs) during laparoscopic cholecystectomy (LC). The aim of this study was to determine the accuracy of surgeons' interpretations of IOCs. Methods Fifteen IOCs, depicting normal, variants of normal and abnormal anatomy, were sent electronically in random sequence to 20 surgical trainees and 20 consultant general surgeons. Information was also sought on the routine or selective use of IOC by respondents. Results The accuracy of IOC interpretation was poor. Only nine surgeons and nine trainees correctly interpreted the cholangiograms showing normal anatomy. Six consultant surgeons and five trainees correctly identified variants of normal anatomy on cholangiograms. Abnormal anatomy on cholangiograms was identified correctly by 18 consultant surgeons and 19 trainees. Routine IOC was practised by seven consultants and six trainees. There was no significant difference between those who performed routine and selective IOC with respect to correct identification of normal, variant and abnormal anatomy. Conclusions The present study shows that the accuracy of detection of both normal and variants of normal anatomy was poor in all grades of surgeon irrespective of a policy of routine or selective IOC. Improving operators' understanding of biliary anatomy may help to increase the diagnostic accuracy of IOC interpretation. PMID:22954003
Nontronite mineral identification in the Nilgiri Hills of Tamil Nadu using hyperspectral remote sensing
NASA Astrophysics Data System (ADS)
Vigneshkumar, M.; Yarakkula, Kiran
2017-11-01
Hyperspectral remote sensing is a tool for identifying minerals, complementing field investigation. Tamil Nadu has abundant mineral reserves, including titanium (30%), molybdenum (52%), garnet (59%), dunite (69%), vermiculite (75%), and lignite (81%). Meeting user and industry requirements calls for mineral extraction, and proper mineral identification requires sophisticated tools. Hyperspectral remote sensing provides continuous, accurate information about the Earth's surface. Nontronite is an iron-rich mineral mainly found in the Nilgiri Hills, Tamil Nadu, India. Because of their large number of bands, hyperspectral data require several preprocessing steps, such as bad-band removal, destriping, radiance conversion, and atmospheric correction. The atmospheric correction is performed using the FLAASH method. Spectral data reduction is carried out with the minimum noise fraction (MNF) method, and spatial information is reduced using the pixel purity index (PPI) with 10,000 iterations. The selected end members are compared with spectral libraries such as USGS, JPL, and JHU; nontronite is matched with a probability of 0.85. Finally, the classification is accomplished using the spectral angle mapper (SAM) method.
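The SAM score used in the final step is simply the angle between a pixel spectrum and a library end-member, with small angles indicating a likely match; the five-band spectra below are invented stand-ins, not USGS library data.

```python
# Spectral angle mapper (SAM): angle between two spectra treated as vectors.
import numpy as np

def spectral_angle(pixel, reference):
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))   # radians; smaller = better match

pixel = np.array([0.21, 0.35, 0.50, 0.44, 0.30])
nontronite_ref = np.array([0.20, 0.36, 0.52, 0.43, 0.28])   # mock end-member
print(spectral_angle(pixel, nontronite_ref))
```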
Li, Yunhai; Lee, Kee Khoon; Walsh, Sean; Smith, Caroline; Hadingham, Sophie; Sorefan, Karim; Cawley, Gavin; Bevan, Michael W
2006-03-01
Establishing transcriptional regulatory networks by analysis of gene expression data and promoter sequences shows great promise. We developed a novel promoter classification method using a Relevance Vector Machine (RVM) and Bayesian statistical principles to identify discriminatory features in the promoter sequences of genes that can correctly classify transcriptional responses. The method was applied to microarray data obtained from Arabidopsis seedlings treated with glucose or abscisic acid (ABA). Of those genes showing >2.5-fold changes in expression level, approximately 70% were correctly predicted as being up- or down-regulated (under 10-fold cross-validation), based on the presence or absence of a small set of discriminative promoter motifs. Many of these motifs have known regulatory functions in sugar- and ABA-mediated gene expression. One promoter motif that was not known to be involved in glucose-responsive gene expression was identified as the strongest classifier of glucose-up-regulated gene expression. We show it confers glucose-responsive gene expression in conjunction with another promoter motif, thus validating the classification method. We were able to establish a detailed model of glucose and ABA transcriptional regulatory networks and their interactions, which will help us to understand the mechanisms linking metabolism with growth in Arabidopsis. This study shows that machine learning strategies coupled to Bayesian statistical methods hold significant promise for identifying functionally significant promoter sequences.
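A sketch of the classification setup, using logistic regression as a stand-in because scikit-learn ships no Relevance Vector Machine; the paper's RVM would take the estimator's place. The motif presence/absence features and response labels are synthetic.

```python
# Promoter-motif classification under 10-fold cross-validation (cf. the paper),
# with logistic regression standing in for the RVM.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(200, 30))        # presence/absence of 30 candidate motifs
y = (X[:, 0] | X[:, 3]).astype(int)           # toy rule: two motifs drive the response

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=10).mean())
```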
[Wound microbial sampling methods in surgical practice, imprint techniques].
Chovanec, Z; Veverková, L; Votava, M; Svoboda, J; Peštál, A; Doležel, J; Jedlička, V; Veselý, M; Wechsler, J; Čapov, I
2012-12-01
A wound is damage to tissue. The healing process is influenced by many systemic and local factors. The most crucial, and most discussed, local factor in wound healing is infection. Surgical site infection is caused by micro-organisms. This has been known for many years; however, the conditions leading to the occurrence of infection have not yet been sufficiently described. Correct sampling technique, correct storage and transportation, and valid evaluation and interpretation of the resulting data are very important in clinical practice. There are many methods for microbiological sampling, but the best one has not yet been identified and validated. We aim to discuss the problem with a focus on the imprint technique.
A New Method for Atmospheric Correction of MRO/CRISM Data.
NASA Astrophysics Data System (ADS)
Noe Dobrea, Eldar Z.; Dressing, C.; Wolff, M. J.
2009-09-01
The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) aboard the Mars Reconnaissance Orbiter (MRO) collects hyperspectral images from 0.362 to 3.92 μm at 6.55 nanometers/channel, and at a spatial resolution of 20 m/pixel. The 1-2.6 μm spectral range is often used to identify and map the distribution of hydrous minerals using mineralogically diagnostic bands at 1.4 μm, 1.9 μm, and in the 2-2.5 μm region. Atmospheric correction of the 2-μm CO2 band typically employs the same methodology applied to OMEGA data (Mustard et al., Nature 454, 2008): an atmospheric opacity spectrum, obtained from the ratio of spectra from the base to spectra from the peak of Olympus Mons, is rescaled for each spectrum in the observation to fit the 2-μm CO2 band, and is subsequently used to correct the data. Three important aspects are not considered in this correction: 1) absorptions due to water vapor are improperly accounted for, 2) the band-center of each channel shifts slightly with time, and 3) multiple scattering due to atmospheric aerosols is not considered. The second issue results in misregistration of the sharp CO2 features in the 2-μm triplet, and hence poor atmospheric correction. This makes it necessary to ratio all spectra against the spectrum of a spectrally "bland" region in each observation in order to distinguish features near 1.9 μm. Here, we present an improved atmospheric correction method, which uses emission phase function (EPF) observations to correct for molecular opacity, and a discrete ordinate radiative transfer algorithm (DISORT - Stamnes et al., Appl. Opt. 27, 1988) to correct for the effects of multiple scattering. This method results in a significant improvement in the correction of the 2-μm CO2 band, allowing us to forgo the use of spectral ratios that affect the spectral shape and preclude the derivation of reflectance values in the data.
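The scaling step that the authors improve upon can be sketched as fitting the exponent applied to a transmission spectrum so that the 2-μm band vanishes from the corrected spectrum; the wavelength grid, transmission shape, and surface spectrum below are synthetic.

```python
# Hedged sketch of transmission-spectrum scaling: divide the observed spectrum
# by T**beta, choosing beta so the residual 2-um band depth is minimized.
import numpy as np
from scipy.optimize import minimize_scalar

wl = np.linspace(1.8, 2.2, 200)                           # wavelength (um)
T = np.exp(-0.3 * np.exp(-((wl - 2.0) / 0.02) ** 2))      # mock CO2 transmission
surface = 1.0 - 0.1 * wl                                   # mock surface spectrum
observed = surface * T ** 1.7                              # atmosphere imprinted

band = (wl > 1.95) & (wl < 2.05)
def band_residual(beta):
    corr = observed / T ** beta
    continuum = np.interp(wl[band], wl[~band], corr[~band])  # straight-line continuum
    return np.sum((corr[band] - continuum) ** 2)

beta = minimize_scalar(band_residual, bounds=(0.5, 3.0), method="bounded").x
print(beta)   # should recover ~1.7 for the synthetic case
```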
ERIC Educational Resources Information Center
Simner, Marvin L.
1985-01-01
An abbreviated scoring system for the Goodenough-Harris Draw-A-Man Test found that three items had the same overall potential for correctly identifying at-risk kindergarteners as more time-consuming scoring methods. (CL)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peña, Jaime A.; Damm, Timo; Bastgen, Jan
Purpose: Accurate noninvasive assessment of vertebral bone marrow fat fraction is important for diagnostic assessment of a variety of disorders and therapies known to affect marrow composition. Moreover, it provides a means to correct fat-induced bias of single energy quantitative computed tomography (QCT) based bone mineral density (BMD) measurements. The authors developed new segmentation and calibration methods to obtain quantitative surrogate measures of marrow-fat density in the axial skeleton. Methods: The authors developed and tested two high-resolution QCT (HR-QCT) based methods which permit segmentation of bone voids in between trabeculae, hypothesizing that they are representative of bone marrow space. The methods permit calculation of marrow content in units of mineral equivalent marrow density (MeMD). The first method is based on global thresholding and peeling (GTP) to define a volume of interest away from the transition between trabecular bone and marrow. The second method, morphological filtering (MF), uses spherical elements of different radii (0.1–1.2 mm) and automatically places them in between trabeculae to identify regions with large trabecular interspace, the bone-void space. To determine their performance, data were compared ex vivo to high-resolution peripheral CT (HR-pQCT) images as the gold standard. The performance of the methods was tested on a set of excised human vertebrae with intact bone marrow tissue representative of an elderly population with low BMD. Results: 86% (GTP) and 87% (MF) of the voxels identified as true marrow space on HR-pQCT images were correctly identified on HR-QCT images, and thus these volumes of interest can be considered to be representative of true marrow space. Within this volume, MeMD was estimated with residual errors of 4.8 mg/cm³, corresponding to accuracy errors in fat fraction on the order of 5% for both the GTP and MF methods. Conclusions: The GTP and MF methods on HR-QCT images permit noninvasive localization and densitometric assessment of marrow fat with residual accuracy errors sufficient to study disorders and therapies known to affect bone marrow composition. Additionally, the methods can be used to correct BMD for fat-induced bias. Application and testing in vivo and in longitudinal studies are warranted to determine the clinical performance and value of these methods.
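The MF idea can be sketched with binary opening by spherical structuring elements: a voxel survives at a given radius only if a sphere of that radius fits in the marrow space there. The binary volume below is random, not CT data, and the voxel radii stand in for the paper's 0.1-1.2 mm elements.

```python
# Morphological-filtering sketch: find marrow voxels where spheres of
# increasing radius fit between trabeculae, via binary opening.
import numpy as np
from scipy import ndimage
from skimage.morphology import ball

rng = np.random.default_rng(3)
bone = rng.random((40, 40, 40)) > 0.7          # mock trabecular bone mask
marrow = ~bone                                 # candidate marrow space

for radius in (1, 2, 3):                       # voxel radii as stand-ins
    fits = ndimage.binary_opening(marrow, structure=ball(radius))
    print(radius, int(fits.sum()), "voxels admit a sphere of this radius")
```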
Identification of FGF7 as a novel susceptibility locus for chronic obstructive pulmonary disease.
Brehm, John M; Hagiwara, Koichi; Tesfaigzi, Yohannes; Bruse, Shannon; Mariani, Thomas J; Bhattacharya, Soumyaroop; Boutaoui, Nadia; Ziniti, John P; Soto-Quiros, Manuel E; Avila, Lydiana; Cho, Michael H; Himes, Blanca; Litonjua, Augusto A; Jacobson, Francine; Bakke, Per; Gulsvik, Amund; Anderson, Wayne H; Lomas, David A; Forno, Erick; Datta, Soma; Silverman, Edwin K; Celedón, Juan C
2011-12-01
Traditional genome-wide association studies (GWASs) of large cohorts of subjects with chronic obstructive pulmonary disease (COPD) have successfully identified novel candidate genes, but several other plausible loci do not meet strict criteria for genome-wide significance after correction for multiple testing. The authors hypothesise that by applying unbiased weights derived from unique populations they can identify additional COPD susceptibility loci. Methods The authors performed a homozygosity haplotype analysis on a group of subjects with and without COPD to identify regions of conserved homozygosity haplotype (RCHHs). Weights were constructed based on the frequency of these RCHHs in cases versus controls and used to adjust the p values from a large collaborative GWAS of COPD. The authors identified 2318 RCHHs, of which 576 were significantly (p<0.05) over-represented in cases. After applying the weights constructed from these regions to a collaborative GWAS of COPD, the authors identified two single nucleotide polymorphisms (SNPs) in a novel gene (fibroblast growth factor-7 (FGF7)) that gained genome-wide significance by the false discovery rate method. In a follow-up analysis, both SNPs (rs12591300 and rs4480740) were significantly associated with COPD in an independent population (combined p values of 7.9E-7 and 2.8E-6, respectively). In another independent population, increased lung tissue FGF7 expression was associated with worse measures of lung function. Weights constructed from a homozygosity haplotype analysis of an isolated population successfully identify novel genetic associations from a GWAS on a separate population. This method can be used to identify promising candidate genes that fail to meet strict correction for multiple testing.
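The weighting step can be sketched with the generic weighted-FDR recipe (divide p values by normalized weights, then apply Benjamini-Hochberg); the paper's exact weighting scheme may differ, and the p values and weights below are simulated.

```python
# Weighted FDR sketch: weight p values, then Benjamini-Hochberg adjustment.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(4)
p = rng.uniform(size=1000)                  # mock GWAS p values
w = rng.uniform(0.5, 2.0, size=1000)        # mock RCHH-derived weights
w = w / w.mean()                            # weights normalized to mean 1

p_weighted = np.clip(p / w, 0, 1)
reject, p_fdr, *_ = multipletests(p_weighted, alpha=0.05, method="fdr_bh")
print(reject.sum(), "SNPs significant after weighted FDR")
```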
Influences on women's decision making about intrauterine device use in Madagascar.
Gottert, Ann; Jacquin, Karin; Rahaivondrafahitra, Bakoly; Moracco, Kathryn; Maman, Suzanne
2015-04-01
We explored influences on decision making about intrauterine device (IUD) use among women in the Women's Health Project (WHP), managed by Population Services International in Madagascar. We conducted six small group photonarrative discussions (n=18 individuals) and 12 individual in-depth interviews with women who were IUD users and nonusers. All participants had had contact with WHP counselors in three sites in Madagascar. Data analysis involved creating summaries of each transcript, coding in Atlas.ti and then synthesizing findings in a conceptual model. We identified three stages of women's decision making about IUD use, and specific forms of social support that seemed helpful at each stage. During the first stage, receiving correct information from a trusted source such as a counselor conveys IUD benefits and corrects misinformation, but lingering fears about the method often appeared to delay method adoption among interested women. During the second stage, hearing testimony from satisfied users and receiving ongoing emotional support appeared to help alleviate these fears. During the third stage, accompaniment by a counselor or peer seemed to help some women gain confidence to go to the clinic to receive the IUD. Identifying and supplying the types of social support women find helpful at different stages of the decision-making process could help program managers better respond to women's staged decision-making process about IUD use. This qualitative study suggests that women in Madagascar perceive multiple IUD benefits but also fear the method even after misinformation is corrected, leading to a staged decision-making process about IUD use. Programs should identify and supply the types of social support that women find helpful at each stage of decision making. Copyright © 2015 Elsevier Inc. All rights reserved.
McMullen, Allison R; Wallace, Meghan A; Pincus, David H; Wilkey, Kathy; Burnham, C A
2016-08-01
Invasive fungal infections have a high rate of morbidity and mortality, and accurate identification is necessary to guide appropriate antifungal therapy. With the increasing incidence of invasive disease attributed to filamentous fungi, rapid and accurate species-level identification of these pathogens is necessary. Traditional methods for identification of filamentous fungi can be slow and may lack resolution. Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) has emerged as a rapid and accurate method for identification of bacteria and yeasts, but a paucity of data exists on the performance characteristics of this method for identification of filamentous fungi. The objective of our study was to evaluate the accuracy of the Vitek MS for mold identification. A total of 319 mold isolates representing 43 genera recovered from clinical specimens were evaluated. Of these isolates, 213 (66.8%) were correctly identified using the Vitek MS Knowledge Base, version 3.0 database. When a modified SARAMIS (Spectral Archive and Microbial Identification System) database was used to augment the version 3.0 Knowledge Base, 245 (76.8%) isolates were correctly identified. Unidentified isolates were subcultured for repeat testing; 71/319 (22.3%) remained unidentified. Of the unidentified isolates, 69 were not in the database. Only 3 (0.9%) isolates were misidentified by MALDI-TOF MS (including Aspergillus amoenus [n = 2] and Aspergillus calidoustus [n = 1]) although 10 (3.1%) of the original phenotypic identifications were not correct. In addition, this methodology was able to accurately identify 133/144 (93.6%) Aspergillus sp. isolates to the species level. MALDI-TOF MS has the potential to expedite mold identification, and misidentifications are rare. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
NASA Astrophysics Data System (ADS)
Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; Holmes, Aimee; Luke, Edward
2016-06-01
Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.
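A toy version of the basic Bayesian classifier, assuming Gaussian class-conditional densities over a few radar-derived features; predict_proba supplies the per-class probabilities the paper highlights as the method's advantage. The feature values are invented stand-ins for Doppler-spectrum moments.

```python
# Gaussian naive Bayes cloud-phase sketch with probabilistic output.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# columns: reflectivity (dBZ), Doppler velocity, spectrum width (all mock)
X_train = np.array([[-20, 0.1, 0.05], [-5, 0.8, 0.3], [5, 1.5, 0.6], [-15, 0.2, 0.1]])
y_train = ["liquid", "ice", "snow", "liquid"]

clf = GaussianNB().fit(X_train, y_train)
print(clf.classes_, clf.predict_proba([[-10, 0.4, 0.2]]))   # per-class probabilities
```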
Schultz, Natalie M; Griffis, Timothy J; Lee, Xuhui; Baker, John M
2011-11-15
Plant water extracts typically contain organic materials that may cause spectral interference when using isotope ratio infrared spectroscopy (IRIS), resulting in errors in the measured isotope ratios. Manufacturers of IRIS instruments have developed post-processing software to identify the degree of contamination in water samples, and potentially correct the isotope ratios of water with known contaminants. Here, the correction method proposed by an IRIS manufacturer, Los Gatos Research, Inc., was employed and the results were compared with those obtained from isotope ratio mass spectrometry (IRMS). Deionized water was spiked with methanol and ethanol to create correction curves for δ¹⁸O and δ²H. The contamination effects of different sample types (leaf, stem, soil) and different species from agricultural fields, grasslands, and forests were compared. The average corrections in leaf samples ranged from 0.35 to 15.73‰ for δ²H and 0.28 to 9.27‰ for δ¹⁸O. The average corrections in stem samples ranged from 1.17 to 13.70‰ for δ²H and 0.47 to 7.97‰ for δ¹⁸O. There was no contamination observed in soil water. Cleaning plant samples with activated charcoal had minimal effects on the degree of spectral contamination, reducing the corrections by, on average, 0.44‰ for δ²H and 0.25‰ for δ¹⁸O. The correction method eliminated the discrepancies between IRMS and IRIS for δ¹⁸O, and greatly reduced the discrepancies for δ²H. The mean differences in isotope ratios between IRMS and the corrected IRIS method were 0.18‰ for δ¹⁸O and -3.39‰ for δ²H. The inability to create an ethanol correction curve for δ²H probably caused the larger discrepancies. We conclude that ethanol and methanol are the primary compounds causing interference in IRIS analyzers, and that each individual analyzer will probably require customized correction curves. Copyright © 2011 John Wiley & Sons, Ltd.
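The correction-curve idea can be sketched as regressing the isotope-ratio error of spiked standards on the analyzer's contamination metric and subtracting the predicted error from field samples; all numbers below are illustrative.

```python
# Build a linear correction curve from spiked standards, then apply it.
import numpy as np

methanol_metric = np.array([0.0, 0.5, 1.0, 2.0, 4.0])   # instrument contamination index
d18O_error = np.array([0.0, 0.6, 1.1, 2.3, 4.5])        # measured minus true (per mil)

coef = np.polyfit(methanol_metric, d18O_error, deg=1)

def correct(d18O_measured, metric):
    return d18O_measured - np.polyval(coef, metric)     # subtract predicted error

print(correct(-8.2, 1.5))
```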
Corrective responses in human food intake identified from an analysis of 7-d food-intake records
Bray, George A; Flatt, Jean-Pierre; Volaufova, Julia; DeLany, James P; Champagne, Catherine M
2009-01-01
Background We tested the hypothesis that ad libitum food intake shows corrective responses over periods of 1–5 d. Design This was a prospective study of food intake in women. Methods Two methods, a weighed food intake and a measured food intake, were used to determine daily nutrient intake during 2 wk in 20 women. Energy expenditure with the use of doubly labeled water was done contemporaneously with the weighed food-intake record. The daily deviations in macronutrient and energy intake from the average 7-d values were compared with the deviations observed 1, 2, 3, 4, and 5 d later to estimate the corrective responses. Results Both methods of recording food intake gave similar patterns of macronutrient and total energy intakes and for deviations from average intakes. The intraindividual CVs for energy intake ranged from ±12% to ±47% with an average of ±25%. Reported energy intake was 85.5–95.0% of total energy expenditure determined by doubly labeled water. Significant corrective responses were observed in food intakes with a 3- to 4-d lag that disappeared when data were randomized within each subject. Conclusions Human beings show corrective responses to deviations from average energy and macronutrient intakes with a lag time of 3–4 d, but not 1–2 d. This suggests that short-term studies may fail to recognize important signals of food-intake regulation that operate over several days. These corrective responses probably play a crucial role in bringing about weight stability. PMID:19064509
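The corrective-response test can be sketched as a lagged correlation between daily deviations from mean intake; a negative correlation at lags of 3-4 d would be the reported signature. The simulated data below only illustrate the computation.

```python
# Lagged-correlation sketch for detecting corrective responses in intake.
import numpy as np

rng = np.random.default_rng(5)
intake = 2000 + rng.normal(0, 300, 14)       # 2 wk of daily energy intake (kcal)
dev = intake - intake.mean()                 # deviation from average intake

for lag in range(1, 6):
    r = np.corrcoef(dev[:-lag], dev[lag:])[0, 1]
    print(f"lag {lag} d: r = {r:+.2f}")      # negative r at 3-4 d = correction
```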
Learning from examples - Generation and evaluation of decision trees for software resource analysis
NASA Technical Reports Server (NTRS)
Selby, Richard W.; Porter, Adam A.
1988-01-01
A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.
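The generate-and-evaluate idea maps directly onto a modern classification-tree fit; the module metrics and "high effort" labels below are synthetic, and the tree depth is an arbitrary choice.

```python
# Classification-tree sketch: flag software modules with high development effort.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(4700, 10))                  # 10 of the 74 collected metrics
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)    # toy "high development effort" label

tree = DecisionTreeClassifier(max_depth=4)
print(cross_val_score(tree, X, y, cv=5, scoring="accuracy").mean())
```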
Sakalauskiene, Giedre
2009-01-01
Low back pain is a worldwide problem. Great attention is given to its correction by a wide range of rehabilitation specialists. Single or integrated physical factors, physiotherapy, specific and nonspecific physical exercises, alternative treatments, and complex multidisciplinary rehabilitation are applied in the management of low back pain. The evidence-based data are analyzed in order to identify which nonpharmacological means are effective in pain correction; in addition, the effectiveness of various methods and models of low back pain management is compared in this article. Research data evaluating the effectiveness of single or integrated rehabilitation means are very controversial. There are no specific evidence-based recommendations that objectively assess the advantages of physiotherapy or physical factors or that define the indications for their prescription. Multidisciplinary rehabilitation is thought to be most effective in the management of chronic low back pain. Positive results depend on the experience of the physician and other rehabilitation specialists. A patient's motivation to participate in the process of pain control is very important, and it is recommended to inform the patient about the effectiveness of the administered methods. There is a lack of evidence-based trials evaluating the effectiveness of nonpharmacological methods of pain control in Lithuania. Therefore, greater attention from researchers and administrative structures of health care should be given to this problem in order to develop evidence-based guidelines for the effective correction of low back pain.
Convergent genetic and expression data implicate immunity in Alzheimer's disease
Jones, Lesley; Lambert, Jean-Charles; Wang, Li-San; Choi, Seung-Hoan; Harold, Denise; Vedernikov, Alexey; Escott-Price, Valentina; Stone, Timothy; Richards, Alexander; Bellenguez, Céline; Ibrahim-Verbaas, Carla A; Naj, Adam C; Sims, Rebecca; Gerrish, Amy; Jun, Gyungah; DeStefano, Anita L; Bis, Joshua C; Beecham, Gary W; Grenier-Boley, Benjamin; Russo, Giancarlo; Thornton-Wells, Tricia A; Jones, Nicola; Smith, Albert V; Chouraki, Vincent; Thomas, Charlene; Ikram, M Arfan; Zelenika, Diana; Vardarajan, Badri N; Kamatani, Yoichiro; Lin, Chiao-Feng; Schmidt, Helena; Kunkle, Brian; Dunstan, Melanie L; Ruiz, Agustin; Bihoreau, Marie-Thérèse; Reitz, Christiane; Pasquier, Florence; Hollingworth, Paul; Hanon, Olivier; Fitzpatrick, Annette L; Buxbaum, Joseph D; Campion, Dominique; Crane, Paul K; Becker, Tim; Gudnason, Vilmundur; Cruchaga, Carlos; Craig, David; Amin, Najaf; Berr, Claudine; Lopez, Oscar L; De Jager, Philip L; Deramecourt, Vincent; Johnston, Janet A; Evans, Denis; Lovestone, Simon; Letteneur, Luc; Kornhuber, Johanes; Tárraga, Lluís; Rubinsztein, David C; Eiriksdottir, Gudny; Sleegers, Kristel; Goate, Alison M; Fiévet, Nathalie; Huentelman, Matthew J; Gill, Michael; Emilsson, Valur; Brown, Kristelle; Kamboh, M Ilyas; Keller, Lina; Barberger-Gateau, Pascale; McGuinness, Bernadette; Larson, Eric B; Myers, Amanda J; Dufouil, Carole; Todd, Stephen; Wallon, David; Love, Seth; Kehoe, Pat; Rogaeva, Ekaterina; Gallacher, John; George-Hyslop, Peter St; Clarimon, Jordi; Lleὀ, Alberti; Bayer, Anthony; Tsuang, Debby W; Yu, Lei; Tsolaki, Magda; Bossù, Paola; Spalletta, Gianfranco; Proitsi, Petra; Collinge, John; Sorbi, Sandro; Garcia, Florentino Sanchez; Fox, Nick; Hardy, John; Naranjo, Maria Candida Deniz; Razquin, Cristina; Bosco, Paola; Clarke, Robert; Brayne, Carol; Galimberti, Daniela; Mancuso, Michelangelo; Moebus, Susanne; Mecocci, Patrizia; del Zompo, Maria; Maier, Wolfgang; Hampel, Harald; Pilotto, Alberto; Bullido, Maria; Panza, Francesco; Caffarra, Paolo; Nacmias, Benedetta; Gilbert, John R; Mayhaus, Manuel; Jessen, Frank; Dichgans, Martin; Lannfelt, Lars; Hakonarson, Hakon; Pichler, Sabrina; Carrasquillo, Minerva M; Ingelsson, Martin; Beekly, Duane; Alavarez, Victoria; Zou, Fanggeng; Valladares, Otto; Younkin, Steven G; Coto, Eliecer; Hamilton-Nelson, Kara L; Mateo, Ignacio; Owen, Michael J; Faber, Kelley M; Jonsson, Palmi V; Combarros, Onofre; O'Donovan, Michael C; Cantwell, Laura B; Soininen, Hilkka; Blacker, Deborah; Mead, Simon; Mosley, Thomas H; Bennett, David A; Harris, Tamara B; Fratiglioni, Laura; Holmes, Clive; de Bruijn, Renee FAG; Passmore, Peter; Montine, Thomas J; Bettens, Karolien; Rotter, Jerome I; Brice, Alexis; Morgan, Kevin; Foroud, Tatiana M; Kukull, Walter A; Hannequin, Didier; Powell, John F; Nalls, Michael A; Ritchie, Karen; Lunetta, Kathryn L; Kauwe, John SK; Boerwinkle, Eric; Riemenschneider, Matthias; Boada, Mercè; Hiltunen, Mikko; Martin, Eden R; Pastor, Pau; Schmidt, Reinhold; Rujescu, Dan; Dartigues, Jean-François; Mayeux, Richard; Tzourio, Christophe; Hofman, Albert; Nöthen, Markus M; Graff, Caroline; Psaty, Bruce M; Haines, Jonathan L; Lathrop, Mark; Pericak-Vance, Margaret A; Launer, Lenore J; Farrer, Lindsay A; van Duijn, Cornelia M; Van Broekhoven, Christine; Ramirez, Alfredo; Schellenberg, Gerard D; Seshadri, Sudha; Amouyel, Philippe; Holmans, Peter A
2015-01-01
Background Late-onset Alzheimer's disease (AD) is heritable, with 20 genes showing genome-wide association in the International Genomics of Alzheimer's Project (IGAP). To identify the biology underlying the disease, we extended these genetic data in a pathway analysis. Methods The ALIGATOR and GSEA algorithms were used in the IGAP data to identify associated functional pathways and correlated gene expression networks in human brain. Results ALIGATOR identified an excess of curated biological pathways showing enrichment of association. Enriched areas of biology included the immune response (p = 3.27×10⁻¹² after multiple testing correction for pathways), regulation of endocytosis (p = 1.31×10⁻¹¹), cholesterol transport (p = 2.96×10⁻⁹) and proteasome-ubiquitin activity (p = 1.34×10⁻⁶). Correlated gene expression analysis identified four significant network modules, all related to the immune response (corrected p = 0.002–0.05). Conclusions The immune response, regulation of endocytosis, cholesterol transport and protein ubiquitination represent prime targets for AD therapeutics. PMID:25533204
Wadlin, Jill K.; Hanko, Gayle; Stewart, Rebecca; Pape, John; Nachamkin, Irving
1999-01-01
We evaluated three commercial systems (RapID Yeast Plus System; Innovative Diagnostic Systems, Norcross, Ga.; API 20C Aux; bioMerieux-Vitek, Hazelwood, Mo.; and Vitek Yeast Biochemical Card, bioMerieux-Vitek) against an auxinographic and microscopic morphologic reference method for the ability to identify yeasts commonly isolated in our clinical microbiology laboratory. Two-hundred one yeast isolates were compared in the study. The RapID Yeast Plus System was significantly better than either API 20C Aux (193 versus 167 correct identifications; P < 0.0001) or the Vitek Yeast Biochemical Card (193 versus 173 correct identifications; P = 0.003) for obtaining correct identifications to the species level without additional testing. There was no significant difference between results obtained with API 20C Aux and the Vitek Yeast Biochemical Card system (P = 0.39). The API 20C Aux system did not correctly identify any of the Candida krusei isolates (n = 23) without supplemental testing and accounted for the major differences between the API 20C Aux and RapID Yeast Plus systems. Overall, the RapID Yeast Plus System was easy to use and is a good system for the routine identification of clinically relevant yeasts. PMID:10325356
TATARELLI, P.; LORENZI, I.; CAVIGLIA, I.; SACCO, R.A.; LA MASA, D.
2016-01-01
Summary Introduction. Hand decontamination with alcohol-based antiseptic agents is considered the best practice to reduce healthcare-associated infections. We present a new method to monitor hand hygiene, introduced in a tertiary care pediatric hospital in Northern Italy, which estimates the mean number of daily hand decontamination procedures performed per patient. Methods. The total amount of isopropyl alcohol and chlorhexidine solution supplied each quarter to each hospital ward was related to the number of hospitalization days and expressed as litres/1000 hospitalization-days (World Health Organization standard method). In addition, the ratio between the total volume of hand hygiene products supplied and the amount of product needed for one correct disinfection procedure was calculated. This number was then divided by 90 (the days in a quarter) and by the mean number of beds active each day in a unit, yielding the estimated mean number of hand hygiene procedures per patient per day (new method). Results. The two methods performed similarly for estimating adherence to correct hand disinfection procedures. The new method identified wards and/or periods with high or low adherence to the procedure and indicated where to perform interventions and how effective they were. The new method should also be easy to understand for those who are not infection-control experts. Conclusions. This method can help non-experts in infection control to understand adherence to correct hand-hygiene procedures and improve quality standards. PMID:28167854
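A worked example of the new indicator under its stated recipe; the quarterly volume, the assumed 3 mL per correct hand rub, and the bed count are invented figures.

```python
# Estimated hand hygiene procedures per patient per day from supplied volume.
liters_supplied = 45.0          # per ward per quarter (invented)
ml_per_procedure = 3.0          # assumed volume of one correct hand rub
days_in_quarter = 90
mean_active_beds = 12           # mean beds active per day (invented)

procedures = liters_supplied * 1000 / ml_per_procedure
per_patient_per_day = procedures / days_in_quarter / mean_active_beds
print(round(per_patient_per_day, 1), "hand rubs per patient per day")
```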
Yothers, Mitchell P; Browder, Aaron E; Bumm, Lloyd A
2017-01-01
We have developed a real-space method to correct distortion due to thermal drift and piezoelectric actuator nonlinearities on scanning tunneling microscope images using Matlab. The method uses the known structures typically present in high-resolution atomic and molecularly resolved images as an internal standard. Each image feature (atom or molecule) is first identified in the image. The locations of each feature's nearest neighbors are used to measure the local distortion at that location. The local distortion map across the image is simultaneously fit to our distortion model, which includes thermal drift in addition to piezoelectric actuator hysteresis and creep. The image coordinates of the features and image pixels are corrected using an inverse transform from the distortion model. We call this technique the thermal-drift, hysteresis, and creep transform. Performing the correction in real space allows defects, domain boundaries, and step edges to be excluded with a spatial mask. Additional real-space image analyses are now possible with these corrected images. Using graphite(0001) as a model system, we show lattice fitting to the corrected image, averaged unit cell images, and symmetry-averaged unit cell images. Statistical analysis of the distribution of the image features around their best-fit lattice sites measures the aggregate noise in the image, which can be expressed as feature confidence ellipsoids.
Almost, Joan; Gifford, Wendy A; Doran, Diane; Ogilvie, Linda; Miller, Crystal; Rose, Don N; Squires, Mae
2013-06-21
Nurses are the primary healthcare providers in correctional facilities. A solid knowledge and expertise that includes the use of research evidence in clinical decision making is needed to optimize nursing practice and promote positive health outcomes within these settings. The institutional emphasis on custodial care within a heavily secured, regulated, and punitive environment presents unique contextual challenges for nursing practice. Subsequently, correctional nurses are not always able to obtain training or ongoing education that is required for broad scopes of practice. The purpose of the proposed study is to develop an educational intervention for correctional nurses to support the provision of evidence-informed care. A two-phase mixed methods research design will be used. The setting will be three provincial correctional facilities. Phase one will focus on identifying nurses' scope of practice and practice needs, describing work environment characteristics that support evidence-informed practice and developing the intervention. Semi-structured interviews will be completed with nurses and nurse managers. To facilitate priorities for the intervention, a Delphi process will be used to rank the learning needs identified by participants. Based on findings, an online intervention will be developed. Phase two will involve evaluating the acceptability and feasibility of the intervention to inform a future experimental design. The context of provincial correctional facilities presents unique challenges for nurses' provision of care. This study will generate information to address practice and learning needs specific to correctional nurses. Interventions tailored to barriers and supports within specific contexts are important to enable nurses to provide evidence-informed care.
Jiang, Hongzhen; Zhao, Jianlin; Di, Jianglei; Qin, Chuan
2009-10-12
We propose an effective reconstruction method for correcting the joint misplacement of sub-holograms caused by the displacement error of the CCD in spatial synthetic aperture digital Fresnel holography. For every two adjacent sub-holograms along the motion path of the CCD, we reconstruct the corresponding holographic images under different joint distances between the sub-holograms and then find the accurate joint distance by evaluating the quality of the corresponding synthetic reconstructed images. The accurate relative positions of the sub-holograms can then be established from all of the identified joint distances, with which the accurate synthetic reconstructed image can be obtained by superposing the reconstruction results of the sub-holograms. The numerical reconstruction results agree with the theoretical analysis. Compared with the traditional reconstruction method, this method not only corrects the joint misplacement of the sub-holograms without being limited by the actual overlap between adjacent sub-holograms, but also brings the joint precision of the sub-holograms to sub-pixel accuracy.
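The joint-distance search can be sketched as scoring stitched images over candidate shifts with an image-quality metric; gradient energy stands in below for the reconstruction-quality evaluation, and the images are random placeholders.

```python
# Search for the joint distance that maximizes a quality score of the
# stitched result (gradient energy as a stand-in metric).
import numpy as np

def sharpness(img):
    gy, gx = np.gradient(img)
    return np.sum(gx ** 2 + gy ** 2)

def best_joint_distance(left, right, candidates):
    scores = {}
    for d in candidates:
        stitched = np.hstack([left, right[:, d:]])   # overlap of d pixel columns
        scores[d] = sharpness(stitched)              # stand-in for reconstruction quality
    return max(scores, key=scores.get)

rng = np.random.default_rng(7)
a, b = rng.random((64, 64)), rng.random((64, 64))
print(best_joint_distance(a, b, candidates=range(0, 8)))
```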
Development of an immunochromatographic assay for the β-adrenergic agonist feed additive zilpaterol.
Shelver, Weilin L; Smith, David J
2018-06-06
Zilpaterol is a β-adrenergic agonist feed additive approved in the United States to increase weight gain and improve feed efficiency of cattle. A zilpaterol immunochromatographic assay was developed as an economical and user-friendly rapid detection method for zilpaterol and validated using urine and tissue samples derived from animal studies. The assay sensitivity was 1.7-23.2 ng g⁻¹ or mL⁻¹ across a variety of feed and animal matrices, and the assay did not cross-react with clenbuterol or ractopamine. No sample pre-treatment of cattle and sheep urine was needed, but horse urine and feed required dilution; skeletal muscle required solvent extraction prior to testing. Of 32 incurred sheep urine samples tested, zilpaterol content was correctly identified in all but 2 samples. Horse urine containing >10 ng mL⁻¹ of incurred zilpaterol residue (n = 48) was correctly identified as zilpaterol positive. The assay correctly identified 0-day withdrawal sheep muscle samples as zilpaterol positive and the control and longer withdrawal day sheep muscle samples as negative. Zilpaterol was demonstrated to be stable in horse urine when stored at -20°C for 7 years.
Tu, Li-ping; Chen, Jing-bo; Hu, Xiao-juan; Zhang, Zhi-feng
2016-01-01
Background and Goal. The application of digital image processing techniques and machine learning methods to tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied. However, the outcomes are difficult to generalize because of a lack of color reproducibility and image standardization. Our study aims at the exploration of tongue color classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts were chosen to identify the selected tongue pictures taken by the TDA-1 tongue imaging device in TIFF format through ICC profile correction. We then compare the mean L*a*b* values of the different tongue colors and evaluate the performance of tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. The random forest method performs better than SVM in classification. The SMOTE algorithm can increase classification accuracy by addressing the imbalance among the color samples. Conclusions. Given standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible. PMID:28050555
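The SMOTE-plus-random-forest pipeline can be sketched with the imbalanced-learn package; the L*a*b* features and class counts below are synthetic.

```python
# Rebalance rare tongue-color classes with SMOTE, then fit a random forest.
# Requires the imbalanced-learn package (imblearn).
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(8)
X = rng.normal(size=(120, 3))                       # L*, a*, b* per tongue image
y = np.array([0] * 90 + [1] * 20 + [2] * 10)        # imbalanced color classes

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(np.bincount(y_bal), clf.predict(X[:3]))
```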
Silva-Rodríguez, Jesús; Aguiar, Pablo; Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor; Cortés, Julia; Garrido, Miguel; Pombar, Miguel; Ruibal, Alvaro
2014-05-01
Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) values for this effect in clinical routine. One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI based method. In addition, the 50 patients with higher extravasated doses were also evaluated using a threshold-based method. Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%) with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify this fraction of patients that might be corrected for paravenous injection effect. The authors propose the use of a manual ROI based method for estimating the effectively administered FDG dose and then correct SUV quantification in those patients fulfilling the proposed criterion.
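One plausible form of the correction (assumed here, not spelled out in the abstract): if the extravasated activity stayed at the injection site, the effectively administered dose shrinks and the SUV scales up accordingly.

```python
# SUV correction under an assumed model: SUV normalizes by injected dose, so
# replacing it with the effective dose rescales the measured value.
def corrected_suv(suv_measured, injected_mbq, extravasated_mbq):
    effective = injected_mbq - extravasated_mbq
    return suv_measured * injected_mbq / effective

print(corrected_suv(suv_measured=4.0, injected_mbq=370.0, extravasated_mbq=37.0))
```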
LeBlanc, Julia K; DeWitt, Jon; Johnson, Cynthia; Okumu, Wycliffe; McGreevy, Kathleen; Symms, Michelle; McHenry, Lee; Sherman, Stuart; Imperiale, Thomas
2009-04-01
The efficacy of a 1-injection versus a 2-injection method of EUS-guided celiac plexus block (EUS-CPB) in patients with chronic pancreatitis is not known. To compare the clinical effectiveness and safety of EUS-CPB by using 1 versus 2 injections in patients with chronic pancreatitis and pain. The secondary aim is to identify factors that predict responsiveness. A prospective randomized study. EUS-CPB was performed by using bupivacaine and triamcinolone injected into 1 or 2 sites at the level of the celiac trunk during a single EUS-CPB procedure. Duration of pain relief, onset of pain relief, and complications. Fifty subjects were enrolled (23 received 1 injection, 27 received 2 injections). The median duration of pain relief in the 31 responders was 28 days (range 1-673 days). Fifteen of 23 (65%) subjects who received 1 injection had relief from pain compared with 16 of 27 (59%) subjects who received 2 injections (P = .67). The median times to onset in the 1-injection and 2-injection groups were 21 and 14 days, respectively (P = .99). No correlation existed between duration of pain relief and time to onset of pain relief or onset within 24 hours. Age, sex, race, prior EUS-CPB, and smoking or alcohol history did not predict duration of pain relief. Telephone interviewers were not blinded. There was no difference in duration of pain relief or onset of pain relief in subjects with chronic pancreatitis and pain when the same total amount of medication was delivered in 1 or 2 injections during a single EUS-CPB procedure. Both methods were safe.
Bias correction of satellite-based rainfall data
NASA Astrophysics Data System (ADS)
Bhattacharya, Biswa; Solomatine, Dimitri
2015-04-01
Limited availability of hydro-meteorological data in many catchments restricts the possibility of reliable hydrological analyses, especially for near-real-time predictions. However, the variety of satellite-based and meteorological-model rainfall products provides new opportunities. Often the accuracy of these rainfall products, when compared to rain gauge measurements, is not impressive. The systematic differences of these rainfall products from gauge observations can be partially compensated by adopting a bias (error) correction. Many such methods correct the satellite-based rainfall data by comparing their mean value to the mean value of rain gauge data. Refined approaches may first identify a suitable time scale at which the different data products are more comparable and then employ a bias correction at that time scale. More elegant methods use quantile-to-quantile bias correction, which, however, assumes that the available (often limited) sample is adequate for comparing the probability distributions of the different rainfall products. Analysis of rainfall data and an understanding of the process of its generation reveal that the bias in different rainfall data varies in space and time. The time aspect is sometimes taken into account by considering seasonality. In this research we have adopted a bias correction approach that takes into account the variation of rainfall in space and time. A clustering-based approach is employed in which every new data point (e.g. of the Tropical Rainfall Measuring Mission (TRMM)) is first assigned to a specific cluster of that data product; then, by identifying the corresponding cluster of gauge data, the bias correction specific to that cluster is adopted. The presented approach considers the space-time variation of rainfall and, as a result, the corrected data are more realistic. Keywords: bias correction, rainfall, TRMM, satellite rainfall
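The clustering idea can be sketched as a per-cluster multiplicative bias factor: cluster the satellite records, compute the gauge-to-satellite ratio within each cluster, and apply the matching factor to new data. The rainfall values below are simulated.

```python
# Cluster-based bias correction sketch for satellite rainfall.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)
sat = rng.gamma(2.0, 5.0, size=(500, 1))         # mock satellite rainfall feature
gauge = sat[:, 0] * rng.uniform(0.7, 1.3, 500)   # co-located gauge rainfall

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(sat)
factors = {k: gauge[km.labels_ == k].mean() / sat[km.labels_ == k, 0].mean()
           for k in range(4)}                    # per-cluster bias factors

new_obs = np.array([[12.0]])
k = km.predict(new_obs)[0]
print(new_obs[0, 0] * factors[k])                # bias-corrected estimate
```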
Evaluation of the Microbial Identification System for identification of clinically isolated yeasts.
Crist, A E; Johnson, L M; Burke, P J
1996-01-01
The Microbial Identification System (MIS; Microbial ID, Inc., Newark, Del.) was evaluated for the identification of 550 clinically isolated yeasts. The organisms evaluated were fresh clinical isolates identified by methods routinely used in our laboratory (API 20C and conventional methods) and included Candida albicans (n = 294), C. glabrata (n = 145), C. tropicalis (n = 58), C. parapsilosis (n = 33), and other yeasts (n = 20). In preparation for fatty acid analysis, yeasts were inoculated onto Sabouraud dextrose agar and incubated at 28 degrees C for 24 h. Yeasts were harvested, saponified, derivatized, and extracted, and fatty acid analysis was performed according to the manufacturer's instructions. Fatty acid profiles were analyzed, and computer identifications were made with the Yeast Clinical Library (database version 3.8). Of the 550 isolates tested, 374 (68.0%) were correctly identified to the species level, with 87 (15.8%) being incorrectly identified and 89 (16.2%) giving no identification. Repeat testing of isolates giving no identification resulted in an additional 18 isolates being correctly identified. This gave the MIS an overall identification rate of 71.3%. The most frequently misidentified yeast was C. glabrata, which was identified as Saccharomyces cerevisiae 32.4% of the time. On the basis of these results, the MIS, with its current database, does not appear suitable for the routine identification of clinically important yeasts. PMID:8880489
Systems, methods and apparatus for verification of knowledge-based systems
NASA Technical Reports Server (NTRS)
Rash, James L. (Inventor); Gracinin, Denis (Inventor); Erickson, John D. (Inventor); Rouff, Christopher A. (Inventor); Hinchey, Michael G. (Inventor)
2010-01-01
Systems, methods and apparatus are provided through which in some embodiments, domain knowledge is translated into a knowledge-based system. In some embodiments, a formal specification is derived from rules of a knowledge-based system, the formal specification is analyzed, and flaws in the formal specification are used to identify and correct errors in the domain knowledge, from which a knowledge-based system is translated.
Goh, Swee Han; Driedger, David; Gillett, Sandra; Low, Donald E.; Hemmingsen, Sean M.; Amos, Mayben; Chan, David; Lovgren, Marguerite; Willey, Barbara M.; Shaw, Carol; Smith, John A.
1998-01-01
It was recently reported that Streptococcus iniae, a bacterial pathogen of aquatic animals, can cause serious disease in humans. Using the chaperonin 60 (Cpn60) gene identification method with reverse checkerboard hybridization and chemiluminescent detection, we correctly identified each of 12 S. iniae samples among 34 aerobic gram-positive isolates from animal and clinical human sources. PMID:9650992
Identifying the theory of dark matter with direct detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gluscevic, Vera; Gresham, Moira I.; McDermott, Samuel D.
2015-12-01
Identifying the true theory of dark matter depends crucially on accurately characterizing interactions of dark matter (DM) with other species. In the context of DM direct detection, we present a study of the prospects for correctly identifying the low-energy effective DM-nucleus scattering operators connected to UV-complete models of DM-quark interactions. We take a census of plausible UV-complete interaction models with different low-energy leading-order DM-nuclear responses. For each model (corresponding to different spin-, momentum-, and velocity-dependent responses), we create a large number of realizations of recoil-energy spectra, and use Bayesian methods to investigate the probability that experiments will be able to select the correct scattering model within a broad set of competing scattering hypotheses. We conclude that agnostic analysis of a strong signal (such as Generation-2 would see if cross sections are just below the current limits) seen on xenon and germanium experiments is likely to correctly identify momentum dependence of the dominant response, ruling out models with either 'heavy' or 'light' mediators, and enabling downselection of allowed models. However, a unique determination of the correct UV completion will critically depend on the availability of measurements from a wider variety of nuclear targets, including iodine or fluorine. We investigate how model-selection prospects depend on the energy window available for the analysis. In addition, we discuss accuracy of the DM particle mass determination under a wide variety of scattering models, and investigate the impact of specific types of particle-physics uncertainties on prospects for model selection.
Prenatal Diagnosis of Placenta Accreta: Sonography or Magnetic Resonance Imaging?
Dwyer, Bonnie K.; Belogolovkin, Victoria; Tran, Lan; Rao, Anjali; Carroll, Ian; Barth, Richard; Chitkara, Usha
2009-01-01
Objective The purpose of this study was to compare the accuracy of transabdominal sonography and magnetic resonance imaging (MRI) for prenatal diagnosis of placenta accreta. Methods A historical cohort study was undertaken at 3 institutions identifying women at risk for placenta accreta who had undergone both sonography and MRI prenatally. Sonographic and MRI findings were compared with the final diagnosis as determined at delivery and by pathologic examination. Results Thirty-two patients who had both sonography and MRI prenatally to evaluate for placenta accreta were identified. Of these, 15 had confirmation of placenta accreta at delivery. Sonography correctly identified the presence of placenta accreta in 14 of 15 patients (93% sensitivity; 95% confidence interval [CI], 80%–100%) and the absence of placenta accreta in 12 of 17 patients (71% specificity; 95% CI, 49%–93%). Magnetic resonance imaging correctly identified the presence of placenta accreta in 12 of 15 patients (80% sensitivity; 95% CI, 60%–100%) and the absence of placenta accreta in 11 of 17 patients (65% specificity; 95% CI, 42%–88%). In 7 of 32 cases, sonography and MRI had discordant diagnoses: sonography was correct in 5 cases, and MRI was correct in 2. There was no statistical difference in sensitivity (P = .25) or specificity (P = .5) between sonography and MRI. Conclusions Both sonography and MRI have fairly good sensitivity for prenatal diagnosis of placenta accreta; however, specificity does not appear to be as good as reported in other studies. In the case of inconclusive findings with one imaging modality, the other modality may be useful for clarifying the diagnosis. PMID:18716136
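The reported intervals are consistent with the usual normal-approximation (Wald) confidence interval for a proportion; a quick check under that assumption:

```python
import math

def prop_ci(successes, n, z=1.96):
    """Point estimate and Wald 95% confidence interval for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Sonography: 14/15 accreta detected, 12/17 non-accreta correctly excluded
print(prop_ci(14, 15))  # sensitivity ~0.93, CI ~(0.80, 1.00)
print(prop_ci(12, 17))  # specificity ~0.71, CI ~(0.49, 0.92)
```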
Software-assisted post-interventional assessment of radiofrequency ablation
NASA Astrophysics Data System (ADS)
Rieder, Christian; Geisler, Benjamin; Bruners, Philipp; Isfort, Peter; Na, Hong-Sik; Mahnken, Andreas H.; Hahn, Horst K.
2014-03-01
Radiofrequency ablation (RFA) is becoming a standard procedure for minimally invasive tumor treatment in clinical practice. Owing to its straightforward technical procedure, low complication rate, and low cost, RFA has become an alternative to surgical resection in the liver. To evaluate the therapy success of RFA, thorough follow-up imaging is essential. Conventionally, the shape, size, and position of tumor and coagulation are visually compared in a side-by-side manner using pre- and post-interventional images. To objectify the verification of treatment success, a novel software assistant allowing fast and accurate comparison of tumor and coagulation is proposed. In this work, the clinical value of the proposed assessment software is evaluated. In a retrospective clinical study, 39 cases of hepatic tumor ablation were evaluated using the prototype software and conventional image comparison by four radiologists with different levels of experience. The cases were randomized and evaluated in two sessions to avoid any recall bias. The radiologists rated their confidence in the correct diagnosis (local recurrence vs. no local recurrence) on a six-point scale for each case. Sensitivity, specificity, positive and negative predictive values, as well as receiver operating characteristic curves, were calculated for both methods. It is shown that the software-assisted method allows physicians to correctly identify local tumor recurrence at a higher rate than the conventional method (sensitivity: 0.6 vs. 0.35), whereas the rate of correctly identified successful ablations is slightly reduced (specificity: 0.83 vs. 0.89).
Campe, Amely; Schulz, Sophia; Bohnet, Willa
2016-01-01
Although equids have had to be tagged with a transponder since 2009, breeding associations in Germany disagree as to which method is best suited for identification (with or without hot iron branding). Therefore, the aim of this systematic literature review was to gain an overview of how effective identification is using transponders and hot iron branding, and of which factors influence the success of identification. The existing literature showed that equids can be identified by means of transponders with a probability of 85-100%, whereas symbol brandings were identified correctly in 78-89%, whole-number brandings in 0-87% and single figures in 37-92% of the readings, respectively. The successful reading of microchips can be further optimised by a correctly performed implantation process and thorough training of the persons applying them; few factors affect identification with a scanner. The removal of transponders for manipulation purposes is virtually impossible. Influences during the application of branding marks can hardly, if at all, be standardised, but they substantially influence the subsequent readability. Therefore, identification by means of hot branding cannot be considered sufficiently reliable. Impaired quality of identification can be reduced during reading but cannot be counteracted. Based on the existing studies it can be concluded that the transponder method is the best suited of the investigated methods for clearly identifying equids, being forgery-proof and permanent. It is not to be expected that applying hot branding in addition to microchips would substantially improve the probability of identification.
Candida bloodstream infection: a clinical microbiology laboratory perspective.
Pongrácz, Júlia; Kristóf, Katalin
2014-09-01
The incidence of Candida bloodstream infection (BSI) has been on the rise in several countries worldwide. Species distribution is changing; an increase in the percentage of non-albicans species, mainly fluconazole non-susceptible C. glabrata, has been reported. Existing microbiology diagnostic methods lack sensitivity, and new methods need to be developed or further evaluated for routine application. Although reliable, standardized methods for antifungal susceptibility testing are available, the determination of clinical breakpoints remains challenging. Correct species identification is important and provides information on the intrinsic susceptibility profile of the isolate. Currently, acquired resistance in clinical Candida isolates is rare, but reports indicate that it could become an issue in the future. The role of the clinical microbiology laboratory is to isolate and correctly identify the infective agent and to provide relevant and reliable susceptibility data as soon as possible to guide antifungal therapy.
Library Education in the ASEAN Countries.
ERIC Educational Resources Information Center
Atan, H. B.; Havard-Williams, P.
1987-01-01
Identifies the hierarchy of library development in Southeast Asian countries that results in the neglect of public and school libraries. Developing local library school curricula which focus on the specific needs of each country and cooperation among library schools are suggested as methods of correcting this situation. (CLB)
Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu
2013-06-01
It has been reported that light scattering can worsen the accuracy of dose distribution measurements using a radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and by light scattering was also evaluated, as was the efficacy of this correction method integrated with the red/blue correction method. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and corrected dose distribution data were subsequently created. The correction method yielded pass ratios in the dose difference evaluation that were more than 10% better than without the correction. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employed the red color only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical IMRT dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The red/blue correction method may improve the accuracy, but we recommend using it carefully, with an understanding of the characteristics of EBT2 for the red color only and for the red/blue correction method.
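The abstract does not give the functional form of the red/blue correction, so the following is only a hedged illustration of one common idea: the blue channel, which responds mainly to film thickness and scan non-uniformity rather than dose, is used to normalize the dose-sensitive red channel.

```python
import numpy as np

def red_blue_correct(red, blue):
    """Scale the red-channel signal by the blue channel's deviation from its
    mean, compensating thickness and scanner non-uniformity (illustrative)."""
    red = np.asarray(red, dtype=float)
    blue = np.asarray(blue, dtype=float)
    return red * (blue.mean() / blue)
```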
Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.
2009-01-01
Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086
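A toy comparison of the two diagnostics, using the sample mean as a stand-in for the QTL likelihood parameters; it illustrates why the EIF is so much faster: it approximates the exact case-deletion change without refitting the model for each observation.

```python
import numpy as np

def ecd_influence(x):
    """Exact case deletion: change in the estimate when each point is removed."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    loo = (x.sum() - x) / (n - 1)  # leave-one-out means, vectorized
    return x.mean() - loo

def eif_influence(x):
    """Empirical influence function: first-order approximation, no refitting."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / len(x)

x = np.array([1.2, 0.8, 1.1, 0.9, 6.0])   # one obvious outlier
print(ecd_influence(x).round(3))          # largest magnitude flags the outlier
print(eif_influence(x).round(3))          # closely tracks the exact values
```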
Statistical bias correction modelling for seasonal rainfall forecast for the case of Bali island
NASA Astrophysics Data System (ADS)
Lealdi, D.; Nurdiati, S.; Sopaheluwakan, A.
2018-04-01
Rainfall is an element of climate that is highly influential to the agricultural sector. Rainfall pattern and distribution largely determine the sustainability of agricultural activities. Therefore, information on rainfall is very useful for the agriculture sector and for farmers in anticipating possible extreme events, which often cause failures of agricultural production. This research aims to identify the biases in seasonal rainfall forecast products from ECMWF (European Centre for Medium-Range Weather Forecasts) and to build a transfer function that corrects the distribution biases, forming a new prediction model based on a quantile mapping approach. We apply this approach to the case of Bali Island and find that using bias correction methods to correct the systematic biases of the model gives better results: the new prediction model obtained with this approach outperforms the uncorrected forecasts. We also found that the bias correction generally performs better in the rainy season than in the dry season.
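A minimal empirical quantile-mapping transfer function of the kind described, assuming paired historical forecast and observed rainfall samples; interpolation details and tail handling are simplified.

```python
import numpy as np

def quantile_map(forecast_train, obs_train, forecast_new):
    """Map each new forecast value to the observed value at the same
    empirical quantile (the bias-correcting transfer function)."""
    fc_sorted = np.sort(forecast_train)
    # Empirical CDF position of each new forecast within the training forecasts
    q = np.searchsorted(fc_sorted, forecast_new) / len(fc_sorted)
    return np.quantile(np.sort(obs_train), np.clip(q, 0.0, 1.0))
```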
Different hunting strategies select for different weights in red deer.
Martínez, María; Rodríguez-Vigal, Carlos; Jones, Owen R; Coulson, Tim; San Miguel, Alfonso
2005-09-22
Much insight can be derived from records of shot animals. Most researchers using such data assume that their data represents a random sample of a particular demographic class. However, hunters typically select a non-random subset of the population and hunting is, therefore, not a random process. Here, with red deer (Cervus elaphus) hunting data from a ranch in Toledo, Spain, we demonstrate that data collection methods have a significant influence upon the apparent relationship between age and weight. We argue that a failure to correct for such methodological bias may have significant consequences for the interpretation of analyses involving weight or correlated traits such as breeding success, and urge researchers to explore methods to identify and correct for such bias in their data.
Exploring work-life issues in provincial corrections settings.
Almost, Joan; Doran, Diane; Ogilvie, Linda; Miller, Crystal; Kennedy, Shirley; Timmings, Carol; Rose, Don N; Squires, Mae; Lee, Charlotte T; Bookey-Bassett, Sue
2013-01-01
Correctional nurses hold a unique position within the nursing profession as their work environment combines the demands of two systems, corrections and health care. Nurses working within these settings must be constantly aware of security issues while ensuring that quality care is provided. The primary role of nurses in correctional health care underscores the importance of understanding nurses' perceptions about their work. The purpose of this study was to examine the work environment of nurses working in provincial correctional facilities. A mixed-methods design was used. Interviews were conducted with 13 nurses and healthcare managers (HCMs) from five facilities. Surveys were distributed to 511 nurses and HCMs in all provincial facilities across the province of Ontario, Canada. The final sample consisted of 270 nurses and 27 HCMs with completed surveys. Participants identified several key issues in their work environments, including inadequate staffing and heavy workloads, limited control over practice and scope of practice, limited resources, and challenging workplace relationships. Work environment interventions are needed to address these issues and subsequently improve the recruitment and retention of correctional nurses.
Parameter as a Switch Between Dynamical States of a Network in Population Decoding.
Yu, Jiali; Mao, Hua; Yi, Zhang
2017-04-01
Population coding is a method to represent stimuli using the collective activities of a number of neurons. Nevertheless, it is difficult to extract information from these population codes given the noise inherent in neuronal responses. Moreover, it is a challenge to identify the right parameter of the decoding model, which plays a key role in convergence. To address this problem, a population decoding model is proposed for parameter selection. Our method successfully identified the key conditions for a nonzero continuous attractor. Both the theoretical analysis and the application studies demonstrate the correctness and effectiveness of this strategy.
caCORRECT2: Improving the accuracy and reliability of microarray data in the presence of artifacts
2011-01-01
Background In previous work, we reported the development of caCORRECT, a novel microarray quality control system built to identify and correct spatial artifacts commonly found on Affymetrix arrays. We have made recent improvements to caCORRECT, including the development of a model-based data-replacement strategy and integration with typical microarray workflows via caCORRECT's web portal and caBIG grid services. In this report, we demonstrate that caCORRECT improves the reproducibility and reliability of experimental results across several common Affymetrix microarray platforms. caCORRECT represents an advance over state-of-the-art quality control methods such as Harshlighting, and acts to improve gene expression calculation techniques such as PLIER, RMA and MAS5.0, because it incorporates spatial information into outlier detection as well as outlier information into probe normalization. The ability of caCORRECT to recover accurate gene expressions from low-quality probe intensity data is assessed using a combination of real and synthetic artifacts with PCR follow-up confirmation and the affycomp spike-in data. The caCORRECT tool can be accessed at the website: http://cacorrect.bme.gatech.edu. Results We demonstrate that (1) caCORRECT's artifact-aware normalization avoids the undesirable global data warping that happens when any damaged chips are processed without caCORRECT; (2) when used upstream of RMA, PLIER, or MAS5.0, the data imputation of caCORRECT generally improves the accuracy of microarray gene expression in the presence of artifacts more than using Harshlighting or not using any quality control; (3) biomarkers selected from artifactual microarray data which have undergone the quality control procedures of caCORRECT are more likely to be reliable, as shown by both spike-in and PCR validation experiments. Finally, we present a case study of the use of caCORRECT to reliably identify biomarkers for renal cell carcinoma, yielding two diagnostic biomarkers with potential clinical utility, PRKAB1 and NNMT. Conclusions caCORRECT is shown to improve the accuracy of gene expression, and the reproducibility of experimental results in clinical application. This study suggests that caCORRECT will be useful to clean up possible artifacts in new as well as archived microarray data. PMID:21957981
Loonen, A J M; Jansz, A R; Stalpers, J; Wolffs, P F G; van den Brule, A J C
2012-07-01
Matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS) is a fast and reliable method for the identification of bacteria from agar media. Direct identification from positive blood cultures should decrease the time to obtaining the result. In this study, three different processing methods for the rapid direct identification of bacteria from positive blood culture bottles were compared. In total, 101 positive aerobic BacT/ALERT bottles were included in this study. Aliquots from all bottles were used for three bacterial processing methods, i.e., the commercially available Bruker MALDI Sepsityper kit, the commercially available Molzym MolYsis Basic5 kit and a centrifugation/washing method. In addition, the best method was used to evaluate the possibility of MALDI application after a reduced incubation time of 7 h of Staphylococcus aureus- and Escherichia coli-spiked (1,000, 100 and 10 colony-forming units [CFU]) aerobic BacT/ALERT blood cultures. Sixty-six (65%), 51 (50.5%) and 79 (78%) bottles were identified correctly at the species level when the centrifugation/washing method, MolYsis Basic5 and Sepsityper were used, respectively. Incorrect identification was obtained in 35 (35%), 50 (49.5%) and 22 (22%) bottles, respectively. Gram-positive cocci were correctly identified in 33/52 (64%) of the cases. However, Gram-negative rods showed a correct identification in 45/47 (96%) of all bottles when the Sepsityper kit was used. Seven hours of pre-incubation of S. aureus- and E. coli-spiked aerobic BacT/ALERT blood cultures never resulted in reliable identification with MALDI-TOF MS. Sepsityper is superior for the direct identification of microorganisms from aerobic BacT/ALERT bottles. Gram-negative pathogens show better results compared to Gram-positive bacteria. Reduced incubation followed by MALDI-TOF MS did not result in faster reliable identification.
Mobile image based color correction using deblurring
NASA Astrophysics Data System (ADS)
Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.
2015-03-01
Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for the prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed, using food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e., a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique combining image deblurring and color correction. The contribution consists of introducing an automatic camera-shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.
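A sketch of one plausible polynomial color-correction fit from the fiducial-marker patches; the second-order RGB expansion and least-squares fit are assumptions for illustration, not necessarily the TADA model's exact feature set.

```python
import numpy as np

def fit_color_correction(observed_rgb, reference_rgb):
    """Least-squares fit mapping observed patch colors (N x 3) to reference
    colors via a second-order polynomial expansion of RGB."""
    r, g, b = observed_rgb.T
    # Features: 1, R, G, B, R^2, G^2, B^2, RG, RB, GB
    A = np.column_stack([np.ones_like(r), r, g, b,
                         r * r, g * g, b * b, r * g, r * b, g * b])
    coeffs, *_ = np.linalg.lstsq(A, reference_rgb, rcond=None)
    return coeffs  # (10 x 3); apply to the same expansion of any pixel
```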
Christ, Ana Paula Guarnieri; Ramos, Solange Rodrigues; Cayô, Rodrigo; Gales, Ana Cristina; Hachich, Elayse Maria; Sato, Maria Inês Zanoli
2017-05-15
MALDI-TOF Mass Spectrometry Biotyping has proven to be a reliable method for identifying bacteria at the species level based on the analysis of the ribosomal protein mass fingerprint. We evaluated the usefulness of this method for identifying Enterococcus species isolated from marine recreational water at Brazilian beaches. A total of 127 Enterococcus spp. isolates were identified to species level by bioMérieux's API® 20 Strep and MALDI-TOF systems. The biochemical test identified 117/127 isolates (92%), whereas MALDI identified 100% of the isolates, with an agreement of 63% between the methods. The 16S rRNA gene sequencing of isolates with discrepant results showed that MALDI-TOF and API® correctly identified 74% and 11% of these isolates, respectively. This discrepancy probably reflects the bias of the API® system toward identifying clinical isolates. MALDI-TOF proved to be a feasible approach for identifying Enterococcus from environmental matrices, increasing the rapidity and accuracy of results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Baron, Ellen Jo; D'Souza, Holly; Qi Wang, Andrew; Gibbs, David L.
2008-01-01
The Biomic V3 microbiology system identifies bacteria by reading the color of colonies selected by the user. For CHROMagar orientation, Biomic results agreed with conventional methods for 94% of the strains assayed. For CHROMagar MRSA, Biomic correctly identified 100% of the strains tested and did not misidentify two methicillin-susceptible Staphylococcus aureus strains growing on the plates. PMID:18701661
A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*
Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.
2013-01-01
This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186
Spurious correlations and inference in landscape genetics
Samuel A. Cushman; Erin L. Landguth
2010-01-01
Reliable interpretation of landscape genetic analyses depends on statistical methods that have high power to identify the correct process driving gene flow while rejecting incorrect alternative hypotheses. Little is known about statistical power and inference in individual-based landscape genetics. Our objective was to evaluate the power of causal modelling with partial...
ERIC Educational Resources Information Center
Goncher, Andrea M.; Jayalath, Dhammika; Boles, Wageeh
2016-01-01
Concept inventory tests are one method to evaluate conceptual understanding and identify possible misconceptions. The multiple-choice question format, offering a choice between a correct selection and common misconceptions, can provide an assessment of students' conceptual understanding in various dimensions. Misconceptions of some engineering…
78 FR 60301 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-01
... Evaluation Program (HSEEP) After Action Report (AAR) Improvement Plan (IP). DATES: Comments must be submitted...) Improvement Plan (IP) provides a standardized method for reporting the results of preparedness exercises and identifying, correcting and sharing as appropriate strengths and areas for improvement. Thus, the HSEEP AAR/IP...
Identification of metastable states in peptide's dynamics
NASA Astrophysics Data System (ADS)
Ruzhytska, Svitlana; Jacobi, Martin Nilsson; Jensen, Christian H.; Nerukh, Dmitry
2010-10-01
A recently developed spectral method for identifying metastable states in Markov chains is used to analyze the conformational dynamics of a four-residue peptide valine-proline-alanine-leucine. We compare our results to empirically defined conformational states and show that the found metastable states correctly reproduce the conformational dynamics of the system.
Temperature and pressure effects on capacitance probe cryogenic liquid level measurement accuracy
NASA Technical Reports Server (NTRS)
Edwards, Lawrence G.; Haberbusch, Mark
1993-01-01
The inaccuracies of liquid nitrogen and liquid hydrogen level measurements by use of a coaxial capacitance probe were investigated as a function of fluid temperatures and pressures. Significant liquid level measurement errors were found to occur due to the changes in the fluids' dielectric constants that develop over the operating temperature and pressure ranges of the cryogenic storage tanks. The level measurement inaccuracies can be reduced by using fluid dielectric correction factors based on measured fluid temperatures and pressures. The errors in the corrected liquid level measurements were estimated based on the reported calibration errors of the temperature and pressure measurement systems. Experimental liquid nitrogen (LN2) and liquid hydrogen (LH2) level measurements were obtained using the calibrated capacitance probe equations and also by the dielectric constant correction factor method. The liquid levels obtained by the capacitance probe for the two methods were compared with the liquid level estimated from the fluid temperature profiles. Results show that the dielectric-constant-corrected liquid levels agreed within 0.5 percent of the temperature-profile-estimated liquid level. The uncorrected capacitance liquid level measurements deviated from the temperature profile level by more than 5 percent. This paper identifies the magnitude of liquid level measurement error that can occur for LN2 and LH2 fluids due to temperature and pressure effects on the dielectric constants over tank storage conditions from 5 to 40 psia. A method of reducing the level measurement errors by using dielectric constant correction factors based on fluid temperature and pressure measurements is derived. The improved accuracy from use of the correction factors is experimentally verified by comparing liquid levels derived from fluid temperature profiles.
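A hedged sketch of the correction-factor method, assuming the standard coaxial-probe model C = C0·(1 + (εr − 1)·h/L), so the wetted length h follows from the measured capacitance once the dielectric constant is known; eps_table is a hypothetical stand-in for the fluid-property lookup driven by the temperature and pressure sensors.

```python
def liquid_level(c_meas, c_empty, probe_length, eps_r):
    """Invert C = C0 * (1 + (eps_r - 1) * h / L) for the wetted length h."""
    return probe_length * (c_meas / c_empty - 1.0) / (eps_r - 1.0)

def corrected_level(c_meas, c_empty, probe_length, temp_K, press_psia, eps_table):
    """Evaluate eps_r at the measured temperature and pressure instead of a
    fixed calibration value (the dielectric correction factor)."""
    eps_r = eps_table(temp_K, press_psia)  # hypothetical property lookup
    return liquid_level(c_meas, c_empty, probe_length, eps_r)
```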
Bonan, Brigitte; Martelli, Nicolas; Berhoune, Malik; Maestroni, Marie-Laure; Havard, Laurent; Prognon, Patrice
2009-02-01
To apply the Hazard Analysis and Critical Control Points method to the preparation of anti-cancer drugs, to identify critical control points in our cancer chemotherapy process, and to propose control measures and corrective actions to manage these processes. The Hazard Analysis and Critical Control Points application began in January 2004 in our centralized chemotherapy compounding unit. From October 2004 to August 2005, monitoring of the process nonconformities was performed to assess the method. According to the Hazard Analysis and Critical Control Points method, a multidisciplinary team was formed to describe and assess the cancer chemotherapy process. This team listed all of the critical points and calculated their risk indexes according to their frequency of occurrence, their severity and their detectability. The team defined monitoring, control measures and corrective actions for each identified risk. Finally, over a 10-month period, pharmacists reported each nonconformity of the process in a follow-up document. Our team described 11 steps in the cancer chemotherapy process. The team identified 39 critical control points, including 11 of higher importance with a high risk index. Over 10 months, 16,647 preparations were performed; 1225 nonconformities were reported during this same period. The Hazard Analysis and Critical Control Points method is relevant when it is used to target a specific process such as the preparation of anti-cancer drugs. This method helped us to focus on the production steps that can have a critical influence on product quality, and led us to improve our process.
Image Corruption Detection in Diffusion Tensor Imaging for Post-Processing and Real-Time Monitoring
Li, Yue; Shea, Steven M.; Lorenz, Christine H.; Jiang, Hangyi; Chou, Ming-Chung; Mori, Susumu
2013-01-01
Due to the high sensitivity of diffusion tensor imaging (DTI) to physiological motion, clinical DTI scans often suffer a significant amount of artifacts. Tensor-fitting-based, post-processing outlier rejection is often used to reduce the influence of motion artifacts. Although it is an effective approach, when there are multiple corrupted data, this method may no longer correctly identify and reject the corrupted data. In this paper, we introduce a new criterion called “corrected Inter-Slice Intensity Discontinuity” (cISID) to detect motion-induced artifacts. We compared the performance of algorithms using cISID and other existing methods with regard to artifact detection. The experimental results show that the integration of cISID into fitting-based methods significantly improves the retrospective detection performance at post-processing analysis. The performance of the cISID criterion, if used alone, was inferior to the fitting-based methods, but cISID could effectively identify severely corrupted images with a rapid calculation time. In the second part of this paper, an outlier rejection scheme was implemented on a scanner for real-time monitoring of image quality and reacquisition of the corrupted data. The real-time monitoring, based on cISID and followed by post-processing, fitting-based outlier rejection, could provide a robust environment for routine DTI studies. PMID:24204551
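An illustrative detector in the spirit of the inter-slice intensity discontinuity criterion; the abstract does not specify cISID's correction of the raw discontinuity, so this sketch simply flags slices whose mean intensity jumps from a neighbor by more than a robust threshold.

```python
import numpy as np

def flag_corrupted_slices(volume, n_mad=4.0):
    """volume: (slices, rows, cols) diffusion-weighted image array.
    Returns indices of slices adjacent to an abnormal intensity jump."""
    means = volume.reshape(volume.shape[0], -1).mean(axis=1)
    jumps = np.abs(np.diff(means))              # adjacent-slice discontinuities
    med = np.median(jumps)
    mad = np.median(np.abs(jumps - med)) + 1e-12
    bad = np.where(jumps > med + n_mad * mad)[0]
    return np.unique(np.concatenate([bad, bad + 1]))  # either neighbor may be corrupt
```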
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonzogni, A. A.; McCutchan, E. A.; Johnson, T. D.
Fission yields form an integral part of the prediction of antineutrino spectra generated by nuclear reactors, but little attention has been paid to the quality and reliability of the data used in current calculations. Following a critical review of the thermal and fast ENDF/B-VII.1 235U fission yields, deficiencies are identified and improved yields are obtained, based on corrections of erroneous yields, consistency between decay and fission yield data, and updated isomeric ratios. These corrected yields are used to calculate antineutrino spectra using the summation method. An anomalous value for the thermal fission yield of 86Ge generates an excess of antineutrinos at 5–7 MeV, a feature which is no longer present when the corrected yields are used. Thermal spectra calculated with two distinct fission yield libraries (corrected ENDF/B and JEFF) differ by up to 6% in the 0–7 MeV energy window, allowing for a basic estimate of the uncertainty involved in the fission yield component of summation calculations. Lastly, the fast neutron antineutrino spectrum is calculated, which at the moment can only be obtained with the summation method and may be relevant for short baseline reactor experiments using highly enriched uranium fuel.
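A minimal sketch of the summation method itself: the reactor antineutrino spectrum is the fission-yield-weighted sum of the spectra of the individual fission products. The yields and per-nuclide spectra below are placeholders, not ENDF/B or JEFF data.

```python
import numpy as np

def summation_spectrum(energies, fission_yields, branch_spectra):
    """energies: array of antineutrino energies (MeV);
    fission_yields: dict nuclide -> cumulative fission yield;
    branch_spectra: dict nuclide -> callable S_i(E) for that product."""
    total = np.zeros_like(energies, dtype=float)
    for nuclide, y in fission_yields.items():
        total += y * branch_spectra[nuclide](energies)
    return total
```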
Rules based process window OPC
NASA Astrophysics Data System (ADS)
O'Brien, Sean; Soper, Robert; Best, Shane; Mason, Mark
2008-03-01
As a preliminary step towards Model-Based Process Window OPC we have analyzed the impact of correcting post-OPC layouts using rules-based methods. Image processing on the Brion Tachyon was used to identify sites where the OPC model/recipe failed to generate an acceptable solution. A set of rules for 65nm active and poly were generated by classifying these failure sites. The rules were based upon segment runlengths, figure spaces, and adjacent figure widths. 2.1 million sites for active were corrected in a small chip (comparing the pre- and post-rules-based operations), and 59 million were found at poly. Tachyon analysis of the final reticle layout found weak-margin sites distinct from those sites repaired by rules-based corrections. For the active layer more than 75% of the sites corrected by rules would have printed without a defect, indicating that most rules-based cleanups degrade the lithographic pattern. Some sites were missed by the rules-based cleanups due to either bugs in the DRC software or gaps in the rules table. In the end, dramatic changes to the reticle prevented catastrophic lithography errors, but this method is far too blunt. A more subtle model-based procedure is needed, changing only those sites which have unsatisfactory lithographic margin.
Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.
1997-01-01
Significant accomplishments made during the present reporting period are as follows: (1) We developed a new method for identifying the presence of absorbing aerosols and, simultaneously, performing atmospheric correction. The algorithm consists of optimizing the match between the top-of-atmosphere radiance spectrum and the result of models of both the ocean and aerosol optical properties; (2) We developed an algorithm for providing an accurate computation of the diffuse transmittance of the atmosphere given an aerosol model. A module for inclusion into the MODIS atmospheric-correction algorithm was completed; (3) We acquired reflectance data for oceanic whitecaps during a cruise on the RV Ka'imimoana in the Tropical Pacific (Manzanillo, Mexico to Honolulu, Hawaii). The reflectance spectrum of whitecaps was found to be similar to that for breaking waves in the surf zone measured by Frouin, Schwindling and Deschamps, however, the drop in augmented reflectance from 670 to 860 nm was not as great, and the magnitude of the augmented reflectance was significantly less than expected; and (4) We developed a method for the approximate correction for the effects of the MODIS polarization sensitivity. The correction, however, requires adequate characterization of the polarization sensitivity of MODIS prior to launch.
NASA Astrophysics Data System (ADS)
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo
2016-02-01
Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
Challenges in projecting clustering results across gene expression-profiling datasets.
Lusa, Lara; McShane, Lisa M; Reid, James F; De Cecco, Loris; Ambrogi, Federico; Biganzoli, Elia; Gariboldi, Manuela; Pierotti, Marco A
2007-11-21
Gene expression microarray studies for several types of cancer have been reported to identify previously unknown subtypes of tumors. For breast cancer, a molecular classification consisting of five subtypes based on gene expression microarray data has been proposed. These subtypes have been reported to exist across several breast cancer microarray studies, and they have demonstrated some association with clinical outcome. A classification rule based on the method of centroids has been proposed for identifying the subtypes in new collections of breast cancer samples; the method is based on the similarity of the new profiles to the mean expression profile of the previously identified subtypes. Previously identified centroids of five breast cancer subtypes were used to assign 99 breast cancer samples, including a subset of 65 estrogen receptor-positive (ER+) samples, to five breast cancer subtypes based on microarray data for the samples. The effect of mean centering the genes (i.e., transforming the expression of each gene so that its mean expression is equal to 0) on subtype assignment by method of centroids was assessed. Further studies of the effect of mean centering and of class prevalence in the test set on the accuracy of method of centroids classifications of ER status were carried out using training and test sets for which ER status had been independently determined by ligand-binding assay and for which the proportion of ER+ and ER- samples were systematically varied. When all 99 samples were considered, mean centering before application of the method of centroids appeared to be helpful for correctly assigning samples to subtypes, as evidenced by the expression of genes that had previously been used as markers to identify the subtypes. However, when only the 65 ER+ samples were considered for classification, many samples appeared to be misclassified, as evidenced by an unexpected distribution of ER+ samples among the resultant subtypes. When genes were mean centered before classification of samples for ER status, the accuracy of the ER subgroup assignments was highly dependent on the proportion of ER+ samples in the test set; this effect of subtype prevalence was not seen when gene expression data were not mean centered. Simple corrections such as mean centering of genes aimed at microarray platform or batch effect correction can have undesirable consequences because patient population effects can easily be confused with these assay-related effects. Careful thought should be given to the comparability of the patient populations before attempting to force data comparability for purposes of assigning subtypes to independent subjects.
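A compact sketch of the method-of-centroids assignment with optional gene mean centering, the preprocessing step whose sensitivity to test-set composition the abstract demonstrates; Pearson correlation to each subtype centroid is assumed as the similarity measure.

```python
import numpy as np

def assign_subtypes(expr, centroids, mean_center=True):
    """expr: (genes x samples); centroids: (genes x subtypes).
    Returns the index of the most-correlated centroid for each sample."""
    X = expr - expr.mean(axis=1, keepdims=True) if mean_center else expr
    Xn = (X - X.mean(axis=0)) / X.std(axis=0)                  # standardize each sample
    Cn = (centroids - centroids.mean(axis=0)) / centroids.std(axis=0)
    corr = Xn.T @ Cn / X.shape[0]                              # samples x subtypes
    return corr.argmax(axis=1)
```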
Ainslie, Michael A; Leighton, Timothy G
2009-11-01
The scattering cross-section σ_s of a gas bubble of equilibrium radius R_0 in liquid can be written in the form σ_s = 4πR_0^2/[(ω_1^2/ω^2 − 1)^2 + δ^2], where ω is the excitation frequency, ω_1 is the resonance frequency, and δ is a frequency-dependent dimensionless damping coefficient. A persistent discrepancy in the frequency dependence of the contribution to δ from radiation damping, denoted δ_rad, is identified and resolved, as follows. Wildt's [Physics of Sound in the Sea (Washington, DC, 1946), Chap. 28] pioneering derivation predicts a linear dependence of δ_rad on frequency, a result which Medwin [Ultrasonics 15, 7-13 (1977)] reproduces using a different method. Weston [Underwater Acoustics, NATO Advanced Study Institute Series Vol. II, 55-88 (1967)], using ostensibly the same method as Wildt, predicts the opposite relationship, i.e., that δ_rad is inversely proportional to frequency. Weston's version of the derivation of the scattering cross-section is shown here to be the correct one, thus resolving the discrepancy. Further, a correction to Weston's model is derived that amounts to a shift in the resonance frequency. A new, corrected, expression for the extinction cross-section is also derived. The magnitudes of the corrections are illustrated using examples from oceanography, volcanology, planetary acoustics, neutron spallation, and biomedical ultrasound. The corrections become significant when the bulk modulus of the gas is not negligible relative to that of the surrounding liquid.
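The cross-section above is straightforward to evaluate numerically; per the result of the paper, the frequency dependence of δ (including the radiation-damping term, inversely proportional to frequency in Weston's form) must be supplied by the caller.

```python
import math

def scattering_cross_section(R0, omega, omega1, delta):
    """sigma_s = 4*pi*R0^2 / ((omega1^2/omega^2 - 1)^2 + delta^2)."""
    return 4.0 * math.pi * R0**2 / ((omega1**2 / omega**2 - 1.0)**2 + delta**2)
```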
Regression discontinuity design in criminal justice evaluation: an introduction and illustration.
Rhodes, William; Jalbert, Sarah Kuck
2013-01-01
Corrections agencies frequently place offenders into risk categories, within which offenders receive different levels of supervision and programming. This supervision strategy is seldom evaluated but often can be through routine use of a regression discontinuity design (RDD). This article argues that RDD provides a rigorous and cost-effective method for correctional agencies to evaluate and improve supervision strategies, and advocates for using RDD routinely in corrections administration. The objective is to better employ correctional resources. This article uses a Neyman-Rubin counterfactual framework to introduce readers to RDD, to provide intuition for why RDD should be used broadly, and to motivate a deeper reading of the methodology. The article also illustrates an application of RDD to evaluate an intensive supervision program for probationers. Application of the RDD, which requires basic knowledge of regression and some special diagnostic tools, is within the competencies of many criminal justice evaluators. RDD is shown to be an effective strategy to identify the treatment effect in a community corrections agency using supervision that meets the necessary conditions for RDD. The article concludes with a critical review of how RDD compares to experimental methods to answer policy questions. The article recommends using RDD to evaluate whether differing levels of control and correction reduce criminal recidivism. It also advocates for routine use of RDD as an administrative tool to determine cut points used to assign offenders into different risk categories based on the offenders' risk scores.
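A minimal local-linear RDD estimator of the kind the article introduces: fit separate lines on each side of the risk-score cutoff within a bandwidth and take the gap at the cutoff as the treatment effect. The bandwidth and variable names are illustrative.

```python
import numpy as np

def rdd_effect(score, outcome, cutoff, bandwidth):
    """Difference in fitted outcomes at the cutoff (treated side minus
    untreated side), using local linear fits within the bandwidth."""
    x = score - cutoff
    keep = np.abs(x) <= bandwidth
    x, y = x[keep], outcome[keep]
    left = np.polyfit(x[x < 0], y[x < 0], 1)     # below-cutoff line
    right = np.polyfit(x[x >= 0], y[x >= 0], 1)  # at/above-cutoff line
    return np.polyval(right, 0.0) - np.polyval(left, 0.0)
```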
A downscaling method for the assessment of local climate change
NASA Astrophysics Data System (ADS)
Bruno, E.; Portoghese, I.; Vurro, M.
2009-04-01
The use of complementary models is necessary to study the impact of climate change scenarios on the hydrological response at different space-time scales. However, the structure of GCMs is such that their spatial resolution (hundreds of kilometres) is too coarse to describe the variability of extreme events at the basin scale (Burlando and Rosso, 2002). Bridging the space-time gap between the climate scenarios and the usual scale of the inputs to hydrological prediction models is a fundamental requisite for evaluating climate change impacts on water resources. Since models operate a simplification of a complex reality, their results cannot be expected to fit climate observations exactly. Identifying local climate scenarios for impact analysis implies the definition of more detailed local scenarios by downscaling GCM or RCM results. Among the output correction methods we consider the statistical approach by Déqué (2007), reported as a 'variable correction method', in which model outputs are corrected with a function built from the observation dataset that operates a quantile-quantile transformation (Q-Q transform). However, in the case of daily precipitation fields the Q-Q transform is not able to correct the temporal properties of the model output concerning the dry-wet lacunarity process. An alternative correction method is proposed based on a stochastic description of the arrival-duration-intensity processes, in coherence with the Poissonian Rectangular Pulse (PRP) scheme (Eagleson, 1972). In this proposed approach, the Q-Q transform is applied to the PRP variables derived from the daily rainfall datasets. The corrected PRP parameters are then used for the synthetic generation of statistically homogeneous rainfall time series that mimic the persistence of daily observations for the reference period. The PRP parameters are then forced with the GCM scenarios to generate local-scale rainfall records for the 21st century. The statistical parameters characterizing daily storm occurrence, storm intensity and duration needed to apply the PRP scheme are taken from the STARDEX collection of extreme indices.
Leynes, Andrew P; Yang, Jaewon; Wiesinger, Florian; Kaushik, Sandeep S; Shanbhag, Dattesh D; Seo, Youngho; Hope, Thomas A; Larson, Peder E Z
2018-05-01
Accurate quantification of uptake on PET images depends on accurate attenuation correction in reconstruction. Current MR-based attenuation correction methods for body PET use a fat and water map derived from a 2-echo Dixon MRI sequence in which bone is neglected. Ultrashort-echo-time or zero-echo-time (ZTE) pulse sequences can capture bone information. We propose the use of patient-specific multiparametric MRI consisting of Dixon MRI and proton-density-weighted ZTE MRI to directly synthesize pseudo-CT images with a deep learning model: we call this method ZTE and Dixon deep pseudo-CT (ZeDD CT). Methods: Twenty-six patients were scanned using an integrated 3-T time-of-flight PET/MRI system. Helical CT images of the patients were acquired separately. A deep convolutional neural network was trained to transform ZTE and Dixon MR images into pseudo-CT images. Ten patients were used for model training, and 16 patients were used for evaluation. Bone and soft-tissue lesions were identified, and the SUV max was measured. The root-mean-squared error (RMSE) was used to compare the MR-based attenuation correction with the ground-truth CT attenuation correction. Results: In total, 30 bone lesions and 60 soft-tissue lesions were evaluated. The RMSE in PET quantification was reduced by a factor of 4 for bone lesions (10.24% for Dixon PET and 2.68% for ZeDD PET) and by a factor of 1.5 for soft-tissue lesions (6.24% for Dixon PET and 4.07% for ZeDD PET). Conclusion: ZeDD CT produces natural-looking and quantitatively accurate pseudo-CT images and reduces error in pelvic PET/MRI attenuation correction compared with standard methods. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
Gettig, Jacob P
2006-04-01
To determine the prevalence of established multiple-choice test-taking correct and incorrect answer cues in the American College of Clinical Pharmacy's Updates in Therapeutics: The Pharmacotherapy Preparatory Course, 2005 Edition, as an equal or lesser surrogate indication of the prevalence of such cues in the Pharmacotherapy board certification examination. All self-assessment and patient case question-and-answer sets were assessed individually to determine whether they were subject to selected correct and incorrect answer cues commonly seen in multiple-choice question writing. If the question was considered evaluable, correct answer cues (longest answer, mid-range number, one of two similar choices, and one of two opposite choices) were tallied. In addition, incorrect answer cues (inclusionary language and grammatical mismatch) were also tallied. Each cue was counted if it did what was expected or did the opposite of what was expected. Multiple cues could be identified in each question. A total of 237 (47.7%) of 497 questions in the manual were deemed evaluable. A total of 325 correct answer cues and 35 incorrect answer cues were identified in the 237 evaluable questions. Most evaluable questions contained one or two correct and/or incorrect answer cues. Longest answer was the most frequently identified correct answer cue; however, it was the least likely to identify the correct answer. Inclusionary language was the most frequently identified incorrect answer cue. Incorrect answer cues were considerably more likely to identify incorrect answer choices than correct answer cues were able to identify correct answer choices. The use of established multiple-choice test-taking cues is unlikely to be of significant help when taking the Pharmacotherapy board certification examination, primarily because of the lack of questions subject to such cues and the inability of correct answer cues to accurately identify correct answers. Incorrect answer cues, especially the use of inclusionary language, will almost always accurately identify an incorrect answer choice. Assuming that questions in the preparatory course manual were equal or lesser surrogates of those in the board certification examination, it is unlikely that intuition alone can replace adequate preparation and studying as the sole determinant of examination success.
Bjerke, Benjamin T; Cheung, Zoe B; Shifflett, Grant D; Iyer, Sravisht; Derman, Peter B; Cunningham, Matthew E
2015-10-01
Shoulder balance in adolescent idiopathic scoliosis (AIS) patients is associated with patient satisfaction and self-image. However, few validated systems exist for selecting the upper instrumented vertebra (UIV) to achieve post-surgical shoulder balance. The purpose of this study was to examine the existing UIV selection criteria and correlate them with post-surgical shoulder balance in AIS patients. Patients who underwent spinal fusion at age 10-18 years for AIS over a 6-year period were reviewed. All patients with a minimum of 1 year of radiographic follow-up were included. Imbalance was defined as a radiographic shoulder height |RSH| ≥ 15 mm at latest follow-up. Three UIV selection methods were considered: Lenke, Ilharreborde, and Trobisch. A recommended UIV was determined using each method from pre-surgical radiographs. The recommended UIV for each method was compared to the UIV actually instrumented; concordance between these levels was defined as "Correct" UIV selection, and discordance was defined as "Incorrect" selection. One hundred seventy-one patients were included, with 2.3 ± 1.1 years of follow-up. For all methods, "Correct" UIV selection resulted in more shoulder imbalance than "Incorrect" UIV selection. Overall shoulder imbalance incidence improved from 31.0% (53/171) to 15.2% (26/171). The incidence of new shoulder imbalance in patients with previously level shoulders was 8.8%. We could not identify a set of UIV selection criteria that accurately predicted post-surgical shoulder balance; further validated measures are needed in this area. The complexity of proximal thoracic curve correction is underscored in a case example, where shoulder imbalance occurred despite "Correct" UIV selection by all methods.
Joshi, Vinayak S; Reinhardt, Joseph M; Garvin, Mona K; Abramoff, Michael D
2014-01-01
The separation of the retinal vessel network into distinct arterial and venous vessel trees is of high interest. We propose an automated method for identification and separation of retinal vessel trees in a retinal color image by converting a vessel segmentation image into a vessel segment map and identifying the individual vessel trees by graph search. Orientation, width, and intensity of each vessel segment are utilized to find the optimal graph of vessel segments. The separated vessel trees are labeled as primary vessel or branches. We utilize the separated vessel trees for arterial-venous (AV) classification, based on the color properties of the vessels in each tree graph. We applied our approach to a dataset of 50 fundus images from 50 subjects. The proposed method correctly classified 91.44% of vessel pixels as either artery or vein; for major vessel segments, the classification accuracy was 96.42%.
Stanley, Jeffrey R.; Adkins, Joshua N.; Slysz, Gordon W.; Monroe, Matthew E.; Purvine, Samuel O.; Karpievitch, Yuliya V.; Anderson, Gordon A.; Smith, Richard D.; Dabney, Alan R.
2011-01-01
Current algorithms for quantifying peptide identification confidence in the accurate mass and time (AMT) tag approach assume that the AMT tags themselves have been correctly identified. However, there is uncertainty in the identification of AMT tags, as this is based on matching LC-MS/MS fragmentation spectra to peptide sequences. In this paper, we incorporate confidence measures for the AMT tag identifications into the calculation of probabilities for correct matches to an AMT tag database, resulting in a more accurate overall measure of identification confidence for the AMT tag approach. The method is referred to as Statistical Tools for AMT tag Confidence (STAC). STAC additionally provides a Uniqueness Probability (UP) to help distinguish between multiple matches to an AMT tag and a method to calculate an overall false discovery rate (FDR). STAC is freely available for download as both a command line and a Windows graphical application. PMID:21692516
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zwan, Benjamin J., E-mail: benjamin.zwan@uon.edu.au; O’Connor, Daryl J.; King, Brian W.
2014-08-15
Purpose: To develop a frame-by-frame correction for the energy response of amorphous silicon electronic portal imaging devices (a-Si EPIDs) to radiation that has transmitted through the multileaf collimator (MLC) and to integrate this correction into the backscatter shielded EPID (BSS-EPID) dose-to-water conversion model. Methods: Individual EPID frames were acquired using a Varian frame grabber and iTools acquisition software, then processed using in-house software developed in MATLAB. For each EPID image frame, the region below the MLC leaves was identified and all pixels in this region were multiplied by a factor of 1.3 to correct for the under-response of the imager to MLC-transmitted radiation. The corrected frames were then summed to form a corrected integrated EPID image. This correction was implemented as an initial step in the BSS-EPID dose-to-water conversion model, which was then used to compute dose planes in a water phantom for 35 IMRT fields. The calculated dose planes, with and without the proposed MLC transmission correction, were compared to measurements in solid water using a two-dimensional diode array. Results: It was observed that the integration of the MLC transmission correction into the BSS-EPID dose model improved agreement between modeled and measured dose planes. In particular, the MLC correction produced higher pass rates for almost all Head and Neck fields tested, yielding an average pass rate of 99.8% for 2%/2 mm criteria. A two-sample independent t-test and Fisher F-test were used to show that the MLC transmission correction resulted in a statistically significant reduction in the mean and the standard deviation of the gamma values, respectively, to give a more accurate and consistent dose-to-water conversion. Conclusions: The frame-by-frame MLC transmission response correction was shown to improve the accuracy and reduce the variability of the BSS-EPID dose-to-water conversion model. The correction may be applied as a preprocessing step in any pretreatment portal dosimetry calculation and has been shown to be beneficial for highly modulated IMRT fields.
SU-E-J-45: The Correlation Between CBCT Flat Panel Misalignment and 3D Image Guidance Accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenton, O; Valdes, G; Yin, L
Purpose: To simulate the impact of CBCT flat panel misalignment on image quality and on the calculated correction vectors in 3D image-guided proton therapy, and to determine whether these calibration errors can be caught in our QA process. Methods: The X-ray source and detector geometrical calibration (flexmap) file of the CBCT system in the AdaPTinsight software (IBA proton therapy) was edited to induce known changes in the rotational and translational calibrations of the imaging panel. Translations of up to ±10 mm in the x, y and z directions (see supplemental) and rotational errors of up to ±3° were induced. The calibration files were then used to reconstruct the CBCT image of a pancreatic patient and a CatPhan phantom. Correction vectors were calculated for the patient using the software's auto match system and compared to baseline values. The CatPhan CBCT images were used for quantitative evaluation of image quality for each type of induced error. Results: Translations of 1 to 3 mm in the x and y calibration resulted in corresponding correction vector errors of equal magnitude. Similar 10 mm shifts were seen in the y-direction; however, in the x-direction, the image quality was too degraded for a match. These translational errors can be identified through differences in isocenter from orthogonal kV images taken during routine QA. Errors in the z-direction had no effect on the correction vector and image quality. Rotations of the imaging panel calibration resulted in corresponding correction vector rotations of the patient images. These rotations also resulted in degraded image quality, which can be identified through quantitative image quality metrics. Conclusion: Misalignment of CBCT geometry can lead to incorrect translational and rotational patient correction vectors. These errors can be identified through QA of the imaging isocenter as compared to orthogonal images, combined with monitoring of CBCT image quality.
Cemento-osseous dysplasia of the jaw bones: key radiographic features
Alsufyani, NA; Lam, EWN
2011-01-01
Objective The purpose of this study is to assess possible diagnostic differences between general dentists (GPs) and oral and maxillofacial radiologists (RGs) in the identification of pathognomonic radiographic features of cemento-osseous dysplasia (COD) and its interpretation. Methods Using a systematic objective survey instrument, 3 RGs and 3 GPs reviewed 50 image sets of COD and similarly appearing entities (dense bone island, cementoblastoma, cemento-ossifying fibroma, fibrous dysplasia, complex odontoma and sclerosing osteitis). Participants were asked to identify the presence or absence of radiographic features and then to make an interpretation of the images. Results RGs identified a well-defined border (odds ratio (OR) 6.67, P < 0.05); radiolucent periphery (OR 8.28, P < 0.005); bilateral occurrence (OR 10.23, P < 0.01); mixed radiolucent/radiopaque internal structure (OR 10.53, P < 0.01); the absence of non-concentric bony expansion (OR 7.63, P < 0.05); and the association with anterior and posterior teeth (OR 4.43, P < 0.05) as key features of COD. Consequently, RGs were able to correctly interpret 79.3% of COD cases. In contrast, GPs identified the absence of root resorption (OR 4.52, P < 0.05) and the association with anterior and posterior teeth (OR 3.22, P = 0.005) as the only key features of COD and were able to correctly interpret 38.7% of COD cases. Conclusions There are statistically significant differences between RGs and GPs in the identification and interpretation of the radiographic features associated with COD (P < 0.001). We conclude that COD is radiographically discernable from other similarly appearing entities only if the characteristic radiographic features are correctly identified and then correctly interpreted. PMID:21346079
[Differentiation by geometric morphometrics among 11 Anopheles (Nyssorhynchus) in Colombia].
Calle, David Alonso; Quiñones, Martha Lucía; Erazo, Holmes Francisco; Jaramillo, Nicolás
2008-09-01
The correct identification of the Anopheles species of the subgenus Nyssorhynchus is important because this subgenus includes the main malaria vectors in Colombia. This information is necessary for focusing a malaria control program. Geometric morphometrics were used to evaluate morphometric variation in 11 species of the subgenus Nyssorhynchus present in Colombia and to distinguish the females of each species. Materials and methods. The specimens were obtained from series and family broods from females collected using protected human hosts as attractants. The field-collected specimens and their progeny were identified at each of the associated stages by conventional keys. For some species, wild females were used. Landmarks were selected on wings from digital pictures of 336 individuals and digitized as coordinates. The coordinate matrix was processed by generalized Procrustes analysis, which generated size and shape variables free of non-biological variation. Size and shape variables were analyzed by univariate and multivariate statistics. The subdivision of the subgenus Nyssorhynchus into sections is not correlated with wing shape. Discriminant analyses correctly classified 97% of females in the section Albimanus and 86% in the section Argyritarsis. In addition, these methodologies allowed the correct identification of 3 sympatric species from Putumayo which have been difficult to identify in the adult female stage. Geometric morphometrics were demonstrated to be a very useful tool as an adjunct to the taxonomy of females; the use of this method is recommended in studies of the subgenus Nyssorhynchus in Colombia.
Insights into Inpatients with Poor Vision: A High Value Proposition
Press, Valerie G.; Matthiesen, Madeleine I.; Ranadive, Alisha; Hariprasad, Seenu M.; Meltzer, David O.; Arora, Vineet M.
2015-01-01
Background Vision impairment is an under-recognized risk factor for adverse events among hospitalized patients, yet vision is neither routinely tested nor documented for inpatients. Low-cost ($8 and up) non-prescription 'readers' may be a simple, high-value intervention to improve inpatients' vision. We aimed to study the initial feasibility and efficacy of screening and correcting inpatients' vision. Methods From June 2012 through January 2014, we tested whether non-prescription lenses corrected the vision of eligible participants who failed a vision screen (Snellen chart) performed by research assistants (RAs). Descriptive statistics and tests of comparison, including t-tests and chi-squared tests, were used when appropriate. All analyses were performed using Stata version 12 (StataCorp, College Station, TX). Results Over 800 participants' vision was screened (n=853). Older (≥65 years; 56%) participants were more likely to have insufficient vision than younger ones (<65 years; 28%; p<0.001). Non-prescription readers corrected the vision of the majority of eligible participants (82%, 95/116). Discussion Among an easily identified sub-group of inpatients with poor vision, low-cost 'readers' successfully corrected most participants' vision. Hospitalists and other clinicians working in the inpatient setting can play an important role in identifying opportunities to provide high-value care related to patients' vision. PMID:25755206
Rajkumari, N; Mathur, P; Xess, I; Misra, M C
2014-01-01
As most trauma patients require long-term hospital stay and long-term antibiotic therapy, the risk of fungal infections in such patients is steadily increasing. Early diagnosis and rapid treatment are life-saving in such critically ill trauma patients. To determine the distribution of various species of Candida among trauma patients and compare the accuracy, speed of identification and cost-effectiveness of VITEK 2, CHROMagar and conventional methods. Retrospective laboratory-based surveillance study performed over a period of 52 months (January 2009 to April 2013) at a level I trauma centre in New Delhi, India. All microbiological samples positive for Candida were processed for microbial identification using standard methods. Identification of Candida was done using chromogenic medium and by the automated VITEK 2 Compact system and later confirmed using the conventional method. Time to identification for both was noted and accuracy compared with the conventional method. Analyses were performed using the SPSS software for Windows (SPSS Inc., Chicago, IL, version 15.0). P values were calculated using the χ2 test for categorical variables. A P<0.05 was considered significant. Out of 445 yeast isolates, Candida tropicalis (217, 49%) was the species most often isolated. VITEK 2 correctly identified 354 (79.5%) isolates but could not identify 48 (10.7%) isolates and wrongly identified or showed low discrimination in 43 (9.6%) isolates, whereas CHROMagar correctly identified 381 (85.6%) isolates with 64 (14.4%) misidentifications. The highest rate of misidentification was seen in C. tropicalis and C. glabrata (13, 27.1% each) by VITEK 2 and in C. albicans (9, 14%) by CHROMagar. Though CHROMagar gives identification at a lower cost than VITEK 2 and is more accurate, which is useful in low-resource countries, its main drawback is the long duration required for complete identification.
STRIDE: Species Tree Root Inference from Gene Duplication Events.
Emms, David M; Kelly, Steven
2017-12-01
The correct interpretation of any phylogenetic tree is dependent on that tree being correctly rooted. We present STRIDE, a fast, effective, and outgroup-free method for identification of gene duplication events and species tree root inference in large-scale molecular phylogenetic analyses. STRIDE identifies sets of well-supported in-group gene duplication events from a set of unrooted gene trees, and analyses these events to infer a probability distribution over an unrooted species tree for the location of its root. We show that STRIDE correctly identifies the root of the species tree in multiple large-scale molecular phylogenetic data sets spanning a wide range of timescales and taxonomic groups. We demonstrate that the novel probability model implemented in STRIDE can accurately represent the ambiguity in species tree root assignment for data sets where information is limited. Furthermore, application of STRIDE to outgroup-free inference of the origin of the eukaryotic tree resulted in a root probability distribution that provides additional support for leading hypotheses for the origin of the eukaryotes.
Suggestibility and state anxiety: how the two concepts relate in a source identification paradigm.
Ridley, Anne M; Clifford, Brian R
2006-01-01
Source identification tests provide a stringent method for testing the suggestibility of memory because they reduce response bias and experimental demand characteristics. Using the techniques and materials of Maria Zaragoza and her colleagues, we investigated how state anxiety affects the ability of undergraduates to identify correctly the source of misleading post-event information. The results showed that individuals high in state anxiety were less likely to make source misattributions of misleading information, indicating lower levels of suggestibility. This effect was strengthened when forgotten or non-recognised misleading items (for which a source identification task is not possible) were excluded from the analysis. Confidence in the correct attribution of misleading post-event information to its source was significantly less than confidence in source misattributions. Participants who were high in state anxiety tended to be less confident than those lower in state anxiety when they correctly identified the source of both misleading post-event information and non-misled items. The implications of these findings are discussed, drawing on the literature on anxiety and cognition as well as suggestibility.
Identification of Acinetobacter seifertii isolated from Bolivian hospitals.
Cerezales, Mónica; Xanthopoulou, Kyriaki; Ertel, Julia; Nemec, Alexandr; Bustamante, Zulema; Seifert, Harald; Gallego, Lucia; Higgins, Paul G
2018-06-01
Acinetobacter seifertii is a recently described species that belongs to the Acinetobacter calcoaceticus-Acinetobacter baumannii complex. It has been recovered from clinical samples and is sometimes associated with antimicrobial resistance determinants. We present here three A. seifertii clinical isolates that were initially identified as Acinetobacter sp. by phenotypic methods but for which no identification at the species level was achieved using semi-automated identification methods. The isolates were further analysed by whole genome sequencing and identified as A. seifertii. Because A. seifertii has been isolated from serious infections such as respiratory tract and bloodstream infections, we emphasize the importance of correctly identifying isolates of the genus Acinetobacter at the species level to gain a deeper knowledge of their prevalence and clinical impact.
Paolucci, M; Foschi, C; Tamburini, M V; Ambretti, S; Lazzarotto, T; Landini, M P
2014-09-01
In this study we evaluated the MALDI-TOF MS and FilmArray methods for the rapid identification of yeasts from positive blood cultures. FilmArray correctly identified 20/22 yeast species, while MALDI-TOF MS identified 9/22. FilmArray is a reliable and rapid system for the direct identification of yeasts from positive blood cultures.
Mainenti, Pier Paolo; Iodice, Delfina; Segreto, Sabrina; Storto, Giovanni; Magliulo, Mario; Palma, Giovanni Domenico De; Salvatore, Marco; Pace, Leonardo
2011-01-01
AIM: To evaluate whether FDG-positron emission tomography (PET)/computed tomography (CT) may be an accurate technique in the assessment of the T stage in patients with colorectal cancer. METHODS: Thirty-four consecutive patients (20 men and 14 women; mean age: 63 years) with a histologically proven diagnosis of colorectal adenocarcinoma and scheduled for surgery in our hospital were enrolled in this study. All patients underwent FDG-PET/CT preoperatively. The primary tumor site and extent were evaluated on PET/CT images. Colorectal wall invasion was analysed according to a modified T classification that considers only three stages (≤ T2, T3, T4). Assessment of accuracy was carried out using 95% confidence intervals for T. RESULTS: Thirty-five of 37 (94.6%) adenocarcinomas were identified and correctly located on PET/CT images. PET/CT correctly staged the T of 33/35 lesions identified, showing an accuracy of 94.3% (95% CI: 87%-100%). All T1, T3 and T4 lesions were correctly staged, while two T2 neoplasms were overstaged as T3. CONCLUSION: Our data suggest that FDG-PET/CT may be an accurate modality for identifying primary tumors and defining their local extent in patients with colorectal cancer. PMID:21472100
A review of propeller noise prediction methodology: 1919-1994
NASA Technical Reports Server (NTRS)
Metzger, F. Bruce
1995-01-01
This report summarizes a review of the literature regarding propeller noise prediction methods. The review is divided into six sections: (1) early methods; (2) more recent methods based on earlier theory; (3) more recent methods based on the Acoustic Analogy; (4) more recent methods based on Computational Acoustics; (5) empirical methods; and (6) broadband methods. The report concludes that there are a large number of noise prediction procedures available which vary markedly in complexity. Deficiencies in the accuracy of methods in many cases may be related, not to the methods themselves, but to the accuracy and detail of the aerodynamic inputs used to calculate noise. The steps recommended in the report to provide accurate and easy to use prediction methods are: (1) identify reliable test data; (2) define and conduct test programs to fill gaps in the existing data base; (3) identify the most promising prediction methods; (4) evaluate promising prediction methods relative to the data base; (5) identify and correct the weaknesses in the prediction methods, including lack of user friendliness, and include features now available only in research codes; (6) confirm the accuracy of improved prediction methods against the data base; and (7) make the methods widely available and provide training in their use.
Comparison of methods for quantitative evaluation of endoscopic distortion
NASA Astrophysics Data System (ADS)
Wang, Quanzeng; Castro, Kurt; Desai, Viraj N.; Cheng, Wei-Chung; Pfefer, Joshua
2015-03-01
Endoscopy is a well-established paradigm in medical imaging, and emerging endoscopic technologies such as high resolution, capsule and disposable endoscopes promise significant improvements in effectiveness, as well as patient safety and acceptance of endoscopy. However, the field lacks practical standardized test methods to evaluate key optical performance characteristics (OPCs), in particular the geometric distortion caused by fisheye lens effects in clinical endoscopic systems. As a result, it has been difficult to evaluate an endoscope's image quality or assess its changes over time. The goal of this work was to identify optimal techniques for objective, quantitative characterization of distortion that are effective and not burdensome. Specifically, distortion measurements from a commercially available distortion evaluation/correction software package were compared with a custom algorithm based on a local magnification (ML) approach. Measurements were performed using a clinical gastroscope to image square grid targets. Recorded images were analyzed with the ML approach and the commercial software, and the results were used to obtain corrected images. Corrected images based on the ML approach and the software were compared. The study showed that the ML method could assess distortion patterns more accurately than the commercial software. Overall, the development of standardized test methods for characterizing distortion and other OPCs will facilitate development, clinical translation, manufacturing quality and assurance of performance during clinical use of endoscopic technologies.
Different hunting strategies select for different weights in red deer
Martínez, María; Rodríguez-Vigal, Carlos; Jones, Owen R; Coulson, Tim; Miguel, Alfonso San
2005-01-01
Much insight can be derived from records of shot animals. Most researchers using such data assume that their data represents a random sample of a particular demographic class. However, hunters typically select a non-random subset of the population and hunting is, therefore, not a random process. Here, with red deer (Cervus elaphus) hunting data from a ranch in Toledo, Spain, we demonstrate that data collection methods have a significant influence upon the apparent relationship between age and weight. We argue that a failure to correct for such methodological bias may have significant consequences for the interpretation of analyses involving weight or correlated traits such as breeding success, and urge researchers to explore methods to identify and correct for such bias in their data. PMID:17148205
Grinde, Kelsey E.; Arbet, Jaron; Green, Alden; O'Connell, Michael; Valcarcel, Alessandra; Westra, Jason; Tintle, Nathan
2017-01-01
To date, gene-based rare variant testing approaches have focused on aggregating information across sets of variants to maximize statistical power in identifying genes showing significant association with diseases. Beyond identifying genes that are associated with diseases, the identification of causal variant(s) in those genes and estimation of their effect is crucial for planning replication studies and characterizing the genetic architecture of the locus. However, we illustrate that straightforward single-marker association statistics can suffer from substantial bias introduced by conditioning on gene-based test significance, due to the phenomenon often referred to as “winner's curse.” We illustrate the ramifications of this bias on variant effect size estimation and variant prioritization/ranking approaches, outline parameters of genetic architecture that affect this bias, and propose a bootstrap resampling method to correct for this bias. We find that our correction method significantly reduces the bias due to winner's curse (average two-fold decrease in bias, p < 2.2 × 10−6) and, consequently, substantially improves mean squared error and variant prioritization/ranking. The method is particularly helpful in adjustment for winner's curse effects when the initial gene-based test has low power and for relatively more common, non-causal variants. Adjustment for winner's curse is recommended for all post-hoc estimation and ranking of variants after a gene-based test. Further work is necessary to continue seeking ways to reduce bias and improve inference in post-hoc analysis of gene-based tests under a wide variety of genetic architectures. PMID:28959274
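The bootstrap correction described above lends itself to a compact sketch. The code below is an illustration, not the authors' implementation: it assumes a quantitative trait, uses a burden-style statistic as the gene-based test, and applies a standard bootstrap bias correction conditional on that test reaching significance; all function names and the 0.05 threshold are invented for the example.

```python
import numpy as np
from scipy import stats

def single_variant_betas(y, G):
    # Marginal least-squares slope of phenotype y on each variant column of G.
    Gc = G - G.mean(axis=0)
    yc = y - y.mean()
    den = (Gc ** 2).sum(axis=0)
    return (Gc * yc[:, None]).sum(axis=0) / np.where(den > 0, den, np.nan)

def winners_curse_corrected(y, G, alpha=0.05, n_boot=500, seed=0):
    """Bootstrap correction of per-variant effect estimates conditional on a
    significant burden-style gene-based test (a sketch of the ascertainment).
    Assumes variants are common enough that resampled burdens vary."""
    rng = np.random.default_rng(seed)
    beta_obs = single_variant_betas(y, G)
    cond = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        yb, Gb = y[idx], G[idx]
        _, p = stats.pearsonr(Gb.sum(axis=1), yb)   # burden test on the resample
        if p < alpha:                               # keep only "discovered" resamples
            cond.append(single_variant_betas(yb, Gb))
    if not cond:
        return beta_obs                             # no significant resample: no correction
    bias = np.nanmean(cond, axis=0) - beta_obs      # winner's-curse bias estimate
    return beta_obs - bias
```

Conditioning the bootstrap replicates on the same significance filter as the original discovery is what lets the resampling mimic, and then subtract, the selection-induced inflation.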
A method to correct coordinate distortion in EBSD maps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Y.B., E-mail: yubz@dtu.dk; Elbrønd, A.; Lin, F.X.
2014-10-15
Drift during electron backscatter diffraction mapping leads to coordinate distortions in the resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method, thin plate spline, is introduced and tested to correct such coordinate distortions in the maps after the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method are discussed in detail. By comparing with other correction methods, it is shown that the thin plate spline method is the most efficient at correcting different local distortions in electron backscatter diffraction maps. Highlights: • A new method is suggested to correct nonlinear spatial distortion in EBSD maps. • The method corrects EBSD maps more precisely than presently available methods. • Errors less than 1–2 pixels are typically obtained. • Direct quantitative analysis of dynamic data is available after this correction.
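As a rough illustration of the idea (not the paper's code), a thin plate spline mapping can be fitted with SciPy's RBFInterpolator from a handful of control points whose true positions are known, and then applied to every pixel coordinate of the map; the control-point coordinates below are invented.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical control points: feature positions in the distorted EBSD map
# and their true positions from an undistorted reference image.
distorted_pts = np.array([[10., 12.], [200., 15.], [15., 180.], [210., 190.], [100., 95.]])
true_pts      = np.array([[ 8., 10.], [202., 11.], [12., 184.], [215., 186.], [ 99., 93.]])

# Thin plate spline mapping from distorted to true coordinates.
tps = RBFInterpolator(distorted_pts, true_pts, kernel='thin_plate_spline')

# Correct every pixel coordinate of a (H, W) orientation map.
H, W = 256, 256
yy, xx = np.mgrid[0:H, 0:W]
coords = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
corrected = tps(coords).reshape(H, W, 2)   # corrected (x, y) for each pixel
```

Because the spline interpolates the control points exactly while varying smoothly between them, it can absorb different local distortions in one fit, which is the property the abstract credits for the method's efficiency.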
Zhao, S M; Leach, J; Gong, L Y; Ding, J; Zheng, B Y
2012-01-02
The effect of atmospheric turbulence on light's spatial structure compromises the information capacity of photons carrying Orbital Angular Momentum (OAM) in free-space optical (FSO) communications. In this paper, we study two aberration correction methods to mitigate this effect. The first is the Shack-Hartmann wavefront correction method, which is based on the Zernike polynomials, and the second is a phase correction method specific to OAM states. Our numerical results show that the phase correction method for OAM states outperforms the Shack-Hartmann wavefront correction method, although both methods significantly improve the purity of a single OAM state and the channel capacity of the FSO communication link. At the same time, our experimental results show that the values of the participation functions decrease under the phase correction method for OAM states, i.e., the correction method effectively mitigates the adverse effect of atmospheric turbulence.
Statistical tests and identifiability conditions for pooling and analyzing multisite datasets
Zhou, Hao Henry; Singh, Vikas; Johnson, Sterling C.; Wahba, Grace
2018-01-01
When sample sizes are small, the ability to identify weak (but scientifically interesting) associations between a set of predictors and a response may be enhanced by pooling existing datasets. However, variations in acquisition methods and the distribution of participants or observations between datasets, especially due to the distributional shifts in some predictors, may obfuscate real effects when datasets are combined. We present a rigorous statistical treatment of this problem and identify conditions where we can correct the distributional shift. We also provide an algorithm for the situation where the correction is identifiable. We analyze various properties of the framework for testing model fit, constructing confidence intervals, and evaluating consistency characteristics. Our technical development is motivated by Alzheimer’s disease (AD) studies, and we present empirical results showing that our framework enables harmonizing of protein biomarkers, even when the assays across sites differ. Our contribution may, in part, mitigate a bottleneck that researchers face in clinical research when pooling smaller sized datasets and may offer benefits when the subjects of interest are difficult to recruit or when resources prohibit large single-site studies. PMID:29386387
Dual-energy-based metal segmentation for metal artifact reduction in dental computed tomography.
Hegazy, Mohamed A A; Eldib, Mohamed Elsayed; Hernandez, Daniel; Cho, Myung Hye; Cho, Min Hyoung; Lee, Soo Yeol
2018-02-01
In a dental CT scan, the presence of dental fillings or dental implants generates severe metal artifacts that often compromise readability of the CT images. Many metal artifact reduction (MAR) techniques have been introduced, but dental CT scans still suffer from severe metal artifacts, particularly when multiple dental fillings or implants exist around the region of interest. The high attenuation coefficient of teeth often causes erroneous metal segmentation, compromising the MAR performance. We propose a metal segmentation method for dental CT that is based on dual-energy imaging with a narrow energy gap. Unlike a conventional dual-energy CT, we acquire two projection data sets at two close tube voltages (80 and 90 kVp), and then we compute the difference image between the two projection images with an optimized weighting factor so as to maximize the contrast of the metal regions. We reconstruct CT images from the weighted difference image to identify the metal region with global thresholding. We forward project the identified metal region to designate the metal trace on the projection image. We substitute the pixel values on the metal trace with the ones computed by the region filling method. The region filling in the metal trace removes high-intensity data made by the metallic objects from the projection image. We reconstruct final CT images from the region-filled projection image with the fusion-based approach. We have done imaging experiments on a dental phantom and a human skull phantom using a lab-built micro-CT and a commercial dental CT system. We have corrected the projection images of a dental phantom and a human skull phantom using the single-energy and dual-energy-based metal segmentation methods. The single-energy-based method often failed in correcting the metal artifacts on the slices on which tooth enamel exists. The dual-energy-based method showed better MAR performance in all cases regardless of the presence of tooth enamel on the slice of interest. We have compared the MAR performances between both methods in terms of the relative error (REL), the sum of squared difference (SSD) and the normalized absolute difference (NAD). For the dental phantom images corrected by the single-energy-based method, the metric values were 95.3%, 94.5%, and 90.6%, respectively, while they were 90.1%, 90.05%, and 86.4%, respectively, for the images corrected by the dual-energy-based method. For the human skull phantom images, the metric values were improved from 95.6%, 91.5%, and 89.6%, respectively, to 88.2%, 82.5%, and 81.3%, respectively. The proposed dual-energy-based method has shown better performance in metal segmentation, leading to better MAR performance in dental imaging. We expect the proposed metal segmentation method can be used to improve the MAR performance of existing MAR techniques that have metal segmentation steps in their correction procedures.
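The central dual-energy step, forming a weighted difference of the two projections so that metal dominates, can be sketched on synthetic data. This is a toy numpy illustration, not the authors' pipeline: the attenuation values and ROIs are invented, and the weight is chosen here by nulling the enamel signal, a simple stand-in for the contrast-maximizing weight described above.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 80/90 kVp projections (arbitrary units, values invented):
p80 = rng.normal(1.0, 0.02, (128, 128))
p90 = rng.normal(0.9, 0.02, (128, 128))
p80[40:60, 40:60] += 2.0; p90[40:60, 40:60] += 1.9   # enamel: weak energy dependence
p80[80:90, 80:90] += 4.0; p90[80:90, 80:90] += 3.0   # metal: strong energy dependence

bg     = (slice(0, 30),  slice(0, 30))    # assumed-known calibration ROIs
enamel = (slice(40, 60), slice(40, 60))

# Weight that cancels the enamel signal (above background) in the difference.
w = (p80[enamel].mean() - p80[bg].mean()) / (p90[enamel].mean() - p90[bg].mean())
diff = p80 - w * p90

# Global threshold: pixels far above the background level are called metal.
metal_mask = diff > diff[bg].mean() + 5 * diff[bg].std()
# The mask would then be forward-projected to mark the metal trace, and the
# trace in-filled before the final reconstruction, per the method above.
```

The point of the narrow energy gap is visible here: enamel, whose attenuation changes little between 80 and 90 kVp, cancels in the weighted difference, while strongly energy-dependent metal survives thresholding.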
Analytical Approach to the Fuel Optimal Impulsive Transfer Problem Using Primer Vector Method
NASA Astrophysics Data System (ADS)
Fitrianingsih, E.; Armellin, R.
2018-04-01
One of the objectives of mission design is selecting an optimum orbital transfer, which often translates to a transfer that requires minimum propellant consumption. In order to assure that the selected trajectory meets the requirement, the optimality of the transfer should first be analyzed, either by directly calculating the ΔV of the candidate trajectories and selecting the one that gives a minimum value, or by evaluating the trajectory according to certain criteria of optimality. The second method is performed by analyzing the profile of the modulus of the thrust direction vector, which is known as the primer vector. Both methods come with their own advantages and disadvantages. However, it is possible to use the primer vector method to verify whether the result from the direct method is truly optimal or whether the ΔV can be reduced further by implementing a correction maneuver to the reference trajectory. In addition to its capability to evaluate the transfer optimality without the need to calculate the transfer ΔV, the primer vector also enables us to identify the time and position at which to apply a correction maneuver in order to optimize a non-optimum transfer. This paper presents the analytical approach to the fuel-optimal impulsive transfer using the primer vector method. The validity of the method is confirmed by comparing the results to those from the numerical method. The investigation of the optimality of direct transfers is used to give an example of the application of the method. The case under study is the prograde elliptic transfer from Earth to Mars. The study enables us to identify the optimality of all the possible transfers.
Developing a quality assurance program for online services.
Humphries, A W; Naisawald, G V
1991-01-01
A quality assurance (QA) program provides not only a mechanism for establishing training and competency standards, but also a method for continuously monitoring current service practices to correct shortcomings. The typical QA cycle includes these basic steps: select subject for review, establish measurable standards, evaluate existing services using the standards, identify problems, implement solutions, and reevaluate services. The Claude Moore Health Sciences Library (CMHSL) developed a quality assurance program for online services designed to evaluate services against specific criteria identified by research studies as being important to customer satisfaction. These criteria include reliability, responsiveness, approachability, communication, and physical factors. The application of these criteria to the library's existing online services in the quality review process is discussed with specific examples of the problems identified in each service area, as well as the solutions implemented to correct deficiencies. The application of the QA cycle to an online services program serves as a model of possible interventions. The use of QA principles to enhance online service quality can be extended to other library service areas. PMID:1909197
Identifying and Correcting Timing Errors at Seismic Stations in and around Iran
Syracuse, Ellen Marie; Phillips, William Scott; Maceira, Monica; ...
2017-09-06
A fundamental component of seismic research is the use of phase arrival times, which are central to event location, Earth model development, and phase identification, as well as derived products. Hence, the accuracy of arrival times is crucial. However, errors in the timing of seismic waveforms, and in the arrival times based on them, may go unidentified by the end user, particularly when seismic data are shared between different organizations. Here, we present a method used to analyze travel-time residuals for stations in and around Iran to identify time periods that are likely to contain station timing problems. For the 14 stations with the strongest evidence of timing errors lasting one month or longer, timing corrections are proposed to address the problematic time periods. Finally, two additional stations are identified with incorrect locations in the International Registry of Seismograph Stations, and one is found to have erroneously reported arrival times in 2011.
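A bare-bones version of such residual screening might look as follows; this is our illustration rather than the authors' code, and the window length, minimum count, and threshold are arbitrary.

```python
import numpy as np

def flag_timing_errors(times, residuals, window_days=30.0, thresh_s=1.0):
    """Flag windows whose median travel-time residual departs from the
    station baseline, suggesting a clock error (a simplified sketch).
    times: event times in days; residuals: observed-minus-predicted (s)."""
    order = np.argsort(times)
    t, r = np.asarray(times)[order], np.asarray(residuals)[order]
    baseline = np.median(r)
    edges = np.arange(t[0], t[-1] + window_days, window_days)
    flags = []
    for a, b in zip(edges[:-1], edges[1:]):
        sel = (t >= a) & (t < b)
        if sel.sum() < 5:                 # skip poorly sampled windows
            continue
        med = np.median(r[sel])
        if abs(med - baseline) > thresh_s:
            flags.append((a, b, baseline - med))   # proposed correction (s)
    return flags

# Synthetic example: a +2.5 s clock error between days 120 and 180.
t = np.linspace(0, 365, 2000)
r = np.where((t > 120) & (t < 180), 2.5, 0.0)
r += np.random.default_rng(0).normal(0, 0.3, t.size)
print(flag_timing_errors(t, r))   # flags that window with a ~-2.5 s correction
```

Using the median keeps the window statistic robust to the outliers and mislocations that also show up in real residual data.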
DNA-based technology helps people solve problems. It can be used to correctly match organ donors with recipients, identify victims of natural and man-made disasters, and detect bacteria and other organisms that may pollute air, soil, food, or water.
Christian R. Mora; Laurence R. Schimleck; Fikret Isik; Jerry M. Mahon Jr.; Alexander Clark III; Richard F. Daniels
2009-01-01
Acoustic tools are increasingly used to estimate standing-tree (dynamic) stiffness; however, such techniques overestimate static stiffness, the standard measurement for determining modulus of elasticity (MOE) of wood. This study aimed to identify correction methods for standing-tree estimates making dynamic and static stiffness comparable. Sixty Pinus taeda L...
Artifacts and power corrections: Reexamining Z_ψ(p²) and Z_V in the momentum-subtraction scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boucaud, Ph.; Leroy, J. P.; Le Yaouanc, A.
2006-08-01
The next-to-leading-order (NLO) term in the operator product expansion (OPE) of the quark propagator vector part Z_ψ and the vertex function g_1 of the vector current in the Landau gauge should be dominated by the same condensate as in the gluon propagator. On the other hand, the perturbative part has been calculated to a very high precision thanks to Chetyrkin and collaborators. We test this on the lattice, with both clover and overlap fermion actions at β = 6.0, 6.4, 6.6, 6.8. Elucidation of discretization artifacts appears to be absolutely crucial. First, hypercubic artifacts are eliminated by a powerful method, which gives results notably different from the standard democratic method. Then, the presence of unexpected, very large, nonperturbative, O(4)-symmetric discretization artifacts, increasing towards small momenta, is demonstrated by considering Z_V^MOM, which should be constant in the absence of such artifacts. They impede in general the analysis of the OPE. However, in two special cases with the overlap action, (1) for Z_ψ and (2) for g_1 (the latter only at large p²), we are able to identify the condensate; it agrees with the one resulting from gluonic Green functions. We conclude that the OPE analysis of quark and gluon Green functions has reached a quite consistent status, and that the power corrections have been correctly identified. A practical consequence of the whole analysis is that the renormalization constant Z_ψ (= Z_2^{-1} of the momentum-subtraction (MOM) scheme) may differ sizably from the one given by democratic selection methods. More generally, the values of the renormalization constants may be seriously affected by the differences in the treatment of the various types of artifacts, and by the subtraction of power corrections.
NASA Astrophysics Data System (ADS)
Eum, H. I.; Cannon, A. J.
2015-12-01
Climate models are a key tool for investigating the impacts of projected future climate conditions on regional hydrologic systems. However, there is a considerable mismatch in spatial resolution between GCMs and regional applications, particularly for a region characterized by complex terrain such as the Korean peninsula. Therefore, a downscaling procedure is essential for assessing the regional impacts of climate change. Numerous statistical downscaling methods have been used, mainly due to their computational efficiency and simplicity. In this study, four statistical downscaling methods [Bias-Correction/Spatial Disaggregation (BCSD), Bias-Correction/Constructed Analogue (BCCA), Multivariate Adaptive Constructed Analogs (MACA), and Bias-Correction/Climate Imprint (BCCI)] are applied to downscale the latest Climate Forecast System Reanalysis (CFSR) data to stations for precipitation, maximum temperature, and minimum temperature over South Korea. Using a split-sampling scheme, all methods are calibrated with observational station data for the 19 years from 1973 to 1991 and tested on the recent 19 years from 1992 to 2010. To assess the skill of the downscaling methods, we construct a comprehensive suite of performance metrics that measure the ability to reproduce temporal correlation, distribution, spatial correlation, and extreme events. In addition, we employ the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to identify robust statistical downscaling methods based on the performance metrics for each season. The results show that downscaling skill is considerably affected by the skill of CFSR, and all methods lead to large improvements in all performance metrics. When TOPSIS is applied to the seasonal performance metrics, MACA is identified as the most reliable and robust method for all variables and seasons. Note that this result is derived from CFSR output, which is regarded as near-perfect climate data in climate studies; the ranking may therefore change when various GCMs are downscaled and evaluated. Nevertheless, it may be informative for end-users (i.e., modelers or water resources managers) seeking to understand and select downscaling methods suited to the priorities of their regional applications.
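TOPSIS itself is compact enough to sketch. The implementation below follows the standard formulation (vector normalization, weighted distances to the ideal and anti-ideal alternatives); the skill scores and equal weights in the example are invented placeholders, not values from the study.

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) with TOPSIS.
    X: (m, n) performance matrix; weights: (n,); benefit: bool per criterion."""
    X = np.asarray(X, float)
    V = (X / np.linalg.norm(X, axis=0)) * np.asarray(weights, float)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))   # best per criterion
    nadir = np.where(benefit, V.min(axis=0), V.max(axis=0))   # worst per criterion
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - nadir, axis=1)
    return d_worst / (d_best + d_worst)   # closeness: higher is better

# Hypothetical seasonal skill scores over three metrics (placeholders only).
scores = topsis([[0.82, 0.75, 0.70],   # BCSD
                 [0.78, 0.80, 0.72],   # BCCA
                 [0.88, 0.83, 0.79],   # MACA
                 [0.84, 0.78, 0.74]],  # BCCI
                weights=[1/3, 1/3, 1/3], benefit=[True, True, True])
print(scores)   # the highest closeness identifies the most robust method
```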
Assessment of statistical methods used in library-based approaches to microbial source tracking.
Ritter, Kerry J; Carruthers, Ethan; Carson, C Andrew; Ellender, R D; Harwood, Valerie J; Kingsley, Kyle; Nakatsu, Cindy; Sadowsky, Michael; Shear, Brian; West, Brian; Whitlock, John E; Wiggins, Bruce A; Wilbur, Jayson D
2003-12-01
Several commonly used statistical methods for fingerprint identification in microbial source tracking (MST) were examined to assess the effectiveness of pattern-matching algorithms to correctly identify sources. Although numerous statistical methods have been employed for source identification, no widespread consensus exists as to which is most appropriate. A large-scale comparison of several MST methods, using identical fecal sources, presented a unique opportunity to assess the utility of several popular statistical methods. These included discriminant analysis, nearest neighbour analysis, maximum similarity and average similarity, along with several measures of distance or similarity. Threshold criteria for excluding uncertain or poorly matched isolates from final analysis were also examined for their ability to reduce false positives and increase prediction success. Six independent libraries used in the study were constructed from indicator bacteria isolated from fecal materials of humans, seagulls, cows and dogs. Three of these libraries were constructed using the rep-PCR technique and three relied on antibiotic resistance analysis (ARA). Five of the libraries were constructed using Escherichia coli and one using Enterococcus spp. (ARA). Overall, the outcome of this study suggests a high degree of variability across statistical methods. Despite large differences in correct classification rates among the statistical methods, no single statistical approach emerged as superior. Thresholds failed to consistently increase rates of correct classification and improvement was often associated with substantial effective sample size reduction. Recommendations are provided to aid in selecting appropriate analyses for these types of data.
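For concreteness, here is a minimal sketch of one family of approaches examined, average similarity with a threshold for excluding poorly matched isolates, assuming Pearson correlation as the similarity measure between fingerprints; the data in the usage example are random stand-ins.

```python
import numpy as np

def classify_isolate(fp, library, threshold=0.6):
    """Assign an isolate fingerprint to the source whose library isolates
    are most similar on average; return None below the threshold.
    fp: (k,) fingerprint; library: {source: (n_i, k) array}."""
    def sim(a, b):
        return np.corrcoef(a, b)[0, 1]   # Pearson correlation similarity
    best, best_score = None, -np.inf
    for source, mat in library.items():
        score = np.mean([sim(fp, row) for row in mat])
        if score > best_score:
            best, best_score = source, score
    return best if best_score >= threshold else None   # exclude poor matches

rng = np.random.default_rng(0)
library = {"human": rng.random((20, 30)), "gull": rng.random((15, 30))}
print(classify_isolate(rng.random(30), library))   # likely None at this threshold
```

As the study found, the threshold trades off false positives against effective sample size: raising it discards uncertain isolates, which does not necessarily raise the correct classification rate.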
A velocity-correction projection method based immersed boundary method for incompressible flows
NASA Astrophysics Data System (ADS)
Cai, Shanggui
2014-11-01
In the present work we propose a novel direct forcing immersed boundary method based on the velocity-correction projection method of [J.L. Guermond, J. Shen, Velocity-correction projection methods for incompressible flows, SIAM J. Numer. Anal., 41(1) (2003) 112]. The principal idea of the immersed boundary method is to correct the velocity in the vicinity of the immersed object by using an artificial force to mimic the presence of the physical boundaries. Therefore, the velocity-correction projection method is preferred to its pressure-correction counterpart in the present work. Since the velocity-correction projection method is considered a dual of the pressure-correction method, the proposed method can also be interpreted as follows: first the pressure is predicted by treating the viscous term explicitly without consideration of the immersed boundary, and the solenoidal velocity is used to determine the volume force on the Lagrangian points; then the no-slip boundary condition is enforced by correcting the velocity with the implicit viscous term. To demonstrate the efficiency and accuracy of the proposed method, several numerical simulations are performed and compared with results in the literature.
Reynolds, Timothy M; Twomey, Patrick J
2007-01-01
Aims To evaluate the impact of different equations for calculation of estimated glomerular filtration rate (eGFR) on general practitioner (GP) workload. Methods Retrospective evaluation of routine workload data from a district general hospital chemical pathology laboratory serving a GP patient population of approximately 250 000. The most recent serum creatinine result from 80 583 patients was identified and used for the evaluation. eGFR was calculated using one of three different variants of the four‐parameter Modification of Diet in Renal Disease (MDRD) equation. Results The original MDRD equation (eGFR186) and the modified equation with assay‐specific data (eGFR175corrected) both identified similar numbers of patients with stage 4 and stage 5 chronic kidney disease (ChKD), but the modified equation without assay specific data (eGFR175) resulted in a significant increase in stage 4 ChKD. For stage 3 ChKD the eGFR175 identified 28.69% of the population, the eGFR186 identified 21.35% of the population and the eGFR175corrected identified 13.6% of the population. Conclusions Depending on the choice of equation there can be very large changes in the proportions of patients identified with the different stages of ChKD. Given that according to the General Medical Services Quality Framework, all patients with ChKD stages 3–5 should be included on a practice renal registry, and receive relevant drug therapy, this could have significant impacts on practice workload and drug budgets. It is essential that practices work with their local laboratories. PMID:17761741
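For reference, the four-parameter MDRD variants discussed here differ in the leading coefficient (186 for the original equation, 175 for IDMS-aligned creatinine assays); the sketch below shows how that choice alone can move a patient across the eGFR < 60 boundary that defines stage 3 ChKD. The assay-specific creatinine adjustment underlying eGFR175corrected is not modeled, and the example patient is invented.

```python
def egfr_mdrd(scr_mg_dl, age_years, female, black, coefficient=186.0):
    """Four-variable MDRD eGFR in mL/min/1.73 m^2.

    coefficient=186 reproduces the original equation; 175 is used when
    creatinine is IDMS-aligned."""
    egfr = coefficient * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# The same creatinine result straddles the stage-3 cutoff (eGFR < 60)
# depending on the equation chosen:
print(round(egfr_mdrd(0.95, 65, True, False, 186.0), 1))  # ~62.7: above the cutoff
print(round(egfr_mdrd(0.95, 65, True, False, 175.0), 1))  # ~59.0: stage 3 ChKD
```

Small shifts of this kind, multiplied over a whole practice population, are what drive the large differences in stage 3 prevalence reported above.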
Adriaens, E; Guest, R; Willoughby, J A; Fochtman, P; Kandarova, H; Verstraelen, S; Van Rompay, A R
2018-06-01
Assessment of ocular irritancy is an international regulatory requirement in the safety evaluation of industrial and consumer products. Although many in vitro ocular irritation assays exist, alone they are incapable of fully categorizing chemicals. The objective of the CEFIC-LRI-AIMT6-VITO CON4EI (CONsortium for in vitro Eye Irritation testing strategy) project was to develop tiered testing strategies for eye irritation assessment that can lead to complete replacement of the in vivo Draize rabbit eye test (OECD TG 405). A set of 80 reference chemicals was tested with seven test methods, one of which was the Slug Mucosal Irritation (SMI) test method. The method measures the amount of mucus produced (MP) during a single 1-hour contact with a 1% and a 10% dilution of the chemical. Based on the total MP, a classification (Cat 1, Cat 2, or No Cat) is predicted. The SMI test method correctly identified 65.8% of the Cat 1 chemicals with a specificity of 90.5% (a low over-prediction rate for in vivo Cat 2 and No Cat chemicals). Mispredictions were predominantly unidirectional towards lower classifications, with 26.7% of the liquids and 40% of the solids being underpredicted. In general, the performance was better for liquids than for solids, with respectively 76.5% vs 57.1% (Cat 1), 61.5% vs 50% (Cat 2), and 87.5% vs 85.7% (No Cat) being identified correctly.
Poster — Thur Eve — 72: Clinical Subtleties of Flattening-Filter-Free Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corns, Robert; Thomas, Steven; Huang, Vicky
2014-08-15
Flattening-filter-free (fff) beams offer superior dose rates, reducing treatment times for important techniques that utilize small field sizes, such as stereotactic ablative radiotherapy (SABR). The impact of ion collection efficiency (P_ion) on the percent depth dose (PDD) has been discussed at length in the literature. Relative corrections of the order of 1%–2% are possible. In the process of commissioning 6fff and 10fff beams, we identified a number of other important details that influence commissioning. We looked at the absolute dose difference between corrected and uncorrected PDD. We discovered a curve with a broad maximum between 10 and 20 cm. We wondered about the consequences of this PDD correction on the absolute dose calibration of the linac, because the TG-51 protocol does not correct the PDD curve. The quality factor k_Q depends on the PDD, so in principle, a correction to the PDD will alter the absolute calibration of the linac. Finally, there are other clinical tables, such as TMR, which are derived from PDD. Attention to the details of how this computation is performed is important because different corrections are possible depending on the method of calculation.
Automatic evidence retrieval for systematic reviews.
Choong, Miew Keen; Galgani, Filippo; Dunn, Adam G; Tsafnat, Guy
2014-10-01
Snowballing involves recursively pursuing relevant references cited in the retrieved literature and adding them to the search results. Snowballing is an alternative approach to discover additional evidence that was not retrieved through conventional search. Snowballing's effectiveness makes it best practice in systematic reviews despite being time-consuming and tedious. Our goal was to evaluate an automatic method for citation snowballing's capacity to identify and retrieve the full text and/or abstracts of cited articles. Using 20 review articles that contained 949 citations to journal or conference articles, we manually searched Microsoft Academic Search (MAS) and identified 78.0% (740/949) of the cited articles that were present in the database. We compared the performance of the automatic citation snowballing method against the results of this manual search, measuring precision, recall, and F1 score. The automatic method was able to correctly identify 633 (as proportion of included citations: recall=66.7%, F1 score=79.3%; as proportion of citations in MAS: recall=85.5%, F1 score=91.2%) of citations with high precision (97.7%), and retrieved the full text or abstract for 490 (recall=82.9%, precision=92.1%, F1 score=87.3%) of the 633 correctly retrieved citations. The proposed method for automatic citation snowballing is accurate and is capable of obtaining the full texts or abstracts for a substantial proportion of the scholarly citations in review articles. By automating the process of citation snowballing, it may be possible to reduce the time and effort of common evidence surveillance tasks such as keeping trial registries up to date and conducting systematic reviews.
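The reported metrics follow directly from the counts in the abstract; the quick check below back-calculates the false-positive count (about 15) from the reported 97.7% precision, since it is not stated explicitly.

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from true/false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# 633 correct identifications; 949 included citations, 740 of them in MAS.
print(prf(tp=633, fp=15, fn=949 - 633))   # ~ (0.977, 0.667, 0.793)
print(prf(tp=633, fp=15, fn=740 - 633))   # ~ (0.977, 0.855, 0.912)
```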
Method of wavefront tilt correction for optical heterodyne detection systems under strong turbulence
NASA Astrophysics Data System (ADS)
Xiang, Jing-song; Tian, Xin; Pan, Le-chun
2014-07-01
Atmospheric turbulence decreases the heterodyne mixing efficiency of optical heterodyne detection systems. Wavefront tilt correction is often used to improve the optical heterodyne mixing efficiency, but the performance of traditional centroid tracking tilt correction is poor under strong turbulence conditions. In this paper, a tilt correction method that tracks the peak value of the laser spot on the focal plane is proposed. Simulation results show that, under strong turbulence conditions, the performance of peak value tracking tilt correction is distinctly better than that of the traditional centroid tracking tilt correction method, and the phenomenon of a large antenna performing worse than a small antenna, which may occur with the centroid tracking method, is also avoided with the peak value tracking method.
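The proposed estimator is simple to state: track the brightest focal-plane pixel rather than the intensity centroid, and convert the offset into tilt angles. A minimal numpy sketch of both estimators, with all optical parameters hypothetical:

```python
import numpy as np

def tilt_peak(img, pixel_pitch, focal_length):
    """Tilt estimate from the peak pixel of the focal-plane spot image (rad)."""
    ny, nx = img.shape
    iy, ix = np.unravel_index(np.argmax(img), img.shape)
    dx = (ix - (nx - 1) / 2) * pixel_pitch   # offset from the optical axis (m)
    dy = (iy - (ny - 1) / 2) * pixel_pitch
    return np.arctan2(dx, focal_length), np.arctan2(dy, focal_length)

def tilt_centroid(img, pixel_pitch, focal_length):
    """Traditional centroid estimate, for comparison; under strong turbulence
    scattered speckle biases this estimate, which motivates peak tracking."""
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    w = img / img.sum()
    dx = ((xx * w).sum() - (nx - 1) / 2) * pixel_pitch
    dy = ((yy * w).sum() - (ny - 1) / 2) * pixel_pitch
    return np.arctan2(dx, focal_length), np.arctan2(dy, focal_length)
```

Under strong turbulence the spot breaks into speckles; the centroid averages over all of them, while the peak follows the dominant lobe, which is why peak tracking holds up better in the simulations described above.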
SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Elder, E; Roper, J
2015-06-15
Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts, but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality, but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: We propose an adaptive dual energy calibration method to correct for metal artifacts. ADEC is evaluated with the Shepp-Logan phantom, and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.
Marcotte, Thomas D.; Deutsch, Reena; Michael, Benedict Daniel; Franklin, Donald; Cookson, Debra Rosario; Bharti, Ajay R.; Grant, Igor; Letendre, Scott L.
2013-01-01
Background Neurocognitive (NC) impairment (NCI) occurs commonly in people living with HIV. Despite substantial effort, no biomarkers have been sufficiently validated for diagnosis and prognosis of NCI in the clinic. The goal of this project was to identify diagnostic or prognostic biomarkers for NCI in a comprehensively characterized HIV cohort. Methods Multidisciplinary case review selected 98 HIV-infected individuals and categorized them into four NC groups using normative data: stably normal (SN), stably impaired (SI), worsening (Wo), or improving (Im). All subjects underwent comprehensive NC testing, phlebotomy, and lumbar puncture at two timepoints separated by a median of 6.2 months. Eight biomarkers were measured in CSF and blood by immunoassay. Results were analyzed using mixed model linear regression and staged recursive partitioning. Results At the first visit, subjects were mostly middle-aged (median 45) white (58%) men (84%) who had AIDS (70%). Of the 73% who took antiretroviral therapy (ART), 54% had HIV RNA levels below 50 c/mL in plasma. Mixed model linear regression identified that only MCP-1 in CSF was associated with neurocognitive change group. Recursive partitioning models aimed at diagnosis (i.e., correctly classifying neurocognitive status at the first visit) were complex and required most biomarkers to achieve misclassification limits. In contrast, prognostic models were more efficient. A combination of three biomarkers (sCD14, MCP-1, SDF-1α) correctly classified 82% of Wo and SN subjects, including 88% of SN subjects. A combination of two biomarkers (MCP-1, TNF-α) correctly classified 81% of Im and SI subjects, including 100% of SI subjects. Conclusions This analysis of well-characterized individuals identified concise panels of biomarkers associated with NC change. Across all analyses, the two most frequently identified biomarkers were sCD14 and MCP-1, indicators of monocyte/macrophage activation. While the panels differed depending on the outcome and on the degree of misclassification, nearly all stable patients were correctly classified. PMID:24101401
NASA Astrophysics Data System (ADS)
Guo, Haotian; Duan, Fajie; Zhang, Jilong
2016-01-01
Blade tip-timing is the most effective method for online measurement of blade vibration in turbomachinery. In this article, a tip-timing-based method for measuring synchronous resonance vibration of blades is presented. The method requires no once-per-revolution sensor, which makes it more generally applicable in conditions where such a sensor is difficult to install, especially for the high-pressure rotors of dual-rotor engines. Only three casing-mounted probes are required to identify the engine order, amplitude, natural frequency, and damping coefficient of the blade. A method is developed to identify the blade to which a tip-timing datum belongs without a once-per-revolution sensor. Theoretical analyses of resonance parameter measurement are presented. The theoretical error of the method is investigated and corrected. Experiments are conducted, and the results indicate that blade resonance parameter identification is achieved without a once-per-revolution sensor.
Automatic Evidence Retrieval for Systematic Reviews
Choong, Miew Keen; Galgani, Filippo; Dunn, Adam G
2014-01-01
Background Snowballing involves recursively pursuing relevant references cited in the retrieved literature and adding them to the search results. Snowballing is an alternative approach to discover additional evidence that was not retrieved through conventional search. Snowballing’s effectiveness makes it best practice in systematic reviews despite being time-consuming and tedious. Objective Our goal was to evaluate an automatic method for citation snowballing’s capacity to identify and retrieve the full text and/or abstracts of cited articles. Methods Using 20 review articles that contained 949 citations to journal or conference articles, we manually searched Microsoft Academic Search (MAS) and identified 78.0% (740/949) of the cited articles that were present in the database. We compared the performance of the automatic citation snowballing method against the results of this manual search, measuring precision, recall, and F1 score. Results The automatic method was able to correctly identify 633 (as proportion of included citations: recall=66.7%, F1 score=79.3%; as proportion of citations in MAS: recall=85.5%, F1 score=91.2%) of citations with high precision (97.7%), and retrieved the full text or abstract for 490 (recall=82.9%, precision=92.1%, F1 score=87.3%) of the 633 correctly retrieved citations. Conclusions The proposed method for automatic citation snowballing is accurate and is capable of obtaining the full texts or abstracts for a substantial proportion of the scholarly citations in review articles. By automating the process of citation snowballing, it may be possible to reduce the time and effort of common evidence surveillance tasks such as keeping trial registries up to date and conducting systematic reviews. PMID:25274020
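For orientation, the metrics reported above follow directly from the stated counts. A minimal sketch (Python) checking the arithmetic; the total number of retrieved citations is inferred from the reported precision, everything else comes from the abstract:

```python
# Reproduce the precision/recall/F1 relationships from the stated counts:
# 949 citations in the 20 reviews, 740 present in MAS, 633 correctly found.

def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

cited_total = 949      # citations in the 20 review articles
in_mas = 740           # cited articles found manually in MAS
correct = 633          # citations the automatic method identified correctly

precision = 0.977                   # reported precision
recall_all = correct / cited_total  # recall over all citations
recall_mas = correct / in_mas       # recall over citations present in MAS

print(f"recall (all citations): {recall_all:.3f}, F1: {f1(precision, recall_all):.3f}")
print(f"recall (in MAS):        {recall_mas:.3f}, F1: {f1(precision, recall_mas):.3f}")
# Prints approximately 0.667/0.793 and 0.855/0.912, matching the abstract.
```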
Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue
2018-01-01
One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitations of sensor nodes. Network coding can increase the network throughput of a WSN dramatically because of the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix that can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which typically pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with a high trust value. The two methods, L1 optimization and exploiting the social characteristic, coordinate with each other and can correct propagated errors even when their fraction is exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668
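The decoding step above builds on generic L1-minimization error correction. A minimal sketch of that generic technique (not the authors' secret-channel construction): recover a message from measurements hit by sparse gross errors by solving min ||y - Ax||_1 as a linear program. All names, sizes, and the random coding matrix are illustrative assumptions:

```python
# L1-minimization decoding sketch: given y = A @ x + e with sparse gross
# errors e, recover x by solving min_x ||y - A x||_1 as a linear program.
import numpy as np
from scipy.optimize import linprog

def l1_decode(A: np.ndarray, y: np.ndarray) -> np.ndarray:
    m, n = A.shape
    # Variables z = [x (n), t (m)]; minimize sum(t) subject to |y - A x| <= t.
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

rng = np.random.default_rng(0)
m, n, k = 60, 15, 8                 # measurements, message length, corrupted links
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
e = np.zeros(m)
e[rng.choice(m, k, replace=False)] = 5.0 * rng.standard_normal(k)
y = A @ x_true + e

x_hat = l1_decode(A, y)
print("max reconstruction error:", np.abs(x_hat - x_true).max())
```

Because the errors are sparse relative to the code's redundancy, the L1 program recovers the message essentially exactly; the paper's contribution is extending this regime to densely propagated errors.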
An Automated Method for Identifying Artifact in Independent Component Analysis of Resting-State fMRI
Bhaganagarapu, Kaushik; Jackson, Graeme D.; Abbott, David F.
2013-01-01
An enduring issue with data-driven analysis and filtering methods is the interpretation of results. To assist, we present an automatic method for identification of artifact in independent components (ICs) derived from functional MRI (fMRI). The method was designed with the following features: does not require temporal information about an fMRI paradigm; does not require the user to train the algorithm; requires only the fMRI images (additional acquisition of anatomical imaging not required); is able to identify a high proportion of artifact-related ICs without removing components that are likely to be of neuronal origin; can be applied to resting-state fMRI; is automated, requiring minimal or no human intervention. We applied the method to a MELODIC probabilistic ICA of resting-state functional connectivity data acquired in 50 healthy control subjects, and compared the results to a blinded expert manual classification. The method identified between 26 and 72% of the components as artifact (mean 55%). About 0.3% of components identified as artifact were discordant with the manual classification; retrospective examination of these ICs suggested the automated method had correctly identified these as artifact. We have developed an effective automated method which removes a substantial number of unwanted noisy components in ICA analyses of resting-state fMRI data. Source code of our implementation of the method is available. PMID:23847511
Non-parametric combination and related permutation tests for neuroimaging.
Winkler, Anderson M; Webster, Matthew A; Brooks, Jonathan C; Tracey, Irene; Smith, Stephen M; Nichols, Thomas E
2016-04-01
In this work, we show how permutation methods can be applied to combination analyses such as those that include multiple imaging modalities, multiple data acquisitions of the same modality, or simply multiple hypotheses on the same data. Using the well-known definition of union-intersection tests and closed testing procedures, we use synchronized permutations to correct for such multiplicity of tests, allowing flexibility to integrate imaging data with different spatial resolutions, surface and/or volume-based representations of the brain, as well as non-imaging data. For the problem of joint inference, we propose and evaluate a modification of the recently introduced non-parametric combination (NPC) methodology, such that instead of a two-phase algorithm and large data storage requirements, the inference can be performed in a single phase, with reasonable computational demands. The method compares favorably to classical multivariate tests (such as MANCOVA), even when the latter is assessed using permutations. We also evaluate, in the context of permutation tests, various combining methods that have been proposed in the past decades, and identify those that provide the best control over error rate and power across a range of situations. We show that one of these, the method of Tippett, provides a link between correction for the multiplicity of tests and their combination. Finally, we discuss how the correction can solve certain problems of multiple comparisons in one-way ANOVA designs, and how the combination is distinguished from conjunctions, even though both can be assessed using permutation tests. We also provide a common algorithm that accommodates combination and correction. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
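A minimal sketch of the Tippett (min-p) combining function under synchronized permutations, the core idea referenced above. The toy two-sample setup, statistic, and sizes are assumptions for illustration, not the paper's full NPC algorithm:

```python
# Tippett combination under synchronized permutations: the same subject
# relabelling is applied to every modality, preserving their dependence.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
n, n_mod, n_perm = 20, 3, 5000

# Toy two-sample data: group 2 is shifted in modality 0 only.
data = rng.standard_normal((2 * n, n_mod))
data[n:, 0] += 0.9
labels = np.array([0] * n + [1] * n)

def group_diff(d, lab):
    """Difference of group means; one partial-test statistic per modality."""
    return d[lab == 1].mean(axis=0) - d[lab == 0].mean(axis=0)

obs = np.abs(group_diff(data, labels))
null = np.empty((n_perm, n_mod))
for i in range(n_perm):
    perm = rng.permutation(labels)   # one relabelling shared by all modalities
    null[i] = np.abs(group_diff(data, perm))

# Per-modality permutation p-values for the observed data and every null draw.
p_obs = (null >= obs).mean(axis=0)
p_null = 1.0 - (rankdata(null, axis=0) - 1.0) / n_perm

# Tippett combining function: the smallest p across modalities, calibrated
# against the same synchronized permutation distribution.
p_combined = (p_null.min(axis=1) <= p_obs.min()).mean()
print("per-modality p:", p_obs, " combined (Tippett) p:", p_combined)
```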
Alves, Cíntia; Pereira, Rui; Prieto, Lourdes; Aler, Mercedes; Amaral, Cesar R L; Arévalo, Cristina; Berardi, Gabriela; Di Rocco, Florencia; Caputo, Mariela; Carmona, Cristian Hernandez; Catelli, Laura; Costa, Heloísa Afonso; Coufalova, Pavla; Furfuro, Sandra; García, Óscar; Gaviria, Anibal; Goios, Ana; Gómez, Juan José Builes; Hernández, Alexis; Hernández, Eva Del Carmen Betancor; Miranda, Luís; Parra, David; Pedrosa, Susana; Porto, Maria João Anjos; Rebelo, Maria de Lurdes; Spirito, Matteo; Torres, María Del Carmen Villalobos; Amorim, António; Pereira, Filipe
2017-05-01
DNA is a powerful tool available for forensic investigations requiring identification of species. However, it is necessary to develop and validate methods able to produce results from degraded and/or low-quality DNA samples, with the high standards obligatory in forensic research. Here, we describe a voluntary collaborative exercise to test the recently developed Species Identification by Insertions/Deletions (SPInDel) method. The SPInDel kit allows the identification of species by the generation of numeric profiles combining the lengths of six mitochondrial ribosomal RNA (rRNA) gene regions amplified in a single reaction followed by capillary electrophoresis. The exercise was organized during 2014 by a Working Commission of the Spanish and Portuguese-Speaking Working Group of the International Society for Forensic Genetics (GHEP-ISFG), created in 2013. The 24 participating laboratories from 10 countries were asked to identify the species in 11 DNA samples from previous GHEP-ISFG proficiency tests using a SPInDel primer mix and control samples of the 10 target species. Computer software was also provided to the participants to assist in the analysis of the results. All samples were correctly identified by 22 of the 24 laboratories, including samples with low amounts of DNA (hair shafts) and mixtures of saliva and blood. Correct species identifications were obtained in 238 of the 241 (98.8%) reported SPInDel profiles. Two laboratories were responsible for the three cases of misclassification. The SPInDel method was efficient in the identification of species in mixtures, considering that only a single laboratory failed to detect a mixture in one sample. This result suggests that SPInDel is a valid method for mixture analyses without the need for DNA sequencing, with the advantage of identifying more than one species in a single reaction. The low frequency of wrong (5.0%) and missing (2.1%) alleles did not interfere with correct species identification, which demonstrates the advantage of using a method based on the analysis of multiple loci. Overall, the SPInDel method was easily implemented by laboratories using different genotyping platforms, the interpretation of results was straightforward, and the SPInDel software was used without any problems. The results of this collaborative exercise indicate that the SPInDel method can be applied successfully in forensic casework investigations. Copyright © 2017 Elsevier B.V. All rights reserved.
Validation of MIMGO: a method to identify differentially expressed GO terms in a microarray dataset
2012-01-01
Background We previously proposed an algorithm for the identification of GO terms that commonly annotate genes whose expression is upregulated or downregulated in some microarray data compared with in other microarray data. We call these “differentially expressed GO terms” and have named the algorithm “matrix-assisted identification method of differentially expressed GO terms” (MIMGO). MIMGO can also identify microarray data in which genes annotated with a differentially expressed GO term are upregulated or downregulated. However, MIMGO has not yet been validated on a real microarray dataset using all available GO terms. Findings We combined Gene Set Enrichment Analysis (GSEA) with MIMGO to identify differentially expressed GO terms in a yeast cell cycle microarray dataset. GSEA followed by MIMGO (GSEA + MIMGO) correctly identified (p < 0.05) microarray data in which genes annotated with differentially expressed GO terms are upregulated. We found that GSEA + MIMGO was slightly less effective than, or comparable to, GSEA (Pearson), a method that uses Pearson’s correlation as a metric, at detecting true differentially expressed GO terms. However, unlike other methods including GSEA (Pearson), GSEA + MIMGO can comprehensively identify the microarray data in which genes annotated with a differentially expressed GO term are upregulated or downregulated. Conclusions MIMGO is a reliable method to identify differentially expressed GO terms comprehensively. PMID:23232071
Kury, Fabrício S P; Cimino, James J
2015-01-01
The corrections ("stipulations") to a proposed research study protocol produced by an institutional review board (IRB) can often be repetitive across many studies; however, there is no standard set of stipulations that could be used, for example, by researchers wishing to anticipate and correct problems in their research proposals prior to submitting to an IRB. The objective of the research was to computationally identify the most repetitive types of stipulations generated in the course of IRB deliberations. The text of each stipulation was normalized using the natural language processing techniques. An undirected weighted network was constructed in which each stipulation was represented by a node, and each link, if present, had weight corresponding to the TF-IDF Cosine Similarity of the stipulations. Network analysis software was then used to identify clusters in the network representing similar stipulations. The final results were correlated with additional data to produce further insights about the IRB workflow. From a corpus of 18,582 stipulations we identified 31 types of repetitive stipulations. Those types accounted for 3,870 stipulations (20.8% of the corpus) produced for 697 (88.7%) of all protocols in 392 (also 88.7%) of all the CNS IRB meetings with stipulations entered in our data source. A notable peroportion of the corrections produced by the IRB can be considered highly repetitive. Our shareable method relied on a minimal manual analysis and provides an intuitive exploration with theoretically unbounded granularity. Finer granularity allowed for the insight that is anticipated to prevent the need for identifying the IRB panel expertise or any human supervision.
Cloud, Joann L; Harmsen, Dag; Iwen, Peter C; Dunn, James J; Hall, Gerri; Lasala, Paul Rocco; Hoggan, Karen; Wilson, Deborah; Woods, Gail L; Mellmann, Alexander
2010-04-01
Correct identification of nonfermenting Gram-negative bacilli (NFB) is crucial for patient management. We compared phenotypic identifications of 96 clinical NFB isolates with identifications obtained by 5' 16S rRNA gene sequencing. Sequencing identified 88 isolates (91.7%) with >99% similarity to a sequence from the assigned species; 61.5% of sequencing results were concordant with phenotypic results, indicating the usability of sequencing to identify NFB.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DOE /NV
This Corrective Action Decision Document has been prepared for Corrective Action Unit 340, the NTS Pesticide Release Sites, in accordance with the Federal Facility Agreement and Consent Order of 1996 (FFACO, 1996). Corrective Action Unit 340 is located at the Nevada Test Site, Nevada, and is comprised of the following Corrective Action Sites: 23-21-01, Area 23 Quonset Hut 800 Pesticide Release Ditch; 23-18-03, Area 23 Skid Huts Pesticide Storage; and 15-18-02, Area 15 Quonset Hut 15-11 Pesticide Storage. The purpose of this Corrective Action Decision Document is to identify and provide a rationale for the selection of a recommended corrective action alternative for each Corrective Action Site. The scope of this Corrective Action Decision Document consists of the following tasks: Develop corrective action objectives; Identify corrective action alternative screening criteria; Develop corrective action alternatives; Perform detailed and comparative evaluations of the corrective action alternatives in relation to the corrective action objectives and screening criteria; and Recommend and justify a preferred corrective action alternative for each Corrective Action Site.
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting, and model-free methods. However, few of these have been applied in the field of LIBS technology, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its characteristic smoothness. Experiments on the background correction simulation indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) over polynomial fitting, Lorentz fitting, and the model-free method after background correction. All of these background correction methods acquire larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method acquires a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the linear correlation coefficients after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu compared with polynomial fitting, Lorentz fitting, and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
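A minimal sketch of background estimation by spline interpolation in the spirit of the abstract: anchor a cubic spline on local minima of the spectrum (assumed to sample the smooth continuum) and subtract it. The synthetic spectrum, line positions, and anchor-selection rule are assumptions, not the paper's procedure:

```python
# Estimate the smooth continuous background of a line spectrum with a
# cubic spline through local minima, then subtract it from the spectrum.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelmin

wl = np.linspace(300.0, 600.0, 2000)                       # wavelength axis, nm
background = 50.0 * np.exp(-((wl - 450.0) / 150.0) ** 2)   # smooth continuum
lines = sum(a * np.exp(-((wl - c) / 0.3) ** 2)             # narrow emission lines
            for a, c in [(40, 324.7), (30, 327.4), (25, 510.5)])
spectrum = background + lines + np.random.default_rng(2).normal(0, 0.3, wl.size)

# Local minima approximate points where only background is present.
idx = argrelmin(spectrum, order=40)[0]
bg_est = CubicSpline(wl[idx], spectrum[idx])(wl)
corrected = spectrum - bg_est

print("background residual RMS:", np.sqrt(np.mean((bg_est - background) ** 2)))
```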
Seibold, E; Maier, T; Kostrzewa, M; Zeman, E; Splettstoesser, W
2010-04-01
Francisella tularensis, the causative agent of tularemia, is a potential agent of bioterrorism. The phenotypic discrimination of closely related, but differently virulent, Francisella tularensis subspecies with phenotyping methods is difficult and time-consuming, often producing ambiguous results. As a fast and simple alternative, matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) was applied to 50 different strains of the genus Francisella to assess its ability to identify and discriminate between strains according to their designated species and subspecies. Reference spectra from five representative strains of Francisella philomiragia, Francisella tularensis subsp. tularensis, Francisella tularensis subsp. holarctica, Francisella tularensis subsp. mediasiatica, and Francisella tularensis subsp. novicida were established and evaluated for their capability to correctly identify Francisella species and subspecies by matching a collection of spectra from 45 blind-coded Francisella strains against a database containing the five reference spectra and 3,287 spectra from other microorganisms. As a reference method for identification of strains from the genus Francisella, 23S rRNA gene sequencing was used. All strains were correctly identified, with both methods showing perfect agreement at the species level as well as at the subspecies level. The identification of Francisella strains by MALDI-TOF MS and subsequent database matching was reproducible using biological replicates, different culture media, different cultivation times, different serial in vitro passages of the same strain, different preparation protocols, and different mass spectrometers.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-28
... Identifier: CMS-10003] Public Information Collection Requirements Submitted to the Office of Management and Budget (OMB); Correction AGENCY: Centers for Medicare & Medicaid Services (CMS), HHS. ACTION: Correction of notice. SUMMARY: This document corrects a technical error in the notice [Document Identifier: CMS...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-08
... Identifier: CMS-10379] Public Information Collection Requirements Submitted to the Office of Management and Budget (OMB); Correction AGENCY: Centers for Medicare & Medicaid Services (CMS), HHS. ACTION: Correction of notice. SUMMARY: This document corrects the information provided for [Document Identifier: CMS...
NASA Astrophysics Data System (ADS)
Hakala, Kirsti; Addor, Nans; Seibert, Jan
2017-04-01
Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method, and (4) parameterization of the hydrological model. We utilize climate projections at the 0.11° (approximately 12.5 km) resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX comprises regional climate model (RCM) simulations, which have been downscaled from global climate models (GCMs) from the CMIP5 archive using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as an objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibrating with the genetic algorithm and Powell optimization (GAP) method, which works by selecting and recombining high-performing parameter sets with each other. Once HBV is calibrated, we perform a quantitative comparison of the influence of biases inherited from climate model simulations with the biases stemming from the hydrological model. The evaluation is conducted over two time periods: (i) 1980-2009, to characterize the simulation realism under the current climate, and (ii) 2070-2099, to identify the magnitude of the projected change of streamflow under the climate scenarios RCP4.5 and RCP8.5. We utilize two techniques for correcting biases in the climate model output: quantile mapping and a new method, frequency bias correction (FBC). The FBC method matches the frequencies between observed and GCM-RCM data; in this way, it can correct across all time scales, addressing a known limitation of quantile mapping. A novel approach for the evaluation of the climate simulations and bias correction methods was then applied. Streamflow can be thought of as the "great integrator" of uncertainties: the ability, or the lack thereof, to correctly simulate streamflow is a way to assess the realism of the bias-corrected climate simulations. Long-term monthly means as well as high- and low-flow metrics are used to evaluate the realism of the simulations under the current climate and to gauge the impacts of climate change on streamflow. Preliminary results show that under the present climate, calibration of the hydrological model contributes a much smaller band of uncertainty to the modeling chain than the bias correction of the GCM-RCMs. Therefore, for future time periods, we expect the bias correction of climate model data to have a greater influence on projected changes in streamflow than the calibration of the hydrological model.
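A minimal sketch of empirical quantile mapping, one of the two bias correction techniques compared above. The gamma-distributed toy data stand in for the study's EURO-CORDEX inputs:

```python
# Empirical quantile mapping: map model values onto the observed
# distribution via matched empirical quantiles from a reference period.
import numpy as np

rng = np.random.default_rng(3)
obs = rng.gamma(shape=2.0, scale=3.0, size=5000)   # "observed" climate
mod = rng.gamma(shape=2.0, scale=4.5, size=5000)   # biased model output

def quantile_map(x, model_ref, obs_ref, n_q=100):
    """Map values x through the empirical model-to-obs quantile transfer."""
    q = np.linspace(0.0, 1.0, n_q)
    mq = np.quantile(model_ref, q)
    oq = np.quantile(obs_ref, q)
    return np.interp(x, mq, oq)        # piecewise-linear transfer function

future = rng.gamma(shape=2.0, scale=5.0, size=5000)  # model projection
corrected = quantile_map(future, mod, obs)
print("means obs/mod/corrected:", obs.mean(), mod.mean(), corrected.mean())
```

Because the transfer function is built from the distributions of a reference period, it corrects distributional bias but not, by itself, errors at specific time scales; that gap is what the frequency bias correction method above targets.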
Investigations in adaptive processing of multispectral data
NASA Technical Reports Server (NTRS)
Kriegler, F. J.; Horwitz, H. M.
1973-01-01
Adaptive data processing procedures are applied to the problem of classifying objects in a scene scanned by multispectral sensor. These procedures show a performance improvement over standard nonadaptive techniques. Some sources of error in classification are identified and those correctable by adaptive processing are discussed. Experiments in adaptation of signature means by decision-directed methods are described. Some of these methods assume correlation between the trajectories of different signature means; for others this assumption is not made.
2013-10-01
correct group assignment of samples in unsupervised hierarchical clustering by the Unweighted Pair-Group Method using Arithmetic averages (UPGMA) based on... centering of log2-transformed MAS5.0 signal values; probe set clustering was performed by the UPGMA method using cosine correlation as the similarity metric... A) The 108 differentially-regulated genes identified were subjected to unsupervised hierarchical clustering analysis using the UPGMA algorithm with
Richter, Jacob T.; Sloss, Brian L.; Isermann, Daniel A.
2016-01-01
Previous research has generally ignored the potential effects of spawning habitat availability and quality on recruitment of Walleye Sander vitreus, largely because information on spawning habitat is lacking for many lakes. Furthermore, traditional transect-based methods used to describe habitat are time and labor intensive. Our objectives were to determine if side-scan sonar could be used to accurately classify Walleye spawning habitat in the nearshore littoral zone and provide lakewide estimates of spawning habitat availability similar to estimates obtained from a transect–quadrat-based method. Based on assessments completed on 16 northern Wisconsin lakes, interpretation of side-scan sonar images resulted in correct identification of substrate size-class for 93% (177 of 191) of selected locations and all incorrect classifications were within ± 1 class of the correct substrate size-class. Gravel, cobble, and rubble substrates were incorrectly identified from side-scan images in only two instances (1% misclassification), suggesting that side-scan sonar can be used to accurately identify preferred Walleye spawning substrates. Additionally, we detected no significant differences in estimates of lakewide littoral zone substrate compositions estimated using side-scan sonar and a traditional transect–quadrat-based method. Our results indicate that side-scan sonar offers a practical, accurate, and efficient technique for assessing substrate composition and quantifying potential Walleye spawning habitat in the nearshore littoral zone of north temperate lakes.
Lu, Weiping; Gu, Dayong; Chen, Xingyun; Xiong, Renping; Liu, Ping; Yang, Nan; Zhou, Yuanguo
2010-10-01
The traditional techniques for diagnosis of invasive fungal infections in the clinical microbiology laboratory need improvement. These techniques are prone to delay results due to their time-consuming process, or result in misidentification of the fungus due to low sensitivity or low specificity. The aim of this study was to develop a method for the rapid detection and identification of fungal pathogens. The internal transcribed spacer 2 (ITS2) fragments of fungal ribosomal DNA were amplified using a polymerase chain reaction for all samples. Next, the products were hybridized with the probes immobilized on the surface of a microarray. These species-specific probes were designed to detect nine different clinical pathogenic fungi including Candida albicans, Candida tropicalis, Candida glabrata, Candida parapsilosis, Candida krusei, Candida lusitaniae, Candida guilliermondii, Candida kefyr, and Cryptococcus neoformans. The hybridizing signals were enhanced with gold nanoparticles and silver deposition, and detected using a flatbed scanner or visually. Fifty-nine strains of fungal pathogens, including standard and clinically isolated strains, were correctly identified by this method. The sensitivity of the assay for Candida albicans was 10 cells/mL. Ten cultures from clinical specimens and 12 clinical samples spiked with fungi were also identified correctly. This technique offers a reliable alternative to conventional methods for the detection and identification of fungal pathogens. It has higher efficiency, specificity and sensitivity compared with other methods commonly used in the clinical laboratory.
Farnan, Jeanne M; Gaffney, Sean; Poston, Jason T; Slawinski, Kris; Cappaert, Melissa; Kamin, Barry; Arora, Vineet M
2016-03-01
Patient safety curricula in undergraduate medical education (UME) are often didactic format with little focus on skills training. Despite recent focus on safety, practical training in residency education is also lacking. Assessments of safety skills in UME and graduate medical education (GME) are generally knowledge, and not application-focused. We aimed to develop and pilot a safety-focused simulation with medical students and interns to assess knowledge regarding hazards of hospitalisation. A simulation demonstrating common hospital-based safety threats was designed. A case scenario was created including salient patient information and simulated safety threats such as the use of upper-extremity restraints and medication errors. After entering the room and reviewing the mock chart, learners were timed and asked to identify and document as many safety hazards as possible. Learner satisfaction was assessed using constructed-response evaluation. Descriptive statistics, including per cent correct and mean correct hazards, were performed. All 86 third-year medical students completed the encounter. Some hazards were identified by a majority of students (fall risk, 83% of students) while others were rarely identified (absence of deep venous thrombosis prophylaxis, 13% of students). Only 5% of students correctly identified pressure ulcer risk. 128 of 131 interns representing 49 medical schools participated in the GME implementation. Incoming interns were able to identify a mean of 5.1 hazards out of the 9 displayed (SD 1.4) with 40% identifying restraints as a hazard, and 20% identifying the inappropriate urinary catheter as a hazard. A simulation showcasing safety hazards was a feasible and effective way to introduce trainees to safety-focused content. Both students and interns had difficulty identifying common hazards of hospitalisation. Despite poor performance, learners appreciated the interactive experience and its clinical utility. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Mobile Image Based Color Correction Using Deblurring
Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.
2016-01-01
Dietary intake assessment, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e., a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique that combines image de-blurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space. PMID:28572697
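A minimal sketch of fiducial-based polynomial color correction in the spirit of the abstract, worked in RGB for brevity; the paper's model operates in the LMS color space, which would add a color-space conversion around this fit. The simulated capture matrix and patch colors are assumptions:

```python
# Fit a second-order polynomial transform from colors observed on a color
# checkerboard to their known reference values, then apply it to pixels.
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of N x 3 color samples."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b,
                     r * g, r * b, g * b, r * r, g * g, b * b], axis=1)

rng = np.random.default_rng(4)
reference = rng.uniform(0, 1, (24, 3))              # known patch colors
# Simulated capture: channel crosstalk plus offset stands in for lighting.
M = np.array([[0.90, 0.08, 0.02], [0.05, 0.85, 0.10], [0.02, 0.07, 0.91]])
observed = reference @ M.T + 0.05

# Least-squares fit of the polynomial correction on the checker patches.
coef, *_ = np.linalg.lstsq(poly_features(observed), reference, rcond=None)

pixels = rng.uniform(0, 1, (10, 3))                 # any pixels to correct
corrected = poly_features(pixels @ M.T + 0.05) @ coef
print("max error after correction:", np.abs(corrected - pixels).max())
```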
Sonzogni, A. A.; McCutchan, E. A.; Johnson, T. D.; ...
2016-04-01
Fission yields form an integral part of the prediction of antineutrino spectra generated by nuclear reactors, but little attention has been paid to the quality and reliability of the data used in current calculations. Following a critical review of the thermal and fast ENDF/B-VII.1 235U fission yields, deficiencies are identified and improved yields are obtained, based on corrections of erroneous yields, consistency between decay and fission yield data, and updated isomeric ratios. These corrected yields are used to calculate antineutrino spectra using the summation method. An anomalous value for the thermal fission yield of 86Ge generates an excess of antineutrinos at 5–7 MeV, a feature which is no longer present when the corrected yields are used. Thermal spectra calculated with two distinct fission yield libraries (corrected ENDF/B and JEFF) differ by up to 6% in the 0–7 MeV energy window, allowing for a basic estimate of the uncertainty involved in the fission yield component of summation calculations. Lastly, the fast neutron antineutrino spectrum is calculated, which at the moment can only be obtained with the summation method and may be relevant for short baseline reactor experiments using highly enriched uranium fuel.
Seismic wavefield propagation in 2D anisotropic media: Ray theory versus wave-equation simulation
NASA Astrophysics Data System (ADS)
Bai, Chao-ying; Hu, Guang-yi; Zhang, Yan-teng; Li, Zhong-sheng
2014-05-01
Although ray theory is based on the high-frequency assumption of the elastic wave equation, ray-theory and wave-equation simulation methods should serve as mutual proofs of each other and hence be developed jointly; in practice, however, they have progressed independently in parallel. For this reason, in this paper we try an alternative way to mutually verify and test the computational accuracy and solution correctness of both the ray theory (the multistage irregular shortest-path method) and the wave-equation simulation methods (both the staggered finite difference method and the pseudo-spectral method) in anisotropic VTI and TTI media. Through the analysis and comparison of wavefield snapshots, common-source gather profiles, and synthetic seismograms, we are able not only to verify the accuracy and correctness of each of the methods, at least for kinematic features, but also to thoroughly understand the kinematic and dynamic features of wave propagation in anisotropic media. The results show that both the staggered finite difference method and the pseudo-spectral method yield the same results even for complex anisotropic media (such as a fault model); the multistage irregular shortest-path method is capable of predicting kinematic features similar to those of the wave-equation simulation methods, so the approaches can be used to test each other for methodological accuracy and solution correctness. In addition, with the aid of the ray tracing results, it is easy to identify the multi-phases (or multiples) in the wavefield snapshot, common-source gather seismic section, and synthetic seismogram predicted by the wave-equation simulation method, which is a key issue for later seismic application.
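A minimal sketch of the graph idea behind shortest-path ray tracing: first-arrival travel times follow from Dijkstra's algorithm on a grid whose edge weights are distance times average slowness. This isotropic, 8-neighbour, regular-grid version is a simplification; the paper's multistage irregular shortest-path method is far more general and handles anisotropy:

```python
# First-arrival travel times on a 2D grid via Dijkstra's shortest paths.
import heapq
import numpy as np

def travel_times(slowness: np.ndarray, src, h=1.0):
    """Travel time from src to every node; slowness in s/m, spacing h in m."""
    ny, nx = slowness.shape
    t = np.full((ny, nx), np.inf)
    t[src] = 0.0
    heap = [(0.0, src)]
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    while heap:
        ti, (i, j) = heapq.heappop(heap)
        if ti > t[i, j]:
            continue                    # stale heap entry
        for di, dj in nbrs:
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                # Edge weight: segment length times average slowness.
                w = h * np.hypot(di, dj) * 0.5 * (slowness[i, j] + slowness[ni, nj])
                if ti + w < t[ni, nj]:
                    t[ni, nj] = ti + w
                    heapq.heappush(heap, (t[ni, nj], (ni, nj)))
    return t

slow = np.full((101, 101), 1 / 2000.0)    # 2000 m/s upper medium
slow[50:, :] = 1 / 3500.0                 # faster half-space below
tt = travel_times(slow, (0, 50), h=10.0)  # source at the surface
print("first arrival at bottom-right corner: %.3f s" % tt[-1, -1])
```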
Standoff Human Identification Using Body Shape
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzner, Shari; Heredia-Langner, Alejandro; Amidan, Brett G.
2015-09-01
The ability to identify individuals is a key component of maintaining safety and security in public spaces and around critical infrastructure. Monitoring an open space is challenging because individuals must be identified and re-identified from a standoff distance nonintrusively, making methods like fingerprinting and even facial recognition impractical. We propose using body shape features as a means for identification from standoff sensing, either complementing other identifiers or as an alternative. An important challenge in monitoring open spaces is reconstructing identifying features when only a partial observation is available, because of view-angle limitations and occlusion or subject pose changes. To address this challenge, we investigated the minimum number of features required for a high probability of correct identification, and we developed models for predicting a key body feature, height, from a limited set of observed features. We found that any set of nine randomly selected body measurements was sufficient to correctly identify an individual in a dataset of 4426 subjects. For predicting height, anthropometric measures were investigated for correlation with height. Their correlation coefficients and associated linear models were reported. These results, a sufficient number of features for identification and height prediction from a single feature, contribute to developing systems for standoff identification when views of a subject are limited.
NASA Astrophysics Data System (ADS)
Nallala, Jayakrupakar; Gobinet, Cyril; Diebold, Marie-Danièle; Untereiner, Valérie; Bouché, Olivier; Manfait, Michel; Sockalingum, Ganesh Dhruvananda; Piot, Olivier
2012-11-01
Innovative diagnostic methods that could complement conventional histopathology for cancer diagnosis are urgently needed. In this perspective, we propose a new concept based on spectral histopathology, using IR spectral micro-imaging, directly applied to paraffinized colon tissue arrays stabilized in an agarose matrix without any chemical pre-treatment. In order to correct spectral interferences from paraffin and agarose, a mathematical procedure is implemented. The corrected spectral images are then processed by a multivariate clustering method to automatically recover, on the basis of their intrinsic molecular composition, the main histological classes of normal and tumoral colon tissue. The spectral signatures from different histological classes of the colonic tissues are analyzed using statistical methods (Kruskal-Wallis test and principal component analysis) to identify the most discriminant IR features. These features allow characterizing some of the biomolecular alterations associated with malignancy. Thus, via a single analysis, in a label-free and nondestructive manner, the main changes associated with nucleotide, carbohydrate, and collagen features can be identified simultaneously between the compared normal and cancerous tissues. The present study demonstrates the potential of IR spectral imaging as a complementary modern tool to conventional histopathology for an objective cancer diagnosis directly from paraffin-embedded tissue arrays.
Sequetyping: Serotyping Streptococcus pneumoniae by a Single PCR Sequencing Strategy
Leung, Marcus H.; Bryson, Kevin; Freystatter, Kathrin; Pichon, Bruno; Edwards, Giles; Gillespie, Stephen H.
2012-01-01
The introduction of pneumococcal conjugate vaccines necessitates continued monitoring of circulating strains to assess vaccine efficacy and replacement serotypes. Conventional serological methods are costly, labor-intensive, and prone to misidentification, while current DNA-based methods have limited serotype coverage requiring multiple PCR primers. In this study, a computer algorithm was developed to interrogate the capsulation locus (cps) of vaccine serotypes to locate primer pairs in conserved regions that border variable regions and could differentiate between serotypes. In silico analysis of cps from 92 serotypes indicated that a primer pair spanning the regulatory gene cpsB could putatively amplify 84 serotypes and differentiate 46. This primer set was specific to Streptococcus pneumoniae, with no amplification observed for other species, including S. mitis, S. oralis, and S. pseudopneumoniae. One hundred thirty-eight pneumococcal strains covering 48 serotypes were tested. Of 23 vaccine serotypes included in the study, most (19/22, 86%) were identified correctly at least to the serogroup level, including all of the 13-valent conjugate vaccine and other replacement serotypes. Reproducibility was demonstrated by the correct sequetyping of different strains of a serotype. This novel sequence-based method employing a single PCR primer pair is cost-effective and simple. Furthermore, it has the potential to identify new serotypes that may evolve in the future. PMID:22553238
Ng, L S Y; Sim, J H C; Eng, L C; Menon, S; Tan, T Y
2012-08-01
Aero-tolerant Actinomyces spp. are an under-recognised cause of cutaneous infections, in part because identification using conventional phenotypic methods is difficult and may be inaccurate. Matrix-assisted laser desorption ionisation time-of-flight mass spectrometry (MALDI-TOF MS) is a promising new technique for bacterial identification, but with limited data on the identification of aero-tolerant Actinomyces spp. This study evaluated the accuracy of a phenotypic biochemical kit, MALDI-TOF MS and genotypic identification methods for the identification of this problematic group of organisms. Thirty aero-tolerant Actinomyces spp. were isolated from soft-tissue infections over a 2-year period. Species identification was performed by 16S rRNA sequencing and genotypic results were compared with results obtained by API Coryne and MALDI-TOF MS. There was poor agreement between API Coryne and genotypic identification, with only 33% of isolates correctly identified to the species level. MALDI-TOF MS correctly identified 97% of isolates to the species level, with 33% of identifications achieved with high confidence scores. MALDI-TOF MS is a promising new tool for the identification of aero-tolerant Actinomyces spp., but improvement of the database is required in order to increase the confidence level of identification.
Fürnstahl, Philipp; Vlachopoulos, Lazaros; Schweizer, Andreas; Fucentese, Sandro F; Koch, Peter P
2015-08-01
The accurate reduction of tibial plateau malunions can be challenging without guidance. In this work, we report on a novel technique that combines 3-dimensional computer-assisted planning with patient-specific surgical guides for improving reliability and accuracy of complex intraarticular corrective osteotomies. Preoperative planning based on 3-dimensional bone models was performed to simulate fragment mobilization and reduction in 3 cases. Surgical implementation of the preoperative plan using patient-specific cutting and reduction guides was evaluated; benefits and limitations of the approach were identified and discussed. The preliminary results are encouraging and show that complex, intraarticular corrective osteotomies can be accurately performed with this technique. For selective patients with complex malunions around the tibia plateau, this method might be an attractive option, with the potential to facilitate achieving the most accurate correction possible.
Semantic Labelling of Road Furniture in Mobile Laser Scanning Data
NASA Astrophysics Data System (ADS)
Li, F.; Oude Elberink, S.; Vosselman, G.
2017-09-01
Road furniture semantic labelling is vital for large-scale mapping and autonomous driving systems. Much research has investigated road furniture interpretation in both 2D images and 3D point clouds, but precise interpretation of road furniture in mobile laser scanning data remains unexplored. In this paper, a novel method is proposed to interpret road furniture based on their logical relations and functionalities. Our work represents the most detailed interpretation of road furniture in mobile laser scanning data. 93.3% of poles are correctly extracted and all of them are correctly recognised. 94.3% of street light heads are detected and 76.9% of them are correctly identified. Despite errors arising from the recognition of other components, our framework provides a promising solution to automatically map road furniture at a detailed level in urban environments.
NASA Astrophysics Data System (ADS)
Behtani, A.; Bouazzouni, A.; Khatir, S.; Tiachacht, S.; Zhou, Y.-L.; Abdel Wahab, M.
2017-05-01
In this paper, the problem of using measured modal parameters to detect and locate damage in laminated composite beam structures with four layers of graphite/epoxy [0°/90°₂/0°] is investigated. A technique based on the residual force method is applied to the laminated composite structure with different boundary conditions. The damage detection results for several damage cases demonstrate that, using the residual force method as a damage index, the damage location can be identified correctly and the damage extent can be estimated as well.
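A minimal sketch of the residual force idea on a spring-mass chain (an illustrative stand-in, not the paper's laminated-beam model): with the healthy stiffness and mass matrices and a measured mode of the damaged system, the residual force vector R = (K - λM)φ is nonzero only at degrees of freedom touching damaged elements, which localizes the damage:

```python
# Residual force method on a fixed-free spring-mass chain.
import numpy as np
from scipy.linalg import eigh

def chain_stiffness(k):
    """Assemble K for a chain where spring k[i] links mass i to mass i-1
    (mass -1 being the ground)."""
    return (np.diag(k + np.append(k[1:], 0.0))
            - np.diag(k[1:], 1) - np.diag(k[1:], -1))

n = 8
k_healthy = np.full(n, 1.0e6)       # spring stiffnesses, N/m
k_damaged = k_healthy.copy()
k_damaged[4] *= 0.7                 # 30% stiffness loss in element 4

K = chain_stiffness(k_healthy)      # analytical (healthy) model
Kd = chain_stiffness(k_damaged)     # "real" damaged structure
M = np.eye(n)                       # unit masses for simplicity

lam, phi = eigh(Kd, M)              # stands in for measured damaged modes
R = (K - lam[0] * M) @ phi[:, 0]    # residual force vector, fundamental mode
print("|R| per DOF:", np.round(np.abs(R), 1))
# Nonzero entries appear only at DOFs 3 and 4, flanking the damaged spring.
```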
McElvania Tekippe, Erin; Shuey, Sunni; Winkler, David W; Butler, Meghan A; Burnham, Carey-Ann D
2013-05-01
Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) can be used as a method for the rapid identification of microorganisms. This study evaluated the Bruker Biotyper (MALDI-TOF MS) system for the identification of clinically relevant Gram-positive organisms. We tested 239 aerobic Gram-positive organisms isolated from clinical specimens. We evaluated 4 direct-smear methods, including "heavy" (H) and "light" (L) smears, with and without a 1-μl direct formic acid (FA) overlay. The quality measure assigned to a MALDI-TOF MS identification is a numerical value or "score." We found that a heavy smear with a formic acid overlay (H+FA) produced optimal MALDI-TOF MS identification scores and the highest percentage of correctly identified organisms. Using a score of ≥2.0, we identified 183 of the 239 isolates (76.6%) to the genus level, and of the 181 isolates resolved to the species level, 141 isolates (77.9%) were correctly identified. To maximize the number of correct identifications while minimizing misidentifications, the data were analyzed using a score of ≥1.7 for genus- and species-level identification. Using this score, 220 of the 239 isolates (92.1%) were identified to the genus level, and of the 181 isolates resolved to the species level, 167 isolates (92.2%) could be assigned an accurate species identification. We also evaluated a subset of isolates for preanalytic factors that might influence MALDI-TOF MS identification. Frequent subcultures increased the number of unidentified isolates. Incubation temperatures and subcultures of the media did not alter the rate of identification. These data define the ideal bacterial preparation, identification score, and medium conditions for optimal identification of Gram-positive bacteria by use of MALDI-TOF MS.
Masuyama, Kotoka; Shojo, Hideki; Nakanishi, Hiroaki; Inokuchi, Shota; Adachi, Noboru
2017-01-01
Sex determination is important in archeology and anthropology for the study of past societies, cultures, and human activities. Sex determination is also one of the most important components of individual identification in criminal investigations. We developed a new method of sex determination by detecting a single-nucleotide polymorphism in the amelogenin gene using amplified product-length polymorphisms in combination with sex-determining region Y analysis. We particularly focused on the most common types of postmortem DNA damage in ancient and forensic samples: fragmentation and nucleotide modification resulting from deamination. Amplicon size was designed to be less than 60 bp to make the method more useful for analyzing degraded DNA samples. All DNA samples collected from eight Japanese individuals (four male, four female) were evaluated correctly using our method. The detection limit for accurate sex determination was determined to be 20 pg of DNA. We compared our new method with commercial short tandem repeat analysis kits using DNA samples artificially fragmented by ultraviolet irradiation. Our novel method was the most robust for highly fragmented DNA samples. To deal with allelic dropout resulting from deamination, we adopted “bidirectional analysis,” which analyzed samples from both sense and antisense strands. This new method was applied to 14 Jomon individuals (3500-year-old bone samples) whose sex had been identified morphologically. We could correctly identify the sex of 11 out of 14 individuals. These results show that our method is reliable for the sex determination of highly degenerated samples. PMID:28052096
Effects of atmospheric aerosols on scattering reflected visible light from earth resource features
NASA Technical Reports Server (NTRS)
Noll, K. E.; Tschantz, B. A.; Davis, W. T.
1972-01-01
The vertical variations in atmospheric light attenuation under ambient conditions were identified, and a method through which aerial photographs of earth features might be corrected to yield quantitative information about the actual features was provided. A theoretical equation was developed based on the Bouguer-Lambert extinction law and basic photographic theory.
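For reference, a standard statement of the Bouguer-Lambert extinction law on which the correction is built (the paper's working equation additionally folds in terms from basic photographic theory):

```latex
% Radiance is attenuated exponentially along the optical path:
\[
  I(\lambda) = I_0(\lambda)\, e^{-\tau(\lambda)},
  \qquad
  \tau(\lambda) = \int_{0}^{z} k_{\mathrm{ext}}(\lambda, z')\, \mathrm{d}z',
\]
% where $I_0$ is the radiance leaving the surface feature,
% $k_{\mathrm{ext}}$ the altitude-dependent extinction coefficient of the
% aerosol-laden atmosphere, and $\tau$ the optical depth over altitude $z$.
```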
Use of Machine Learning to Identify Children with Autism and Their Motor Abnormalities
ERIC Educational Resources Information Center
Crippa, Alessandro; Salvatore, Christian; Perego, Paolo; Forti, Sara; Nobile, Maria; Molteni, Massimo; Castiglioni, Isabella
2015-01-01
In the present work, we have undertaken a proof-of-concept study to determine whether a simple upper-limb movement could be useful to accurately classify low-functioning children with autism spectrum disorder (ASD) aged 2-4. To answer this question, we developed a supervised machine-learning method to correctly discriminate 15 preschool children…
Addressing Common Student Errors with Classroom Voting in Multivariable Calculus
ERIC Educational Resources Information Center
Cline, Kelly; Parker, Mark; Zullo, Holly; Stewart, Ann
2012-01-01
One technique for identifying and addressing common student errors is the method of classroom voting, in which the instructor presents a multiple-choice question to the class, and after a few minutes for consideration and small group discussion, each student votes on the correct answer, often using a hand-held electronic clicker. If a large number…
USDA-ARS?s Scientific Manuscript database
BACKGROUND: Diurnal variation in blood pressure (BP) is regulated, in part, by an endogenous circadian clock; however, few human studies have identified associations between clock genes and BP. Accounting for environmental temperature may be necessary to correct for seasonal bias. METHODS: We examin...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Jang-Hwan; Maier, Andreas; Keil, Andreas
2014-06-15
Purpose: A C-arm CT system has been shown to be capable of scanning a single cadaver leg under loaded conditions by virtue of its highly flexible acquisition trajectories. In Part I of this study, using the 4D XCAT-based numerical simulation, the authors predicted that the involuntary motion in the lower body of subjects in weight-bearing positions would seriously degrade image quality and the authors suggested three motion compensation methods by which the reconstructions could be corrected to provide diagnostic image quality. Here, the authors demonstrate that a flat-panel angiography system is appropriate for scanning both legs of subjects in vivo under weight-bearing conditions and further evaluate the three motion-correction algorithms using in vivo data. Methods: The geometry of a C-arm CT system for a horizontal scan trajectory was calibrated using the PDS-2 phantom. The authors acquired images of two healthy volunteers while lying supine on a table, standing, and squatting at several knee flexion angles. In order to identify the involuntary motion of the lower body, nine 1-mm-diameter tantalum fiducial markers were attached around the knee. The static mean marker position in 3D, a reference for motion compensation, was estimated by back-projecting detected markers in multiple projections using calibrated projection matrices and identifying the intersection points in 3D of the back-projected rays. Motion was corrected using three different methods (described in detail previously): (1) 2D projection shifting, (2) 2D deformable projection warping, and (3) 3D rigid body warping. For quantitative image quality analysis, SSIM indices for the three methods were compared using the supine data as a ground truth. Results: A 2D Euclidean distance-based metric of subjects' motion ranged from 0.85 mm (±0.49 mm) to 3.82 mm (±2.91 mm) (corresponding to 2.76 to 12.41 pixels), resulting in severe motion artifacts in 3D reconstructions. Shifting in 2D, 2D warping, and 3D warping improved the SSIM in the central slice by 20.22%, 16.83%, and 25.77% in the data with the largest motion among the five datasets (SCAN5); improvement in off-center slices was 18.94%, 29.14%, and 36.08%, respectively. Conclusions: The authors showed that C-arm CT control can be implemented for nonstandard horizontal trajectories, which enabled the scanning and successful reconstruction of both legs of volunteers in weight-bearing positions. As predicted using theoretical models, the proposed motion correction methods improved image quality by reducing motion artifacts in reconstructions; 3D warping performed better than the 2D methods, especially in off-center slices.
An algorithm for direct causal learning of influences on patient outcomes.
Rathnam, Chandramouli; Lee, Sanghoon; Jiang, Xia
2017-01-01
This study aims at developing and introducing a new algorithm, called direct causal learner (DCL), for learning the direct causal influences of a single target. We applied it to both simulated and real clinical and genome wide association study (GWAS) datasets and compared its performance to classic causal learning algorithms. The DCL algorithm learns the causes of a single target from passive data using Bayesian-scoring, instead of using independence checks, and a novel deletion algorithm. We generate 14,400 simulated datasets and measure the number of datasets for which DCL correctly and partially predicts the direct causes. We then compare its performance with the constraint-based path consistency (PC) and conservative PC (CPC) algorithms, the Bayesian-score based fast greedy search (FGS) algorithm, and the partial ancestral graphs algorithm fast causal inference (FCI). In addition, we extend our comparison of all five algorithms to both a real GWAS dataset and real breast cancer datasets over various time-points in order to observe how effective they are at predicting the causal influences of Alzheimer's disease and breast cancer survival. DCL consistently outperforms FGS, PC, CPC, and FCI in discovering the parents of the target for the datasets simulated using a simple network. Overall, DCL predicts significantly more datasets correctly (McNemar's test significance: p<0.0001) than any of the other algorithms for these network types. For example, when assessing overall performance (simple and complex network results combined), DCL correctly predicts approximately 1400 more datasets than the top FGS method, 1600 more datasets than the top CPC method, 4500 more datasets than the top PC method, and 5600 more datasets than the top FCI method. Although FGS did correctly predict more datasets than DCL for the complex networks, and DCL correctly predicted only a few more datasets than CPC for these networks, there is no significant difference in performance between these three algorithms for this network type. However, when we use a more continuous measure of accuracy, we find that all the DCL methods are able to better partially predict more direct causes than FGS and CPC for the complex networks. In addition, DCL consistently had faster runtimes than the other algorithms. In the application to the real datasets, DCL identified rs6784615, located on the NISCH gene, and rs10824310, located on the PRKG1 gene, as direct causes of late onset Alzheimer's disease (LOAD) development. In addition, DCL identified ER category as a direct predictor of breast cancer mortality within 5 years, and HER2 status as a direct predictor of 10-year breast cancer mortality. These predictors have been identified in previous studies to have a direct causal relationship with their respective phenotypes, supporting the predictive power of DCL. When the other algorithms discovered predictors from the real datasets, these predictors were either also found by DCL or could not be supported by previous studies. Our results show that DCL outperforms FGS, PC, CPC, and FCI in almost every case, demonstrating its potential to advance causal learning. Furthermore, our DCL algorithm effectively identifies direct causes in the LOAD and Metabric GWAS datasets, which indicates its potential for clinical applications. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Baocheng; Qu, Dandan; Tian, Qing; Pang, Liping
2018-05-01
To address the inconsistent linear scale of intrusion signals in the optical fiber pre-warning system (OFPS), this paper presents a scale-correction method. First, the intrusion signals are intercepted and segmented into an aggregate of equal-length segments. The Mellin transform (MT) is then applied to convert them to a common scale, and spectral characteristics are obtained by the Fourier transform. Finally, a back-propagation (BP) neural network, taking the spectral characteristics as input, identifies the intrusion type. We carried out field experiments and collected optical fiber intrusion signals comprising picking, shoveling, and running signals. The experimental results show that the proposed algorithm effectively improves the recognition accuracy of intrusion signals.
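The scale-normalization step can be illustrated with the classical Fourier-Mellin trick: resampling a signal on a logarithmic axis turns a time-scale change into a shift, so the FFT magnitude becomes approximately scale-invariant. The sketch below is a generic illustration, not the authors' OFPS pipeline; the grid sizes and test signal are invented.

```python
import numpy as np

def scale_invariant_spectrum(signal, n_log=256):
    """Approximate the Mellin-transform magnitude by resampling the signal
    on a logarithmic axis and taking an FFT. A time scaling of the input
    becomes a shift on the log axis, which only changes the FFT phase, so
    the magnitude is (approximately) scale-invariant."""
    n = len(signal)
    t = np.arange(1, n + 1, dtype=float)       # avoid log(0)
    log_t = np.geomspace(1.0, n, n_log)        # log-spaced sample points
    warped = np.interp(log_t, t, signal)       # resample onto the log axis
    return np.abs(np.fft.rfft(warped))

# Two copies of the same event at different time scales give similar spectra.
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 8 * np.pi, 1000)) + 0.1 * rng.standard_normal(1000)
slow = np.interp(np.linspace(0, 999, 1500), np.arange(1000), base)  # stretched
f1, f2 = scale_invariant_spectrum(base), scale_invariant_spectrum(slow)
print(np.corrcoef(f1, f2)[0, 1])  # high despite the time-scale change
```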
Spooner, Jennifer; Keen, Jenny; Nayyar, Kalpana; Birkett, Neil; Bond, Nicholas; Bannister, David; Tigue, Natalie; Higazi, Daniel; Kemp, Benjamin; Vaughan, Tristan; Kippen, Alistair; Buchanan, Andrew
2015-07-01
Fabs are an important class of antibody fragment as both research reagents and therapeutic agents. There is a plethora of methods described for their recombinant expression and purification. However, these do not address the issue of excessive light chain production that forms light chain dimers, nor do they describe a universal purification strategy. Light chain dimer impurities and the absence of a universal Fab purification strategy present persistent challenges for biotechnology applications using Fabs, particularly around the need for bespoke purification strategies. This study describes methods to address light chain dimer formation during Fab expression and identifies a novel CH1 affinity resin as a simple and efficient one-step purification for correctly assembled Fab. © 2015 Wiley Periodicals, Inc.
Color correction strategies in optical design
NASA Astrophysics Data System (ADS)
Pfisterer, Richard N.; Vorndran, Shelby D.
2014-12-01
An overview of color correction strategies is presented. Starting with basic first-order aberration theory, we identify known color-corrected solutions for doublets and triplets. Reviewing the modern approaches of Robb-Mercado, Rayces-Aguilar, and C. de Albuquerque et al., we find that they confirm the existence of glass combinations for doublets and triplets that yield the color-corrected solutions we already know exist. Finally we explore the use of the y–ȳ diagram in conjunction with aberration theory to identify the solution space of glasses capable of leading to color-corrected solutions in arbitrary optical systems.
Wilson-Sands, Cathy; Brahn, Pamela; Graves, Kristal
2015-01-01
Validating participants' ability to correctly perform cardiopulmonary resuscitation (CPR) skills during basic life support courses can be a challenge for nursing professional development specialists. This study compares two methods of basic life support training, instructor-led and computer-based learning with voice-activated manikins, to identify if one method is more effective for performance of CPR skills. The findings suggest that a computer-based learning course with voice-activated manikins is a more effective method of training for improved CPR performance.
Accommodative Lag by Autorefraction and Two Dynamic Retinoscopy Methods
2008-01-01
Purpose To evaluate two clinical procedures, MEM and Nott retinoscopy, for detecting accommodative lags of 1.00 diopter (D) or greater in children as identified by an open-field autorefractor. Methods 168 children 8 to <12 years old with low myopia, normal visual acuity, and no strabismus participated as part of an ancillary study within the screening process for a randomized trial. Accommodative response to a 3.00 D demand was first assessed by MEM and Nott retinoscopy, viewing binocularly with spherocylindrical refractive error corrected, with testing order randomized and each performed by a different masked examiner. The response was then determined viewing monocularly with spherical equivalent refractive error corrected, using an open-field autorefractor, which was the gold standard used for eligibility for the clinical trial. Sensitivity and specificity for accommodative lags of 1.00 D or more were calculated for each retinoscopy method compared to the autorefractor. Results 116 (69%) of the 168 children had accommodative lag of 1.00 D or more by autorefraction. MEM identified 66 of the children identified by autorefraction, for a sensitivity of 57% (95% CI = 47% to 66%) and a specificity of 63% (95% CI = 49% to 76%). Nott retinoscopy identified 35 such children, for a sensitivity of 30% (95% CI = 22% to 39%) and a specificity of 81% (95% CI = 67% to 90%). Analysis of receiver operating characteristic (ROC) curves constructed for MEM and for Nott retinoscopy failed to reveal alternate cut points that would improve the combination of sensitivity and specificity for identifying accommodative lag ≥ 1.00 D as defined by autorefraction. Conclusions Neither MEM nor Nott retinoscopy provided adequate sensitivity and specificity to identify myopic children with accommodative lag ≥ 1.00 D as determined by autorefraction. A variety of methodological differences between the techniques may contribute to the modest to poor agreement. PMID:19214130
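For readers who want to reproduce the arithmetic behind these operating characteristics, a small sketch follows; the per-child counts are back-calculated from the reported rates (66/116 for sensitivity, and roughly 33 of 52 lag-free children correctly passed for specificity), so treat them as approximate.

```python
def sens_spec(test_positive, gold_positive):
    """Sensitivity and specificity of a screening test against a gold
    standard. Both arguments are parallel lists of booleans, one per child."""
    pairs = list(zip(test_positive, gold_positive))
    tp = sum(t and g for t, g in pairs)
    fn = sum((not t) and g for t, g in pairs)
    tn = sum((not t) and (not g) for t, g in pairs)
    fp = sum(t and (not g) for t, g in pairs)
    return tp / (tp + fn), tn / (tn + fp)

# Counts derived from the reported MEM rates: 116 of 168 children had
# lag >= 1.00 D by autorefraction; MEM flagged 66 of those 116.
gold = [True] * 116 + [False] * 52
mem = [True] * 66 + [False] * 50 + [False] * 33 + [True] * 19
sens, spec = sens_spec(mem, gold)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # 57%, 63%
```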
Delport, Johannes Andries; Mohorovic, Ivor; Burn, Sandi; McCormick, John Kenneth; Schaus, David; Lannigan, Robert; John, Michael
2016-07-01
Meticillin-resistant Staphylococcus aureus (MRSA) bloodstream infection is responsible for significant morbidity, with mortality rates as high as 60 % if not treated appropriately. We describe a rapid method to detect MRSA in blood cultures using a combined three-hour short-incubation BRUKER matrix-assisted laser desorption/ionization time-of-flight MS BioTyper protocol and a qualitative immunochromatographic assay, the Alere Culture Colony Test PBP2a detection test. We compared this combined method with a molecular method detecting the nuc and mecA genes currently performed in our laboratory. One hundred and seventeen S. aureus blood cultures were tested of which 35 were MRSA and 82 were meticillin-sensitive S. aureus (MSSA). The rapid combined test correctly identified 100 % (82/82) of the MSSA and 85.7 % (30/35) of the MRSA after 3 h. There were five false negative results where the isolates were correctly identified as S. aureus, but PBP2a was not detected by the Culture Colony Test. The combined method has a sensitivity of 87.5 %, specificity of 100 %, a positive predictive value of 100 % and a negative predictive value of 94.3 % with the prevalence of MRSA in our S. aureus blood cultures. The combined rapid method offers a significant benefit to early detection of MRSA in positive blood cultures.
Dudovitz, Rebecca N; Izadpanah, Nilufar; Chung, Paul J.; Slusser, Wendelin
2015-01-01
Objectives Up to 20% of school-age children have a vision problem identifiable by screening, over 80% of which can be corrected with glasses. While vision problems are associated with poor school performance, few studies describe whether and how corrective lenses affect academic achievement and health. Further, there are virtually no studies exploring how children with correctable visual deficits, their parents, and teachers perceive the connection between vision care and school function. Methods We conducted a qualitative evaluation of Vision to Learn (VTL), a school-based program providing free corrective lenses to low-income students in Los Angeles. Nine focus groups with students, parents, and teachers from three schools served by VTL explored the relationships between poor vision, receipt of corrective lenses, and school performance and health. Results Twenty parents, 25 teachers, and 21 students from three elementary schools participated. Participants described how uncorrected visual deficits reduced students’ focus, perseverance, and class participation, affecting academic functioning and psychosocial stress; how receiving corrective lenses improved classroom attention, task persistence, and willingness to practice academic skills; and how serving students in school rather than in clinics increased both access to and use of corrective lenses. Conclusions for Practice Corrective lenses may positively impact families, teachers, and students coping with visual deficits by improving school function and psychosocial wellbeing. Practices that increase ownership and use of glasses, such as serving students in school, may significantly improve both child health and academic performance. PMID:26649878
An algorithm to track laboratory zebrafish shoals.
Feijó, Gregory de Oliveira; Sangalli, Vicenzo Abichequer; da Silva, Isaac Newton Lima; Pinho, Márcio Sarroglia
2018-05-01
In this paper, a semi-automatic multi-object tracking method to track a group of unmarked zebrafish is proposed. This method can handle partial occlusion cases, maintaining the correct identity of each individual. For every object, we extracted a set of geometric features to be used in the two main stages of the algorithm. The first stage selected the best candidate, based both on the blobs identified in the image and the estimate generated by a Kalman Filter instance. In the second stage, if the same candidate-blob is selected by two or more instances, a blob-partitioning algorithm takes place in order to split this blob and reestablish the instances' identities. If the algorithm cannot determine the identity of a blob, a manual intervention is required. This procedure was compared against a manually labeled ground truth on four video sequences with different numbers of fish and spatial resolution. The performance of the proposed method is then compared against two well-known zebrafish tracking methods found in the literature: one that treats occlusion scenarios and one that only tracks fish that are not in occlusion. Based on the data set used, the proposed method outperforms the first method in correctly separating fish in occlusion, doing so in at least 8.15% more of the cases. As for the second, the proposed method outperformed it overall in some of the tested videos, especially those with lower image quality, because the second method requires high-spatial-resolution images, which is not a requirement for the proposed method. Yet, the proposed method was able to separate fish involved in occlusion and correctly assign their identity in up to 87.85% of the cases, without accounting for user intervention. Copyright © 2018 Elsevier Ltd. All rights reserved.
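A minimal sketch of the stage-1 mechanism described above, assuming a constant-velocity Kalman filter per fish and nearest-blob candidate selection; the geometric blob features and the blob-partitioning step of the actual algorithm are omitted, and all values are toy data.

```python
import numpy as np

class FishTrack:
    """Constant-velocity Kalman filter for one fish centroid (2D); a
    stand-in for the per-fish filter instances in the paper."""
    def __init__(self, xy):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])   # [px, py, vx, vy]
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.eye(2, 4)
        self.Q, self.R = np.eye(4) * 0.1, np.eye(2) * 1.0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

def assign_blobs(tracks, blobs):
    """Stage-1 candidate selection: each track claims its nearest blob.
    If two tracks claim the same blob, the paper splits that blob; here
    we only report the conflict."""
    claims = {}
    for i, tr in enumerate(tracks):
        pred = tr.predict()
        j = int(np.argmin([np.linalg.norm(pred - b) for b in blobs]))
        claims.setdefault(j, []).append(i)
    return claims  # blob index -> list of claiming track indices

tracks = [FishTrack((10, 10)), FishTrack((40, 12))]
blobs = [np.array([11.0, 10.5]), np.array([41.0, 12.5])]
print(assign_blobs(tracks, blobs))  # {0: [0], 1: [1]} -> no occlusion conflict
```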
Ucchesu, Mariano; Orrù, Martino; Grillo, Oscar; Venora, Gianfranco; Paglietti, Giacomo; Ardu, Andrea; Bacchetta, Gianluigi
2016-01-01
The identification of archaeological charred grape seeds is a difficult task due to the alteration of the morphological shape of the seeds. In archaeobotanical studies, the correct discrimination between Vitis vinifera subsp. sylvestris and Vitis vinifera subsp. vinifera grape seeds is very important for understanding the history and origin of the domesticated grapevine. In this work, different carbonisation experiments were carried out using a hearth to reproduce the same burning conditions that occurred in archaeological contexts. In addition, several carbonisation trials on modern wild and cultivated grape seeds were performed using a muffle furnace. For comparison with archaeological materials, modern grape seed samples were obtained using seven different temperatures of carbonisation ranging between 180 and 340°C for 120 min. Analysing the grape seed size and shape by computer vision techniques, and applying the stepwise linear discriminant analysis (LDA) method, discrimination of the wild from the cultivated charred grape seeds was possible. An overall correct classification of 93.3% was achieved. Applying the same statistical procedure to compare modern charred with archaeological grape seeds, found in Sardinia and dating back to the Early Bronze Age (2017–1751 2σ cal. BC), allowed 75.0% of the cases to be identified as wild grape. The proposed method proved to be a useful and effective procedure in identifying, with high accuracy, the charred grape seeds found in archaeological sites. Moreover, it may be considered valid support for advances in the knowledge and comprehension of viticulture adoption and the grape domestication process. The same methodology may also be successful when applied to other plant remains, and provide important information about the history of domesticated plants. PMID:26901361
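The discrimination step can be sketched with an off-the-shelf LDA classifier; note that scikit-learn's LinearDiscriminantAnalysis is not stepwise, so the feature-selection part of the published procedure is omitted, and the morphometric descriptors below are invented stand-ins for the measured seed traits.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Illustrative stand-in for the seed morphometry table: rows are seeds,
# columns are size/shape descriptors from image analysis (values invented).
rng = np.random.default_rng(1)
wild = rng.normal([5.0, 3.0, 0.62], 0.15, size=(40, 3))        # smaller, rounder
cultivated = rng.normal([6.2, 3.2, 0.48], 0.15, size=(40, 3))  # more elongated
X = np.vstack([wild, cultivated])
y = np.array(["sylvestris"] * 40 + ["vinifera"] * 40)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print(f"training accuracy: {lda.score(X, y):.1%}")
# New charred seeds would be classified with lda.predict(features).
```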
Yeast species associated with orange juice: evaluation of different identification methods.
Arias, Covadonga R; Burns, Jacqueline K; Friedrich, Lorrie M; Goodrich, Renee M; Parish, Mickey E
2002-04-01
Five different methods were used to identify yeast isolates from a variety of citrus juice sources. A total of 99 strains, including reference strains, were identified using a partial sequence of the 26S rRNA gene, restriction pattern analysis of the internal transcribed spacer region (5.8S-ITS), classical methodology, the RapID Yeast Plus system, and API 20C AUX. Twenty-three different species were identified representing 11 different genera. Distribution of the species was considerably different depending on the type of sample. Fourteen different species were identified from pasteurized single-strength orange juice that had been contaminated after pasteurization (PSOJ), while only six species were isolated from fresh-squeezed, unpasteurized orange juice (FSOJ). Among PSOJ isolates, Candida intermedia and Candida parapsilosis were the predominant species. Hanseniaspora occidentalis and Hanseniaspora uvarum represented up to 73% of total FSOJ isolates. Partial sequence of the 26S rRNA gene yielded the best results in terms of correct identification, followed by classical techniques and 5.8S-ITS analysis. The commercial identification kits RapID Yeast Plus system and API 20C AUX were able to correctly identify only 35 and 13% of the isolates, respectively. Six new 5.8S-ITS profiles were described, corresponding to Clavispora lusitaniae, Geotrichum citri-aurantii, H. occidentalis, H. vineae, Pichia fermentans, and Saccharomycopsis crataegensis. With the addition of these new profiles to the existing database, the use of 5.8S-ITS sequence became the best tool for rapid and accurate identification of yeast isolates from orange juice.
Accurate and fast multiple-testing correction in eQTL studies.
Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I W; Raychaudhuri, Soumya; Ophoff, Roel A; Stranger, Barbara E; Eskin, Eleazar; Han, Buhm
2015-06-04
In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the main entities in biological processes. To detect eGenes, one typically focuses on the genetic variant with the minimum p value among all variants in cis with a gene and corrects for multiple testing to obtain a gene-level p value. For performing multiple-testing correction, a permutation test is widely used. Because of growing sample sizes of eQTL studies, however, the permutation test has become a computational bottleneck in eQTL studies. In this paper, we propose an efficient approach for correcting for multiple testing and assess eGene p values by utilizing a multivariate normal distribution. Our approach properly takes into account the linkage-disequilibrium structure among variants, and its time complexity is independent of sample size. By applying our small-sample correction techniques, our method achieves high accuracy in both small and large studies. We have shown that our method consistently produces extremely accurate p values (accuracy > 98%) for three human eQTL datasets with different sample sizes and SNP densities: the Genotype-Tissue Expression pilot dataset, the multi-region brain dataset, and the HapMap 3 dataset. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
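The core idea, replacing permutations with a multivariate normal null over the cis-variants, can be sketched as follows. The published method evaluates this null analytically with small-sample corrections; the sketch instead samples from the MVN, and the LD matrix is invented.

```python
import numpy as np
from scipy.stats import norm

def egene_pvalue(min_p_observed, ld_corr, n_draws=100_000, seed=0):
    """Gene-level p value for the best cis-variant, correcting for multiple
    testing under a multivariate normal null whose covariance is the LD
    (correlation) matrix of the variants."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(ld_corr)), ld_corr, size=n_draws)
    # Two-sided p value of the strongest association in each null draw.
    min_p_null = 2.0 * norm.sf(np.abs(z).max(axis=1))
    return float(np.mean(min_p_null <= min_p_observed))

# Three cis-variants in strong LD act like fewer independent tests, so the
# corrected p is smaller than the Bonferroni value 3 * min_p.
ld = np.array([[1.0, 0.9, 0.8],
               [0.9, 1.0, 0.9],
               [0.8, 0.9, 1.0]])
print(egene_pvalue(1e-3, ld))
```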
DOE Office of Scientific and Technical Information (OSTI.GOV)
NNSA /NV
This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 410 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 410 is located on the Tonopah Test Range (TTR), which is included in the Nevada Test and Training Range (formerly the Nellis Air Force Range) approximately 140 miles northwest of Las Vegas, Nevada. This CAU is comprised of five Corrective Action Sites (CASs): TA-19-002-TAB2, Debris Mound; TA-21-003-TANL, Disposal Trench; TA-21-002-TAAL, Disposal Trench; 09-21-001-TA09, Disposal Trenches; 03-19-001, Waste Disposal Site. This CAU is being investigated because contaminants may be present in concentrations that could potentially pose a threat to human health and/or the environment, and waste may have been disposed of without appropriate controls. Four out of five of these CASs are the result of weapons testing and disposal activities at the TTR, and they are grouped together for site closure based on the similarity of the sites (waste disposal sites and trenches). The fifth CAS, CAS 03-19-001, is a hydrocarbon spill related to activities in the area. This site is grouped with this CAU because of the location (TTR). Based on historical documentation and process knowledge, vertical and lateral migration routes are possible for all CASs. Migration of contaminants may have occurred through transport by infiltration of precipitation through surface soil, which serves as a driving force for downward migration of contaminants. Land-use scenarios limit future use of these CASs to industrial activities. The suspected contaminants of potential concern which have been identified are volatile organic compounds; semivolatile organic compounds; high explosives; radiological constituents including depleted uranium; beryllium; total petroleum hydrocarbons; and total Resource Conservation and Recovery Act metals. Field activities will consist of geophysical and radiological surveys, and collecting soil samples at biased locations by appropriate methods. A two-step data quality objective strategy will be followed: (1) define the nature of contamination at each CAS location by identifying any contamination above preliminary action levels (PALs); and (2) determine the extent of contamination identified above PALs. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document.
Artificial intelligence techniques for automatic screening of amblyogenic factors.
Van Eenwyk, Jonathan; Agah, Arvin; Giangiacomo, Joseph; Cibis, Gerhard
2008-01-01
To develop a low-cost automated video system to effectively screen children aged 6 months to 6 years for amblyogenic factors. In 1994 one of the authors (G.C.) described video vision development assessment, a digitizable analog video-based system combining Brückner pupil red reflex imaging and eccentric photorefraction to screen young children for amblyogenic factors. The images were analyzed manually with this system. We automated the capture of digital video frames and pupil images and applied computer vision and artificial intelligence to analyze and interpret results. The artificial intelligence systems were evaluated by a tenfold testing method. The best system was the decision tree learning approach, which had an accuracy of 77%, compared to the "gold standard" specialist examination with a "refer/do not refer" decision. Criteria for referral were strabismus, including microtropia, and refractive errors and anisometropia considered to be amblyogenic. Eighty-two percent of strabismic individuals were correctly identified. High refractive errors were also correctly identified and referred 90% of the time, as well as significant anisometropia. The program was less correct in identifying more moderate refractive errors, below +5 and less than -7. Although we are pursuing a variety of avenues to improve the accuracy of the automated analysis, the program in its present form provides acceptable cost benefits for detecting ambylogenic factors in children aged 6 months to 6 years.
Diagnosis and Repair of Random Noise in the SENSOR'S Chris-Proba
NASA Astrophysics Data System (ADS)
Mobasheri, M. R.; Zendehbad, S. A.
2013-09-01
The CHRIS sensor on the PROBA-1 satellite has imaged the Earth in push-broom mode since 2001, with 18 m spatial resolution and 18 spectral bands (1.25-11 nm bandwidth). After 13 years of operation, factors including solar radiation and the magnetic fields of the Earth and Sun have driven the response function of the detector out of calibration, and some CCDs have failed. As a result, image information in some bands has been lost or rendered invalid, and dark streaks or light bands appear at varying locations in some images and must be identified and corrected. In this paper, all types of noise likely to affect CHRIS data during recording and transmission are identified, calculated, and formulated, and a correction method is presented, making use of the in-flight and on-ground measurement parameters. Noise in the images is divided into horizontal and vertical components. Because the random noise arises in different bands and at different locations, those images in which noise is observed are used. Techniques are presented to identify and correct the dark or pale stripes in the images. Finally, the noisy images were compared before and after correction, demonstrating effective algorithms to detect and correct these errors.
Ji, Xiaohong; Liu, Peng; Sun, Zhenqi; Su, Xiaohui; Wang, Wei; Gao, Yanhui; Sun, Dianjun
2016-01-01
Objective To determine the effect of statistical correction for intra-individual variation on estimated urinary iodine concentration (UIC) by sampling on 3 consecutive days in four seasons in children. Setting School-aged children from urban and rural primary schools in Harbin, Heilongjiang, China. Participants 748 and 640 children aged 8–11 years were recruited from urban and rural schools, respectively, in Harbin. Primary and secondary outcome measures The spot urine samples were collected once a day for 3 consecutive days in each season over 1 year. The UIC of the first day was corrected by two statistical correction methods: the average correction method (average of days 1, 2; average of days 1, 2 and 3) and the variance correction method (UIC of day 1 corrected by two replicates and by three replicates). The variance correction method determined the SD between subjects (Sb) and within subjects (Sw), and calculated the correction coefficient (Fi), Fi=Sb/(Sb+Sw/di), where di was the number of observations; the UIC of day 1 was then corrected using this coefficient. Results The variance correction methods showed the overall Fi was 0.742 for 2 days' correction and 0.829 for 3 days' correction; the values for the seasons spring, summer, autumn and winter were 0.730, 0.684, 0.706 and 0.703 for 2 days' correction and 0.809, 0.742, 0.796 and 0.804 for 3 days' correction, respectively. After removal of the individual effect, the correlation coefficient between consecutive days was 0.224, and between non-consecutive days 0.050. Conclusions The variance correction method is effective for correcting intra-individual variation in estimated UIC following sampling on 3 consecutive days in four seasons in children. The method varies little between ages, sexes and urban or rural setting, but does vary between seasons. PMID:26920442
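The correction equation itself did not survive extraction. A standard regression-to-the-mean form consistent with the abstract's definition of Fi is UIC_corrected = UIC_mean + Fi × (UIC_day1 − UIC_mean); the sketch below uses that reconstructed form with invented Sb and Sw values, so treat it as an interpretation rather than a quotation of the paper.

```python
def corrected_uic(day1, mean_uic, fi):
    """Shrink a single-day urinary iodine value toward the group mean.

    Reconstructed shrinkage form implied by the abstract's definition of Fi;
    the exact published equation was lost in extraction.
    """
    return mean_uic + fi * (day1 - mean_uic)

# Fi from the abstract's definition, with illustrative Sb/Sw values.
sb, sw, di = 80.0, 60.0, 2          # between/within-subject SDs, replicates
fi = sb / (sb + sw / di)            # 0.727, near the reported overall 0.742
print(corrected_uic(day1=300.0, mean_uic=200.0, fi=fi))  # pulled toward 200
```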
Anonymizing patient genomic data for public sharing association studies.
Fernandez-Lozano, Carlos; Lopez-Campos, Guillermo; Seoane, Jose A; Lopez-Alonso, Victoria; Dorado, Julian; Martín-Sanchez, Fernando; Pazos, Alejandro
2013-01-01
The development of personalized medicine is tightly linked to the correct exploitation of molecular data, especially data associated with the genome sequence, and alongside this use of genomic data there is an increasing demand to share these data for research purposes. The transition of clinical data to research rests on anonymization, so that the patient cannot be identified; genomic data pose a particular challenge because they are by nature identifying. In this work we analyze current methods for genome anonymization and propose a one-way encryption method that may enable genomic data sharing while granting access only to certain regions of the genome for research purposes.
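A minimal sketch of the one-way idea: a keyed cryptographic hash is irreversible yet deterministic, so records stay linkable without exposing the identifier. The key, the identifier format, and the restriction to selected genomic regions are assumptions here; the authors' actual scheme is more elaborate.

```python
import hashlib
import hmac

def anonymize_id(patient_id: str, secret_key: bytes) -> str:
    """One-way keyed hash of a patient identifier.

    The output cannot be reversed to recover the identifier, but the same
    patient always maps to the same token, so research records can still
    be linked across datasets.
    """
    return hmac.new(secret_key, patient_id.encode(), hashlib.sha256).hexdigest()

key = b"site-held secret, never shared with researchers"  # hypothetical
print(anonymize_id("patient-0042", key)[:16])
```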
Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.
Kangasmaa, Tuija S; Sohlberg, Antti O
2014-07-01
Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementational details. Two slightly different implementations of reconstruction-reprojection-based motion correction techniques were optimised for effective, good-quality motion correction and then compared with each other. The first of these methods (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where the motion correction is done in projection space, whereas the second algorithm (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) that was used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupt projection datasets, which were generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small. The execution time for Method 2 was much longer than for Method 1, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets for both correction methods. Three iterations were sufficient for a good quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
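A sketch of the winning cost function in this comparison: a histogram-based mutual information estimate scoring candidate shifts of a measured projection against its reprojection. This is a generic estimator and an exhaustive 1D shift search, not the authors' implementation; the bin count and test images are arbitrary.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of the mutual information between two images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def best_shift(measured, reprojected, shifts):
    """Score candidate motion shifts of a measured projection against the
    reprojection of the current reconstruction; keep the best-scoring one."""
    scores = [mutual_information(np.roll(measured, s, axis=0), reprojected)
              for s in shifts]
    return shifts[int(np.argmax(scores))]

rng = np.random.default_rng(2)
reproj = rng.random((64, 64))
meas = np.roll(reproj, 3, axis=0) + 0.05 * rng.standard_normal((64, 64))
print(best_shift(meas, reproj, shifts=range(-5, 6)))  # recovers -3
```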
Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo
2005-10-01
An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I(μb)AC with Chang's attenuation correction factor. The scatter component image is estimated by convolving I(μb)AC with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
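In the spirit of the IBSC pipeline (blur the attenuation-corrected image with a scatter function, scale by a scatter fraction, subtract), a simplified sketch follows; the Gaussian kernel and the scalar scatter fraction are simplifying assumptions standing in for the calibrated scatter function and the image-based scatter fraction function of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ibsc_correct(img_ac, scatter_sigma, scatter_fraction):
    """Image-based scatter correction sketch: estimate the scatter
    component by blurring the attenuation-corrected image with a scatter
    kernel, weight it by a scatter fraction, and subtract."""
    scatter_estimate = scatter_fraction * gaussian_filter(img_ac, scatter_sigma)
    return img_ac - scatter_estimate

rng = np.random.default_rng(3)
brain_slice = rng.random((128, 128))                 # toy reconstructed slice
corrected = ibsc_correct(brain_slice, scatter_sigma=8.0, scatter_fraction=0.3)
print(corrected.shape, float(corrected.mean()) < float(brain_slice.mean()))
```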
New decoding methods of interleaved burst error-correcting codes
NASA Astrophysics Data System (ADS)
Nakano, Y.; Kasahara, M.; Namekawa, T.
1983-04-01
A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high burst-error-correction capability with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of the subcodes, which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
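The reason interleaving helps with bursts can be shown in a few lines: reading m codewords out column-wise spreads a burst of b consecutive channel errors over the subcodes, leaving at most ceil(b/m) erasures in any one of them. The sketch below only demonstrates this spreading; the syndrome-correlation decoding itself is not reproduced.

```python
def interleave(codewords):
    """Column-wise read-out of m codewords: a burst of b consecutive channel
    errors touches at most ceil(b/m) symbols of any one codeword."""
    m, n = len(codewords), len(codewords[0])
    return [codewords[i % m][i // m] for i in range(m * n)]

def deinterleave(stream, m):
    n = len(stream) // m
    return [[stream[j * m + i] for j in range(n)] for i in range(m)]

cw = [[f"a{i}" for i in range(6)],
      [f"b{i}" for i in range(6)],
      [f"c{i}" for i in range(6)]]
tx = interleave(cw)
for k in range(5, 9):      # burst hitting 4 consecutive channel symbols
    tx[k] = "?"
print(deinterleave(tx, 3))  # each codeword suffers at most 2 erasures
```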
Exposed and embedded corrections in aphasia therapy: issues of voice and identity.
Simmons-Mackie, Nina; Damico, Jack S
2008-01-01
Because communication after the onset of aphasia can be fraught with errors, therapist corrections are pervasive in therapy for aphasia. Although corrections are designed to improve the accuracy of communication, some corrections can have social and emotional consequences during interactions. That is, exposure of errors can potentially silence the 'voice' of a speaker by orienting to an utterance as unacceptable. Although corrections can marginalize speakers with aphasia, the practice has not been widely investigated. A qualitative study of corrections during aphasia therapy was undertaken to describe corrections in therapy, identify patterns of occurrence, and develop hypotheses regarding the potential effects of corrections. Videotapes of six individual and five group aphasia therapy sessions were analysed. Sequences consistent with a definition of a therapist 'correction' were identified. Corrections were defined as instances when the therapist offered a 'fix' for a perceived error in the client's talk even though the intent was apparent. Two categories of correction were identified and were consistent with Jefferson's (1987) descriptions of exposed and embedded corrections. Exposed corrections involved explicit correcting by the therapist, while embedded corrections occurred implicitly within the ongoing talk. Patterns of occurrence appeared consistent with philosophical orientations of therapy sessions. Exposed corrections were more prevalent in sessions focusing on repairing deficits, while embedded corrections were prevalent in sessions focusing on natural communication events (e.g. conversation). In addition, exposed corrections were sometimes used when client offerings were plausible or appropriate, but were inconsistent with therapist expectations. The observation that some instances of exposed corrections effectively silenced the voice or self-expression of the person with aphasia has significant implications for outcomes from aphasia therapy. By focusing on accurate productions versus communicative intents, therapy runs the risk of reducing self-esteem and communicative confidence, as well as reinforcing a sense of 'helplessness' and disempowerment among people with aphasia. The results suggest that clinicians should carefully calibrate the use of exposed and embedded corrections to balance linguistic and psychosocial goals.
Correcting for multiple-testing in multi-arm trials: is it necessary and is it done?
Wason, James M S; Stecher, Lynne; Mander, Adrian P
2014-09-17
Multi-arm trials enable the evaluation of multiple treatments within a single trial. They provide a way of substantially increasing the efficiency of the clinical development process. However, since multi-arm trials test multiple hypotheses, some regulators require that a statistical correction be made to control the chance of making a type-1 error (false-positive). Several conflicting viewpoints are expressed in the literature regarding the circumstances in which a multiple-testing correction should be used. In this article we discuss these conflicting viewpoints and review the frequency with which correction methods are currently used in practice. We identified all multi-arm clinical trials published in 2012 by four major medical journals. Summary data on several aspects of the trial design were extracted, including whether the trial was exploratory or confirmatory, whether a multiple-testing correction was applied and, if one was used, what type it was. We found that almost half (49%) of published multi-arm trials report using a multiple-testing correction. The percentage that corrected was higher for trials in which the experimental arms included multiple doses or regimens of the same treatments (67%). The percentage that corrected was higher in exploratory than confirmatory trials, although this is explained by a greater proportion of exploratory trials testing multiple doses and regimens of the same treatment. A sizeable proportion of published multi-arm trials do not correct for multiple-testing. Clearer guidance about whether multiple-testing correction is needed for multi-arm trials that test separate treatments against a common control group is required.
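For concreteness, the simplest correction a multi-arm trial might report is Bonferroni; the sketch below shows that adjustment for three experimental arms against a shared control (the p values are invented). The paper surveys whether any such correction is applied, not this method specifically.

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni multiple-testing correction: each of k comparisons
    against control is tested at alpha / k."""
    k = len(p_values)
    return [p <= alpha / k for p in p_values]

# Three experimental arms vs one control arm: only the first survives.
print(bonferroni([0.012, 0.030, 0.240]))  # [True, False, False]
```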
Methods for the correction of vascular artifacts in PET O-15 water brain-mapping studies
NASA Astrophysics Data System (ADS)
Chen, Kewei; Reiman, E. M.; Lawson, M.; Yun, Lang-sheng; Bandy, D.; Palant, A.
1996-12-01
While positron emission tomographic (PET) measurements of regional cerebral blood flow (rCBF) can be used to map brain regions that are involved in normal and pathological human behaviors, measurements in the anteromedial temporal lobe can be confounded by the combined effects of radiotracer activity in neighboring arteries and partial-volume averaging. The authors now describe two simple methods to address this vascular artifact. One method utilizes the early frames of a dynamic PET study, while the other method utilizes a coregistered magnetic resonance image (MRI) to characterize the vascular region of interest (VROI). Both methods subsequently assign a common value to each pixel in the VROI for the control (baseline) scan and the activation scan. To study the vascular artifact and to demonstrate the ability of the proposed methods correcting the vascular artifact, four dynamic PET scans were performed in a single subject during the same behavioral state. For each of the four scans, a vascular scan containing vascular activity was computed as the summation of the images acquired 0-60 s after radiotracer administration, and a control scan containing minimal vascular activity was computed as the summation of the images acquired 20-80 s after radiotracer administration. t-score maps calculated from the four pairs of vascular and control scans were used to characterize regional blood flow differences related to vascular activity before and after the application of each vascular artifact correction method. Both methods eliminated the observed differences in vascular activity, as well as the vascular artifact observed in the anteromedial temporal lobes. Using PET data from a study of normal human emotion, these methods permitted the authors to identify rCBF increases in the anteromedial temporal lobe free from the potentially confounding, combined effects of vascular activity and partial-volume averaging.
NASA Astrophysics Data System (ADS)
Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.
2015-06-01
Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for a regional impact study of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. Precipitation correction methods applied include linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. The results show (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased from observed meteorological data, and its use for streamflow simulations results in large biases from observed streamflow, and all bias correction methods effectively improved these simulations; (3) for precipitation, PT and QM methods performed equally best in correcting the frequency-based indices (e.g., standard deviation, percentile values) while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all correction methods performed equally well in correcting raw temperature; and (5) for simulated streamflow, precipitation correction methods have more significant influence than temperature correction methods and the performances of streamflow simulations are consistent with those of corrected precipitation; i.e., the PT and QM methods performed equally best in correcting flow duration curve and peak flow while the LOCI method performed best in terms of the time-series-based indices. The case study is for an arid area in China based on a specific RCM and hydrologic model, but the methodology and some results can be applied to other areas and models.
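Two of the five precipitation correction methods compared above, linear scaling and (empirical) quantile mapping, reduce to a few lines each; the sketch below uses synthetic gamma-distributed "observations" and a deliberately biased "RCM", and omits refinements such as the wet-day frequency handling of LOCI.

```python
import numpy as np

def linear_scaling(rcm, rcm_ref, obs_ref):
    """LS for precipitation: scale so the reference-period mean matches."""
    return rcm * (obs_ref.mean() / rcm_ref.mean())

def quantile_mapping(rcm, rcm_ref, obs_ref):
    """QM: map each simulated value through the empirical quantiles of the
    reference simulation onto the observed distribution."""
    quantiles = np.linspace(0, 1, 101)
    rcm_q = np.quantile(rcm_ref, quantiles)
    obs_q = np.quantile(obs_ref, quantiles)
    return np.interp(rcm, rcm_q, obs_q)

# Synthetic example: the "RCM" is too wet and too variable.
rng = np.random.default_rng(4)
obs_ref = rng.gamma(2.0, 2.0, 5000)        # observed reference precipitation
rcm_ref = 1.5 * rng.gamma(2.0, 2.5, 5000)  # biased simulation, same period
rcm_fut = 1.5 * rng.gamma(2.2, 2.5, 5000)  # biased future simulation

ls = linear_scaling(rcm_fut, rcm_ref, obs_ref)
qm = quantile_mapping(rcm_fut, rcm_ref, obs_ref)
print(f"raw mean {rcm_fut.mean():.2f}  LS {ls.mean():.2f}  "
      f"QM {qm.mean():.2f}  obs-ref mean {obs_ref.mean():.2f}")
```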
Method and apparatus for reliable inter-antenna baseline determination
NASA Technical Reports Server (NTRS)
Wilson, John M. (Inventor)
2001-01-01
Disclosed is a method for inter-antenna baseline determination that uses an antenna configuration comprising a pair of relatively closely spaced antennas and other pairs of distant antennas. The closely spaced pair provides a short baseline having an integer ambiguity that may be searched exhaustively to identify the correct set of integers. This baseline is then used as a priori information to aid the determination of longer baselines that, once determined, may be used for accurate run time attitude determination.
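A toy, single-direction illustration of why a short baseline makes the exhaustive integer search tractable: with the antennas closer than a couple of carrier wavelengths, only a handful of integer cycle counts are geometrically possible. The spacing, phase value and candidate set are invented, and this is not the patented procedure.

```python
import numpy as np

def search_short_baseline(phase_obs, wavelength, candidates):
    """Exhaustive integer-ambiguity search for a short inter-antenna baseline.

    phase_obs: carrier-phase fraction (cycles) along one satellite direction;
    candidates: integer cycle counts to try.
    """
    best, best_err = None, np.inf
    for n in candidates:
        baseline_len = (n + phase_obs) * wavelength
        err = abs(baseline_len - 0.30)   # assumed ~0.30 m antenna spacing
        if err < best_err:
            best, best_err = n, err
    return best

# GPS L1 wavelength ~0.19 m; a 0.30 m spacing admits only n in {0, 1, 2}.
print(search_short_baseline(phase_obs=0.58, wavelength=0.1903,
                            candidates=range(0, 3)))  # -> 1
```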
A method for evaluating the relation between sound source segregation and masking
Lutfi, Robert A.; Liu, Ching-Ju
2011-01-01
Sound source segregation refers to the ability to hear as separate entities two or more sound sources comprising a mixture. Masking refers to the ability of one sound to make another sound difficult to hear. Often in studies, masking is assumed to result from a failure of segregation, but this assumption may not always be correct. Here a method is offered to identify the relation between masking and sound source segregation in studies and an example is given of its application. PMID:21302979
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.
Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered as a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiations, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with breast tissue equivalent phantom and calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: image without scatter correction, image with scatter correction using the pinhole-array interpolation method, and image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of the background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of background DE calcification signals using scatter-uncorrected data was reduced by 58% with scatter-corrected data by the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method in scatter correction for DEDM; moreover, this method is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated on a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge effectiveness.
Runner's knowledge of their foot type: do they really know?
Hohmann, Erik; Reaburn, Peter; Imhoff, Andreas
2012-09-01
The use of correct individually selected running shoes may reduce the incidence of running injuries. However, the runner needs to be aware of their foot anatomy to ensure the "correct" footwear is chosen. The purpose of this study was to compare the individual runner's knowledge of their arch type to the arch index derived from a static footprint. We examined 92 recreational runners with a mean age of 35.4±11.4 (12-63) years. A questionnaire was used to investigate the knowledge of the runners about arch height and overpronation. A clinical examination was undertaken using defined criteria and the arch index was analysed using weight-bearing footprints. Forty-five runners (49%) identified their foot arch correctly. Eighteen of the 41 flat-arched runners (44%) identified their arch correctly. Twenty-four of the 48 normal-arched athletes (50%) identified their arch correctly. Three subjects with a high arch identified their arch correctly. Thirty-eight runners assessed themselves as overpronators; only four (11%) of these athletes were positively identified. Of the 34 athletes who did not categorize themselves as overpronators, four runners (12%) had clinical overpronation. The findings of this research suggest that runners possess poor knowledge of both their foot arch and dynamic pronation. Copyright © 2012 Elsevier Ltd. All rights reserved.
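The study compares self-report against an arch classification from a static footprint; the abstract does not give the exact index used, so the sketch below assumes the common Cavanagh-Rodgers definition (midfoot contact area over total contact area, toes excluded) with its usual cutoffs, applied to a toy binary footprint.

```python
import numpy as np

def arch_index(footprint):
    """Cavanagh-Rodgers arch index from a binary footprint image (toes
    excluded): contact area of the middle third of the foot length divided
    by the total contact area. Common cutoffs: AI <= 0.21 high arch,
    AI >= 0.26 flat. This is an assumed, standard choice of index."""
    rows = np.where(footprint.any(axis=1))[0]
    thirds = np.array_split(footprint[rows[0]:rows[-1] + 1], 3, axis=0)
    areas = [t.sum() for t in thirds]
    return areas[1] / sum(areas)

# Toy footprint: wide heel and forefoot, narrower midfoot -> normal arch.
fp = np.zeros((30, 12), dtype=int)
fp[0:10, 2:10] = 1    # forefoot
fp[10:20, 4:8] = 1    # midfoot
fp[20:30, 3:9] = 1    # heel
print(f"arch index = {arch_index(fp):.2f}")  # 0.22, between the cutoffs
```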
Power System Transient Stability Based on Data Mining Theory
NASA Astrophysics Data System (ADS)
Cui, Zhen; Shi, Jia; Wu, Runsheng; Lu, Dan; Cui, Mingde
2018-01-01
In order to study the stability of power systems, a transient stability assessment method based on data mining theory is designed. By introducing association rule analysis from data mining theory, an association classification method for transient stability assessment is presented, and a mathematical model of transient stability assessment based on data mining technology is established. Combining rule reasoning with classification prediction, the association classification method performs the transient stability assessment. A transient stability index is used to identify the samples that cannot be correctly classified by association classification; then, according to the critical stability of each sample, the time-domain simulation method is used to determine their state, ensuring the accuracy of the final results. The results show that this stability assessment system can improve computation speed while keeping the analysis results completely correct, and that the improved algorithm can uncover the inherent relation between changes in power system operation mode and changes in the degree of transient stability.
NASA Astrophysics Data System (ADS)
Medjoubi, K.; Dawiec, A.
2017-12-01
A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of hybrid photon counting (HPC) pixel detectors. This approach is based on the photon transfer curve (PTC), i.e. the measurement of the standard deviation of the signal in flat-field images. Fixed pattern noise (FPN), easily identifiable in the curve, is linked to the residual threshold dispersion, sensor inhomogeneity and the remnant errors of flat-fielding techniques. The analytical expression of the signal-to-noise ratio curve is developed for HPC detectors and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The quantitative evaluation of the FPN, described by the photon response non-uniformity (PRNU), is measured for different configurations (threshold adjustment method and flat-fielding technique) and is used to determine the settings that give the best image quality from a commercial or R&D detector.
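A simplified sketch of extracting the fixed-pattern term from flat-field data: temporally averaging a stack of flats suppresses shot noise, and the residual spatial spread relative to the mean signal is the PRNU. The full PTC analysis in the paper fits noise versus signal across exposure levels; here a single level with an imposed 2% gain spread stands in for that.

```python
import numpy as np

def fixed_pattern_stats(flat_stack):
    """Mean signal and spatial standard deviation of the temporally averaged
    flat field; averaging over frames suppresses shot noise, so the spread
    that remains is dominated by fixed pattern noise."""
    mean_frame = flat_stack.mean(axis=0)
    return float(mean_frame.mean()), float(mean_frame.std())

def prnu(flat_stack):
    """Photon response non-uniformity: residual fixed-pattern spread
    relative to the mean signal."""
    mu, sigma_fpn = fixed_pattern_stats(flat_stack)
    return sigma_fpn / mu

# Toy counting detector: Poisson photons plus 2% pixel-to-pixel gain spread.
rng = np.random.default_rng(5)
gain = 1.0 + 0.02 * rng.standard_normal((64, 64))   # fixed pattern
frames = rng.poisson(1000.0 * gain, size=(50, 64, 64)).astype(float)
print(f"PRNU ~ {prnu(frames):.2%}")   # close to the imposed 2%
```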
Aligning observed and modelled behaviour based on workflow decomposition
NASA Astrophysics Data System (ADS)
Wang, Lu; Du, YuYue; Liu, Wei
2017-09-01
When business processes are mostly supported by information systems, the availability of event logs generated from these systems, as well as the requirement for appropriate process models, is increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion in the volume of event logs. Therefore, a new process mining technique is proposed in this paper based on a workflow decomposition method. Petri nets (PNs) are used to describe business processes, and conformance checking of event logs and process models is then investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method in PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.
The location and recognition of anti-counterfeiting code image with complex background
NASA Astrophysics Data System (ADS)
Ni, Jing; Liu, Quan; Lou, Ping; Han, Ping
2017-07-01
The order of the cigarette market is a key issue in the tobacco business system. The anti-counterfeiting code, as a kind of effective anti-counterfeiting technology, can identify counterfeit goods and effectively maintain the normal order of the market and consumers' rights and interests. There are complex backgrounds, light interference and other problems in the anti-counterfeiting code images obtained by the tobacco recognizer. To solve these problems, the paper proposes a locating method based on the SUSAN operator combined with a sliding window and line scanning. In order to reduce the interference of background and noise, we extract the red component of the image and convert the color image into a gray image. For confusable characters, recognition-result correction based on template matching is adopted to improve the recognition rate. With this method, the anti-counterfeiting code can be located and recognized correctly in images with complex backgrounds. The experimental results show the effectiveness and feasibility of the approach.
Olayan, Rawan S; Ashoor, Haitham; Bajic, Vladimir B
2018-04-01
Computationally predicting drug-target interactions (DTIs) is a convenient strategy to identify new DTIs at low cost with reasonable accuracy. However, current DTI prediction methods suffer from high false-positive prediction rates. We developed DDR, a novel method that improves DTI prediction accuracy. DDR is based on the use of a heterogeneous graph that contains known DTIs with multiple similarities between drugs and multiple similarities between target proteins. DDR applies a non-linear similarity fusion method to combine different similarities. Before fusion, DDR performs a pre-processing step in which a subset of similarities is selected in a heuristic process to obtain an optimized combination of similarities. Then, DDR applies a random forest model using different graph-based features extracted from the DTI heterogeneous graph. Using 5 repeats of 10-fold cross-validation, three testing setups, and the weighted average of area under the precision-recall curve (AUPR) scores, we show that DDR significantly reduces the AUPR score error relative to the next best state-of-the-art method for predicting DTIs by 34% when the drugs are new, by 23% when the targets are new, and by 34% when the drugs and the targets are known but not all DTIs between them are known. Using independent sources of evidence, we verify as correct 22 out of the top 25 DDR novel predictions. This suggests that DDR can be used as an efficient method to identify correct DTIs. The data and code are provided at https://bitbucket.org/RSO24/ddr/. vladimir.bajic@kaust.edu.sa. Supplementary data are available at Bioinformatics online.
Rosenthal, Mark; Ugele, Bernhard; Lipowsky, Gerd; Küster, Helmut
2006-02-01
The aim of this prospective observational study was to compare a bedside test with the reference laboratory method in routine postnatal glucose monitoring. Term newborns with increased risk or clinical signs of hypoglycemia were screened with a bedside test. In case of a glucose value below 2.25 mmol/L, a second blood sample was taken and a duplicate glucose measurement done in the laboratory using a bedside test (Accutrend sensor) and the reference laboratory method (hexokinase method) at the same time and from the same sample. From 110 term newborns, 122 blood samples were obtained for duplicate measurements (median 1.69 mmol/L, SD 0.45 mmol/L). Of these 122, Accutrend correctly identified 97% as being <2.25 mmol/L by the laboratory method. A Bland-Altman plot revealed a mean underestimation of the Accutrend of only -0.09 mmol/L. However, due to high scattering, the maximal over- and underestimation was 0.89 and 1.39 mmol/L, respectively. Only 75% of the results from the Accutrend were within +/-20% of the result of the laboratory method. If the cut-off for low glucose concentrations was set 0.6 mmol/L higher for the bedside test as compared to the laboratory method, all patients except one would have been correctly identified as hypoglycemic. When using the Accutrend sensor, single infants with even marked hypoglycemia might be missed. Some delay in receiving accurate measurements might be more helpful for clinical decisions and long-term outcome than immediate but potentially misleading results.
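A minimal sketch of the Bland-Altman computation underlying the agreement analysis, with made-up example values; nothing here reproduces the study's data.

```python
import numpy as np

def bland_altman(bedside: np.ndarray, lab: np.ndarray):
    """Bland-Altman agreement statistics for paired glucose measurements.

    Returns the mean difference (bias) and the 95% limits of agreement
    (bias +/- 1.96 SD of the differences), as used to compare a bedside
    meter against the laboratory hexokinase reference.
    """
    diff = bedside - lab
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative values (mmol/L), not the study's data.
bedside = np.array([1.5, 2.0, 1.8, 2.3, 1.6])
lab = np.array([1.6, 2.1, 1.7, 2.5, 1.8])
bias, loa = bland_altman(bedside, lab)
print(f"bias {bias:+.2f} mmol/L, limits of agreement {loa[0]:.2f} to {loa[1]:.2f}")
```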
Statistical tests and identifiability conditions for pooling and analyzing multisite datasets.
Zhou, Hao Henry; Singh, Vikas; Johnson, Sterling C; Wahba, Grace
2018-02-13
When sample sizes are small, the ability to identify weak (but scientifically interesting) associations between a set of predictors and a response may be enhanced by pooling existing datasets. However, variations in acquisition methods and the distribution of participants or observations between datasets, especially due to the distributional shifts in some predictors, may obfuscate real effects when datasets are combined. We present a rigorous statistical treatment of this problem and identify conditions where we can correct the distributional shift. We also provide an algorithm for the situation where the correction is identifiable. We analyze various properties of the framework for testing model fit, constructing confidence intervals, and evaluating consistency characteristics. Our technical development is motivated by Alzheimer's disease (AD) studies, and we present empirical results showing that our framework enables harmonizing of protein biomarkers, even when the assays across sites differ. Our contribution may, in part, mitigate a bottleneck that researchers face in clinical research when pooling smaller sized datasets and may offer benefits when the subjects of interest are difficult to recruit or when resources prohibit large single-site studies. Copyright © 2018 the Author(s). Published by PNAS.
Educational audit on drug dose calculation learning in a Tanzanian school of nursing.
Savage, Angela Ruth
2015-06-01
Patient safety is a key concern for nurses; the ability to calculate drug doses correctly is an essential skill for preventing and reducing medication errors. The literature suggests that nurses' drug calculation skills should be monitored. The aim of the study was to conduct an educational audit of drug dose calculation learning in a Tanzanian school of nursing. Specific objectives were to assess learning from targeted teaching, to identify problem areas in performance and to identify ways in which these problem areas might be addressed. A total of 268 registered nurses and nursing students in two year groups of a nursing degree programme were the subjects of the audit; they were given a pretest, then four hours of teaching, a post-test after two weeks and a second post-test after eight weeks. There was a statistically significant improvement in correct answers in the first post-test, but none between the first and second post-tests. Particular problems with drug calculations were identified by the nurses/students and by the teacher; the problems identified by the two groups were not congruent. Further studies in different settings using different methods of teaching, planned continuing education for all qualified nurses, and appropriate pass marks for students in critical skills are recommended.
Defense Logistics Agency Disposition Services Afghanistan Disposal Process Needed Improvement
2013-11-08
audit, and management was proactive in correcting the deficiencies we identified. DLA DS eliminated backlogs, identified and corrected system … problems, provided additional system training, corrected coding errors, added personnel to key positions, addressed scale issues, submitted debit … Service Automated Information System to the Reutilization Business Integration (RBI) solution. The implementation of RBI in Afghanistan occurred in …
Eye-motion-corrected optical coherence tomography angiography using Lissajous scanning.
Chen, Yiwei; Hong, Young-Joo; Makita, Shuichi; Yasuno, Yoshiaki
2018-03-01
To correct eye motion artifacts in en face optical coherence tomography angiography (OCT-A) images, a Lissajous scanning method with subsequent software-based motion correction is proposed. The standard Lissajous scanning pattern is modified to be compatible with OCT-A and a corresponding motion correction algorithm is designed. The effectiveness of our method was demonstrated by comparing en face OCT-A images with and without motion correction. The method was further validated by comparing motion-corrected images with scanning laser ophthalmoscopy images, and the repeatability of the method was evaluated using a checkerboard image. A motion-corrected en face OCT-A image from a blinking case is presented to demonstrate the ability of the method to deal with eye blinking. Results show that the method can produce accurate motion-free en face OCT-A images of the posterior segment of the eye in vivo.
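For illustration, a minimal sketch of a basic (unmodified) Lissajous trajectory of the kind such a scanner follows; the frequencies and amplitudes are arbitrary assumptions, not the modified pattern of the paper.

```python
import numpy as np

def lissajous_scan(fx: float, fy: float, n: int, ax: float = 1.0, ay: float = 1.0):
    """Sample points of a Lissajous scanning trajectory over one period.

    x(t) = ax*sin(2*pi*fx*t), y(t) = ay*sin(2*pi*fy*t).  With fx/fy a
    ratio of small coprime integers the pattern closes and repeatedly
    revisits regions of the field, which is what makes registration-based
    motion correction possible.
    """
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    return ax * np.sin(2 * np.pi * fx * t), ay * np.sin(2 * np.pi * fy * t)

x, y = lissajous_scan(fx=11, fy=10, n=5000)
```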
Kaur, Ravinder; Dhakad, Megh Singh; Goyal, Ritu; Haque, Absarul; Mukhopadhyay, Gauranga
2016-01-01
Candida infection is a major cause of morbidity and mortality in immunocompromised patients; accurate and early identification is a prerequisite for effective patient management. The purpose of this study was to compare the conventional identification of Candida species with identification by the Vitek-2 system, and antifungal susceptibility testing (AST) by the broth microdilution method with the Vitek-2 AST system. A total of 172 Candida isolates were subjected to identification by conventional methods, the Vitek-2 system, restriction fragment length polymorphism, and random amplified polymorphic DNA analysis. AST was carried out as per the Clinical and Laboratory Standards Institute M27-A3 document and by the Vitek-2 system. Candida albicans (82.51%) was the most common Candida species, followed by Candida tropicalis (6.29%), Candida krusei (4.89%), Candida parapsilosis (3.49%), and Candida glabrata (2.79%). With the Vitek-2 system, of the 172 isolates, 155 Candida isolates were correctly identified, 13 were misidentified, and four were identified with low discrimination. With conventional methods, 171 Candida isolates were correctly identified and only a single isolate of C. albicans was misidentified, as C. tropicalis. The average measurement of agreement between the Vitek-2 system and conventional methods was >94%. Most of the isolates were susceptible to fluconazole (88.95%) and amphotericin B (97.67%). The measurement of agreement between the methods of AST was >94% for fluconazole and >99% for amphotericin B, which was statistically significant (P < 0.01). The study confirmed the importance and reliability of conventional and molecular methods, and the acceptable agreement suggests the Vitek-2 system is an alternative method for speciation and sensitivity testing of Candida species.
Ensemble stacking mitigates biases in inference of synaptic connectivity.
Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N
2018-01-01
A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
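A minimal sketch of the stacking idea described above, assuming a logistic-regression combiner over per-method connection scores; the scores here are synthetic and the setup is illustrative, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each column: one inference algorithm's score for a candidate synapse
# (e.g., a signed mutual-information score, a frequency-based score, a
# cross-correlation score).  Truth labels exist only in simulation.
rng = np.random.default_rng(1)
truth = rng.integers(0, 2, 500)                          # ground-truth connectivity
scores = truth[:, None] + rng.normal(0.0, 1.0, (500, 3))  # noisy per-method scores

# The stacker learns a linear combination of the individual measures;
# its weights generalize to held-out simulated networks.
stacker = LogisticRegression().fit(scores, truth)
ensemble_score = stacker.predict_proba(scores)[:, 1]
```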
Finding the Genomic Basis of Local Adaptation: Pitfalls, Practical Solutions, and Future Directions.
Hoban, Sean; Kelley, Joanna L; Lotterhos, Katie E; Antolin, Michael F; Bradburd, Gideon; Lowry, David B; Poss, Mary L; Reed, Laura K; Storfer, Andrew; Whitlock, Michael C
2016-10-01
Uncovering the genetic and evolutionary basis of local adaptation is a major focus of evolutionary biology. The recent development of cost-effective methods for obtaining high-quality genome-scale data makes it possible to identify some of the loci responsible for adaptive differences among populations. Two basic approaches for identifying putatively locally adaptive loci have been developed and are broadly used: one that identifies loci with unusually high genetic differentiation among populations (differentiation outlier methods) and one that searches for correlations between local population allele frequencies and local environments (genetic-environment association methods). Here, we review the promises and challenges of these genome scan methods, including correcting for the confounding influence of a species' demographic history, biases caused by missing aspects of the genome, matching scales of environmental data with population structure, and other statistical considerations. In each case, we make suggestions for best practices for maximizing the accuracy and efficiency of genome scans to detect the underlying genetic basis of local adaptation. With attention to their current limitations, genome scan methods can be an important tool in finding the genetic basis of adaptive evolutionary change.
Jamal, Wafaa; Saleem, Rola; Rotimi, Vincent O
2013-08-01
The use of matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) for identification of microorganisms directly from blood culture offers an exciting new dimension to microbiologists. We evaluated the performance of the Bruker SepsiTyper kit™ (STK) for direct identification of bacteria from positive blood cultures, in parallel with conventional methods. Nonrepetitive positive blood cultures from 160 consecutive patients were prospectively evaluated by both methods. Of the 160 positive blood cultures, the STK identified 114 (75.6%) isolates and the routine conventional method 150 (93%). Thirty-six isolates were misidentified or not identified by the kit; of these, 5 had a score of >2.000 and 31 had an unreliably low score of <1.7. Four of 8 yeasts were identified correctly. The average turnaround time was 35 min using the STK, including extraction steps, and 30:12 to 36:12 h with the routine method. The STK holds promise for timely management of bacteremic patients. Copyright © 2013 Elsevier Inc. All rights reserved.
Cundy, K V; Willard, K E; Valeri, L J; Shanholtzer, C J; Singh, J; Peterson, L R
1991-01-01
Three gas chromatography (GC) methods were compared for the identification of 52 clinical Clostridium difficile isolates, as well as 17 non-C. difficile Clostridium isolates. Headspace GC and Microbial Identification System (MIS) GC, an automated system which utilizes a software library developed at the Virginia Polytechnic Institute to identify organisms based on the fatty acids extracted from the bacterial cell wall, were compared against the reference method of traditional GC. Headspace GC and MIS were of approximately equivalent accuracy in identifying the 52 C. difficile isolates (52 of 52 versus 51 of 52, respectively). However, 7 of 52 organisms required repeated sample preparation before an identification was achieved by the MIS method. Both systems effectively differentiated C. difficile from non-C. difficile clostridia, although the MIS method correctly identified only 9 of 17. We conclude that the headspace GC system is an accurate method of C. difficile identification, which requires only one-fifth of the sample preparation time of MIS GC and one-half of the sample preparation time of traditional GC. PMID:2007632
Machine Learned Replacement of N-Labels for Basecalled Sequences in DNA Barcoding.
Ma, Eddie Y T; Ratnasingham, Sujeevan; Kremer, Stefan C
2018-01-01
This study presents a machine learning method that increases the number of identified bases in Sanger sequencing. The system post-processes a KB-basecalled chromatogram. It selects a recoverable subset of N-labels in the KB-called chromatogram to replace with basecalls (A, C, G, T). An N-label correction is defined given an additional read of the same sequence and a human-finished sequence. Corrections are added to the dataset when an alignment determines that the additional read and the human agree on the identity of the N-label. KB must also rate the replacement with quality value of in the additional read. Corrections are only available during system training. To develop the system, nearly 850,000 N-labels were obtained from the Barcode of Life Data Systems, the premier database of the genetic markers called DNA barcodes. Increasing the number of correct bases improves reference sequence reliability, increases sequence identification accuracy, and assures analysis correctness. In keeping with barcoding standards, our system maintains an error rate of percent. Our system only applies corrections when it estimates a low rate of error. Tested on these data, our automation selects and recovers: 79 percent of N-labels from COI (the animal barcode); 80 percent from matK and rbcL (plant barcodes); and 58 percent from non-protein-coding sequences (across eukaryotes).
Non‐parametric combination and related permutation tests for neuroimaging
Webster, Matthew A.; Brooks, Jonathan C.; Tracey, Irene; Smith, Stephen M.; Nichols, Thomas E.
2016-01-01
In this work, we show how permutation methods can be applied to combination analyses such as those that include multiple imaging modalities, multiple data acquisitions of the same modality, or simply multiple hypotheses on the same data. Using the well-known definition of union-intersection tests and closed testing procedures, we use synchronized permutations to correct for such multiplicity of tests, allowing flexibility to integrate imaging data with different spatial resolutions, surface and/or volume-based representations of the brain, including non-imaging data. For the problem of joint inference, we propose and evaluate a modification of the recently introduced non-parametric combination (NPC) methodology, such that instead of a two-phase algorithm and large data storage requirements, the inference can be performed in a single phase, with reasonable computational demands. The method compares favorably to classical multivariate tests (such as MANCOVA), even when the latter is assessed using permutations. We also evaluate, in the context of permutation tests, various combining methods that have been proposed in the past decades, and identify those that provide the best control over error rate and power across a range of situations. We show that one of these, the method of Tippett, provides a link between correction for the multiplicity of tests and their combination. Finally, we discuss how the correction can solve certain problems of multiple comparisons in one-way ANOVA designs, and how the combination is distinguished from conjunctions, even though both can be assessed using permutation tests. We also provide a common algorithm that accommodates combination and correction. Hum Brain Mapp 37:1486-1511, 2016. © 2016 Wiley Periodicals, Inc. PMID:26848101
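A minimal sketch of Tippett (min-p) combining under synchronized permutations, the link between combination and multiplicity correction mentioned above; the interface is an assumption, not the authors' implementation.

```python
import numpy as np

def tippett_combined_p(pvals: np.ndarray, perm_pvals: np.ndarray) -> float:
    """Combine K partial tests with Tippett's method under permutations.

    pvals:      (K,) observed partial p-values.
    perm_pvals: (n_perm, K) partial p-values recomputed on synchronized
                permutations of the data.
    The combined statistic is min(p); its null distribution comes from
    the permutations, which simultaneously gives FWER control across
    the K tests (the min-p adjustment).
    """
    observed = pvals.min()
    null = perm_pvals.min(axis=1)
    return (1 + np.sum(null <= observed)) / (1 + len(null))
```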
Root Cause Analysis Webinar: Q&A with Roni Silverstein. REL Mid-Atlantic Webinar
ERIC Educational Resources Information Center
Regional Educational Laboratory Mid-Atlantic, 2014
2014-01-01
Root cause analysis is a powerful method schools use to analyze data to solve problems; it aims to identify and correct the root causes of problems or events, rather than simply addressing their symptoms. In this webinar, veteran practitioner, Roni Silverstein, talked about the value of this process and practical ways to use it in your school or…
A Phenomenological Study on Turkish Language Teachers' Views on Characters in Children's Books
ERIC Educational Resources Information Center
Yilmaz, Oguzhan
2016-01-01
One of the indirect functions of books is to help children discern the good, the pleasant and the correct through characters or protagonists with whom they can identify. This study aims to reveal what Turkish language teachers think about the character traits in children's books. Phenomenological design, one of the qualitative methods, was used in the…
Chihara, Shingo; Hayden, Mary K.; Minogue-Corbett, Eileen; Singh, Kamaljit
2009-01-01
The ability to rapidly differentiate coagulase-negative staphylococci (CoNS) from Staphylococcus aureus and to determine methicillin resistance is important, as it affects both the decision to treat and empiric antibiotic selection. The objective of this study was to evaluate CHROMagar S. aureus and CHROMagar MRSA (Becton Dickinson) for rapid identification of Staphylococcus spp. directly from blood cultures. Consecutive blood culture bottles (BacT Alert 3D SA and SN, bioMérieux) growing gram-positive cocci in clusters were evaluated. An aliquot was plated onto CHROMagar MRSA (C-MRSA) and CHROMagar S. aureus (C-SA) plates, which were read at 12 to 16 hours. C-SA correctly identified 147/147 S. aureus isolates (100% sensitivity); 2 CoNS were misidentified as S. aureus (98% specificity). C-MRSA correctly identified 74/77 MRSA (96% sensitivity). None of the MSSA isolates grew on C-MRSA (100% specificity). In conclusion, CHROMagar is a rapid and sensitive method to distinguish MRSA, MSSA, and coagulase-negative staphylococci, and may decrease the time to reporting of positive results. PMID:20016679
Yeast identification: reassessment of assimilation tests as sole universal identifiers.
Spencer, J; Rawling, S; Stratford, M; Steels, H; Novodvorska, M; Archer, D B; Chandra, S
2011-11-01
To assess whether assimilation tests in isolation remain a valid method of identification of yeasts, when applied to a wide range of environmental and spoilage isolates. Seventy-one yeast strains were isolated from a soft drinks factory. These were identified using assimilation tests and by D1/D2 rDNA sequencing. When compared to sequencing, assimilation test identifications (MicroLog™) were 18·3% correct, a further 14·1% correct within the genus and 67·6% were incorrectly identified. The majority of the latter could be attributed to the rise in newly reported yeast species. Assimilation tests alone are unreliable as a universal means of yeast identification, because of numerous new species, variability of strains and increasing coincidence of assimilation profiles. Assimilation tests still have a useful role in the identification of common species, such as the majority of clinical isolates. It is probable, based on these results, that many yeast identifications reported in older literature are incorrect. This emphasizes the crucial need for accurate identification in present and future publications. © 2011 The Authors. Letters in Applied Microbiology © 2011 The Society for Applied Microbiology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
2017-06-13
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high-multiplication uranium objects.
Silvester, Jocelyn A; Weiten, Dayna; Graff, Lesley A; Walker, John R; Duerksen, Donald R
2017-01-01
Objectives To assess the relationship between self-reported adherence to a gluten-free diet (GFD) and the ability to determine correctly the appropriateness of particular foods in a GFD. Research Methods & Procedures Persons with celiac disease were recruited through clinics and support groups. Participants completed a questionnaire with items related to GFD information sources, gluten content of 17 common foods (food to avoid, food allowed, food to question), GFD adherence and demographics. Diagnosis was self-reported. Results The 82 respondents (88% female) had a median of 6 years GFD experience. Most (55%) reported strict adherence, 18% reported intentional gluten consumption and 21% acknowledged rare unintentional gluten consumption. Cookbooks, advocacy groups and print media were the most commonly used GFD information sources (85–92%). No participant identified correctly the gluten content of all 17 foods; only 30% identified at least 14 foods correctly. The median score on the Gluten-Free Diet Knowledge Scale (GFD-KS) was 11.5 (IQR 10–13). One in five incorrect responses put the respondent at risk of consuming gluten. GFD-KS scores did not correlate with self-reported adherence or GFD duration. Patient advocacy group members scored significantly higher on the GFD-KS than non-members (12.3 vs. 10.6; p<0.005). Conclusions Self-report measures which do not account for the possibility of unintentional gluten ingestion overestimate GFD adherence. Individuals who believe they are following a GFD are not readily able to correctly identify foods that are GF, which suggests ongoing gluten consumption may be occurring, even among patients who believe they are “strictly” adherent. The role of patient advocacy groups and education to improve outcomes through improved adherence to a GFD requires further research. PMID:27131408
Selecting foils for identification lineups: matching suspects or descriptions?
Tunnicliff, J L; Clark, S E
2000-04-01
Two experiments directly compare two methods of selecting foils for identification lineups. The suspect-matched method selects foils based on their match to the suspect, whereas the description-matched method selects foils based on their match to the witness's description of the perpetrator. Theoretical analyses and previous results predict an advantage for description-matched lineups both in terms of correctly identifying the perpetrator and minimizing false identification of innocent suspects. The advantage for description-matched lineups should be particularly pronounced if the foils selected in suspect-matched lineups are too similar to the suspect. In Experiment 1, the lineups were created by trained police officers, and in Experiment 2, the lineups were constructed by undergraduate college students. The results of both experiments showed higher suspect-to-foil similarity for suspect-matched lineups than for description-matched lineups. However, neither experiment showed a difference in correct or false identification rates. Both experiments did, however, show that there may be an advantage for suspect-matched lineups in terms of no-pick and rejection responses. From these results, the endorsement of one method over the other seems premature.
Methylation analysis of polysaccharides: Technical advice.
Sims, Ian M; Carnachan, Susan M; Bell, Tracey J; Hinkley, Simon F R
2018-05-15
Glycosyl linkage (methylation) analysis is used widely for the structural determination of oligo- and poly-saccharides. The procedure involves derivatisation of the individual component sugars of a polysaccharide to partially methylated alditol acetates which are analysed and quantified by gas chromatography-mass spectrometry. The linkage positions for each component sugar can be determined by correctly identifying the partially methylated alditol acetates. Although the methods are well established, there are many technical aspects to this procedure and both careful attention to detail and considerable experience are required to achieve a successful methylation analysis and to correctly interpret the data generated. The aim of this article is to provide the technical details and critical procedural steps necessary for a successful methylation analysis and to assist researchers (a) with interpreting data correctly and (b) in providing the comprehensive data required for reviewers to fully assess the work. Copyright © 2018 Elsevier Ltd. All rights reserved.
Dynamic Black-Level Correction and Artifact Flagging for Kepler Pixel Time Series
NASA Technical Reports Server (NTRS)
Kolodziejczak, J. J.; Clarke, B. D.; Caldwell, D. A.
2011-01-01
Methods applied to the calibration stage of Kepler pipeline data processing [1] (CAL) do not currently use all of the information available to identify and correct several instrument-induced artifacts. These include time-varying crosstalk from the fine guidance sensor (FGS) clock signals, and manifestations of drifting moire pattern as locally correlated nonstationary noise, and rolling bands in the images which find their way into the time series [2], [3]. As the Kepler Mission continues to improve the fidelity of its science data products, we are evaluating the benefits of adding pipeline steps to more completely model and dynamically correct the FGS crosstalk, then use the residuals from these model fits to detect and flag spatial regions and time intervals of strong time-varying black-level which may complicate later processing or lead to misinterpretation of instrument behavior as stellar activity.
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2017-07-01
Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine-scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
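A loose sketch of the disaggregation multiplier and a rank-based pairing in the spirit described above, not the authors' exact algorithm; the series are assumed equal-length, and all names are invented.

```python
import numpy as np

def disaggregate(sim_coarse_interp: np.ndarray, obs_fine: np.ndarray,
                 obs_coarse_interp: np.ndarray, rank_based: bool = True):
    """Spatial disaggregation step of BCSD for one fine grid cell.

    anomaly = obs_fine / obs_coarse_interp is the fine-scale multiplier.
    Classic BCSD applies a climatological mean anomaly; a rank-based
    variant instead pairs each simulated value with the anomaly of the
    observed day of the same rank, preserving how multipliers vary
    with precipitation intensity.
    """
    eps = 1e-6
    anomaly = obs_fine / np.maximum(obs_coarse_interp, eps)
    if not rank_based:
        return sim_coarse_interp * anomaly.mean()
    ranks = np.argsort(np.argsort(sim_coarse_interp))      # rank of each simulated day
    anomaly_by_rank = anomaly[np.argsort(obs_coarse_interp)]
    return sim_coarse_interp * anomaly_by_rank[ranks]
```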
Improving Global Net Surface Heat Flux with Ocean Reanalysis
NASA Astrophysics Data System (ADS)
Carton, J.; Chepurin, G. A.; Chen, L.; Grodsky, S.
2017-12-01
This project addresses the current level of uncertainty in surface heat flux estimates. Time-mean surface heat flux estimates provided by atmospheric reanalyses differ by 10-30 W/m2. They are generally unbalanced globally, and have been shown by ocean simulation studies to be incompatible with ocean temperature and velocity measurements. Here a method is presented 1) to identify the spatial and temporal structure of the underlying errors and 2) to reduce them by exploiting hydrographic observations and the analysis increments produced by an ocean reanalysis using sequential data assimilation. The method is applied to fluxes computed from daily state variables obtained from three widely used reanalyses: MERRA2, ERA-Interim, and JRA-55, during an eight-year period, 2007-2014. For each of these, seasonal heat flux errors/corrections are obtained. In a second set of experiments the heat fluxes are corrected and the ocean reanalysis experiments are repeated. This second round of experiments shows that the time-mean error in the corrected fluxes is reduced to within ±5 W/m2 over the interior subtropical and midlatitude oceans, with the most significant changes occurring over the Southern Ocean. The global heat flux imbalance of each reanalysis is reduced to within a few W/m2 with this single correction. Encouragingly, the corrected forms of the three sets of fluxes are also shown to converge. In the final discussion we present experiments beginning with a modified form of the ERA-Int reanalysis, produced by the DAKKAR program, in which state variables have been individually corrected based on independent measurements. Finally, we discuss the separation of flux error from model error.
Young, Robin L; Weinberg, Janice; Vieira, Verónica; Ozonoff, Al; Webster, Thomas F
2010-07-19
A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM), which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. whether the smoothing term is necessary. Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power, though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log odds with distance from the source. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three cases. The GAM permutation testing methods provide a regression-based alternative to the spatial scan statistic. Across all hypotheses examined in this research, the GAM methods had competing or greater power estimates and sensitivities exceeding those of the spatial scan statistic.
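A compact stand-in for the permutation test of the spatial term, with a k-NN smoother playing the role of the bivariate LOESS inside a GAM; this is purely illustrative and all names are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def spatial_permutation_test(xy, case, n_perm=999, k=25, seed=None):
    """Permutation test of whether residential location predicts case status.

    xy: (n, 2) residential coordinates; case: (n,) 0/1 outcomes.
    The statistic is the drop in squared error when a spatial smoother
    replaces the intercept-only model; locations are shuffled against
    outcomes to build the null distribution of that statistic.
    """
    rng = np.random.default_rng(seed)

    def stat(y):
        fit = KNeighborsRegressor(k).fit(xy, y).predict(xy)
        return ((y - y.mean()) ** 2).sum() - ((y - fit) ** 2).sum()

    observed = stat(case)
    null = np.array([stat(rng.permutation(case)) for _ in range(n_perm)])
    return (1 + np.sum(null >= observed)) / (1 + n_perm)
```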
Automatic identification of inertial sensor placement on human body segments during walking
2013-01-01
Background Current inertial motion capture systems are rarely used in biomedical applications. The attachment and connection of the sensors with cables is often a complex and time consuming task. Moreover, it is prone to errors, because each sensor has to be attached to a predefined body segment. By using wireless inertial sensors and automatic identification of their positions on the human body, the complexity of the set-up can be reduced and incorrect attachments are avoided. We present a novel method for the automatic identification of inertial sensors on human body segments during walking. This method allows the user to place (wireless) inertial sensors on arbitrary body segments. Next, the user walks for just a few seconds and the segment to which each sensor is attached is identified automatically. Methods Walking data was recorded from ten healthy subjects using an Xsens MVN Biomech system with full-body configuration (17 inertial sensors). Subjects were asked to walk for about 6 seconds at normal walking speed (about 5 km/h). After rotating the sensor data to a global coordinate frame with x-axis in walking direction, y-axis pointing left and z-axis vertical, RMS, mean, and correlation coefficient features were extracted from x-, y- and z-components and magnitudes of the accelerations, angular velocities and angular accelerations. As a classifier, a decision tree based on the C4.5 algorithm was developed using Weka (Waikato Environment for Knowledge Analysis). Results and conclusions After testing the algorithm with 10-fold cross-validation using 31 walking trials (involving 527 sensors), 514 sensors were correctly classified (97.5%). When a decision tree for a lower body plus trunk configuration (8 inertial sensors) was trained and tested using 10-fold cross-validation, 100% of the sensors were correctly identified. This decision tree was also tested on walking trials of 7 patients (17 walking trials) after anterior cruciate ligament reconstruction, which also resulted in 100% correct identification, thus illustrating the robustness of the method. PMID:23517757
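A minimal sketch of the feature extraction and classifier choice the abstract describes; sklearn's entropy-based CART stands in for C4.5, and the feature set is one plausible reading of "RMS, mean, and correlation coefficient features", not the authors' exact list.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def segment_features(acc: np.ndarray, gyr: np.ndarray) -> np.ndarray:
    """Feature vector for one sensor over a walking trial.

    acc, gyr: (n_samples, 3) acceleration and angular velocity, already
    rotated to the global frame (x walking direction, y left, z up).
    Extracts RMS and mean of each component and of the magnitude, plus
    the x-y correlation coefficient, for each signal.
    """
    feats = []
    for sig in (acc, gyr):
        mag = np.linalg.norm(sig, axis=1, keepdims=True)
        s = np.hstack([sig, mag])                     # x, y, z, |.|
        feats += [*np.sqrt((s ** 2).mean(axis=0)),    # RMS features
                  *s.mean(axis=0),                    # mean features
                  np.corrcoef(sig[:, 0], sig[:, 1])[0, 1]]
    return np.array(feats)

# X: one feature row per sensor per trial; y: body-segment labels.
# clf.fit(X, y) would then be evaluated with 10-fold cross-validation.
clf = DecisionTreeClassifier(criterion="entropy")
```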
Three-dimensional ray tracing for refractive correction of human eye ametropies
NASA Astrophysics Data System (ADS)
Jimenez-Hernandez, J. A.; Diaz-Gonzalez, G.; Trujillo-Romero, F.; Iturbe-Castillo, M. D.; Juarez-Salazar, R.; Santiago-Alvarado, A.
2016-09-01
Ametropies of the human eye are refractive defects that hamper correct imaging on the retina. The most common ways to correct them are by means of spectacles, contact lenses, and modern methods such as laser surgery. In any case, it is very important to identify the grade of ametropia in order to design the optimum corrective action. In the case of laser surgery, it is necessary to define a new shape of the cornea in order to obtain the desired refractive correction. Therefore, a computational tool to calculate the focal length of the optical system of the eye against variations in its geometrical parameters is required. Additionally, a clear and understandable visualization of the evaluation process is desirable. In this work, a model of the human eye based on geometrical optics principles is presented. Simulations of light rays coming from a point source at six meters from the cornea are shown. We perform ray tracing in three dimensions in order to visualize the focusing regions and estimate the power of the optical system. The common parameters of ametropies can be easily modified and analyzed in the simulation through an intuitive graphical user interface.
Stokes, Ashley M.; Semmineh, Natenael; Quarles, C. Chad
2015-01-01
Purpose A combined biophysical- and pharmacokinetic-based method is proposed to separate, quantify, and correct for both T1 and T2* leakage effects using dual-echo DSC acquisitions to provide more accurate hemodynamic measures, as validated by a reference intravascular contrast agent (CA). Methods Dual-echo DSC-MRI data were acquired in two rodent glioma models. The T1 leakage effects were removed and also quantified in order to subsequently correct for the remaining T2* leakage effects. Pharmacokinetic, biophysical, and combined biophysical and pharmacokinetic models were used to obtain corrected cerebral blood volume (CBV) and cerebral blood flow (CBF), and these were compared with CBV and CBF from an intravascular CA. Results T1-corrected CBV was significantly overestimated compared to MION CBV, while T1+T2*-correction yielded CBV values closer to the reference values. The pharmacokinetic and simplified biophysical methods showed similar results and underestimated CBV in tumors exhibiting strong T2* leakage effects. The combined method was effective for correcting T1 and T2* leakage effects across tumor types. Conclusions Correcting for both T1 and T2* leakage effects yielded more accurate measures of CBV. The combined correction method yields more reliable CBV measures than either correction method alone, but for certain brain tumor types (e.g., gliomas) the simplified biophysical method may provide a robust and computationally efficient alternative. PMID:26362714
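A minimal sketch of the dual-echo step that removes T1 leakage effects by taking the echo ratio; the variable names and baseline length are assumptions, and the subsequent model-based T2* leakage correction is not shown.

```python
import numpy as np

def delta_r2star_dual_echo(s1, s2, te1, te2, n_baseline=20):
    """T1-insensitive Delta-R2*(t) from dual-echo DSC-MRI signals.

    s1, s2: signal time courses at echo times te1 < te2 (seconds).
    Taking the echo ratio cancels the shared T1-weighted term, so
        R2*(t) = ln(s1/s2) / (te2 - te1),
    and Delta-R2* is referenced to the pre-contrast baseline.  Residual
    T2* extravasation effects still require model-based correction.
    """
    s1 = np.asarray(s1, float)
    s2 = np.asarray(s2, float)
    r2s = np.log(s1 / s2) / (te2 - te1)
    return r2s - r2s[:n_baseline].mean()
```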
NASA Astrophysics Data System (ADS)
Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.
2018-01-01
Regional climate models (RCMs) are used to downscale the coarse resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes sliding-window-based daily correction factor derivation that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods, and is tested on six watersheds in different climatic zones of India for assessing the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data were used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed for evaluation of the different methods of bias correction. The analysis suggested that the proposed method effectively corrects the daily bias in rainfall as compared to using monthly factors. Methods such as local intensity scaling, modified power transformation and distribution mapping, which adjusted the wet-day frequencies, performed better than the other methods, which did not consider adjustment of wet-day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of the observed data, with NSE values above 0.81 over most parts of India. Hydrological simulations forced with the bias-corrected rainfall (distribution mapping and modified power transformation methods using the proposed daily correction factors) were similar to those forced with the IMD rainfall. The results demonstrate that the methods and the time scales used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and consequently the simulated streamflow. The analysis suggests that distribution mapping with daily correction factors can be preferred for adjusting RCM rainfall data irrespective of season or climate zone for realistic simulation of streamflow.
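A minimal sketch of distribution mapping with a daily (sliding-window) transfer function, the best-performing configuration reported; the window construction and names are assumptions.

```python
import numpy as np

def daily_quantile_map(rcm_day: float, obs_window: np.ndarray,
                       rcm_window: np.ndarray) -> float:
    """Distribution-mapping correction for one calendar day.

    obs_window / rcm_window hold all observed and simulated rainfall
    values falling inside a sliding window (e.g. +/-15 days) around that
    calendar day across all years, so the transfer function is built
    from daily rather than monthly statistics.
    """
    # Empirical non-exceedance probability of the simulated value...
    p = (np.searchsorted(np.sort(rcm_window), rcm_day) + 0.5) / (len(rcm_window) + 1)
    # ...mapped onto the observed distribution.
    return float(np.quantile(obs_window, p))

# A wet-day frequency adjustment (a drizzle threshold chosen so simulated
# wet-day counts match observations) would precede this step, as in the
# methods the study found to perform best.
```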
Rapid identification of group JK and other corynebacteria with the Minitek system.
Slifkin, M; Gil, G M; Engwall, C
1986-01-01
Forty primary clinical isolates and 50 stock cultures of corynebacteria and coryneform bacteria were tested with the Minitek system (BBL Microbiology Systems, Cockeysville, Md.). The Minitek correctly identified all of these organisms, including JK group isolates, within 12 to 18 h of incubation. The method does not require serum supplements for testing carbohydrate utilization by the bacteria. The Minitek system is an extremely simple and rapid way to identify the JK group, as well as many other corynebacteria, by established identification schemata for these bacteria. PMID:3091632
An Eulerian/Lagrangian coupling procedure for three-dimensional vortical flows
NASA Technical Reports Server (NTRS)
Felici, Helene M.; Drela, Mark
1993-01-01
A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of 3D vortical flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method, added to the Eulerian time-marching procedure, provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state vector along the particles' trajectories. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers accurately describe the convection properties and enhance the vorticity and entropy capturing capabilities of the Eulerian solver. The Eulerian/Lagrangian coupling strategies are discussed and the combined scheme is tested on a constant stagnation pressure flow in a 90 deg bend and on a swirling pipe flow. As the numerical diffusion is reduced when using the Lagrangian correction, a vorticity gradient augmentation is identified as a basic problem of this inviscid calculation.
Choi, Jang-Hwan; Maier, Andreas; Keil, Andreas; Pal, Saikat; McWalter, Emily J; Beaupré, Gary S; Gold, Garry E; Fahrig, Rebecca
2014-06-01
A C-arm CT system has been shown to be capable of scanning a single cadaver leg under loaded conditions by virtue of its highly flexible acquisition trajectories. In Part I of this study, using the 4D XCAT-based numerical simulation, the authors predicted that the involuntary motion in the lower body of subjects in weight-bearing positions would seriously degrade image quality and the authors suggested three motion compensation methods by which the reconstructions could be corrected to provide diagnostic image quality. Here, the authors demonstrate that a flat-panel angiography system is appropriate for scanning both legs of subjects in vivo under weight-bearing conditions and further evaluate the three motion-correction algorithms using in vivo data. The geometry of a C-arm CT system for a horizontal scan trajectory was calibrated using the PDS-2 phantom. The authors acquired images of two healthy volunteers while lying supine on a table, standing, and squatting at several knee flexion angles. In order to identify the involuntary motion of the lower body, nine 1-mm-diameter tantalum fiducial markers were attached around the knee. The static mean marker position in 3D, a reference for motion compensation, was estimated by back-projecting detected markers in multiple projections using calibrated projection matrices and identifying the intersection points in 3D of the back-projected rays. Motion was corrected using three different methods (described in detail previously): (1) 2D projection shifting, (2) 2D deformable projection warping, and (3) 3D rigid body warping. For quantitative image quality analysis, SSIM indices for the three methods were compared using the supine data as a ground truth. A 2D Euclidean distance-based metric of subjects' motion ranged from 0.85 mm (±0.49 mm) to 3.82 mm (±2.91 mm) (corresponding to 2.76 to 12.41 pixels) resulting in severe motion artifacts in 3D reconstructions. Shifting in 2D, 2D warping, and 3D warping improved the SSIM in the central slice by 20.22%, 16.83%, and 25.77% in the data with the largest motion among the five datasets (SCAN5); improvement in off-center slices was 18.94%, 29.14%, and 36.08%, respectively. The authors showed that C-arm CT control can be implemented for nonstandard horizontal trajectories which enabled us to scan and successfully reconstruct both legs of volunteers in weight-bearing positions. As predicted using theoretical models, the proposed motion correction methods improved image quality by reducing motion artifacts in reconstructions; 3D warping performed better than the 2D methods, especially in off-center slices.
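A minimal sketch of the reference-position step: the static marker position estimated as the least-squares intersection of rays back-projected from the detected 2D markers; the interface is an assumption.

```python
import numpy as np

def closest_point_to_rays(origins: np.ndarray, directions: np.ndarray) -> np.ndarray:
    """Least-squares 3D intersection point of back-projected rays.

    origins:    (n, 3) X-ray source positions for each projection.
    directions: (n, 3) vectors from the source through the detected
                marker on the detector (need not be unit length).
    Minimizes sum_i ||(I - d_i d_i^T)(x - o_i)||^2, the standard estimate
    of the static marker position used as the motion-correction reference.
    """
    dirs = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        P = np.eye(3) - np.outer(d, d)  # projector onto plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)        # well-posed for >=2 non-parallel rays
```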
Pärn, Jaan; Verhoeven, Jos T A; Butterbach-Bahl, Klaus; Dise, Nancy B; Ullah, Sami; Aasa, Anto; Egorov, Sergey; Espenberg, Mikk; Järveoja, Järvi; Jauhiainen, Jyrki; Kasak, Kuno; Klemedtsson, Leif; Kull, Ain; Laggoun-Défarge, Fatima; Lapshina, Elena D; Lohila, Annalea; Lõhmus, Krista; Maddison, Martin; Mitsch, William J; Müller, Christoph; Niinemets, Ülo; Osborne, Bruce; Pae, Taavi; Salm, Jüri-Ott; Sgouridis, Fotis; Sohar, Kristina; Soosaar, Kaido; Storey, Kathryn; Teemusk, Alar; Tenywa, Moses M; Tournebize, Julien; Truu, Jaak; Veber, Gert; Villa, Jorge A; Zaw, Seint Sann; Mander, Ülo
2018-04-26
The original version of this Article contained an error in the first sentence of the Acknowledgements section, which incorrectly referred to the Estonian Research Council grant identifier as "PUTJD618". The correct version replaces the grant identifier with "PUTJD619". This has been corrected in both the PDF and HTML versions of the Article.
NASA Astrophysics Data System (ADS)
Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.
2012-08-01
A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of model parameters that are empirically defined. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through iterative tuning of model parameters based on a systematic evaluation of the performance of the correction. The latter was based on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very poorly illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations depend on the site, sun elevation and azimuth, and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.
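For concreteness, a sketch of the C-correction, a common semi-empirical topographic correction of the same family; the parameter c is exactly the kind of empirically defined, band-specific parameter such an iterative tuning loop adjusts. The details here are generic, not the study's configuration.

```python
import numpy as np

def c_correction(reflectance: np.ndarray, cos_i: np.ndarray,
                 sun_zenith_deg: float, c: float) -> np.ndarray:
    """Semi-empirical C-correction of topographic illumination effects.

    cos_i: cosine of the local solar incidence angle, derived from a DEM
           and the sun position.
    c:     band-specific empirical parameter -- the tuning loop would vary
           c and re-evaluate the sunlit/shaded reflectance match.
        rho_corr = rho * (cos(theta_sun) + c) / (cos_i + c)
    """
    cos_sun = np.cos(np.deg2rad(sun_zenith_deg))
    return reflectance * (cos_sun + c) / (cos_i + c)
```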
[Evaluation of four dark object atmospheric correction methods based on ZY-3 CCD data].
Guo, Hong; Gu, Xing-fa; Xie, Yong; Yu, Tao; Gao, Hai-liang; Wei, Xiang-qin; Liu, Qi-yue
2014-08-01
The present paper evaluated four dark-object subtraction (DOS) atmospheric correction methods based on 2012 Inner Mongolia experimental data. The authors analyzed the impacts of key parameters of the four DOS methods when they were applied to ZY-3 CCD data. The results showed that: (1) All four DOS methods have a significant atmospheric correction effect at bands 1, 2 and 3; as for band 4, the atmospheric correction effect of DOS4 is the best while DOS2 is the worst, and both DOS1 and DOS3 have no obvious atmospheric correction effect. (2) The relative error (RE) of the DOS1 atmospheric correction method is larger than 10% at all four bands; the atmospheric correction effect of DOS2 works best at band 1 (AE (absolute error) = 0.0019 and RE = 4.32%) and the worst error appears at band 4 (AE = 0.0464 and RE = 19.12%); the RE of DOS3 is about 10% for all bands. (3) The AE of the atmospheric correction results for the DOS4 method is less than 0.02 and the RE is less than 10% for all bands. Therefore, the DOS4 method provides the best accuracy of atmospheric correction results for ZY-3 images.
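For reference, the generic DOS1 formulation — the simplest of the four methods compared — subtracts a haze radiance estimated from the scene's dark object and assumes full atmospheric transmittance. A sketch using the standard textbook conventions (the 1% dark-object reflectance and percentile choice are conventional assumptions, not values from this paper):

    import numpy as np

    def dos1_correction(dn, gain, bias, esun, d, theta_s):
        """Minimal DOS1 sketch: estimate haze radiance from the scene's dark
        object, subtract it, and convert to reflectance assuming unit
        atmospheric transmittance (the defining DOS1 assumption).

        dn      : digital numbers for one band
        gain,
        bias    : radiometric calibration coefficients
        esun    : band-mean solar exoatmospheric irradiance
        d       : Earth-Sun distance (AU)
        theta_s : solar zenith angle (radians)
        """
        radiance = gain * dn + bias
        # Dark object: the lowest radiance in the scene (0.01th percentile)
        l_dark = np.percentile(radiance, 0.01)
        # Assume the dark object has a 1% intrinsic reflectance
        l_1pct = 0.01 * esun * np.cos(theta_s) / (np.pi * d ** 2)
        l_haze = l_dark - l_1pct
        rho = np.pi * d ** 2 * (radiance - l_haze) / (esun * np.cos(theta_s))
        return np.clip(rho, 0.0, 1.0)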
Image registration assessment in radiotherapy image guidance based on control chart monitoring.
Xia, Wenyao; Breen, Stephen L
2018-04-01
Image guidance with cone beam computed tomography in radiotherapy can guarantee the precision and accuracy of patient positioning prior to treatment delivery. During the image guidance process, operators must expend great effort to evaluate the image guidance quality before correcting a patient's position. This work proposes an image registration assessment method based on control chart monitoring to reduce the effort required of the operator. From the control chart plotted from the daily registration scores of each patient, the proposed method can quickly detect both alignment errors and image quality inconsistency. The proposed method can therefore provide a clear guideline for operators to identify unacceptable image quality and unacceptable image registration with minimal effort. Experimental results using control charts from a clinical database of 10 patients undergoing prostate radiotherapy demonstrate that the proposed method can quickly identify out-of-control signals and find the special causes of out-of-control registration events.
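A Shewhart-style individuals chart is one simple way to realize the monitoring described above; a sketch assuming daily scalar registration scores (the abstract does not specify the exact charting rules used):

    import numpy as np

    def control_chart_flags(scores, k=3.0):
        """Flag out-of-control registration events with an individuals
        control chart: points beyond mean +/- k*sigma are signals.

        scores : daily registration scores for one patient, in time order
        """
        scores = np.asarray(scores, dtype=float)
        center = scores.mean()
        # Moving-range estimate of sigma (MR-bar / d2, d2 = 1.128),
        # as is usual for individuals charts
        sigma = np.abs(np.diff(scores)).mean() / 1.128
        ucl, lcl = center + k * sigma, center - k * sigma
        return (scores > ucl) | (scores < lcl), (lcl, center, ucl)

Points flagged by the chart would prompt the operator to inspect that day's registration for alignment error or degraded image quality.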
Chen, Weitian; Sica, Christopher T; Meyer, Craig H
2008-11-01
Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 inhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method. PMID:18956462
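Conceptually, the Chebyshev approach expands the off-resonance phase term as a low-order polynomial in frequency, so the conjugate-phase sum factors into a handful of base reconstructions. A small sketch of the expansion step only, assuming a known field-map frequency range; this illustrates the approximation, not the authors' full reconstruction:

    import numpy as np
    from numpy.polynomial import chebyshev as C

    def chebyshev_phase_coeffs(t, f_min, f_max, order=6):
        """Approximate the off-resonance phase term exp(1j*2*pi*f*t) over the
        field-map range [f_min, f_max] by a Chebyshev expansion in f:
            exp(1j*2*pi*f*t) ~ sum_k c_k(t) * T_k(x),
        where x is f rescaled to [-1, 1]. Returns the coefficients c_k(t)
        for one readout time t."""
        nodes = C.chebpts1(order + 1)                      # Chebyshev nodes in [-1, 1]
        f_nodes = 0.5 * (f_max - f_min) * (nodes + 1) + f_min
        values = np.exp(1j * 2 * np.pi * f_nodes * t)      # phase term at the nodes
        # Interpolate the sampled phase term at the Chebyshev nodes
        return C.chebfit(nodes, values, order)

Because the coefficients depend only on time and the basis polynomials only on frequency, the correction can reuse a small set of frequency-weighted reconstructions — the source of the method's speed advantage over full conjugate-phase reconstruction.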
Bastin, Benjamin; Bird, Patrick; Benzinger, M Joseph; Crowley, Erin; Agin, James; Goins, David; Sohier, Daniele; Timke, Markus; Shi, Gongyi; Kostrzewa, Markus
2018-04-27
The Bruker MALDI Biotyper® method utilizes matrix-assisted laser desorption/ionization-time-of-flight (MALDI-TOF) MS for the rapid and accurate identification and confirmation of Gram-negative bacteria from select media types. The alternative method was evaluated using nonselective and selective agars to identify Cronobacter spp., Salmonella spp., and select Gram-negative bacteria. Results obtained by the Bruker MALDI Biotyper were compared to the traditional biochemical methods as prescribed in the appropriate reference methods. Two collaborative studies were organized, one in the United States focusing on Cronobacter spp. and other Gram-negative bacteria, and one in Europe focusing on Salmonella spp. and other Gram-negative bacteria. Fourteen collaborators from seven laboratories located within the United States participated in the first collaborative study, for Cronobacter spp. Fifteen collaborators from 15 service laboratories located within Europe participated in the second collaborative study, for Salmonella spp. For each target organism (either Salmonella spp. or Cronobacter spp.), a total of 24 blind-coded isolates were evaluated. In each set of 24 organisms, there were 16 inclusivity organisms (Cronobacter spp. or Salmonella spp.) and 8 exclusivity organisms (closely related non-Cronobacter spp. and non-Salmonella spp. Gram-negative organisms). After testing was completed, the total percentage of correct identifications from each agar type for each strain was determined to be 100.0% to the genus level for both the Cronobacter study and the Salmonella study. For both non-Cronobacter and non-Salmonella organisms, 100.0% were correctly identified. The results indicated that the alternative method produced results equivalent to the confirmatory procedures specified by each reference method.
DNA Barcoding of Recently Diverged Species: Relative Performance of Matching Methods
van Velzen, Robin; Weitschek, Emanuel; Felici, Giovanni; Bakker, Freek T.
2012-01-01
Recently diverged species are challenging for identification, yet they are frequently of special interest scientifically as well as from a regulatory perspective. DNA barcoding has proven instrumental in species identification, especially in insects and vertebrates, but for the identification of recently diverged species it has been reported to be problematic in some cases. Problems are mostly due to incomplete lineage sorting or simply lack of a ‘barcode gap’ and probably related to large effective population size and/or low mutation rate. Our objective was to compare six methods in their ability to correctly identify recently diverged species with DNA barcodes: neighbor joining and parsimony (both tree-based), nearest neighbor and BLAST (similarity-based), and the diagnostic methods DNA-BAR, and BLOG. We analyzed simulated data assuming three different effective population sizes as well as three selected empirical data sets from published studies. Results show, as expected, that success rates are significantly lower for recently diverged species (∼75%) than for older species (∼97%) (P<0.00001). Similarity-based and diagnostic methods significantly outperform tree-based methods, when applied to simulated DNA barcode data (P<0.00001). The diagnostic method BLOG had highest correct query identification rate based on simulated (86.2%) as well as empirical data (93.1%), indicating that it is a consistently better method overall. Another advantage of BLOG is that it offers species-level information that can be used outside the realm of DNA barcoding, for instance in species description or molecular detection assays. Even though we can confirm that identification success based on DNA barcoding is generally high in our data, recently diverged species remain difficult to identify. Nevertheless, our results contribute to improved solutions for their accurate identification. PMID:22272356
On the Yakhot-Orszag renormalization group method for deriving turbulence statistics and models
NASA Technical Reports Server (NTRS)
Smith, L. M.; Reynolds, W. C.
1992-01-01
An independent, comprehensive, critical review of the 'renormalization group' (RNG) theory of turbulence developed by Yakhot and Orszag (1986) is provided. Their basic theory for the Navier-Stokes equations is confirmed, and approximations in the scale removal procedure are discussed. The YO derivations of the velocity-derivative skewness and the transport equation for the energy dissipation rate are examined. An algebraic error in the derivation of the skewness is corrected. The corrected RNG skewness value of -0.59 is in agreement with experiments at moderate Reynolds numbers. Several problems are identified in the derivation of the energy dissipation rate equations which suggest that the derivation should be reformulated.
Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.
1997-01-01
The following accomplishments were made during the present reporting period: (1) We expanded our new method, for identifying the presence of absorbing aerosols and simultaneously performing atmospheric correction, to the point where it could be added as a subroutine to the MODIS water-leaving radiance algorithm; (2) We successfully acquired micro pulse lidar (MPL) data at sea during a cruise in February; (3) We developed a water-leaving radiance algorithm module for an approximate correction of the MODIS instrument polarization sensitivity; and (4) We participated in one cruise to the Gulf of Maine, a well known region for mesoscale coccolithophore blooms. We measured coccolithophore abundance, production and optical properties.
Ferrazzi, Giulio; Kuklisova Murgasova, Maria; Arichi, Tomoki; Malamateniou, Christina; Fox, Matthew J; Makropoulos, Antonios; Allsop, Joanna; Rutherford, Mary; Malik, Shaihan; Aljabar, Paul; Hajnal, Joseph V
2014-11-01
There is growing interest in exploring fetal functional brain development, particularly with Resting State fMRI. However, during a typical fMRI acquisition, the womb moves due to maternal respiration and the fetus may perform large-scale and unpredictable movements. Conventional fMRI processing pipelines, which assume that brain movements are infrequent or at least small, are not suitable. Previously published studies have tackled this problem by adopting conventional methods and discarding as much as 40% or more of the acquired data. In this work, we developed and tested a processing framework for fetal Resting State fMRI, capable of correcting gross motion. The method comprises bias field and spin history corrections in the scanner frame of reference, combined with slice-to-volume registration and scattered data interpolation to place all data into a consistent anatomical space. The aim is to recover an ordered set of samples suitable for further analysis using standard tools such as Group Independent Component Analysis (Group ICA). We have tested the approach using simulations and in vivo data acquired at 1.5 T. After full motion correction, Group ICA performed on a population of 8 fetuses extracted 20 networks, 6 of which were identified as matching those previously observed in preterm babies. Copyright © 2014 Elsevier Inc. All rights reserved.
Wang, Li; Li, Gang; Adeli, Ehsan; Liu, Mingxia; Wu, Zhengwang; Meng, Yu; Lin, Weili; Shen, Dinggang
2018-06-01
Tissue segmentation of infant brain MRIs with risk of autism is critically important for characterizing early brain development and identifying biomarkers. However, it is challenging due to low tissue contrast caused by inherent ongoing myelination and maturation. In particular, at around 6 months of age, the voxel intensities in both gray matter and white matter are within similar ranges, leading to the lowest image contrast in the first postnatal year. Previous studies typically employed intensity images and tentatively estimated tissue probabilities to train a sequence of classifiers for tissue segmentation. However, important prior knowledge of brain anatomy is largely ignored during the segmentation. Consequently, the segmentation accuracy is still limited and topological errors frequently exist, which will significantly degrade the performance of subsequent analyses. Although topological errors could be partially handled by retrospective topological correction methods, their results may still be anatomically incorrect. To address these challenges, in this article, we propose an anatomy-guided joint tissue segmentation and topological correction framework for isointense infant MRI. In particular, we adopt a signed distance map with respect to the outer cortical surface as anatomical prior knowledge, and incorporate such prior information into the proposed framework to guide segmentation in ambiguous regions. Experimental results on subjects from the National Database for Autism Research demonstrate the effectiveness of the proposed framework in handling topological errors, as well as a degree of robustness to motion. Comparisons with state-of-the-art methods further demonstrate the advantages of the proposed method in terms of both segmentation accuracy and topological correctness. © 2018 Wiley Periodicals, Inc.
Maikusa, Norihide; Yamashita, Fumio; Tanaka, Kenichiro; Abe, Osamu; Kawaguchi, Atsushi; Kabasawa, Hiroyuki; Chiba, Shoma; Kasahara, Akihiro; Kobayashi, Nobuhisa; Yuasa, Tetsuya; Sato, Noriko; Matsuda, Hiroshi; Iwatsubo, Takeshi
2013-06-01
Serial magnetic resonance imaging (MRI) images acquired from multisite and multivendor MRI scanners are widely used in measuring longitudinal structural changes in the brain. Precise and accurate measurements are important in understanding the natural progression of neurodegenerative disorders such as Alzheimer's disease. However, geometric distortions in MRI images decrease the accuracy and precision of volumetric or morphometric measurements. To solve this problem, the authors suggest a commercially available phantom-based distortion correction method that accommodates the variation in geometric distortion within MRI images obtained with multivendor MRI scanners. The authors' method is based on image warping using a polynomial function. The method detects fiducial points within a phantom image using phantom analysis software developed by the Mayo Clinic and calculates warping functions for distortion correction. To quantify the effectiveness of the authors' method, the authors corrected phantom images obtained from multivendor MRI scanners and calculated the root-mean-square (RMS) of fiducial errors and the circularity ratio as evaluation values. The authors also compared the performance of the authors' method with that of a distortion correction method based on a spherical harmonics description of the generic gradient design parameters. Moreover, the authors evaluated whether this correction improves the test-retest reproducibility of voxel-based morphometry in human studies. A Wilcoxon signed-rank test with uncorrected and corrected images was performed. The root-mean-square errors and circularity ratios for all slices significantly improved (p < 0.0001) after the authors' distortion correction. Additionally, the authors' method was significantly better than a distortion correction method based on a description of spherical harmonics in reducing the root-mean-square errors (p < 0.001 and p = 0.0337, respectively). Moreover, the authors' method reduced the RMS error arising from gradient nonlinearity more than gradwarp methods. In human studies, the coefficient of variation of voxel-based morphometry analysis of the whole brain improved significantly from 3.46% to 2.70% after distortion correction of the whole gray matter using the authors' method (Wilcoxon signed-rank test, p < 0.05). The authors proposed a phantom-based distortion correction method to improve reproducibility in longitudinal structural brain analysis using multivendor MRI. The authors evaluated the method for phantom images in terms of two geometrical values and for human images in terms of test-retest reproducibility. The results showed that distortion was corrected significantly using the authors' method. In human studies, the reproducibility of voxel-based morphometry analysis for the whole gray matter significantly improved after distortion correction using the authors' method.
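The core of such phantom-based correction is a 2D polynomial warp fitted from detected fiducial positions to their known true positions, with the RMS fiducial error as an evaluation value. A minimal sketch; the polynomial order and fitting details here are illustrative assumptions, not the authors' exact formulation:

    import numpy as np

    def fit_polynomial_warp(measured, true, order=3):
        """Fit a 2D polynomial warp mapping measured fiducial positions to
        their known true positions, for phantom-based distortion correction.

        measured, true : (N, 2) arrays of (x, y) coordinates
        Returns the coefficient matrix and the post-fit RMS fiducial error.
        """
        x, y = measured[:, 0], measured[:, 1]
        # Design matrix with all monomials x^i * y^j, i + j <= order
        terms = [x ** i * y ** j for i in range(order + 1)
                                 for j in range(order + 1 - i)]
        A = np.column_stack(terms)
        coeffs, *_ = np.linalg.lstsq(A, true, rcond=None)
        residual = A @ coeffs - true
        rms = np.sqrt((residual ** 2).sum(axis=1).mean())
        return coeffs, rms

Applying the fitted warp to every voxel coordinate (followed by interpolation) then yields the distortion-corrected image.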
Murata, Fernando Henrique Antunes; Ferreira, Marina Neves; Pereira-Chioccola, Vera Lucia; Spegiorin, Lígia Cosentino Junqueira Franco; Meira-Strejevitch, Cristina da Silva; Gava, Ricardo; Silveira-Carvalho, Aparecida Perpétuo; de Mattos, Luiz Carlos; Brandão de Mattos, Cinara Cássia
2017-09-01
Toxoplasmosis during pregnancy can have severe consequences. The use of sensitive and specific serological and molecular methods is extremely important for the correct diagnosis of the disease. We compared the ELISA and ELFA serological methods, conventional PCR (cPCR), nested PCR and quantitative PCR (qPCR) in the diagnosis of Toxoplasma gondii infection in pregnant women without clinical suspicion of toxoplasmosis (G1=94) and with clinical suspicion of toxoplasmosis (G2=53). The results were compared using the Kappa index, and the sensitivity, specificity, positive predictive value and negative predictive value were calculated. The serological methods showed concordance between ELISA and ELFA, even though ELFA identified more positive cases than ELISA. The molecular methods were discrepant, with cPCR using B22/23 primers having greater sensitivity but lower specificity than the other molecular methods. Copyright © 2017 Elsevier Inc. All rights reserved.
Babaloukas, Georgios; Tentolouris, Nicholas; Liatis, Stavros; Sklavounou, Alexandra; Perrea, Despoina
2011-12-01
Correction of vignetting on images obtained by a digital camera mounted on a microscope is essential before applying image analysis. The aim of this study is to evaluate three methods for retrospective correction of vignetting on medical microscopy images and compare them with a prospective correction method. One digital image from each of four different tissues was used, and a vignetting effect was applied to each of these images. The resulting vignetted image was replicated four times, and in each replica a different method for vignetting correction was applied with the Fiji and GIMP software tools. The highest peak signal-to-noise ratio in the comparison of each method to the original image was obtained from the prospective method in all tissues. The morphological filtering method provided the highest peak signal-to-noise ratio value amongst the retrospective methods. The prospective method is suggested as the method of choice for correction of vignetting; if it is not applicable, then morphological filtering may be suggested as the retrospective alternative. © 2011 The Authors. Journal of Microscopy © 2011 Royal Microscopical Society.
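The retrospective morphological-filtering approach and the PSNR comparison can both be sketched compactly; the structuring-element size and smoothing below are illustrative assumptions, not the paper's settings:

    import numpy as np
    from scipy import ndimage

    def correct_vignetting_morphological(img, size=101):
        """Retrospective vignetting correction sketch: estimate the smooth
        illumination field with a large grayscale morphological opening,
        then divide it out (flat-field style)."""
        field = ndimage.grey_opening(img.astype(float), size=(size, size))
        field = ndimage.gaussian_filter(field, sigma=size / 4)
        return img / np.maximum(field / field.mean(), 1e-6)

    def psnr(reference, test):
        """Peak signal-to-noise ratio used to score each correction method
        against the original (unvignetted) image."""
        mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
        return 10 * np.log10(reference.max() ** 2 / mse)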
A UMLS-based spell checker for natural language processing in vaccine safety
Tolentino, Herman D; Matters, Michael D; Walop, Wikke; Law, Barbara; Tong, Wesley; Liu, Fang; Fontelo, Paul; Kohl, Katrin; Payne, Daniel C
2007-01-01
Background The Institute of Medicine has identified patient safety as a key goal for health care in the United States. Detecting vaccine adverse events is an important public health activity that contributes to patient safety. Reports about adverse events following immunization (AEFI) from surveillance systems contain free-text components that can be analyzed using natural language processing. To extract Unified Medical Language System (UMLS) concepts from free text and classify AEFI reports based on concepts they contain, we first needed to clean the text by expanding abbreviations and shortcuts and correcting spelling errors. Our objective in this paper was to create a UMLS-based spelling error correction tool as a first step in the natural language processing (NLP) pipeline for AEFI reports. Methods We developed spell checking algorithms using open source tools. We used de-identified AEFI surveillance reports to create free-text data sets for analysis. After expansion of abbreviated clinical terms and shortcuts, we performed spelling correction in four steps: (1) error detection, (2) word list generation, (3) word list disambiguation and (4) error correction. We then measured the performance of the resulting spell checker by comparing it to manual correction. Results We used 12,056 words to train the spell checker and tested its performance on 8,131 words. During testing, sensitivity, specificity, and positive predictive value (PPV) for the spell checker were 74% (95% CI: 74–75), 100% (95% CI: 100–100), and 47% (95% CI: 46%–48%), respectively. Conclusion We created a prototype spell checker that can be used to process AEFI reports. We used the UMLS Specialist Lexicon as the primary source of dictionary terms and the WordNet lexicon as a secondary source. We used the UMLS as a domain-specific source of dictionary terms to compare potentially misspelled words in the corpus. The prototype sensitivity was comparable to currently available tools, but the specificity was much superior. The slow processing speed may be improved by trimming it down to the most useful component algorithms. Other investigators may find the methods we developed useful for cleaning text using lexicons specific to their area of interest. PMID:17295907
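The four-step pipeline above can be sketched with a generic lexicon lookup and similarity-based candidate ranking; the `lexicon` argument stands in for the UMLS SPECIALIST Lexicon and WordNet terms, and the ranking is a simple stand-in for the paper's disambiguation step:

    import difflib

    def spell_correct(tokens, lexicon):
        """Four-step correction sketch mirroring the pipeline described:
        (1) detect errors as out-of-lexicon tokens, (2) generate candidate
        word lists, (3) disambiguate by similarity ranking, (4) correct."""
        corrected = []
        vocab = set(lexicon)
        for tok in tokens:
            if tok.lower() in vocab:          # step 1: token is not an error
                corrected.append(tok)
                continue
            # step 2: candidate word list generation
            cands = difflib.get_close_matches(tok.lower(), lexicon,
                                              n=3, cutoff=0.8)
            # steps 3-4: pick the best candidate, else leave token unchanged
            corrected.append(cands[0] if cands else tok)
        return corrected

    # Toy usage with a hypothetical mini-lexicon
    print(spell_correct(["feverr", "rash"], ["fever", "rash", "edema"]))
    # -> ['fever', 'rash']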
Staircase-scene-based nonuniformity correction in aerial point target detection systems.
Huo, Lijun; Zhou, Dabiao; Wang, Dejiang; Liu, Rang; He, Bin
2016-09-01
Focal-plane arrays (FPAs) are often afflicted by heavy fixed-pattern noise, which severely degrades the detection rate and increases the false alarms in airborne point target detection systems. Thus, high-precision nonuniformity correction is an essential preprocessing step. In this paper, a new nonuniformity correction method is proposed based on a staircase scene. This correction method can compensate for the nonlinear response of the detector and calibrate the entire optical system with computational efficiency and implementation simplicity. A proof-of-concept point target detection system is then established with a long-wave Sofradir FPA. Finally, the local standard deviation of the corrected image and the signal-to-clutter ratio of the Airy disk of a Boeing B738 are measured to evaluate the performance of the proposed nonuniformity correction method. Our experimental results demonstrate that the proposed correction method achieves high-quality corrections.
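The staircase idea is to observe each pixel at several known scene levels and fit a per-pixel polynomial mapping its raw response onto a common target response, capturing nonlinearity as well as gain/offset variation. A sketch under those assumptions (the target choice and polynomial order are illustrative, not the paper's calibration):

    import numpy as np

    def staircase_nuc(frames, order=2):
        """Staircase-scene NUC sketch: `frames` holds one raw frame per
        staircase step, shape (K, H, W) with K > order. For each pixel, fit
        a polynomial mapping its raw response onto the array-mean response
        at each step, compensating nonlinear gain/offset variation."""
        K, H, W = frames.shape
        flat = frames.reshape(K, -1).astype(float)
        target = flat.mean(axis=1)             # array-mean response per step
        coeffs = np.empty((order + 1, H * W))
        for p in range(H * W):                 # per-pixel polynomial fit
            coeffs[:, p] = np.polyfit(flat[:, p], target, order)
        return coeffs.reshape(order + 1, H, W)

    def apply_nuc(frame, coeffs):
        """Evaluate the per-pixel polynomial (highest-order coefficient
        first, matching np.polyfit) on a raw frame via Horner's rule."""
        out = np.zeros(frame.shape, dtype=float)
        for c in coeffs:
            out = out * frame + c
        return out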
Ferreira, Adriano Martison; Bonesso, Mariana Fávero; Mondelli, Alessandro Lia; da Cunha, Maria de Lourdes Ribeiro de Souza
2012-12-01
The emergence of Staphylococcus spp. not only as human pathogens, but also as reservoirs of antibiotic resistance determinants, requires the development of methods for their rapid and reliable identification in medically important samples. The aim of this study was to compare three phenotypic methods for the identification of Staphylococcus spp. isolated from patients with urinary tract infection using the PCR of the 16S-23S interspace region generating molecular weight patterns (ITR-PCR) as reference. All 57 S. saprophyticus studied were correctly identified using only the novobiocin disk. A rate of agreement of 98.0% was obtained for the simplified battery of biochemical tests in relation to ITR-PCR, whereas the Vitek I system and novobiocin disk showed 81.2% and 89.1% agreement, respectively. No other novobiocin-resistant non-S. saprophyticus strain was identified. Thus, the novobiocin disk is a feasible alternative for the identification of S. saprophyticus in urine samples in laboratories with limited resources. ITR-PCR and the simplified battery of biochemical tests were more reliable than the commercial systems currently available. This study confirms that automated systems are still unable to correctly differentiate CoNS species and that simple, reliable and inexpensive methods can be used for routine identification. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mlynarczuk, Mariusz; Skiba, Marta
2017-06-01
The correct and consistent identification of the petrographic properties of coal is an important issue for researchers in the fields of mining and geology. As part of the study described in this paper, investigations concerning the application of artificial intelligence methods for the identification of the aforementioned characteristics were carried out. The methods in question were used to identify the maceral groups of coal, i.e. vitrinite, inertinite, and liptinite. Additionally, an attempt was made to identify some non-organic minerals. The analyses were performed using pattern recognition techniques (NN, kNN), as well as artificial neural network techniques (a multilayer perceptron, MLP). The classification process was carried out using microscopy images of polished sections of coals. A multidimensional feature space was defined, which made it possible to classify the discussed structures automatically, based on the methods of pattern recognition and algorithms of the artificial neural networks. The study also assessed how the parameters of the applied methods affected the final outcome of the classification procedure. The result of the analyses was a high percentage (over 97%) of correct classifications of maceral groups and mineral components. The paper also discusses an attempt to analyze particular macerals of the inertinite group. It was demonstrated that using artificial neural networks to this end makes it possible to classify the macerals properly in over 91% of cases. Thus, it was proved that artificial intelligence methods can be successfully applied to the identification of selected petrographic features of coal.
Identification and analysis of student conceptions used to solve chemical equilibrium problems
NASA Astrophysics Data System (ADS)
Voska, Kirk William
This study identified and quantified chemistry conceptions students use when solving chemical equilibrium problems requiring the application of Le Chatelier's principle, and explored the feasibility of designing a paper and pencil test for this purpose. It also demonstrated the utility of conditional probabilities to assess test quality. A 10-item pencil-and-paper, two-tier diagnostic instrument, the Test to Identify Student Conceptualizations (TISC), was developed and administered to 95 second-semester university general chemistry students after they received regular course instruction concerning equilibrium in homogeneous aqueous, heterogeneous aqueous, and homogeneous gaseous systems. The content validity of TISC was established through a review of TISC by a panel of experts; construct validity was established through semi-structured interviews and conditional probabilities. Nine students were then selected from a stratified random sample for interviews to validate TISC. The probability that TISC correctly identified an answer given by a student in an interview was p = .64, while the probability that TISC correctly identified a reason given by a student in an interview was p = .49. Each TISC item contained two parts. In the first part the student selected the correct answer to a problem from a set of four choices. In the second part students wrote reasons for their answer to the first part. TISC questions were designed to identify students' conceptions concerning the application of Le Chatelier's principle, the constancy of the equilibrium constant, K, and the effect of a catalyst. Eleven prevalent incorrect conceptions were identified. This study found students consistently selected correct answers more frequently (53% of the time) than they provided correct reasons (33% of the time). The association between student answers and respective reasons on each TISC item was quantified using conditional probabilities calculated from logistic regression coefficients. The probability a student provided correct reasoning (B) when the student selected a correct answer (A) ranged from P(B|A) = .32 to P(B|A) = .82. However, the probability a student selected a correct answer when they provided correct reasoning ranged from P(A|B) = .96 to P(A|B) = 1. The K-R 20 reliability for TISC was found to be .79.
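The conditional probabilities P(B|A) and P(A|B) reported above can, for illustration, be estimated directly as empirical frequencies from paired answer/reason outcomes; the study derived them from logistic regression coefficients, so this sketch shows only the quantities involved, on toy data:

    import numpy as np

    def conditional_probs(answers, reasons):
        """Estimate P(correct reason | correct answer) and the converse from
        paired 0/1 outcomes per student on one item."""
        answers = np.asarray(answers, dtype=bool)
        reasons = np.asarray(reasons, dtype=bool)
        joint = (answers & reasons).sum()
        p_b_given_a = joint / answers.sum()
        p_a_given_b = joint / reasons.sum()
        return p_b_given_a, p_a_given_b

    # Toy data: 10 students on one item (1 = correct)
    ans = [1, 1, 1, 1, 1, 1, 0, 0, 1, 0]
    rea = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
    print(conditional_probs(ans, rea))   # -> (0.571..., 1.0)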
NASA Astrophysics Data System (ADS)
Li, Da-Wei; Meng, Dan; Brüschweiler, Rafael
2015-05-01
A robust NMR resonance assignment method is introduced for proteins whose 3D structure has previously been determined by X-ray crystallography. The goal of the method is to obtain a subset of correct assignments from a parsimonious set of 3D NMR experiments of 15N, 13C labeled proteins. Chemical shifts of sequential residue pairs are predicted from static protein structures using PPM_One and are then compared with the corresponding experimental shifts. Globally optimized weighted matching identifies the assignments that are robust with respect to small changes in NMR cross-peak positions. The method, termed PASSPORT, is demonstrated for 4 proteins with 100-250 amino acids using 3D HNCA and 3D CBCA(CO)NH experiments as input, producing correct assignments with high reliability for 22% of the residues. The method, which works best for Gly, Ala, Ser, and Thr residues, provides assignments that serve as anchor points for additional assignments by both manual and semi-automated methods, or they can be used directly for further studies, e.g. on ligand binding, protein dynamics, or post-translational modifications such as phosphorylation. PMID:25863893
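The globally optimized weighted matching step can be illustrated with the Hungarian algorithm on an absolute-difference cost matrix. This toy sketch matches single shift values, whereas PASSPORT matches sequential residue pairs across several shift dimensions and additionally tests robustness to peak perturbations:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def assign_peaks(predicted, observed, tol=0.3):
        """Globally optimized weighted matching sketch: pair predicted
        chemical shifts (from structure) with observed cross-peak shifts by
        minimizing the total absolute difference, keeping only pairs within
        `tol` ppm.

        predicted, observed : 1D arrays of shifts (ppm)
        Returns a list of (predicted_index, observed_index) pairs.
        """
        predicted = np.asarray(predicted, dtype=float)
        observed = np.asarray(observed, dtype=float)
        cost = np.abs(predicted[:, None] - observed[None, :])
        rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= tol]

Repeating the matching with slightly perturbed peak positions and keeping only pairs that persist would mirror the robustness criterion that PASSPORT uses to select reliable anchor assignments.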
Qiao, Xiaojun; Jiang, Jinbao; Qi, Xiaotong; Guo, Haiqiang; Yuan, Deshuai
2017-04-01
It is well known that fungi-contaminated peanuts contain potent carcinogens. Efficiently identifying and separating contaminated kernels can help prevent aflatoxin from entering the food chain. In this study, shortwave infrared (SWIR) hyperspectral images were used to identify the prepared contaminated kernels. The feature selection method of analysis of variance (ANOVA) and the feature extraction method of nonparametric weighted feature extraction (NWFE) were used to concentrate spectral information into a subspace where contaminated and healthy peanuts have favorable separability. Peanut pixels were then classified using SVM. Moreover, the image segmentation method of region growing was applied to segment the image into kernel-scale patches and to number the kernels. The results show that pixel-wise classification accuracies are 99.13% for breed A, 96.72% for breed B and 99.73% for breed C in the learning images, and 96.32%, 94.2% and 97.51% in the validation images. Contaminated peanuts were correctly marked as aberrant kernels in both the learning images and the validation images. Copyright © 2016 Elsevier Ltd. All rights reserved.
Alonso, Joan Francesc; Romero, Sergio; Mañanas, Miguel Ángel; Rojas, Mónica; Riba, Jordi; Barbanoj, Manel José
2015-10-01
The identification of the brain regions involved in neuropharmacological action is a potential procedure for drug development. These regions are commonly determined by the voxels showing significant statistical differences when placebo-induced effects are compared with drug-elicited effects. LORETA is an electroencephalography (EEG) source imaging technique frequently used to identify brain structures affected by a drug. The aim of the present study was to evaluate different methods for the correction of multiple comparisons in LORETA maps. These methods, which have been commonly used in neuroimaging and in simulation studies, were applied to a real pharmaco-EEG study in which the effects of increasing benzodiazepine doses on the central nervous system, as measured by LORETA, were investigated. Data consisted of EEG recordings obtained from nine volunteers who received single oral doses of alprazolam 0.25, 0.5, and 1 mg, and placebo in a randomized crossover double-blind design. The identification of active regions was highly dependent on the selected multiple-test correction procedure. The combined-criteria approach known as cluster mass was useful in revealing that increasing drug doses led to higher intensity and spread of the pharmacologically induced changes in intracerebral current density.
False fame prevented: avoiding fluency effects without judgmental correction.
Topolinski, Sascha; Strack, Fritz
2010-05-01
Three studies show a way to prevent fluency effects independently of judgmental correction strategies by identifying and procedurally blocking the sources of fluency variations, which are assumed to be embodied in nature. For verbal stimuli, covert pronunciations are assumed to be the crucial source of fluency gains. As a consequence, blocking such pronunciation simulations through a secondary oral motor task decreased the false-fame effect for repeatedly presented names of actors (Experiment 1) as well as prevented increases in trust due to repetition for brand names and names of shares in the stock market (Experiment 2). Extending this evidence beyond repeated exposure, we demonstrated that blocking oral motor simulations also prevented fluency effects of word pronunciation on judgments of hazardousness (Experiment 3). Concerning the realm of judgment correction, this procedural blocking of (biasing) associative processes is a decontamination method not considered before in the literature, because it is independent of exposure control, mood, motivation, and post hoc correction strategies. The present results also have implications for applied issues, such as advertising and investment decisions. 2010 APA, all rights reserved
NASA Astrophysics Data System (ADS)
Jayasekera, D. L.; Kaluarachchi, J.; Kim, U.
2011-12-01
Rural river basins with sufficient water availability to maintain economic livelihoods can be affected by seasonal fluctuations of precipitation and sometimes by droughts. In addition, climate change impacts can alter future water availability. General Circulation Models (GCMs) provide credible quantitative estimates of future climate conditions, but such estimates are often characterized by bias and coarse-scale resolution, making it necessary to downscale the outputs for use in regional hydrologic models. This study develops a methodology to downscale and project future monthly precipitation in moderate-scale basins where data are limited. A stochastic framework for single-site and multi-site generation of weekly rainfall is developed that preserves the historical temporal and spatial correlation structures. The spatial correlations in the simulated occurrences and amounts are induced using spatially correlated yet serially independent random numbers. This method is applied to generate weekly precipitation data for a 100-year period in the Nam Ngum River Basin (NNRB), which has a land area of 16,780 km2 and is located in Lao P.D.R. The method is developed and applied using precipitation data from 1961 to 2000 for 10 selected weather stations that represent the basin rainfall characteristics. A bias-correction method based on fitted theoretical probability distribution transformations is applied to improve the monthly mean frequency, intensity and amount of raw GCM precipitation predicted at a given weather station using CGCM3.1 and ECHAM5 for the SRES A2 emission scenario. The bias-correction procedure adjusts GCM precipitation to approximate the long-term frequency and intensity distribution observed at a given weather station. The index of agreement and the mean absolute error are determined to assess the overall ability and performance of the bias-correction method. The generated precipitation series, aggregated at a monthly time step, was perturbed by the change factors estimated using the corrected GCM and baseline scenarios for the future time periods of 2011-2050 and 2051-2090. A network-based hydrologic and water resources model, WEAP, was used to simulate the current water allocation and management practices to identify the impacts of climate change in the 20th century. The results of this work are used to identify the multiple challenges faced by stakeholders and planners in water allocation for competing demands in the presence of climate change impacts.
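Distribution-based bias correction of the kind described — mapping GCM values through fitted cumulative distributions — is often implemented as gamma-to-gamma quantile mapping for precipitation. A sketch under that assumption (the summary above does not name the fitted distribution family, so the gamma choice is an illustrative convention):

    import numpy as np
    from scipy import stats

    def bias_correct_gamma(gcm_hist, obs, gcm_future):
        """Quantile-mapping bias correction sketch for wet-period
        precipitation amounts: fit gamma distributions to historical GCM
        output and to observations, then map each future GCM value through
        obs_quantile(gcm_cdf(x))."""
        g_shape, _, g_scale = stats.gamma.fit(gcm_hist, floc=0)
        o_shape, _, o_scale = stats.gamma.fit(obs, floc=0)
        u = stats.gamma.cdf(gcm_future, g_shape, scale=g_scale)
        return stats.gamma.ppf(u, o_shape, scale=o_scale)

After correction, the long-term frequency and intensity distribution of the GCM series approximates that observed at the station, which is exactly the property the index of agreement and mean absolute error are used to verify.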
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna R.; Wickert, Mark A.
2017-05-01
A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear, or piecewise linear, models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short-wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis of polynomial order and coefficient precision is performed. Comprehensive testing across multiple data sets provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
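Two of the evaluation pieces mentioned — the residual non-uniformity metric and the screening for bad pixels — can be sketched as follows; the coefficient-space outlier test is a simple stand-in for the PCA-based machine-learning screening described above, not the study's algorithm:

    import numpy as np

    def residual_nonuniformity(corrected):
        """Spatial non-uniformity metric for a corrected flat-field frame:
        spatial standard deviation as a percentage of the spatial mean."""
        return 100.0 * corrected.std() / corrected.mean()

    def flag_bad_pixels(coeffs, k=4.0):
        """Flag bad pixels as outliers in per-pixel correction-coefficient
        space. `coeffs` has shape (order + 1, H, W); a pixel is flagged if
        any of its coefficients lies more than k standard deviations from
        the array mean for that coefficient."""
        flat = coeffs.reshape(coeffs.shape[0], -1)
        z = (flat - flat.mean(axis=1, keepdims=True)) \
            / flat.std(axis=1, keepdims=True)
        return (np.abs(z) > k).any(axis=0).reshape(coeffs.shape[1:])

Flagged pixels would then be replaced (e.g., by neighborhood interpolation) before the non-uniformity metric is computed, so dead or anomalous detectors do not dominate the comparison between polynomial orders.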
External quality assessment of dengue and chikungunya diagnostics in the Asia Pacific region, 2015
Soh, Li Ting; Squires, Raynal C; Tan, Li Kiang; Pok, Kwoon Yong; Yang, HuiTing; Liew, Christina; Shah, Aparna Singh; Aaskov, John; Abubakar, Sazaly; Hasabe, Futoshi; Ng, Lee Ching
2016-01-01
Objective To conduct an external quality assessment (EQA) of dengue and chikungunya diagnostics among national-level public health laboratories in the Asia Pacific region following the first round of EQA for dengue diagnostics in 2013. Methods Twenty-four national-level public health laboratories performed routine diagnostic assays on a proficiency testing panel consisting of two modules. Module A contained serum samples spiked with cultured dengue virus (DENV) or chikungunya virus (CHIKV) for the detection of nucleic acid and DENV non-structural protein 1 (NS1) antigen. Module B contained human serum samples for the detection of anti-DENV antibodies. Results Among 20 laboratories testing Module A, 17 (85%) correctly detected DENV RNA by reverse transcription polymerase chain reaction (RT–PCR), 18 (90%) correctly determined serotype and 19 (95%) correctly identified CHIKV by RT–PCR. Ten of 15 (66.7%) laboratories performing NS1 antigen assays obtained the correct results. In Module B, 18/23 (78.3%) and 20/20 (100%) of laboratories correctly detected anti-DENV IgM and IgG, respectively. Detection of acute/recent DENV infection by both molecular (RT–PCR) and serological methods (IgM) was available in 19/24 (79.2%) participating laboratories. Discussion Accurate laboratory testing is a critical component of dengue and chikungunya surveillance and control. This second round of EQA reveals good proficiency in molecular and serological diagnostics of these diseases in the Asia Pacific region. Further comprehensive diagnostic testing, including testing for Zika virus, should comprise future iterations of the EQA. PMID:27508088
Jensen, Christian Salgård; Dam-Nielsen, Casper; Arpi, Magnus
2015-08-01
The aim of this study was to investigate whether large colony beta-hemolytic streptococci containing Lancefield groups A, C, and G can be adequately identified using matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-ToF). Previous studies show varying results, with an identification rate from below 50% to 100%. Large colony beta-hemolytic streptococci containing Lancefield groups A, C, and G isolated from blood cultures between January 1, 2007 and May 1, 2012 were included in the study. Isolates were identified to the species level using a combination of phenotypic characteristics and 16S rRNA sequencing. The isolates were subjected to MALDI-ToF analysis. We used a two-stage approach starting with the direct method. If no valid result was obtained, we proceeded to an extraction protocol. Scores above 2 were considered valid identification at the species level. A total of 97 Streptococcus pyogenes, 133 Streptococcus dysgalactiae, and 2 Streptococcus canis isolates were tested; 94%, 66%, and 100% of S. pyogenes, S. dysgalactiae, and S. canis, respectively, were correctly identified by MALDI-ToF. In most instances when the isolates were not identified by MALDI-ToF, this was because MALDI-ToF was unable to differentiate between S. pyogenes and S. dysgalactiae. By removing two S. pyogenes reference spectra from the MALDI-ToF database, the proportion of correctly identified isolates increased to 96% overall. MALDI-ToF is a promising method for discriminating between S. dysgalactiae, S. canis, and S. equi, although more strains need to be tested to clarify this.
System and method for aligning heliostats of a solar power tower
Convery, Mark R.
2013-01-01
Disclosed is a solar power tower heliostat alignment system and method that includes a solar power tower with a focal area, a plurality of heliostats that each reflect sunlight towards the focal area of the solar power tower, an off-focal area location substantially close to the focal area of the solar power tower, a communication link between the off-focal area location and a misaligned heliostat, and a processor that interprets the communication between the off-focal area location and the misaligned heliostat to identify the misaligned heliostat from the plurality of heliostats and that determines a correction for the identified misaligned heliostat to realign the misaligned heliostat to reflect sunlight towards the focal area of the solar power tower.
Jackman, Patrick; Sun, Da-Wen; Allen, Paul; Valous, Nektarios A; Mendoza, Fernando; Ward, Paddy
2010-04-01
A method to discriminate between various grades of pork and turkey ham was developed using colour and wavelet texture features. Image analysis methods originally developed for predicting the palatability of beef were applied to rapidly identify the ham grade. With high quality digital images of 50-94 slices per ham it was possible to identify the greyscale that best expressed the differences between the various ham grades. The best 10 discriminating image features were then found with a genetic algorithm. Using the best 10 image features, simple linear discriminant analysis models produced 100% correct classifications for both pork and turkey on both calibration and validation sets. 2009 Elsevier Ltd. All rights reserved.
Recognizing explosion sites with a self-organizing network for unsupervised learning
NASA Astrophysics Data System (ADS)
Tarvainen, Matti
1999-06-01
A self-organizing neural network model has been developed for identifying mining explosion locations in different environments in Finland and adjacent areas. The main advantage of the method is its ability to automatically find a suitable network structure and to correctly identify explosions as such. Explosion site recognition was performed using waveform attributes extracted from various kinds of event records from the small-aperture array FINESS in Finland. The recognition used P-S phase arrival differences and rough azimuth estimates to provide a first robust epicentre location. This, in turn, led to correct mining-district identification, after which more detailed tuning was performed using different phase amplitude and signal-to-noise attributes. The explosions studied here originated in mines and quarries located in Finland, on the coast of Estonia and in the St. Petersburg area, Russia. Although the Helsinki bulletins in 1995 and 1996 listed 1649 events in these areas, analysis was restricted to the 380 (ML≥2) events which were also found in the reviewed event bulletins (REB) of the CTBTO/UN prototype international data centre (pIDC) in Arlington, VA, USA. These 380 events with different attributes were selected for the learning stage. Because no 'ground-truth' information was available, the corresponding mining 'code' coordinates used earlier to compile the Helsinki bulletins were utilized instead. The novel self-organizing method was tested on 18 new event recordings from the mentioned areas in January-February 1997, of which 15 were connected to the correct mines. The three misconnected events were those which did not have all matching attributes in the self-organizing maps (SOMs) network.
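A minimal self-organizing map — a weight grid trained with a shrinking Gaussian neighborhood and learning rate — captures the unsupervised learning scheme in question; the grid size and schedules below are illustrative assumptions, not the tuned network of the study:

    import numpy as np

    def train_som(data, grid=(8, 8), epochs=30, lr0=0.5, sigma0=3.0, seed=0):
        """Minimal self-organizing map sketch for clustering event attribute
        vectors (e.g., P-S arrival differences, azimuth estimates, amplitude
        and signal-to-noise attributes). `data` has shape (N, d)."""
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.normal(size=(h, w, data.shape[1]))
        yy, xx = np.mgrid[0:h, 0:w]
        for epoch in range(epochs):
            lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
            sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighborhood
            for x in rng.permutation(data):
                # Best-matching unit for this sample
                d2 = ((weights - x) ** 2).sum(axis=2)
                bi, bj = np.unravel_index(d2.argmin(), d2.shape)
                # Gaussian neighborhood update around the winner
                g = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
                weights += lr * g[..., None] * (x - weights)
        return weights

After training, each map node collects the events that select it as best-matching unit; nodes can then be labeled with mine districts, so a new event is attributed to the district of its winning node.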
NASA Astrophysics Data System (ADS)
Teuho, J.; Johansson, J.; Linden, J.; Saunavaara, V.; Tolvanen, T.; Teräs, M.
2014-01-01
Selection of reconstruction parameters has an effect on image quantification in PET, with an additional contribution from the scanner-specific attenuation correction method. To achieve comparable results in inter- and intra-center comparisons, any existing quantitative differences should be identified and compensated for. In this study, a comparison between PET, PET/CT and PET/MR is performed using an anatomical brain phantom to identify and measure the amount of bias caused by differences in reconstruction and attenuation correction methods, especially in PET/MR. Differences were estimated using visual, qualitative and quantitative analysis. The qualitative analysis consisted of a line profile analysis measuring the reproduction of anatomical structures and the contribution of the number of iterations to image contrast. The quantitative analysis consisted of measurement and comparison of 10 anatomical VOIs, with the HRRT considered the reference. All scanners reproduced the main anatomical structures of the phantom adequately, although the image contrast on the PET/MR was inferior when using a default clinical brain protocol. Image contrast was improved by increasing the number of iterations from 2 to 5 while using 33 subsets. Furthermore, a PET/MR-specific bias was detected, which resulted in underestimation of the activity values in anatomical structures closest to the skull, due to the MR-derived attenuation map that ignores bone. Thus, further improvements in PET/MR reconstruction and attenuation correction could be achieved by optimizing RAMLA-specific reconstruction parameters and implementing bone in the attenuation template.
ERIC Educational Resources Information Center
Tursz, Anne; Crost, Monique; Gerbouin-Rerolle, Pascale; Cook, Jon M.
2010-01-01
Objectives: Test the hypothesis of an underestimation of infant homicides in mortality statistics in France; identify its causes; examine data from the judicial system and their contribution in correcting this underestimation. Methods: A retrospective, cross-sectional study was carried out in 26 courts in three regions of France of cases of infant…
Uncertainty Analysis Principles and Methods
2007-09-01
The Data Processor converts binary coded numbers to values, performs D/A curve fitting, and applies any correction factors that may be required. The report describes the stages or modules involved in the measurement process, identifies all relevant error sources, and develops the corresponding mathematical models.
ERIC Educational Resources Information Center
Silverstein, Roni
2014-01-01
Root cause analysis is a powerful method schools use to analyze data to solve problems; it aims to identify and correct the root causes of problems or events, rather than simply addressing their symptoms. Veteran practitioner, Roni Silverstein, presented the value of this process and practical ways to use it in your school or district. This…
Shao, Yongni; Li, Yuan; Jiang, Linjun; Pan, Jian; He, Yong; Dou, Xiaoming
2016-11-01
The main goal of this research is to examine the feasibility of applying Visible/Near-infrared hyperspectral imaging (Vis/NIR-HSI) and Raman microspectroscopy technology for the non-destructive identification of pesticide varieties (glyphosate and butachlor). Both technologies were explored to investigate how internal elements or characteristics of Chlorella pyrenoidosa change when pesticides are applied and, at the same time, to identify pesticide varieties during this procedure. The successive projections algorithm (SPA) was used to identify the seven most effective wavelengths. With the wavelengths suggested by SPA, a linear discriminant analysis (LDA) model was established to classify the pesticide varieties, and the correct classification rate of the SPA-LDA model reached as high as 100%. For the Raman technique, several partial least squares discriminant analysis models were established with different preprocessing methods, from which we identified the preprocessing approach that achieved the best result. The sensitive wavelengths (SWs) related to the algae's pigments were chosen, and an LDA model was established, with correct identification reaching 90.0%. The results showed that both Vis/NIR-HSI and Raman microspectroscopy techniques are capable of identifying pesticide varieties in an indirect but effective way, and that SPA is an effective wavelength extraction method. The SWs corresponding to microalgae pigments, which were influenced by the pesticides, could also help to characterize different pesticide varieties and benefit the variety identification. Copyright © 2016 Elsevier Ltd. All rights reserved.
Body, Barbara A; Beard, Melodie A; Slechta, E Susan; Hanson, Kimberly E; Barker, Adam P; Babady, N Esther; McMillen, Tracy; Tang, Yi-Wei; Brown-Elliott, Barbara A; Iakhiaeva, Elena; Vasireddy, Ravikiran; Vasireddy, Sruthi; Smith, Terry; Wallace, Richard J; Turner, S; Curtis, L; Butler-Wu, Susan; Rychert, Jenna
2018-06-01
This multicenter study was designed to assess the accuracy and reproducibility of the Vitek MS v3.0 matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry system for identification of Mycobacterium and Nocardia species compared to DNA sequencing. A total of 963 clinical isolates representing 51 taxa were evaluated. In all, 663 isolates were correctly identified to the species level (69%), with another 231 (24%) correctly identified to the complex or group level. Fifty-five isolates (6%) could not be identified despite repeat testing. All of the tuberculous mycobacteria (45/45; 100%) and most of the nontuberculous mycobacteria (569/606; 94%) were correctly identified at least to the group or complex level. However, not all species or subspecies within the M. tuberculosis, M. abscessus, and M. avium complexes and within the M. fortuitum and M. mucogenicum groups could be differentiated. Among the 312 Nocardia isolates tested, 236 (76%) were correctly identified to the species level, with an additional 44 (14%) correctly identified to the complex level. Species within the N. nova and N. transvalensis complexes could not always be differentiated. Eleven percent of the isolates (103/963) underwent repeat testing in order to get a final result. Identification of a representative set of Mycobacterium and Nocardia species was highly reproducible, with 297 of 300 (99%) replicates correctly identified using multiple kit lots, instruments, analysts, and sites. These findings demonstrate that the system is robust and has utility for the routine identification of mycobacteria and Nocardia in clinical practice. Copyright © 2018 American Society for Microbiology.
NASA Astrophysics Data System (ADS)
Silenko, Alexander J.
2016-02-01
General properties of the Foldy-Wouthuysen transformation which is widely used in quantum mechanics and quantum chemistry are considered. Merits and demerits of the original Foldy-Wouthuysen transformation method are analyzed. While this method does not satisfy the Eriksen condition of the Foldy-Wouthuysen transformation, it can be corrected with the use of the Baker-Campbell-Hausdorff formula. We show a possibility of such a correction and propose an appropriate algorithm of calculations. An applicability of the corrected Foldy-Wouthuysen method is restricted by the condition of convergence of a series of relativistic corrections.
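For reference, the Baker-Campbell-Hausdorff formula invoked above combines exponentials of noncommuting operators; a corrected transformation scheme retains its leading commutator terms:

    e^{A} e^{B} = \exp\Big( A + B + \tfrac{1}{2}[A,B]
        + \tfrac{1}{12}\big( [A,[A,B]] + [B,[B,A]] \big) + \cdots \Big)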
Haeussinger, F B; Dresler, T; Heinzel, S; Schecklmann, M; Fallgatter, A J; Ehlis, A-C
2014-07-15
Functional near-infrared spectroscopy (fNIRS) is an optical neuroimaging method that detects temporal concentration changes of oxygenated and deoxygenated hemoglobin within the cortex, so that neural activation can be inferred. However, even though fNIRS is a very practical and well-tolerated method with several advantages particularly in methodically challenging measurement situations (e.g., during tasks involving movement or open speech), it has been shown to be confounded by systemic components of non-cerebral, extra-cranial origin (e.g., changes in blood pressure and heart rate). Especially event-related signal patterns induced by dilation or constriction of superficial forehead and temple veins impair the detection of frontal brain activation elicited by cognitive tasks. To further investigate this phenomenon, we conducted a simultaneous fNIRS-fMRI study applying a working memory paradigm (n-back). Extra-cranial signals were obtained by extracting the BOLD signal from fMRI voxels within the skin. To develop a filter method that corrects for extra-cranial skin blood flow, particularly intended for fNIRS data sets recorded by widely used continuous wave systems with fixed optode distances, we identified channels over the forehead with probable major extra-cranial signal contributions. The averaged signal from these channels was then subtracted from all fNIRS channels of the probe set. Additionally, the data were corrected for motion and non-evoked systemic artifacts. Applying these filters, we can show that measuring brain activation in frontal brain areas with fNIRS was substantially improved. The resulting signal resembled the fMRI parameters more closely than before the correction. Future fNIRS studies measuring functional brain activation in the forehead region need to consider the use of different filter options to correct for interfering extra-cranial signals. Copyright © 2014 Elsevier Inc. All rights reserved.
Chen, Weitian; Sica, Christopher T.; Meyer, Craig H.
2008-01-01
Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 inhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method. PMID:18956462
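To make the approximation concrete, here is a minimal 1-D sketch in Python (hypothetical field map, timing and object; not the authors' implementation): the conjugate-phase factor exp(-i·ω·t) is expanded in Chebyshev polynomials of the field-map value, so the correction costs a handful of FFT-sized reconstructions instead of one per pixel.

import numpy as np
from numpy.polynomial import chebyshev as C

n = 128
x = np.arange(n)
omega = 2 * np.pi * 100 * (x / n - 0.5)     # toy field map, +/- 50 Hz (rad/s)
t = 4e-3 * np.arange(n) / n                 # readout times (s)
k = np.fft.fftfreq(n) * n

obj = np.zeros(n); obj[40:60] = 1.0         # simple 1-D object
# Simulated k-space with off-resonance phase accrual during the readout
E = np.exp(-2j * np.pi * np.outer(k, x) / n + 1j * np.outer(t, omega))
data = E @ obj

# Fit exp(-1j*w*t_m) ~= sum_l coef[l, m] * T_l(w / wmax) over the omega range
L, wmax = 8, np.abs(omega).max()
w_nodes = wmax * np.cos(np.pi * (np.arange(64) + 0.5) / 64)
coef = np.stack([C.chebfit(w_nodes / wmax, np.exp(-1j * w_nodes * tm), L - 1)
                 for tm in t], axis=1)

# Fast conjugate-phase reconstruction: L weighted inverse DFTs in total
Finv = np.exp(2j * np.pi * np.outer(x, k) / n) / n
img_uncorr = Finv @ data
img = sum(C.chebval(omega / wmax, np.eye(L)[l]) * (Finv @ (coef[l] * data))
          for l in range(L))

print(np.abs(img_uncorr - obj).max(), np.abs(img - obj).max())  # residual drops

The same separation is what makes a combined correction cheap: the spatial polynomial factors multiply images, while the time-dependent coefficients multiply raw data.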
A new digitized reverse correction method for hypoid gears based on a one-dimensional probe
NASA Astrophysics Data System (ADS)
Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo
2017-12-01
In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.
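The iterative reverse solution can be pictured as a small least-squares loop. The Python sketch below uses a hypothetical three-parameter surface model standing in for the real machine-tool-settings-to-tooth-surface map; the deviation surface is driven to zero by repeatedly solving a finite-difference sensitivity system, mirroring the mapping between settings and deviations described above.

import numpy as np

def surface(phi, grid):
    # Hypothetical machined-surface heights for settings phi (stand-in model)
    a, b, c = phi
    return a * grid + b * grid**2 + c * np.sin(grid)

grid = np.linspace(-1.0, 1.0, 45)           # measurement points (1-D stand-in)
phi_design = np.array([0.3, -0.2, 0.05])    # settings defining the target surface
target = surface(phi_design, grid)

phi = np.array([0.5, 0.1, -0.1])            # initial (erroneous) settings
eps = 1e-6
for _ in range(5):                          # numerical iterative reverse solution
    dev = surface(phi, grid) - target       # measured deviation surface
    # Finite-difference sensitivity of every point to every setting
    J = np.column_stack([(surface(phi + eps * np.eye(3)[i], grid)
                          - surface(phi, grid)) / eps for i in range(3)])
    dphi, *_ = np.linalg.lstsq(J, -dev, rcond=None)
    phi = phi + dphi
    if np.abs(dev).max() < 1e-9:
        break

print(phi, np.abs(surface(phi, grid) - target).max())  # settings recovered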
Aligning Metabolic Pathways Exploiting Binary Relation of Reactions.
Huang, Yiran; Zhong, Cheng; Lin, Hai Xiang; Huang, Jing
2016-01-01
Metabolic pathway alignment has been widely used to find one-to-one and/or one-to-many reaction mappings to identify the alternative pathways that have similar functions through different sets of reactions, which has important applications in reconstructing phylogeny and understanding metabolic functions. The existing alignment methods exhaustively search reaction sets, which may become infeasible for large pathways. To address this problem, we present an effective alignment method for accurately extracting reaction mappings between two metabolic pathways. We show that connected relation between reactions can be formalized as binary relation of reactions in metabolic pathways, and the multiplications of zero-one matrices for binary relations of reactions can be accomplished in finite steps. By utilizing the multiplications of zero-one matrices for binary relation of reactions, we efficiently obtain reaction sets in a small number of steps without exhaustive search, and accurately uncover biologically relevant reaction mappings. Furthermore, we introduce a measure of topological similarity of nodes (reactions) by comparing the structural similarity of the k-neighborhood subgraphs of the nodes in aligning metabolic pathways. We employ this similarity metric to improve the accuracy of the alignments. The experimental results on the KEGG database show that when compared with other state-of-the-art methods, in most cases, our method obtains better performance in the node correctness and edge correctness, and the number of the edges of the largest common connected subgraph for one-to-one reaction mappings, and the number of correct one-to-many reaction mappings. Our method is scalable in finding more reaction mappings with better biological relevance in large metabolic pathways.
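The zero-one matrix mechanics can be shown in a few lines of Python (the four-reaction relation below is a hypothetical toy, not a KEGG pathway); the loop also makes clear why the multiplications finish in finitely many steps, at most n-1 for n reactions.

import numpy as np

# R[i, j] = 1 if reaction i is connected to reaction j (produces a
# metabolite that reaction j consumes)
R = np.array([[0, 1, 0, 0],       # r0 -> r1
              [0, 0, 1, 0],       # r1 -> r2
              [0, 0, 0, 1],       # r2 -> r3
              [0, 0, 0, 0]], dtype=bool)

def bool_mul(A, B):
    # Zero-one (boolean) matrix multiplication
    return (A.astype(int) @ B.astype(int)) > 0

# Transitive closure: union of the first n-1 boolean powers of R
reach = R.copy()
power = R.copy()
for _ in range(len(R) - 1):
    power = bool_mul(power, R)
    reach |= power

print(reach.astype(int))   # reach[i, j]: reaction j reachable from reaction i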
Jeddi, Fakhri; Yapo-Kouadio, Gisèle Cha; Normand, Anne-Cécile; Cassagne, Carole; Marty, Pierre; Piarroux, Renaud
2017-02-01
In cases of fungal infection of the bloodstream, rapid species identification is crucial to provide adapted therapy and thereby ameliorate patient outcome. Currently, the commercial Sepsityper kit and the sodium-dodecyl sulfate (SDS) method coupled with MALDI-TOF mass spectrometry are the most commonly reported lysis protocols for direct identification of fungi from positive blood culture vials. However, the performance of these two protocols has never been compared on clinical samples. Accordingly, we performed a two-step survey on two distinct panels of clinical positive blood culture vials to identify the most efficient protocol, establish an appropriate log score (LS) cut-off, and validate the best method. We first compared the performance of the Sepsityper and the SDS protocols on 71 clinical samples. For 69 monomicrobial samples, mass spectrometry LS values were significantly higher with the SDS protocol than with the Sepsityper method (P < .0001), especially when the best score of four deposited spots was considered. Next, we established the LS cut-off for accurate identification at 1.7, based on specimen DNA sequence data. Using this LS cut-off, 66 (95.6%) and 46 (66.6%) isolates were correctly identified at the species level with the SDS and the Sepsityper protocols, respectively. In the second arm of the survey, we validated the SDS protocol on an additional panel of 94 clinical samples. Ninety-two (98.9%) of 93 monomicrobial samples were correctly identified at the species level (median LS = 2.061). Overall, our data suggest that the SDS method yields more accurate species identification of yeasts than the Sepsityper protocol. © The Author 2016. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Can quantile mapping improve precipitation extremes from regional climate models?
NASA Astrophysics Data System (ADS)
Tani, Satyanarayana; Gobiet, Andreas
2015-04-01
The ability of quantile mapping to accurately bias correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied on hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
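For reference, the sketch below shows a minimal empirical quantile-mapping correction in Python (a QMα-style baseline on synthetic gamma-distributed rainfall, not the authors' QMβ code). The constant-ratio tail used for values beyond the calibration range is precisely the regime where, as noted above, empirical correction functions become unstable for new extremes.

import numpy as np

rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=5.0, size=3000)     # "observed" daily rainfall
rcm = rng.gamma(shape=2.0, scale=7.0, size=3000)     # biased model rainfall

q = np.linspace(0.01, 0.99, 99)
rcm_q, obs_q = np.quantile(rcm, q), np.quantile(obs, q)

def qm_correct(x):
    # Map model values onto observed quantiles (linear between knots);
    # beyond the calibration range ("new extremes") apply the last ratio
    corrected = np.interp(x, rcm_q, obs_q)
    hi = x > rcm_q[-1]
    corrected[hi] = x[hi] * (obs_q[-1] / rcm_q[-1])
    return corrected

future = rng.gamma(shape=2.0, scale=8.0, size=1000)  # scenario with new extremes
print(np.quantile(qm_correct(future), [0.5, 0.99]))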
Kodak, Tiffany; Campbell, Vincent; Bergmann, Samantha; LeBlanc, Brittany; Kurtz-Nelson, Eva; Cariveau, Tom; Haq, Shaji; Zemantic, Patricia; Mahon, Jacob
2016-09-01
Prior research shows that learners have idiosyncratic responses to error-correction procedures during instruction. Thus, assessments that identify error-correction strategies to include in instruction can aid practitioners in selecting individualized, efficacious, and efficient interventions. The current investigation conducted an assessment to compare 5 error-correction procedures that have been evaluated in the extant literature and are common in instructional practice for children with autism spectrum disorder (ASD). Results showed that the assessment identified efficacious and efficient error-correction procedures for all participants, and 1 procedure was efficient for 4 of the 5 participants. To examine the social validity of error-correction procedures, participants selected among efficacious and efficient interventions in a concurrent-chains assessment. We discuss the results in relation to prior research on error-correction procedures and current instructional practices for learners with ASD. © 2016 Society for the Experimental Analysis of Behavior.
Costa-Alcalde, José Javier; Barbeito-Castiñeiras, Gema; González-Alba, José María; Aguilera, Antonio; Galán, Juan Carlos; Pérez-Del-Molino, María Luisa
2018-06-02
The American Thoracic Society and the Infectious Diseases Society of America recommend that clinically significant non-tuberculous mycobacteria (NTM) should be identified to the species level in order to determine their clinical significance. The aim of this study was to evaluate identification of rapidly growing NTM (RGM) isolated from clinical samples by using MALDI-TOF MS and a commercial molecular system. The results were compared with identification using a reference method. We included 46 clinical isolates of RGM and identified them using the commercial molecular system GenoType® CM/AS (Hain Lifescience, Germany), MALDI-TOF MS (Bruker) and, as reference method, partial rpoB gene sequencing followed by BLAST and phylogenetic analysis with the 1093 sequences available in GenBank. The degree of agreement between GenoType® and MALDI-TOF MS and the reference method, partial rpoB sequencing, was 27/43 (62.8%) and 38/43 cases (88.3%), respectively. For all the samples correctly classified by GenoType®, we obtained the same result with MALDI-TOF MS (27/27). However, MALDI-TOF MS also correctly identified 68.75% (11/16) of the samples that GenoType® had misclassified (p=0.005). MALDI-TOF MS classified significantly better than GenoType®. When a MALDI-TOF MS score >1.85 was achieved, MALDI-TOF MS and partial rpoB gene sequencing were equivalent. GenoType® was not able to distinguish between species belonging to the M. fortuitum complex. MALDI-TOF MS methodology is simple, rapid and associated with lower consumable costs than GenoType®. The partial rpoB sequencing methods with BLAST and phylogenetic analysis were not able to identify some RGM unequivocally. Therefore, sequencing of additional regions would be indicated in these cases. Copyright © 2018 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
NASA Astrophysics Data System (ADS)
Schoenberg, Ronny; von Blanckenburg, Friedhelm
2005-04-01
Multicollector ICP-MS-based stable isotope procedures provide the capability to determine small variations in the metal isotope composition of materials, but they are prone to substantial bias introduced by inadequate sample preparation. Such a "cryptic" bias is not necessarily identifiable from the measured isotope ratios. The analytical protocol for Fe isotope analyses of organic and inorganic materials described here identifies and avoids such pitfalls. In the medium-mass-resolution mode of the ThermoFinnigan Neptune MC-ICP-MS, a 1-ppm Fe solution with an uptake rate of 50-70 μL min^-1 yielded 3 × 10^-11 A on 56Fe for the ThermoFinnigan stable introduction system and 1.2-1.8 × 10^-10 A for the ESI Apex-Q uptake system. Sensitivity increased a further 3-5-fold when using Finnigan X-cones instead of the standard H-cones. The combination of the ESI Apex-Q apparatus and X-cones allowed the determination of the isotope composition on as little as 50 ng of Fe. Fe isotope compositions were corrected for mass bias with both the standard-sample bracketing (SSB) method and the 65Cu/63Cu ratio of added synthetic copper (Cu-doping) as an internal monitor of mass discrimination. Both methods provide identical results on high-purity Fe solutions of either synthetic or natural samples. We prefer the SSB method because of its shorter analysis time and more straightforward correction of instrumental mass bias compared to Cu-doping. Strong error correlations of the data are observed in three-isotope diagrams. Thus, we suggest that the quality assessment in such diagrams should be performed with error ellipses rather than error bars. Reproducibility of δ56Fe, δ57Fe and δ58Fe values of natural samples alone is not a sufficient criterion for accuracy. A set of tests is outlined that identifies cryptic matrix effects and ensures a reproducible level of quality control. Using these criteria and the SSB correction method, we determined the external reproducibilities for δ56Fe, δ57Fe and δ58Fe at the 95% confidence interval from 318 measurements of 95 natural samples to be 0.049, 0.071 and 0.28‰, respectively.
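The SSB correction itself reduces to a one-line computation. A minimal Python sketch with hypothetical measured ratios (the scheme assumes instrumental mass bias drifts slowly relative to the bracketing standard runs, so it cancels to first order):

def delta56(r_sample, r_std_before, r_std_after):
    # delta56Fe (per mil) relative to the mean of the bracketing standards
    r_std = 0.5 * (r_std_before + r_std_after)
    return (r_sample / r_std - 1.0) * 1000.0

print(delta56(15.7020, 15.6980, 15.6992))   # ~ +0.2 per mil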
Boundary point corrections for variable radius plots - simulation results
Margaret Penner; Sam Otukol
2000-01-01
The boundary plot problem is encountered when a forest inventory plot includes two or more forest conditions. Depending on the correction method used, the resulting estimates can be biased. The various correction alternatives are reviewed. No correction, area correction, half sweep, and toss-back methods are evaluated using simulation on an actual data set. Based on...
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Camara, G.; Dias, L. A. V.; Mascarenhas, N. D. D.; Desouza, R. C. M.; Pereira, A. E. C.
1982-01-01
Earth's atmosphere reduces a sensor's ability to correctly discriminate targets. Using radiometric correction to reduce atmospheric effects may considerably improve the performance of an automatic image interpreter. Several methods for radiometric correction from the open literature are compared, leading to the development of an atmospheric correction system.
Gibbons, Raymond J; Thorsteinsson, Einar B; Loi, Natasha M
2015-01-01
Objectives. The current study investigated mental health literacy in an Australian sample to examine sex differences in the identification of and attitudes towards various aspects of mental illness. Method. An online questionnaire was completed by 373 participants (M = 34.87 years). Participants were randomly assigned either a male or female version of a vignette depicting an individual exhibiting the symptoms of one of three types of mental illness (depression, anxiety, or psychosis) and asked to answer questions relating to aspects of mental health literacy. Results. Males exhibited poorer mental health literacy skills compared to females. Males were less likely to correctly identify the type of mental illness, more likely to rate symptoms as less serious, to perceive the individual as having greater personal control over such symptoms, and less likely to endorse the need for treatment for anxiety or psychosis. Conclusion. Generally, the sample was relatively proficient at correctly identifying mental illness but overall males displayed poorer mental health literacy skills than females.
Phonons in two-dimensional soft colloidal crystals.
Chen, Ke; Still, Tim; Schoenholz, Samuel; Aptowicz, Kevin B; Schindler, Michael; Maggs, A C; Liu, Andrea J; Yodh, A G
2013-08-01
The vibrational modes of pristine and polycrystalline monolayer colloidal crystals composed of thermosensitive microgel particles are measured using video microscopy and covariance matrix analysis. At low frequencies, the Debye relation for two-dimensional harmonic crystals is observed in both crystal types; at higher frequencies, evidence for van Hove singularities in the phonon density of states is significantly smeared out by experimental noise and measurement statistics. The effects of these errors are analyzed using numerical simulations. We introduce methods to correct for these limitations, which can be applied to disordered systems as well as crystalline ones, and we show that application of the error correction procedure to the experimental data leads to more pronounced van Hove singularities in the pristine crystal. Finally, quasilocalized low-frequency modes in polycrystalline two-dimensional colloidal crystals are identified and demonstrated to correlate with structural defects such as dislocations, suggesting that quasilocalized low-frequency phonon modes may be used to identify local regions vulnerable to rearrangements in crystalline as well as amorphous solids.
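The covariance-matrix step can be sketched compactly in Python. The toy below samples equilibrium displacements of a 1-D harmonic chain (a hypothetical stand-in for the particle displacements extracted from video microscopy) and recovers the mode frequencies from the eigenvalues of the displacement covariance via omega = sqrt(kT/(m*lambda)); arbitrary units throughout.

import numpy as np

rng = np.random.default_rng(2)
n, kT, m, k_spring = 20, 1.0, 1.0, 10.0

# Stiffness matrix of a harmonic chain with fixed ends
K = k_spring * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

# Equilibrium displacements drawn from the Boltzmann distribution,
# standing in for tracked particle positions over many video frames
frames = rng.multivariate_normal(np.zeros(n), kT * np.linalg.inv(K), size=5000)

C_meas = np.cov(frames, rowvar=False)          # measured covariance matrix
lam = np.linalg.eigvalsh(C_meas)
omega = np.sort(np.sqrt(kT / (m * lam)))       # mode frequencies from covariance

omega_exact = np.sort(np.sqrt(np.linalg.eigvalsh(K) / m))
print(omega[:3], omega_exact[:3])              # lowest modes agree closely

Finite frame counts and tracking noise corrupt the small covariance eigenvalues first, which is why the high-frequency end of the measured spectrum, where the van Hove singularities sit, is the part that needs the error-correction procedure described above.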
Error, Marc; Ashby, Shaelene; Orlandi, Richard R; Alt, Jeremiah A
2018-01-01
Objective: To determine if the introduction of a systematic preoperative sinus computed tomography (CT) checklist improves identification of critical anatomic variations in sinus anatomy among patients undergoing endoscopic sinus surgery. Study Design: Single-blinded prospective cohort study. Setting: Tertiary care hospital. Subjects and Methods: Otolaryngology residents were asked to identify critical surgical sinus anatomy on preoperative CT scans before and after introduction of a systematic approach to reviewing sinus CT scans. The percentage of correctly identified structures was documented and compared with a 2-sample t test. Results: A total of 57 scans were reviewed: 28 preimplementation and 29 postimplementation. Implementation of the sinus CT checklist improved identification of critical sinus anatomy from 24% to 84% correct (P < .001). All residents, junior and senior, demonstrated significant improvement in identification of sinus anatomic variants, including those not directly included in the systematic review implemented. Conclusion: The implementation of a preoperative endoscopic sinus surgery radiographic checklist improves identification of critical anatomic sinus variations in a training population.
Analysis and control of hourglass instabilities in underintegrated linear and nonlinear elasticity
NASA Technical Reports Server (NTRS)
Jacquotte, Olivier P.; Oden, J. Tinsley
1994-01-01
Methods are described to identify and correct a bad finite element approximation of the governing operator obtained when under-integration is used in numerical code for several model problems: the Poisson problem, the linear elasticity problem, and for problems in the nonlinear theory of elasticity. For each of these problems, the reason for the occurrence of instabilities is given, a way to control or eliminate them is presented, and theorems of existence, uniqueness, and convergence for the given methods are established. Finally, numerical results are included which illustrate the theory.
The recovery and analysis of mitochondrial DNA from exploded pipe bombs.
Foran, David R; Gehring, Michael E; Stallworth, Shawn E
2009-01-01
Improvised explosive devices (IEDs) represent one of the most common modes of arbitrarily injuring or killing human beings. Because of the heat generated by, and destruction to, an IED postconflagration, most methods for identifying who assembled the device are ineffective. In the research presented, steel pipe bombs were mock-assembled by volunteers, and the bombs detonated under controlled conditions. The resultant shrapnel was collected and swabbed for residual cellular material. Mitochondrial DNA profiles were generated and compared blind to the pool of individuals who assembled the bombs. Assemblers were correctly identified 50% of the time, while another 19% could be placed into a group of three individuals with shared haplotypes. Only one bomb was assigned incorrectly. In some instances a contaminating profile (mixture) was also observed. Taken together, the results speak to the extreme sensitivity the methods have for identifying those who assemble IEDs, along with precautions needed when collecting and processing such evidence.
Mifsud, Borbala; Martincorena, Inigo; Darbo, Elodie; Sugar, Robert; Schoenfelder, Stefan; Fraser, Peter; Luscombe, Nicholas M
2017-01-01
Hi-C is one of the main methods for investigating spatial co-localisation of DNA in the nucleus. However, the raw sequencing data obtained from Hi-C experiments suffer from large biases and spurious contacts, making it difficult to identify true interactions. Existing methods use complex models to account for biases and do not provide a significance threshold for detecting interactions. Here we introduce a simple binomial probabilistic model that resolves complex biases and distinguishes between true and false interactions. The model corrects biases of known and unknown origin and yields a p-value for each interaction, providing a reliable threshold based on significance. We demonstrate this experimentally by testing the method against a random ligation dataset. Our method outperforms previous methods and provides a statistical framework for further data analysis, such as comparisons of Hi-C interactions between different conditions. GOTHiC is available as a BioConductor package (http://www.bioconductor.org/packages/release/bioc/html/GOTHiC.html).
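The binomial significance idea (though not the GOTHiC package itself, which is an R/BioConductor implementation) can be sketched in a few lines of Python with hypothetical coverages:

from scipy.stats import binom

N = 1_000_000                  # total read pairs in the experiment
cov_i, cov_j = 0.004, 0.003    # relative coverages of fragments i and j
p_ij = 2 * cov_i * cov_j       # expected contact probability under random ligation
n_ij = 45                      # observed read pairs linking i and j

p_value = binom.sf(n_ij - 1, N, p_ij)   # P(X >= n_ij)
print(p_ij * N, p_value)                # expected count vs. significance

Thresholding such p-values (after multiple-testing correction) is what separates true interactions from the coverage-driven background.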
An efficient graph theory based method to identify every minimal reaction set in a metabolic network
2014-01-01
Background: Development of cells with minimal metabolic functionality is gaining importance due to their efficiency in producing chemicals and fuels. Existing computational methods to identify minimal reaction sets in metabolic networks are computationally expensive. Further, they identify only one of the several possible minimal reaction sets. Results: In this paper, we propose an efficient graph theory based recursive optimization approach to identify all minimal reaction sets. Graph theoretical insights offer systematic methods to not only reduce the number of variables in math programming and increase its computational efficiency, but also provide efficient ways to find multiple optimal solutions. The efficacy of the proposed approach is demonstrated using case studies from Escherichia coli and Saccharomyces cerevisiae. In case study 1, the proposed method identified three minimal reaction sets, each containing 38 reactions, in the Escherichia coli central metabolic network with 77 reactions. Analysis of these three minimal reaction sets revealed that one of them is more suitable for developing a minimal-metabolism cell than the other two due to a practically achievable internal flux distribution. In case study 2, the proposed method identified 256 minimal reaction sets in the Saccharomyces cerevisiae genome-scale metabolic network with 620 reactions. The proposed method required only 4.5 hours to identify all 256 minimal reaction sets and showed a significant reduction (approximately 80%) in solution time compared with existing methods for finding a minimal reaction set. Conclusions: Identification of all minimal reaction sets in metabolic networks is essential since different minimal reaction sets have different properties that affect bioprocess development. The proposed method correctly identified all minimal reaction sets in both case studies. The proposed method is computationally efficient compared to other methods for finding minimal reaction sets and is useful to employ with genome-scale metabolic networks. PMID:24594118
Evaluation of the new Vitek 2 ANC card for identification of medically relevant anaerobic bacteria.
Mory, Francine; Alauzet, Corentine; Matuszeswski, Céline; Riegel, Philippe; Lozniewski, Alain
2009-06-01
Of 261 anaerobic clinical isolates tested with the new Vitek 2 ANC card, 257 (98.5%) were correctly identified at the genus level. Among the 251 strains for which identification at the species level is possible with regard to the ANC database, 217 (86.5%) were correctly identified at the species level. Two strains (0.8%) were not identified, and eight were misidentified (3.1%). Of the 21 strains (8.1%) with low-level discrimination results, 14 were correctly identified at the species level by using the recommended additional tests. This system is a satisfactory new automated tool for the rapid identification of most anaerobic bacteria isolated in clinical laboratories.
Cohen, Andrew R; Bjornsson, Christopher S; Temple, Sally; Banker, Gary; Roysam, Badrinath
2009-08-01
An algorithmic information-theoretic method is presented for object-level summarization of meaningful changes in image sequences. Object extraction and tracking data are represented as an attributed tracking graph (ATG). Time courses of object states are compared using an adaptive information distance measure, aided by a closed-form multidimensional quantization. The notion of meaningful summarization is captured by using the gap statistic to estimate the randomness deficiency from algorithmic statistics. The summary is the clustering result and feature subset that maximize the gap statistic. This approach was validated on four bioimaging applications: 1) It was applied to a synthetic data set containing two populations of cells differing in the rate of growth, for which it correctly identified the two populations and the single feature out of 23 that separated them; 2) it was applied to 59 movies of three types of neuroprosthetic devices being inserted in the brain tissue at three speeds each, for which it correctly identified insertion speed as the primary factor affecting tissue strain; 3) when applied to movies of cultured neural progenitor cells, it correctly distinguished neurons from progenitors without requiring the use of a fixative stain; and 4) when analyzing intracellular molecular transport in cultured neurons undergoing axon specification, it automatically confirmed the role of kinesins in axon specification.
Mahowald, Madeline K.; Scharff, Nicholas; Flanigan, Timothy P.; Beckwith, Curt G.; Zaller, Nickolas D.
2014-01-01
Objectives. We described hepatitis C virus antibody (anti-HCV) prevalence in a state prison system and retrospectively evaluated the case-finding performance of targeted testing of the 1945 to 1965 birth cohort in this population. Methods. We used observational data from universal testing of Pennsylvania state prison entrants (June 2004–December 2012) to determine anti-HCV prevalence by birth cohort. We compared anti-HCV prevalence and the burden of anti-HCV in the 1945 to 1965 birth cohort with that in all other birth years. Results. Anti-HCV prevalence among 101 727 adults entering prison was 18.1%. Prevalence was highest among those born from 1945 to 1965, but most anti-HCV cases were in people born after 1965. Targeted testing of the 1945 to 1965 birth cohort would have identified a decreasing proportion of cases with time. Conclusions. HCV is endemic in correctional populations. Targeted testing of the 1945 to 1965 birth cohort would produce a high yield of positive test results but would identify only a minority of cases. We recommend universal anti-HCV screening in correctional settings to allow for maximum case identification, secondary prevention, and treatment of affected prisoners. PMID:24825235
Zakharova, Irina B; Lopasteyskaya, Yana A; Toporkov, Andrey V; Viktorov, Dmitry V
2018-01-01
Background: Burkholderia pseudomallei is a Gram-negative saprophytic soil bacterium that causes melioidosis, a potentially fatal disease endemic in wet tropical areas. The currently available biochemical identification systems can misidentify some strains of B. pseudomallei. The aim of the present study was to identify the biochemical features of B. pseudomallei, which can affect its correct identification by Vitek 2 system. Materials and Methods: The biochemical patterns of 40 B. pseudomallei strains were obtained using Vitek 2 GN cards. The average contribution of biochemical tests in overall dissimilarities between correctly and incorrectly identified strains was assessed using nonmetric multidimensional scaling. Results: It was found (R statistic of 0.836, P = 0.001) that a combination of negative N-acetyl galactosaminidase, β-N-acetyl glucosaminidase, phosphatase, and positive D-cellobiase (dCEL), tyrosine arylamidase (TyrA), and L-proline arylamidase (ProA) tests leads to low discrimination of B. pseudomallei, whereas a set of positive dCEL and negative N-acetyl galactosaminidase, TyrA, and ProA determines the wrong identification of B. pseudomallei as Burkholderia cepacia complex. Conclusion: The further expansion of the Vitek 2 identification keys is needed for correct identification of atypical or regionally distributed biochemical profiles of B. pseudomallei. PMID:29563716
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, Antonio L., E-mail: adamato@lroc.harvard.edu; Viswanathan, Akila N.; Don, Sarah M.
2014-10-15
Purpose: To investigate the use of a system using electromagnetic tracking (EMT), post-processing and an error-detection algorithm for detecting errors and resolving uncertainties in high-dose-rate brachytherapy catheter digitization for treatment planning. Methods: EMT was used to localize 15 catheters inserted into a phantom using a stepwise acquisition technique. Five distinct acquisition experiments were performed. Noise associated with the acquisition was calculated. The dwell location configuration was extracted from the EMT data. A CT scan of the phantom was performed, and five distinct catheter digitization sessions were performed. No a priori registration of the CT scan coordinate system with the EMT coordinate system was performed. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT), and rigid registration was performed between EMT and CT dwell positions. EMT registration error was characterized in terms of the mean and maximum distance between corresponding EMT and CT dwell positions per catheter. An algorithm for error detection and identification was presented. Three types of errors were systematically simulated: swap of two catheter numbers, partial swap of catheter number identification for parts of the catheters (mix), and catheter-tip shift. Error-detection sensitivity (number of simulated scenarios correctly identified as containing an error/number of simulated scenarios containing an error) and specificity (number of scenarios correctly identified as not containing errors/number of correct scenarios) were calculated. Catheter identification sensitivity (number of catheters correctly identified as erroneous across all scenarios/number of erroneous catheters across all scenarios) and specificity (number of catheters correctly identified as correct across all scenarios/number of correct catheters across all scenarios) were calculated. The mean detected and identified shift was calculated. Results: The maximum noise ±1 standard deviation associated with the EMT acquisitions was 1.0 ± 0.1 mm, and the mean noise was 0.6 ± 0.1 mm. Registration of all the EMT and CT dwell positions was associated with a mean catheter error of 0.6 ± 0.2 mm, a maximum catheter error of 0.9 ± 0.4 mm, a mean dwell error of 1.0 ± 0.3 mm, and a maximum dwell error of 1.3 ± 0.7 mm. Error detection and catheter identification sensitivity and specificity of 100% were observed for swap, mix and shift (≥2.6 mm for error detection; ≥2.7 mm for catheter identification) errors. A mean detected shift of 1.8 ± 0.4 mm and a mean identified shift of 1.9 ± 0.4 mm were observed. Conclusions: Registration of the EMT dwell positions to the CT dwell positions was possible with a residual mean error per catheter of 0.6 ± 0.2 mm and a maximum error for any dwell of 1.3 ± 0.7 mm. These low residual registration errors show that quality assurance of the general characteristics of the catheters and of possible errors affecting one specific dwell position is possible. The sensitivity and specificity of the catheter digitization verification algorithm was 100% for swap and mix errors and for shifts ≥2.6 mm. On average, shifts ≥1.8 mm were detected, and shifts ≥1.9 mm were detected and identified.
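The residual-based check can be illustrated with a small Python sketch on hypothetical geometry: after a rigid (Kabsch) registration of EMT dwell positions to CT dwell positions, any catheter whose mean corresponding-point distance stays far above the registration noise is flagged, which is how a swap of the kind simulated above would surface.

import numpy as np

def kabsch(P, Q):
    # Rigid transform (R, t) minimizing ||R P + t - Q||
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, Q.mean(0) - R @ P.mean(0)

rng = np.random.default_rng(3)
ct = rng.uniform(0, 50, size=(15, 5, 3))     # 15 catheters x 5 dwells (mm)
emt = ct + rng.normal(0, 0.5, ct.shape)      # EMT dwells with ~0.5 mm noise
emt[[2, 7]] = emt[[7, 2]]                    # simulated catheter-number swap

R, t = kabsch(emt.reshape(-1, 3), ct.reshape(-1, 3))
resid = np.linalg.norm(emt @ R.T + t - ct, axis=2).mean(axis=1)

threshold = 5.0                              # mm, comfortably above noise here
print(np.where(resid > threshold)[0])        # -> catheters 2 and 7 flagged

In this toy, the swapped catheters land tens of millimetres from their CT counterparts while correct catheters stay within a few millimetres; the study's reported detection limits (~2.6 mm) come from a more careful treatment of the noise floor.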
Probabilistic Component Mode Synthesis of Nondeterministic Substructures
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1996-01-01
Standard methods of structural dynamic analysis assume that the structural characteristics are deterministic. Recognizing that these characteristics are actually statistical in nature, researchers have recently developed a variety of methods that use this information to determine probabilities of a desired response characteristic, such as natural frequency, without using expensive Monte Carlo simulations. One of the problems with these methods is correctly identifying the statistical properties of primitive variables such as geometry, stiffness, and mass. We present a method where the measured dynamic properties of substructures are used instead as the random variables. The residual flexibility method of component mode synthesis is combined with the probabilistic methods to determine the cumulative distribution function of the system eigenvalues. A simple cantilever beam test problem is presented that illustrates the theory.
MRI-alone radiation therapy planning for prostate cancer: Automatic fiducial marker detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghose, Soumya, E-mail: soumya.ghose@case.edu; Mitra, Jhimli; Rivest-Hénault, David
Purpose: The feasibility of radiation therapy treatment planning using substitute computed tomography (sCT) generated from magnetic resonance images (MRIs) has been demonstrated by a number of research groups. One challenge with an MRI-alone workflow is the accurate identification of intraprostatic gold fiducial markers, which are frequently used for prostate localization prior to each dose delivery fraction. This paper investigates a template-matching approach for the detection of these seeds in MRI. Methods: Two different gradient echo T1 and T2* weighted MRI sequences were acquired from fifteen prostate cancer patients and evaluated for seed detection. For training, seed templates from manual contours were selected in a spectral clustering manifold learning framework. This aids in clustering "similar" gold fiducial markers together. The marker with the minimum distance to a cluster centroid was selected as the representative template of that cluster during training. During testing, Gaussian mixture modeling followed by a Markovian model was used in automatic detection of the probable candidates. The probable candidates were rigidly registered to the templates identified from spectral clustering, and a similarity metric was computed for ranking and detection. Results: A fiducial detection accuracy of 95% was obtained compared to manual observations. Expert radiation therapist observers were able to correctly identify all three implanted seeds on 11 of the 15 scans (the proposed method correctly identified all seeds on 10 of the 15). Conclusions: A novel automatic framework for gold fiducial marker detection in MRI is proposed and evaluated, with detection accuracies comparable to manual detection. When radiation therapists are unable to determine the seed location in MRI, they refer back to the planning CT (only available in the existing clinical framework); similarly, an automatic quality control is built into the automatic software to ensure that all gold seeds are either correctly detected or a warning is raised for further manual intervention.
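The core template-matching step can be sketched with plain normalized cross-correlation in Python (hypothetical 2-D data; the manifold-learned templates, Gaussian mixture candidates and Markovian model of the full method are omitted):

import numpy as np

def ncc_map(image, tmpl):
    # Normalized cross-correlation of a template over an image
    th, tw = tmpl.shape
    t = (tmpl - tmpl.mean()) / tmpl.std()
    out = np.full(image.shape, -np.inf)
    for i in range(image.shape[0] - th):
        for j in range(image.shape[1] - tw):
            w = image[i:i + th, j:j + tw]
            s = w.std()
            if s > 0:
                out[i, j] = np.mean((w - w.mean()) / s * t)
    return out

rng = np.random.default_rng(4)
img = rng.normal(0, 0.05, (64, 64))               # noisy background
ax = np.arange(7) - 3
seed = -np.exp(-(ax**2 + ax[:, None]**2) / 4.0)   # dark blob, as a seed appears
img[20:27, 40:47] += seed                         # implant one synthetic "seed"

score = ncc_map(img, seed)
print(np.unravel_index(score.argmax(), score.shape))   # -> (20, 40)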
NASA Astrophysics Data System (ADS)
Asadi Haroni, Hooshang; Hassan Tabatabaei, Seyed
2016-04-01
The Muteh gold mining area is located 160 km NW of the town of Isfahan. Gold mineralization is of mesothermal type and is associated with silicic, sericitic and carbonate alterations, as well as with hematite and goethite. Image processing and interpretation were applied to ASTER satellite imagery covering about 400 km2 of the Muteh gold mining area to identify hydrothermal alterations and iron oxides associated with gold mineralization. After applying preprocessing methods such as radiometric and geometric corrections, the image processing methods of Principal Components Analysis (PCA), Least Square Fit (Ls-Fit) and Spectral Angle Mapper (SAM) were applied to the ASTER data to identify hydrothermal alterations and iron oxides. In this research, reference spectra of minerals such as chlorite, hematite, clay minerals and phengite, identified from laboratory spectral analysis of collected samples, were used to map the hydrothermal alterations. Finally, the identified hydrothermal alterations and iron oxides were validated by visiting and sampling some of the mapped hydrothermal alterations.
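The SAM rule itself is a single angle computation. A minimal Python sketch with hypothetical band values (the 0.10 rad threshold is illustrative, not from the study):

import numpy as np

def spectral_angle(pixel, reference):
    # Angle (radians) between a pixel spectrum and a reference spectrum
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

sericite_ref = np.array([0.31, 0.29, 0.20, 0.27, 0.26, 0.24])  # lab spectrum
pixel        = np.array([0.33, 0.30, 0.22, 0.28, 0.27, 0.25])  # image pixel

angle = spectral_angle(pixel, sericite_ref)
print(angle, angle < 0.10)   # small angle -> map pixel as sericitic alteration

Because the angle ignores vector length, SAM is insensitive to overall illumination differences, which is why it pairs well with laboratory reference spectra.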
30 CFR 250.1452 - What if I correct the violation?
Code of Federal Regulations, 2011 CFR
2011-07-01
The matter will be closed if you correct all of the violations identified in the Notice of...
NASA Technical Reports Server (NTRS)
Au, C. K.
1989-01-01
The Breit correction only accounts for part of the transverse photon exchange correction in the calculation of the energy levels in helium Rydberg states. The remaining leading corrections are identified and each is expressed in an effective potential form. The relevance to the Casimir correction potential in various limits is also discussed.
Training Correctional Educators: A Needs Assessment Study.
ERIC Educational Resources Information Center
Jurich, Sonia; Casper, Marta; Hull, Kim A.
2001-01-01
Focus groups and a training needs survey of Virginia correctional educators identified educational philosophy, communication skills, human behavior, and teaching techniques as topics of interest. Classroom observations identified additional areas: teacher isolation, multiple challenges, absence of grade structure, and safety constraints. (Contains…
Karbalaie, Abdolamir; Abtahi, Farhad; Fatemi, Alimohammad; Etehadtavakol, Mahnaz; Emrani, Zahra; Erlandsson, Björn-Erik
2017-09-01
Nailfold capillaroscopy is a practical method for identifying and obtaining morphological changes in capillaries which might reveal relevant information about diseases and health. Capillaroscopy is harmless, and seems simple and repeatable. However, there is a lack of established guidelines and instructions for acquisition as well as for the interpretation of the obtained images, which might lead to various ambiguities. In addition, assessment and interpretation of the acquired images are very subjective. In an attempt to overcome some of these problems, this study introduces a new modified technique for the assessment of nailfold capillary density. The new method, named elliptic broken line (EBL), is an extension of two previously known methods that defines clear criteria for finding the apex of capillaries in different scenarios by using a fitted ellipse. A graphical user interface (GUI) was developed for pre-processing, manual assessment of capillary apexes and automatic correction of selected apexes based on the 90° rule. Intra- and inter-observer reliability of EBL and corrected EBL is evaluated in this study. Four independent observers familiar with capillaroscopy performed the assessment for 200 nailfold videocapillaroscopy images, from healthy subjects and systemic lupus erythematosus patients, in two different sessions. The results show that intra- and inter-observer agreement rose from moderate (ICC = 0.691) and good (ICC = 0.753) to good (ICC = 0.750) and good (ICC = 0.801), respectively, after automatic correction of EBL. This clearly shows the potential of the method to improve the reliability and repeatability of the assessment, which motivates further development of an automatic tool for the EBL method. Copyright © 2017 Elsevier Inc. All rights reserved.
Evaluation of clinical methods for peroneal muscle testing.
Sarig-Bahat, Hilla; Krasovsky, Andrei; Sprecher, Elliot
2013-03-01
Manual muscle testing of the peroneal muscles is a well-accepted testing method in musculoskeletal physiotherapy for the assessment of the foot and ankle. The peroneus longus and brevis are primary evertors and secondary plantar flexors of the ankle joint. However, some international textbooks describe them as dorsiflexors when instructing peroneal muscle testing. This variability raised the question of whether these educational texts are reflected in the clinical field. The purposes of this study were to identify the methods commonly used in the clinical field for peroneal muscle testing and to evaluate their compatibility with functional anatomy. A cross-sectional study was conducted using an electronic questionnaire sent to 143 Israeli physiotherapists in the musculoskeletal field. The survey asked about the anatomical location of manual resistance and the combination of motions resisted. Ninety-seven responses were received. The majority (69%) of respondents correctly related to the peronei as evertors, but asserted that resistance should be located over the dorsal aspect of the fifth metatarsus, thereby disregarding the peroneus longus. Moreover, 38% of the respondents described the peronei as dorsiflexors rather than plantar flexors. Only 2% selected the correct method of resisting plantarflexion and eversion at the base of the first metatarsus. We consider this technique to be the most compatible with the anatomy of the peroneus longus and brevis. The Fisher-Freeman-Halton test indicated a significant relationship between responses on the questions (P = 0.0253, 95% CI 0.0249-0.0257), thus justifying further correspondence analysis. The correspondence analysis found no clustering of answers that were both compatible with the anatomical evidence and applied in the correct technique, but it did demonstrate a common error, resisting dorsiflexion rather than plantarflexion, in agreement with the described frequencies. Inconsistencies were identified between the instruction methods commonly provided for peroneal muscle testing in textbooks and the functional anatomy of these muscles. The results reflect a lack of accuracy in applying functional anatomy to peroneal testing. This may be due to limited use of peroneal muscle testing or to inadequate investigation of the existing evaluation methods and their validity. Accordingly, teaching materials and clinical methods used for this test should be re-evaluated. Further research should investigate the value of peroneal muscle testing in clinical ankle evaluation. Copyright © 2012 John Wiley & Sons, Ltd.
Smith, Jason F.; Chen, Kewei; Pillai, Ajay S.; Horwitz, Barry
2013-01-01
The number and variety of connectivity estimation methods is likely to continue to grow over the coming decade. Comparisons between methods are necessary to prune this growth to only the most accurate and robust methods. However, the nature of connectivity is elusive, with different methods potentially attempting to identify different aspects of connectivity. Commonalities of connectivity definitions across methods upon which to base direct comparisons can be difficult to derive. Here, we explicitly define “effective connectivity” using a common set of observation and state equations that are appropriate for three connectivity methods: dynamic causal modeling (DCM), multivariate autoregressive modeling (MAR), and switching linear dynamic systems for fMRI (sLDSf). In addition, while deriving this set, we show how many other popular functional and effective connectivity methods are actually simplifications of these equations. We discuss implications of these connections for the practice of using one method to simulate data for another method. After mathematically connecting the three effective connectivity methods, simulated fMRI data with varying numbers of regions and task conditions is generated from the common equation. This simulated data explicitly contains the type of connectivity that the three models were intended to identify. Each method is applied to the simulated data sets and the accuracy of parameter identification is analyzed. All methods perform above chance levels at identifying correct connectivity parameters. The sLDSf method was superior in parameter estimation accuracy to both DCM and MAR for all types of comparisons. PMID:23717258
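The shared state/observation form can be made concrete with a small simulation. The Python sketch below uses a discrete linear system with hypothetical parameters (an identity observation matrix stands in for the hemodynamic model), and shows an MAR-style regression recovering the connectivity matrix A that the state equation encodes:

import numpy as np

rng = np.random.default_rng(5)
A = np.array([[0.9, 0.0, 0.0],        # region 1 drives region 2,
              [0.4, 0.8, 0.0],        # region 2 drives region 3
              [0.0, 0.3, 0.7]])
Cin = np.array([1.0, 0.0, 0.0])       # task input enters region 1 only
H = np.eye(3)                         # observation matrix (stand-in for HRF)

T = 200
u = (np.arange(T) % 40 < 20).astype(float)   # block-design input
x = np.zeros(3)
Y = np.empty((T, 3))
for ti in range(T):
    x = A @ x + Cin * u[ti] + rng.normal(0, 0.05, 3)   # state equation
    Y[ti] = H @ x + rng.normal(0, 0.05, 3)             # observation equation

# MAR-style fit: regress Y[t] on Y[t-1] and the input
X = np.column_stack([Y[:-1], u[:-1]])
B = np.linalg.lstsq(X, Y[1:], rcond=None)[0]
print(np.round(B[:3].T, 2))           # approximately recovers A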
Error analysis of motion correction method for laser scanning of moving objects
NASA Astrophysics Data System (ADS)
Goel, S.; Lohani, B.
2014-05-01
The limitation of conventional laser scanning methods is that the objects being scanned must be static. The need to scan moving objects has resulted in the development of new methods capable of generating the correct 3D geometry of moving objects. Limited literature is available showing the development of the very few methods capable of catering to the problem of object motion during scanning. All the existing methods utilize their own models or sensors. Studies on error modelling or analysis of any of these motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such "motion correction" method. This method assumes the availability of position and orientation information for the moving object, which in general can be obtained by installing a POS system on board or by using tracking devices. It then uses this information along with the laser scanner data to apply a correction to the laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major applications of this method lie in the shipping industry, to scan ships either moving or parked at sea, and in scanning other objects such as hot air balloons or aerostats. It is to be noted that the other "motion correction" methods described in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the "motion correction" method, as well as a detailed account of the behaviour and variation of the error due to different sensor components, both alone and in combination with each other. The analysis can be used to obtain insights into the optimal utilization of available components for achieving the best results.
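The correction step itself is a per-point pose inversion. A minimal Python sketch with a hypothetical planar trajectory (yaw and translation per timestamp, as a POS system would report):

import numpy as np

def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

rng = np.random.default_rng(6)
pts_obj = rng.uniform(-1, 1, size=(500, 3))   # true object-frame points

# The object translates and yaws during the scan; point k is seen at time t[k]
t = np.linspace(0.0, 10.0, len(pts_obj))
yaw = 0.05 * t                                # rad, from the POS system
T = np.column_stack([0.5 * t, np.zeros_like(t), np.zeros_like(t)])  # m

# What the scanner records: the moving object's pose applied per timestamp
scans = np.array([rot_z(a) @ p + d for a, p, d in zip(yaw, pts_obj, T)])

# Motion correction: invert the time-indexed pose for each point
corrected = np.array([rot_z(-a) @ (s - d) for a, s, d in zip(yaw, scans, T)])
print(np.abs(corrected - pts_obj).max())      # ~0 -> geometry recovered

An error budget then follows by perturbing yaw and T with sensor-grade noise and propagating the result through the same inversion, which is essentially the analysis the paper carries out.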
Moore, D F; Harwood, V J; Ferguson, D M; Lukasik, J; Hannah, P; Getrich, M; Brownell, M
2005-01-01
The accuracy of ribotyping and antibiotic resistance analysis (ARA) for prediction of sources of faecal bacterial pollution in an urban southern California watershed was determined using blinded proficiency samples. Antibiotic resistance patterns and HindIII ribotypes of Escherichia coli (n = 997), and antibiotic resistance patterns of Enterococcus spp. (n = 3657) were used to construct libraries from sewage samples and from faeces of seagulls, dogs, cats, horses and humans within the watershed. The three libraries were analysed to determine the accuracy of host source prediction. The internal accuracy of the libraries (average rate of correct classification, ARCC) with six source categories was 44% for E. coli ARA, 69% for E. coli ribotyping and 48% for Enterococcus ARA. Each library's predictive ability towards isolates that were not part of the library was determined using a blinded proficiency panel of 97 E. coli and 99 Enterococcus isolates. Twenty-eight per cent (by ARA) and 27% (by ribotyping) of the E. coli proficiency isolates were assigned to the correct source category. Sixteen per cent were assigned to the same source category by both methods, and 6% were assigned to the correct category. Addition of 2480 E. coli isolates to the ARA library did not improve the ARCC or proficiency accuracy. In contrast, 45% of Enterococcus proficiency isolates were correctly identified by ARA. None of the methods performed well enough on the proficiency panel to be judged ready for application to environmental samples. Most microbial source tracking (MST) studies published have demonstrated library accuracy solely by the internal ARCC measurement. Low rates of correct classification for E. coli proficiency isolates compared with the ARCCs of the libraries indicate that testing of bacteria from samples that are not represented in the library, such as blinded proficiency samples, is necessary to accurately measure predictive ability. The library-based MST methods used in this study may not be suited for determination of the source(s) of faecal pollution in large, urban watersheds.
Network-level accident-mapping: Distance based pattern matching using artificial neural network.
Deka, Lipika; Quddus, Mohammed
2014-04-01
The objective of an accident-mapping algorithm is to snap traffic accidents onto the correct road segments. Assigning accidents to the correct segments makes it possible to robustly carry out key analyses in accident research, including the identification of accident hot-spots, network-level risk mapping and segment-level accident risk modelling. Existing risk mapping algorithms have some severe limitations: (i) they are not easily 'transferable', as the algorithms are specific to given accident datasets; (ii) they do not perform well in all road-network environments, such as in areas of dense road network; and (iii) they do not perform well in addressing the inaccuracies inherent in the recorded data across different types of road environment. The purpose of this paper is to develop a new accident-mapping algorithm based on the common variables observed in most accident databases (e.g. road name and type, direction of vehicle movement before the accident and recorded accident location). The challenges here are to: (i) develop a method that takes into account uncertainties inherent to the recorded traffic accident data and the underlying digital road network data, (ii) accurately determine the type and proportion of inaccuracies, and (iii) develop a robust algorithm that can be adapted for any accident set and road network of varying complexity. In order to overcome these challenges, a distance based pattern-matching approach is used to identify the correct road segment. This is based on vectors containing feature values that are common in the accident data and the network data. Since each feature does not contribute equally towards the identification of the correct road segments, an ANN approach using the single-layer perceptron is used to assist in "learning" the relative importance of each feature in the distance calculation and hence the correct link identification. The performance of the developed algorithm was evaluated based on a reference accident dataset from the UK, confirming that its accuracy is much better than that of other methods. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
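A minimal Python sketch of the weighted-distance idea with perceptron-learned feature importances (features, labels and candidate segments below are synthetic placeholders, not the paper's dataset):

import numpy as np

rng = np.random.default_rng(7)

# Training pairs: columns are mismatch features in [0, 1] (spatial offset,
# road-name, road-type and heading disagreement); label 1 marks the correct
# accident-segment pair, where mismatches tend to be small
X = rng.uniform(0, 1, size=(400, 4))
y = (X @ np.array([2.0, 1.0, 0.5, 1.5]) < 1.2).astype(int)   # synthetic truth

w, b, lr = np.zeros(4), 0.0, 0.1             # single-layer perceptron
for _ in range(100):
    for xi, yi in zip(X, y):
        err = yi - int(w @ xi + b > 0)
        w, b = w + lr * err * xi, b + lr * err

# Snapping one accident: score three candidate segments by the learned unit
candidates = np.array([[0.05, 0.0, 0.0, 0.0],   # near, everything matches
                       [0.10, 1.0, 0.0, 1.0],   # near but wrong name/heading
                       [0.80, 0.0, 1.0, 0.0]])  # far, wrong road type
print(np.argmax(candidates @ w + b))            # -> 0, the matching segment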
Evaluation of Bias Correction Method for Satellite-Based Rainfall Data
Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter
2016-01-01
With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since estimates are affected by errors, correction is required. In this study, we tested the high-resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution are aggregated to daily to match in-situ observations for the period 2003–2010. Study objectives are to assess the bias of the satellite estimates, to identify the optimum window size for application of bias correction and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim to assess the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by the Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. Reliability of the interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias-corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed the existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach. PMID:27314363
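The sequential-window correction is straightforward to state in code. The Python toy below uses synthetic daily series and the non-overlapping 7-day windows the study found to work best:

import numpy as np

rng = np.random.default_rng(8)
days = 364
gauge = rng.gamma(2.0, 4.0, days)                        # in-situ rainfall (mm/day)
cmorph = 0.7 * gauge + rng.normal(0, 1, days).clip(0)    # biased satellite product

window = 7
corrected = cmorph.copy()
for start in range(0, days, window):
    sl = slice(start, start + window)
    sat_sum = cmorph[sl].sum()
    if sat_sum > 0:                      # skip dry windows to avoid divide-by-zero
        bf = gauge[sl].sum() / sat_sum   # window bias factor
        corrected[sl] = bf * cmorph[sl]

print(gauge.sum(), cmorph.sum(), corrected.sum())   # corrected total matches gauge

Interpolating such station-wise factors over the catchment then yields the bias factor map applied to the full CMORPH images.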
Intensity-corrected Herschel Observations of Nearby Isolated Low-mass Clouds
NASA Astrophysics Data System (ADS)
Sadavoy, Sarah I.; Keto, Eric; Bourke, Tyler L.; Dunham, Michael M.; Myers, Philip C.; Stephens, Ian W.; Di Francesco, James; Webb, Kristi; Stutz, Amelia M.; Launhardt, Ralf; Tobin, John J.
2018-01-01
We present intensity-corrected Herschel maps at 100, 160, 250, 350, and 500 μm for 56 isolated low-mass clouds. We determine the zero-point corrections for Herschel Photodetector Array Camera and Spectrometer (PACS) and Spectral Photometric Imaging Receiver (SPIRE) maps from the Herschel Science Archive (HSA) using Planck data. Since these HSA maps are small, we cannot correct them using typical methods. Here we introduce a technique to measure the zero-point corrections for small Herschel maps. We use radial profiles to identify offsets between the observed HSA intensities and the expected intensities from Planck. Most clouds have reliable offset measurements with this technique. In addition, we find that roughly half of the clouds have underestimated HSA-SPIRE intensities in their outer envelopes relative to Planck, even though the HSA-SPIRE maps were previously zero-point corrected. Using our technique, we produce corrected Herschel intensity maps for all 56 clouds and determine their line-of-sight average dust temperatures and optical depths from modified blackbody fits. The clouds have typical temperatures of ∼14–20 K and optical depths of ∼10⁻⁵–10⁻³. Across the whole sample, we find an anticorrelation between temperature and optical depth. We also find lower temperatures than what was measured in previous Herschel studies, which subtracted out a background level from their intensity maps to circumvent the zero-point correction. Accurate Herschel observations of clouds are key to obtaining accurate density and temperature profiles. To make such future analyses possible, intensity-corrected maps for all 56 clouds are publicly available in the electronic version. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
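The modified blackbody fit mentioned above can be sketched in a few lines. The script below is a hedged illustration, not the authors' pipeline: the emissivity index beta, the reference frequency nu0, and the synthetic five-band intensities are assumptions chosen only to make the example self-contained.

```python
import numpy as np
from scipy.optimize import curve_fit

H, K, C = 6.626e-34, 1.381e-23, 2.998e8    # SI constants

def planck(nu, T):
    """Planck function B_nu(T) in SI units (W m^-2 Hz^-1 sr^-1)."""
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))

def modified_blackbody(nu, T, tau0, beta=2.0, nu0=1.2e12):
    # Optically thin dust emission: I_nu = tau(nu) * B_nu(T), with the
    # optical depth scaled by a power-law emissivity (beta held fixed).
    return tau0 * (nu / nu0) ** beta * planck(nu, T) / 1e-20   # MJy/sr

# Herschel bands 100-500 um expressed as frequencies (Hz).
wavelengths_um = np.array([100., 160., 250., 350., 500.])
nu = C / (wavelengths_um * 1e-6)

# Synthetic zero-point-corrected intensities for a 15 K, tau0 = 1e-4 cloud.
truth = modified_blackbody(nu, 15.0, 1e-4)
data = truth * (1 + 0.05 * np.random.default_rng(1).normal(size=nu.size))

popt, _ = curve_fit(modified_blackbody, nu, data, p0=(20.0, 1e-3))
print(f"fitted T = {popt[0]:.1f} K, tau0 = {popt[1]:.1e}")
```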
Concussion Knowledge in High School Football Players
Cournoyer, Janie; Tripp, Brady L.
2014-01-01
Context: Participating in sports while experiencing symptoms of a concussion can be dangerous. An athlete's lack of knowledge may be one factor influencing his or her decision to report symptoms. In an effort to enhance concussion education among high school athletes, legislation in Florida has attempted to address the issue through parental consent forms. Objective: To survey high school varsity football players to determine their level of knowledge about concussions after the initiation of new concussion-education legislation. Design: Cross-sectional study. Setting: Descriptive survey administered in person during a team meeting. Patients or Other Participants: A total of 334 varsity football players from 11 high schools in Florida. Main Outcome Measure(s): Participants completed a survey and identified the symptoms and consequences of a concussion among distractors. They also indicated whether they had received education about concussions from a parent, formal education, neither, or both. Results: The most correctly identified symptoms were headache (97%), dizziness (93%), and confusion (90%), and the most correctly identified consequence was persistent headache (93%). Participants reported receiving education from their parents (54%) or from a formal source (60%). Twenty-five percent reported never receiving any education regarding concussions. No correlations were found between the method of education and the knowledge of symptoms or consequences of concussion. Conclusions: The high school football players we surveyed did not have appropriate knowledge of the symptoms and consequences of concussions. Nausea or vomiting, neck pain, grogginess, difficulty concentrating, and personality or behavioral changes were often missed by participants, and only a small proportion correctly identified brain hemorrhage, coma, and death as possible consequences of inappropriate care after a concussion. Even with parents or guardians signing a consent form indicating they discussed concussion awareness with their child, 46% of athletes suggested they had not. PMID:25162779
Concussion knowledge in high school football players.
Cournoyer, Janie; Tripp, Brady L
2014-01-01
Participating in sports while experiencing symptoms of a concussion can be dangerous. An athlete's lack of knowledge may be one factor influencing his or her decision to report symptoms. In an effort to enhance concussion education among high school athletes, legislation in Florida has attempted to address the issue through parental consent forms. To survey high school varsity football players to determine their level of knowledge about concussions after the initiation of new concussion-education legislation. Cross-sectional study. Descriptive survey administered in person during a team meeting. A total of 334 varsity football players from 11 high schools in Florida. Participants completed a survey and identified the symptoms and consequences of a concussion among distractors. They also indicated whether they had received education about concussions from a parent, formal education, neither, or both. The most correctly identified symptoms were headache (97%), dizziness (93%), and confusion (90%), and the most correctly identified consequence was persistent headache (93%). Participants reported receiving education from their parents (54%) or from a formal source (60%). Twenty-five percent reported never receiving any education regarding concussions. No correlations were found between the method of education and the knowledge of symptoms or consequences of concussion. The high school football players we surveyed did not have appropriate knowledge of the symptoms and consequences of concussions. Nausea or vomiting, neck pain, grogginess, difficulty concentrating, and personality or behavioral changes were often missed by participants, and only a small proportion correctly identified brain hemorrhage, coma, and death as possible consequences of inappropriate care after a concussion. Even with parents or guardians signing a consent form indicating they discussed concussion awareness with their child, 46% of athletes suggested they had not.
Using stable isotopes to associate migratory shorebirds with their wintering locations in Argentina
Farmer, A.H.; Abril, M.; Fernandez, M.; Torres, J.; Kester, C.; Bern, C.
2004-01-01
We are evaluating the use of stable isotopes to identify the wintering areas of Neotropical migratory shorebirds in Argentina. Our goal is to associate individual birds, captured on the breeding grounds or in migration, with specific winter sites, thereby helping to identify distinct areas used by different subpopulations. In January and February 2002 and 2003, we collected flight feathers from shorebirds at 23 wintering sites distributed across seven provinces in Argentina (n = 170). Feather samples were prepared and analyzed for δ13C, δ15N, δ34S, δ18O and δD by continuous flow methods. A discriminant function based on deuterium alone was not an accurate predictor of a shorebird's province of origin, ranging from 8% correct (Santiago del Estero) to 80% correct (Santa Cruz). When other isotopes were included, the prediction accuracy increased substantially (from 56% in Buenos Aires to 100% in Tucumán). The improvement in accuracy was due to C and N, which separated D-depleted sites in the Andes from those in the south, and to the inclusion of S, which separated sites with respect to their distance from the Atlantic. We also were able to correctly discriminate shorebirds between two closely spaced sites within the province of Tierra del Fuego. These results suggest the feasibility of identifying the origin of a shorebird at a provincial level of accuracy, as well as uniquely identifying birds from some closely spaced sites. There is a high degree of intra- and inter-bird variability, especially in the Pampas region, where there is a wide variety of wetland/water conditions. In that important shorebird region, the variability itself may in fact be the “signature.” Future addition of trace elements to the analyses may improve predictions based solely on stable isotopes.
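A discriminant analysis of this kind can be prototyped with standard tools. The sketch below assumes a hypothetical feather-isotope table with made-up province centroids; it illustrates only the classification workflow, not the authors' data or model.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical feather-isotope table: one row per bird, columns
# d13C, d15N, d34S, d18O, dD (per-mil values, all fabricated).
rng = np.random.default_rng(2)
provinces = np.repeat(["Buenos Aires", "Tucuman", "Tierra del Fuego"], 30)
centers = {"Buenos Aires":     [-24, 10,  8, 12,  -60],
           "Tucuman":          [-20,  6,  4, 15,  -90],
           "Tierra del Fuego": [-26, 12, 14,  8, -120]}
X = np.vstack([rng.normal(centers[p], 2.0) for p in provinces])

# Multi-isotope discriminant function, scored by cross-validation.
lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, X, provinces, cv=5).mean()
print(f"cross-validated assignment accuracy: {accuracy:.0%}")
```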
Identification and Correction of Sample Mix-Ups in Expression Genetic Data: A Case Study
Broman, Karl W.; Keller, Mark P.; Broman, Aimee Teo; Kendziorski, Christina; Yandell, Brian S.; Sen, Śaunak; Attie, Alan D.
2015-01-01
In a mouse intercross with more than 500 animals and genome-wide gene expression data on six tissues, we identified a high proportion (18%) of sample mix-ups in the genotype data. Local expression quantitative trait loci (eQTL; genetic loci influencing gene expression) with extremely large effect were used to form a classifier to predict an individual’s eQTL genotype based on expression data alone. By considering multiple eQTL and their related transcripts, we identified numerous individuals whose predicted eQTL genotypes (based on their expression data) did not match their observed genotypes, and then went on to identify other individuals whose genotypes did match the predicted eQTL genotypes. The concordance of predictions across six tissues indicated that the problem was due to mix-ups in the genotypes (although we further identified a small number of sample mix-ups in each of the six panels of gene expression microarrays). Consideration of the plate positions of the DNA samples indicated a number of off-by-one and off-by-two errors, likely the result of pipetting errors. Such sample mix-ups can be a problem in any genetic study, but eQTL data allow us to identify, and even correct, such problems. Our methods have been implemented in an R package, R/lineup. PMID:26290572
Identification and Correction of Sample Mix-Ups in Expression Genetic Data: A Case Study.
Broman, Karl W; Keller, Mark P; Broman, Aimee Teo; Kendziorski, Christina; Yandell, Brian S; Sen, Śaunak; Attie, Alan D
2015-08-19
In a mouse intercross with more than 500 animals and genome-wide gene expression data on six tissues, we identified a high proportion (18%) of sample mix-ups in the genotype data. Local expression quantitative trait loci (eQTL; genetic loci influencing gene expression) with extremely large effect were used to form a classifier to predict an individual's eQTL genotype based on expression data alone. By considering multiple eQTL and their related transcripts, we identified numerous individuals whose predicted eQTL genotypes (based on their expression data) did not match their observed genotypes, and then went on to identify other individuals whose genotypes did match the predicted eQTL genotypes. The concordance of predictions across six tissues indicated that the problem was due to mix-ups in the genotypes (although we further identified a small number of sample mix-ups in each of the six panels of gene expression microarrays). Consideration of the plate positions of the DNA samples indicated a number of off-by-one and off-by-two errors, likely the result of pipetting errors. Such sample mix-ups can be a problem in any genetic study, but eQTL data allow us to identify, and even correct, such problems. Our methods have been implemented in an R package, R/lineup. Copyright © 2015 Broman et al.
Lee, Kyuhyun; Youn, Yong; Han, Seungwu
2017-01-01
We identify ground-state collinear spin ordering in various antiferromagnetic transition metal oxides by constructing the Ising model from first-principles results and applying a genetic algorithm to find its minimum energy state. The present method can correctly reproduce the ground state of well-known antiferromagnetic oxides such as NiO, Fe2O3, Cr2O3 and MnO2. Furthermore, we identify the ground-state spin ordering in more complicated materials such as Mn3O4 and CoCr2O4. PMID:28458746
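The Ising-plus-genetic-algorithm search can be illustrated compactly. In the sketch below the exchange couplings J_ij are random stand-ins for the first-principles values, and the GA settings are arbitrary; it demonstrates the minimization strategy only.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 16
# Exchange couplings J_ij would come from first-principles total energies;
# here a random symmetric, antiferromagnetic-leaning matrix stands in.
J = rng.normal(-0.5, 0.3, (N, N))
J = np.triu(J, 1); J = J + J.T

def energy(s):
    return -0.5 * s @ J @ s            # Ising energy E = -sum_ij J_ij s_i s_j

def genetic_minimum(pop=60, gens=300, pmut=0.05):
    population = rng.choice([-1, 1], size=(pop, N))
    for _ in range(gens):
        fitness = np.array([energy(ind) for ind in population])
        order = np.argsort(fitness)            # lower energy = fitter
        parents = population[order[:pop // 2]]
        cut = rng.integers(1, N, size=pop // 2)
        children = np.array([np.concatenate((parents[i][:c],
                                             parents[(i + 1) % len(parents)][c:]))
                             for i, c in enumerate(cut)])   # one-point crossover
        flips = rng.random(children.shape) < pmut
        children[flips] *= -1                  # point mutations (spin flips)
        population = np.vstack((parents, children))
    best = population[np.argmin([energy(ind) for ind in population])]
    return best, energy(best)

spins, e_min = genetic_minimum()
print("collinear spin ordering:", spins, "energy:", round(float(e_min), 3))
```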
Exploratory Mediation Analysis via Regularization
Serang, Sarfaraz; Jacobucci, Ross; Brimhall, Kim C.; Grimm, Kevin J.
2017-01-01
Exploratory mediation analysis refers to a class of methods used to identify a set of potential mediators of a process of interest. Despite its exploratory nature, conventional approaches are rooted in confirmatory traditions, and as such have limitations in exploratory contexts. We propose a two-stage approach called exploratory mediation analysis via regularization (XMed) to better address these concerns. We demonstrate that this approach is able to correctly identify mediators more often than conventional approaches and that its estimates are unbiased. Finally, this approach is illustrated through an empirical example examining the relationship between college acceptance and enrollment. PMID:29225454
A two-component Bayesian mixture model to identify implausible gestational age.
Mohammadian-Khoshnoud, Maryam; Moghimbeigi, Abbas; Faradmal, Javad; Yavangi, Mahnaz
2016-01-01
Background: Birth weight and gestational age are two important variables in obstetric research. The primary measure of gestational age is based on a mother's recall of her last menstrual period. This recall may cause random or systematic errors. Therefore, the objective of this study is to utilize a Bayesian mixture model to identify implausible gestational ages. Methods: In this cross-sectional study, medical documents of 502 preterm infants born and hospitalized in Hamadan Fatemieh Hospital from 2009 to 2013 were gathered. Preterm infants were classified as less than 28 weeks and 28 to 31 weeks. A two-component Bayesian mixture model was utilized to identify implausible gestational ages; the first component shows the probability of correct and the second one the probability of incorrect classification of gestational ages. The data were analyzed with OpenBUGS 3.2.2 and the 'coda' package of R 3.1.1. Results: The means (SD) of the second component for less than 28 weeks and 28 to 31 weeks were 1179 (0.0123) and 1620 (0.0074), respectively. These values were larger than the means of the first component for both groups, which were 815.9 (0.0123) and 1061 (0.0074), respectively. Conclusion: Errors in recording the gestational ages of these two groups of preterm infants included recording the gestational age as less than the actual value at birth. Therefore, developing scientific methods to correct these errors is essential to providing desirable health services and computing accurate health indicators.
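As a rough illustration of the mixture idea, the sketch below fits a two-component Gaussian mixture by EM (a frequentist stand-in for the Bayesian model described above) and flags records assigned to the implausible component; the toy birth weights and the component structure are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy birth weights (g) for a "<28 weeks" group: most plausible values
# around 900 g, plus a mislabeled minority near 1600 g (term-like weights).
rng = np.random.default_rng(10)
weights = np.concatenate([rng.normal(900, 150, 180),
                          rng.normal(1600, 150, 20)]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(weights)
implausible = np.argmax(gm.means_.ravel())     # higher-mean component
flags = gm.predict(weights) == implausible
print(f"{flags.sum()} records flagged as implausible gestational age")
```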
Treatment of hypophosphatemia in the intensive care unit: a review
2010-01-01
Introduction Currently no evidence-based guideline exists for the approach to hypophosphatemia in critically ill patients. Methods We performed a narrative review of the medical literature to identify the incidence, symptoms, and treatment of hypophosphatemia in critically ill patients. Specifically, we searched for answers to the questions whether correction of hypophosphatemia is associated with improved outcome, and whether a certain treatment strategy is superior. Results Incidence: hypophosphatemia is frequently encountered in the intensive care unit; and critically ill patients are at increased risk for developing hypophosphatemia due to the presence of multiple causal factors. Symptoms: hypophosphatemia may lead to a multitude of symptoms, including cardiac and respiratory failure. Treatment: hypophosphatemia is generally corrected when it is symptomatic or severe. However, although multiple studies confirm the efficacy and safety of intravenous phosphate administration, it remains uncertain when and how to correct hypophosphatemia. Outcome: in some studies, hypophosphatemia was associated with higher mortality; a paucity of randomized controlled evidence exists for whether correction of hypophosphatemia improves the outcome in critically ill patients. Conclusions Additional studies addressing the current approach to hypophosphatemia in critically ill patients are required. Studies should focus on the association between hypophosphatemia and morbidity and/or mortality, as well as the effect of correction of this electrolyte disorder. PMID:20682049
Persistent aerial video registration and fast multi-view mosaicing.
Molina, Edgardo; Zhu, Zhigang
2014-05-01
Capturing aerial imagery at high resolutions often leads to very low frame rate video streams, well under full motion video standards, due to bandwidth, storage, and cost constraints. Low frame rates make registration difficult when an aircraft is moving at high speeds or when the global positioning system (GPS) contains large errors or fails. We present a method that takes advantage of persistent cyclic video data collections to perform an online registration with drift correction. We split the persistent aerial imagery collection into individual cycles of the scene, identify and correct the registration errors on the first cycle in a batch operation, and then use the corrected base cycle as a reference pass to register and correct subsequent passes online. A set of multi-view panoramic mosaics is then constructed for each aerial pass for representation, presentation and exploitation of the 3D dynamic scene. These sets of mosaics are all in alignment with the reference cycle, allowing their direct use in change detection, tracking, and 3D reconstruction/visualization algorithms. Stereo viewing with adaptive baselines and varying view angles is realized by choosing a pair of mosaics from a set of multi-view mosaics. Further, the mosaics for the second pass and later can be generated and visualized online, as there is no further batch error correction.
Quantifying errors in trace species transport modeling.
Prather, Michael J; Zhu, Xin; Strahan, Susan E; Steenrod, Stephen D; Rodriguez, Jose M
2008-12-16
One expectation when computationally solving an Earth system model is that a correct answer exists, that with adequate physical approximations and numerical methods our solutions will converge to that single answer. With such hubris, we performed a controlled numerical test of the atmospheric transport of CO(2) using 2 models known for accurate transport of trace species. Resulting differences were unexpectedly large, indicating that in some cases, scientific conclusions may err because of lack of knowledge of the numerical errors in tracer transport models. By doubling the resolution, thereby reducing numerical error, both models show some convergence to the same answer. Now, under realistic conditions, we identify a practical approach for finding the correct answer and thus quantifying the advection error.
Subliminal understanding of negation: unconscious control by subliminal processing of word pairs.
Armstrong, Anna-Marie; Dienes, Zoltan
2013-09-01
A series of five experiments investigated the extent of subliminal processing of negation. Participants were presented with a subliminal instruction to either pick or not pick an accompanying noun, followed by a choice of two nouns. By employing subjective measures to determine individual thresholds of subliminal priming, the results of these studies indicated that participants were able to identify the correct noun of the pair--even when the correct noun was specified by negation. Furthermore, using a grey-scale contrast method of masking, Experiment 5 confirmed that these priming effects were evidenced in the absence of partial awareness, and without the effect being attributed to the retrieval of stimulus-response links established during conscious rehearsal. Copyright © 2013 Elsevier Inc. All rights reserved.
Correction of elevation offsets in multiple co-located lidar datasets
Thompson, David M.; Dalyander, P. Soupy; Long, Joseph W.; Plant, Nathaniel G.
2017-04-07
Introduction Topographic elevation data collected with airborne light detection and ranging (lidar) can be used to analyze short- and long-term changes to beach and dune systems. Analysis of multiple lidar datasets at Dauphin Island, Alabama, revealed systematic, island-wide elevation differences on the order of tens of centimeters that were not attributable to real-world change and, therefore, were likely to represent systematic sampling offsets. These offsets vary between the datasets but appear spatially consistent within a given survey. This report describes a method that was developed to identify and correct offsets between lidar datasets collected over the same site at different times so that true elevation changes over time, associated with sediment accumulation or erosion, can be analyzed.
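One simple way to implement such an inter-survey correction, assuming a mask of ground believed stable between surveys, is a robust (median) offset estimate; the sketch below is illustrative and not the authors' procedure.

```python
import numpy as np

def elevation_offset(survey, reference, stable_mask):
    """Systematic vertical offset between two co-located lidar DEMs,
    estimated as the median elevation difference over ground assumed
    stable between surveys (e.g., roads or undisturbed uplands)."""
    diff = survey[stable_mask] - reference[stable_mask]
    return np.nanmedian(diff)

# Toy 1D transects standing in for island-wide DEM grids (meters).
rng = np.random.default_rng(4)
reference = 2.0 + 0.3 * rng.random(1000)
survey = reference + 0.18 + rng.normal(0, 0.05, 1000)   # 18 cm systematic offset
stable = np.ones(1000, dtype=bool)

offset = elevation_offset(survey, reference, stable)
corrected = survey - offset            # remove the inter-survey bias
print(f"estimated offset: {offset * 100:.1f} cm")
```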
Vector boson production in pPb and PbPb collisions at the LHC and its impact on nCTEQ15 PDFs
NASA Astrophysics Data System (ADS)
Kusina, A.; Lyonnet, F.; Clark, D. B.; Godat, E.; Ježo, T.; Kovařík, K.; Olness, F. I.; Schienbein, I.; Yu, J. Y.
2017-07-01
We provide a comprehensive comparison of W^± / Z vector boson production data in pPb and PbPb collisions at the LHC with predictions obtained using the nCTEQ15 PDFs. We identify the measurements which have the largest potential impact on the PDFs, and estimate the effect of including these data using a Bayesian reweighting method. We find this data set can provide information regarding both the nuclear corrections and the heavy flavor (strange quark) PDF components. Because the parton flavor determination/separation for the proton depends on nuclear corrections (from heavy-target DIS, for example), this information can also help improve the proton PDFs.
A formal theory of feature binding in object perception.
Ashby, F G; Prinzmetal, W; Ivry, R; Maddox, W T
1996-01-01
Visual objects are perceived correctly only if their features are identified and then bound together. Illusory conjunctions result when feature identification is correct but an error occurs during feature binding. A new model is proposed that assumes feature binding errors occur because of uncertainty about the location of visual features. This model accounted for data from 2 new experiments better than a model derived from A. M. Treisman and H. Schmidt's (1982) feature integration theory. The traditional method for detecting the occurrence of true illusory conjunctions is shown to be fundamentally flawed. A reexamination of 2 previous studies provided new insights into the role of attention and location information in object perception and a reinterpretation of the deficits in patients who exhibit attentional disorders.
Automatic classification of bottles in crates
NASA Astrophysics Data System (ADS)
Aas, Kjersti; Eikvil, Line; Bremnes, Dag; Norbryhn, Andreas
1995-03-01
This paper presents a statistical method for classification of bottles in crates for use in automatic bottle-return machines. For these machines to reimburse the correct deposit, reliable recognition is important. The images are acquired by a laser range scanner co-registering the distance to the object and the strength of the reflected signal. The objective is to identify the crate and the bottles from a library with a number of legal types. Bottles with significantly different sizes are separated using quite simple methods, while a more sophisticated recognizer is required to distinguish the more similar bottle types. Good results have been obtained when testing the method on bottle types which are difficult to distinguish using simple methods.
Global Surveillance of Emerging Influenza Virus Genotypes by Mass Spectrometry
Sampath, Rangarajan; Russell, Kevin L.; Massire, Christian; Eshoo, Mark W.; Harpin, Vanessa; Blyn, Lawrence B.; Melton, Rachael; Ivy, Cristina; Pennella, Thuy; Li, Feng; Levene, Harold; Hall, Thomas A.; Libby, Brian; Fan, Nancy; Walcott, Demetrius J.; Ranken, Raymond; Pear, Michael; Schink, Amy; Gutierrez, Jose; Drader, Jared; Moore, David; Metzgar, David; Addington, Lynda; Rothman, Richard; Gaydos, Charlotte A.; Yang, Samuel; St. George, Kirsten; Fuschino, Meghan E.; Dean, Amy B.; Stallknecht, David E.; Goekjian, Ginger; Yingst, Samuel; Monteville, Marshall; Saad, Magdi D.; Whitehouse, Chris A.; Baldwin, Carson; Rudnick, Karl H.; Hofstadler, Steven A.; Lemon, Stanley M.; Ecker, David J.
2007-01-01
Background Effective influenza surveillance requires new methods capable of rapid and inexpensive genomic analysis of evolving viral species for pandemic preparedness, to understand the evolution of circulating viral species, and for vaccine strain selection. We have developed one such approach based on previously described broad-range reverse transcription PCR/electrospray ionization mass spectrometry (RT-PCR/ESI-MS) technology. Methods and Principal Findings Analysis of base compositions of RT-PCR amplicons from influenza core gene segments (PB1, PB2, PA, M, NS, NP) are used to provide sub-species identification and infer influenza virus H and N subtypes. Using this approach, we detected and correctly identified 92 mammalian and avian influenza isolates, representing 30 different H and N types, including 29 avian H5N1 isolates. Further, direct analysis of 656 human clinical respiratory specimens collected over a seven-year period (1999–2006) showed correct identification of the viral species and subtypes with >97% sensitivity and specificity. Base composition derived clusters inferred from this analysis showed 100% concordance to previously established clades. Ongoing surveillance of samples from the recent influenza virus seasons (2005–2006) showed evidence for emergence and establishment of new genotypes of circulating H3N2 strains worldwide. Mixed viral quasispecies were found in approximately 1% of these recent samples providing a view into viral evolution. Conclusion/Significance Thus, rapid RT-PCR/ESI-MS analysis can be used to simultaneously identify all species of influenza viruses with clade-level resolution, identify mixed viral populations and monitor global spread and emergence of novel viral genotypes. This high-throughput method promises to become an integral component of influenza surveillance. PMID:17534439
Identification of causal genes for complex traits
Hormozdiari, Farhad; Kichaev, Gleb; Yang, Wen-Yun; Pasaniuc, Bogdan; Eskin, Eleazar
2015-01-01
Motivation: Although genome-wide association studies (GWAS) have identified thousands of variants associated with common diseases and complex traits, only a handful of these variants are validated to be causal. We consider ‘causal variants’ as variants which are responsible for the association signal at a locus. As opposed to association studies that benefit from linkage disequilibrium (LD), the main challenge in identifying causal variants at associated loci lies in distinguishing among the many closely correlated variants due to LD. This is particularly important for model organisms such as inbred mice, where LD extends much further than in human populations, resulting in large stretches of the genome with significantly associated variants. Furthermore, these model organisms are highly structured and require correction for population structure to remove potential spurious associations. Results: In this work, we propose CAVIAR-Gene (CAusal Variants Identification in Associated Regions), a novel method that is able to operate across large LD regions of the genome while also correcting for population structure. A key feature of our approach is that it provides as output a minimally sized set of genes that captures the genes which harbor causal variants with probability ρ. Through extensive simulations, we demonstrate that our method not only speeds up computation, but also has an average recall rate 10% higher than the existing approaches. We validate our method using real mouse high-density lipoprotein (HDL) data and show that CAVIAR-Gene is able to identify Apoa2 (a gene known to harbor causal variants for HDL), while reducing the number of genes that need to be tested for functionality by a factor of 2. Availability and implementation: Software is freely available for download at genetics.cs.ucla.edu/caviar. Contact: eeskin@cs.ucla.edu PMID:26072484
Identification of causal genes for complex traits.
Hormozdiari, Farhad; Kichaev, Gleb; Yang, Wen-Yun; Pasaniuc, Bogdan; Eskin, Eleazar
2015-06-15
Although genome-wide association studies (GWAS) have identified thousands of variants associated with common diseases and complex traits, only a handful of these variants are validated to be causal. We consider 'causal variants' as variants which are responsible for the association signal at a locus. As opposed to association studies that benefit from linkage disequilibrium (LD), the main challenge in identifying causal variants at associated loci lies in distinguishing among the many closely correlated variants due to LD. This is particularly important for model organisms such as inbred mice, where LD extends much further than in human populations, resulting in large stretches of the genome with significantly associated variants. Furthermore, these model organisms are highly structured and require correction for population structure to remove potential spurious associations. In this work, we propose CAVIAR-Gene (CAusal Variants Identification in Associated Regions), a novel method that is able to operate across large LD regions of the genome while also correcting for population structure. A key feature of our approach is that it provides as output a minimally sized set of genes that captures the genes which harbor causal variants with probability ρ. Through extensive simulations, we demonstrate that our method not only speeds up computation, but also has an average recall rate 10% higher than the existing approaches. We validate our method using real mouse high-density lipoprotein (HDL) data and show that CAVIAR-Gene is able to identify Apoa2 (a gene known to harbor causal variants for HDL), while reducing the number of genes that need to be tested for functionality by a factor of 2. Software is freely available for download at genetics.cs.ucla.edu/caviar. © The Author 2015. Published by Oxford University Press.
ERIC Educational Resources Information Center
Spahr, Anthony J.; Litvak, Leonid M.; Dorman, Michael F.; Bohanan, Ashley R.; Mishra, Lakshmi N.
2008-01-01
Purpose: To determine why, in a pilot study, only 1 of 11 cochlear implant listeners was able to reliably identify a frequency-to-electrode map where the intervals of a familiar melody were played on the correct musical scale. The authors sought to validate their method and to assess the effect of pitch strength on musical scale recognition in…
NASA Technical Reports Server (NTRS)
Moore, J. H.
1973-01-01
A model was developed for the switching radiometer utilizing a continuous method of calibration. Sources of system degradation were identified and include losses and voltage standing wave ratios in front of the receiver input. After the three modes of operation were computed, expressions were developed for the normalized radiometer output, the minimum detectable signal (normalized RMS temperature fluctuation), sensitivity, and accuracy correction factors.
ERIC Educational Resources Information Center
le Clercq, Carlijn M. P.; van der Schroeff, Marc P.; Rispens, Judith E.; Ruytjens, Liesbet; Goedegebure, André; van Ingen, Gijs; Franken, Marie-Christine
2017-01-01
Purpose: The purpose of this research note was to validate a simplified version of the Dutch nonword repetition task (NWR; Rispens & Baker, 2012). The NWR was shortened and scoring was transformed to correct/incorrect nonwords, resulting in the shortened NWR (NWR-S). Method: NWR-S and NWR performance were compared in the previously published…
Fault diagnosis of rolling element bearings with a spectrum searching method
NASA Astrophysics Data System (ADS)
Li, Wei; Qiu, Mingquan; Zhu, Zhencai; Jiang, Fan; Zhou, Gongbo
2017-09-01
Rolling element bearing faults in rotating systems are observed as impulses in the vibration signals, which are usually buried in noise. In order to effectively detect faults in bearings, a novel spectrum searching method is proposed in this paper. The structural information of the spectrum (SIOS) on a predefined frequency grid is constructed through a searching algorithm, such that the harmonics of the impulses generated by faults can be clearly identified and analyzed. Local peaks of the spectrum are projected onto certain components of the frequency grid, and then the SIOS can interpret the spectrum via the number and power of harmonics projected onto components of the frequency grid. Finally, bearings can be diagnosed based on the SIOS by identifying its dominant or significant components. The mathematical formulation is developed to guarantee the correct construction of the SIOS through searching. The effectiveness of the proposed method is verified with both simulated and experimental bearing signals.
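A toy version of searching a spectrum for fault harmonics can be written directly on the FFT magnitude; the sketch below scores candidate fault frequencies by the spectral power near their first few harmonics. The fault frequency, tolerance, and synthetic signal are assumptions, and the scoring is a crude stand-in for the paper's SIOS construction.

```python
import numpy as np

fs = 12_000                            # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
bpfo = 107.0                           # assumed outer-race fault frequency (Hz)

# Synthetic bearing signal: periodic impulses at BPFO with a decaying
# ring-down, buried in broadband noise.
impulses = (np.sin(2 * np.pi * bpfo * t) > 0.999).astype(float)
signal = np.convolve(impulses, np.exp(-t[:200] * 800), mode="same")
signal += 0.2 * np.random.default_rng(5).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def harmonic_score(f0, n_harmonics=5, tol=2.0):
    """Sum the strongest spectral peak near each of the first few
    harmonics of f0, i.e. project local peaks onto a frequency grid."""
    score = 0.0
    for k in range(1, n_harmonics + 1):
        band = np.abs(freqs - k * f0) <= tol
        score += spectrum[band].max()
    return score

candidates = np.arange(50.0, 200.0, 0.5)
best = candidates[np.argmax([harmonic_score(f) for f in candidates])]
print(f"dominant fault frequency: {best:.1f} Hz")
```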
Nessen, Merel A; van der Zwaan, Dennis J; Grevers, Sander; Dalebout, Hans; Staats, Martijn; Kok, Esther; Palmblad, Magnus
2016-05-11
Proteomics methodology has seen increased application in food authentication, including tandem mass spectrometry of targeted species-specific peptides in raw, processed, or mixed food products. We have previously described an alternative principle that uses untargeted data acquisition and spectral library matching, essentially spectral counting, to compare and identify samples without the need for genomic sequence information in food species populations. Here, we present an interlaboratory comparison demonstrating how a method based on this principle performs in a realistic context. We also increasingly challenge the method by using data from different types of mass spectrometers, by trying to distinguish closely related and commercially important flatfish, and by analyzing heavily contaminated samples. The method was found to be robust in different laboratories, and 94-97% of the analyzed samples were correctly identified, including all processed and contaminated samples.
Moura, P; Barraud, S; Baptista, M B; Malard, F
2011-01-01
Nowadays, stormwater infiltration systems are frequently used because of their ability to reduce flows and volumes in downstream sewers, decrease overflows in surface waters and make it possible to recharge groundwater. Moreover, they come in various forms with different uses. Despite these advantages the long term sustainability of these systems is questionable and their real performances have to be assessed taking into account various and sometimes conflicting aspects. To address this problem a decision support system is proposed. It is based on a multicriteria method built to help managers to evaluate the performance of an existing infiltration system at different stages of its lifespan and identify whether it performs correctly or not, according to environmental, socio-economic, technical and sanitary aspects. The paper presents successively: the performance indicators and the way they were built, the multicriteria method to identify if the system works properly and a case study.
Julian, B.R.; Evans, J.R.; Pritchard, M.J.; Foulger, G.R.
2000-01-01
Some computer programs based on the Aki-Christofferson-Husebye (ACH) method of teleseismic tomography contain an error caused by identifying local grid directions with azimuths on the spherical Earth. This error, which is most severe at high latitudes, introduces systematic errors into computed ray paths and distorts inferred Earth models. It is best dealt with by explicitly correcting for the difference between true and grid directions. Methods for computing these directions are presented in this article and are likely to be useful in many other kinds of regional geophysical studies that use Cartesian coordinates and flat-earth approximations.
DNA-Based Methods in the Immunohematology Reference Laboratory
Denomme, Gregory A
2010-01-01
Although hemagglutination serves the immunohematology reference laboratory well, when used alone, it has limited capability to resolve complex problems. This overview discusses how molecular approaches can be used in the immunohematology reference laboratory. In order to apply molecular approaches to immunohematology, knowledge of genes, DNA-based methods, and the molecular bases of blood groups are required. When applied correctly, DNA-based methods can predict blood groups to resolve ABO/Rh discrepancies, identify variant alleles, and screen donors for antigen-negative units. DNA-based testing in immunohematology is a valuable tool used to resolve blood group incompatibilities and to support patients in their transfusion needs. PMID:21257350
Kurzhunov, Dmitry; Borowiak, Robert; Reisert, Marco; Özen, Ali Caglar; Bock, Michael
2018-05-16
To provide a data post-processing method that corrects for partial volume effects (PVE) and fast T2* decay in dynamic 17O MRI for the mapping of cerebral metabolic rates of oxygen consumption (CMRO2). CMRO2 is altered in neurodegenerative diseases and tumors and can be measured after 17O gas inhalation using dynamic 17O MRI. CMRO2 quantification is difficult because of PVE. To correct for PVE, a direct estimation of the MR images (DIESIS) method is proposed and used in 4 dynamic 17O MRI data sets of a healthy volunteer acquired on a 3T MRI system. With DIESIS, 17O MR signal time curves in selected regions were directly estimated based on parcellation of a coregistered 1H MPRAGE image. Profile likelihood analysis of the DIESIS method showed identifiability of CMRO2. In white matter (WM), DIESIS reduced CMRO2 from 0.97 ± 0.25 µmol/g tissue/min with Kaiser-Bessel gridding reconstruction to 0.85 ± 0.21 µmol/g tissue/min, whereas in gray matter (GM) it increased from 1.3 ± 0.31 µmol/g tissue/min to 1.86 ± 0.36 µmol/g tissue/min; both values are closer to the literature values from 15O-PET studies. DIESIS provided an increased separation of CMRO2 values in GM and WM brain regions and corrected for partial volume effects in 17O MRI inhalation experiments. DIESIS could also be applied to more heterogeneous tissues such as glioblastomas if subregions of the tumor can be represented as additional parcels. © 2018 International Society for Magnetic Resonance in Medicine.
Evaluation and updating of the Medical Malacology Collection (Fiocruz-CMM) using molecular taxonomy.
Aguiar-Silva, Cryslaine; Mendonça, Cristiane Lafetá Furtado; da Cunha Kellis Pinheiro, Pedro Henrique; Mesquita, Silvia Gonçalves; Carvalho, Omar Dos Santos; Caldeira, Roberta Lima
2014-01-01
The Medical Malacology Collection (Coleção de Malacologia Médica, Fiocruz-CMM) is a depository of medically relevant mollusks, especially from the genus Biomphalaria, which includes the hosts of Schistosoma mansoni. Taxonomic studies of these snails have traditionally focused on the morphology of the reproductive system. However, determination of some species is complicated by the similarity shown by these characters. Molecular techniques have been used to try to overcome this problem. The Fiocruz-CMM uses morphological and/or molecular methods for species identification. However, part of the collection has not been identified by molecular techniques, and some specimens remained unidentified. The present study employs polymerase chain reaction-based analysis of restriction fragment length polymorphisms (PCR-RFLP) to evaluate the identification of Biomphalaria in the Fiocruz-CMM, correct existing errors, assess the suitability of taxonomic synonyms, and identify unknown specimens. The results indicated that 56.7% of the mollusk specimens were correctly identified, 4.0% were wrongly identified, and 0.4% were identified under taxonomic synonyms. Additionally, the PCR-RFLP analysis identified for the first time 17.6% of the specimens in the Collection. However, 3.1% of the specimens could not be identified because the mollusk tissues were degraded, and 18.2% of the specimens were inconclusively identified, demonstrating the need for new taxonomic studies in this group. The data were used to update the records of the Environmental Information Reference Center (CRIA). This study demonstrates the importance of using more than one technique in taxonomic confirmation and of keeping collection specimens well preserved.
Cettomai, Deanna; Kwasa, Judith; Birbeck, Gretchen L.; Price, Richard W.; Bukusi, Elizabeth A.; Meyer, Ana-Claire
2011-01-01
Background Recent efforts to improve neurological care in resource-limited settings have focused on providing training to non-physician healthcare workers. Methods A one-day neuro-HIV training module emphasizing HIV-associated dementia (HAD) and peripheral neuropathy was provided to 71 health care workers in western Kenya. Pre- and post-tests were administered to 55 participants. Results Mean age of participants was 29 years, 53% were clinical officers and 40% were nurses. Self-reported comfort was significantly higher for treating medical versus neurologic conditions (p<0.001). After training, participants identified more neuropathy etiologies (pre=5.6/9 possible correct etiologies; post=8.0/9; p<0.001). Only 4% of participants at baseline and 6% (p=0.31) post-training could correctly identify HAD diagnostic criteria, though there were fewer mis-identified criteria such as abnormal level of consciousness (pre=82%; post=43%; p<0.001) and hallucinations (pre=57%; post=15%; p<0.001). Conclusions Healthcare workers were more comfortable treating medical than neurological conditions. This training significantly improved knowledge about etiologies of neuropathy and decreased some misconceptions about HAD. PMID:21652049
Comparison of answer-until-correct and full-credit assessments in a team-based learning course.
Farland, Michelle Z; Barlow, Patrick B; Levi Lancaster, T; Franks, Andrea S
2015-03-25
To assess the impact of awarding partial credit to team assessments on team performance and on quality of team interactions using an answer-until-correct method compared to traditional methods of grading (multiple-choice, full-credit). Subjects were students from 3 different offerings of an ambulatory care elective course, taught using team-based learning. The control group (full-credit) consisted of those enrolled in the course when traditional methods of assessment were used (2 course offerings). The intervention group consisted of those enrolled in the course when answer-until-correct method was used for team assessments (1 course offering). Study outcomes included student performance on individual and team readiness assurance tests (iRATs and tRATs), individual and team final examinations, and student assessment of quality of team interactions using the Team Performance Scale. Eighty-four students enrolled in the courses were included in the analysis (full-credit, n=54; answer-until-correct, n=30). Students who used traditional methods of assessment performed better on iRATs (full-credit mean 88.7 (5.9), answer-until-correct mean 82.8 (10.7), p<0.001). Students who used answer-until-correct method of assessment performed better on the team final examination (full-credit mean 45.8 (1.5), answer-until-correct 47.8 (1.4), p<0.001). There was no significant difference in performance on tRATs and the individual final examination. Students who used the answer-until-correct method had higher quality of team interaction ratings (full-credit 97.1 (9.1), answer-until-correct 103.0 (7.8), p=0.004). Answer-until-correct assessment method compared to traditional, full-credit methods resulted in significantly lower scores for iRATs, similar scores on tRATs and individual final examinations, improved scores on team final examinations, and improved perceptions of the quality of team interactions.
Automated general temperature correction method for dielectric soil moisture sensors
NASA Astrophysics Data System (ADS)
Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao
2017-08-01
An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements of local to regional-scale soil moisture monitoring networks. These networks extensively use highly temperature-sensitive dielectric sensors due to their low cost, ease of use and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective at soil moisture monitoring networks with different sensor setups and those that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors which can be commonly used regardless of differences in sensor type, climatic conditions and soil type, without rainfall data. In this work an automated general temperature correction method was developed by adapting previously developed temperature correction algorithms, based on time domain reflectometry (TDR) measurements, to ThetaProbe ML2X, Stevens Hydra Probe II and Decagon Devices EC-TM sensor measurements. The procedure for removing rainy-day effects from SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can eliminate temperature effects from dielectric sensor measurements successfully, even without on-site rainfall data. Furthermore, it was found that the actual daily average of SWC was changed by the temperature effects of dielectric sensors by an error factor comparable to the manufacturer's stated accuracy of ±1%.
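A bare-bones temperature correction of the general kind discussed here can be written as a windowed regression of SWC on sensor temperature; the sketch below is a hedged illustration, not the published algorithm, and the half-hourly toy series is fabricated.

```python
import numpy as np

def temperature_correct(swc, temp, window=48):
    """Remove the temperature-driven diurnal component from a
    dielectric-sensor SWC series: regress detrended SWC on detrended
    sensor temperature over blocks and subtract the fitted part."""
    swc = np.asarray(swc, float)
    temp = np.asarray(temp, float)
    corrected = swc.copy()
    for s in range(0, len(swc) - window + 1, window):
        seg = slice(s, s + window)
        x = temp[seg] - temp[seg].mean()
        y = swc[seg] - swc[seg].mean()
        beta = (x * y).sum() / (x * x).sum()   # least-squares slope
        corrected[seg] = swc[seg] - beta * x   # keep the daily mean level
    return corrected

# Half-hourly toy day: constant true SWC plus a temperature artifact.
hours = np.arange(0, 24, 0.5)
temp = 20 + 8 * np.sin(2 * np.pi * (hours - 14) / 24)
swc = 0.25 + 0.004 * (temp - temp.mean())       # pure sensor artifact
print(np.ptp(temperature_correct(swc, temp)))   # ~0 after correction
```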
9 CFR 417.3 - Corrective actions.
Code of Federal Regulations, 2011 CFR
2011-01-01
Title 9, Animals and Animal Products; Hazard Analysis and Critical Control Point (HACCP) Systems; § 417.3 Corrective actions. (a) The written HACCP plan shall identify the corrective action to be followed in response to a deviation from a critical limit...
Darbani, Behrooz; Stewart, C Neal; Noeparvar, Shahin; Borg, Søren
2014-10-20
This report is the first to investigate cell number as a potential source of inter-treatment bias in gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell number. Based on inter-treatment variations of reference genes, we introduce an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies of correction methods are influenced by the inter-treatment bias as well as the inter-replicate variance. Therefore, we recommend inspecting both bias sources in order to apply the most efficient correction method. As an alternative correction strategy, sequential application of different correction approaches is also advised. Copyright © 2014 Elsevier B.V. All rights reserved.
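A reference-gene rescaling illustrating the correction idea can be sketched as follows; the matrix layout, choice of reference rows, and toy bias are assumptions for demonstration only.

```python
import numpy as np

def cell_number_correction(expr, reference_rows):
    """Rescale each sample so that the mean signal of reference
    (housekeeping) genes is equal across samples, removing an
    inter-treatment cell-number / RNA-yield bias."""
    expr = np.asarray(expr, float)              # genes x samples
    ref_level = expr[reference_rows].mean(axis=0)
    factors = ref_level.mean() / ref_level      # per-sample correction factor
    return expr * factors, factors

# Toy matrix: 5 genes x 4 samples; samples 3-4 carry 20% more total RNA.
rng = np.random.default_rng(6)
base = rng.uniform(50, 500, size=(5, 1)) * np.ones((5, 4))
biased = base * np.array([1.0, 1.0, 1.2, 1.2])
corrected, f = cell_number_correction(biased, reference_rows=[0, 1])
print("correction factors:", np.round(f, 2))
```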
NASA Astrophysics Data System (ADS)
Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus
2018-04-01
Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
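To make the combined correction concrete, the sketch below scores a zero-order phase angle by how little negative intensity remains after subtracting a Whittaker-smoothed baseline, then grid-searches the phase. This is a simplified single-objective stand-in for the paper's Pareto formulation; the penalty, smoother weight, and toy Lorentzian spectrum are assumptions.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def whittaker(y, lam=1e5):
    """Whittaker smoother: z minimizing |y - z|^2 + lam |D2 z|^2."""
    n = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    return spsolve((sparse.identity(n) + lam * D.T @ D).tocsc(), y)

def objective(phi, spec):
    """Score a zero-order phase phi by the negative residual intensity
    left after baseline subtraction: a correctly phased absorption-mode
    spectrum should sit on or above its baseline."""
    real = (spec * np.exp(1j * phi)).real
    residual = real - whittaker(real)
    return np.sum(np.minimum(residual, 0.0) ** 2)

# Toy spectrum: two Lorentzian lines (absorption + dispersion parts),
# de-phased by 0.8 rad, on a slowly curving baseline.
x = np.linspace(-1.0, 1.0, 400)
u1, u2 = 60 * (x - 0.3), 45 * (x + 0.4)
line = (1 / (1 + u1**2) + 1j * u1 / (1 + u1**2)
        + 1 / (1 + u2**2) + 1j * u2 / (1 + u2**2))
spec = line * np.exp(-1j * 0.8) + 0.05 * x**2

phis = np.linspace(-np.pi, np.pi, 721)          # crude grid search
best = phis[np.argmin([objective(p, spec) for p in phis])]
print(f"recovered zero-order phase: {best:.2f} rad (true 0.8)")
```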
Drug exposure in register-based research—An expert-opinion based evaluation of methods
Taipale, Heidi; Koponen, Marjaana; Tolppanen, Anna-Maija; Hartikainen, Sirpa; Ahonen, Riitta; Tiihonen, Jari
2017-01-01
Background In register-based pharmacoepidemiological studies, construction of drug exposure periods from drug purchases is a major methodological challenge. Various methods have been applied but their validity is rarely evaluated. Our objective was to conduct an expert-opinion based evaluation of the correctness of drug use periods produced by different methods. Methods Drug use periods were calculated with three fixed methods: time windows, assumption of one Defined Daily Dose (DDD) per day and one tablet per day, and with PRE2DUP, which is based on modelling of individual drug purchasing behavior. Expert-opinion based evaluation was conducted with 200 randomly selected purchase histories of warfarin, bisoprolol, simvastatin, risperidone and mirtazapine in the MEDALZ-2005 cohort (28,093 persons with Alzheimer's disease). Two experts reviewed purchase histories and judged which methods had joined correct purchases and gave the correct duration for each of 1000 drug exposure periods. Results The evaluated correctness of drug use periods was 70–94% for PRE2DUP and, depending on grace periods and time window lengths, 0–73% for tablet methods, 0–41% for DDD methods and 0–11% for time window methods. The highest rate of evaluated correct solutions in each method class was observed for one tablet per day with a 180-day grace period (TAB_1_180, 43–73%) and one DDD per day with a 180-day grace period (1–41%). Time window methods produced at maximum only 11% correct solutions. The best performing fixed method, TAB_1_180, reached its highest correctness for simvastatin at 73% (95% CI 65–81%), whereas 89% (95% CI 84–94%) of PRE2DUP periods were judged as correct. Conclusions This study shows the inaccuracy of fixed methods and the urgent need for new data-driven methods. In the expert-opinion based evaluation, the lowest error rates were observed with the data-driven method PRE2DUP. PMID:28886089
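The fixed tablet-per-day method with a grace period is straightforward to express in code; the sketch below (a TAB_1_180-style rule, with hypothetical purchase records) shows how purchases are joined into exposure periods.

```python
from datetime import date, timedelta

def tablet_periods(purchases, grace_days=180):
    """TAB_1_180-style fixed method: each purchase covers one tablet per
    day, and a refill within the grace period extends the same period."""
    periods = []
    start = end = None
    for buy_date, n_tablets in sorted(purchases):
        supply_end = buy_date + timedelta(days=int(n_tablets))
        if end is not None and buy_date <= end + timedelta(days=grace_days):
            end = max(end, supply_end)          # join with the ongoing period
        else:
            if start is not None:
                periods.append((start, end))
            start, end = buy_date, supply_end   # open a new period
    if start is not None:
        periods.append((start, end))
    return periods

# Hypothetical purchase history: (date, number of tablets dispensed).
purchases = [(date(2010, 1, 5), 100), (date(2010, 4, 20), 100),
             (date(2011, 3, 1), 100)]           # long break before last buy
for p in tablet_periods(purchases):
    print(p[0], "->", p[1])
```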
A study on scattering correction for γ-photon 3D imaging test method
NASA Astrophysics Data System (ADS)
Xiao, Hui; Zhao, Min; Liu, Jiantang; Chen, Hao
2018-03-01
A pair of 511 keV γ-photons is generated during a positron annihilation, with directions differing by 180°. The path and energy information can be utilized to form a 3D imaging test method in the industrial domain. However, scattered γ-photons are the major factor limiting the imaging precision of the test method. This study proposes a γ-photon single-scattering correction method from the perspective of spatial geometry. The method first determines the possible scattering points when a scattered γ-photon pair hits the detector pair. The range of scattering angles can then be calculated according to the energy window. Finally, the number of scattered γ-photons is estimated from the attenuation of the total γ-photons along their moving paths. The corrected γ-photons are obtained by deducting the scattered γ-photons from the original ones. Two experiments are conducted to verify the effectiveness of the proposed scattering correction method. The results show that the proposed method can efficiently correct scattered γ-photons and improve the test accuracy.
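The energy-window-to-angle step follows directly from the Compton relation: for 511 keV photons, E' = E0 / (2 - cos(theta)), so the lower bound of the energy window fixes the largest accepted single-scattering angle. A small sketch, with illustrative window bounds:

```python
import numpy as np

E0 = 511.0   # keV, annihilation photon energy (E0 / m_e c^2 = 1)

def max_scatter_angle(e_lower):
    """Largest single-scattering angle still accepted by the energy
    window. From the Compton relation for 511 keV photons,
    E' = E0 / (2 - cos(theta)), hence cos(theta) = 2 - E0 / E'."""
    cos_theta = 2.0 - E0 / e_lower
    return np.degrees(np.arccos(cos_theta))

for lower in (350.0, 425.0, 450.0):
    print(f"window [{lower:.0f}, 511] keV -> "
          f"theta_max = {max_scatter_angle(lower):.1f} deg")
```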
Suzuki, Ryo; Ito, Kohta; Lee, Taeyong; Ogihara, Naomichi
2017-12-01
Identifying the viscous properties of the plantar soft tissue is crucial not only for understanding the dynamic interaction of the foot with the ground during locomotion, but also for development of improved footwear products and therapeutic footwear interventions. In the present study, the viscous and hyperelastic material properties of the plantar soft tissue were experimentally identified using a spherical indentation test and an analytical contact model of the spherical indentation test. Force-relaxation curves of the heel pads were obtained from the indentation experiment. The curves were fit to the contact model incorporating a five-element Maxwell model to identify the viscous material parameters. The finite element method with the experimentally identified viscoelastic parameters could successfully reproduce the measured force-relaxation curves, indicating the material parameters were correctly estimated using the proposed method. Although there are some methodological limitations, the proposed framework to identify the viscous material properties may facilitate the development of subject-specific finite element modeling of the foot and other biological materials. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
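The force-relaxation response of a five-element Maxwell model (an equilibrium spring plus two Maxwell arms) reduces to a two-term Prony series; the sketch below fits such a curve to synthetic hold-phase data. All parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, f_inf, a1, tau1, a2, tau2):
    """Force relaxation of a five-element Maxwell model: equilibrium
    force plus two exponential decay terms (a two-term Prony series)."""
    return f_inf + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic indentation hold phase: 30 s of force decay (N).
t = np.linspace(0, 30, 300)
truth = relaxation(t, 2.0, 1.2, 0.8, 0.9, 8.0)
force = truth + 0.02 * np.random.default_rng(7).normal(size=t.size)

p0 = (1.5, 1.0, 1.0, 1.0, 5.0)         # rough initial guess
popt, _ = curve_fit(relaxation, t, force, p0=p0)
print("F_inf, A1, tau1, A2, tau2 =", np.round(popt, 2))
```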
Pathway analysis with next-generation sequencing data.
Zhao, Jinying; Zhu, Yun; Boerwinkle, Eric; Xiong, Momiao
2015-04-01
Although pathway analysis methods have been developed and successfully applied to association studies of common variants, statistical methods for pathway-based association analysis of rare variants have not been well developed. Many investigators have observed highly inflated false-positive rates and low power in pathway-based tests of association of rare variants. The inflated false-positive rates and low true-positive rates of the current methods are mainly due to their lack of ability to account for gametic phase disequilibrium. To overcome these serious limitations, we develop a novel statistic that is based on smoothed functional principal component analysis (SFPCA) for pathway association tests with next-generation sequencing data. The developed statistic has the ability to capture position-level variant information and account for gametic phase disequilibrium. By intensive simulations, we demonstrate that the SFPCA-based statistic for testing pathway association with either rare or common or both rare and common variants has the correct type 1 error rates. The power of the SFPCA-based statistic and 22 additional existing statistics is also evaluated. We found that the SFPCA-based statistic has much higher power than the other existing statistics in all the scenarios considered. To further evaluate its performance, the SFPCA-based statistic is applied to pathway analysis of exome sequencing data in the early-onset myocardial infarction (EOMI) project. We identify three pathways significantly associated with EOMI after the Bonferroni correction. In addition, our preliminary results show that the SFPCA-based statistic yields much smaller P-values for identifying pathway associations than other existing methods.
Accuracy of the VITEK 2 System To Detect Glycopeptide Resistance in Enterococci
van den Braak, Nicole; Goessens, Wil; van Belkum, Alex; Verbrugh, Henri A.; Endtz, Hubert P.
2001-01-01
We evaluated the accuracy of the VITEK 2 fully automated system to detect and identify glycopeptide-resistant enterococci (GRE) compared to a reference agar dilution method. The sensitivity of vancomycin susceptibility testing with VITEK 2 for the detection of vanA, vanB, and vanC1 strains was 100%. The sensitivity of vancomycin susceptibility testing of vanC2 strains was 77%. The sensitivity of teicoplanin susceptibility testing of vanA strains was 90%. Of 80 vanC enterococci, 78 (98%) were correctly identified by VITEK 2 as Enterococcus gallinarum/Enterococcus casseliflavus. Since the identification and susceptibility data are produced within 3 and 8 h, respectively, VITEK 2 appears to be a fast and reliable method for detection of GRE in microbiology laboratories. PMID:11136798
Bayesian Statistical Inference in Ion-Channel Models with Exact Missed Event Correction.
Epstein, Michael; Calderhead, Ben; Girolami, Mark A; Sivilotti, Lucia G
2016-07-26
The stochastic behavior of single ion channels is most often described as an aggregated continuous-time Markov process with discrete states. For ligand-gated channels each state can represent a different conformation of the channel protein or a different number of bound ligands. Single-channel recordings show only whether the channel is open or shut: states of equal conductance are aggregated, so transitions between them have to be inferred indirectly. The requirement to filter noise from the raw signal further complicates the modeling process, as it limits the time resolution of the data. The consequence of the reduced bandwidth is that openings or shuttings that are shorter than the resolution cannot be observed; these are known as missed events. Postulated models fitted using filtered data must therefore explicitly account for missed events to avoid bias in the estimation of rate parameters and therefore assess parameter identifiability accurately. In this article, we present the first, to our knowledge, Bayesian modeling of ion-channels with exact missed events correction. Bayesian analysis represents uncertain knowledge of the true value of model parameters by considering these parameters as random variables. This allows us to gain a full appreciation of parameter identifiability and uncertainty when estimating values for model parameters. However, Bayesian inference is particularly challenging in this context as the correction for missed events increases the computational complexity of the model likelihood. Nonetheless, we successfully implemented a two-step Markov chain Monte Carlo method that we called "BICME", which performs Bayesian inference in models of realistic complexity. The method is demonstrated on synthetic and real single-channel data from muscle nicotinic acetylcholine channels. We show that parameter uncertainty can be characterized more accurately than with maximum-likelihood methods. Our code for performing inference in these ion channel models is publicly available. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Automatic identification of inertial sensor placement on human body segments during walking.
Weenk, Dirk; van Beijnum, Bert-Jan F; Baten, Chris T M; Hermens, Hermie J; Veltink, Peter H
2013-03-21
Current inertial motion capture systems are rarely used in biomedical applications. The attachment and connection of the sensors with cables is often a complex and time-consuming task. Moreover, it is prone to errors, because each sensor has to be attached to a predefined body segment. By using wireless inertial sensors and automatic identification of their positions on the human body, the complexity of the set-up can be reduced and incorrect attachments are avoided. We present a novel method for the automatic identification of inertial sensors on human body segments during walking. This method allows the user to place (wireless) inertial sensors on arbitrary body segments. Next, the user walks for just a few seconds and the segment to which each sensor is attached is identified automatically. Walking data were recorded from ten healthy subjects using an Xsens MVN Biomech system with a full-body configuration (17 inertial sensors). Subjects were asked to walk for about 6 seconds at normal walking speed (about 5 km/h). After rotating the sensor data to a global coordinate frame with the x-axis in the walking direction, the y-axis pointing left and the z-axis vertical, RMS, mean, and correlation coefficient features were extracted from the x-, y- and z-components and magnitudes of the accelerations, angular velocities and angular accelerations. As a classifier, a decision tree based on the C4.5 algorithm was developed using Weka (Waikato Environment for Knowledge Analysis). After testing the algorithm with 10-fold cross-validation using 31 walking trials (involving 527 sensors), 514 sensors were correctly classified (97.5%). When a decision tree for a lower-body-plus-trunk configuration (8 inertial sensors) was trained and tested using 10-fold cross-validation, 100% of the sensors were correctly identified. This decision tree was also tested on walking trials of 7 patients (17 walking trials) after anterior cruciate ligament reconstruction, which also resulted in 100% correct identification, illustrating the robustness of the method.
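A sketch of the feature-and-classifier pipeline described above, with scikit-learn's CART tree standing in for Weka's C4.5. The features below are a subset of the abstract's set (mean, RMS, and pairwise correlations of components and magnitude); the tree depth and data handling are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def sensor_features(acc, gyr):
    """acc, gyr: (n_samples, 3) signals already rotated to the global
    frame (x = walking direction, y = left, z = vertical)."""
    feats = []
    for sig in (acc, gyr):
        mag = np.linalg.norm(sig, axis=1)
        data = np.column_stack([sig, mag])
        feats += list(data.mean(axis=0))                  # mean features
        feats += list(np.sqrt((data ** 2).mean(axis=0)))  # RMS features
        c = np.corrcoef(sig.T)                            # x/y/z correlations
        feats += [c[0, 1], c[0, 2], c[1, 2]]
    return np.array(feats)

# X: one feature row per sensor, y: body-segment label per sensor.
clf = DecisionTreeClassifier(max_depth=10, random_state=0)
# print(cross_val_score(clf, X, y, cv=10).mean())   # 10-fold CV accuracy
```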
NASA Astrophysics Data System (ADS)
Bhardwaj, Alok; Ziegler, Alan D.; Wasson, Robert J.; Chow, Winston; Sharma, Mukat L.
2017-04-01
Extreme monsoon rainfall is the primary cause of floods and of secondary hazards such as landslides in the Indian Himalaya; understanding extreme monsoon rainfall is therefore required for studying these natural hazards. In this work, we study the characteristics of extreme monsoon rainfall, including its intensity and frequency, in the Garhwal Himalaya in India, with a focus on the Mandakini River Catchment, the site of a devastating flood and multiple large landslides in 2013. We used two long-term gridded rainfall data sets: the Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE) product, with daily rainfall data from 1951 to 2007, and the India Meteorological Department (IMD) product, with daily rainfall data from 1901 to 2013. The Mann-Kendall test and Sen's slope estimator are used to identify the statistical significance and magnitude, respectively, of trends in the intensity and frequency of extreme monsoon rainfall, at a significance level of 0.05. Autocorrelation in the extreme monsoon rainfall time series is identified and reduced using four methods: pre-whitening, trend-free pre-whitening, variance correction, and block bootstrap. We define the extreme monsoon rainfall threshold as the 99th percentile of the rainfall time series; any rainfall depth greater than the 99th percentile is considered extreme. With the IMD data set, significant increasing trends in the intensity and frequency of extreme rainfall, with slope magnitudes of 0.55 and 0.02 respectively, were obtained in the north of the Mandakini Catchment, as identified by all four methods. A significant increasing trend in intensity with a slope magnitude of 0.3 is found in the middle of the catchment, as identified by all methods except block bootstrap. In the south of the catchment, a significant increasing trend in intensity was obtained, with a slope magnitude of 0.86 for the pre-whitening method and 0.28 for the trend-free pre-whitening and variance correction methods. Further, an increasing trend in frequency with a slope magnitude of 0.01 was identified in the south of the catchment by all methods except block bootstrap. With the APHRODITE data set, we obtained a significant increasing trend in intensity with a slope magnitude of 1.27 in the middle of the catchment, as identified by all four methods. Collectively, both data sets show signals of increasing intensity, and the IMD data show increasing frequency, in the Mandakini Catchment. The increasing occurrence of extreme events identified here is becoming more disastrous because of the rising human population and expanding infrastructure in the Mandakini Catchment; the 2013 flood caused by extreme rainfall, for example, was catastrophic in terms of loss of human and animal lives and destruction of the local economy. We believe our results will improve understanding of extreme rainfall events in the Mandakini Catchment and in the Indian Himalaya.
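Both trend statistics named above have compact forms. A minimal NumPy/SciPy sketch, ignoring the tie correction in the Mann-Kendall variance (which real rainfall series may need), is:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test: statistic S, z-score, two-sided p."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.sign(x[None, :] - x[:, None])[np.triu_indices(n, 1)].sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # no-ties variance
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z, 2 * norm.sf(abs(z))

def sens_slope(x):
    """Sen's slope estimator: median of all pairwise slopes."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x), 1)
    return np.median((x[j] - x[i]) / (j - i))

# Extreme threshold as in the study: the 99th percentile of the series.
# thr = np.percentile(daily_rain, 99); extremes = daily_rain[daily_rain > thr]
```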
WE-AB-207A-07: A Planning CT-Guided Scatter Artifact Correction Method for CBCT Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, X; Liu, T; Dong, X
Purpose: Cone beam computed tomography (CBCT) imaging is in increasing demand for high-performance image-guided radiotherapy, such as online tumor delineation and dose calculation. However, current CBCT imaging suffers from severe scatter artifacts, and its clinical application is therefore limited to patient setup based mainly on bony structures. This study's purpose is to develop a CBCT artifact correction method. Methods: The proposed scatter correction method utilizes the planning CT to improve CBCT image quality. First, an image registration is used to match the planning CT with the CBCT to reduce the geometric difference between the two images. Then, the planning CT-based prior information is entered into a Bayesian deconvolution framework to iteratively perform a scatter artifact correction for the CBCT images. This technique was evaluated using Catphan phantoms with multiple inserts. Contrast-to-noise ratios (CNR), signal-to-noise ratios (SNR), and the image spatial nonuniformity (ISN) in selected volumes of interest (VOIs) were calculated to assess the proposed correction method. Results: After scatter correction, the CNR increased by a factor of 1.96, 3.22, 3.20, 3.46, 3.44, 1.97 and 1.65, and the SNR increased by a factor of 1.05, 2.09, 1.71, 3.95, 2.52, 1.54 and 1.84 for the Air, PMP, LDPE, Polystyrene, Acrylic, Delrin and Teflon inserts, respectively. The ISN decreased from 21.1% to 4.7% in the corrected images. All values of CNR, SNR and ISN in the corrected CBCT images were much closer to those in the planning CT images. The results demonstrated that the proposed method reduces the relevant artifacts and recovers CT numbers. Conclusion: We have developed a novel CBCT artifact correction method based on the planning CT image and demonstrated that the proposed CT-guided correction method can significantly reduce scatter artifacts and improve image quality. This method has great potential to correct CBCT images, allowing their use in adaptive radiotherapy.
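The evaluation metrics are straightforward once the VOIs are selected. The abstract does not spell out the formulas, so the definitions below are the common textbook forms and should be read as assumptions:

```python
import numpy as np

def cnr(img, voi, bg):
    """Contrast-to-noise ratio between an insert VOI and a background
    VOI (boolean masks or index arrays)."""
    return abs(img[voi].mean() - img[bg].mean()) / img[bg].std()

def snr(img, voi):
    """Signal-to-noise ratio within a VOI."""
    return img[voi].mean() / img[voi].std()

def isn(img, vois):
    """Image spatial nonuniformity across several nominally uniform VOIs."""
    means = np.array([img[v].mean() for v in vois])
    return (means.max() - means.min()) / means.mean()
```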
Ma, Irene W Y; Caplin, Joshua D; Azad, Aftab; Wilson, Christina; Fifer, Michael A; Bagchi, Aranya; Liteplo, Andrew S; Noble, Vicki E
2017-12-01
Non-invasive measures that can accurately estimate cardiac output may help identify volume-responsive patients. This study compares two non-invasive measures (corrected carotid flow time and carotid blood flow) and their correlations with invasive reference measurements of cardiac output. Consenting adult patients (n = 51) at the Massachusetts General Hospital cardiac catheterization laboratory undergoing right heart catheterization between February and April 2016 were included. Carotid ultrasound images were obtained concurrently with cardiac output measurements, obtained by the thermodilution method in the absence of severe tricuspid regurgitation and by the Fick oxygen method otherwise. Corrected carotid flow time was calculated as systole time/√(cycle time). Carotid blood flow was calculated as π × (carotid diameter)²/4 × velocity time integral × heart rate. Measurements were obtained using a single carotid waveform and an average of three carotid waveforms for both measures. Single-waveform measurements of corrected flow time did not correlate with cardiac output (ρ = 0.25, 95% CI -0.03 to 0.49, p = 0.08), but an average of three waveforms correlated significantly, although weakly (ρ = 0.29, 95% CI 0.02-0.53, p = 0.046). Carotid blood flow measurements correlated moderately with cardiac output regardless of whether a single waveform or an average of three waveforms was used: ρ = 0.44, 95% CI 0.18-0.63, p = 0.004, and ρ = 0.41, 95% CI 0.16-0.62, p = 0.004, respectively. Carotid blood flow may be a better marker of cardiac output and less subject to measurement issues than corrected carotid flow time.
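Both indices are defined explicitly in the abstract and translate directly into code; only the unit conventions in this helper are assumptions:

```python
import math

def corrected_flow_time(systole_s, cycle_s):
    """Corrected carotid flow time = systole time / sqrt(cycle time)."""
    return systole_s / math.sqrt(cycle_s)

def carotid_blood_flow(diameter_cm, vti_cm, heart_rate_bpm):
    """Carotid blood flow = pi * d^2 / 4 * VTI * HR.
    With cm inputs, cm^2 * cm = mL per beat, so the result is mL/min."""
    area = math.pi * diameter_cm ** 2 / 4.0       # vessel cross-section
    return area * vti_cm * heart_rate_bpm
```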
InSAR Tropospheric Correction Methods: A Statistical Comparison over Different Regions
NASA Astrophysics Data System (ADS)
Bekaert, D. P.; Walters, R. J.; Wright, T. J.; Hooper, A. J.; Parker, D. J.
2015-12-01
Observing small-magnitude surface displacements through InSAR is highly challenging and requires advanced correction techniques to reduce noise. In fact, one of the largest obstacles facing the InSAR community is tropospheric noise correction. Spatial and temporal variations in temperature, pressure, and relative humidity result in a spatially variable InSAR tropospheric signal, which masks smaller surface displacements due to tectonic or volcanic deformation. Correction methods applied today include those relying on weather model data, GNSS and/or spectrometer data. Unfortunately, these methods are often limited by the spatial and temporal resolution of the auxiliary data. Alternatively, a correction can be estimated from the high-resolution interferometric phase itself by assuming a linear or a power-law relationship between phase and topography. For these methods, the challenge lies in separating deformation from tropospheric signals. We present results of a statistical comparison of state-of-the-art tropospheric corrections estimated from spectrometer products (MERIS and MODIS), a low and a high spatial-resolution weather model (ERA-I and WRF), and the conventional linear and power-law empirical methods. We evaluate the correction capability over Southern Mexico, Italy, and El Hierro, and investigate the impact of increasing cloud cover on the accuracy of the tropospheric delay estimation. We find that each method has its strengths and weaknesses, and suggest that further developments should aim to combine different correction methods. All the presented methods are included in our new open-source software package TRAIN (Toolbox for Reducing Atmospheric InSAR Noise), which is available to the community. Reference: Bekaert, D., R. Walters, T. Wright, A. Hooper, and D. Parker (in review), Statistical comparison of InSAR tropospheric correction techniques, Remote Sensing of Environment.
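The conventional linear empirical correction mentioned above amounts to regressing unwrapped phase against elevation and removing the fitted ramp; the sketch below assumes a mask of non-deforming pixels is available so tectonic signal does not leak into the fit.

```python
import numpy as np

def linear_tropo_correction(phase, dem, mask=None):
    """Remove a linear phase/topography trend from unwrapped phase.

    phase, dem : 1-D arrays (one value per pixel)
    mask       : optional boolean array selecting non-deforming pixels
    """
    sel = np.ones(phase.shape, bool) if mask is None else mask
    k, c = np.polyfit(dem[sel], phase[sel], 1)    # phase ~ k*h + c
    return phase - (k * dem + c)                  # corrected phase
```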
Vertebral derotation in adolescent idiopathic scoliosis causes hypokyphosis of the thoracic spine
2012-01-01
Background The purpose of this study was to test the hypothesis that direct vertebral derotation by pedicle screws (PS) causes hypokyphosis of the thoracic spine in adolescent idiopathic scoliosis (AIS) patients, using computer simulation. Methods Twenty AIS patients with Lenke type 1 or 2 who underwent posterior correction surgeries using PS were included in this study. Simulated corrections of each patient’s scoliosis, as determined by the preoperative CT scan data, were performed on segmented 3D models of the whole spine. Two types of simulated extreme correction were performed: 1) complete coronal correction only (C method) and 2) complete coronal correction with complete derotation of vertebral bodies (C + D method). The kyphosis angle (T5-T12) and vertebral rotation angle at the apex were measured before and after the simulated corrections. Results The mean kyphosis angle after the C + D method was significantly smaller than that after the C method (2.7 ± 10.0° vs. 15.0 ± 7.1°, p < 0.01). The mean preoperative apical rotation angle of 15.2 ± 5.5° was completely corrected after the C + D method (0°) and was unchanged after the C method (17.6 ± 4.2°). Conclusions In the 3D simulation study, kyphosis was reduced after complete correction of the coronal and rotational deformity, but it was maintained after the coronal-only correction. These results proved the hypothesis that the vertebral derotation obtained by PS causes hypokyphosis of the thoracic spine. PMID:22691717
Du, Pan; Kibbe, Warren A; Lin, Simon M
2006-09-01
A major problem for current peak detection algorithms is that noise in mass spectrometry (MS) spectra gives rise to a high rate of false positives. The false positive rate is especially problematic in detecting peaks with low amplitudes. Usually, various baseline correction algorithms and smoothing methods are applied before attempting peak detection. This approach is very sensitive to the amount of smoothing and the aggressiveness of the baseline correction, which make peak detection results inconsistent between runs, instrumentation and analysis methods. Most peak detection algorithms simply identify peaks based on amplitude, ignoring the additional information present in the shape of the peaks in a spectrum. In our experience, 'true' peaks have characteristic shapes, and a shape-matching function that provides a 'goodness of fit' coefficient should give a more robust peak identification method. Based on these observations, a continuous wavelet transform (CWT)-based peak detection algorithm has been devised that identifies peaks with different scales and amplitudes. Transforming the spectrum into wavelet space simplifies the pattern-matching problem and additionally provides a powerful technique for identifying and separating the signal from spike noise and colored noise. This transformation, with the additional information provided by the 2D CWT coefficients, can greatly enhance the effective signal-to-noise ratio. Furthermore, with this technique no baseline removal or peak smoothing preprocessing steps are required before peak detection, which improves the robustness of peak detection under a variety of conditions. The algorithm was evaluated with SELDI-TOF spectra with known polypeptide positions, and comparisons with two other popular algorithms were performed. The results show that the CWT-based algorithm can identify both strong and weak peaks while keeping the false positive rate low. The algorithm is implemented in R and will be included as an open source module in the Bioconductor project.
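The authors' implementation is an R/Bioconductor module; SciPy ships a similar CWT-based detector that illustrates the idea without baseline removal or smoothing. The width range, the SNR threshold, and the input file are assumptions:

```python
import numpy as np
from scipy.signal import find_peaks_cwt

spectrum = np.loadtxt("spectrum.txt")   # hypothetical 1-D MS intensities
widths = np.arange(1, 40)               # peak scales (in samples) to search
# Ridge lines in CWT space separate real peaks from spike/colored noise.
peak_idx = find_peaks_cwt(spectrum, widths, min_snr=3)
```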
Harmonic source wavefront aberration correction for ultrasound imaging
Dianis, Scott W.; von Ramm, Olaf T.
2011-01-01
A method is proposed which uses a lower-frequency transmit to create a known harmonic acoustical source in tissue suitable for wavefront correction, without a priori assumptions about the target or requiring a transponder. The measurement and imaging steps of this method were implemented on the Duke phased array system with a two-dimensional (2-D) array. The method was tested with multiple electronic aberrators [0.39π to 1.16π radians root-mean-square (rms) at 4.17 MHz] and with a physical aberrator (0.17π radians rms at 4.17 MHz) in a variety of imaging situations. Corrections were quantified in terms of peak beam amplitude compared to the unaberrated case, with restoration of between 0.6 and 36.6 dB of peak amplitude with a single correction. Standard phantom images before and after correction were obtained and showed both visible improvement and a 14 dB contrast improvement after correction. This method, when combined with previous phase correction methods, may be an important step toward improved clinical images. PMID:21303031
NASA Astrophysics Data System (ADS)
Kokka, Alexander; Pulli, Tomi; Poikonen, Tuomas; Askola, Janne; Ikonen, Erkki
2017-08-01
This paper presents a fisheye camera method for determining spatial non-uniformity corrections in luminous flux measurements with integrating spheres. Using a fisheye camera installed into a port of an integrating sphere, the relative angular intensity distribution of the lamp under test is determined. This angular distribution is used for calculating the spatial non-uniformity correction for the lamp when combined with the spatial responsivity data of the sphere. The method was validated by comparing it to a traditional goniophotometric approach when determining spatial correction factors for 13 LED lamps with different angular spreads. The deviations between the spatial correction factors obtained using the two methods ranged from -0.15% to 0.15%. The mean magnitude of the deviations was 0.06%. For a typical LED lamp, the expanded uncertainty (k = 2) for the spatial non-uniformity correction factor was evaluated to be 0.28%. The fisheye camera method removes the need for goniophotometric measurements in determining spatial non-uniformity corrections, thus resulting in considerable system simplification. Generally, no permanent modifications to existing integrating spheres are required.
Conomos, Matthew P; Miller, Michael B; Thornton, Timothy A
2015-05-01
Population structure inference with genetic data has been motivated by a variety of applications in population genetics and genetic association studies. Several approaches have been proposed for the identification of genetic ancestry differences in samples where study participants are assumed to be unrelated, including principal components analysis (PCA), multidimensional scaling (MDS), and model-based methods for proportional ancestry estimation. Many genetic studies, however, include individuals with some degree of relatedness, and existing methods for inferring genetic ancestry fail in related samples. We present a method, PC-AiR, for robust population structure inference in the presence of known or cryptic relatedness. PC-AiR utilizes genome-screen data and an efficient algorithm to identify a diverse subset of unrelated individuals that is representative of all ancestries in the sample. The PC-AiR method directly performs PCA on the identified ancestry representative subset and then predicts components of variation for all remaining individuals based on genetic similarities. In simulation studies and in applications to real data from Phase III of the HapMap Project, we demonstrate that PC-AiR provides a substantial improvement over existing approaches for population structure inference in related samples. We also demonstrate significant efficiency gains, where a single axis of variation from PC-AiR provides better prediction of ancestry in a variety of structure settings than using 10 (or more) components of variation from widely used PCA and MDS approaches. Finally, we illustrate that PC-AiR can provide improved population stratification correction over existing methods in genetic association studies with population structure and relatedness. © 2015 WILEY PERIODICALS, INC.
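The core idea — run PCA only on an unrelated, ancestry-representative subset, then predict scores for all remaining individuals from the SNP loadings — fits in a few lines. This is a sketch, not the released PC-AiR software; the standardization and the choice of the unrelated subset are assumed given:

```python
import numpy as np

def pc_air_like(G, unrelated, n_pcs=10):
    """G : (n_samples, n_snps) standardized genotype matrix.
    unrelated : boolean mask for the unrelated reference subset."""
    mu = G[unrelated].mean(axis=0)
    # SNP loadings computed from the unrelated individuals only, so
    # family structure cannot masquerade as ancestry.
    _, _, Vt = np.linalg.svd(G[unrelated] - mu, full_matrices=False)
    # Project every sample (relatives included) onto those axes.
    return (G - mu) @ Vt[:n_pcs].T
```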
Ligand Electron Density Shape Recognition Using 3D Zernike Descriptors
NASA Astrophysics Data System (ADS)
Gunasekaran, Prasad; Grandison, Scott; Cowtan, Kevin; Mak, Lora; Lawson, David M.; Morris, Richard J.
We present a novel approach to crystallographic ligand density interpretation based on Zernike shape descriptors. Electron density for a bound ligand is expanded in an orthogonal polynomial series (3D Zernike polynomials) and the coefficients from this expansion are employed to construct rotation-invariant descriptors. These descriptors can be compared highly efficiently against large databases of descriptors computed from other molecules. In this manuscript we describe this process and show initial results from an electron density interpretation study on a dataset containing over a hundred OMIT maps. We could identify the correct ligand as the first hit in about 30% of the cases and within the top five in a further 30% of the cases, with an 80% probability of finding the correct ligand within the top ten matches. In all but a few examples, the top hit was highly similar to the correct ligand in both shape and chemistry. Further extensions and intrinsic limitations of the method are discussed.
Pregnancy and Parenting Support for Incarcerated Women: Lessons Learned
Shlafer, Rebecca J.; Gerrity, Erica; Duwe, Grant
2017-01-01
Background There are more than 200,000 incarcerated women in U.S. prisons and jails, and it is estimated that 6% to 10% are pregnant. Pregnant incarcerated women experience complex risks that can compromise their health and the health of their offspring. Objectives Identify lessons learned from a community–university pilot study of a prison-based pregnancy and parenting support program. Methods A community–university–corrections partnership was formed to provide education and support to pregnant incarcerated women through a prison-based pilot program. Evaluation data assessed women's physical and mental health concerns and satisfaction with the program. Between October 2011 and December 2012, 48 women participated. Lessons Learned We learned that providing services for pregnant incarcerated women requires an effective partnership with the Department of Corrections, adaptations to traditional community-based participatory research (CBPR) approaches, and resources that support both direct service and ongoing evaluation. Conclusions Effective services for pregnant incarcerated women can be provided through a successful community–university–corrections partnership. PMID:26548788
A novel method for correcting scanline-observational bias of discontinuity orientation
Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong
2016-01-01
Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249
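The paper's own correction operates on the cumulative probabilities of orientation via the derived solutions. For contrast, the classical Terzaghi weighting for the same scanline bias, which simply weights each discontinuity by 1/|cos δ| (δ being the angle between the scanline and the plane's normal), is easy to sketch; the geometric conventions and the weight cap below are assumptions:

```python
import numpy as np

def terzaghi_weights(dipdir_deg, dip_deg, trend_deg, plunge_deg, max_w=10.0):
    """Classical Terzaghi weights for scanline orientation bias."""
    dd, dip = np.radians(dipdir_deg), np.radians(dip_deg)
    tr, pl = np.radians(trend_deg), np.radians(plunge_deg)
    # Unit normal of each plane (sign immaterial: we take |cos delta|).
    n = np.stack([np.sin(dip) * np.sin(dd),
                  np.sin(dip) * np.cos(dd),
                  np.cos(dip)], axis=-1)
    # Unit vector along the scanline (x = East, y = North, z = up).
    s = np.array([np.cos(pl) * np.sin(tr),
                  np.cos(pl) * np.cos(tr),
                  -np.sin(pl)])
    cos_delta = np.abs(n @ s)
    return np.minimum(1.0 / np.maximum(cos_delta, 1e-6), max_w)
```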
Continental-scale Validation of MODIS-based and LEDAPS Landsat ETM+ Atmospheric Correction Methods
NASA Technical Reports Server (NTRS)
Ju, Junchang; Roy, David P.; Vermote, Eric; Masek, Jeffrey; Kovalskyy, Valeriy
2012-01-01
The potential of Landsat data processing to provide systematic continental-scale products has been demonstrated by several projects, including the NASA Web-enabled Landsat Data (WELD) project. The recent free availability of Landsat data increases the need for robust and efficient atmospheric correction algorithms applicable to large-volume Landsat data sets. This paper compares the accuracy of two Landsat atmospheric correction methods: a MODIS-based method and the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) method. Both methods are based on the 6SV radiative transfer code but have different atmospheric characterization approaches. The MODIS-based method uses the MODIS Terra-derived dynamic aerosol type, aerosol optical thickness, and water vapor to atmospherically correct ETM+ acquisitions in each coincident orbit. The LEDAPS method uses aerosol characterizations derived independently from each Landsat acquisition, assumes a fixed continental aerosol type, and uses ancillary water vapor. Validation results are presented comparing ETM+ atmospherically corrected data generated using these two methods with AERONET-corrected ETM+ data for 95 subsets of 10 km × 10 km (a total of nearly 8 million 30 m pixels) located across the conterminous United States. The results indicate that the MODIS-based method has better accuracy than the LEDAPS method for the ETM+ red and longer-wavelength bands.
Gadermayr, M.; Liedlgruber, M.; Uhl, A.; Vécsei, A.
2013-01-01
Due to the optics used in endoscopes, a typical degradation observed in endoscopic images are barrel-type distortions. In this work we investigate the impact of methods used to correct such distortions in images on the classification accuracy in the context of automated celiac disease classification. For this purpose we compare various different distortion correction methods and apply them to endoscopic images, which are subsequently classified. Since the interpolation used in such methods is also assumed to have an influence on the resulting classification accuracies, we also investigate different interpolation methods and their impact on the classification performance. In order to be able to make solid statements about the benefit of distortion correction we use various different feature extraction methods used to obtain features for the classification. Our experiments show that it is not possible to make a clear statement about the usefulness of distortion correction methods in the context of an automated diagnosis of celiac disease. This is mainly due to the fact that an eventual benefit of distortion correction highly depends on the feature extraction method used for the classification. PMID:23981585
Wei, Yinsheng; Guo, Rujiang; Xu, Rongqing; Tang, Xiudong
2014-01-01
Large-amplitude ionospheric phase perturbation broadens the sea clutter's Bragg peaks until they overlap each other, and traditional decontamination methods based on filtering around the Bragg peaks perform poorly, which greatly limits the detection performance of HF skywave radars. For ionospheric phase perturbation with large amplitude, this paper proposes a cascaded approach based on an improved S-method to correct the ionospheric phase contamination. The approach consists of two correction steps. In the first step, a time-frequency distribution method based on the improved S-method is adopted and an optimal detection method is designed to obtain a coarse estimate of the ionospheric modulation from the time-frequency distribution. In the second step, the phase gradient algorithm (PGA) is exploited to eliminate the residual contamination. Finally, measured data are used to verify the effectiveness of the method. Simulation results show that the time-frequency resolution of this method is high and is not affected by cross-term interference, that ionospheric phase perturbation with large amplitude can be corrected at low signal-to-noise ratio (SNR), and that the cascaded correction method performs well. PMID:24578656
RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.
Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang
2017-01-03
The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias across the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method in several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website ( https://www.bioconductor.org/packages/release/bioc/html/ENmix.html ). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.
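In the spirit of RELIC (the production code lives in ENmix), dye bias can be corrected by regressing the log intensities of the paired internal control probes, which should agree between channels, and mapping one channel onto the other's scale. The direction of the mapping below is an assumption:

```python
import numpy as np

def relic_like(red, green, ctrl_red, ctrl_green):
    """Fit log(green-control) ~ log(red-control) on paired internal
    controls, then rescale the red channel onto the green scale."""
    b, a = np.polyfit(np.log(ctrl_red), np.log(ctrl_green), 1)
    red_corrected = np.exp(a + b * np.log(red))
    return red_corrected, green
```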
Becker, Pierre T; de Bel, Annelies; Martiny, Delphine; Ranque, Stéphane; Piarroux, Renaud; Cassagne, Carole; Detandt, Monique; Hendrickx, Marijke
2014-11-01
The identification of filamentous fungi by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) relies mainly on a robust and extensive database of reference spectra. To this end, a large in-house library containing 760 strains and representing 472 species was built and evaluated on 390 clinical isolates by comparing MALDI-TOF MS with the classical identification method based on morphological observations. The use of MALDI-TOF MS resulted in the correct identification of 95.4% of the isolates at species level, without considering LogScore values. Taking into account Bruker's cutoff value for reliability (LogScore >1.70), 85.6% of the isolates were correctly identified. For a number of isolates, microscopic identification was limited to the genus, resulting in only 61.5% of the isolates correctly identified at species level, while correctness reached 94.6% at genus level. Using this extended in-house database, MALDI-TOF MS thus appears superior to morphology for obtaining a robust and accurate identification of filamentous fungi. A continuous extension of the library is however necessary to further improve its reliability. Indeed, 15 isolates were still not represented, while an additional three isolates were not recognized, probably because of a lack of intraspecific variability of the corresponding species in the database. © The Author 2014. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
An Automated Baseline Correction Method Based on Iterative Morphological Operations.
Chen, Yunliang; Dai, Liankui
2018-05-01
Raman spectra usually suffer from baseline drift caused by fluorescence or other effects. Baseline correction is therefore a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method first adaptively determines the structuring element and then gradually removes the spectral peaks during iteration to obtain an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible enough to handle different kinds of baselines in various practical situations. Comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully be applied to baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
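A minimal SciPy sketch of an iterative morphological baseline: the structuring element grows each iteration, so peaks of increasing width are removed while the slowly varying baseline survives. The published method determines the element adaptively; the growth schedule and stopping tolerance here are assumptions.

```python
import numpy as np
from scipy.ndimage import grey_opening

def morphological_baseline(y, max_size=200, step=10, tol=1e-6):
    """Estimate a baseline by repeated grey openings with a growing
    structuring element (opening is anti-extensive, so the estimate
    can only move down toward the baseline)."""
    baseline = np.asarray(y, dtype=float)
    for size in range(step, max_size + 1, step):
        opened = grey_opening(baseline, size=size)
        if np.max(np.abs(baseline - opened)) < tol:
            break                         # converged: peaks removed
        baseline = opened
    return baseline

# corrected = spectrum - morphological_baseline(spectrum)
```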
Identifying pathogenic processes by integrating microarray data with prior knowledge
2014-01-01
Background It is of great importance to identify molecular processes and pathways that are involved in disease etiology. Although there has been an extensive use of various high-throughput methods for this task, pathogenic pathways are still not completely understood. Often the set of genes or proteins identified as altered in genome-wide screens show a poor overlap with canonical disease pathways. These findings are difficult to interpret, yet crucial in order to improve the understanding of the molecular processes underlying the disease progression. We present a novel method for identifying groups of connected molecules from a set of differentially expressed genes. These groups represent functional modules sharing common cellular function and involve signaling and regulatory events. Specifically, our method makes use of Bayesian statistics to identify groups of co-regulated genes based on the microarray data, where external information about molecular interactions and connections are used as priors in the group assignments. Markov chain Monte Carlo sampling is used to search for the most reliable grouping. Results Simulation results showed that the method improved the ability of identifying correct groups compared to traditional clustering, especially for small sample sizes. Applied to a microarray heart failure dataset the method found one large cluster with several genes important for the structure of the extracellular matrix and a smaller group with many genes involved in carbohydrate metabolism. The method was also applied to a microarray dataset on melanoma cancer patients with or without metastasis, where the main cluster was dominated by genes related to keratinocyte differentiation. Conclusion Our method found clusters overlapping with known pathogenic processes, but also pointed to new connections extending beyond the classical pathways. PMID:24758699
Liles, Iyanna; Haddad, Lisa B; Lathrop, Eva; Hankin, Abigail
2016-05-01
Almost half of all pregnancies in the United States are unintended, and these pregnancies are associated with adverse outcomes. Many reproductive-age females seek care in the emergency department (ED), are at risk of pregnancy, and are amenable to contraceptive services in this setting. Through a pilot study, we sought to assess ED providers' current practices, attitudes, and knowledge of emergency contraception (EC) and nonemergency contraception (non-EC), as well as barriers to contraception initiation. ED physicians and associate providers in Georgia were e-mailed a link to an anonymous Internet questionnaire using state professional databases and contacts. The questionnaire included Likert scales and multiple-choice questions addressing the study objectives. Descriptive statistics were generated, as well as univariate analyses using χ² and Fisher exact tests. A total of 1232 providers were e-mailed, with 119 questionnaires completed. Participants were predominantly physicians (80%), men (59%), and individuals younger than 45 years (59%). Common practices were referrals (96%), EC prescriptions (77%), and non-EC prescriptions (40%). Common perceived barriers were low likelihood of follow-up (63%), risk of complications (58%), and adverse effects (51%). More than 70% of participants correctly identified the highly effective contraceptive methods, 3% identified the correct maximum EC initiation time, and 42% correctly recognized pregnancy as a higher risk than hormonal contraception use for pulmonary embolism. Most ED providers in this pilot study referred patients for contraception; however, there was no universal contraceptive counseling and management. Many ED providers in this study had an incorrect understanding of the efficacy, risks, and eligibility associated with contraceptive methods. This lack of understanding may affect patient access and be a barrier to patient care.
Marshall, Paul S; Hoelzle, James B; Heyerdahl, Danielle; Nelson, Nathaniel W
2016-10-01
[Correction Notice: An Erratum for this article was reported in Vol 28(10) of Psychological Assessment (see record 2016-22725-001). In the article, the penultimate sentence of the abstract should read “These results suggest that a significant percentage of those making a suspect effort will be diagnosed with ADHD using the most commonly employed assessment methods: an interview alone (71%); an interview and ADHD behavior rating scales combined (65%); and an interview, behavior rating scales, and most continuous performance tests combined (62%).” All versions of this article have been corrected.] This retrospective study examines how many adult patients would plausibly receive a diagnosis of attention-deficit/hyperactivity disorder (ADHD) if performance and symptom validity measures were not administered during neuropsychological evaluations. Five hundred fifty-four patients were extracted from an archival clinical dataset. A total of 102 were diagnosed with ADHD based on cognitive testing, behavior rating scales, effort testing, and clinical interview; 115 were identified as putting forth suspect effort in accordance with the Slick, Sherman, and Iverson (1999) criteria. From a clinical decision-making perspective, suspect effort and ADHD groups were nearly indistinguishable on ADHD behavior, executive function, and functional impairment rating scales, as well as on cognitive testing and key clinical interview questions. These results suggest that a significant percentage of those making a suspect effort will be diagnosed with ADHD using the most commonly employed assessment methods: an interview alone (71%); an interview and ADHD behavior rating scales combined (65%); and an interview, behavior rating scales, and most continuous performance tests combined (62%) [corrected]. This research makes clear that it is essential to evaluate task engagement and possible symptom amplification during clinical evaluations. PsycINFO Database Record (c) 2016 APA, all rights reserved
Intra, J; Sala, M R; Falbo, R; Cappellini, F; Brambilla, P
2016-12-01
Rapid and early identification of micro-organisms in blood has a key role in the diagnosis of a febrile patient, in particular in guiding the clinician to define the correct antibiotic therapy. This study presents a simple and very fast method with high performance for identifying bacteria by matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) after only 4 h of incubation. We used early bacterial growth on PolyViteX chocolate agar plates, inoculated with five drops of blood-broth medium deposited at the same point and spread with a sterile loop, followed by a direct transfer procedure onto MALDI-TOF MS target slides without additional modification. Ninety-nine percent of aerobic bacteria were correctly identified from 600 monomicrobial-positive blood cultures. This procedure yielded correct identification of fastidious pathogens, such as Streptococcus pneumoniae, Neisseria meningitidis and Haemophilus influenzae, that have complex nutritional and environmental requirements for growth. Compared to traditional pathogen identification from blood cultures, which takes over 24 h, the reliability, rapid performance and suitability of this protocol allowed more rapid administration of optimal antimicrobial treatment to patients. Bloodstream infections are serious conditions with a high mortality and morbidity rate. Rapid identification of pathogens and appropriate antimicrobial therapy have a key role in successful patient outcomes. In this work, we developed a rapid, simplified, accurate and efficient method, reaching 99% identification of aerobic bacteria from monomicrobial-positive blood cultures by using early growth on enriched medium, direct transfer to the target plate without additional procedures, matrix-assisted laser desorption ionization-time of flight mass spectrometry and the SARAMIS database. The application of this protocol allows appropriate antibiotic therapy to be initiated earlier. © 2016 The Society for Applied Microbiology.
Mixture of autoregressive modeling orders and its implication on single trial EEG classification
Atyabi, Adham; Shic, Frederick; Naples, Adam
2016-01-01
Autoregressive (AR) models are among the most commonly utilized feature types in electroencephalogram (EEG) studies, since they offer better resolution and smoother spectra and are applicable to short segments of data. Identifying the correct AR modeling order is an open challenge: lower model orders poorly represent the signal, while higher orders increase noise. Conventional methods for estimating the modeling order include the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond to the operator's thoughts more quickly and correctly. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resulting AR mixtures is assessed against several conventional approaches utilized by the community, including 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order estimation approaches (e.g., AIC, BIC and FPE), and 3) blind mixtures of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods on all datasets. PMID:28740331
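The three order-selection criteria named above, plus the AR spectrum used in an order mixture, can be sketched with a plain least-squares AR fit; the criterion formulas are the standard ones, while the fit method and frequency grid are assumptions:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit: coefficients and residual variance."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a, (x[p:] - X @ a).var()

def order_criteria(x, max_order=30):
    """AIC, BIC and FPE for AR orders 1..max_order."""
    n, out = len(x), {}
    for p in range(1, max_order + 1):
        _, s2 = fit_ar(x, p)
        out[p] = (n * np.log(s2) + 2 * p,             # AIC
                  n * np.log(s2) + p * np.log(n),     # BIC
                  s2 * (n + p + 1) / (n - p - 1))     # FPE
    return out

def ar_spectrum(a, s2, nfft=256):
    """Power spectrum of a fitted AR model on [0, 0.5] cycles/sample."""
    w = np.fft.rfftfreq(nfft)
    denom = 1 - sum(ak * np.exp(-2j * np.pi * w * (k + 1))
                    for k, ak in enumerate(a))
    return s2 / np.abs(denom) ** 2

# A blind "mixture of orders": average ar_spectrum over several orders.
```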
Hird, Sarah; Kubatko, Laura; Carstens, Bryan
2010-11-01
We describe a method for estimating species trees that relies on replicated subsampling of large data matrices. One application of this method is phylogeographic research, which has long depended on large datasets that sample intensively from the geographic range of the focal species; these datasets allow systematicists to identify cryptic diversity and understand how contemporary and historical landscape forces influence genetic diversity. However, analyzing any large dataset can be computationally difficult, particularly when newly developed methods for species tree estimation are used. Here we explore the use of replicated subsampling, a potential solution to the problem posed by large datasets, with both a simulation study and an empirical analysis. In the simulations, we sample different numbers of alleles and loci, estimate species trees using STEM, and compare the estimated to the actual species tree. Our results indicate that subsampling three alleles per species for eight loci nearly always results in an accurate species tree topology, even in cases where the species tree was characterized by extremely rapid divergence. Even more modest subsampling effort, for example one allele per species and two loci, was more likely than not (>50%) to identify the correct species tree topology, indicating that in nearly all cases, computing the majority-rule consensus tree from replicated subsampling provides a good estimate of topology. These results were supported by estimating the correct species tree topology and reasonable branch lengths for an empirical 10-locus great ape dataset. Copyright © 2010 Elsevier Inc. All rights reserved.
Surgical correction of pectus arcuatum
Ershova, Ksenia; Adamyan, Ruben
2016-01-01
Background Pectus arcuatum is a rare congenital chest wall deformity, and methods of surgical correction are debated. Methods Surgical correction of pectus arcuatum always includes one or more horizontal sternal osteotomies, resection of deformed rib cartilages and, finally, anterior chest wall stabilization. The study was approved by the institutional ethics committee, and informed consent was obtained from every patient. Results In this video we show our modification of pectus arcuatum correction with only a partial sternal osteotomy and further stabilization by vertical parallel titanium plates. Conclusions The reported method is a feasible option for surgical correction of pectus arcuatum. PMID:29078483
Zhao, Yong; Hong, Wen-Xue
2011-11-01
Fast, nondestructive and accurate identification of special-quality eggs is an urgent problem. This paper proposes a new feature extraction method based on symbolic entropy to identify near-infrared spectra of special-quality eggs. We selected normal eggs, free-range eggs, selenium-enriched eggs and zinc-enriched eggs as research objects and measured their near-infrared diffuse reflectance spectra in the range of 12 000-4 000 cm(-1). Raw spectra were symbolically represented with an aggregation approximation algorithm, and symbolic entropy was extracted as the feature vector. An error-correcting output codes multiclass support vector machine classifier was designed to identify the spectra. The symbolic entropy feature is robust to parameter changes, and the highest recognition rate reaches 100%. The results show that identification of special-quality eggs using near-infrared spectroscopy is feasible and that symbolic entropy can serve as a new feature extraction method for near-infrared spectra.
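A sketch of the feature-extraction step: symbolize the spectrum with a piecewise-aggregate-style approximation and take the Shannon entropy of the resulting symbol distribution. The segment count and alphabet size are assumptions, and the ECOC multiclass SVM stage is omitted:

```python
import numpy as np

def symbol_entropy(spectrum, n_segments=64, n_symbols=8):
    """Shannon entropy of a symbolized spectrum."""
    x = (spectrum - spectrum.mean()) / spectrum.std()   # z-score
    paa = np.array([s.mean() for s in np.array_split(x, n_segments)])
    # Quantile breakpoints map segment means onto n_symbols levels.
    bins = np.quantile(paa, np.linspace(0, 1, n_symbols + 1)[1:-1])
    symbols = np.digitize(paa, bins)
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()
```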
Guo, Song; Liu, Chunhua; Zhou, Peng; Li, Yanling
2016-01-01
Tyrosine sulfation is one of the ubiquitous protein posttranslational modifications, where some sulfate groups are added to the tyrosine residues. It plays significant roles in various physiological processes in eukaryotic cells. To explore the molecular mechanism of tyrosine sulfation, one of the prerequisites is to correctly identify possible protein tyrosine sulfation residues. In this paper, a novel method was presented to predict protein tyrosine sulfation residues from primary sequences. By means of informative feature construction and elaborate feature selection and parameter optimization scheme, the proposed predictor achieved promising results and outperformed many other state-of-the-art predictors. Using the optimal features subset, the proposed method achieved mean MCC of 94.41% on the benchmark dataset, and a MCC of 90.09% on the independent dataset. The experimental performance indicated that our new proposed method could be effective in identifying the important protein posttranslational modifications and the feature selection scheme would be powerful in protein functional residues prediction research fields.
Müller-Lutz, Anja; Ljimani, Alexandra; Stabinska, Julia; Zaiss, Moritz; Boos, Johannes; Wittsack, Hans-Jörg; Schleich, Christoph
2018-05-14
The study compares glycosaminoglycan chemical exchange saturation transfer (gagCEST) imaging of intervertebral discs corrected for solely B0 inhomogeneities or for both B0 and B1 inhomogeneities. Lumbar intervertebral discs of 20 volunteers were examined with T2-weighted and gagCEST imaging. Field inhomogeneity correction was performed with B0 correction only and with correction of both B0 and B1. GagCEST effects measured by the asymmetric magnetization transfer ratio (MTRasym) and the signal-to-noise ratio (SNR) were compared between both methods. Significantly higher MTRasym and SNR values were obtained in the nucleus pulposus using B0 and B1 correction compared with B0-only-corrected gagCEST. The gagCEST effect was significantly different in the nucleus pulposus compared with the annulus fibrosus for both methods. The combined B0 and B1 field inhomogeneity correction method leads to improved quality of gagCEST imaging in intervertebral discs compared with B0 correction alone.
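The B0 half of the correction can be sketched directly: shift each voxel's Z-spectrum by its measured field offset before evaluating MTRasym at the glycosaminoglycan offset (about 1 ppm is assumed below). The B1 correction the study also performs typically requires acquisitions at several saturation powers and is omitted here.

```python
import numpy as np

def mtr_asym_b0(offsets_ppm, z_spectrum, b0_shift_ppm, delta=1.0):
    """B0-corrected MTRasym at +/- delta ppm.

    offsets_ppm : saturation offsets, in increasing order (np.interp
                  requires ascending sample points)
    z_spectrum  : normalized signal S/S0 at each offset
    b0_shift_ppm: per-voxel B0 offset (e.g., from a field map)
    """
    centered = offsets_ppm - b0_shift_ppm       # re-center the spectrum
    s_pos = np.interp(+delta, centered, z_spectrum)
    s_neg = np.interp(-delta, centered, z_spectrum)
    return s_neg - s_pos                        # MTRasym(delta)
```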
NASA Technical Reports Server (NTRS)
Hammock, William R., Jr.; Cota, Phillip E., Jr.; Rosenbaum, Bernard J.; Barrett, Michael J.
1991-01-01
Standard leak detection methods at ambient temperature have been developed in order to prevent excessive leakage from the Space Shuttle liquid oxygen and liquid hydrogen Main Propulsion System. Unacceptable hydrogen leakage was encountered on the Columbia and Atlantis flight vehicles in the summer of 1990 after the standard leak check requirements had been satisfied. The leakage was only detectable when the fuel system was exposed to subcooled liquid hydrogen during External Tank loading operations. Special instrumentation and analytical tools were utilized during a series of propellant tanking tests in order to identify the sources of the hydrogen leakage. After the leaks were located and corrected, the physical characteristics of the leak sources were analyzed in an effort to understand how the discrepancies were introduced and why the leakage had evaded the standard leak detection methods. As a result of the post-leak analysis, corrective actions and leak detection improvements have been implemented in order to preclude a similar incident.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulz, T.; Remmele, T.; Korytov, M.
2014-01-21
Based on the evaluation of lattice parameter maps in aberration-corrected high-resolution transmission electron microscopy images, we propose a simple method that allows quantifying the composition and disorder of a semiconductor alloy at the unit cell scale with high accuracy. This is realized by considering, next to the out-of-plane, also the in-plane lattice parameter component, allowing the chemical composition to be separated from the strain field. Considering only the out-of-plane lattice parameter component not only yields large deviations from the true local alloy content but also carries the risk of identifying false ordering phenomena such as formations of chains or platelets. Our method is demonstrated on image simulations of relaxed supercells, as well as on experimental images of an In0.20Ga0.80N quantum well. In principle, our approach is applicable to all epitaxially strained compounds in the form of quantum wells, free-standing islands, quantum dots, or wires.
Grading of direct laryngoscopy. A survey of current practice.
Cohen, A M; Fleming, B G; Wace, J R
1994-06-01
One hundred and twenty anaesthetists (30 of each grade), from three separate regions, were interviewed as to how they recorded the appearance of laryngeal structures at direct laryngoscopy and about their knowledge of the commonly used numerical grading system. About two-thirds of anaesthetists surveyed (69.2%) used the numerical grading system, but of these, over half could not identify a 'grade 2' laryngoscopic appearance correctly. Of anaesthetists who did not use the numerical method, over half could not correctly state the difference between a 'grade 2' and a 'grade 3' laryngoscopic appearance. Over 40% of anaesthetists stated incorrectly that the grading should be made on the initial view, even when laryngeal pressure had been needed. Junior anaesthetists were more likely to use the numerical method of recording. The results show that there is unacceptable uncertainty and inaccuracy in the use of the numerical grading system by users as well as non-users, which makes the current routine clinical use of the numerical grading system unsatisfactory.
76 FR 44010 - Medicare Program; Hospice Wage Index for Fiscal Year 2012; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-22
.... 93.774, Medicare-- Supplementary Medical Insurance Program) Dated: July 15, 2011. Dawn L. Smalls... corrects technical errors that appeared in the notice of CMS ruling published in the Federal Register on... FR 26731), there were technical errors that are identified and corrected in the Correction of Errors...
La Scola, Bernard; Raoult, Didier
2009-11-25
With long delays observed between sampling and availability of results, the usefulness of blood cultures in the context of emergency infectious diseases has recently been questioned. Among methods that allow quicker bacterial identification from growing colonies, matrix-assisted laser desorption ionisation time-of-flight (MALDI-TOF) mass spectrometry was demonstrated to accurately identify bacteria routinely isolated in a clinical biology laboratory. In order to speed up the identification process, in the present work we attempted bacterial identification directly from blood culture bottles flagged as positive by the automated instrument. We prospectively analysed routine MALDI-TOF identification of bacteria detected in blood culture by two different protocols involving successive centrifugations and then lysis by trifluoroacetic acid or formic acid. Of the 562 blood culture broths detected as positive by the instrument and containing one bacterial species, 370 (66%) were correctly identified. Changing the protocol from trifluoroacetic acid to formic acid improved the identification of staphylococci, and overall correct identification increased from 59% to 76%. Lack of identification was observed mostly with viridans streptococci, and only one false positive was observed. In the 22 positive blood culture broths that contained two or more different species, only one of the species was identified in 18 samples, no species were identified in two samples, and false species identifications were obtained in two cases. The positive predictive value of bacterial identification using this procedure was 99.2%. MALDI-TOF MS is an efficient method for direct routine identification of bacterial isolates in blood culture, with the exception of polymicrobial samples and viridans streptococci. It may replace routine identification performed on colonies, provided the specificity for blood culture broths growing viridans streptococci improves in the near future.
Topographic correction realization based on the CBERS-02B image
NASA Astrophysics Data System (ADS)
Qin, Hui-ping; Yi, Wei-ning; Fang, Yong-hua
2011-08-01
The special topography of mountain terrain induces retrieval distortions within the same land cover class and in surface spectral lines. In order to improve the accuracy of research on topographic surface characteristics, many researchers have focused on topographic correction. Topographic correction methods can be statistical-empirical or physical models, among which methods based on digital elevation model (DEM) data are the most popular. Restricted by spatial resolution, previous models mostly corrected topographic effects based on Landsat TM images, whose 30-meter spatial resolution can easily be obtained from the internet or calculated from digital maps. Some researchers have also applied topographic correction to high spatial resolution images, such as QuickBird and Ikonos, but there is little research on the topographic correction of CBERS-02B images. In this study, mountain terrain in Liaoning was taken as the study area. The original 15-meter digital elevation model data were interpolated to 2.36 meters. The C correction, SCS+C correction, Minnaert correction and Ekstrand-r correction were executed to correct the topographic effect, and the corrected results were compared. The images corrected with the C correction, SCS+C correction, Minnaert correction and Ekstrand-r correction were compared, and scatter diagrams between image digital number and the cosine of the solar incidence angle with respect to the surface normal were produced. The mean value, standard variance, slope of the scatter diagram, and separation factor were statistically calculated. The analysed results show that shadow is weakened in the corrected images relative to the original images, the three-dimensional effect is removed, and the absolute slope of the fitted lines in the scatter diagrams is diminished. The Minnaert correction method gives the most effective result. These findings demonstrate that the established correction methods can be successfully adapted to CBERS-02B images. When high spatial resolution elevation data are hard to obtain, the DEM data can be interpolated step by step to approximate the corresponding spatial resolution.
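Two of the four corrections compared above have compact per-pixel forms. The sketch below implements the usual C and Minnaert formulas from the per-pixel cosine of the solar incidence angle (from the DEM slope and aspect) and the cosine of the solar zenith angle; the regression-based estimation of the c and k constants follows common practice rather than this paper's exact procedure:

```python
import numpy as np

def c_correction(L, cos_i, cos_sz):
    """C correction: L_corr = L * (cos(sz) + c) / (cos(i) + c),
    with c = b/m from the regression L = m*cos(i) + b."""
    m, b = np.polyfit(cos_i.ravel(), L.ravel(), 1)
    c = b / m
    return L * (cos_sz + c) / (cos_i + c)

def minnaert_correction(L, cos_i, cos_sz):
    """Minnaert: L_corr = L * (cos(sz) / cos(i))**k, with the constant
    k taken from the slope of ln(L) against ln(cos(i))."""
    valid = (cos_i > 0) & (L > 0)
    k, _ = np.polyfit(np.log(cos_i[valid]), np.log(L[valid]), 1)
    return L * (cos_sz / np.maximum(cos_i, 1e-3)) ** k
```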
Overcoming Sequence Misalignments with Weighted Structural Superposition
Khazanov, Nickolay A.; Damm-Ganamet, Kelly L.; Quang, Daniel X.; Carlson, Heather A.
2012-01-01
An appropriate structural superposition identifies similarities and differences between homologous proteins that are not evident from sequence alignments alone. We have coupled our Gaussian-weighted RMSD (wRMSD) tool with a sequence aligner and a seed extension (SE) algorithm to create a robust technique for overlaying structures and aligning sequences of homologous proteins (HwRMSD). HwRMSD overcomes errors in the initial sequence alignment that would normally propagate into a standard RMSD overlay. SE can generate a corrected sequence alignment from the improved structural superposition obtained by wRMSD. HwRMSD's robust performance and its superiority over standard RMSD are demonstrated over a range of homologous proteins. Its better overlays result in corrected sequence alignments with good agreement to HOMSTRAD. Finally, HwRMSD is compared to established structural alignment methods: FATCAT, SSM, CE, and DaliLite. Most methods are comparable at placing residue pairs within 2 Å, but HwRMSD places many more residue pairs within 1 Å, providing a clear advantage. Such high accuracy is essential in drug design, where small distances can have a large impact on computational predictions. This level of accuracy is also needed to correct sequence alignments in an automated fashion, especially for omics-scale analysis. HwRMSD can align homologs with low sequence identity and large conformational differences, cases where both sequence-based and structure-based methods may fail. The HwRMSD pipeline overcomes the dependency of structural overlays on the initial sequence pairing and removes the need to determine the best sequence-alignment method, substitution matrix, and gap parameters for each unique pair of homologs. PMID:22733542
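The core of a Gaussian-weighted superposition can be sketched as a generic iteratively reweighted Kabsch fit. The Python below is a minimal version under assumed parameters (the Gaussian width sigma and iteration count are not the published values), and it omits the sequence-aligner and SE coupling described above:

    import numpy as np

    def weighted_superposition(P, Q, w):
        # One weighted Kabsch superposition of P onto Q (both N x 3).
        wn = w / w.sum()
        cp, cq = wn @ P, wn @ Q                 # weighted centroids
        Pc, Qc = P - cp, Q - cq
        H = Pc.T @ (wn[:, None] * Qc)           # weighted cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return Pc @ R.T + cq                    # P rotated/translated onto Q

    def wrmsd_align(P, Q, sigma=2.0, n_iter=20):
        # Iteratively reweighted (Gaussian-weighted) superposition:
        # residue pairs far apart in the current overlay get small weights.
        w = np.ones(len(P))
        for _ in range(n_iter):
            P_fit = weighted_superposition(P, Q, w)
            d2 = ((P_fit - Q) ** 2).sum(axis=1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
        return P_fit, w

Down-weighting divergent residue pairs is what lets the overlay concentrate on the conserved core even when parts of the initial pairing are wrong.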
Tan, Kok Chooi; Lim, Hwee San; Matjafri, Mohd Zubir; Abdullah, Khiruddin
2012-06-01
Atmospheric corrections for multi-temporal optical satellite images are necessary, especially in change-detection analyses such as normalized difference vegetation index (NDVI) ratioing. Abrupt change-detection analysis using remote-sensing techniques requires radiometric congruity and atmospheric correction to monitor terrestrial surfaces over time. Two atmospheric correction methods were used for this study: relative radiometric normalization and the simplified method for atmospheric correction (SMAC) in the solar spectrum. A multi-temporal data set consisting of two sets of Landsat images from the period between 1991 and 2002 of Penang Island, Malaysia, was used to compare NDVI maps generated using the proposed atmospheric correction methods. Land surface temperature (LST) was retrieved using ATCOR3_T in PCI Geomatica 10.1 image processing software. Linear regression analysis was utilized to analyze the relationship between NDVI and LST. This study reveals that both of the proposed atmospheric correction methods yielded high accuracy, as judged by the linear correlation coefficients. To check the accuracy of the equation obtained through linear regression analysis for each satellite image, 20 points were chosen at random. The results showed that the SMAC method yielded consistent errors when predicting the NDVI value from the regression-derived equation. The average errors from both proposed atmospheric correction methods were less than 10%.
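A minimal Python sketch of two of the ingredients named above, NDVI and a linear relative radiometric normalization. The whole-scene regression stands in for a pseudo-invariant-feature selection, which is an assumption, and SMAC itself (a published radiative-transfer parameterization) is not reproduced here:

    import numpy as np

    def ndvi(nir, red):
        # Normalized difference vegetation index, guarded against
        # division by zero on dark pixels.
        return (nir - red) / (nir + red + 1e-10)

    def relative_normalization(subject, reference):
        # Regress the subject image against the reference scene and map
        # the subject onto the reference's radiometric scale; in practice
        # the fit would use pseudo-invariant pixels only.
        m, b = np.polyfit(subject.ravel(), reference.ravel(), 1)
        return m * subject + b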
NASA Astrophysics Data System (ADS)
Wang, Jinliang; Wu, Xuejiao
2010-11-01
Geometric correction of imagery is a basic application of remote sensing technology, and its precision directly impacts the accuracy and reliability of subsequent applications. The accuracy of geometric correction depends on many factors, including the correction model used, the accuracy of the reference map, the number of ground control points (GCPs) and their spatial distribution, and the resampling method. An ETM+ image of the Kunming Dianchi Lake Basin and 1:50000 geographical maps were used to compare different correction methods. The results showed that: (1) The correction errors were more than one pixel, and sometimes several pixels, when the polynomial model was used; the correction accuracy was not stable when the Delaunay model was used; and the correction errors were less than one pixel when the collinearity equation was used. (2) When 6, 9, 25 and 35 GCPs were selected randomly for geometric correction with the polynomial model, the best result was obtained with 25 GCPs. (3) Among the resampling methods, nearest neighbor gave the best image contrast and the fastest resampling, but poor continuity of pixel gray values; cubic convolution gave the worst contrast and the longest computation time. On balance, bilinear resampling gave the best result.
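For illustration, a short Python sketch of the GCP-based polynomial correction discussed in point (1): a second-order least-squares fit from map to image coordinates, returning per-GCP residuals so the one-pixel criterion can be checked. The variable names and the choice of a second-order polynomial are assumptions, not the paper's exact setup:

    import numpy as np

    def poly_terms(x, y):
        # Second-order polynomial design matrix (6 terms per axis, so at
        # least six well-distributed GCPs are required).
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    def fit_warp(gcp_map, gcp_img):
        # Least-squares fit mapping map (x, y) to image (col, row)
        # coordinates, plus per-GCP residuals in pixels.
        A = poly_terms(gcp_map[:, 0], gcp_map[:, 1])
        coef, *_ = np.linalg.lstsq(A, gcp_img, rcond=None)
        resid = np.sqrt(((A @ coef - gcp_img) ** 2).sum(axis=1))
        return coef, resid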
How does bias correction of RCM precipitation affect modelled runoff?
NASA Astrophysics Data System (ADS)
Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Vaze, J.; Evans, J. P.
2014-09-01
Many studies bias correct daily precipitation from climate models to match observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to precipitation simulated by the weather research and forecasting (WRF) model, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the difference between the tested methods is small in the modelling experiments here (and as reported in the literature), mainly because of the substantial corrections required and inconsistent errors over time (non-stationarity). The errors remaining in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome the limitations of RCMs in simulating precipitation sequences, which affect runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations decreases.
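Of the methods compared, empirical quantile mapping is the most widely used and is easy to sketch. The Python below is a minimal form, assuming the mapping is fitted on a calibration period and applied to new simulated values; real studies additionally handle dry-day frequency, seasonality, and extrapolation beyond the fitted tails:

    import numpy as np

    def quantile_map(rcm_cal, obs_cal, rcm_new):
        # Fit the empirical quantiles of simulated and observed daily
        # precipitation on a calibration period, then replace each new
        # simulated value with the observed value at the same quantile.
        q = np.linspace(0.0, 1.0, 101)
        rcm_q = np.quantile(rcm_cal, q)
        obs_q = np.quantile(obs_cal, q)
        return np.interp(rcm_new, rcm_q, obs_q)

Because the mapping only adjusts the distribution, it cannot repair unrealistic wet/dry sequencing, which is exactly the limitation noted above for runoff generation.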
Deriving pathway maps from automated text analysis using a grammar-based approach.
Olsson, Björn; Gawronska, Barbara; Erlendsson, Björn
2006-04-01
We demonstrate how automated text analysis can be used to support the large-scale analysis of metabolic and regulatory pathways by deriving pathway maps from textual descriptions found in the scientific literature. The main assumption is that correct syntactic analysis combined with domain-specific heuristics provides a good basis for relation extraction. Our method uses an algorithm that searches through the syntactic trees produced by a parser based on a Referent Grammar formalism, identifies relations mentioned in the sentence, and classifies them with respect to their semantic class and epistemic status (facts, counterfactuals, hypotheses). The semantic categories used in the classification are based on the relation set used in KEGG (Kyoto Encyclopedia of Genes and Genomes), so that pathway maps using KEGG notation can be automatically generated. We present the current version of the relation extraction algorithm and an evaluation based on a corpus of abstracts obtained from PubMed. The results indicate that the method is able to combine a reasonable coverage with high accuracy. We found that 61% of all sentences were parsed, and 97% of the parse trees were judged to be correct. The extraction algorithm was tested on a sample of 300 parse trees and was found to produce correct extractions in 90.5% of the cases.
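As a toy illustration of the classification step described above, the Python sketch below maps already-extracted (subject, verb, object) triples to KEGG-style relation labels. The verb lexicon is hypothetical, and the Referent Grammar parsing, domain heuristics, and epistemic-status classification of the actual system are not reproduced:

    # Hypothetical verb lexicon, for illustration only.
    KEGG_RELATION = {
        "activates": "activation",
        "phosphorylates": "phosphorylation",
        "inhibits": "inhibition",
        "represses": "repression",
        "binds": "binding/association",
    }

    def classify_triple(subj, verb, obj):
        # Map a parsed subject-verb-object triple to a relation edge,
        # or None when the verb is outside the lexicon.
        rel = KEGG_RELATION.get(verb.lower())
        return (subj, rel, obj) if rel is not None else None

    print(classify_triple("Raf", "phosphorylates", "MEK"))
    # ('Raf', 'phosphorylation', 'MEK')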
Zhang, Ying; Alonzo, Todd A
2016-11-01
In diagnostic medicine, the volume under the receiver operating characteristic (ROC) surface (VUS) is a commonly used index to quantify the ability of a continuous diagnostic test to discriminate between three disease states. In practice, verification of the true disease status may be performed only for a subset of subjects under study, since the verification procedure can be invasive, risky, or expensive. The selection for disease examination might depend on the results of the diagnostic test and other clinical characteristics of the patients, which in turn can bias estimates of the VUS. This bias is referred to as verification bias. Existing verification bias correction in three-way ROC analysis focuses on ordinal tests. We propose verification bias-correction methods, based on inverse probability weighting, to construct the ROC surface and estimate the VUS for a continuous diagnostic test. By applying U-statistics theory, we develop asymptotic properties for the estimator. A jackknife estimator of the variance is also derived. Extensive simulation studies are performed to evaluate the performance of the new estimators in terms of bias correction and variance. The proposed methods are used to assess the ability of a biomarker to accurately identify stages of Alzheimer's disease.
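A minimal Python sketch of an inverse-probability-weighted VUS estimate of P(T_1 < T_2 < T_3): each verified subject is weighted by the inverse of its estimated verification probability, and the weighted U-statistic is normalized by the weighted class sizes. The input conventions and the handling of ties (ignored here) are assumptions; the asymptotics and jackknife variance of the paper are not reproduced:

    import numpy as np

    def ipw_vus(T, D, V, pi):
        # T: test values; D: verified disease stage (1, 2, 3);
        # V: 0/1 verification indicator; pi: estimated P(V=1 | T, covariates).
        w = V / pi
        m1, m2, m3 = [(V == 1) & (D == s) for s in (1, 2, 3)]
        t1, w1 = T[m1], w[m1]
        t2, w2 = T[m2], w[m2]
        t3, w3 = T[m3], w[m3]
        # Sum over middle-stage subjects of (weight below) * (weight above).
        num = sum(wj * w1[t1 < tj].sum() * w3[t3 > tj].sum()
                  for tj, wj in zip(t2, w2))
        return num / (w1.sum() * w2.sum() * w3.sum())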
Alternative method for predicting optimal insertion depth of the laryngeal tube in children.
Kim, J T; Jeon, S Y; Kim, C S; Kim, S D; Kim, H S
2007-11-01
Little information is available about the accuracy of the teeth mark on the laryngeal tube (LT) as a guide to correct placement in children. The aim of this crossover study was to evaluate three methods for determining the optimal insertion depth of the size (#) 2 tube in children weighing 12-25 kg. In 24 children, the LT #2 was consecutively inserted by three different methods: (A) until the thick teeth mark on the tube was aligned with the upper incisors, (B) until resistance was felt, and (C) to a previously measured depth equal to the curved distance between the cricoid cartilage and the upper incisors. In each case, the depth of insertion, the degree of effective ventilation, the presence of leakage, and the fibreoptic view were assessed. Insertion based on the teeth mark led to a shorter insertion depth and a greater incidence of inadequate ventilation compared with the other two methods. There was no difference in the adequacy of ventilation between methods B and C. The vocal cords were more easily identified with methods B (62.5%) and C (75%) than with method A (12.5%). Insertion of the LT #2 aligned with the teeth mark can result in a shallow insertion depth and inadequate ventilation. The measured distance from the cricoid cartilage to the upper incisors offers alternative guidance for correct LT insertion.
[Is ultrasound equal to X-ray in pediatric fracture diagnosis?].
Moritz, J D; Hoffmann, B; Meuser, S H; Sehr, D H; Caliebe, A; Heller, M
2010-08-01
Ultrasound is currently not established for the diagnosis of fractures. The aim of this study was to compare ultrasound and X-ray beyond their use solely for the identification of fractures, i.e., for the detection of fracture type and dislocation in pediatric fracture diagnosis. Limb bones of dead young pigs served as a model for pediatric bones. The fractured bones were examined with ultrasound, X-ray, and CT, with CT serving as the gold standard. 162 of 248 bones were fractured; 130 fractures were identified using ultrasound and 148 using X-ray. There were some advantages of X-ray over ultrasound in the detection of fracture type (80 correct results using X-ray, 66 using ultrasound). Ultrasound, however, was superior to X-ray for the identification of dislocation (41 correct results using X-ray, 51 using ultrasound). Neither difference was statistically significant after adjustment for multiple testing. Ultrasound not only has sensitivity comparable to that of X-ray for the identification of limb fractures but is also equally effective for the diagnosis of fracture type and dislocation. Thus, ultrasound can be used as an adequate alternative to X-ray in pediatric fracture diagnosis.
How to securely replicate services (preliminary version)
NASA Technical Reports Server (NTRS)
Reiter, Michael; Birman, Kenneth
1992-01-01
A method is presented for constructing replicated services that retain their availability and integrity despite several servers and clients being corrupted by an intruder, in addition to others failing benignly. More precisely, a service is replicated by n servers in such a way that a correct client will accept a correct server's response if, for some prespecified parameter k, at least k servers are correct and fewer than k servers are corrupt. The issue of maintaining causality among client requests is also addressed. A security breach resulting from an intruder's ability to effect a violation of causality in the sequence of requests processed by the service is illustrated. An approach to counter this problem is proposed that requires that fewer than k servers be corrupt and, to ensure liveness, that k be less than or equal to n - 2t, where t is the assumed maximum total number of both corruptions and benign failures suffered by the servers in any system run. An important and novel feature of these schemes is that the client need not be able to identify or authenticate even a single server. Instead, the client is required only to possess at most two public keys for the service.
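One plausible reading of the client-side acceptance rule, sketched in Python: the client waits for matching responses and accepts a value once k servers have returned it, since with fewer than k corrupt servers any k matching responses must include at least one correct server. Note that this check needs no server authentication, consistent with the remark above; the causality machinery and service public keys are not modeled:

    from collections import Counter

    def accept_response(responses, k):
        # responses: list of (possibly conflicting) reply values collected
        # from the n servers; accept a value only when at least k agree.
        if not responses:
            return None
        value, count = Counter(responses).most_common(1)[0]
        return value if count >= k else None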
Ice Cores Dating With a New Inverse Method Taking Account of the Flow Modeling Errors
NASA Astrophysics Data System (ADS)
Lemieux-Dudon, B.; Parrenin, F.; Blayo, E.
2007-12-01
Deep ice cores extracted from Antarctica or Greenland record a wide range of past climatic events. In order to contribute to the understanding of the Quaternary climate system, the calculation of an accurate depth-age relationship is a crucial point. Up to now, ice chronologies for deep ice cores estimated with inverse approaches have been based on quite simplified ice-flow models that fail to reproduce flow irregularities and, consequently, to respect all available sets of age markers. We describe in this paper a new inverse method that takes the model uncertainty into account in order to circumvent the restrictions linked to the use of simplified flow models. This method uses first guesses on two physical flow quantities, the ice thinning function and the accumulation rate, and then identifies correction functions on both. We highlight two major benefits brought by this new method: first, the ability to respect a large set of observations, and as a consequence, the feasibility of estimating a synchronized common ice chronology for several cores at the same time. This inverse approach relies on a Bayesian framework. To respect the positivity constraint on the searched correction functions, we assume lognormal probability distributions both for the background errors and for one particular set of the observation errors. We test this new inversion method on three cores simultaneously (the two EPICA cores, DC and DML, and the Vostok core) and assimilate more than 150 observations (e.g., age markers, stratigraphic links). We analyze the sensitivity of the solution with respect to the background information, especially the prior error covariance matrix. Confidence intervals based on the posterior covariance matrix are estimated for the correction functions and, for the first time, for the overall output chronologies.
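The depth-age relation behind the two correction functions can be made concrete: if tau(z) is the thinning function and a(z) the accumulation rate (in ice equivalent), the age at depth z is the integral over depth of 1 / (tau(z') * a(z')). Below is a minimal Python version of that standard relation, with multiplicative corrections applied to the first guesses; the paper's lognormal-error setup motivates the positivity of these corrections, but the numerical details here are assumptions:

    import numpy as np

    def chronology(z, thinning, accumulation, corr_thin=1.0, corr_acc=1.0):
        # age(z) = integral_0^z dz' / (tau(z') * a(z')), evaluated by
        # cumulative trapezoidal integration on the depth grid z, with
        # multiplicative correction functions on both first guesses.
        tau = thinning * corr_thin
        acc = accumulation * corr_acc
        integrand = 1.0 / (tau * acc)
        steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)
        return np.concatenate([[0.0], np.cumsum(steps)])  # age at each depth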