The Detection and Correction of Bias in Student Ratings of Instruction.
ERIC Educational Resources Information Center
Haladyna, Thomas; Hess, Robert K.
1994-01-01
A Rasch model was used to detect and correct bias in Likert rating scales used to assess student perceptions of college teaching, using a database of ratings. Statistical corrections were significant, supporting the model's potential utility. Recommendations are made for a theoretical rationale and further research on the model. (Author/MSE)
Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit...likely to be isolated and be correctable by the convolutional decoder. 44 Data rate (Mbps) Modulation Coding Rate Coded bits per subcarrier...binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data
Staircase-scene-based nonuniformity correction in aerial point target detection systems.
Huo, Lijun; Zhou, Dabiao; Wang, Dejiang; Liu, Rang; He, Bin
2016-09-01
Focal-plane arrays (FPAs) are often affected by heavy fixed-pattern noise, which severely degrades the detection rate and increases false alarms in airborne point target detection systems. Thus, high-precision nonuniformity correction is an essential preprocessing step. In this paper, a new nonuniformity correction method is proposed based on a staircase scene. This correction method can compensate for the nonlinear response of the detector and calibrate the entire optical system with computational efficiency and implementation simplicity. Then, a proof-of-concept point target detection system is established with a long-wave Sofradir FPA. Finally, the local standard deviation of the corrected image and the signal-to-clutter ratio of the Airy disk of a Boeing B738 are measured to evaluate the performance of the proposed nonuniformity correction method. Our experimental results demonstrate that the proposed correction method achieves high-quality corrections.
The solar cycle variation of the rates of CMEs and related activity
NASA Technical Reports Server (NTRS)
Webb, David F.
1991-01-01
Coronal mass ejections (CMEs) are an important aspect of the physics of the corona and heliosphere. This paper presents results of a study of occurrence frequencies of CMEs and related activity tracers over more than a complete solar activity cycle. To properly estimate occurrence rates, observed CME rates must be corrected for instrument duty cycles, detection efficiencies away from the skyplane, mass detection thresholds, and geometrical considerations. These corrections are evaluated using CME data from 1976-1989 obtained with the Skylab, SMM and SOLWIND coronagraphs and the Helios-2 photometers. The major results are: (1) the occurrence rate of CMEs tends to track the activity cycle in both amplitude and phase; (2) the corrected rates from different instruments are reasonably consistent; and (3) over the long term, no one class of solar activity tracer is better correlated with CME rate than any other (with the possible exception of type II bursts).
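As a quick illustration of the kind of correction described above, an observed CME count can be divided by the instrument duty cycle, the detection efficiency away from the skyplane, and the mass-threshold completeness to estimate a true occurrence rate. The Python sketch below uses purely illustrative numbers, not values from the study.

# Hedged sketch: correct an observed CME rate for instrumental effects.
# All numbers are illustrative assumptions, not values from the study.
observed_cmes = 120          # CMEs counted during the observing interval
interval_days = 300.0        # length of the observing interval
duty_cycle = 0.6             # fraction of the interval the coronagraph was observing
detection_efficiency = 0.8   # fraction of CMEs detectable away from the skyplane
mass_completeness = 0.9      # fraction of CMEs above the instrument's mass threshold

observed_rate = observed_cmes / interval_days
corrected_rate = observed_rate / (duty_cycle * detection_efficiency * mass_completeness)
print(f"observed: {observed_rate:.2f}/day, corrected: {corrected_rate:.2f}/day")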
40 CFR Table 2 to Subpart Kkkkk of... - Operating Limits
Code of Federal Regulations, 2010 CFR
2010-07-01
... stack. 2. Kiln equipped with a DIFF or DLS/FF a. If you use a bag leak detection system, initiate corrective action within 1 hour of a bag leak detection system alarm and complete corrective actions in... to the scrubber water, maintain the average scrubber chemical feed rate for each 3-hour block period...
Ishihara, Kazuyuki; Nabuchi, Akihiro; Ito, Rieko; Miyachi, Kouji; Kuramitsu, Howard K; Okuda, Katsuji
2004-03-01
Utilizing PCR, the 16S rRNA detection rates for Porphyromonas gingivalis, Actinobacillus actinomycetemcomitans, Bacteroides forsythus, Treponema denticola, and Campylobacter rectus in samples of stenotic coronary artery plaques were determined to be 21.6, 23.3, 5.9, 23.5, and 15.7%, respectively. The detection rates for P. gingivalis and C. rectus correlated with their presence in subgingival plaque.
A two-dimensional matrix correction for off-axis portal dose prediction errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, Daniel W.; Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263; Kumaraswamy, Lalith
2013-05-15
Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. ['An effective correction algorithm for off-axis portal dosimetry errors,' Med. Phys. 36, 4089-4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone. As in the 1D correction case, the 2D algorithm leaves the portal dosimetry process virtually unchanged in the central portion of the detector, and thus these correction algorithms are not needed for centrally located fields of moderate size (at least, in the case of 6 MV beam energy). Conclusion: The 2D correction improves the portal dosimetry results for those fields for which the 1D correction proves insufficient, especially in the inplane, off-axis regions of the detector. This 2D correction neglects the relatively smaller discrepancies that may be caused by backscatter from nonuniform machine components downstream from the detecting layer.
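To make the idea concrete, here is a minimal numpy sketch of a 2D per-pixel correction of the general kind described, not the authors' exact calibration procedure: a correction matrix is formed from the average ratio of measured to predicted calibration images and then applied to each subsequent predicted image. Array shapes and data are placeholders.

import numpy as np

# Hedged sketch of a 2D per-pixel correction matrix (placeholder data).
# measured_cal and predicted_cal: stacks of calibration images spanning the detector.
rng = np.random.default_rng(0)
measured_cal = rng.random((5, 384, 512)) + 1.0
predicted_cal = rng.random((5, 384, 512)) + 1.0

# Per-pixel correction: average ratio of measured to predicted calibration dose.
correction_matrix = np.mean(measured_cal / predicted_cal, axis=0)

def apply_correction(predicted_image):
    """Apply the 2D matrix correction to a predicted portal dose image."""
    return predicted_image * correction_matrix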
ERIC Educational Resources Information Center
Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.
2014-01-01
We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…
NASA Astrophysics Data System (ADS)
King, Jill L.; Gur, David; Rockette, Howard E.; Curtin, Hugh D.; Obuchowski, Nancy A.; Thaete, F. Leland; Britton, Cynthia A.; Metz, Charles E.
1991-07-01
The relationship between subjective judgments of image quality for the performance of specific detection tasks and radiologists' confidence level in arriving at correct diagnoses was investigated in two studies in which 12 readers, using a total of three different display environments, interpreted a series of 300 PA chest images. The modalities used were conventional films, laser-printed films, and high-resolution CRT display of digitized images. For the detection of interstitial disease, nodules, and pneumothoraces, there was no statistically significant correlation (Spearman rho) between subjective ratings of quality and radiologists' confidence in detecting these abnormalities. However, in each study, for all modalities and all readers but one, a small but statistically significant correlation was found between the radiologists' ability to correctly and confidently rule out interstitial disease and their subjective ratings of image quality.
A physics investigation of deadtime losses in neutron counting at low rates with Cf252
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Louise G; Croft, Stephen
2009-01-01
²⁵²Cf spontaneous fission sources are used for the characterization of neutron counters and the determination of calibration parameters, including both neutron coincidence counting (NCC) and neutron multiplicity deadtime (DT) parameters. Even at low event rates, temporally-correlated neutron counting using ²⁵²Cf suffers a deadtime effect, meaning that, in contrast to counting a random neutron source (e.g., AmLi to a close approximation), DT losses do not vanish in the low-rate limit. This is because neutrons are emitted from spontaneous fission events in time-correlated 'bursts' and are detected over a short period commensurate with their lifetime in the detector (characterized by the system die-away time, τ). Thus, even when detected neutron events from different spontaneous fissions are unlikely to overlap in time, neutron events within the detected 'burst' are subject to intrinsic DT losses. Intrinsic DT losses for dilute Pu will be lower since the multiplicity distribution is softer, but real items also experience self-multiplication, which can increase the 'size' of the bursts. Traditional NCC DT correction methods do not include the intrinsic (within-burst) losses. We have proposed new forms of the traditional NCC Singles and Doubles DT correction factors. In this work, we apply Monte Carlo neutron pulse train analysis to investigate the functional form of the deadtime correction factors for an updating deadtime. Modeling is based on a high-efficiency ³He neutron counter with short die-away time, representing an ideal ³He-based detection system. The physics of deadtime losses at low rates is explored and presented. It is observed that the new forms are applicable and offer more accurate correction than the traditional forms.
Automatic Brain Tumor Detection in T2-weighted Magnetic Resonance Images
NASA Astrophysics Data System (ADS)
Dvořák, P.; Kropatsch, W. G.; Bartušek, K.
2013-10-01
This work focuses on fully automatic detection of brain tumors. The first aim is to determine whether the image contains a brain with a tumor and, if it does, to localize it. The goal of this work is not the exact segmentation of tumors, but the localization of their approximate position. The test database contains 203 T2-weighted images, of which 131 are images of healthy brain and the remaining 72 images contain brain with a pathological area. The estimation of whether the image shows an afflicted brain, and where the pathological area is, is done by multiresolution symmetry analysis. The first goal was tested by a five-fold cross-validation technique with 100 repetitions to avoid dependency of the result on sample order. This part of the proposed method reaches a true positive rate of 87.52% and a true negative rate of 93.14% for afflicted brain detection. The evaluation of the second part of the algorithm was carried out by comparing the estimated location to the true tumor location. The detection of the tumor location reaches a rate of 95.83% correct anomaly detection and a rate of 87.5% correct tumor location.
A dual-process account of auditory change detection.
McAnally, Ken I; Martin, Russell L; Eramudugolla, Ranmalee; Stuart, Geoffrey W; Irvine, Dexter R F; Mattingley, Jason B
2010-08-01
Listeners can be "deaf" to a substantial change in a scene comprising multiple auditory objects unless their attention has been directed to the changed object. It is unclear whether auditory change detection relies on identification of the objects in pre- and post-change scenes. We compared the rates at which listeners correctly identify changed objects with those predicted by change-detection models based on signal detection theory (SDT) and high-threshold theory (HTT). Detected changes were not identified as accurately as predicted by models based on either theory, suggesting that some changes are detected by a process that does not support change identification. Undetected changes were identified as accurately as predicted by the HTT model but much less accurately than predicted by the SDT models. The process underlying change detection was investigated further by determining receiver-operating characteristics (ROCs). ROCs did not conform to those predicted by either an SDT or an HTT model but were well modeled by a dual-process model that incorporated HTT and SDT components. The dual-process model also accurately predicted the rates at which detected and undetected changes were correctly identified.
Kreilinger, Alex; Hiebel, Hannah; Müller-Putz, Gernot R
2016-03-01
This work aimed to find and evaluate a new method for detecting errors in continuous brain-computer interface (BCI) applications. Instead of classifying errors on a single-trial basis, the new method was based on multiple events (MEs) analysis to increase the accuracy of error detection. In a BCI-driven car game, based on motor imagery (MI), discrete events were triggered whenever subjects collided with coins and/or barriers. Coins counted as correct events, whereas barriers were errors. This new method, termed ME method, combined and averaged the classification results of single events (SEs) and determined the correctness of MI trials, which consisted of event sequences instead of SEs. The benefit of this method was evaluated in an offline simulation. In an online experiment, the new method was used to detect erroneous MI trials. Such MI trials were discarded and could be repeated by the users. We found that, even with low SE error potential (ErrP) detection rates, feasible accuracies can be achieved when combining MEs to distinguish erroneous from correct MI trials. Online, all subjects reached higher scores with error detection than without, at the cost of longer times needed for completing the game. Findings suggest that ErrP detection may become a reliable tool for monitoring continuous states in BCI applications when combining MEs. This paper demonstrates a novel technique for detecting errors in online continuous BCI applications, which yields promising results even with low single-trial detection rates.
Corrections of clinical chemistry test results in a laboratory information system.
Wang, Sihe; Ho, Virginia
2004-08-01
The recently released reports by the Institute of Medicine, To Err Is Human and Patient Safety, have received national attention because of their focus on the problem of medical errors. Although a small number of studies have reported on errors in general clinical laboratories, there are, to our knowledge, no reported studies that focus on errors in pediatric clinical laboratory testing. To characterize the errors that have caused corrections to have to be made in pediatric clinical chemistry results in the laboratory information system, Misys. To provide initial data on the errors detected in pediatric clinical chemistry laboratories in order to improve patient safety in pediatric health care. All clinical chemistry staff members were informed of the study and were requested to report in writing when a correction was made in the laboratory information system, Misys. Errors were detected either by the clinicians (the results did not fit the patients' clinical conditions) or by the laboratory technologists (the results were double-checked, and the worksheets were carefully examined twice a day). No incident that was discovered before or during the final validation was included. On each Monday of the study, we generated a report from Misys that listed all of the corrections made during the previous week. We then categorized the corrections according to the types and stages of the incidents that led to the corrections. A total of 187 incidents were detected during the 10-month study, representing a 0.26% error detection rate per requisition. The distribution of the detected incidents included 31 (17%) preanalytic incidents, 46 (25%) analytic incidents, and 110 (59%) postanalytic incidents. The errors related to noninterfaced tests accounted for 50% of the total incidents and for 37% of the affected tests and orderable panels, while the noninterfaced tests and panels accounted for 17% of the total test volume in our laboratory. This pilot study provided the rate and categories of errors detected in a pediatric clinical chemistry laboratory based on the corrections of results in the laboratory information system. A direct interface of the instruments to the laboratory information system showed that it had favorable effects on reducing laboratory errors.
Dead time corrections for in-beam γ-spectroscopy measurements
NASA Astrophysics Data System (ADS)
Boromiza, M.; Borcea, C.; Negret, A.; Olacel, A.; Suliman, G.
2017-08-01
Relatively high counting rates were registered in a proton inelastic scattering experiment on ¹⁶O and ²⁸Si using HPGe detectors, which was performed at the Tandem facility of IFIN-HH, Bucharest. Consequently, dead time corrections were needed in order to determine the absolute γ-production cross sections. Considering that the real counting rate follows a Poisson distribution, the dead time correction procedure is reformulated in statistical terms. The arrival time interval between incoming events (Δt) obeys an exponential distribution with a single parameter, the average of the associated Poisson distribution. We use this mathematical connection to calculate and implement the dead time corrections for the counting rates of the mentioned experiment. Also, exploiting an idea introduced by Pommé et al., we describe a consistent method for calculating the dead time correction which completely eludes the complicated problem of measuring the dead time of a given detection system. Several comparisons are made between the corrections implemented through this method and by using standard (phenomenological) dead time models, and we show how these results were used for correcting our experimental cross sections.
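For context, the two textbook dead-time models below show how a true rate can be recovered from a measured rate and a known dead time τ; they are standard non-paralyzable and paralyzable formulas, offered as a hedged illustration rather than the exact statistical procedure of the paper.

import numpy as np
from scipy.optimize import brentq

def true_rate_nonparalyzable(measured_rate, tau):
    # n = m / (1 - m*tau) for a non-paralyzable (non-extending) dead time
    return measured_rate / (1.0 - measured_rate * tau)

def true_rate_paralyzable(measured_rate, tau):
    # Solve m = n * exp(-n*tau) numerically for the paralyzable (extending) case
    f = lambda n: n * np.exp(-n * tau) - measured_rate
    return brentq(f, measured_rate, 1.0 / tau)

print(true_rate_nonparalyzable(9.5e4, 1e-6))   # ~1.05e5 counts/s for an illustrative input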
NASA Astrophysics Data System (ADS)
Green, S. J.; Tamburello, N.; Miller, S. E.; Akins, J. L.; Côté, I. M.
2013-06-01
A standard approach to improving the accuracy of reef fish population estimates derived from underwater visual censuses (UVCs) is the application of species-specific correction factors, which assumes that a species' detectability is constant under all conditions. To test this assumption, we quantified detection rates for invasive Indo-Pacific lionfish ( Pterois volitans and P. miles), which are now a primary threat to coral reef conservation throughout the Caribbean. Estimates of lionfish population density and distribution, which are essential for managing the invasion, are currently obtained through standard UVCs. Using two conventional UVC methods, the belt transect and stationary visual census (SVC), we assessed how lionfish detection rates vary with lionfish body size and habitat complexity (measured as rugosity) on invaded continuous and patch reefs off Cape Eleuthera, the Bahamas. Belt transect and SVC surveys performed equally poorly, with both methods failing to detect the presence of lionfish in >50 % of surveys where thorough, lionfish-focussed searches yielded one or more individuals. Conventional methods underestimated lionfish biomass by ~200 %. Crucially, detection rate varied significantly with both lionfish size and reef rugosity, indicating that the application of a single correction factor across habitats and stages of invasion is unlikely to accurately characterize local populations. Applying variable correction factors that account for site-specific lionfish size and rugosity to conventional survey data increased estimates of lionfish biomass, but these remained significantly lower than actual biomass. To increase the accuracy and reliability of estimates of lionfish density and distribution, monitoring programs should use detailed area searches rather than standard visual survey methods. Our study highlights the importance of accounting for sources of spatial and temporal variation in detection to increase the accuracy of survey data from coral reef systems.
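A minimal sketch of the variable-correction-factor idea: observed counts are divided by a detection probability that depends on lionfish size class and reef rugosity, rather than by a single species-level factor. The detection probabilities below are illustrative assumptions, not values from the study.

# Hedged sketch: observed counts are divided by a detection probability that
# depends on fish size class and rugosity class (values are illustrative).
detection_prob = {            # P(detect | size class, rugosity class)
    ("small", "high_rugosity"): 0.25,
    ("small", "low_rugosity"): 0.45,
    ("large", "high_rugosity"): 0.55,
    ("large", "low_rugosity"): 0.80,
}

def corrected_count(observed, size_class, rugosity_class):
    return observed / detection_prob[(size_class, rugosity_class)]

print(corrected_count(4, "small", "high_rugosity"))  # 16.0 estimated individuals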
Vertex evoked potentials in a rating-scale detection task: Relation to signal probability
NASA Technical Reports Server (NTRS)
Squires, K. C.; Squires, N. K.; Hillyard, S. A.
1974-01-01
Vertex evoked potentials were recorded from human subjects performing in an auditory detection task with rating scale responses. Three values of a priori probability of signal presentation were tested. The amplitudes of the N1 and P3 components of the vertex potential associated with correct detections of the signal were found to be systematically related to the strictness of the response criterion and independent of variations in a priori signal probability. No similar evoked potential components were found associated with signal absent judgements (misses and correct rejections) regardless of the confidence level of the judgement or signal probability. These results strongly support the contention that the form of the vertex evoked response is closely correlated with the subject's psychophysical decision regarding the presence or absence of a threshold level signal.
Power corrections to the universal heavy WIMP-nucleon cross section
NASA Astrophysics Data System (ADS)
Chen, Chien-Yi; Hill, Richard J.; Solon, Mikhail P.; Wijangco, Alexander M.
2018-06-01
WIMP-nucleon scattering is analyzed at order 1/M in Heavy WIMP Effective Theory. The 1/M power corrections, where M ≫ m_W is the WIMP mass, distinguish between different underlying UV models with the same universal limit and their impact on direct detection rates can be enhanced relative to naive expectations due to generic amplitude-level cancellations at leading order. The necessary one- and two-loop matching calculations onto the low-energy effective theory for WIMP interactions with Standard Model quarks and gluons are performed for the case of an electroweak SU(2) triplet WIMP, considering both the cases of elementary fermions and composite scalars. The low-velocity WIMP-nucleon scattering cross section is evaluated and compared with current experimental limits and projected future sensitivities. Our results provide the most robust prediction for electroweak triplet Majorana fermion dark matter direct detection rates; for this case, a cancellation between two sources of power corrections yields a small total 1/M correction, and a total cross section close to the universal limit for M ≳ few × 100 GeV. For the SU(2) composite scalar, the 1/M corrections introduce dependence on underlying strong dynamics. Using a leading chiral logarithm evaluation, the total 1/M correction has a larger magnitude and uncertainty than in the fermionic case, with a sign that further suppresses the total cross section. These examples provide definite targets for future direct detection experiments and motivate large scale detectors capable of probing to the neutrino floor in the TeV mass regime.
2014-05-01
hand and right hand on the piano, or strumming and chording on the guitar. Perceptual: This skill category involves detecting and interpreting sensory...measured as the percent correct, # correct, accumulated points, task/test scoring of correct action/timing/performance. This also includes quality rating by...competition and scoring, as well as constraints, privileges and penalties. Simulation-Based: The primary delivery environment is an interactive synthetic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chien-Yi; Hill, Richard J.; Solon, Mikhail P.
WIMP-nucleon scattering is analyzed at order 1/M in Heavy WIMP Effective Theory. The 1/M power corrections, where M ≫ m_W is the WIMP mass, distinguish between different underlying UV models with the same universal limit and their impact on direct detection rates can be enhanced relative to naive expectations due to generic amplitude-level cancellations at leading order. The necessary one- and two-loop matching calculations onto the low-energy effective theory for WIMP interactions with Standard Model quarks and gluons are performed for the case of an electroweak SU(2) triplet WIMP, considering both the cases of elementary fermions and composite scalars. The low-velocity WIMP-nucleon scattering cross section is evaluated and compared with current experimental limits and projected future sensitivities. Our results provide the most robust prediction for electroweak triplet Majorana fermion dark matter direct detection rates; for this case, a cancellation between two sources of power corrections yields a small total 1/M correction, and a total cross section close to the universal limit for M ≳ few × 100 GeV. For the SU(2) composite scalar, the 1/M corrections introduce dependence on underlying strong dynamics. Using a leading chiral logarithm evaluation, the total 1/M correction has a larger magnitude and uncertainty than in the fermionic case, with a sign that further suppresses the total cross section. These examples provide definite targets for future direct detection experiments and motivate large scale detectors capable of probing to the neutrino floor in the TeV mass regime.
SVM based colon polyps classifier in a wireless active stereo endoscope.
Ayoub, J; Granado, B; Mhanna, Y; Romain, O
2010-01-01
This work focuses on the recognition of three-dimensional colon polyps captured by an active stereo vision sensor. The detection algorithm consists of an SVM classifier trained on robust feature descriptors. The study is related to Cyclope, a prototype sensor that allows real-time 3D object reconstruction and continues to be optimized technically to improve its classification task of differentiating between hyperplastic and adenomatous polyps. Experimental results were encouraging and show a correct classification rate of approximately 97%. The work contains detailed statistics about the detection rate and the computing complexity. Inspired by the intensity histogram, the work shows a new approach that extracts a set of features based on the depth histogram and combines stereo measurement with SVM classifiers to correctly classify benign and malignant polyps.
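The following scikit-learn sketch illustrates the general approach of feeding depth-histogram features to an SVM classifier; the feature extraction, data, and labels are placeholders and do not reproduce the Cyclope pipeline.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def depth_histogram(depth_patch, bins=16):
    """Illustrative feature vector: normalized histogram of depth values in [0, 1)."""
    hist, _ = np.histogram(depth_patch, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

# Placeholder data: one depth patch per polyp candidate, with a benign/malignant label.
rng = np.random.default_rng(0)
X = np.array([depth_histogram(rng.random((32, 32))) for _ in range(200)])
y = rng.integers(0, 2, size=200)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print(cross_val_score(clf, X, y, cv=5).mean())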
A simple and effective figure caption detection system for old-style documents
NASA Astrophysics Data System (ADS)
Liu, Zongyi; Zhou, Hanning
2011-01-01
Identifying figure captions has wide applications in producing high-quality e-books such as Kindle or iPad books. In this paper, we present a rule-based system to detect horizontal figure captions in old-style documents. Our algorithm consists of three steps: (i) segment images into regions of different types, such as text and figures, (ii) search for the best caption region candidate based on heuristic rules such as region alignments and distances, and (iii) expand the caption regions identified in step (ii) with their neighboring text regions in order to correct oversegmentation errors. We test our algorithm using 81 images collected from old-style books, with each image containing at least one figure area. We show that the approach is able to correctly detect figure captions from images with different layouts, and we also measure its performance in terms of both precision and recall.
Thermal imaging as a lie detection tool at airports.
Warmelink, Lara; Vrij, Aldert; Mann, Samantha; Leal, Sharon; Forrester, Dave; Fisher, Ronald P
2011-02-01
We tested the accuracy of thermal imaging as a lie detection tool in airport screening. Fifty-one passengers in an international airport departure hall told the truth or lied about their forthcoming trip in an interview. Their skin temperature was recorded via a thermal imaging camera. Liars' skin temperature rose significantly during the interview, whereas truth tellers' skin temperature remained constant. On the basis of these different patterns, 64% of truth tellers and 69% of liars were classified correctly. The interviewers made veracity judgements independently from the thermal recordings. The interviewers outperformed the thermal recordings and classified 72% of truth tellers and 77% of liars correctly. Accuracy rates based on the combination of thermal imaging scores and interviewers' judgements were the same as accuracy rates based on interviewers' judgements alone. Implications of the findings for the suitability of thermal imaging as a lie detection tool in airports are discussed.
First day of life pulse oximetry screening to detect congenital heart defects.
Meberg, Alf; Brügmann-Pieper, Sabine; Due, Reidar; Eskedal, Leif; Fagerli, Ingebjørg; Farstad, Teresa; Frøisland, Dag Helge; Sannes, Catharina Hovland; Johansen, Ole Jakob; Keljalic, Jasmina; Markestad, Trond; Nygaard, Egil Andre; Røsvik, Alet; Silberg, Inger Elisabeth
2008-06-01
To evaluate the efficacy of first day of life pulse oximetry screening to detect congenital heart defects (CHDs). We performed a population-based prospective multicenter study of postductal (foot) arterial oxygen saturation (SpO(2)) in apparently healthy newborns after transfer from the delivery suite to the nursery. SpO(2) < 95% led to further diagnostic evaluations. Of 57,959 live births, 50,008 (86%) were screened. In the screened population, 35 CHDs were classified as critical (ductus dependent, cyanotic). CHDs were prospectively registered and diagnosed in 658/57,959 (1.1%). Of the infants screened, 324 (0.6%) failed the test. Of these, 43 (13%) had CHDs (27 critical), and 134 (41%) had pulmonary diseases or other disorders. The remaining 147 infants (45%) were healthy with transitional circulation. The median age for babies with CHDs at failing the test was 6 hours (range, 1-21 hours). For identifying critical CHDs, the pulse oximetry screening had a sensitivity rate of 77.1% (95% CI, 59.4-89.0), specificity rate of 99.4% (95% CI, 99.3-99.5), and a false-positive rate of 0.6% (95% CI, 0.5-0.7). Early pulse oximetry screening promotes early detection of critical CHDs and other potentially severe diseases. The sensitivity rate for detecting critical CHDs is high, and the false-positive rate is low.
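The reported screening statistics follow directly from the counts in the abstract (27 critical CHDs among the 324 positive screens, 35 critical CHDs in the 50,008 screened); the small check below reproduces the 77.1% sensitivity, 0.6% false-positive rate, and 99.4% specificity.

# Sanity check of the reported screening statistics from the counts in the abstract.
screened = 50008
critical = 35
critical_detected = 27          # critical CHDs among the 324 positive screens
positives = 324

sensitivity = critical_detected / critical                            # 0.771 -> 77.1%
false_pos = (positives - critical_detected) / (screened - critical)   # ~0.006 -> 0.6%
specificity = 1 - false_pos                                           # ~0.994 -> 99.4%
print(f"{sensitivity:.1%} {false_pos:.1%} {specificity:.1%}")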
The Occurrence and the Success Rate of Self-Initiated Self-Repair
ERIC Educational Resources Information Center
Sato, Rintaro; Takatsuka, Shigenobu
2016-01-01
Errors naturally appear in spontaneous speeches and conversations. Particularly in a second or foreign language, it is only natural that mistakes happen as a part of the learning process. After an inappropriate expression is detected, it can be corrected. This act of correcting can be initiated either by the speaker (non-native speaker) or the…
Method and apparatus for diagnosing breached fuel elements
Gross, K.C.; Lambert, J.D.B.; Nomura, S.
1987-03-02
The invention provides an apparatus and method for diagnosing breached fuel elements in a nuclear reactor. A detection system measures the activity of isotopes from the cover gas in the reactor. A data acquisition and processing system monitors the detection system and corrects for the effects of the cover-gas clean up system on the measured activity and further calculates the derivative curve of the corrected activity as a function of time. A plotting system graphs the derivative curve, which represents the instantaneous release rate of fission gas from a breached fuel element. 8 figs.
Method and system for turbomachinery surge detection
Faymon, David K.; Mays, Darrell C.; Xiong, Yufei
2004-11-23
A method and system for surge detection within a gas turbine engine comprises: measuring the compressor discharge pressure (CDP) of the gas turbine over a period of time; determining a time derivative (CDP_D) of the measured CDP; correcting the CDP_D for altitude (CDP_DCOR); estimating a short-term average of CDP_DCOR^2; estimating a short-term average of CDP_DCOR; and determining a short-term variance of the corrected CDP rate of change (CDP_roc) based upon the short-term average of CDP_DCOR and the short-term average of CDP_DCOR^2. The method and system then compare the short-term variance of the corrected CDP rate of change with a pre-determined threshold (CDP_proc) and signal an output when CDP_roc > CDP_proc. The method and system provide a signal of a surge within the gas turbine engine when CDP_roc remains > CDP_proc for a pre-determined period of time.
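A hedged Python sketch of the detection logic described in the patent abstract follows: compute the time derivative of CDP, apply an altitude correction, track short-term averages of the corrected derivative and its square, form the short-term variance, and flag a surge when the variance exceeds a threshold for a sustained period. The smoothing constant, altitude correction, and threshold values are placeholders.

import numpy as np

def detect_surge(cdp, dt, altitude_factor, alpha=0.1,
                 threshold=1.0e4, persist_samples=5):
    """Hedged sketch of the surge-detection logic described above (placeholder constants)."""
    cdp_d = np.gradient(cdp, dt)           # time derivative of compressor discharge pressure
    cdp_dcor = cdp_d * altitude_factor     # altitude-corrected derivative (placeholder correction)

    mean_x, mean_x2, over = 0.0, 0.0, 0
    for x in cdp_dcor:
        mean_x = (1 - alpha) * mean_x + alpha * x        # short-term average of CDP_DCOR
        mean_x2 = (1 - alpha) * mean_x2 + alpha * x * x  # short-term average of CDP_DCOR^2
        variance = mean_x2 - mean_x ** 2                 # short-term variance of corrected CDP rate of change
        over = over + 1 if variance > threshold else 0
        if over >= persist_samples:                      # sustained exceedance of the threshold
            return True
    return False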
Communication of ALS Patients by Detecting Event-Related Potential
NASA Astrophysics Data System (ADS)
Kanou, Naoyuki; Sakuma, Kenji; Nakashima, Kenji
Amyotrophic Lateral Sclerosis (ALS) patients are unable to successfully communicate their desires, although their mental capacity is the same as that of non-affected persons. Therefore, the authors put emphasis on the Event-Related Potential (ERP), which elicits the highest response for target visual and auditory stimuli. P300 is one component of the ERP: a positive potential that is elicited when the subject focuses attention on stimuli that appear infrequently. In this paper, the authors focused on the P200 and N200 components, in addition to P300, because of their considerable improvement of the rate of correct judgment in the target-word-specification experiment. Hence, the authors propose an algorithm that specifies target words by detecting these three components. Ten healthy subjects and an ALS patient underwent an experiment in which a target word out of five words was specified by this algorithm. The rates of correct judgment in nine of the ten healthy subjects were more than 90.0%; the highest rate was 99.7%. The highest rate for the ALS patient was 100.0%. From these results, the authors found that ALS patients may be able to communicate their desires to surrounding persons through detection of ERPs (P200, N200 and P300).
Implications of PSR J0737-3039B for the Galactic NS-NS binary merger rate
NASA Astrophysics Data System (ADS)
Kim, Chunglee; Perera, Benetge Bhakthi Pranama; McLaughlin, Maura A.
2015-03-01
The Double Pulsar (PSR J0737-3039) is the only neutron star-neutron star (NS-NS) binary in which both NSs have been detectable as radio pulsars. The Double Pulsar has been assumed to dominate the Galactic NS-NS binary merger rate R_g among all known systems, based solely on the properties of the first-born, recycled pulsar (PSR J0737-3039A, or A) with an assumed beaming correction factor of 6. In this work, we carefully correct observational biases for the second-born, non-recycled pulsar (PSR J0737-3039B, or B) and estimate the contribution from the Double Pulsar to R_g using constraints available from both A and B. Observational constraints from the B pulsar favour a small beaming correction factor for A (˜2), which is consistent with a bipolar model. Considering known NS-NS binaries with the best observational constraints, including both A and B, we obtain R_g = 21^{+28}_{-14} Myr⁻¹ at 95 per cent confidence from our reference model. We expect the detection rate of gravitational waves from NS-NS inspirals for the advanced ground-based gravitational-wave detectors to be 8^{+10}_{-5} yr⁻¹ at 95 per cent confidence. Within several years, gravitational-wave detections relevant to NS-NS inspirals will provide useful information to improve pulsar population models.
Laser Scanner For Automatic Inspection Of Printed Wiring Boards
NASA Astrophysics Data System (ADS)
Geise, Philip; George, Eugene; Freese, Fritz; Brown, Robert; Ruwe, Victor
1980-11-01
An instrument is described which inspects unpopulated, populated (components inserted and leads clinched), and soldered printed wiring boards for correct hole location, component presence, correct lead clinch direction, and solder bridges. The instrument consists of a low-power helium-neon laser, an x-y moving-iron galvanometer scanner, and several folding mirrors. A unique shadow signature is detected by silicon photodiodes located at the optimum geometry to allow rapid and reliable detection of components with correctly clinched leads. A reflective glint screen is utilized to inspect for solder bridges. The detected signals are processed and evaluated by a minicomputer which also controls the scan, at an inspection rate of at least 25 components or 50 component holes per second. The return on investment for this instrument for high-volume production of printed wiring boards is less than one year, and only slightly longer for medium-run military applications.
Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiala, David J; Mueller, Frank; Engelmann, Christian
Faults have become the norm rather than the exception for high-end computing on clusters with 10s/100s of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the best suited protocols for detecting and correcting MPI data that is the result of corruption. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library which resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications without requiring any modifications to the application source, by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption with runtime overheads between 0% and 30% as compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, causing a cascading pattern of corruption that in most cases spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
NASA Astrophysics Data System (ADS)
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
Powerful Inference with the D-Statistic on Low-Coverage Whole-Genome Data
Soraggi, Samuele; Wiuf, Carsten; Albrechtsen, Anders
2017-01-01
The detection of ancient gene flow between human populations is an important issue in population genetics. A common tool for detecting ancient admixture events is the D-statistic. The D-statistic is based on the hypothesis of a genetic relationship that involves four populations, whose correctness is assessed by evaluating specific coincidences of alleles between the groups. When working with high-throughput sequencing data, calling genotypes accurately is not always possible; therefore, the D-statistic currently samples a single base from the reads of one individual per population. This implies ignoring much of the information in the data, an issue especially striking in the case of ancient genomes. We provide a significant improvement to overcome the problems of the D-statistic by considering all reads from multiple individuals in each population. We also apply type-specific error correction to combat the problems of sequencing errors, and show a way to correct for introgression from an external population that is not part of the supposed genetic relationship, and how this leads to an estimate of the admixture rate. We prove that the D-statistic is approximated by a standard normal distribution. Furthermore, we show that our method outperforms the traditional D-statistic in detecting admixtures. The power gain is most pronounced for low and medium sequencing depth (1–10×), and performances are as good as with perfectly called genotypes at a sequencing depth of 2×. We show the reliability of error correction in scenarios with simulated errors and ancient data, and correct for introgression in known scenarios to estimate the admixture rates. PMID:29196497
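For reference, the classical single-sample D-statistic that this method generalizes is the normalized difference of ABBA and BABA counts; a minimal sketch (with alleles coded 0/1 and the outgroup taken as ancestral) is shown below.

# Hedged sketch of the classical ABBA-BABA D-statistic that the extended method generalizes.
# `sites` is an iterable of sampled alleles (one per population) at each site:
# tuples (h1, h2, h3, outgroup), each allele coded as 0 (ancestral) or 1 (derived).
def d_statistic(sites):
    abba = baba = 0
    for h1, h2, h3, out in sites:
        if out != 0:          # polarize so the outgroup carries the ancestral allele
            h1, h2, h3, out = 1 - h1, 1 - h2, 1 - h3, 0
        if h1 == 0 and h2 == 1 and h3 == 1:
            abba += 1         # ABBA pattern: derived allele shared by H2 and H3
        elif h1 == 1 and h2 == 0 and h3 == 1:
            baba += 1         # BABA pattern: derived allele shared by H1 and H3
    return (abba - baba) / (abba + baba)

print(d_statistic([(0, 1, 1, 0), (1, 0, 1, 0), (0, 1, 1, 0)]))  # 1/3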
Fault-tolerant quantum error detection.
Linke, Norbert M; Gutierrez, Mauricio; Landsman, Kevin A; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R; Monroe, Christopher
2017-10-01
Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors.
Counting-loss correction for X-ray spectroscopy using unit impulse pulse shaping.
Hong, Xu; Zhou, Jianbin; Ni, Shijun; Ma, Yingjie; Yao, Jianfeng; Zhou, Wei; Liu, Yi; Wang, Min
2018-03-01
High-precision measurement of X-ray spectra is affected by the statistical fluctuation of the X-ray beam under low-counting-rate conditions. It is also limited by counting loss resulting from the dead-time of the system and pulse pile-up effects, especially in a high-counting-rate environment. In this paper a detection system based on a FAST-SDD detector and a new kind of unit impulse pulse-shaping method is presented, for counting-loss correction in X-ray spectroscopy. The unit impulse pulse shaping is obtained by inverse deviation of the pulse from a reset-type preamplifier and a C-R shaper. It is applied to obtain the true incoming rate of the system based on a general fast-slow channel processing model. The pulses in the fast channel are shaped to a unit impulse pulse shape, which has a small width and no undershoot. The counting rate in the fast channel is corrected by evaluating the dead-time of the fast channel before it is used to correct the counting loss in the slow channel.
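A minimal sketch of the general fast-slow correction idea: a dead-time-corrected fast-channel rate supplies the true incoming rate, and the slow-channel counts are scaled by the same loss factor. The non-paralyzable formula and the numbers below are illustrative assumptions, not the paper's exact expressions.

# Hedged sketch of a fast/slow-channel counting-loss correction (illustrative formulas).
def corrected_slow_rate(fast_measured, slow_measured, fast_dead_time):
    # Non-paralyzable correction of the fast channel gives the true incoming rate.
    fast_true = fast_measured / (1.0 - fast_measured * fast_dead_time)
    # Scale the slow-channel counting rate by the same loss factor.
    return slow_measured * (fast_true / fast_measured)

print(corrected_slow_rate(fast_measured=8.0e4, slow_measured=6.0e4,
                          fast_dead_time=2.0e-6))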
An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction
Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo
2018-01-01
The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods. PMID:29342857
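A compact sketch of an LMS-style nonuniformity correction with a gated learning rate of the general kind described: the per-pixel gain/offset update is suppressed at strong spatial edges and wherever the scene is static between frames, so that scene detail is not burned into the correction coefficients. The neighbourhood estimate, thresholds, and gating rule are illustrative assumptions rather than the authors' exact algorithm.

import numpy as np

def nuc_step(frame, gain, offset, prev_corrected, eta=0.05,
             edge_thresh=0.1, motion_thresh=0.05):
    """One LMS-style nonuniformity-correction update with an edge/temporal gate (sketch)."""
    corrected = gain * frame + offset
    # Estimated desired image: 4-neighbour spatial average of the corrected frame.
    desired = (np.roll(corrected, 1, 0) + np.roll(corrected, -1, 0) +
               np.roll(corrected, 1, 1) + np.roll(corrected, -1, 1)) / 4.0
    error = corrected - desired

    # Gate the learning rate: no update at strong spatial edges or where the
    # scene is static between frames, which suppresses ghosting artifacts.
    gy, gx = np.gradient(corrected)
    edges = np.hypot(gx, gy) > edge_thresh
    static = np.abs(corrected - prev_corrected) < motion_thresh
    rate = np.where(edges | static, 0.0, eta)

    gain = gain - rate * error * frame     # LMS update of per-pixel gain
    offset = offset - rate * error         # LMS update of per-pixel offset
    return gain, offset, corrected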
An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction.
Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo
2018-01-13
The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods.
Eliminating ambiguity in digital signals
NASA Technical Reports Server (NTRS)
Weber, W. J., III
1979-01-01
In a multiamplitude minimum shift keying (MAMSK) transmission system, a method of differential encoding overcomes the problem of ambiguity associated with advanced digital-transmission techniques with little or no penalty in transmission rate, error rate, or system complexity. The principle of the method is that, if signal points are properly encoded and decoded, bits are detected correctly regardless of phase ambiguities.
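A minimal binary analogue of the differential-encoding idea: because data are carried in transitions rather than in absolute symbol values, a wholesale inversion of the detected stream (the analogue of a phase ambiguity) leaves the decoded bits unchanged. The sketch below is illustrative and is not the MAMSK scheme itself.

# Hedged sketch: differential encoding makes decoded bits immune to inversion
# of the detected stream (a simple binary analogue of the phase-ambiguity problem).
def diff_encode(bits, start=0):
    out, prev = [], start
    for b in bits:
        prev ^= b          # transmit transitions, not absolute values
        out.append(prev)
    return out

def diff_decode(symbols, start=0):
    out, prev = [], start
    for s in symbols:
        out.append(s ^ prev)
        prev = s
    return out

bits = [1, 0, 1, 1, 0]
tx = diff_encode(bits)
inverted = [1 - s for s in tx]               # ambiguity: every detected symbol flipped
print(diff_decode(tx), diff_decode(inverted, start=1))   # both recover [1, 0, 1, 1, 0]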
A Dual-Process Account of Auditory Change Detection
ERIC Educational Resources Information Center
McAnally, Ken I.; Martin, Russell L.; Eramudugolla, Ranmalee; Stuart, Geoffrey W.; Irvine, Dexter R. F.; Mattingley, Jason B.
2010-01-01
Listeners can be "deaf" to a substantial change in a scene comprising multiple auditory objects unless their attention has been directed to the changed object. It is unclear whether auditory change detection relies on identification of the objects in pre- and post-change scenes. We compared the rates at which listeners correctly identify changed…
Correcting Estimates of the Occurrence Rate of Earth-like Exoplanets for Stellar Multiplicity
NASA Astrophysics Data System (ADS)
Cantor, Elliot; Dressing, Courtney D.; Ciardi, David R.; Christiansen, Jessie
2018-06-01
One of the most prominent questions in the exoplanet field has been determining the true occurrence rate of potentially habitable Earth-like planets. NASA’s Kepler mission has been instrumental in answering this question by searching for transiting exoplanets, but follow-up observations of Kepler target stars are needed to determine whether or not the surveyed Kepler targets are in multi-star systems. While many researchers have searched for companions to Kepler planet host stars, few studies have investigated the larger target sample. Regardless of physical association, the presence of nearby stellar companions biases our measurements of a system’s planetary parameters and reduces our sensitivity to small planets. Assuming that all Kepler target stars are single (as is done in many occurrence rate calculations) would overestimate our search completeness and result in an underestimate of the frequency of potentially habitable Earth-like planets. We aim to correct for this bias by characterizing the set of targets for which Kepler could have detected Earth-like planets. We are using adaptive optics (AO) imaging to reveal potential stellar companions and near-infrared spectroscopy to refine stellar parameters for a subset of the Kepler targets that are most amenable to the detection of Earth-like planets. We will then derive correction factors to correct for the biases in the larger set of target stars and determine the true frequency of systems with Earth-like planets. Due to the prevalence of stellar multiples, we expect to calculate an occurrence rate for Earth-like exoplanets that is higher than current figures.
Fault-tolerant quantum error detection
Linke, Norbert M.; Gutierrez, Mauricio; Landsman, Kevin A.; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R.; Monroe, Christopher
2017-01-01
Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors. PMID:29062889
NASA Astrophysics Data System (ADS)
Zhang, Guoguang; Yu, Zitian; Wang, Junmin
2017-03-01
Yaw rate is a crucial signal for the motion control systems of ground vehicles, yet it may be contaminated by sensor bias. In order to correct the contaminated yaw rate signal and estimate the sensor bias, a robust gain-scheduling observer is proposed in this paper. First, a two-degree-of-freedom (2DOF) vehicle lateral and yaw dynamics model is presented, and then a Luenberger-like observer is proposed. To make the observer more applicable to real vehicle driving operations, a 2DOF vehicle model with uncertainties on the coefficients of tire cornering stiffness is employed. Further, a gain-scheduling approach and a robustness enhancement are introduced, leading to a robust gain-scheduling observer. A sensor bias detection mechanism is also designed. Case studies are conducted using an electric ground vehicle to assess the performance of signal correction and sensor bias estimation under different scenarios.
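The sketch below shows a Luenberger-like observer on a 2DOF lateral/yaw model with the yaw-rate sensor bias appended as a constant extra state, which is the basic structure described above; the vehicle parameters, observer poles, and Euler discretization are illustrative assumptions, and the paper's gain scheduling and robustness enhancement are not reproduced here.

import numpy as np
from scipy.signal import place_poles

# Illustrative 2DOF lateral/yaw model parameters (mass, yaw inertia, axle distances,
# cornering stiffnesses, longitudinal speed); all values are placeholders.
m, Iz, lf, lr, Cf, Cr, vx = 1500.0, 2500.0, 1.2, 1.4, 8.0e4, 9.0e4, 20.0

A = np.array([[-(Cf + Cr) / (m * vx), -vx - (Cf * lf - Cr * lr) / (m * vx)],
              [-(Cf * lf - Cr * lr) / (Iz * vx), -(Cf * lf**2 + Cr * lr**2) / (Iz * vx)]])
B = np.array([[Cf / m], [Cf * lf / Iz]])

# Augmented state: [lateral velocity, yaw rate, sensor bias]; the bias is modeled as constant.
Aa = np.block([[A, np.zeros((2, 1))], [np.zeros((1, 3))]])
Ba = np.vstack([B, [[0.0]]])
Ca = np.array([[0.0, 1.0, 1.0]])   # measured yaw rate = true yaw rate + bias

# Observer gain by pole placement on the dual system (poles are placeholders).
L = place_poles(Aa.T, Ca.T, [-3.0, -4.0, -5.0]).gain_matrix.T

def observer_step(z_hat, delta, y_meas, dt=0.01):
    """One Euler step of the bias-estimating observer; z_hat = [v_y, r, bias]."""
    z_hat = np.asarray(z_hat, dtype=float)
    innovation = y_meas - float(Ca @ z_hat)
    dz = Aa @ z_hat + Ba[:, 0] * delta + L[:, 0] * innovation
    return z_hat + dt * dz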
Powerful Inference with the D-Statistic on Low-Coverage Whole-Genome Data.
Soraggi, Samuele; Wiuf, Carsten; Albrechtsen, Anders
2018-02-02
The detection of ancient gene flow between human populations is an important issue in population genetics. A common tool for detecting ancient admixture events is the D-statistic. The D-statistic is based on the hypothesis of a genetic relationship that involves four populations, whose correctness is assessed by evaluating specific coincidences of alleles between the groups. When working with high-throughput sequencing data, calling genotypes accurately is not always possible; therefore, the D-statistic currently samples a single base from the reads of one individual per population. This implies ignoring much of the information in the data, an issue especially striking in the case of ancient genomes. We provide a significant improvement to overcome the problems of the D-statistic by considering all reads from multiple individuals in each population. We also apply type-specific error correction to combat the problems of sequencing errors, and show a way to correct for introgression from an external population that is not part of the supposed genetic relationship, and how this leads to an estimate of the admixture rate. We prove that the D-statistic is approximated by a standard normal distribution. Furthermore, we show that our method outperforms the traditional D-statistic in detecting admixtures. The power gain is most pronounced for low and medium sequencing depth (1-10×), and performances are as good as with perfectly called genotypes at a sequencing depth of 2×. We show the reliability of error correction in scenarios with simulated errors and ancient data, and correct for introgression in known scenarios to estimate the admixture rates. Copyright © 2018 Soraggi et al.
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C₁ is an (n₁, m₁l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C₂ is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with degree m₁. A procedure for computing the probability of correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10⁻¹ to 10⁻².
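One concrete ingredient of such an analysis is the probability that a bounded-distance inner decoder of a t-error-correcting, n₁-bit code decodes correctly on a memoryless channel with bit-error rate ε; the binomial tail below is a standard expression offered for illustration, not the paper's full derivation.

from math import comb

def p_correct_inner(n1, t, eps):
    """P(at most t bit errors in an n1-bit inner codeword) on a BSC with bit-error rate eps."""
    return sum(comb(n1, i) * eps**i * (1 - eps)**(n1 - i) for i in range(t + 1))

print(p_correct_inner(n1=63, t=2, eps=1e-2))  # ~0.97 for an illustrative inner code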
Bidra, Avinash S; Nguyen, Viensuong; Manzotti, Anna; Kuo, Chia-Ling
2018-01-01
To study the subjective differences in direct lip support assessments and to determine if dentists and laypeople are able to discern and correctly identify direct changes in lip support between flange and flangeless dentures. A random sample of 20 maxillary edentulous patients described in part 2 of the study was used for analysis. A total of 60 judges comprising 15 general dentists, 15 prosthodontists, and 30 laypeople, the majority of whom were distinct from part 2 of the study, were recruited. All images used in this study were cropped at the infraorbital level and converted to black and white tone, to encourage the judges to focus on lip support. The judges were unblinded to the study objectives and told what to look for, and were asked to rate the lip support of each of the 80 images on a 100 mm visual analog scale (VAS). The judges then took a discriminatory sensory analysis test (triangle test) where they were required to correctly identify the image with a flangeless denture out of a set of 3 images. Both the VAS and triangle test ratings were conducted twice in a random order, and mean ratings were used for all analyses. The overall VAS ratings of lip support for images with flangeless dentures were slightly lower compared to images with labial flanges, and this difference was statistically significant (p < 0.0001). This was true for both profile and frontal images. However, the magnitude of these differences was too small (no greater than 5 mm on a 100-mm scale) to be clinically significant or meaningful. The differences in VAS ratings were not significant between the judges. For the triangle test, judges overall correctly identified the flangeless denture image in 55% of frontal image sets and 60% of profile image sets. The difference in correct identification rate between frontal and profile images was statistically significant (p < 0.0001). For frontal and profile images, prosthodontists had the highest correct identification rate (61% and 69%), followed by general dentists (53% and 68%) and by laypeople (53% and 50%). The difference in correct identification rate was statistically significant between various judges (p = 0.012). For all judges, the likelihood of correctly identifying images with flangeless dentures was significantly greater than 1/3, which was the minimum chance for correct identification (p < 0.0001). Removal of a labial flange in a maxillary denture resulted in slightly lower ratings of lip support compared to images with a labial flange, but the differences were clinically insignificant. When judges were forced to look for differences, flangeless dentures were detected more often in profile images. Prosthodontists detected the flangeless dentures more often than general dentists and laypeople. © 2017 by the American College of Prosthodontists.
Modeling Data Containing Outliers using ARIMA Additive Outlier (ARIMA-AO)
NASA Astrophysics Data System (ADS)
Saleh Ahmar, Ansari; Guritno, Suryo; Abdurakhman; Rahman, Abdul; Awi; Alimuddin; Minggi, Ilham; Arif Tiro, M.; Kasim Aidid, M.; Annas, Suwardi; Utami Sutiksno, Dian; Ahmar, Dewi S.; Ahmar, Kurniawan H.; Abqary Ahmar, A.; Zaki, Ahmad; Abdullah, Dahlan; Rahim, Robbi; Nurdiyanto, Heri; Hidayat, Rahmat; Napitupulu, Darmawan; Simarmata, Janner; Kurniasih, Nuning; Andretti Abdillah, Leon; Pranolo, Andri; Haviluddin; Albra, Wahyudin; Arifin, A. Nurani M.
2018-01-01
The aim of this study is to discuss the detection and correction of data containing additive outliers (AO) in the ARIMA (p, d, q) model. Detection and correction are carried out using the iterative procedure popularized by Box, Jenkins, and Reinsel (1994). Using this method, we obtain an ARIMA model fit to the data containing AO; the coefficients obtained from the iteration process by regression are added to the original ARIMA model. On the simulated data, the initial model for the data containing AO is ARIMA (2,0,0) with MSE = 36,780; after detection and correction of the data, the iteration yields an ARIMA (2,0,0) model with regression coefficients Z_t = 0.106 + 0.204Z_(t-1) + 0.401Z_(t-2) - 329X_1(t) + 115X_2(t) + 35.9X_3(t) and MSE = 19,365. This shows an improvement in the forecasting error of the data.
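A minimal sketch of one detect-and-correct loop for additive outliers, using statsmodels for the ARIMA fitting. The outlier test statistic (a simple standardized-residual threshold) and the stopping rule below are simplified assumptions; the paper follows the full Box-Jenkins-Reinsel iterative procedure.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def detect_and_correct_ao(y, order=(2, 0, 0), threshold=3.0, max_iter=10):
    """Iteratively flag additive outliers (AO) by their standardized ARIMA
    residuals and absorb each one with a point dummy regressor. A simplified
    stand-in for the iterative procedure described in the abstract."""
    y = np.asarray(y, dtype=float)
    dummies = []                                   # one 0/1 column per detected AO
    for _ in range(max_iter):
        exog = np.column_stack(dummies) if dummies else None
        fit = ARIMA(y, order=order, exog=exog).fit()
        resid = np.asarray(fit.resid)
        z = resid / resid.std(ddof=1)
        t_star = int(np.argmax(np.abs(z)))
        if abs(z[t_star]) < threshold:
            break                                  # no further outliers
        dummy = np.zeros_like(y)
        dummy[t_star] = 1.0                        # AO indicator X_j(t)
        dummies.append(dummy)
    return fit, [int(np.argmax(d)) for d in dummies]

# Example: synthetic AR(2) series with one injected additive outlier
rng = np.random.default_rng(0)
n = 200
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.2 * y[t - 1] + 0.4 * y[t - 2] + rng.normal()
y[120] += 8.0                                      # additive outlier
fit, outliers = detect_and_correct_ao(y)
print("detected AO positions:", outliers)
```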
Estimating animal mortality from anthropogenic hazards
Carcass searches are a common method for studying the risk of anthropogenic hazards to wildlife, including non-target poisoning and collisions with anthropogenic structures. Typically, numbers of carcasses found must be corrected for scavenging rates and imperfect detection. Para...
Breast MR segmentation and lesion detection with cellular neural networks and 3D template matching.
Ertaş, Gökhan; Gülçür, H Ozcan; Osman, Onur; Uçan, Osman N; Tunaci, Mehtap; Dursun, Memduh
2008-01-01
A novel fully automated system is introduced to facilitate lesion detection in dynamic contrast-enhanced, magnetic resonance mammography (DCE-MRM). The system extracts breast regions from pre-contrast images using a cellular neural network, generates normalized maximum intensity-time ratio (nMITR) maps and performs 3D template matching with three layers of 12 × 12 cells to detect lesions. A breast is considered to be properly segmented when relative overlap >0.85 and misclassification rate <0.10. Sensitivity and false-positive rate per slice and per lesion are used to assess detection performance. The system was tested with a dataset of 2064 breast MR images (344 slices × 6 acquisitions over time) from 19 women containing 39 marked lesions. Ninety-seven percent of the breasts were segmented properly and all the lesions were detected correctly (detection sensitivity = 100%); however, there were some false-positive detections (31%/lesion, 10%/slice).
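The nMITR map summarizes the dynamic series voxel by voxel. The exact normalization used by the authors is not reproduced here; as a rough stand-in, the sketch below computes a plain relative-enhancement map (maximum post-contrast intensity minus the pre-contrast intensity, divided by the pre-contrast intensity), which conveys the same idea of a single enhancement map per voxel.

```python
import numpy as np

def relative_enhancement_map(series, eps=1e-6):
    """series: 4-D array (time, z, y, x) with series[0] the pre-contrast volume.
    Returns a 3-D map of (max over time - pre-contrast) / pre-contrast,
    a simple proxy for the nMITR map described in the abstract."""
    pre = series[0].astype(float)
    peak = series[1:].max(axis=0).astype(float)
    return (peak - pre) / (pre + eps)

# Toy example on a 6-acquisition series with a simulated enhancing lesion
rng = np.random.default_rng(1)
series = rng.uniform(50, 100, size=(6, 8, 64, 64))
series[1:, 2:4, 20:30, 20:30] += 120.0
m = relative_enhancement_map(series)
print(m.shape, float(m.max()))
```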
An improved PCA method with application to boiler leak detection.
Sun, Xi; Marquez, Horacio J; Chen, Tongwen; Riaz, Muhammad
2005-07-01
Principal component analysis (PCA) is a popular fault detection technique. It has been widely used in process industries, especially in the chemical industry. In industrial applications, achieving a sensitive system capable of detecting incipient faults while keeping the false alarm rate to a minimum is a crucial issue. Although much research has focused on these issues for PCA-based fault detection and diagnosis methods, the sensitivity of the fault detection scheme versus the false alarm rate continues to be an important concern. In this paper, an improved PCA method is proposed to address this problem. In this method, a new data preprocessing scheme and a new fault detection scheme designed for Hotelling's T2 as well as the squared prediction error are developed. A dynamic PCA model is also developed for boiler leak detection. This new method is applied to boiler water/steam leak detection with real data from Syncrude Canada's utility plant in Fort McMurray, Canada. Our results demonstrate that the proposed method can effectively reduce the false alarm rate, provide effective and correct leak alarms, and give early warning to operators.
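For context, the two monitoring statistics the method builds on, Hotelling's T² and the squared prediction error (SPE, or Q statistic), can be sketched with conventional PCA. The code below is only that baseline, not the paper's improved preprocessing or dynamic PCA model; variable names and the toy fault are illustrative.

```python
import numpy as np

def fit_pca_monitor(X_train, n_components):
    """Fit a conventional PCA monitoring model (mean/std scaling, loadings,
    and latent variances) from normal-operation data X_train (samples x vars)."""
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0, ddof=1)
    Z = (X_train - mu) / sigma
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                        # loadings
    lam = (s[:n_components] ** 2) / (len(Z) - 1)   # variances of the scores
    return mu, sigma, P, lam

def t2_and_spe(x, model):
    """Hotelling's T^2 and squared prediction error for one new sample."""
    mu, sigma, P, lam = model
    z = (x - mu) / sigma
    t = P.T @ z                                    # scores
    t2 = float(np.sum(t**2 / lam))
    residual = z - P @ t
    spe = float(residual @ residual)
    return t2, spe

# Toy example: train on normal data, then score a sample with a sensor bias
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 10))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]            # correlated variables
model = fit_pca_monitor(X, n_components=3)
fault = X[0].copy()
fault[5] += 6.0                                    # simulated sensor fault
print(t2_and_spe(X[0], model), t2_and_spe(fault, model))
```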
Malik, Marek; Hnatkova, Katerina; Batchvarov, Velislav; Gang, Yi; Smetana, Peter; Camm, A John
2004-12-01
Regulatory authorities require new drugs to be investigated using a so-called "thorough QT/QTc study" to identify compounds with a potential of influencing cardiac repolarization in man. Presently drafted regulatory consensus requires these studies to be powered for the statistical detection of QTc interval changes as small as 5 ms. Since this translates into a noticeable drug development burden, strategies need to be identified allowing the size and thus the cost of thorough QT/QTc studies to be minimized. This study investigated the influence of QT and RR interval data quality and the precision of heart rate correction on the sample sizes of thorough QT/QTc studies. In 57 healthy subjects (26 women, age range 19-42 years), a total of 4,195 drug-free digital electrocardiograms (ECG) were obtained (65-84 ECGs per subject). All ECG parameters were measured manually using the most accurate approach with reconciliation of measurement differences between different cardiologists and aligning the measurements of corresponding ECG patterns. From the data derived in this measurement process, seven different levels of QT/RR data quality were obtained, ranging from the simplest approach of measuring 3 beats in one ECG lead to the most exact approach. Each of these QT/RR data-sets was processed with eight different heart rate corrections ranging from Bazett and Fridericia corrections to the individual QT/RR regression modelling with optimization of QT/RR curvature. For each combination of data quality and heart rate correction, standard deviation of individual mean QTc values and mean of individual standard deviations of QTc values were calculated and used to derive the size of thorough QT/QTc studies with an 80% power to detect 5 ms QTc changes at the significance level of 0.05. Irrespective of data quality and heart rate corrections, the necessary sample sizes of studies based on between-subject comparisons (e.g., parallel studies) are very substantial requiring >140 subjects per group. However, the required study size may be substantially reduced in investigations based on within-subject comparisons (e.g., crossover studies or studies of several parallel groups each crossing over an active treatment with placebo). While simple measurement approaches with ad-hoc heart rate correction still lead to requirements of >150 subjects, the combination of best data quality with most accurate individualized heart rate correction decreases the variability of QTc measurements in each individual very substantially. In the data of this study, the average of standard deviations of QTc values calculated separately in each individual was only 5.2 ms. Such a variability in QTc data translates to only 18 subjects per study group (e.g., the size of a complete one-group crossover study) to detect 5 ms QTc change with an 80% power. Cost calculations show that by involving the most stringent ECG handling and measurement, the cost of a thorough QT/QTc study may be reduced to approximately 25%-30% of the cost imposed by the simple ECG reading (e.g., three complexes in one lead only).
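The sample-size arithmetic behind these figures can be sketched with the usual normal-approximation formula for detecting a mean difference Δ with power 1-β at two-sided level α. With the within-subject SD of 5.2 ms reported in the abstract, the crossover calculation lands near the 18 subjects quoted; the 15 ms between-subject SD used in the parallel-group call is a hypothetical value, and the whole sketch reflects standard assumptions rather than the authors' exact computation.

```python
from scipy.stats import norm

def n_parallel(sd_between, delta=5.0, alpha=0.05, power=0.80):
    """Subjects per group for a two-group (between-subject) comparison of means."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd_between / delta) ** 2

def n_crossover(sd_within, delta=5.0, alpha=0.05, power=0.80):
    """Subjects for a paired (within-subject) comparison; the SD of the
    treatment-placebo difference is taken as sqrt(2) * sd_within."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd_within / delta) ** 2

print(round(n_crossover(5.2)))   # ~17-18 subjects, close to the abstract's figure
print(round(n_parallel(15.0)))   # >140 per group for a hypothetical 15 ms between-subject SD
```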
Hidden Markov models for estimating animal mortality from anthropogenic hazards
Carcass searches are a common method for studying the risk of anthropogenic hazards to wildlife, including non-target poisoning and collisions with anthropogenic structures. Typically, numbers of carcasses found must be corrected for scavenging rates and imperfect detection. ...
Du, Pan; Kibbe, Warren A; Lin, Simon M
2006-09-01
A major problem for current peak detection algorithms is that noise in mass spectrometry (MS) spectra gives rise to a high rate of false positives. The false positive rate is especially problematic in detecting peaks with low amplitudes. Usually, various baseline correction algorithms and smoothing methods are applied before attempting peak detection. This approach is very sensitive to the amount of smoothing and aggressiveness of the baseline correction, which contribute to making peak detection results inconsistent between runs, instrumentation and analysis methods. Most peak detection algorithms simply identify peaks based on amplitude, ignoring the additional information present in the shape of the peaks in a spectrum. In our experience, 'true' peaks have characteristic shapes, and providing a shape-matching function that provides a 'goodness of fit' coefficient should provide a more robust peak identification method. Based on these observations, a continuous wavelet transform (CWT)-based peak detection algorithm has been devised that identifies peaks with different scales and amplitudes. By transforming the spectrum into wavelet space, the pattern-matching problem is simplified and in addition provides a powerful technique for identifying and separating the signal from the spike noise and colored noise. This transformation, with the additional information provided by the 2D CWT coefficients can greatly enhance the effective signal-to-noise ratio. Furthermore, with this technique no baseline removal or peak smoothing preprocessing steps are required before peak detection, and this improves the robustness of peak detection under a variety of conditions. The algorithm was evaluated with SELDI-TOF spectra with known polypeptide positions. Comparisons with two other popular algorithms were performed. The results show the CWT-based algorithm can identify both strong and weak peaks while keeping false positive rate low. The algorithm is implemented in R and will be included as an open source module in the Bioconductor project.
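SciPy ships a peak finder in the same spirit: find_peaks_cwt matches the signal against ricker wavelets over a range of widths and keeps ridge lines that persist across scales. The sketch below applies it to a noisy synthetic spectrum with a baseline drift; it illustrates the idea only and is not the authors' R/Bioconductor implementation.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# Synthetic "spectrum": a strong and a weak Gaussian peak on a sloping baseline
rng = np.random.default_rng(3)
x = np.arange(2000)
spectrum = (100 * np.exp(-0.5 * ((x - 500) / 8) ** 2)      # strong peak
            + 15 * np.exp(-0.5 * ((x - 1400) / 12) ** 2)   # weak peak
            + 0.01 * x                                      # baseline drift
            + rng.normal(scale=2.0, size=x.size))           # noise

# CWT-based detection over a range of scales; note that no explicit baseline
# removal or smoothing step is applied beforehand.
peaks = find_peaks_cwt(spectrum, widths=np.arange(4, 30))
print(peaks)
```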
Rohde, Max; Nielsen, Anne L; Johansen, Jørgen; Sørensen, Jens A; Nguyen, Nina; Diaz, Anabel; Nielsen, Mie K; Asmussen, Jon T; Christiansen, Janus M; Gerke, Oke; Thomassen, Anders; Alavi, Abass; Høilund-Carlsen, Poul Flemming; Godballe, Christian
2017-12-01
The purpose of this study was to determine the detection rate of distant metastasis and synchronous cancer, comparing clinically used imaging strategies based on chest x-ray + head and neck MRI (CXR/MRI) and chest CT + head and neck MRI (CHCT/MRI) with 18F-FDG PET/CT upfront in the diagnostic workup of patients with oral, pharyngeal, or laryngeal cancer. Methods: This was a prospective cohort study based on paired data. Consecutive patients with histologically verified primary head and neck squamous cell carcinoma at Odense University Hospital from September 2013 to March 2016 were considered for the study. Included patients underwent CXR/MRI and CHCT/MRI as well as PET/CT on the same day and before biopsy. Scans were read masked by separate teams of experienced nuclear medicine physicians or radiologists. The true detection rate of distant metastasis and synchronous cancer was assessed for CXR/MRI, CHCT/MRI, and PET/CT. Results: A total of 307 patients were included. CXR/MRI correctly detected 3 (1%) patients with distant metastasis, CHCT/MRI detected 11 (4%) patients, and PET/CT detected 18 (6%) patients. The absolute differences of 5% and 2%, respectively, were statistically significant in favor of PET/CT. Also, PET/CT correctly detected 25 (8%) synchronous cancers, which was significantly more than CXR/MRI (3 patients, 1%) and CHCT/MRI (6 patients, 2%). The true detection rate of distant metastasis or synchronous cancer with PET/CT was 13% (40 patients), which was significantly higher than 2% (6 patients) for CXR/MRI and 6% (17 patients) for CHCT/MRI. Conclusion: A clinical imaging strategy based on PET/CT demonstrated a significantly higher detection rate of distant metastasis or synchronous cancer than strategies in current clinical imaging guidelines, of which European ones primarily recommend CXR/MRI, whereas U.S. guidelines preferably point to CHCT/MRI in patients with head and neck squamous cell carcinoma. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
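Because every patient underwent both imaging strategies, the comparisons here are paired; McNemar's test is the standard choice for paired proportions. The abstract does not state which test was used, so the sketch below is purely illustrative, and the discordant-pair counts are made up (they match the reported marginals of 18 versus 3 detections only under the assumption that every CXR/MRI detection was also a PET/CT detection).

```python
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired detection table for PET/CT vs CXR/MRI (illustrative counts):
# rows = PET/CT detected yes/no, columns = CXR/MRI detected yes/no
table = [[3, 15],   # detected by both / by PET/CT only
         [0, 289]]  # by CXR/MRI only / by neither
result = mcnemar(table, exact=True)
print(result.statistic, result.pvalue)
```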
The relationship between socio-economic status and cancer detection at screening
NASA Astrophysics Data System (ADS)
Taylor-Phillips, Sian; Ogboye, Toyin; Hamborg, Tom; Kearins, Olive; O'Sullivan, Emma; Clarke, Aileen
2015-03-01
It is well known that socio-economic status is a strong predictor of screening attendance, with women of higher socio-economic status more likely to attend breast cancer screening. We investigated whether socio-economic status was related to the detection of cancer at breast screening centres. In two separate projects we combined UK data from the population census, the screening information systems, and the cancer registry. Five years of data from all 81 screening centres in the UK were collected. Only women who had previously attended screening were included. The study was given ethical approval by the University of Warwick Biomedical Research Ethics Committee (reference SDR-232-07-2012). Generalised linear models with a log-normal link function were fitted to investigate the relationship between predictors and the age-corrected cancer detection rate at each centre. We found that screening centres serving areas with lower average socio-economic status had lower cancer detection rates, even after correcting for the age distribution of the population. This may be because of a correlation between higher socio-economic status and some risk factors for breast cancer, such as nulliparity (never bearing children). When applying adjustment for age, ethnicity and socio-economic status of the population screened (rather than simply age), we found that the SDR can change by up to 0.11.
An improved method to detect correct protein folds using partial clustering.
Zhou, Jianjun; Wishart, David S
2013-01-16
Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient "partial" clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either C(α) RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance.
An improved method to detect correct protein folds using partial clustering
2013-01-01
Background Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient “partial“ clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. Results We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either Cα RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. Conclusions The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance. PMID:23323835
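The core idea described above, extracting representatives without exhaustively assigning every decoy to a cluster, can be illustrated with a generic greedy leader-clustering pass over a pairwise distance function (for decoys this would be Cα RMSD or a GDT-TS-derived distance). This is a generic sketch of partial clustering, not the HS-Forest algorithm itself.

```python
import numpy as np

def leader_representatives(decoys, distance, threshold):
    """Greedy partial clustering: a decoy becomes a new representative unless
    it lies within `threshold` of an existing one. Each decoy is compared only
    against the current representatives, never against every other decoy."""
    reps, members = [], []
    for i, d in enumerate(decoys):
        for r, rep_idx in enumerate(reps):
            if distance(d, decoys[rep_idx]) <= threshold:
                members[r].append(i)
                break
        else:
            reps.append(i)
            members.append([i])
    return reps, members

# Toy example: points in the plane stand in for decoys, Euclidean distance for RMSD
rng = np.random.default_rng(4)
decoys = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
reps, members = leader_representatives(
    decoys, lambda a, b: float(np.linalg.norm(a - b)), threshold=1.5)
print("representatives:", reps, "cluster sizes:", [len(m) for m in members])
```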
Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel
2014-01-01
Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never detected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.
Relative risk estimates from spatial and space-time scan statistics: Are they biased?
Prates, Marcos O.; Kulldorff, Martin; Assunção, Renato M.
2014-01-01
The purely spatial and space-time scan statistics have been successfully used by many scientists to detect and evaluate geographical disease clusters. Although the scan statistic has high power in correctly identifying a cluster, no study has considered the estimates of the cluster relative risk in the detected cluster. In this paper we evaluate whether there is any bias in these estimated relative risks. Intuitively, one may expect the estimated relative risks to have an upward bias, since the scan statistic cherry-picks high-rate areas to include in the cluster. We show that this intuition is correct for clusters with low statistical power, but with medium to high power the bias becomes negligible. The same behaviour is not observed for the prospective space-time scan statistic, where there is an increasingly conservative downward bias of the relative risk as the power to detect the cluster increases. PMID:24639031
Wang, X L; Wang, Z J; Tang, L; Cao, W W; Sun, X H
2017-02-11
Objective: To evaluate the usefulness of albumin correction in determination of cytomegalovirus IgG in the aqueous humor of Posner-Schlossman syndrome (PSS) patients. Methods: Case series study. Forty-two patients (26 men and 16 women) who were diagnosed with PSS were enrolled from Oct. 2009 to Oct. 2015 at the Eye and ENT Hospital. During the same period, 20 patients with primary open-angle glaucoma (POAG) and 30 patients with bacterial endophthalmitis or retinal necrosis were enrolled as the negative control group and the inflammatory disease control group, respectively. Aqueous humor and serum samples were assayed to detect CMV IgG by enzyme-linked immunosorbent assay (ELISA), and albumin by scattering immunonephelometry. CMV DNA in aqueous humor was assayed by polymerase chain reaction (PCR). A ratio, calculated as (aqueous humor CMV IgG/serum CMV IgG)/(aqueous humor albumin concentration/serum albumin concentration), greater than 0.6 was considered to indicate intraocular antibody formation. Performance in differentiating control eyes from eyes with CMV-positive PSS was evaluated by the receiver operating characteristic curve. The ANOVA test, Mann-Whitney test and Chi-square test were performed to compare the differences among groups. Results: The detection rate of CMV IgG antibody in the aqueous humor was 76.2%, 100.0% and 10.0% in the PSS, inflammatory disease control and POAG groups, respectively. The level of CMV IgG antibody in the PSS group was significantly higher than that of the POAG group (Z = 4.23, P < 0.001). The albumin-corrected positive rate was 71.4%, 3.3% and 0.0%, respectively. The corrected positive rate in the PSS group was significantly higher than that of the inflammatory disease control and POAG groups (χ² = 30.38, P < 0.01; χ² = 24.89, P < 0.01), with a sensitivity of 75.0% and a specificity of 98.0%. The area under the curve for the corrected ratio was 0.942 (95% CI: 0.859 to 0.984), which was higher than that of CMV IgG (Z = 6.19, P < 0.001). The corrected positive rate of CMV IgG antibody (71.4%) was higher than that of CMV DNA (47.6%; χ² = 4.003, P = 0.045). Conclusions: The CMV IgG antibody ratio corrected by aqueous humor and serum albumin can effectively improve the specificity of aqueous antibody testing in PSS patients. Furthermore, the corrected CMV IgG antibody ratio combined with PCR can improve the sensitivity of CMV detection, helping to clarify CMV infection in PSS in CMV DNA-negative eyes. (Chin J Ophthalmol, 2017, 53: 104-108).
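The corrected ratio defined in the Methods (an intraocular antibody index in the spirit of the Goldmann-Witmer coefficient) is straightforward to compute. A minimal sketch with hypothetical measurements and the 0.6 cut-off from the abstract:

```python
def corrected_igg_ratio(aqueous_igg, serum_igg, aqueous_albumin, serum_albumin):
    """(aqueous IgG / serum IgG) / (aqueous albumin / serum albumin)."""
    return (aqueous_igg / serum_igg) / (aqueous_albumin / serum_albumin)

def intraocular_antibody_formation(ratio, cutoff=0.6):
    """Ratios above the cut-off are taken to indicate local antibody production."""
    return ratio > cutoff

# Hypothetical measurements (arbitrary but internally consistent units)
r = corrected_igg_ratio(aqueous_igg=12.0, serum_igg=85.0,
                        aqueous_albumin=0.15, serum_albumin=42.0)
print(round(r, 2), intraocular_antibody_formation(r))
```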
Comparison of human and algorithmic target detection in passive infrared imagery
NASA Astrophysics Data System (ADS)
Weber, Bruce A.; Hutchinson, Meredith
2003-09-01
We have designed an experiment that compares the performance of human observers and a scale-insensitive target detection algorithm that uses pixel level information for the detection of ground targets in passive infrared imagery. The test database contains targets near clutter whose detectability ranged from easy to very difficult. Results indicate that human observers detect more "easy-to-detect" targets, and with far fewer false alarms, than the algorithm. For "difficult-to-detect" targets, human and algorithm detection rates are considerably degraded, and algorithm false alarms are excessive. Analysis of detections as a function of observer confidence shows that algorithm confidence attribution does not correspond to human attribution, and does not adequately correlate with correct detections. The best target detection score for any human observer was 84%, as compared to 55% for the algorithm for the same false alarm rate. At 81%, the maximum detection score for the algorithm, the same human observer had 6 false alarms per frame as compared to 29 for the algorithm. Detector ROC curves and observer-confidence analysis benchmark the algorithm and provide insights into algorithm deficiencies and possible paths to improvement.
Alachiotis, Nikolaos; Vogiatzi, Emmanouella; Pavlidis, Pavlos; Stamatakis, Alexandros
2013-01-01
Automated DNA sequencers generate chromatograms that contain raw sequencing data. They also generate data that translates the chromatograms into molecular sequences of A, C, G, T, or N (undetermined) characters. Since chromatogram translation programs frequently introduce errors, a manual inspection of the generated sequence data is required. As sequence numbers and lengths increase, visual inspection and manual correction of chromatograms and corresponding sequences on a per-peak and per-nucleotide basis becomes an error-prone, time-consuming, and tedious process. Here, we introduce ChromatoGate (CG), an open-source software that accelerates and partially automates the inspection of chromatograms and the detection of sequencing errors for bidirectional sequencing runs. To provide users full control over the error correction process, a fully automated error correction algorithm has not been implemented. Initially, the program scans a given multiple sequence alignment (MSA) for potential sequencing errors, assuming that each polymorphic site in the alignment may be attributed to a sequencing error with a certain probability. The guided MSA assembly procedure in ChromatoGate detects chromatogram peaks of all characters in an alignment that lead to polymorphic sites, given a user-defined threshold. The threshold value represents the sensitivity of the sequencing error detection mechanism. After this pre-filtering, the user only needs to inspect a small number of peaks in every chromatogram to correct sequencing errors. Finally, we show that correcting sequencing errors is important, because population genetic and phylogenetic inferences can be misled by MSAs with uncorrected mis-calls. Our experiments indicate that estimates of population mutation rates can be affected two- to three-fold by uncorrected errors. PMID:24688709
Quinn, Edel M; Meland, Ellen; McGinn, Stacy; Anderson, John H
2017-02-01
Preoperative anaemia is a risk factor for poorer postoperative outcomes and many colorectal cancer patients have iron-deficiency anaemia. The aim of this study was to assess if a preoperative iron-deficiency anaemia management protocol for elective colorectal surgery patients helps improve detection and treatment of iron-deficiency, and improve patient outcomes. Retrospective data was collected from 95 consecutive patients undergoing colorectal cancer surgery to establish baseline anaemia correction rates and perioperative transfusion rates. A new pathway for early detection of iron-deficiency anaemia, and treatment with intravenous iron replacement, for colorectal cancer patients was then developed and implemented. Data from 81 patients was collected prospectively post-implementation to assess the impact of the pathway. Pre-intervention data showed anaemic patients were seventeen times more likely to require perioperative transfusion than non-anaemic patients (95% CI 1.9-151.0, p = 0.011). Post-intervention, fifteen patients with iron-deficiency were treated with either intravenous (n = 8) or oral iron (n = 7). Mean Day 3 postoperative haemoglobin levels were significantly lower in patients with uncorrected anaemia (9.5 g/dL, p = 0.004); those patients whose anaemia was corrected by iron replacement therapy preoperatively had similar postoperative results to non-anaemic patients (10.93 g/dL vs 11.4 g/dL, p = 0.781). Postoperative transfusion rates remained high at 38% in patients with uncorrected anaemia, compared to 0% in corrected anaemia and 3.5% in non-anaemic patients. Introduction of an iron-deficiency anaemia management pathway has resulted in improved perioperative haemoglobin levels, with a reduction in perioperative transfusion, in elective colorectal patients. Implementation of this pathway could result in similar outcomes across other categories of surgical patients. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
Burns, Joseph E.; Yao, Jianhua; Muñoz, Hector
2016-01-01
Purpose To design and validate a fully automated computer system for the detection and anatomic localization of traumatic thoracic and lumbar vertebral body fractures at computed tomography (CT). Materials and Methods This retrospective study was HIPAA compliant. Institutional review board approval was obtained, and informed consent was waived. CT examinations in 104 patients (mean age, 34.4 years; range, 14–88 years; 32 women, 72 men), consisting of 94 examinations with positive findings for fractures (59 with vertebral body fractures) and 10 control examinations (without vertebral fractures), were performed. There were 141 thoracic and lumbar vertebral body fractures in the case set. The locations of fractures were marked and classified by a radiologist according to Denis column involvement. The CT data set was divided into training and testing subsets (37 and 67 subsets, respectively) for analysis by means of prototype software for fully automated spinal segmentation and fracture detection. Free-response receiver operating characteristic analysis was performed. Results Training set sensitivity for detection and localization of fractures within each vertebra was 0.82 (28 of 34 findings; 95% confidence interval [CI]: 0.68, 0.90), with a false-positive rate of 2.5 findings per patient. The sensitivity for fracture localization to the correct vertebra was 0.88 (23 of 26 findings; 95% CI: 0.72, 0.96), with a false-positive rate of 1.3. Testing set sensitivity for the detection and localization of fractures within each vertebra was 0.81 (87 of 107 findings; 95% CI: 0.75, 0.87), with a false-positive rate of 2.7. The sensitivity for fracture localization to the correct vertebra was 0.92 (55 of 60 findings; 95% CI: 0.79, 0.94), with a false-positive rate of 1.6. The most common cause of false-positive findings was nutrient foramina (106 of 272 findings [39%]). Conclusion The fully automated computer system detects and anatomically localizes vertebral body fractures in the thoracic and lumbar spine on CT images with a high sensitivity and a low false-positive rate. © RSNA, 2015 Online supplemental material is available for this article. PMID:26172532
Controlling qubit drift by recycling error correction syndromes
NASA Astrophysics Data System (ADS)
Blume-Kohout, Robin
2015-03-01
Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated - if it is ignored, error rates will rise to intolerable levels - but compensation requires knowing the parameters' current value, which appears to require halting experimental work to recalibrate (e.g. via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple - yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise, it is less effective only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that are uncompensatable. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE
[Shock shape representation of sinus heart rate based on cloud model].
Yin, Wenfeng; Zhao, Jie; Chen, Tiantian; Zhang, Junjian; Zhang, Chunyou; Li, Dapeng; An, Baijing
2014-04-01
The aim of this paper is to analyze the trend of the sinus RR-interval sequence after a single ventricular premature beat and to compare it with the two established parameters, turbulence onset (TO) and turbulence slope (TS). After acquiring sinus-rhythm shock (heart rate turbulence) samples, we use a piecewise linearization method to extract their linear characteristics and then describe the shock shape in natural language through a cloud model. During acquisition, we use exponential smoothing to forecast the position where the next QRS wave may appear, assisting QRS detection, and use a template to judge whether the current beat is a sinus beat. Signals selected from the MIT-BIH Arrhythmia Database were used to verify the effectiveness of the algorithm in Matlab. The results show that the method correctly detects the changing trend of the sinus heart rate. The proposed method achieves real-time detection of sinus-rhythm shocks, is simple and easily implemented, and is therefore effective as a supplementary method.
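For reference, TO and TS have standard definitions in the heart-rate-turbulence literature (Schmidt et al.): TO is the relative change of the two RR intervals after the compensatory pause versus the two before the premature beat, and TS is the steepest regression slope over any five consecutive post-pause RR intervals. The sketch below implements those textbook formulas, with hypothetical RR intervals; the paper compares against these parameters rather than redefining them.

```python
import numpy as np

def turbulence_onset(rr_before, rr_after):
    """TO (%) = ((RR1 + RR2) - (RR-2 + RR-1)) / (RR-2 + RR-1) * 100,
    where RR-2, RR-1 precede the premature beat and RR1, RR2 follow
    the compensatory pause."""
    return ((rr_after[0] + rr_after[1]) - (rr_before[0] + rr_before[1])) \
        / (rr_before[0] + rr_before[1]) * 100.0

def turbulence_slope(rr_after, window=5):
    """TS = maximum regression slope (ms per RR interval) over any `window`
    consecutive sinus RR intervals following the compensatory pause."""
    slopes = []
    for i in range(len(rr_after) - window + 1):
        x = np.arange(window)
        slopes.append(np.polyfit(x, rr_after[i:i + window], 1)[0])
    return max(slopes)

# Hypothetical RR intervals (ms): two before the PVC, then the post-pause sinus beats
rr_before = [800.0, 805.0]
rr_after = np.array([790.0, 795.0, 805.0, 820.0, 835.0, 845.0, 850.0, 848.0])
print(round(turbulence_onset(rr_before, rr_after), 2),
      round(turbulence_slope(rr_after), 2))
```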
Dead-time compensation for a logarithmic display rate meter
Larson, John A.; Krueger, Frederick P.
1988-09-20
An improved circuit is provided for application to a radiation survey meter that uses a detector that is subject to dead time. The circuit compensates for dead time over a wide range of count rates by producing a dead-time pulse for each detected event, a live-time pulse that spans the interval between dead-time pulses, and circuits that average the value of these pulses over time. The logarithm of each of these values is obtained and the logarithms are subtracted to provide a signal that is proportional to a count rate that is corrected for the effects of dead time. The circuit produces a meter indication and is also capable of producing an audible indication of detected events.
Dead-time compensation for a logarithmic display rate meter
Larson, J.A.; Krueger, F.P.
1987-10-05
An improved circuit is provided for application to a radiation survey meter that uses a detector that is subject to dead time. The circuit compensates for dead time over a wide range of count rates by producing a dead-time pulse for each detected event, a live-time pulse that spans the interval between dead-time pulses, and circuits that average the value of these pulses over time. The logarithm of each of these values is obtained and the logarithms are subtracted to provide a signal that is proportional to a count rate that is corrected for the effects of dead time. The circuit produces a meter indication and is also capable of producing an audible indication of detected events. 5 figs.
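The relation implemented by this circuit can be written down directly: the averaged dead-time pulses estimate the fraction of time the detector is dead, the averaged live-time pulses give the live fraction, and subtracting the logarithms yields the logarithm of the dead-time-corrected rate. The sketch below is a numerical illustration of that non-paralyzable correction, not a model of the analog circuitry.

```python
import math

def corrected_rate(detected_rate, dead_time):
    """Dead-time-corrected count rate for a non-paralyzable detector.
    detected_rate: observed counts per second; dead_time: seconds per count."""
    dead_fraction = detected_rate * dead_time        # average of dead-time pulses
    live_fraction = 1.0 - dead_fraction              # average of live-time pulses
    # log(true rate) = log(detected rate) - log(live fraction)
    return math.exp(math.log(detected_rate) - math.log(live_fraction))

# Example: 1e5 counts/s observed with a 2 microsecond dead time
print(corrected_rate(1e5, 2e-6))   # ~1.25e5 counts/s true rate
```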
NASA Astrophysics Data System (ADS)
Mossoux, Enmanuelle; Grosso, Nicolas
2017-08-01
Context. X-ray flaring activity from the closest supermassive black hole Sagittarius A* (Sgr A*) located at the center of our Galaxy has been observed since 2000 October 26 thanks to the current generation of X-ray facilities. In a study of X-ray flaring activity from Sgr A* using Chandra and XMM-Newton public observations from 1999 to 2014 and Swift monitoring in 2014, it was argued that the "bright and very bright" flaring rate has increased from 2014 August 31. Aims: As a result of additional observations performed in 2015 with Chandra, XMM-Newton, and Swift (total exposure of 482 ks), we seek to test the significance and persistence of this increase in flaring rate and to determine the threshold of unabsorbed flare flux or fluence leading to any change in flaring rate. Methods: We reprocessed the Chandra, XMM-Newton, and Swift data from 1999 to 2015 November 2. From these data, we detected the X-ray flares via our two-step Bayesian blocks algorithm with a prior on the number of change points properly calibrated for each observation. We improved the Swift data analysis by correcting for the effects of the variable target position on the detector, and we detected the X-ray flares with a 3σ threshold on the binned light curves. The mean unabsorbed fluxes of the 107 detected flares were consistently computed from the extracted spectra and the corresponding calibration files, assuming the same spectral parameters. We constructed the observed distribution of flare fluxes and durations from the XMM-Newton and Chandra detections. We corrected this observed distribution for the detection biases to estimate the intrinsic distribution of flare fluxes and durations. From this intrinsic distribution, we determined the average flare detection efficiency for each XMM-Newton, Chandra, and Swift observation. We finally applied the Bayesian blocks algorithm to the arrival times of the flares corrected with the corresponding efficiency. Results: We confirm a constant overall flaring rate from 1999 to 2015 and a rise in the flaring rate by a factor of three for the most luminous and most energetic flares from 2014 August 31, i.e., about four months after the pericenter passage of the Dusty S-cluster Object (DSO)/G2 close to Sgr A*. In addition, we identify a decay of the flaring rate for the less luminous and less energetic flares from 2013 August and November, respectively, i.e., about 10 and 7 months before the pericenter passage of the DSO/G2 and 13 and 10 months before the rise in the bright flaring rate. Conclusions: The decay of the faint flaring rate is difficult to explain in terms of the tidal disruption of a dusty cloud since it occurred well before the pericenter passage of the DSO/G2, whose stellar nature is now well established. Moreover, a mass transfer from the DSO/G2 to Sgr A* is not required to produce the rise in the bright flaring rate since the energy saved by the decay of the number of faint flares during a long period of time may be later released by several bright flares during a shorter period of time.
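The flare-rate change points here come from a Bayesian-blocks segmentation of flare arrival times. Astropy provides an implementation of the same algorithm (astropy.stats.bayesian_blocks with the 'events' fitness); the sketch below segments simulated arrival times whose rate triples partway through, which is the kind of change the study reports. The p0 argument plays a role analogous to the calibrated prior on the number of change points mentioned in the abstract.

```python
import numpy as np
from astropy.stats import bayesian_blocks

# Simulated flare arrival times: a low constant rate followed by a 3x higher rate
rng = np.random.default_rng(5)
t_low = np.cumsum(rng.exponential(scale=10.0, size=100))
t_high = t_low[-1] + np.cumsum(rng.exponential(scale=10.0 / 3, size=100))
arrival_times = np.concatenate([t_low, t_high])

# Bayesian blocks for event data; p0 sets the false-alarm prior on change points
edges = bayesian_blocks(arrival_times, fitness='events', p0=0.01)
print("detected change points (block edges):", np.round(edges, 1))
```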
Discriminability and Sensitivity to Reinforcer Magnitude in a Detection Task
ERIC Educational Resources Information Center
Alsop, Brent; Porritt, Melissa
2006-01-01
Three pigeons discriminated between two sample stimuli (intensities of red light). The difficulty of the discrimination was varied over four levels. At each level, the relative reinforcer magnitude for the two correct responses was varied across conditions, and the reinforcer rates were equal. Within levels, discriminability between the sample…
Error Detection in Mechanized Classification Systems
ERIC Educational Resources Information Center
Hoyle, W. G.
1976-01-01
When documentary material is indexed by a mechanized classification system, and the results judged by trained professionals, the number of documents in disagreement, after suitable adjustment, defines the error rate of the system. In a test case disagreement was 22 percent and, of this 22 percent, the computer correctly identified two-thirds of…
Multimodal Sensor Fusion for Personnel Detection
2011-07-01
Efficacy of UGS systems is often limited by high false alarm rates because the onboard data processing algorithms may not be able to correctly ... humans) and animals (e.g., donkeys, mules, and horses). The humans walked alone and in groups with and without backpacks; the animals were led by their ...
Developing a Qualia-Based Multi-Agent Architecture for Use in Malware Detection
2010-03-01
Executables were correctly classified with a 6% false positive rate [7]. Kolter and Maloof expand Schultz's work by analyzing different ...
User acceptance of intelligent avionics: A study of automatic-aided target recognition
NASA Technical Reports Server (NTRS)
Becker, Curtis A.; Hayes, Brian C.; Gorman, Patrick C.
1991-01-01
User acceptance of new support systems typically was evaluated after the systems were specified, designed, and built. The current study attempts to assess user acceptance of an Automatic-Aided Target Recognition (ATR) system using an emulation of such a proposed system. The detection accuracy and false alarm level of the ATR system were varied systematically, and subjects rated the tactical value of systems exhibiting different performance levels. Both detection accuracy and false alarm level affected the subjects' ratings. The data from two experiments suggest a cut-off point in ATR performance below which the subjects saw little tactical value in the system. An ATR system seems to have obvious tactical value only if it functions at a correct detection rate of 0.7 or better with a false alarm level of 0.167 false alarms per square degree or fewer.
Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.
2012-01-01
Purpose To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815
Narayan, Sreenath; Kalhan, Satish C; Wilson, David L
2013-05-01
To reduce swaps in fat-water separation methods, a particular issue on 7 Tesla (T) small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. Copyright © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Lohrmann, Carol A.
1990-03-01
Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs; namely, the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, Quadratic Residue, hard decision Golay and soft decision Golay), tested on three FORTRAN programmed channel simulations (INMARSAT, Gaussian and constant burst width), compared and analyzed (based on bit error rates and percent of error-free super-frame runs) so that a best code can be recommended. Out of the four codes under study, the soft decision Golay code (24,12) is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementation of the algorithm.
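Of the candidate codes surveyed, the Hamming code is the simplest to illustrate: a minimal (7,4) encoder/decoder that corrects any single bit error via syndrome lookup. This is a generic sketch of the code family, not the thesis' FORTRAN channel simulations or the soft-decision Golay decoder it ultimately recommends.

```python
import numpy as np

# Systematic (7,4) Hamming code: G = [I | P], H = [P^T | I]
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def encode(bits4):
    """Encode 4 message bits into a 7-bit codeword."""
    return (np.array(bits4) @ G) % 2

def decode(word7):
    """Correct a single bit error (if any) and return the 4 message bits."""
    word7 = np.array(word7).copy()
    syndrome = (H @ word7) % 2
    if syndrome.any():
        for i in range(7):                       # find the column matching the syndrome
            if np.array_equal(H[:, i], syndrome):
                word7[i] ^= 1                    # flip the erroneous bit
                break
    return word7[:4]                             # systematic: message bits come first

msg = [1, 0, 1, 1]
cw = encode(msg)
cw[2] ^= 1                                       # inject a single bit error
print(decode(cw), msg)                           # the error is corrected
```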
Real-time 3-D contrast-enhanced transcranial ultrasound and aberration correction.
Ivancevich, Nikolas M; Pinton, Gianmarco F; Nicoletto, Heather A; Bennett, Ellen; Laskowitz, Daniel T; Smith, Stephen W
2008-09-01
Contrast-enhanced (CE) transcranial ultrasound (US) and reconstructed 3-D transcranial ultrasound have shown advantages over traditional methods in a variety of cerebrovascular diseases. We present the results from a novel ultrasound technique, namely real-time 3-D contrast-enhanced transcranial ultrasound. Using real-time 3-D (RT3D) ultrasound and microbubble contrast agent, we scanned 17 healthy volunteers via a single temporal window and nine via the suboccipital window and report our detection rates for the major cerebral vessels. In 71% of subjects, both of our observers identified the ipsilateral circle of Willis from the temporal window, and in 59% we imaged the entire circle of Willis. From the suboccipital window, both observers detected the entire vertebrobasilar circulation in 22% of subjects, and in 44%, the basilar artery. After performing phase aberration correction on one subject, we were able to increase the diagnostic value of the scan, detecting a vessel not present in the uncorrected scan. These preliminary results suggest that RT3D CE transcranial US and RT3D CE transcranial US with phase aberration correction have the potential to greatly impact the field of neurosonology.
Real-Time 3D Contrast-Enhanced Transcranial Ultrasound and Aberration Correction
Ivancevich, Nikolas M.; Pinton, Gianmarco F.; Nicoletto, Heather A.; Bennett, Ellen; Laskowitz, Daniel T.; Smith, Stephen W.
2008-01-01
Contrast-enhanced (CE) transcranial ultrasound (US) and reconstructed 3D transcranial ultrasound have shown advantages over traditional methods in a variety of cerebrovascular diseases. We present the results from a novel ultrasound technique, namely real-time 3D contrast-enhanced transcranial ultrasound. Using real-time 3D (RT3D) ultrasound and micro-bubble contrast agent, we scanned 17 healthy volunteers via a single temporal window and 9 via the sub-occipital window and report our detection rates for the major cerebral vessels. In 71% of subjects, both of our observers identified the ipsilateral circle of Willis from the temporal window, and in 59% we imaged the entire circle of Willis. From the sub-occipital window, both observers detected the entire vertebrobasilar circulation in 22% of subjects, and in 44% the basilar artery. After performing phase aberration correction on one subject, we were able to increase the diagnostic value of the scan, detecting a vessel not present in the uncorrected scan. These preliminary results suggest that RT3D CE transcranial US and RT3D CE transcranial US with phase aberration correction have the potential to greatly impact the field of neurosonology. PMID:18395321
Skaane, Per; Kshirsagar, Ashwini; Hofvind, Solveig; Jahr, Gunnar; Castellino, Ronald A
2012-04-01
Double reading improves the cancer detection rate in mammography screening. Single reading with computer-aided detection (CAD) has been considered to be an alternative to double reading. Little is known about the potential benefit of CAD in breast cancer screening with double reading. To compare prospective independent double reading of screen-film (SFM) and full-field digital (FFDM) mammography in population-based screening with retrospective standalone CAD performance on the baseline mammograms of the screen-detected cancers and subsequent cancers diagnosed during the follow-up period. The study had ethics committee approval. A 5-point rating scale for probability of cancer was used for 23,923 (SFM = 16,983; FFDM = 6940) screening mammograms. Of 208 evaluable cancers, 104 were screen-detected and 104 were subsequent (44 interval and 60 next screening round) cancers. Baseline mammograms of subsequent cancers were retrospectively classified in consensus without information about cancer location, histology, or CAD prompting as normal, non-specific minimal signs, significant minimal signs, and false-negatives. The baseline mammograms of the screen-detected cancers and subsequent cancers were evaluated by CAD. Significant minimal signs and false-negatives were considered 'actionable' and potentially diagnosable if correctly prompted by CAD. CAD correctly marked 94% (98/104) of the baseline mammograms of the screen-detected cancers (SFM = 95% [61/64]; FFDM = 93% [37/40]), including 96% (23/24) of those with discordant interpretations. Considering only those baseline examinations of subsequent cancers prospectively interpreted as normal and retrospectively categorized as 'actionable', CAD input at baseline screening had the potential to increase the cancer detection rate from 0.43% to 0.51% (P = 0.13); and to increase cancer detection by 16% ([104 + 17]/104) and decrease interval cancers by 20% (from 44 to 35). CAD may have the potential to increase cancer detection by up to 16%, and to reduce the number of interval cancers by up to 20% in SFM and FFDM screening programs using independent double reading with consensus review. The influence of true- and false-positive CAD marks on decision-making can, however, only be evaluated in a prospective clinical study.
Pozzi, P; Wilding, D; Soloviev, O; Verstraete, H; Bliek, L; Vdovin, G; Verhaegen, M
2017-01-23
The quality of fluorescence microscopy images is often impaired by the presence of sample-induced optical aberrations. Adaptive optical elements such as deformable mirrors or spatial light modulators can be used to correct aberrations. However, previously reported techniques either require special sample preparation or time-consuming optimization procedures for the correction of static aberrations. This paper reports a technique for optical sectioning fluorescence microscopy capable of correcting dynamic aberrations in any fluorescent sample during the acquisition. This is achieved by implementing adaptive optics in a non-conventional confocal microscopy setup, with multiple programmable confocal apertures, in which out-of-focus light can be separately detected and used to optimize the correction performance with a sampling frequency an order of magnitude faster than the imaging rate of the system. The paper reports results comparing the correction performance to traditional image optimization algorithms, and demonstrates how the system can compensate for dynamic changes in the aberrations, such as those introduced during a focal stack acquisition through a thick sample.
Streiner, David L
2015-10-01
Testing many null hypotheses in a single study results in an increased probability of detecting a significant finding just by chance (the problem of multiplicity). Debates have raged over many years with regard to whether to correct for multiplicity and, if so, how it should be done. This article first discusses how multiple tests lead to an inflation of the α level, then explores the following different contexts in which multiplicity arises: testing for baseline differences in various types of studies, having >1 outcome variable, conducting statistical tests that produce >1 P value, taking multiple "peeks" at the data, and unplanned, post hoc analyses (i.e., "data dredging," "fishing expeditions," or "P-hacking"). It then discusses some of the methods that have been proposed for correcting for multiplicity, including single-step procedures (e.g., Bonferroni); multistep procedures, such as those of Holm, Hochberg, and Šidák; false discovery rate control; and resampling approaches. Note that these various approaches describe different aspects and are not necessarily mutually exclusive. For example, resampling methods could be used to control the false discovery rate or the family-wise error rate (as defined later in this article). However, the use of one of these approaches presupposes that we should correct for multiplicity, which is not universally accepted, and the article presents the arguments for and against such "correction." The final section brings together these threads and presents suggestions with regard to when it makes sense to apply the corrections and how to do so. © 2015 American Society for Nutrition.
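Two of the procedures discussed, the single-step Bonferroni adjustment and Benjamini-Hochberg false-discovery-rate control, are short enough to sketch directly. These are illustrative implementations with made-up p values, not code from the article.

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Single-step Bonferroni: reject H0_i if p_i <= alpha / m."""
    pvals = np.asarray(pvals)
    return pvals <= alpha / len(pvals)

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure controlling the false discovery
    rate at level q: reject the k smallest p values, where k is the largest
    rank whose ordered p value satisfies p_(k) <= (k/m) * q."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = (np.arange(1, m + 1) / m) * q
    below = pvals[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))    # largest rank meeting its threshold
        reject[order[:k + 1]] = True
    return reject

p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.36]
print(bonferroni(p))           # only the smallest p value survives
print(benjamini_hochberg(p))   # FDR control is less conservative (two rejections)
```

Equivalent adjustments (and several of the other multistep procedures mentioned above) are available in statsmodels via statsmodels.stats.multitest.multipletests.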
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA's capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
Gestational and Fetal Outcomes in B19 Maternal Infection: a Problem of Diagnosis
Bonvicini, Francesca; Puccetti, Chiara; Salfi, Nunzio C. M.; Guerra, Brunella; Gallinella, Giorgio; Rizzo, Nicola; Zerbini, Marialuisa
2011-01-01
Parvovirus B19 infection during pregnancy is a potential hazard to the fetus because of the virus' ability to infect fetal erythroid precursor cells and fetal tissues. Fetal complications range from transitory fetal anemia and nonimmune fetal hydrops to miscarriage and intrauterine fetal death. In the present study, 72 pregnancies complicated by parvovirus B19 infection were followed up: fetal and neonatal specimens were investigated by serological and/or virological assays to detect fetal/congenital infection, and fetuses and neonates were clinically evaluated to monitor pregnancy outcomes following maternal infection. Analysis of serological and virological maternal B19 markers of infection demonstrated that neither B19 IgM nor B19 DNA detected all maternal infections. IgM serology correctly diagnosed 94.1% of the B19 infections, while DNA testing correctly diagnosed 96.3%. The maximum sensitivity was achieved with the combined detection of both parameters. B19 vertical transmission was observed in 39% of the pregnancies, with an overall 10.2% rate of fetal deaths. The highest rates of congenital infections and B19-related fatal outcomes were observed when maternal infections occurred by gestational week 20. B19 fetal hydrops occurred in 11.9% of the fetuses, and 28.6% of these resolved the hydrops with a normal neurodevelopment outcome at 1- to 5-year follow-up. In conclusion, maternal screening based on the concurrent analysis of B19 IgM and DNA should be encouraged to reliably diagnose maternal B19 infection and correctly manage pregnancies at risk. PMID:21849687
Semi-inclusive wino and higgsino annihilation to LL'
Baumgart, Matthew; Vaidya, Varun
2016-03-31
Here, we systematically compute the annihilation rate for winos and higgsinos into the final state relevant for indirect detection experiments, γ + X. The radiative corrections to this process receive enhancement from the large Bloch-Nordsieck-violating Sudakov logarithm, log(2M_χ/M_W). We resum the double logs and include single logs to fixed order using a formalism that combines nonrelativistic and soft-collinear effective field theories. For the wino case, we update an earlier exclusion adapting results of the HESS experiment. At the thermal relic mass of 3 TeV, LL' corrections result in a ~30% reduction in rate relative to LL. But, single logs do not save the wino, and it is still excluded by an order of magnitude. Finally, experimental cuts produce an endpoint region which, our results show, significantly affects the higgsino rate at its thermal-relic mass near 1 TeV and is deserving of further study.
Concurrent remote entanglement with quantum error correction against photon losses
NASA Astrophysics Data System (ADS)
Roy, Ananda; Stone, A. Douglas; Jiang, Liang
2016-09-01
Remote entanglement of distant, noninteracting quantum entities is a key primitive for quantum information processing. We present a protocol to remotely entangle two stationary qubits by first entangling them with propagating ancilla qubits and then performing a joint two-qubit measurement on the ancillas. Subsequently, single-qubit measurements are performed on each of the ancillas. We describe two continuous variable implementations of the protocol using propagating microwave modes. The first implementation uses propagating Schrödinger cat states as the flying ancilla qubits, a joint-photon-number-modulo-2 measurement of the propagating modes for the two-qubit measurement, and homodyne detections as the final single-qubit measurements. The presence of inefficiencies in realistic quantum systems limits the success rate of generating high fidelity Bell states. This motivates us to propose a second continuous variable implementation, where we use quantum error correction to suppress the decoherence due to photon loss to first order. To that end, we encode the ancilla qubits in superpositions of Schrödinger cat states of a given photon-number parity, use a joint-photon-number-modulo-4 measurement as the two-qubit measurement, and homodyne detections as the final single-qubit measurements. We demonstrate the resilience of our quantum-error-correcting remote entanglement scheme to imperfections. Further, we describe a modification of our error-correcting scheme by incorporating additional individual photon-number-modulo-2 measurements of the ancilla modes to improve the success rate of generating high-fidelity Bell states. Our protocols can be straightforwardly implemented in state-of-the-art superconducting circuit-QED systems.
Phylogeny and species traits predict bird detectability
Solymos, Peter; Matsuoka, Steven M.; Stralberg, Diana; Barker, Nicole K. S.; Bayne, Erin M.
2018-01-01
Avian acoustic communication has resulted from evolutionary pressures and ecological constraints. We therefore expect that auditory detectability in birds might be predictable by species traits and phylogenetic relatedness. We evaluated the relationship between phylogeny, species traits, and field‐based estimates of the two processes that determine species detectability (singing rate and detection distance) for 141 bird species breeding in boreal North America. We used phylogenetic mixed models and cross‐validation to compare the relative merits of using trait data only, phylogeny only, or the combination of both to predict detectability. We found a strong phylogenetic signal in both singing rates and detection distances; however, the strength of phylogenetic effects was less than expected under Brownian motion evolution. The evolution of behavioural traits that determine singing rates was found to be more labile, leaving more room for species to evolve independently, whereas detection distance was mostly determined by anatomy (i.e. body size) and thus the laws of physics. Our findings can help in disentangling how complex ecological and evolutionary mechanisms have shaped different aspects of detectability in boreal birds. Such information can greatly inform single‐ and multi‐species models but more work is required to better understand how to best correct possible biases in phylogenetic diversity and other community metrics.
Manhole Cover Detection Using Vehicle-Based Multi-Sensor Data
NASA Astrophysics Data System (ADS)
Ji, S.; Shi, Y.; Shi, Z.
2012-07-01
A new method combining multi-view matching and feature extraction is developed to detect manhole covers on streets using close-range images together with GPS/IMU and LIDAR data. Manhole covers are road-traffic targets as important as traffic signs, traffic lights, and zebra crossings, but with more uniform shapes. However, differences in shooting angle and distance, ground material, complex street scenes (especially shadows), and cars on the road strongly affect the cover detection rate. The paper introduces a new approach to edge detection and feature extraction in order to overcome these difficulties and greatly improve the detection rate. The LIDAR data are used for scene segmentation, so that street clutter and cars are excluded from the road surface. An edge detection method based on Canny, sensitive to arcs and ellipses, is applied to the segmented road scene; regions of interest containing arcs are extracted and fitted to ellipses. The ellipses are then resampled for invariance to shooting angle and distance and matched across adjacent images to further verify that they are covers. More than 1000 images with different scenes are used in our tests and the detection rate is analyzed. The results verify that our method has advantages for correct cover detection in complex street scenes.
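A rough sketch of the edge-detection and ellipse-fitting stage described above is given below using OpenCV; the Canny thresholds, shape filters, and input file name are placeholders, and standard cv2.Canny stands in for the paper's arc-sensitive edge detector.

```python
# Sketch: extract elliptical candidates (possible manhole covers) from a
# pre-segmented road image. Parameters are illustrative assumptions.
import cv2
import numpy as np

def candidate_covers(road_image_gray, min_pts=20):
    edges = cv2.Canny(road_image_gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    ellipses = []
    for c in contours:
        if len(c) < max(min_pts, 5):          # cv2.fitEllipse needs at least 5 points
            continue
        (cx, cy), (major, minor), angle = cv2.fitEllipse(c)
        if minor > 10 and minor / max(major, 1e-6) > 0.3:   # crude shape filter
            ellipses.append(((cx, cy), (major, minor), angle))
    return ellipses

img = cv2.imread("street_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
if img is not None:
    print(f"{len(candidate_covers(img))} elliptical candidates found")
```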
Persistent aerial video registration and fast multi-view mosaicing.
Molina, Edgardo; Zhu, Zhigang
2014-05-01
Capturing aerial imagery at high resolutions often leads to very low frame rate video streams, well under full motion video standards, due to bandwidth, storage, and cost constraints. Low frame rates make registration difficult when an aircraft is moving at high speeds or when the global positioning system (GPS) contains large errors or fails. We present a method that takes advantage of persistent cyclic video data collections to perform an online registration with drift correction. We split the persistent aerial imagery collection into individual cycles of the scene, identify and correct the registration errors on the first cycle in a batch operation, and then use the corrected base cycle as a reference pass to register and correct subsequent passes online. A set of multi-view panoramic mosaics is then constructed for each aerial pass for representation, presentation and exploitation of the 3D dynamic scene. These sets of mosaics are all in alignment with the reference cycle, allowing their direct use in change detection, tracking, and 3D reconstruction/visualization algorithms. Stereo viewing with adaptive baselines and varying view angles is realized by choosing a pair of mosaics from a set of multi-view mosaics. Further, the mosaics for the second pass and later can be generated and visualized online, as there is no further batch error correction.
Ruiz-Gutierrez, Viviana; Zipkin, Elise F.
2011-01-01
Species occurrence patterns, and related processes of persistence, colonization and turnover, are increasingly being used to infer habitat suitability, predict species distributions, and measure biodiversity potential. The majority of these studies do not account for observational error in their analyses despite growing evidence suggesting that the sampling process can significantly influence species detection and, subsequently, estimates of occurrence. We examined the potential biases of species occurrence patterns that can result from differences in detectability across species and habitat types using hierarchical multispecies occupancy models applied to a tropical bird community in an agricultural fragmented landscape. Our results suggest that detection varies widely among species and habitat types. Not incorporating detectability severely biased occupancy dynamics for many species by overestimating turnover rates, producing misleading patterns of persistence and colonization of agricultural habitats, and misclassifying species into ecological categories (i.e., forest specialists and generalists). This is of serious concern, given that most research on the ability of agricultural lands to maintain current levels of biodiversity by and large does not correct for differences in detectability. We strongly urge researchers to apply an inferential framework which explicitly accounts for differences in detectability to fully characterize species-habitat relationships, correctly guide biodiversity conservation in human-modified landscapes, and generate more accurate predictions of species responses to future changes in environmental conditions.
Lazarowski, Lucia; Foster, Melanie L; Gruen, Margaret E; Sherman, Barbara L; Fish, Richard E; Milgram, Norton W; Dorman, David C
2015-11-01
A critical aspect of canine explosive detection involves the animal's ability to respond to novel, untrained odors based on prior experience with training odors. In the current study, adult Labrador retrievers (N = 15) were initially trained to discriminate between a rewarded odor (vanillin) and an unrewarded odor (ethanol) by manipulating scented objects with their nose in order to receive a food reward using a canine-adapted discrimination training apparatus. All dogs successfully learned this olfactory discrimination task (≥80 % correct in a mean of 296 trials). Next, dogs were trained on an ammonium nitrate (AN, NH4NO3) olfactory discrimination task [acquired in 60-240 trials, with a mean (±SEM) number of trials to criterion of 120.0 ± 15.6] and then tested for their ability to respond to untrained ammonium- and/or nitrate-containing chemicals as well as variants of AN compounds. Dogs did not respond to sodium nitrate or ammonium sulfate compounds at rates significantly higher than chance (58.8 ± 4.5 and 57.7 ± 3.3 % correct, respectively). Transfer performance to fertilizer-grade AN, AN mixed in Iraqi soil, and AN and flaked aluminum was significantly higher than chance (66.7 ± 3.2, 73.3 ± 4.0, 68.9 ± 4.0 % correct, respectively); however, substantial individual differences were observed. Only 53, 60, and 64 % of dogs had a correct response rate with fertilizer-grade AN, AN and Iraqi soil, and AN and flaked aluminum, respectively, that was greater than chance. Our results suggest that dogs do not readily generalize from AN to similar AN-based odorants at reliable levels desired for explosive detection dogs and that performance varies significantly within Labrador retrievers selected for an explosive detection program.
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.
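The multiplicative EM update that the accelerated algorithms above build on can be sketched in a few lines; the dense toy system matrix and iteration count below are assumptions for illustration, and ordered subsets would simply cycle this update over row subsets of the system matrix.

```python
# Minimal MLEM sketch on a dense toy system, not the AM algorithm of the paper.
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """A: (n_meas, n_vox) system matrix, y: (n_meas,) measured counts."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0) + eps               # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x + eps                    # forward projection
        x *= (A.T @ (y / proj)) / sens        # multiplicative EM update
    return x

rng = np.random.default_rng(1)
A = rng.uniform(0, 1, size=(200, 64))
x_true = rng.uniform(0, 5, size=64)
y = rng.poisson(A @ x_true)
print("reconstruction error:", np.linalg.norm(mlem(A, y) - x_true) / np.linalg.norm(x_true))
```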
ARTiiFACT: a tool for heart rate artifact processing and heart rate variability analysis.
Kaufmann, Tobias; Sütterlin, Stefan; Schulz, Stefan M; Vögele, Claus
2011-12-01
The importance of appropriate handling of artifacts in interbeat interval (IBI) data must not be underestimated. Even a single artifact may cause unreliable heart rate variability (HRV) results. Thus, a robust artifact detection algorithm and the option for manual intervention by the researcher form key components for confident HRV analysis. Here, we present ARTiiFACT, a software tool for processing electrocardiogram and IBI data. Both automated and manual artifact detection and correction are available in a graphical user interface. In addition, ARTiiFACT includes time- and frequency-based HRV analyses and descriptive statistics, thus offering the basic tools for HRV analysis. Notably, all program steps can be executed separately and allow for data export, thus offering high flexibility and interoperability with a whole range of applications.
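A simple sketch of the two ingredients named above follows: flagging suspect interbeat intervals and computing basic time-domain HRV statistics. The median-deviation rule and the toy IBI series are assumptions for illustration, not ARTiiFACT's actual algorithm.

```python
# Flag artifactual interbeat intervals (IBIs) and compute SDNN / RMSSD.
import numpy as np

def flag_artifacts(ibi_ms, window=11, tol=0.25):
    med = np.array([np.median(ibi_ms[max(0, i - window // 2): i + window // 2 + 1])
                    for i in range(len(ibi_ms))])
    return np.abs(ibi_ms - med) > tol * med          # True where the IBI looks artifactual

def time_domain_hrv(ibi_ms):
    diffs = np.diff(ibi_ms)
    return {"SDNN": float(np.std(ibi_ms, ddof=1)),
            "RMSSD": float(np.sqrt(np.mean(diffs ** 2)))}

ibi = np.array([812, 805, 820, 1650, 798, 810, 402, 815, 808, 800], dtype=float)  # toy data
bad = flag_artifacts(ibi)
clean = np.interp(np.arange(len(ibi)), np.flatnonzero(~bad), ibi[~bad])  # interpolate over artifacts
print("artifacts at:", np.flatnonzero(bad), time_domain_hrv(clean))
```

Even this toy series shows why a single missed or spurious beat matters: the uncorrected RMSSD is dominated by the two artifactual intervals.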
Neyman Pearson detection of K-distributed random variables
NASA Astrophysics Data System (ADS)
Tucker, J. Derek; Azimi-Sadjadi, Mahmood R.
2010-04-01
In this paper a new detection method for sonar imagery is developed in K-distributed background clutter. The equation for the log-likelihood is derived and compared to the corresponding counterparts derived for the Gaussian and Rayleigh assumptions. Test results of the proposed method on a data set of synthetic underwater sonar images are also presented. This database contains images with targets of different shapes inserted into backgrounds generated using a correlated K-distributed model. Results illustrating the effectiveness of the K-distributed detector are presented in terms of probability of detection, false alarm, and correct classification rates for various bottom clutter scenarios.
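The sketch below generates K-distributed clutter through its standard compound representation (Rayleigh speckle modulated by gamma texture) and sets a threshold empirically for a target false-alarm rate; this empirical thresholding is a stand-in for the closed-form log-likelihood detector derived in the paper, and all parameters are assumptions.

```python
# Toy Neyman-Pearson-style thresholding against simulated K-distributed clutter.
import numpy as np

rng = np.random.default_rng(7)

def k_clutter(n, shape=1.5, scale=1.0):
    texture = rng.gamma(shape, scale, size=n)             # slowly varying local power
    speckle = rng.rayleigh(scale=1.0 / np.sqrt(2), size=n)  # unit-power Rayleigh speckle
    return np.sqrt(texture) * speckle                      # K-distributed amplitude

clutter = k_clutter(200_000)
pfa = 1e-3
threshold = np.quantile(clutter, 1.0 - pfa)                # empirical threshold for the desired Pfa

targets = clutter[:1000] + 2.5                             # toy "target present" samples
pd = np.mean(targets > threshold)
print(f"threshold={threshold:.2f}, empirical Pd={pd:.2f} at Pfa={pfa}")
```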
Manna, F; Pradel, R; Choquet, R; Fréville, H; Cheptou, P-O
2017-10-01
In plants, the presence of a seed bank challenges the application of classical metapopulation models to aboveground presence surveys; ignoring seed bank leads to overestimated extinction and colonization rates. In this article, we explore the possibility to detect seed bank using hidden Markov models in the analysis of aboveground patch occupancy surveys of an annual plant with limited dispersal. Patch occupancy data were generated by simulation under two metapopulation sizes (N = 200 and N = 1,000 patches) and different metapopulation scenarios, each scenario being a combination of the presence/absence of a 1-yr seed bank and the presence/absence of limited dispersal in a circular 1-dimension configuration of patches. In addition, because local conditions often vary among patches in natural metapopulations, we simulated patch occupancy data with heterogeneous germination rate and patch disturbance. Seed bank is not observable from aboveground patch occupancy surveys, hence hidden Markov models were designed to account for uncertainty in patch occupancy. We explored their ability to retrieve the correct scenario. For 10 yr surveys and metapopulation sizes of N = 200 or 1,000 patches, the correct metapopulation scenario was detected at a rate close to 100%, whatever the underlying scenario considered. For smaller, more realistic, survey duration, the length for a reliable detection of the correct scenario depends on the metapopulation size: 3 yr for N = 1,000 and 6 yr for N = 200 are enough. Our method remained powerful to disentangle seed bank from dispersal in the presence of patch heterogeneity affecting either seed germination or patch extinction. Our work shows that seed bank and limited dispersal generate different signatures on aboveground patch occupancy surveys. Therefore, our method provides a powerful tool to infer metapopulation dynamics in a wide range of species with an undetectable life form. © 2017 by the Ecological Society of America.
Analyzing time-ordered event data with missed observations.
Dokter, Adriaan M; van Loon, E Emiel; Fokkema, Wimke; Lameris, Thomas K; Nolet, Bart A; van der Jeugd, Henk P
2017-09-01
A common problem with observational datasets is that not all events of interest may be detected. For example, observing animals in the wild can be difficult when animals move, hide, or cannot be closely approached. We consider time series of events recorded in conditions where events are occasionally missed by observers or observational devices. These time series are not restricted to behavioral protocols, but can be any cyclic or recurring process where discrete outcomes are observed. Undetected events cause biased inferences on the process of interest, and statistical analyses are needed that can identify and correct the compromised detection processes. Missed observations in time series lead to observed time intervals between events at multiples of the true inter-event time, which conveys information on their detection probability. We derive the theoretical probability density function for observed intervals between events that includes a probability of missed detection. Methodology and software tools are provided for analysis of event data with potential observation bias and its removal. The methodology was applied to simulation data and a case study of defecation rate estimation in geese, which is commonly used to estimate their digestive throughput and energetic uptake, or to calculate goose usage of a feeding site from dropping density. Simulations indicate that at a moderate chance to miss arrival events (p = 0.3), uncorrected arrival intervals were biased upward by up to a factor 3, while parameter values corrected for missed observations were within 1% of their true simulated value. A field case study shows that not accounting for missed observations leads to substantial underestimates of the true defecation rate in geese, and spurious rate differences between sites, which are introduced by differences in observational conditions. These results show that the derived methodology can be used to effectively remove observational biases in time-ordered event data.
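The key idea above, that observed intervals pile up at multiples of the true inter-event time, can be illustrated with a small maximum-likelihood fit. The Gaussian-jitter mixture below is an assumption standing in for the paper's derived interval density, and the simulated parameters are invented.

```python
# Sketch: recover the true inter-event time T and detection probability p from
# observed intervals when events are missed with probability 1 - p.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
T_true, p_true, sigma = 5.0, 0.7, 0.4
events = np.cumsum(rng.normal(T_true, sigma, size=2000))
observed = events[rng.random(events.size) < p_true]        # each event detected with prob p
x = np.diff(observed)                                       # observed intervals

def neg_loglik(theta, x, kmax=6):
    T, sig, p = theta
    if T <= 0 or sig <= 0 or not (0 < p <= 1):
        return np.inf
    # an observed interval spans k true intervals with probability p*(1-p)**(k-1)
    dens = sum(p * (1 - p) ** (k - 1) * norm.pdf(x, loc=k * T, scale=np.sqrt(k) * sig)
               for k in range(1, kmax + 1))
    return -np.sum(np.log(dens + 1e-300))

fit = minimize(neg_loglik, x0=[np.median(x), 1.0, 0.5], args=(x,), method="Nelder-Mead")
print("naive mean interval:", round(x.mean(), 2), "fitted (T, sigma, p):", np.round(fit.x, 3))
```

The naive mean interval overestimates the true value by roughly the factor 1/p, while the mixture fit recovers both the true interval and the detection probability.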
A Hybrid Wavelet-Based Method for the Peak Detection of Photoplethysmography Signals.
Li, Suyi; Jiang, Shanqing; Jiang, Shan; Wu, Jiang; Xiong, Wenji; Diao, Shu
2017-01-01
The noninvasive peripheral oxygen saturation (SpO2) and the pulse rate can be extracted from photoplethysmography (PPG) signals. However, the accuracy of the extraction is directly affected by the quality of the signal obtained and the peak of the signal identified; therefore, a hybrid wavelet-based method is proposed in this study. Firstly, we suppressed the partial motion artifacts and corrected the baseline drift by using a wavelet method based on the principle of wavelet multiresolution. And then, we designed a quadratic spline wavelet modulus maximum algorithm to identify the PPG peaks automatically. To evaluate this hybrid method, a reflective pulse oximeter was used to acquire ten subjects' PPG signals under sitting, raising hand, and gently walking postures, and the peak recognition results on the raw signal and on the corrected signal were compared, respectively. The results showed that the hybrid method not only corrected the morphologies of the signal well but also optimized the peaks identification quality, subsequently elevating the measurement accuracy of SpO2 and the pulse rate. As a result, our hybrid wavelet-based method profoundly optimized the evaluation of respiratory function and heart rate variability analysis.
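A hedged sketch of the preprocessing and peak-detection pipeline described above follows, using PyWavelets and SciPy; scipy.signal.find_peaks stands in for the quadratic spline wavelet modulus-maximum detector, and the sampling rate, wavelet, and decomposition level are assumptions.

```python
# Wavelet-based baseline removal followed by simple peak picking on a toy PPG signal.
import numpy as np
import pywt
from scipy.signal import find_peaks

def remove_baseline(ppg, wavelet="db4", level=8):
    coeffs = pywt.wavedec(ppg, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])            # drop the approximation = slow baseline drift
    return pywt.waverec(coeffs, wavelet)[: len(ppg)]

fs = 100.0                                          # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.8 * np.sin(2 * np.pi * 0.05 * t)   # toy pulses + drift

corrected = remove_baseline(ppg)
peaks, _ = find_peaks(corrected, distance=int(0.4 * fs), prominence=0.5)
print("pulse rate ~", 60.0 * len(peaks) / (t[-1] - t[0]), "beats per minute")
```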
On-board error correction improves IR earth sensor accuracy
NASA Astrophysics Data System (ADS)
Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.
1989-10-01
Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of errors in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors arising from seasonal variation of infra-red radiation, oblate shape of the earth, ambient temperature of sensor, changes in scan/spin rates have been analyzed. Simple relations are derived using least square curve fitting for on-board correction of these errors. Random errors arising out of noise from detector and amplifiers, instability of alignment and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference on earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on-board. It is possible to obtain eight times improvement in sensing accuracy, which will be comparable with ground based post facto attitude refinement.
Vedder, Oscar; Kürten, Nathalie; Bouwhuis, Sandra
Embryonic development time is thought to impact life histories through trade-offs against life-history traits later in life, yet the inference is based on interspecific comparative analyses only. It is largely unclear whether intraspecific variation in embryonic development time that is not caused by environmental differences occurs, which would be required to detect life-history trade-offs. Here we performed a classical common-garden experiment by incubating fresh eggs of free-living common terns (Sterna hirundo) in a controlled incubation environment at two different temperatures. Hatching success was high but was slightly lower at the lower temperature. While correcting for effects of year, incubation temperature, and laying order, we found significant variation in the incubation time embryos required until hatching and in their heart rate. Embryonic heart rate was significantly positively correlated within clutches, and a similar tendency was found for incubation time, suggesting that intrinsic differences in embryonic development rate between offspring of different parents exist. Incubation time and embryonic heart rate were strongly correlated: embryos with faster heart rates required shorter incubation time. However, after correction for heart rate, embryos still required more time for development at the lower incubation temperature. This suggests that processes other than development require a greater share of resources in a suboptimal environment and that relative resource allocation to development is, therefore, environment dependent. We conclude that there is opportunity to detect intraspecific life-history trade-offs with embryonic development time and that the resolution of trade-offs may differ between embryonic environments.
Warren, Victoria E; Marques, Tiago A; Harris, Danielle; Thomas, Len; Tyack, Peter L; Aguilar de Soto, Natacha; Hickmott, Leigh S; Johnson, Mark P
2017-03-01
Passive acoustic monitoring has become an increasingly prevalent tool for estimating density of marine mammals, such as beaked whales, which vocalize often but are difficult to survey visually. Counts of acoustic cues (e.g., vocalizations), when corrected for detection probability, can be translated into animal density estimates by applying an individual cue production rate multiplier. It is essential to understand variation in these rates to avoid biased estimates. The most direct way to measure cue production rate is with animal-mounted acoustic recorders. This study utilized data from sound recording tags deployed on Blainville's (Mesoplodon densirostris, 19 deployments) and Cuvier's (Ziphius cavirostris, 16 deployments) beaked whales, in two locations per species, to explore spatial and temporal variation in click production rates. No spatial or temporal variation was detected within the average click production rate of Blainville's beaked whales when calculated over dive cycles (including silent periods between dives); however, spatial variation was detected when averaged only over vocal periods. Cuvier's beaked whales exhibited significant spatial and temporal variation in click production rates within vocal periods and when silent periods were included. This evidence of variation emphasizes the need to utilize appropriate cue production rates when estimating density from passive acoustic data.
Detection rates of geckos in visual surveys: Turning confounding variables into useful knowledge
Lardner, Bjorn; Rodda, Gordon H.; Yackel Adams, Amy A.; Savidge, Julie A.; Reed, Robert N.
2016-01-01
Transect surveys without some means of estimating detection probabilities generate population size indices prone to bias because survey conditions differ in time and space. Knowing what causes such bias can help guide the collection of relevant survey covariates, correct the survey data, anticipate situations where bias might be unacceptably large, and elucidate the ecology of target species. We used negative binomial regression to evaluate confounding variables for gecko (primarily Hemidactylus frenatus and Lepidodactylus lugubris) counts on 220-m-long transects surveyed at night, primarily for snakes, on 9,475 occasions. Searchers differed in gecko detection rates by up to a factor of six. The worst and best headlamps differed by a factor of at least two. Strong winds had a negative effect potentially as large as those of searchers or headlamps. More geckos were seen during wet weather conditions, but the effect size was small. Compared with a detection nadir during waxing gibbous (nearly full) moons above the horizon, we saw 28% more geckos during waning crescent moons below the horizon. A sine function suggested that we saw 24% more geckos at the end of the wet season than at the end of the dry season. Fluctuations on a longer timescale also were verified. Disturbingly, corrected data exhibited strong short-term fluctuations that covariates apparently failed to capture. Although some biases can be addressed with measured covariates, others will be difficult to eliminate as a significant source of error in long-term monitoring programs.
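The modelling approach named above, negative binomial regression of nightly counts on survey covariates, can be sketched with statsmodels; the covariate names, effect sizes, and simulated counts below are invented for illustration and are not the study's data.

```python
# Negative binomial regression of overdispersed transect counts on survey covariates.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 400
df = pd.DataFrame({
    "wind": rng.uniform(0, 1, n),          # standardized wind strength (hypothetical)
    "moon_up": rng.integers(0, 2, n),      # bright moon above horizon (1) or not (0)
    "searcher_b": rng.integers(0, 2, n),   # indicator for a second, better searcher
})
mu = np.exp(2.0 - 0.8 * df["wind"] - 0.3 * df["moon_up"] + 0.9 * df["searcher_b"])
df["count"] = rng.poisson(mu * rng.gamma(2.0, 0.5, n))     # gamma mixing -> overdispersion

X = sm.add_constant(df[["wind", "moon_up", "searcher_b"]])
model = sm.GLM(df["count"], X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(model.summary().tables[1])
```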
A lightweight QRS detector for single lead ECG signals using a max-min difference algorithm.
Pandit, Diptangshu; Zhang, Li; Liu, Chengyu; Chattopadhyay, Samiran; Aslam, Nauman; Lim, Chee Peng
2017-06-01
Detection of the R-peak pertaining to the QRS complex of an ECG signal plays an important role in the diagnosis of a patient's heart condition. To accurately identify the QRS locations from the acquired raw ECG signals, we need to handle a number of challenges, which include noise, baseline wander, varying peak amplitudes, and signal abnormality. This research aims to address these challenges by developing an efficient lightweight algorithm for QRS (i.e., R-peak) detection from raw ECG signals. A lightweight real-time sliding window-based Max-Min Difference (MMD) algorithm for QRS detection from Lead II ECG signals is proposed. To achieve the best trade-off between computational efficiency and detection accuracy, the proposed algorithm consists of five key steps for QRS detection, namely, baseline correction, MMD curve generation, dynamic threshold computation, R-peak detection, and error correction. Five annotated databases from Physionet are used for evaluating the proposed algorithm in R-peak detection. Integrated with a feature extraction technique and a neural network classifier, the proposed QRS detection algorithm has also been extended to undertake normal and abnormal heartbeat detection from ECG signals. The proposed algorithm exhibits a high degree of robustness in QRS detection and achieves an average sensitivity of 99.62% and an average positive predictivity of 99.67%. Its performance compares favorably with those from the existing state-of-the-art models reported in the literature. With regard to normal and abnormal heartbeat detection, the proposed QRS detection algorithm in combination with the feature extraction technique and neural network classifier achieves an overall accuracy rate of 93.44% based on an empirical evaluation using the MIT-BIH Arrhythmia data set with 10-fold cross validation. In comparison with other related studies, the proposed algorithm offers a lightweight adaptive alternative for R-peak detection with good computational efficiency. The empirical results indicate that it not only yields a high accuracy rate in QRS detection, but also exhibits efficient computational complexity at the order of O(n), where n is the length of an ECG signal. Copyright © 2017 Elsevier B.V. All rights reserved.
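A simplified sliding-window max-min difference detector in the spirit of the algorithm above is sketched below; the window length, threshold rule, and refractory period are assumptions, and the baseline-correction and error-correction steps of the published method are omitted.

```python
# Toy MMD-style R-peak detector on a synthetic spiky "ECG".
import numpy as np

def mmd_qrs(ecg, fs, win_s=0.1, refractory_s=0.25, k=0.5):
    win = max(int(win_s * fs), 1)
    # max-min difference curve over a sliding window
    mmd = np.array([ecg[i:i + win].max() - ecg[i:i + win].min()
                    for i in range(len(ecg) - win)])
    thr = k * mmd.max()                               # crude dynamic threshold
    peaks, last = [], -np.inf
    for i in np.flatnonzero(mmd > thr):
        if i - last > refractory_s * fs:              # enforce a refractory period
            seg = slice(i, min(i + win, len(ecg)))
            peaks.append(i + int(np.argmax(ecg[seg])))
            last = i
    return np.array(peaks)

fs = 250.0
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(np.pi * t) ** 64                          # narrow spikes once per second (~60 bpm)
print("detected beats:", len(mmd_qrs(ecg, fs)))
```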
Gpm Level 1 Science Requirements: Science and Performance Viewed from the Ground
NASA Technical Reports Server (NTRS)
Petersen, W.; Kirstetter, P.; Wolff, D.; Kidd, C.; Tokay, A.; Chandrasekar, V.; Grecu, M.; Huffman, G.; Jackson, G. S.
2016-01-01
GPM meets Level 1 science requirements for rain estimation based on the strong performance of its radar algorithms. Changes in the V5 GPROF algorithm should correct errors in V4 and will likely resolve GPROF performance issues relative to L1 requirements. L1 FOV snow detection is largely verified, but at an unknown SWE rate threshold (likely < 0.5–1 mm/hr liquid equivalent). Work is ongoing to improve SWE rate estimation for both satellite and GV remote sensing.
2011-01-01
Background Monitoring the time course of mortality by cause is a key public health issue. However, several mortality data production changes may affect cause-specific time trends, thus altering the interpretation. This paper proposes a statistical method that detects abrupt changes ("jumps") and estimates correction factors that may be used for further analysis. Methods The method was applied to a subset of the AMIEHS (Avoidable Mortality in the European Union, toward better Indicators for the Effectiveness of Health Systems) project mortality database and considered for six European countries and 13 selected causes of deaths. For each country and cause of death, an automated jump detection method called Polydect was applied to the log mortality rate time series. The plausibility of a data production change associated with each detected jump was evaluated through literature search or feedback obtained from the national data producers. For each plausible jump position, the statistical significance of the between-age and between-gender jump amplitude heterogeneity was evaluated by means of a generalized additive regression model, and correction factors were deduced from the results. Results Forty-nine jumps were detected by the Polydect method from 1970 to 2005. Most of the detected jumps were found to be plausible. The age- and gender-specific amplitudes of the jumps were estimated when they were statistically heterogeneous, and they showed greater by-age heterogeneity than by-gender heterogeneity. Conclusion The method presented in this paper was successfully applied to a large set of causes of death and countries. The method appears to be an alternative to bridge coding methods when the latter are not systematically implemented because they are time- and resource-consuming. PMID:21929756
Prenatal development in fishers (Martes pennanti)
Frost, H.C.; Krohn, W.B.; Bezembluk, E.A.; Lott, R.; Wallace, C.R.
2005-01-01
We evaluated and quantified prenatal growth of fishers (Martes pennanti) using ultrasonography. Seven females gave birth to 21 kits. The first identifiable embryonic structures were seen 42 d prepartum; these appeared to be unimplanted blastocysts or gestational sacs, which subsequently implanted in the uterine horns. Maternal and fetal heart rates were monitored from first detection to birth. Maternal heart rates did not differ among sampling periods, while fetal heart rates increased from first detection to birth. Head and body differentiation, visible limbs and skeletal ossification were visible by 30, 23 and 21 d prepartum, respectively. Mean diameter of gestational sacs and crown-rump lengths were linearly related to gestational age (P < 0.001). Biparietal and body diameters were also linearly related to gestational age (P < 0.001) and correctly predicted parturition dates within 1-2 d. © 2004 Elsevier Inc. All rights reserved.
Correcting false memories: Errors must be noticed and replaced.
Mullet, Hillary G; Marsh, Elizabeth J
2016-04-01
Memory can be unreliable. For example, after reading The new baby stayed awake all night, people often misremember that the new baby cried all night (Brewer, 1977); similarly, after hearing bed, rest, and tired, people often falsely remember that sleep was on the list (Roediger & McDermott, 1995). In general, such false memories are difficult to correct, persisting despite warnings and additional study opportunities. We argue that errors must first be detected to be corrected; consistent with this argument, two experiments showed that false memories were nearly eliminated when conditions facilitated comparisons between participants' errors and corrective feedback (e.g., immediate trial-by-trial feedback that allowed direct comparisons between their responses and the correct information). However, knowledge that they had made an error was insufficient; unless the feedback message also contained the correct answer, the rate of false memories remained relatively constant. On the one hand, there is nothing special about correcting false memories: simply labeling an error as "wrong" is also insufficient for correcting other memory errors, including misremembered facts or mistranslations. However, unlike these other types of errors--which often benefit from the spacing afforded by delayed feedback--false memories require a special consideration: Learners may fail to notice their errors unless the correction conditions specifically highlight them.
NASA Astrophysics Data System (ADS)
Pahlevaninezhad, H.; Lee, A. M. D.; Hyun, C.; Lam, S.; MacAulay, C.; Lane, P. M.
2013-03-01
In this paper, we conduct a phantom study for modeling the autofluorescence (AF) properties of tissue. A combined optical coherence tomography (OCT) and AF imaging system is proposed to measure the strength of the AF signal in terms of the scattering layer thickness and concentration. The combined AF-OCT system is capable of estimating the AF loss due to scattering in the epithelium using the thickness and scattering concentration calculated from the co-registered OCT images. We define a correction factor to account for scattering losses in the epithelium and calculate a scattering-corrected AF signal. We believe the scattering-corrected AF will reduce the diagnostic false-positive rate in the early detection of airway lesions due to confounding factors such as increased epithelial thickness and inflammation.
Applications and error correction for adiabatic quantum optimization
NASA Astrophysics Data System (ADS)
Pudenz, Kristen
Adiabatic quantum optimization (AQO) is a fast-developing subfield of quantum information processing which holds great promise in the relatively near future. Here we develop an application, quantum anomaly detection, and an error correction code, Quantum Annealing Correction (QAC), for use with AQO. The motivation for the anomaly detection algorithm is the problematic nature of classical software verification and validation (V&V). The number of lines of code written for safety-critical applications such as cars and aircraft increases each year, and with it the cost of finding errors grows exponentially (the cost of overlooking errors, which can be measured in human safety, is arguably even higher). We approach the V&V problem by using a quantum machine learning algorithm to identify characteristics of software operations that are implemented outside of specifications, then define an AQO to return these anomalous operations as its result. Our error correction work is the first large-scale experimental demonstration of quantum error correcting codes. We develop QAC and apply it to USC's equipment, the first and second generations of commercially available D-Wave AQO processors. We first show comprehensive experimental results for the code's performance on antiferromagnetic chains, scaling the problem size up to 86 logical qubits (344 physical qubits) and recovering significant encoded success rates even when the unencoded success rates drop to almost nothing. A broader set of randomized benchmarking problems is then introduced, for which we observe similar behavior to the antiferromagnetic chain, specifically that the use of QAC is almost always advantageous for problems of sufficient size and difficulty. Along the way, we develop problem-specific optimizations for the code and gain insight into the various on-chip error mechanisms (most prominently thermal noise, since the hardware operates at finite temperature) and the ways QAC counteracts them. We finish by showing that the scheme is robust to qubit loss on-chip, a significant benefit when considering an implemented system.
Zhao, Jianhu; Zhang, Hongmei; Wang, Shiqi
2017-01-01
Multibeam echosounder systems (MBES) can record backscatter strengths of gas plumes in the water column (WC) images that may be an indicator of possible occurrence of gas at certain depths. Manual or automatic detection is generally adopted in finding gas plumes, but frequently results in low efficiency and high false detection rates because of WC images that are polluted by noise. To improve the efficiency and reliability of the detection, a comprehensive detection method is proposed in this paper. In the proposed method, the characteristics of WC background noise are first analyzed and given. Then, the mean standard deviation threshold segmentations are respectively used for the denoising of time-angle and depth-angle images, an intersection operation is performed for the two segmented images to further weaken noise in the WC data, and the gas plumes in the WC data are detected from the intersection image by the morphological constraint. The proposed method was tested by conducting shallow-water and deepwater experiments. In these experiments, the detections were conducted automatically and higher correct detection rates than the traditional methods were achieved. The performance of the proposed method is analyzed and discussed. PMID:29186014
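The detection chain summarized above can be sketched as a thresholding, intersection, and morphological-filtering pipeline; the threshold multiplier, structuring element, and synthetic images below are assumptions for illustration, not the paper's parameters.

```python
# Threshold each water-column image at mean + k*std, intersect the masks, and
# keep only blob-like regions as plume candidates.
import numpy as np
from scipy import ndimage

def segment(img, k=2.0):
    return img > img.mean() + k * img.std()

def detect_plumes(time_angle_img, depth_angle_img, k=2.0, min_size=30):
    mask = segment(time_angle_img, k) & segment(depth_angle_img, k)   # intersection weakens noise
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))    # morphological constraint
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = 1 + np.flatnonzero(sizes >= min_size)                      # drop tiny blobs
    return np.isin(labels, keep)

rng = np.random.default_rng(5)
a = rng.normal(0, 1, (200, 200)); b = rng.normal(0, 1, (200, 200))
a[60:120, 90:100] += 6; b[60:120, 90:100] += 6                        # synthetic plume
print("plume pixels detected:", int(detect_plumes(a, b).sum()))
```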
Meuter, Renata F I; Lacherez, Philippe F
2016-03-01
We aimed to assess the impact of task demands and individual characteristics on threat detection in baggage screeners. Airport security staff work under time constraints to ensure optimal threat detection. Understanding the impact of individual characteristics and task demands on performance is vital to ensure accurate threat detection. We examined threat detection in baggage screeners as a function of event rate (i.e., number of bags per minute) and time on task across 4 months. We measured performance in terms of the accuracy of detection of Fictitious Threat Items (FTIs) randomly superimposed on X-ray images of real passenger bags. Analyses of the percentage of correct FTI identifications (hits) show that longer shifts with high baggage throughput result in worse threat detection. Importantly, these significant performance decrements emerge within the first 10 min of these busy screening shifts only. Longer shift lengths, especially when combined with high baggage throughput, increase the likelihood that threats go undetected. Shorter shift rotations, although perhaps difficult to implement during busy screening periods, would ensure more consistently high vigilance in baggage screeners and, therefore, optimal threat detection and passenger safety. © 2015, Human Factors and Ergonomics Society.
Error detection and correction unit with built-in self-test capability for spacecraft applications
NASA Technical Reports Server (NTRS)
Timoc, Constantin
1990-01-01
The objective of this project was to research and develop a 32-bit single chip Error Detection and Correction unit capable of correcting all single bit errors and detecting all double bit errors in the memory systems of a spacecraft. We designed the 32-bit EDAC (Error Detection and Correction unit) based on a modified Hamming code and according to the design specifications and performance requirements. We constructed a laboratory prototype (breadboard) which was converted into a fault simulator. The correctness of the design was verified on the breadboard using an exhaustive set of test cases. A logic diagram of the EDAC was delivered to JPL Section 514 on 4 Oct. 1988.
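The single-error-correcting, double-error-detecting behaviour described above can be demonstrated at a much smaller width than the 32-bit EDAC chip; the sketch below implements a textbook SEC-DED Hamming code for a 4-bit word, which mirrors the modified-Hamming approach in spirit only.

```python
# Toy SEC-DED Hamming code: corrects any single-bit error, detects double errors.
import numpy as np

def encode(d):                       # d: 4 data bits (0/1)
    c = np.zeros(9, dtype=int)       # positions 1..8 used; index 0 unused for clarity
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    c[8] = c[1:8].sum() % 2          # overall parity bit enables double-error detection
    return c

def decode(c):
    c = c.copy()
    s = (c[1]^c[3]^c[5]^c[7]) + 2*(c[2]^c[3]^c[6]^c[7]) + 4*(c[4]^c[5]^c[6]^c[7])
    overall = c[1:9].sum() % 2
    if s == 0 and overall == 0:
        status = "no error"
    elif overall == 1:               # single error, possibly in the overall parity bit itself
        if s != 0:
            c[s] ^= 1
        else:
            c[8] ^= 1
        status = "single error corrected"
    else:
        status = "double error detected"
    return np.array([c[3], c[5], c[6], c[7]]), status

cw = encode([1, 0, 1, 1])
cw[5] ^= 1                            # inject a single-bit error
print(decode(cw))                     # -> original word, "single error corrected"
cw[2] ^= 1                            # inject a second error
print(decode(cw)[1])                  # -> "double error detected"
```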
High-performance IR detector modules
NASA Astrophysics Data System (ADS)
Wendler, Joachim; Cabanski, Wolfgang; Rühlich, Ingo; Ziegler, Johann
2004-02-01
The 3rd generation of infrared (IR) detection modules is expected to provide higher video resolution, advanced functions like multi-band or multi-color capability, higher frame rates, and better thermal resolution. AIM has developed staring and linear high-performance focal plane arrays (FPAs) integrated into detector/dewar cooler assemblies (IDCAs). Linear FPAs support high-resolution formats such as 1920 x 1152 (HDTV), 1280 x 960, or 1536 x 1152. The standard format for staring FPAs is 640 x 512. In this configuration, QEIP devices sensitive in the 8–10 µm band as well as MCT devices sensitive in the 3.4–5.0 µm band are available. A 256 x 256 high-speed detection module allows a full frame rate >800 Hz. The usability of long-wavelength devices in high-performance FLIR systems in particular depends not only on the classical electro-optical performance parameters such as NEDT, detectivity, and response homogeneity, but is mainly characterized by the stability of the correction coefficients used for image correction. The FPAs are available in suitable integrated detector/dewar cooler assemblies. The linear cooling engines are designed for maximum stability of the focal plane temperature, low operating temperatures down to 60 K, and high MTTF lifetimes of 6000 h and above, even under high ambient temperature conditions. The IDCAs are equipped with AIM standard or custom-specific command and control electronics (CCE) providing a well-defined interface to the system electronics. Video output signals are provided as 14-bit digital data at rates up to 80 MHz for the high-speed devices.
NASA Astrophysics Data System (ADS)
Ade, N.; Nam, T. L.; Mhlanga, S. H.
2013-05-01
Although the near-tissue equivalence of diamond allows the direct measurement of dose for clinical applications without the need for energy-corrections, it is often cited that diamond detectors require pre-irradiation, a procedure necessary to stabilize the response or sensitivity of a diamond detector before dose measurements. In addition, it has been pointed out that the relative dose measured with a diamond detector requires dose rate dependence correction and that the angular dependence of a detector could be due to its mechanical design or to the intrinsic angular sensitivity of the detection process. While the cause of instability of response has not been meticulously investigated, the issue of dose rate dependence correction is uncertain as some studies ignored it but reported good results. The aims of this study were therefore to investigate, in particular (1) the major cause of the unstable response of diamond detectors requiring pre-irradiation; (2) the influence of dose rate dependence correction in relative dose measurements; and (3) the angular dependence of the diamond detectors. The study was conducted with low-energy X-rays and electron therapy beams on HPHT and CVD synthesized diamonds. Ionization chambers were used for comparative measurements. Through systematic investigations, the major cause of the unstable response of diamond detectors requiring the recommended pre-irradiation step was isolated and attributed to the presence and effects of ambient light. The variation in a detector's response between measurements in light and dark conditions could be as high as 63% for a CVD diamond. Dose rate dependence parameters (Δ values) of 0.950 and 1.035 were found for the HPHT and CVD diamond detectors, respectively. Without corrections based on dose rate dependence, the relative differences between depth-doses measured with the diamond detectors and a Markus chamber for exposures to 7 and 14 MeV electron beams were within 2.5%. A dose rate dependence correction using the Δ values obtained seemed to worsen the performance of the HPHT sample (up to about 3.3%) but it had a marginal effect on the performance of the CVD sample. In addition, the angular response of the CVD diamond detector was shown to be comparable with that of a cylindrical chamber. This study concludes that once the responses of the diamond detectors have been stabilised and they are properly shielded from ambient light, pre-irradiation prior to each measurement is not required. Also, the relative dose measured with the diamond detectors does not require dose rate dependence corrections, as the required correction is only marginal and could have no dosimetric significance.
Improved chemical identification from sensor arrays using intelligent algorithms
NASA Astrophysics Data System (ADS)
Roppel, Thaddeus A.; Wilson, Denise M.
2001-02-01
Intelligent signal processing algorithms are shown to improve identification rates significantly in chemical sensor arrays. This paper focuses on the use of independently derived sensor status information to modify the processing of sensor array data by using a fast, easily-implemented "best-match" approach to filling in missing sensor data. Most fault conditions of interest (e.g., stuck high, stuck low, sudden jumps, excess noise, etc.) can be detected relatively simply by adjunct data processing, or by on-board circuitry. The objective then is to devise, implement, and test methods for using this information to improve the identification rates in the presence of faulted sensors. In one typical example studied, utilizing separately derived, a-priori knowledge about the health of the sensors in the array improved the chemical identification rate by an artificial neural network from below 10 percent correct to over 99 percent correct. While this study focuses experimentally on chemical sensor arrays, the results are readily extensible to other types of sensor platforms.
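The "best-match" idea described above can be sketched as follows: when some channels are flagged as faulty, the reading is compared to a library of stored response patterns using only the healthy channels, and the faulty channels are imputed from the best-matching pattern. The library, fault flags, and distance metric below are assumptions for illustration.

```python
# Best-match fill-in for faulted sensors in a chemical sensor array.
import numpy as np

def best_match_fill(reading, healthy, library):
    """reading: (n_sensors,); healthy: boolean mask; library: (n_patterns, n_sensors)."""
    d = np.linalg.norm(library[:, healthy] - reading[healthy], axis=1)  # distance on healthy channels only
    best = library[np.argmin(d)]
    filled = reading.copy()
    filled[~healthy] = best[~healthy]          # impute faulty channels from the best-matching pattern
    return filled, int(np.argmin(d))

library = np.array([[0.2, 0.8, 0.5, 0.1],      # stored responses for known analytes (hypothetical)
                    [0.9, 0.1, 0.4, 0.7],
                    [0.3, 0.3, 0.9, 0.6]])
reading = np.array([0.88, 0.12, 0.0, 0.72])    # sensor 3 stuck low
healthy = np.array([True, True, False, True])
print(best_match_fill(reading, healthy, library))
```

The filled-in vector can then be passed to the downstream classifier (e.g., the neural network) exactly as a healthy reading would be.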
Simulation of Rate-Related (Dead-Time) Losses In Passive Neutron Multiplicity Counting Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, L.G.; Norman, P.I.; Leadbeater, T.W.
Passive Neutron Multiplicity Counting (PNMC) based on Multiplicity Shift Register (MSR) electronics (a form of time correlation analysis) is a widely used non-destructive assay technique for quantifying spontaneously fissile materials such as Pu. At high event rates, dead-time losses perturb the count rates with the Singles, Doubles and Triples being increasingly affected. Without correction these perturbations are a major source of inaccuracy in the measured count rates and assay values derived from them. This paper presents the simulation of dead-time losses and investigates the effect of applying different dead-time models on the observed MSR data. Monte Carlo methods have been used to simulate neutron pulse trains for a variety of source intensities and with ideal detection geometry, providing an event by event record of the time distribution of neutron captures within the detection system. The action of the MSR electronics was modelled in software to analyse these pulse trains. Stored pulse trains were perturbed in software to apply the effects of dead-time according to the chosen physical process; for example, the ideal paralysable (extending) and non-paralysable models with an arbitrary dead-time parameter. Results of the simulations demonstrate the change in the observed MSR data when the system dead-time parameter is varied. In addition, the paralysable and non-paralysable models of dead-time are compared. These results form part of a larger study to evaluate existing dead-time corrections and to extend their application to correlated sources. (authors)
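Applying the two classical dead-time models named above to a simulated pulse train can be sketched in a few lines; the Poisson source, rate, and dead-time parameter below are illustrative assumptions, not the reported simulation.

```python
# Paralyzable vs non-paralyzable dead time applied to a simulated neutron pulse train.
import numpy as np

rng = np.random.default_rng(42)

def poisson_train(rate, duration):
    t, times = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate)
        if t > duration:
            return np.array(times)
        times.append(t)

def non_paralyzable(times, tau):
    kept, last = [], -np.inf
    for t in times:
        if t - last >= tau:            # events inside the dead window are simply lost
            kept.append(t); last = t
    return np.array(kept)

def paralyzable(times, tau):
    kept, last_event = [], -np.inf
    for t in times:
        if t - last_event >= tau:      # any event, recorded or not, extends the dead window
            kept.append(t)
        last_event = t
    return np.array(kept)

train = poisson_train(rate=5e4, duration=1.0)       # 50 kcps source for 1 s
tau = 2e-6                                           # 2 microsecond dead time
print("true:", len(train),
      "non-paralyzable:", len(non_paralyzable(train, tau)),
      "paralyzable:", len(paralyzable(train, tau)))
```

At this rate-times-dead-time product the two models already diverge noticeably, which is exactly the sensitivity the simulation study exploits.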
Kery, M.; Royle, J. Andrew; Schmid, Hans; Schaub, M.; Volet, B.; Hafliger, G.; Zbinden, N.
2010-01-01
Species' assessments must frequently be derived from opportunistic observations made by volunteers (i.e., citizen scientists). Interpretation of the resulting data to estimate population trends is plagued with problems, including teasing apart genuine population trends from variations in observation effort. We devised a way to correct for annual variation in effort when estimating trends in occupancy (species distribution) from faunal or floral databases of opportunistic observations. First, for all surveyed sites, detection histories (i.e., strings of detection-nondetection records) are generated. Within-season replicate surveys provide information on the detectability of an occupied site. Detectability directly represents observation effort; hence, estimating detectability means correcting for observation effort. Second, site-occupancy models are applied directly to the detection-history data set (i.e., without aggregation by site and year) to estimate detectability and species distribution (occupancy, i.e., the true proportion of sites where a species occurs). Site-occupancy models also provide unbiased estimators of components of distributional change (i.e., colonization and extinction rates). We illustrate our method with data from a large citizen-science project in Switzerland in which field ornithologists record opportunistic observations. We analyzed data collected on four species: the widespread Kingfisher (Alcedo atthis) and Sparrowhawk (Accipiter nisus) and the scarce Rock Thrush (Monticola saxatilis) and Wallcreeper (Tichodroma muraria). Our method requires that all observed species are recorded. Detectability was <1 and varied over the years. Simulations suggested some robustness, but we advocate recording complete species lists (checklists), rather than recording individual records of single species. The representation of observation effort with its effect on detectability provides a solution to the problem of differences in effort encountered when extracting trend information from haphazard observations. We expect our method is widely applicable for global biodiversity monitoring and modeling of species distributions. © 2010 Society for Conservation Biology.
USDA-ARS?s Scientific Manuscript database
The Look AHEAD (Action for Health in Diabetes) Study is a long-term clinical trial that aims to determine the cardiovascular disease (CVD) benefits of an intensive lifestyle intervention (ILI) in obese adults with type 2 diabetes. The study was designed to have 90% statistical power to detect an 18%...
Active optical sensors for tree stem detection and classification in nurseries.
Garrido, Miguel; Perez-Ruiz, Manuel; Valero, Constantino; Gliever, Chris J; Hanson, Bradley D; Slaughter, David C
2014-06-19
Active optical sensing (LIDAR and light curtain transmission) devices mounted on a mobile platform can correctly detect, localize, and classify trees. To conduct an evaluation and comparison of the different sensors, an optical encoder wheel was used for vehicle odometry and provided a measurement of the linear displacement of the prototype vehicle along a row of tree seedlings as a reference for each recorded sensor measurement. The field trials were conducted in a juvenile tree nursery with one-year-old grafted almond trees at Sierra Gold Nurseries, Yuba City, CA, United States. Through these tests and subsequent data processing, each sensor was individually evaluated to characterize its reliability, as well as its advantages and disadvantages for the proposed task. Test results indicated that 95.7% and 99.48% of the trees were successfully detected with the LIDAR and light curtain sensors, respectively. LIDAR correctly classified trees as alive or dead at a 93.75% success rate, compared to 94.16% for the light curtain sensor. These results can help system designers select the most reliable sensor for the accurate detection and localization of each tree in a nursery, which might allow labor-intensive tasks, such as weeding, to be automated without damaging crops.
An improved pi/4-QPSK with nonredundant error correction for satellite mobile broadcasting
NASA Technical Reports Server (NTRS)
Feher, Kamilo; Yang, Jiashi
1991-01-01
An improved pi/4-quadrature phase-shift keying (QPSK) receiver that incorporates a simple nonredundant error correction (NEC) structure is proposed for satellite and land-mobile digital broadcasting. The bit-error-rate (BER) performance of the pi/4-QPSK with NEC is analyzed and evaluated in a fast Rician fading and additive white Gaussian noise (AWGN) environment using computer simulation. It is demonstrated that with simple electronics the performance of a noncoherently detected pi/4-QPSK signal in both AWGN and fast Rician fading can be improved. When the K-factor (a ratio of average power of multipath signal to direct path power) of the Rician channel decreases, the improvement increases. An improvement of 1.2 dB could be obtained at a BER of 0.0001 in the AWGN channel. This performance gain is achieved without requiring any signal redundancy and additional bandwidth. Three types of noncoherent detection schemes of pi/4-QPSK with NEC structure, such as IF band differential detection, baseband differential detection, and FM discriminator, are discussed. It is concluded that the pi/4-QPSK with NEC is an attractive scheme for power-limited satellite land-mobile broadcasting systems.
Variance-reduction normalization technique for a compton camera system
NASA Astrophysics Data System (ADS)
Kim, S. M.; Lee, J. S.; Kim, J. H.; Seo, H.; Kim, C. H.; Lee, C. S.; Lee, S. J.; Lee, M. C.; Lee, D. S.
2011-01-01
For an artifact-free dataset, pre-processing (known as normalization) is needed to correct the inherent non-uniformity of detection in the Compton camera, which consists of scattering and absorbing detectors. The detection efficiency depends on the non-uniform detection efficiency of the scattering and absorbing detectors, different incidence angles onto the detector surfaces, and the geometry of the two detectors. The correction factor for each detected position pair, referred to as the normalization coefficient, is expressed as a product of factors representing the various variations. The variance-reduction technique (VRT) for a Compton camera (a normalization method) was studied. For the VRT, the Compton list-mode data of a planar uniform source of 140 keV was generated from a GATE simulation tool. The projection data of a cylindrical software phantom were normalized with normalization coefficients determined from the non-uniformity map, and then reconstructed by an ordered subset expectation maximization algorithm. The coefficients of variation and percent errors of the 3-D reconstructed images showed that the VRT applied to the Compton camera provides an enhanced image quality and an increased recovery rate of uniformity in the reconstructed image.
Britton, Jr., Charles L.; Wintenberg, Alan L.
1993-01-01
A radiation detection method and system for continuously correcting the quantization of detected charge during pulse pile-up conditions. Charge pulses from a radiation detector responsive to the energy of detected radiation events are converted to voltage pulses of predetermined shape whose peak amplitudes are proportional to the quantity of charge of each corresponding detected event by means of a charge-sensitive preamplifier. These peak amplitudes are sampled and stored sequentially in accordance with their respective times of occurrence. Based on the stored peak amplitudes and times of occurrence, a correction factor is generated which represents the fraction of a previous pulse's influence on the following pulse's peak amplitude. This correction factor is subtracted from the following pulse's amplitude in a summing amplifier, whose output then represents the corrected charge quantity measurement.
[Evaluation of metabolic rate for a correct risk assessment of thermal environments].
Del Ferraro, Simona; Molinaro, V
2010-01-01
The new law n.81/2008 recognises microclimate as one of the physical agents for which risk assessment becomes obligatory. To achieve this it is necessary to evaluate suitable indices, based on the heat balance equation, which depend on six parameters: the first four are related to the thermal environment and the last two are related to the worker (metabolic rate and thermal insulation). The first four parameters are directly measurable in situ by using a multiple data acquisition unit provided with suitable sensors. The parameters related to the worker are not directly measurable. This aspect represents one of the problems which can lead to an inaccurate risk assessment. The aim of the paper was to identify a method which leads to a correct evaluation of the metabolic rate of the worker under study. It was decided to follow the procedures described by the standard UNI EN ISO 8996:2005, which presents four different levels for evaluating metabolic rate, each with an increasing degree of accuracy. Seven workers were selected: three performed light tasks and the other four did heavy work. The study showed that the results are in acceptable agreement in the case of light work, while there were detectable differences in value for heavy tasks. The Authors believe it is necessary to stress the importance of a suitable estimation of the metabolic rate in order to carry out a correct risk assessment which quantifies the risk exactly.
Becker, R A; Sales, N G; Santos, G M; Santos, G B; Carvalho, D C
2015-07-01
The identification of fish larvae from two neotropical hydrographic basins using traditional morphological taxonomy and DNA barcoding revealed no conflicting results between the morphological and barcode identification of larvae. A lower rate (25%) of correct morphological identification of eggs as belonging to migratory or non-migratory species was achieved. Accurate identification of ichthyoplankton by DNA barcoding is an important tool for fish reproductive behaviour studies, correct estimation of biodiversity by detecting eggs from rare species, as well as defining environmental and management strategies for fish conservation in the neotropics. © 2015 The Fisheries Society of the British Isles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoerner, M; Hintenlang, D
Purpose: A methodology is presented to correct for measurement inaccuracies at high detector count rates using plastic and GOS scintillation fibers coupled to a photomultiplier tube with digital readout. This system allows temporal acquisition and manipulation of measured data. Methods: The detection system used was a plastic scintillator and a separate gadolinium scintillator, both (0.5 diameter) coupled to an optical fiber with a Hamamatsu photon counter with a built-in microcontroller and digital interface. Count rate performance of the system was evaluated using the nonparalyzable detector model. Detector response was investigated across multiple radiation sources including an orthovoltage x-ray system, cobalt-60 gamma rays, a proton therapy beam, and a diagnostic radiography x-ray tube. The dead time parameter was calculated by measuring the count rate of the system at different exposure rates using a reference detector. Results: The system dead time was evaluated for the following sources of radiation used clinically: diagnostic energy x-rays, cobalt-60 gamma rays, orthovoltage x-rays, a proton accelerator, and megavoltage x-rays. It was found that dead time increased significantly when exposing the detector to sources capable of generating Cerenkov radiation (all of the sources except the diagnostic x-rays), with increasing prominence at higher photon energies. Percent depth dose curves generated by a dedicated ionization chamber and compared to the detection system demonstrated that correcting for dead time improves accuracy. For most sources, the nonparalyzable model fit provided an improved system response. Conclusion: Overall, the system dead time was variable across the investigated radiation particles and energies. It was demonstrated that the system response accuracy was greatly improved by correcting for dead time effects. Cerenkov radiation plays a significant role in the increase in the system dead time through transient absorption effects attributed to electron-hole pair creation within the optical waveguide.
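For reference, a minimal sketch of the standard nonparalyzable dead-time correction mentioned above; the dead-time value and count rate are illustrative assumptions, not figures from the study.

```python
def true_rate_nonparalyzable(measured_cps: float, dead_time_s: float) -> float:
    """Correct a measured count rate for nonparalyzable dead time.

    In the nonparalyzable model the measured rate m relates to the true
    rate n by m = n / (1 + n*tau), which inverts to n = m / (1 - m*tau).
    """
    if measured_cps * dead_time_s >= 1.0:
        raise ValueError("measured rate is at or beyond the model's saturation limit")
    return measured_cps / (1.0 - measured_cps * dead_time_s)

# Example: 5e5 cps measured with an assumed 200 ns dead time.
print(f"{true_rate_nonparalyzable(5e5, 200e-9):.0f} cps true rate")
```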
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, Antonio L., E-mail: adamato@lroc.harvard.edu; Viswanathan, Akila N.; Don, Sarah M.
2014-10-15
Purpose: To investigate the use of a system using electromagnetic tracking (EMT), post-processing and an error-detection algorithm for detecting errors and resolving uncertainties in high-dose-rate brachytherapy catheter digitization for treatment planning. Methods: EMT was used to localize 15 catheters inserted into a phantom using a stepwise acquisition technique. Five distinct acquisition experiments were performed. Noise associated with the acquisition was calculated. The dwell location configuration was extracted from the EMT data. A CT scan of the phantom was performed, and five distinct catheter digitization sessions were performed. No a priori registration of the CT scan coordinate system with the EMT coordinate system was performed. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT), and rigid registration was performed between EMT and CT dwell positions. EMT registration error was characterized in terms of the mean and maximum distance between corresponding EMT and CT dwell positions per catheter. An algorithm for error detection and identification was presented. Three types of errors were systematically simulated: swap of two catheter numbers, partial swap of catheter number identification for parts of the catheters (mix), and catheter-tip shift. Error-detection sensitivity (number of simulated scenarios correctly identified as containing an error/number of simulated scenarios containing an error) and specificity (number of scenarios correctly identified as not containing errors/number of correct scenarios) were calculated. Catheter identification sensitivity (number of catheters correctly identified as erroneous across all scenarios/number of erroneous catheters across all scenarios) and specificity (number of catheters correctly identified as correct across all scenarios/number of correct catheters across all scenarios) were calculated. The mean detected and identified shift was calculated. Results: The maximum noise ±1 standard deviation associated with the EMT acquisitions was 1.0 ± 0.1 mm, and the mean noise was 0.6 ± 0.1 mm. Registration of all the EMT and CT dwell positions was associated with a mean catheter error of 0.6 ± 0.2 mm, a maximum catheter error of 0.9 ± 0.4 mm, a mean dwell error of 1.0 ± 0.3 mm, and a maximum dwell error of 1.3 ± 0.7 mm. Error detection and catheter identification sensitivity and specificity of 100% were observed for swap, mix and shift (≥2.6 mm for error detection; ≥2.7 mm for catheter identification) errors. A mean detected shift of 1.8 ± 0.4 mm and a mean identified shift of 1.9 ± 0.4 mm were observed. Conclusions: Registration of the EMT dwell positions to the CT dwell positions was possible with a residual mean error per catheter of 0.6 ± 0.2 mm and a maximum error for any dwell of 1.3 ± 0.7 mm. These low residual registration errors show that quality assurance of the general characteristics of the catheters and of possible errors affecting one specific dwell position is possible. The sensitivity and specificity of the catheter digitization verification algorithm was 100% for swap and mix errors and for shifts ≥2.6 mm. On average, shifts ≥1.8 mm were detected, and shifts ≥1.9 mm were detected and identified.
CRLH-TL Sensors for Flow Inhomogeneities Detection of Pneumatic Conveyed Pulverized Solids
NASA Astrophysics Data System (ADS)
Angelovski, Aleksandar; Penirschke, Andreas; Jakoby, Rolf
2011-08-01
This paper presents an application of a Composite Right/Left-Handed (CRLH) Transmission Line resonator for a compact mass flow detector which is able to detect inhomogeneous flows. In this concept, series capacitors and shunt inductors are used to synthesize a medium with simultaneously negative permeability and permittivity - the so-called metamaterial. The helix shape of the cylindrical CRLH-TL sensor offers the possibility to detect flow inhomogeneities within the pipeline, which can be used to correct the detected mass flow rate. A combination of two CRLH-TL structures within the same cross-section of the pipeline can improve the angular sensitivity of the sensor. A prototype was realized and tested in a dedicated measurement setup to prove the concept.
Ge, Ji; Wang, YaoNan; Zhou, BoWen; Zhang, Hui
2009-01-01
A biologically inspired spiking neural network model, called pulse-coupled neural networks (PCNN), has been applied in an automatic inspection machine to detect visible foreign particles intermingled in glucose or sodium chloride injection liquids. Proper mechanisms and improved spin/stop techniques are proposed to avoid the appearance of air bubbles, which would otherwise increase the algorithm's complexity. A modified PCNN is adopted to segment the difference images, judging the existence of foreign particles according to the continuity and smoothness properties of their moving traces. Preliminary experimental results indicate that the inspection machine can detect visible foreign particles effectively and that the detection speed, accuracy and correct detection rate satisfy the needs of medicine preparation. PMID:22412318
The Chandra Source Catalog: Algorithms
NASA Astrophysics Data System (ADS)
McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.
Mesothelioma mortality in Europe: impact of asbestos consumption and simian virus 40.
Leithner, Katharina; Leithner, Andreas; Clar, Heimo; Weinhaeusel, Andreas; Radl, Roman; Krippl, Peter; Rehak, Peter; Windhager, Reinhard; Haas, Oskar A; Olschewski, Horst
2006-11-07
It is well established that asbestos is the most important cause of mesothelioma. The role of simian virus 40 (SV40) in mesothelioma development, on the other hand, remains controversial. This potential human oncogene has been introduced into various populations through contaminated polio vaccines. The aim of this study was to investigate whether the possible presence of SV40 in various European countries, as indicated either by molecular genetic evidence or previous exposure to SV40-contaminated vaccines, had any effect on pleural cancer rates in the respective countries. We conducted a Medline search that covered the period from January 1969 to August 2005 for reports on the detection of SV40 DNA in human tissue samples. In addition, we collected all available information about the types of polio vaccines that had been used in these European countries and their SV40 contamination status. Our ecological analysis confirms that pleural cancer mortality in males, but not in females, correlates with the extent of asbestos exposure 25-30 years earlier. In contrast, neither the presence of SV40 DNA in tumor samples nor a previous vaccination exposure had any detectable influence on the cancer mortality rate in either males (asbestos-corrected rates) or females. Using the currently existing data on SV40 prevalence, no association between SV40 prevalence and asbestos-corrected male pleural cancer can be demonstrated.
High-rate dead-time corrections in a general purpose digital pulse processing system
Abbene, Leonardo; Gerardi, Gaetano
2015-01-01
Dead-time losses are well recognized and studied drawbacks in counting and spectroscopic systems. In this work the dead-time correction capabilities of a real-time digital pulse processing (DPP) system for high-rate high-resolution radiation measurements are presented. The DPP system, through a fast and slow analysis of the output waveform from radiation detectors, is able to perform multi-parameter analysis (arrival time, pulse width, pulse height, pulse shape, etc.) at high input counting rates (ICRs), allowing accurate counting loss corrections even for variable or transient radiations. The fast analysis is used to obtain both the ICR and energy spectra with high throughput, while the slow analysis is used to obtain high-resolution energy spectra. A complete characterization of the counting capabilities, through both theoretical and experimental approaches, was performed. The dead-time modeling, the throughput curves, the experimental time-interval distributions (TIDs) and the counting uncertainty of the recorded events of both the fast and the slow channels, measured with a planar CdTe (cadmium telluride) detector, will be presented. The throughput formula for a series of two types of dead-times is also derived. The results of dead-time corrections, performed through different methods, will be reported and discussed, pointing out the error on ICR estimation and the simplicity of the procedure. Accurate ICR estimations (nonlinearity < 0.5%) were performed by using the time widths and the TIDs (using 10 ns time bin width) of the detected pulses up to 2.2 Mcps. The digital system allows, after a simple parameter setting, different and sophisticated procedures for dead-time correction, traditionally implemented in complex/dedicated systems and time-consuming set-ups. PMID:26289270
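As a companion to the dead-time discussion above, here is a small sketch (not the authors' DPP code, and not their series-dead-time formula) comparing the standard throughput curves of the two classic dead-time models; the dead-time value is an assumption for illustration.

```python
import numpy as np

tau = 1e-6  # assumed dead time of 1 microsecond (illustrative)
icr = np.logspace(3, 7, 9)  # input counting rates, cps

# Output (recorded) rate as a function of the input rate for the two
# classic dead-time models.
ocr_nonparalyzable = icr / (1.0 + icr * tau)
ocr_paralyzable = icr * np.exp(-icr * tau)

for n, m_np, m_p in zip(icr, ocr_nonparalyzable, ocr_paralyzable):
    print(f"ICR {n:9.0f} cps -> nonparalyzable {m_np:9.0f} cps, paralyzable {m_p:9.0f} cps")
```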
A high dynamic range pulse counting detection system for mass spectrometry.
Collings, Bruce A; Dima, Martian D; Ivosev, Gordana; Zhong, Feng
2014-01-30
A high dynamic range pulse counting system has been developed that demonstrates an ability to operate at up to 2e8 counts per second (cps) on a triple quadrupole mass spectrometer. Previous pulse counting detection systems have typically been limited to about 1e7 cps at the upper end of the system's dynamic range. Modifications to the detection electronics and dead time correction algorithm are described in this paper. A high gain transimpedance amplifier is employed that allows a multi-channel electron multiplier to be operated at a significantly lower bias potential than in previous pulse counting systems. The system utilises a high-energy conversion dynode, a multi-channel electron multiplier, a high gain transimpedance amplifier, non-paralysing detection electronics and a modified dead time correction algorithm. Modification of the dead time correction algorithm is necessary due to a characteristic of the pulse counting electronics. A pulse counting detection system with the capability to count at ion arrival rates of up to 2e8 cps is described. This is shown to provide a linear dynamic range of nearly five orders of magnitude for a sample of alprazolam with concentrations ranging from 0.0006970 ng/mL to 3333 ng/mL while monitoring the m/z 309.1 → m/z 205.2 transition. This represents an upward extension of the detector's linear dynamic range of about two orders of magnitude. A new high dynamic range pulse counting system has been developed demonstrating the ability to operate at up to 2e8 cps on a triple quadrupole mass spectrometer. This provides an upward extension of the detector's linear dynamic range by about two orders of magnitude over previous pulse counting systems. Copyright © 2013 John Wiley & Sons, Ltd.
Optical communication for space missions
NASA Technical Reports Server (NTRS)
Fitzmaurice, M.
1991-01-01
Activities performed at NASA/GSFC (Goddard Space Flight Center) related to direct detection optical communications for space applications are discussed. The following subject areas are covered: (1) requirements for optical communication systems (data rates and channel quality; spatial acquisition; fine tracking and pointing; and transmit point-ahead correction); (2) component testing and development (laser diodes performance characterization and life testing; and laser diode power combining); (3) system development and simulations (The GSFC pointing, acquisition and tracking system; hardware description; preliminary performance analysis; and high data rate transmitter/receiver systems); and (4) proposed flight demonstration of optical communications.
Fraser, Matthew; McKay, Colette M.
2012-01-01
Temporal modulation transfer functions (TMTFs) were measured for six users of cochlear implants, using different carrier rates and levels. Unlike most previous studies investigating modulation detection, the experimental design limited potential effects of overall loudness cues. Psychometric functions (percent correct discrimination of modulated from unmodulated stimuli versus modulation depth) were obtained. For each modulation depth, each modulated stimulus was loudness balanced to the unmodulated reference stimulus, and level jitter was applied in the discrimination task. The loudness-balance data showed that the modulated stimuli were louder than the unmodulated reference stimuli with the same average current, thus confirming the need to limit loudness cues when measuring modulation detection. TMTFs measured in this way had a low-pass characteristic, with a cut-off frequency (at comfortably loud levels) similar to that for normal-hearing listeners. A reduction in level caused degradation in modulation detection efficiency and a lower cut-off frequency (i.e., poorer temporal resolution). An increase in carrier rate also led to a degradation in modulation detection efficiency, but only at lower levels or higher modulation frequencies. When detection thresholds were expressed as a proportion of dynamic range, there was no effect of carrier rate for the lowest modulation frequency (50 Hz) at either level. PMID:22146425
Local concurrent error detection and correction in data structures using virtual backpointers
NASA Technical Reports Server (NTRS)
Li, C. C.; Chen, P. P.; Fuchs, W. K.
1987-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
Local concurrent error detection and correction in data structures using virtual backpointers
NASA Technical Reports Server (NTRS)
Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent
1989-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
NASA Astrophysics Data System (ADS)
Lyu, Jiang-Tao; Zhou, Chen
2017-12-01
Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important procedure for ionospheric refraction correction. Traditionally, an ionospheric model or ionospheric detection instruments, such as ionosondes or GPS receivers, are employed for obtaining the electron density. However, neither method is capable of satisfying the correction accuracy requirements of advanced space target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized for calculating the electron density integral exactly along the propagation path of the radar wave, which can generate an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated with a P-band radar located in midlatitude China. The experimental results show that this novel technique is more accurate than traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
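To illustrate the idea behind the dual-frequency approach described above, here is a sketch based on the standard first-order ionospheric group delay, Δr ≈ 40.3·TEC/f²; the frequencies and ranges below are made-up numbers, not values from the experiment.

```python
K = 40.3  # m * Hz^2 per (electrons/m^2), first-order ionospheric constant

def dual_frequency_correction(r1_m, r2_m, f1_hz, f2_hz):
    """Estimate TEC and the ionosphere-free range from ranges measured at two frequencies.

    Measured ranges: r_i = r_true + K * TEC / f_i**2  (first-order group delay).
    """
    tec = (r1_m - r2_m) / (K * (1.0 / f1_hz**2 - 1.0 / f2_hz**2))
    r_true = r1_m - K * tec / f1_hz**2
    return tec, r_true

# Hypothetical P-band example: two adjacent carriers, ranges differing by the dispersive delay.
tec, r_corrected = dual_frequency_correction(1_000_052.0, 1_000_048.5, 430e6, 450e6)
print(f"TEC ~ {tec:.3e} el/m^2, corrected range ~ {r_corrected:.1f} m")
```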
Local concurrent error detection and correction in data structures using virtual backpointers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, C.C.J.; Chen, P.P.; Fuchs, W.K.
1989-11-01
A new technique, based on virtual backpointers, is presented in this paper for local concurrent error detection and correction in linked data structures. Two new data structures utilizing virtual backpointers, the Virtual Double-Linked List and the B-Tree with Virtual Backpointers, are described. For these structures, double errors within a fixed-size checking window can be detected in constant time and single errors detected during forward moves can be corrected in constant time.
Choo, Wai K; McGeary, Katie; Farman, Colin; Greyling, Andre; Cross, Stephen J; Leslie, Stephen J
2014-01-01
This study aimed to examine whether general practitioner (GP) practice locations in remote and rural areas affected the pattern of direct access echocardiography referral and to assess any variations in echocardiographic findings. All referrals made by all GP practices in the Scottish Highlands over a 36-month period were analysed. Referral patterns were examined according to distance and rurality based on the Scottish Government's Urban-Rural Classification. Reasons for referral and cardiac abnormality detection rates were also examined. In total, 1188 referrals were made from 49 different GP practices; the range of referral rates was 0.3-20.1 per 1000 population with a mean of 6.5 referrals per 1000 population. Referral rates were not significantly different between urban and rural practices after correction for population size. There was no correlation between the referral rates and the distance from the centre (r2=0.004, p=0.65). The most common reason for referral was the presence of a new murmur (46%). The most common presenting symptom was breathlessness (44%). Overall, 28% of studies had significant abnormal findings requiring direct input from a cardiologist. There was no clear relationship between referral rates and cardiac abnormality detection rates (r2=0.07, p=0.37). The average cardiac abnormality detection rate was 56% (range 52-60%), with no variation based on rurality (p=0.891). In this cohort, rurality and distance were not barriers to an equitable direct access echocardiography service. Cardiac abnormality detection rates are consistent with those of other studies.
Chaos-on-a-chip secures data transmission in optical fiber links.
Argyris, Apostolos; Grivas, Evangellos; Hamacher, Michael; Bogris, Adonis; Syvridis, Dimitris
2010-03-01
Security in information exchange plays a central role in the deployment of modern communication systems. Besides algorithms, chaos is exploited as a real-time high-speed data encryption technique which enhances the security at the hardware level of optical networks. In this work, compact, fully controllable and stably operating monolithic photonic integrated circuits (PICs) that generate broadband chaotic optical signals are incorporated in chaos-encoded optical transmission systems. Data sequences with rates up to 2.5 Gb/s with small amplitudes are completely encrypted within these chaotic carriers. Only authorized counterparts, supplied with identical chaos generating PICs that are able to synchronize and reproduce the same carriers, can benefit from data exchange with bit rates up to 2.5 Gb/s and error rates below 10^-12. Eavesdroppers with access to the communication link have only a 0.5 probability of correctly detecting each bit by direct signal detection, while eavesdroppers supplied with even slightly unmatched hardware receivers are restricted to data extraction error rates well above 10^-3.
Augmented burst-error correction for UNICON laser memory. [digital memory
NASA Technical Reports Server (NTRS)
Lim, R. S.
1974-01-01
A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long fire code with code length n greater than 16,768 bits was used as an outer code to augment an existing inner shorter fire code for burst error corrections. The inner fire code is a (80,64) code shortened from the (630,614) code, and it is used to correct a single-burst-error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single-burst-error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error detection processes are implemented by hardware. A minicomputer, currently used as a UNICON memory management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to obtain a very low error rate in spite of flaws affecting the recorded data.
Compression of digital images over local area networks. Appendix 1: Item 3. M.S. Thesis
NASA Technical Reports Server (NTRS)
Gorjala, Bhargavi
1991-01-01
Differential Pulse Code Modulation (DPCM) has been used with speech for many years. It has not been as successful for images because of poor edge performance. The only corruption in DPCM is quantizer error, but this corruption becomes quite large in the region of an edge because of the abrupt changes in the statistics of the signal. We introduce two improved DPCM schemes: Edge Correcting DPCM and Edge Preserving Differential Coding. These two coding schemes detect the edges and take action to correct them. In the Edge Correcting scheme, the quantizer error for an edge is encoded using a recursive quantizer with entropy coding and sent to the receiver as side information. In the Edge Preserving scheme, when the quantizer input falls in the overload region, the quantizer error is encoded and sent to the receiver repeatedly until the quantizer input falls in the inner levels. These coding schemes therefore increase the bit rate in the region of an edge and require variable rate channels. We implement these two variable rate coding schemes on a token ring network. The timed token protocol supports two classes of messages: asynchronous and synchronous. The synchronous class provides a pre-allocated bandwidth and guaranteed response time. The remaining bandwidth is dynamically allocated to the asynchronous class. The Edge Correcting DPCM is simulated by considering the edge information under the asynchronous class. For the simulation of the Edge Preserving scheme, the amount of information sent each time is fixed, but the length of the packet, or the bit rate for that packet, is chosen depending on the available capacity. The performance of the network, and the performance of the image coding algorithms, is studied.
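For background, a minimal sketch of plain DPCM with a previous-sample predictor and a bounded uniform quantizer (not the edge-correcting scheme from the thesis); it illustrates how quantizer overload error grows at an abrupt edge, which is the problem the thesis addresses. The quantizer step and level count are illustrative assumptions.

```python
import numpy as np

def dpcm_encode_decode(signal, step=4.0, max_level=3):
    """Plain DPCM: previous-sample predictor plus a uniform quantizer with a
    limited number of levels, so large prediction errors (edges) overload it."""
    recon = np.zeros_like(signal, dtype=float)
    prev = 0.0
    for i, x in enumerate(signal):
        err = x - prev                                            # prediction error
        q = np.clip(np.round(err / step), -max_level, max_level)  # bounded quantizer
        recon[i] = prev + step * q                                # decoder reconstruction
        prev = recon[i]
    return recon

# A slow ramp followed by a sharp edge: errors stay around step/2 on the ramp but
# become large and persist for several samples after the edge (slope overload).
x = np.concatenate([np.linspace(0, 20, 20), np.full(20, 200.0)])
x_hat = dpcm_encode_decode(x)
print(np.round(np.abs(x - x_hat), 1))
```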
Correct acceptance weighs more than correct rejection: a decision bias induced by question framing.
Kareev, Yaakov; Trope, Yaacov
2011-02-01
We propose that in attempting to detect whether an effect exists or not, people set their decision criterion so as to increase the number of hits and decrease the number of misses, at the cost of increasing false alarms and decreasing correct rejections. As a result, we argue, if one of two complementary events is framed as the positive response to a question and the other as the negative response, people will tend to predict the former more often than the latter. Performance in a prediction task with symmetric payoffs and equal base rates supported our proposal. Positive responses were indeed more prevalent than negative responses, irrespective of the phrasing of the question. The bias, slight but consistent and significant, was evident from early in a session and then remained unchanged to the end. A regression analysis revealed that, in addition, individuals' decision criteria reflected their learning experiences, with the weight of hits being greater than that of correct rejections.
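To make the criterion-shift idea concrete, a small equal-variance signal detection sketch (illustrative numbers only, not the study's data): lowering the criterion raises hits and false alarms while lowering misses and correct rejections.

```python
from math import erf, sqrt

def rates(d_prime: float, criterion: float):
    """Hit and false-alarm rates under the equal-variance Gaussian SDT model."""
    phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF
    hit = 1 - phi(criterion - d_prime)   # signal distribution centered at d'
    fa = 1 - phi(criterion)              # noise distribution centered at 0
    return hit, fa

for c in (1.0, 0.5, 0.0):   # progressively more liberal criteria
    hit, fa = rates(d_prime=1.0, criterion=c)
    print(f"criterion {c:+.1f}: hits {hit:.2f}, misses {1-hit:.2f}, "
          f"false alarms {fa:.2f}, correct rejections {1-fa:.2f}")
```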
Single event upset in avionics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taber, A.; Normand, E.
1993-04-01
Data from military/experimental flights and laboratory testing indicate that typical non-radiation-hardened 64K and 256K static random access memories (SRAMs) can experience a significant soft upset rate at aircraft altitudes due to energetic neutrons created by cosmic ray interactions in the atmosphere. It is suggested that error detection and correction (EDAC) circuitry be considered for all avionics designs containing large amounts of semiconductor memory.
New Class of Quantum Error-Correcting Codes for a Bosonic Mode
NASA Astrophysics Data System (ADS)
Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.
2016-07-01
We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
Delport, Johannes Andries; Mohorovic, Ivor; Burn, Sandi; McCormick, John Kenneth; Schaus, David; Lannigan, Robert; John, Michael
2016-07-01
Meticillin-resistant Staphylococcus aureus (MRSA) bloodstream infection is responsible for significant morbidity, with mortality rates as high as 60 % if not treated appropriately. We describe a rapid method to detect MRSA in blood cultures using a combined three-hour short-incubation BRUKER matrix-assisted laser desorption/ionization time-of-flight MS BioTyper protocol and a qualitative immunochromatographic assay, the Alere Culture Colony Test PBP2a detection test. We compared this combined method with a molecular method detecting the nuc and mecA genes currently performed in our laboratory. One hundred and seventeen S. aureus blood cultures were tested of which 35 were MRSA and 82 were meticillin-sensitive S. aureus (MSSA). The rapid combined test correctly identified 100 % (82/82) of the MSSA and 85.7 % (30/35) of the MRSA after 3 h. There were five false negative results where the isolates were correctly identified as S. aureus, but PBP2a was not detected by the Culture Colony Test. The combined method has a sensitivity of 87.5 %, specificity of 100 %, a positive predictive value of 100 % and a negative predictive value of 94.3 % with the prevalence of MRSA in our S. aureus blood cultures. The combined rapid method offers a significant benefit to early detection of MRSA in positive blood cultures.
Goense, J B M; Ratnam, R
2003-10-01
An important problem in sensory processing is deciding whether fluctuating neural activity encodes a stimulus or is due to variability in baseline activity. Neurons that subserve detection must examine incoming spike trains continuously, and quickly and reliably differentiate signals from baseline activity. Here we demonstrate that a neural integrator can perform continuous signal detection, with performance exceeding that of trial-based procedures, where spike counts in signal- and baseline windows are compared. The procedure was applied to data from electrosensory afferents of weakly electric fish (Apteronotus leptorhynchus), where weak perturbations generated by small prey add approximately 1 spike to a baseline of approximately 300 spikes s^-1. The hypothetical postsynaptic neuron, modeling an electrosensory lateral line lobe cell, could detect an added spike within 10-15 ms, achieving near ideal detection performance (80-95%) at false alarm rates of 1-2 Hz, while trial-based testing resulted in only 30-35% correct detections at that false alarm rate. The performance improvement was due to anti-correlations in the afferent spike train, which reduced both the amplitude and duration of fluctuations in postsynaptic membrane activity, and so decreased the number of false alarms. Anti-correlations can be exploited to improve detection performance only if there is memory of prior decisions.
Improved Conflict Detection for Reducing Operational Errors in Air Traffic Control
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Erzberger, Heinz
2003-01-01
An operational error is an incident in which an air traffic controller allows the separation between two aircraft to fall below the minimum separation standard. The rates of such errors in the US have increased significantly over the past few years. This paper proposes new detection methods that can help correct this trend by improving on the performance of Conflict Alert, the existing software in the Host Computer System that is intended to detect and warn controllers of imminent conflicts. In addition to the usual trajectory based on the flight plan, a "dead-reckoning" trajectory (current velocity projection) is also generated for each aircraft and checked for conflicts. Filters for reducing common types of false alerts were implemented. The new detection methods were tested in three different ways. First, a simple flightpath command language was developed to generate precisely controlled encounters for the purpose of testing the detection software. Second, written reports and tracking data were obtained for actual operational errors that occurred in the field, and these were "replayed" to test the new detection algorithms. Finally, the detection methods were used to shadow live traffic, and performance was analysed, particularly with regard to the false-alert rate. The results indicate that the new detection methods can provide timely warnings of imminent conflicts more consistently than Conflict Alert.
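Below is a minimal sketch of the dead-reckoning idea mentioned above: project each aircraft along its current velocity and flag a conflict if the predicted separation drops below a minimum standard within a lookahead window. The 5 nmi / 1000 ft thresholds, the flat-earth geometry, and the state vectors are simplifying assumptions, not the Host Computer System implementation.

```python
import numpy as np

def dead_reckoning_conflict(p1, v1, p2, v2, lookahead_s=300.0, step_s=5.0,
                            min_horiz_nmi=5.0, min_vert_ft=1000.0):
    """Return the first time (s) at which separation falls below minima, else None.

    p = (x_nmi, y_nmi, alt_ft), v = (vx_nmi_s, vy_nmi_s, vz_ft_s); straight-line
    (current-velocity) projection on a flat earth.
    """
    p1, v1, p2, v2 = map(np.asarray, (p1, v1, p2, v2))
    for t in np.arange(0.0, lookahead_s + step_s, step_s):
        d = (p1 + v1 * t) - (p2 + v2 * t)
        horiz = np.hypot(d[0], d[1])
        if horiz < min_horiz_nmi and abs(d[2]) < min_vert_ft:
            return t
    return None

# Two converging aircraft at the same altitude (made-up state vectors).
t_conflict = dead_reckoning_conflict((0, 0, 33000), (0.12, 0.0, 0.0),
                                     (30, 3, 33000), (-0.12, 0.0, 0.0))
print("predicted loss of separation at t =", t_conflict, "s")
```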
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
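A minimal sketch of the estimation idea (not the authors' protocol): fit a Gaussian process to noisy per-window error-rate estimates collected over time and predict the current rate. The drift model, the noise level, and the scikit-learn usage are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
t = np.linspace(0, 100, 60)[:, None]                        # time (arbitrary units)
true_rate = 0.01 + 0.004 * np.sin(t[:, 0] / 20.0)           # slowly drifting physical error rate
observed = true_rate + rng.normal(0, 0.0015, t.shape[0])    # noisy per-window estimates

# RBF kernel captures the smooth drift; WhiteKernel absorbs estimation noise.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0) + WhiteKernel(1e-6),
                              normalize_y=True)
gp.fit(t, observed)

t_now = np.array([[105.0]])                                  # predict slightly beyond the data
mean, std = gp.predict(t_now, return_std=True)
print(f"predicted error rate {mean[0]:.4f} +/- {std[0]:.4f}")
```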
AfterQC: automatic filtering, trimming, error removing and quality control for fastq data.
Chen, Shifu; Huang, Tanxiao; Zhou, Yanqing; Han, Yue; Xu, Mingyan; Gu, Jia
2017-03-14
Some applications, especially those clinical applications requiring high accuracy of sequencing data, usually have to face the troubles caused by unavoidable sequencing errors. Several tools have been proposed to profile the sequencing quality, but few of them can quantify or correct the sequencing errors. This unmet requirement motivated us to develop AfterQC, a tool with functions to profile sequencing errors and correct most of them, plus highly automated quality control and data filtering features. Different from most tools, AfterQC analyses the overlapping of paired sequences for pair-end sequencing data. Based on overlapping analysis, AfterQC can detect and cut adapters, and furthermore it provides a novel function to correct wrong bases in the overlapping regions. Another new feature is to detect and visualise sequencing bubbles, which can be commonly found on the flowcell lanes and may raise sequencing errors. Besides normal per cycle quality and base content plotting, AfterQC also provides features like polyX (a long sub-sequence of the same base X) filtering, automatic trimming and K-MER based strand bias profiling. For each single or pair of FastQ files, AfterQC filters out bad reads, detects and eliminates sequencer's bubble effects, trims reads at front and tail, detects the sequencing errors and corrects part of them, and finally outputs clean data and generates HTML reports with interactive figures. AfterQC can run in batch mode with multiprocess support; it can run with a single FastQ file, a single pair of FastQ files (for pair-end sequencing), or a folder for all included FastQ files to be processed automatically. Based on overlapping analysis, AfterQC can estimate the sequencing error rate and profile the error transform distribution. The results of our error profiling tests show that the error distribution is highly platform dependent. Much more than just another new quality control (QC) tool, AfterQC is able to perform quality control, data filtering, error profiling and base correction automatically. Experimental results show that AfterQC can help to eliminate the sequencing errors for pair-end sequencing data to provide much cleaner outputs, and consequently help to reduce the false-positive variants, especially for the low-frequency somatic mutations. While providing rich configurable options, AfterQC can detect and set all the options automatically and require no argument in most cases.
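To illustrate the overlap-based correction idea described above, a toy sketch (not AfterQC's implementation): find the overlap between read 1 and the reverse complement of read 2, then, at mismatching overlap positions, keep the base with the higher quality score. Read lengths, quality values, and the minimum-overlap threshold are illustrative assumptions.

```python
def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_overlap(r1: str, r2_rc: str, min_len: int = 8, max_mismatch: int = 2):
    """Return the length of the longest suffix of r1 aligning to a prefix of r2_rc."""
    for olen in range(min(len(r1), len(r2_rc)), min_len - 1, -1):
        a, b = r1[-olen:], r2_rc[:olen]
        if sum(x != y for x, y in zip(a, b)) <= max_mismatch:
            return olen
    return 0

def correct_overlap(r1, q1, r2, q2):
    """Correct mismatching bases in the overlap, trusting the higher-quality call."""
    r2_rc, q2_rc = revcomp(r2), q2[::-1]
    olen = find_overlap(r1, r2_rc)
    r1, r2_rc = list(r1), list(r2_rc)
    for i in range(olen):
        j1, j2 = len(r1) - olen + i, i
        if r1[j1] != r2_rc[j2]:
            if q1[j1] >= q2_rc[j2]:
                r2_rc[j2] = r1[j1]   # read 2 base corrected from read 1
            else:
                r1[j1] = r2_rc[j2]   # read 1 base corrected from read 2
    return "".join(r1), revcomp("".join(r2_rc))

# Toy example: a single sequencing error in read 2 inside the overlap region.
read1, qual1 = "ACGTACGTACGTTTGCA", [30] * 17
read2, qual2 = revcomp("ACGTACGTTTGCA").replace("A", "C", 1), [20] * 13
print(correct_overlap(read1, qual1, read2, qual2))
```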
Increased folding and channel activity of a rare cystic fibrosis mutant with CFTR modulators
Grove, Diane E.; Houck, Scott A.
2011-01-01
Cystic fibrosis (CF) is a lethal recessive genetic disease caused by mutations in the CFTR gene. The gene product is a PKA-regulated anion channel that is important for fluid and electrolyte transport in the epithelia of lung, gut, and ducts of the pancreas and sweat glands. The most common CFTR mutation, ΔF508, causes a severe, but correctable, folding defect and gating abnormality, resulting in negligible CFTR function and disease. There are also a large number of rare CF-related mutations where disease is caused by CFTR misfolding. Yet the extent to which defective biogenesis of these CFTR mutants can be corrected is not clear. CFTRV232D is one such mutant that exhibits defective folding and trafficking. CFTRΔF508 misfolding is difficult to correct, but defective biogenesis of CFTRV232D is corrected to near wild-type levels by small-molecule folding correctors in development as CF therapeutics. To determine if CFTRV232D protein is competent as a Cl− channel, we utilized single-channel recordings from transfected human embryonic kidney (HEK-293) cells. After PKA stimulation, CFTRV232D channels were detected in patches with a unitary Cl− conductance indistinguishable from that of CFTR. Yet the frequency of detecting CFTRV232D channels was reduced to ∼20% of patches compared with 60% for CFTR. The folding corrector Corr-4a increased the CFTRV232D channel detection rate and activity to levels similar to CFTR. CFTRV232D-corrected channels were inhibited with CFTRinh-172 and stimulated fourfold by the CFTR channel potentiator VRT-532. These data suggest that CF patients with rare mutations that cause CFTR misfolding, such as CFTRV232D, may benefit from treatment with folding correctors and channel potentiators in development to restore CFTRΔF508 function. PMID:21642448
Cuckle, Howard; Aitken, David; Goodburn, Sandra; Senior, Brian; Spencer, Kevin; Standing, Sue
2004-11-01
To describe and illustrate a method of setting Down syndrome screening targets and auditing performance that allows for differences in the maternal age distribution. A reference population was determined from a Gaussian model of maternal age. Target detection and false-positive rates were determined by standard statistical modelling techniques, except that the reference population rather than an observed population was used. Second-trimester marker parameters were obtained for Down syndrome from a large meta-analysis, and for unaffected pregnancies from the combined results of more than 600,000 screens in five centres. Audited detection and false-positive rates were the weighted average of the rates in five broad age groups corrected for viability bias. Weights were based on the age distributions in the reference population. Maternal age was found to approximate reasonably well to a Gaussian distribution with mean 27 years and standard deviation 5.5 years. Depending on marker combination, the target detection rates were 59 to 64% and false-positive rate 4.2 to 5.4% for a 1 in 250 term cut-off; 65 to 68% and 6.1 to 7.3% for 1 in 270 at mid-trimester. Among the five centres, the audited detection rate ranged from 7% below target to 10% above target, with audited false-positive rates better than the target by 0.3 to 1.5%. Age-standardisation should help to improve screening quality by allowing for intrinsic differences between programmes, so that valid comparisons can be made. Copyright 2004 John Wiley & Sons, Ltd.
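A small sketch of the age-standardisation step described above: audited rates are computed as a weighted average of age-group rates, with weights taken from a Gaussian reference maternal-age distribution (mean 27, SD 5.5 years, as in the abstract). The age-group boundaries and the observed per-group rates are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

def gaussian_cdf(x, mu=27.0, sigma=5.5):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# Broad maternal age groups (years) and hypothetical observed detection rates per group.
edges = [15, 25, 30, 35, 40, 50]
observed_rate = [0.52, 0.55, 0.60, 0.78, 0.90]

# Reference weights = share of the Gaussian reference population in each age group.
weights = np.diff([gaussian_cdf(e) for e in edges])
weights /= weights.sum()

standardised = float(np.dot(weights, observed_rate))
print(f"age-standardised detection rate: {standardised:.2f}")
```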
Disordered Gambling Prevalence: Methodological Innovations in a General Danish Population Survey.
Harrison, Glenn W; Jessen, Lasse J; Lau, Morten I; Ross, Don
2018-03-01
We study Danish adult gambling behavior with an emphasis on discovering patterns relevant to public health forecasting and economic welfare assessment of policy. Methodological innovations include measurement of formative in addition to reflective constructs, estimation of prospective risk for developing gambling disorder rather than risk of being falsely negatively diagnosed, analysis with attention to sample weights and correction for sample selection bias, estimation of the impact of trigger questions on prevalence estimates and sample characteristics, and distinguishing between total and marginal effects of risk-indicating factors. The most significant novelty in our design is that nobody was excluded on the basis of their response to a 'trigger' or 'gateway' question about previous gambling history. Our sample consists of 8405 adult Danes. We administered the Focal Adult Gambling Screen to all subjects and estimate prospective risk for disordered gambling. We find that 87.6% of the population is indicated for no detectable risk, 5.4% is indicated for early risk, 1.7% is indicated for intermediate risk, 2.6% is indicated for advanced risk, and 2.6% is indicated for disordered gambling. Correcting for sample weights and controlling for sample selection has a significant effect on prevalence rates. Although these estimates of the 'at risk' fraction of the population are significantly higher than conventionally reported, we infer a significant decrease in overall prevalence rates of detectable risk with these corrections, since gambling behavior is positively correlated with the decision to participate in gambling surveys. We also find that imposing a threshold gambling history leads to underestimation of the prevalence of gambling problems.
Item response theory scoring and the detection of curvilinear relationships.
Carter, Nathan T; Dalal, Dev K; Guan, Li; LoPilato, Alexander C; Withrow, Scott A
2017-03-01
Psychologists are increasingly positing theories of behavior that suggest psychological constructs are curvilinearly related to outcomes. However, results from empirical tests for such curvilinear relations have been mixed. We propose that correctly identifying the response process underlying responses to measures is important for the accuracy of these tests. Indeed, past research has indicated that item responses to many self-report measures follow an ideal point response process-wherein respondents agree only to items that reflect their own standing on the measured variable-as opposed to a dominance process, wherein stronger agreement, regardless of item content, is always indicative of higher standing on the construct. We test whether item response theory (IRT) scoring appropriate for the underlying response process to self-report measures results in more accurate tests for curvilinearity. In 2 simulation studies, we show that, regardless of the underlying response process used to generate the data, using the traditional sum-score generally results in high Type 1 error rates or low power for detecting curvilinearity, depending on the distribution of item locations. With few exceptions, appropriate power and Type 1 error rates are achieved when dominance-based and ideal point-based IRT scoring are correctly used to score dominance and ideal point response data, respectively. We conclude that (a) researchers should be theory-guided when hypothesizing and testing for curvilinear relations; (b) correctly identifying whether responses follow an ideal point versus dominance process, particularly when items are not extreme is critical; and (c) IRT model-based scoring is crucial for accurate tests of curvilinearity. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Effect of Different Starvation Levels on Cognitive Ability in Mice
NASA Astrophysics Data System (ADS)
Li, Xiaobing; Zhi, Guoguo; Yu, Yi; Cai, Lingyu; Li, Peng; Zhang, Danhua; Bao, Shuting; Hu, Wenlong; Shen, Haiyan; Song, Fujuan
2018-01-01
Objective: To study the effect of different starvation levels on cognitive ability in mice. Method: Mice were randomly divided into four groups: a normal group and dieting groups A, B, and C. The normal group was given the normal feeding amount, while the three dieting groups were given 3/4, 2/4, and 1/4 of the normal feeding amount, respectively. After four days of feeding, body weight was recorded, and the T-maze experiment, Morris water maze test, open field test, and serum catalase activity were assessed. Result: Compared with the normal group, the correct rate of the intervention groups in the T-maze experiment was decreased, with dieting group A > dieting group B > dieting group C. In the Morris water maze test, the correct rate of the intervention groups was increased compared with the normal group; among the three intervention groups, dieting group A had the highest correct rate, and dieting groups B and C were similar. In the open field test, the rate of exploration of the surrounding environment in the intervention groups was increased compared with the normal group. In the serum catalase test, serum catalase activity in the intervention groups was decreased compared with the normal group, with dieting group A > dieting group B > dieting group C. Conclusion: A certain level of starvation can affect cognitive ability in mice. Within a certain range, the level of starvation is inversely proportional to cognitive ability in mice.
Spectral-Based Volume Sensor Prototype, Post-VS4 Test Series Algorithm Development
2009-04-30
Pcorr: Probability / Percentage of Correct Classification (# Correct / # Total); PD: PhotoDiode; Pd: Probability / Percentage of Detection (# Correct Detections / Total # of Sources); Pfa: Probability / Percentage of False Alarm (# FAs / Total # of Sources); SBVS: Spectral-Based Volume Sensor; SFA: Smoke and
Klambauer, Günter; Schwarzbauer, Karin; Mayr, Andreas; Clevert, Djork-Arné; Mitterecker, Andreas; Bodenhofer, Ulrich; Hochreiter, Sepp
2012-01-01
Quantitative analyses of next-generation sequencing (NGS) data, such as the detection of copy number variations (CNVs), remain challenging. Current methods detect CNVs as changes in the depth of coverage along chromosomes. Technological or genomic variations in the depth of coverage thus lead to a high false discovery rate (FDR), even upon correction for GC content. In the context of association studies between CNVs and disease, a high FDR means many false CNVs, thereby decreasing the discovery power of the study after correction for multiple testing. We propose ‘Copy Number estimation by a Mixture Of PoissonS’ (cn.MOPS), a data processing pipeline for CNV detection in NGS data. In contrast to previous approaches, cn.MOPS incorporates modeling of depths of coverage across samples at each genomic position. Therefore, cn.MOPS is not affected by read count variations along chromosomes. Using a Bayesian approach, cn.MOPS decomposes variations in the depth of coverage across samples into integer copy numbers and noise by means of its mixture components and Poisson distributions, respectively. The noise estimate allows for reducing the FDR by filtering out detections having high noise that are likely to be false detections. We compared cn.MOPS with the five most popular methods for CNV detection in NGS data using four benchmark datasets: (i) simulated data, (ii) NGS data from a male HapMap individual with implanted CNVs from the X chromosome, (iii) data from HapMap individuals with known CNVs, (iv) high coverage data from the 1000 Genomes Project. cn.MOPS outperformed its five competitors in terms of precision (1–FDR) and recall for both gains and losses in all benchmark data sets. The software cn.MOPS is publicly available as an R package at http://www.bioinf.jku.at/software/cnmops/ and at Bioconductor. PMID:22302147
Klambauer, Günter; Schwarzbauer, Karin; Mayr, Andreas; Clevert, Djork-Arné; Mitterecker, Andreas; Bodenhofer, Ulrich; Hochreiter, Sepp
2012-05-01
Quantitative analyses of next-generation sequencing (NGS) data, such as the detection of copy number variations (CNVs), remain challenging. Current methods detect CNVs as changes in the depth of coverage along chromosomes. Technological or genomic variations in the depth of coverage thus lead to a high false discovery rate (FDR), even upon correction for GC content. In the context of association studies between CNVs and disease, a high FDR means many false CNVs, thereby decreasing the discovery power of the study after correction for multiple testing. We propose 'Copy Number estimation by a Mixture Of PoissonS' (cn.MOPS), a data processing pipeline for CNV detection in NGS data. In contrast to previous approaches, cn.MOPS incorporates modeling of depths of coverage across samples at each genomic position. Therefore, cn.MOPS is not affected by read count variations along chromosomes. Using a Bayesian approach, cn.MOPS decomposes variations in the depth of coverage across samples into integer copy numbers and noise by means of its mixture components and Poisson distributions, respectively. The noise estimate allows for reducing the FDR by filtering out detections having high noise that are likely to be false detections. We compared cn.MOPS with the five most popular methods for CNV detection in NGS data using four benchmark datasets: (i) simulated data, (ii) NGS data from a male HapMap individual with implanted CNVs from the X chromosome, (iii) data from HapMap individuals with known CNVs, (iv) high coverage data from the 1000 Genomes Project. cn.MOPS outperformed its five competitors in terms of precision (1-FDR) and recall for both gains and losses in all benchmark data sets. The software cn.MOPS is publicly available as an R package at http://www.bioinf.jku.at/software/cnmops/ and at Bioconductor.
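As a rough illustration of the modeling idea behind cn.MOPS (not the package's actual algorithm), the sketch below computes, for one genomic segment, a posterior over integer copy numbers from read counts across samples, assuming counts are Poisson with mean proportional to the copy number; the counts, prior, and rate parameter are made-up values.

```python
import numpy as np
from scipy.stats import poisson

def copy_number_posterior(counts, lam_per_haploid_copy, max_cn=6, prior_cn2=0.9):
    """Posterior over integer copy numbers 0..max_cn for each sample's read count.

    Assumes count ~ Poisson(lam_per_haploid_copy * cn), with a small epsilon
    for cn = 0 and a prior concentrated on the diploid state (cn = 2).
    """
    counts = np.asarray(counts)[:, None]
    cns = np.arange(max_cn + 1)
    means = lam_per_haploid_copy * np.maximum(cns, 0.05)   # avoid a zero Poisson mean
    prior = np.full(max_cn + 1, (1 - prior_cn2) / max_cn)
    prior[2] = prior_cn2
    log_post = poisson.logpmf(counts, means) + np.log(prior)
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)

# Read counts for one segment across 6 samples; one sample looks like a duplication.
counts = [98, 103, 95, 160, 101, 99]
post = copy_number_posterior(counts, lam_per_haploid_copy=50.0)
print(np.argmax(post, axis=1))   # most probable integer copy number per sample
```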
qF-SSOP: real-time optical property corrected fluorescence imaging
Valdes, Pablo A.; Angelo, Joseph P.; Choi, Hak Soo; Gioux, Sylvain
2017-01-01
Fluorescence imaging is well suited to provide image guidance during resections in oncologic and vascular surgery. However, the distorting effects of tissue optical properties on the emitted fluorescence are poorly compensated for on even the most advanced fluorescence image guidance systems, leading to subjective and inaccurate estimates of tissue fluorophore concentrations. Here we present a novel fluorescence imaging technique that performs real-time (i.e., video rate) optical property corrected fluorescence imaging. We perform full field of view simultaneous imaging of tissue optical properties using Single Snapshot of Optical Properties (SSOP) and fluorescence detection. The estimated optical properties are used to correct the emitted fluorescence with a quantitative fluorescence model to provide quantitative fluorescence-Single Snapshot of Optical Properties (qF-SSOP) images with less than 5% error. The technique is rigorous, fast, and quantitative, enabling ease of integration into the surgical workflow with the potential to improve molecular guidance intraoperatively. PMID:28856038
Dead time corrections using the backward extrapolation method
NASA Astrophysics Data System (ADS)
Gilad, E.; Dubi, C.; Geslot, B.; Blaise, P.; Kolin, A.
2017-05-01
Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create a strong bias in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing), and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled Count Per Second (CPS), based on backward extrapolation, to zero dead time, of the losses created by increasingly large artificially imposed dead times on the data. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (1-2%) in restoring the corrected count rate.
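A rough numerical sketch of the backward-extrapolation idea in Python/NumPy (not the paper's exact model): increasingly large artificial non-paralyzing dead times are imposed on a recorded event list, the surviving count rate is fitted against the total dead time, and the fit is extrapolated back to zero dead time. The simulated Poisson event stream, the assumed 5 μs system dead time, and the quadratic fit are illustrative assumptions.

```python
import numpy as np

def apply_dead_time(timestamps, tau):
    """Non-paralyzing dead time: drop events closer than tau to the last accepted one."""
    accepted, last = [], -np.inf
    for t in timestamps:
        if t - last >= tau:
            accepted.append(t)
            last = t
    return accepted

rng = np.random.default_rng(0)
duration, true_rate = 5.0, 2.0e4                  # seconds, true counts per second
events = np.sort(rng.uniform(0.0, duration, int(true_rate * duration)))

system_tau = 5.0e-6                               # assumed (known) system dead time
measured = apply_dead_time(events, system_tau)    # what the counting system records

# Impose increasingly large artificial dead times on the recorded data and
# note the surviving count rate for each total dead time (in microseconds).
extra_us = np.linspace(0.0, 20.0, 11)
total_us = system_tau * 1e6 + extra_us
rates = [len(apply_dead_time(measured, tau * 1e-6)) / duration for tau in total_us]

# Fit count rate vs. total dead time and extrapolate backward to zero dead time.
coeffs = np.polyfit(total_us, rates, deg=2)
print(f"recorded count rate:    {rates[0]:8.0f} CPS")
print(f"back-extrapolated rate: {np.polyval(coeffs, 0.0):8.0f} CPS "
      f"(true rate {true_rate:.0f} CPS)")
```

On this synthetic stream the extrapolated rate lands within a percent or two of the true rate, which is the kind of behaviour the paper reports on real MINERVE data.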
Active Optical Sensors for Tree Stem Detection and Classification in Nurseries
Garrido, Miguel; Perez-Ruiz, Manuel; Valero, Constantino; Gliever, Chris J.; Hanson, Bradley D.; Slaughter, David C.
2014-01-01
Active optical sensing (LIDAR and light curtain transmission) devices mounted on a mobile platform can correctly detect, localize, and classify trees. To conduct an evaluation and comparison of the different sensors, an optical encoder wheel was used for vehicle odometry and provided a measurement of the linear displacement of the prototype vehicle along a row of tree seedlings as a reference for each recorded sensor measurement. The field trials were conducted in a juvenile tree nursery with one-year-old grafted almond trees at Sierra Gold Nurseries, Yuba City, CA, United States. Through these tests and subsequent data processing, each sensor was individually evaluated to characterize its reliability, as well as its advantages and disadvantages for the proposed task. Test results indicated that 95.7% and 99.48% of the trees were successfully detected with the LIDAR and light curtain sensors, respectively. The LIDAR correctly classified trees as alive or dead at a 93.75% success rate, compared to 94.16% for the light curtain sensor. These results can help system designers select the most reliable sensor for the accurate detection and localization of each tree in a nursery, which might allow labor-intensive tasks, such as weeding, to be automated without damaging crops. PMID:24949638
Detection of Δ9-tetrahydrocannabinol in exhaled breath collected from cannabis users.
Beck, Olof; Sandqvist, Sören; Dubbelboer, Ilse; Franck, Johan
2011-10-01
Exhaled breath has recently been proposed as a new possible matrix for drugs of abuse testing. A key drug is cannabis, and the present study was aimed at investigating the possibility of detecting tetrahydrocannabinol and tetrahydrocannabinol carboxylic acid in exhaled breath after cannabis smoking. Exhaled breath was sampled from 10 regular cannabis users and 8 controls by directing the exhaled breath by suction through an Empore C(18) disk. The disk was extracted with hexane/ethyl acetate, and the resulting extract was evaporated to dryness and redissolved in 100 μL hexane/ethyl acetate. A 3-μL aliquot was injected onto the LC-MS-MS system and analyzed using positive electrospray ionization and selected reaction monitoring. In samples collected 1-12 h after cannabis smoking, tetrahydrocannabinol was detected in all 10 subjects. The rate of excretion was between 9.0 and 77.3 pg/min. Identification of tetrahydrocannabinol was based on correct retention time relative to tetrahydrocannabinol-d(3) and correct product ion ratio. In three samples, peaks were observed for tetrahydrocannabinol carboxylic acid, but these did not fulfill identification criteria. Neither tetrahydrocannabinol nor tetrahydrocannabinol carboxylic acid was detected in the controls. These results confirm older reports that tetrahydrocannabinol is present in exhaled breath following cannabis smoking and extend the detection time from minutes to hours. The results further support the idea that exhaled breath is a promising matrix for drugs-of-abuse testing.
Daniels, Benjamin; Dolinger, Amy; Bedoya, Guadalupe; Rogo, Khama; Goicoechea, Ana; Coarasa, Jorge; Wafula, Francis; Mwaura, Njeri; Kimeu, Redemptar; Das, Jishnu
2017-01-01
The quality of clinical care can be reliably measured in multiple settings using standardised patients (SPs), but this methodology has not been extensively used in Sub-Saharan Africa. This study validates the use of SPs for a variety of tracer conditions in Nairobi, Kenya, and provides new results on the quality of care in sampled primary care clinics. We deployed 14 SPs in private and public clinics presenting either asthma, child diarrhoea, tuberculosis or unstable angina. Case management guidelines and checklists were jointly developed with the Ministry of Health. We validated the SP method based on the ability of SPs to avoid detection or dangerous situations, without imposing a substantial time burden on providers. We also evaluated the sensitivity of quality measures to SP characteristics. We assessed quality of practice through adherence to guidelines and checklists for the entire sample, stratified by case and stratified by sector, and in comparison with previously published results from urban India, rural India and rural China. Across 166 interactions in 42 facilities, detection rates and exposure to unsafe conditions were both zero. There were no detected outcome correlations with SP characteristics that would bias the results. Across all four conditions, 53% of SPs were correctly managed with wide variation across tracer conditions. SPs paid 76% less in public clinics, but proportions of correct management were similar to private clinics for three conditions and higher for the fourth. Kenyan outcomes compared favourably with India and China in all but the angina case. The SP method is safe and effective in the urban Kenyan setting for the assessment of clinical practice. The pilot results suggest that public providers in this setting provide similar rates of correct management to private providers at significantly lower out-of-pocket costs for patients. However, comparisons across countries are sensitive to the tracer condition considered.
Star formation rate and extinction in faint z ∼ 4 Lyman break galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
To, Chun-Hao; Wang, Wei-Hao; Owen, Frazer N.
We present a statistical detection of 1.5 GHz radio continuum emission from a sample of faint z ∼ 4 Lyman break galaxies (LBGs). To constrain their extinction and intrinsic star formation rate (SFR), we combine the latest ultradeep Very Large Array 1.5 GHz radio image and the Hubble Space Telescope Advanced Camera for Surveys (ACS) optical images in the GOODS-N. We select a large sample of 1771 z ∼ 4 LBGs from the ACS catalog using B_F435W-dropout color criteria. Our LBG samples have I_F775W ∼ 25-28 (AB), ∼0-3 mag fainter than M_UV^⋆ at z ∼ 4. In our stacked radio images, we find the LBGs to be point-like under our 2'' angular resolution. We measure their mean 1.5 GHz flux by stacking the measurements on the individual objects. We achieve a statistical detection of S_1.5 GHz = 0.210 ± 0.075 μJy at ∼3σ for the first time on such a faint LBG population at z ∼ 4. The measurement takes into account the effects of source size and blending of multiple objects. The detection is visually confirmed by stacking the radio images of the LBGs, and the uncertainty is quantified with Monte Carlo simulations on the radio image. The stacked radio flux corresponds to an obscured SFR of 16.0 ± 5.7 M_⊙ yr^-1, and implies a rest-frame UV extinction correction factor of 3.8. This extinction correction is in excellent agreement with that derived from the observed UV continuum spectral slope, using the local calibration of Meurer et al. This result supports the use of the local calibration on high-redshift LBGs to derive the extinction correction and SFR, and also disfavors a steep reddening curve such as that of the Small Magellanic Cloud.
Daniels, Benjamin; Dolinger, Amy; Bedoya, Guadalupe; Rogo, Khama; Goicoechea, Ana; Coarasa, Jorge; Wafula, Francis; Mwaura, Njeri; Kimeu, Redemptar
2017-01-01
Introduction The quality of clinical care can be reliably measured in multiple settings using standardised patients (SPs), but this methodology has not been extensively used in Sub-Saharan Africa. This study validates the use of SPs for a variety of tracer conditions in Nairobi, Kenya, and provides new results on the quality of care in sampled primary care clinics. Methods We deployed 14 SPs in private and public clinics presenting either asthma, child diarrhoea, tuberculosis or unstable angina. Case management guidelines and checklists were jointly developed with the Ministry of Health. We validated the SP method based on the ability of SPs to avoid detection or dangerous situations, without imposing a substantial time burden on providers. We also evaluated the sensitivity of quality measures to SP characteristics. We assessed quality of practice through adherence to guidelines and checklists for the entire sample, stratified by case and stratified by sector, and in comparison with previously published results from urban India, rural India and rural China. Results Across 166 interactions in 42 facilities, detection rates and exposure to unsafe conditions were both zero. There were no detected outcome correlations with SP characteristics that would bias the results. Across all four conditions, 53% of SPs were correctly managed with wide variation across tracer conditions. SPs paid 76% less in public clinics, but proportions of correct management were similar to private clinics for three conditions and higher for the fourth. Kenyan outcomes compared favourably with India and China in all but the angina case. Conclusions The SP method is safe and effective in the urban Kenyan setting for the assessment of clinical practice. The pilot results suggest that public providers in this setting provide similar rates of correct management to private providers at significantly lower out-of-pocket costs for patients. However, comparisons across countries are sensitive to the tracer condition considered. PMID:29225937
Pregnancy outcomes among patients with recurrent pregnancy loss and uterine anatomic abnormalities.
Gabbai, Daniel; Harlev, Avi; Friger, Michael; Steiner, Naama; Sergienko, Ruslan; Kreinin, Andrey; Bashiri, Asher
2017-07-25
Different etiologies for recurrent pregnancy loss have been identified, among them anatomical, endocrine, genetic, chromosomal and thrombophilic pathologies. To assess medical and obstetric characteristics, and pregnancy outcomes, among women with uterine abnormalities and recurrent pregnancy loss (RPL). This study also aims to assess the impact of uterine anatomic surgical correction on pregnancy outcomes. A retrospective case control study of 313 patients with two or more consecutive pregnancy losses followed by a subsequent (index) pregnancy. Anatomic abnormalities were detected in 80 patients. All patients were evaluated and treated in the RPL clinic at Soroka University Medical Center. Out of 80 patients with uterine anatomic abnormalities, 19 underwent surgical correction, 32 did not and 29 had no clear record of surgical intervention, and thus were excluded from this study. Women with anatomic abnormalities had a higher rate of previous cesarean section (18.8% vs. 8.6%, P=0.022), tended to have a lower number of previous live births (1.05 vs. 1.37, P=0.07), and a higher rate of preterm delivery (22.9% vs. 10%, P=0.037). Using multivariate logistic regression analysis, anatomic abnormality was identified as an independent risk factor for RPL in patients with previous cesarean section after controlling for place of residence, positive genetic/autoimmune/endocrine workup, and fertility problems (OR 7.22; 95% CI 1.17-44.54, P=0.03). Women suffering from anatomic abnormalities tended to have a higher rate of pregnancy loss compared to those without anatomic abnormalities (40% vs. 30.9%, P=0.2). The difference in pregnancy loss rate among women who underwent surgical correction compared to those who did not was not statistically significant. In patients with previous cesarean section, uterine abnormality is an independent risk factor for pregnancy loss. Surgical correction of uterine abnormalities among RPL patients might have the potential to improve live birth rate.
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-07-01
In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to high interaction rate of the particles with the detector. Pile-up effects can lead to a severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by both analog and digital electronics. However, for decreasing the exposure times in medical imaging applications, it is important to maintain the pulses and extract their true information by pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach for recovering the pulses one-by-one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration which fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The obtained results prove the effectiveness of the proposed method as a pile-up processing scheme designed for spectroscopic and medical radiation detection applications.
NASA Astrophysics Data System (ADS)
Ciany, Charles M.; Zurawski, William; Kerfoot, Ian
2001-10-01
The performance of Computer Aided Detection/Computer Aided Classification (CAD/CAC) Fusion algorithms on side-scan sonar images was evaluated using data taken at the Navy's Fleet Battle Exercise-Hotel held in Panama City, Florida, in August 2000. A 2-of-3 binary fusion algorithm is shown to provide robust performance. The algorithm accepts the classification decisions and associated contact locations from three different CAD/CAC algorithms, clusters the contacts based on Euclidean distance, and then declares a valid target when a clustered contact is declared by at least 2 of the 3 individual algorithms. This simple binary fusion provided a 96 percent probability of correct classification at a false alarm rate of 0.14 false alarms per image per side. The performance represented a 3.8:1 reduction in false alarms over the best performing single CAD/CAC algorithm, with no loss in probability of correct classification.
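A minimal sketch of such a 2-of-3 binary fusion rule in Python: contacts from three detectors are clustered by Euclidean distance, and a target is declared when a cluster contains contacts from at least two of them. The contact lists, the clustering radius, and the coordinate units are made-up assumptions, not data from the exercise.

```python
import numpy as np

def fuse_2_of_3(contact_lists, cluster_radius=5.0, min_votes=2):
    """contact_lists: one list of (x, y) contacts per CAD/CAC algorithm.
    Greedy single-linkage clustering by Euclidean distance; a cluster becomes a
    declared target if at least `min_votes` different algorithms contributed."""
    contacts = [(np.asarray(p, float), algo)
                for algo, pts in enumerate(contact_lists) for p in pts]
    clusters = []                       # each cluster: list of (point, algo)
    for point, algo in contacts:
        for cluster in clusters:
            if any(np.linalg.norm(point - q) <= cluster_radius for q, _ in cluster):
                cluster.append((point, algo))
                break
        else:
            clusters.append([(point, algo)])
    targets = []
    for cluster in clusters:
        if len({algo for _, algo in cluster}) >= min_votes:
            pts = np.array([q for q, _ in cluster])
            targets.append(pts.mean(axis=0))        # fused contact location
    return targets

# Hypothetical contacts (in metres) reported by three CAD/CAC algorithms.
algo_a = [(10.0, 12.0), (40.0, 80.0)]
algo_b = [(11.5, 13.0), (90.0, 15.0)]
algo_c = [(12.0, 11.0), (90.5, 14.0), (60.0, 60.0)]
print(fuse_2_of_3([algo_a, algo_b, algo_c]))
# -> fused targets near (11, 12) and (90, 14.5); the isolated single-algorithm
#    contacts at (40, 80) and (60, 60) are rejected.
```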
Dual chamber arrhythmia detection in the implantable cardioverter defibrillator.
Dijkman, B; Wellens, H J
2000-10-01
Dual chamber implantable cardioverter defibrillator (ICD) technology extended ICD therapy to more than termination of hemodynamically unstable ventricular tachyarrhythmias. It created the basis for dual chamber arrhythmia management in which dependable detection is important for treatment and prevention of both ventricular and atrial arrhythmias. Dual chamber detection algorithms were investigated in two Medtronic dual chamber ICDs: the 7250 Jewel AF (33 patients) and the 7271 Gem DR (31 patients). Both ICDs use the same PR Logic algorithm to interpret tachycardia as ventricular tachycardia (VT), supraventricular tachycardia (SVT), or dual (VT+ SVT). The accuracy of dual chamber detection was studied in 310 of 1,367 spontaneously occurring tachycardias in which rate criterion only was not sufficient for arrhythmia diagnosis. In 78 episodes there was a double tachycardia, in 223 episodes SVT was detected in the VT or ventricular fibrillation zone, and in 9 episodes arrhythmia was detected outside the boundaries of the PR Logic functioning. In 100% of double tachycardias the VT was correctly diagnosed and received priority treatment. SVT was seen in 59 (19%) episodes diagnosed as VT. The causes of inappropriate detection were (1) algorithm failure (inability to fulfill the PR
A burst-mode photon counting receiver with automatic channel estimation and bit rate detection
NASA Astrophysics Data System (ADS)
Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.
2016-04-01
We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.
Fu, Gang; Shih, Frank Y; Wang, Haimin
2008-11-01
In this paper, we present a novel method to detect Emerging Flux Regions (EFRs) in the solar atmosphere from consecutive full-disk Michelson Doppler Imager (MDI) magnetogram sequences. To our knowledge, this is the first developed technique for automatically detecting EFRs. The method includes several steps. First, the projection distortion on the MDI magnetograms is corrected. Second, the bipolar regions are extracted by applying multiscale circular harmonic filters. Third, the extracted bipolar regions are traced in consecutive MDI frames by Kalman filter as candidate EFRs. Fourth, the properties, such as positive and negative magnetic fluxes and distance between two polarities, are measured in each frame. Finally, a feature vector is constructed for each bipolar region using the measured properties, and the Support Vector Machine (SVM) classifier is applied to distinguish EFRs from other regions. Experimental results show that the detection rate of EFRs is 96.4% and of non-EFRs is 98.0%, and the false alarm rate is 25.7%, based on all the available MDI magnetograms in 2001 and 2002.
Rommens, Nicole; Geertsema, Evelien; Jansen Holleboom, Lisanne; Cox, Fieke; Visser, Gerhard
2018-05-11
User safety and the quality of diagnostics on the epilepsy monitoring unit (EMU) depend on reaction to seizures. Online seizure detection might improve this. While good sensitivity and specificity is reported, the added value above staff response is unclear. We ascertained the added value of two electroencephalograph (EEG) seizure detection algorithms in terms of additional detected seizures or faster detection time. EEG-video seizure recordings of people admitted to an EMU over one year were included, with a maximum of two seizures per subject. All recordings were retrospectively analyzed using Encevis EpiScan and BESA Epilepsy. Detection sensitivity and latency of the algorithms were compared to staff responses. False positive rates were estimated on 30 uninterrupted recordings (roughly 24 h per subject) of consecutive subjects admitted to the EMU. EEG-video recordings used included 188 seizures. The response rate of staff was 67%, of Encevis 67%, and of BESA Epilepsy 65%. Of the 62 seizures missed by staff, 66% were recognized by Encevis and 39% by BESA Epilepsy. The median latency was 31 s (staff), 10 s (Encevis), and 14 s (BESA Epilepsy). After correcting for walking time from the observation room to the subject, both algorithms detected faster than staff in 65% of detected seizures. The full recordings included 617 h of EEG. Encevis had a median false positive rate of 4.9 per 24 h and BESA Epilepsy of 2.1 per 24 h. EEG-video seizure detection algorithms may improve reaction to seizures by improving the total number of seizures detected and the speed of detection. The false positive rate is feasible for use in a clinical situation. Implementation of these algorithms might result in faster diagnostic testing and better observation during seizures. Copyright © 2018. Published by Elsevier Inc.
Parallel Processing of Broad-Band PPM Signals
NASA Technical Reports Server (NTRS)
Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement
2010-01-01
A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for timeslot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively-low speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).
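The basic sub-sampling step, dividing one high-rate sample stream into M parallel streams that slower hardware can process, can be illustrated with the toy Python/NumPy sketch below; the branch count and test signal are made-up values, and the architecture's time-varying filter banks and timing-error loops are not reproduced.

```python
import numpy as np

def demux(x, m):
    """Split a high-rate sample stream into m parallel low-rate branches."""
    n = (len(x) // m) * m                  # trim to a whole number of groups
    return [x[k:n:m] for k in range(m)]    # branch k holds samples k, k+m, k+2m, ...

def remux(branches):
    """Interleave the branches back into a single full-rate stream."""
    return np.stack(branches, axis=1).ravel()

rate = 1.28e9                              # hypothetical high sample rate (Hz)
t = np.arange(4096) / rate
x = np.sin(2 * np.pi * 3.1e6 * t)          # test tone on the broadband stream

branches = demux(x, m=16)                  # 16 parallel streams at 80 MHz each
assert np.allclose(remux(branches), x[: len(remux(branches))])
print(len(branches), "branches of", len(branches[0]), "samples each")
```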
Mesothelioma mortality in Europe: impact of asbestos consumption and simian virus 40
Leithner, Katharina; Leithner, Andreas; Clar, Heimo; Weinhaeusel, Andreas; Radl, Roman; Krippl, Peter; Rehak, Peter; Windhager, Reinhard; Haas, Oskar A; Olschewski, Horst
2006-01-01
Background It is well established that asbestos is the most important cause of mesothelioma. The role of simian virus 40 (SV40) in mesothelioma development, on the other hand, remains controversial. This potential human oncogene has been introduced into various populations through contaminated polio vaccines. The aim of this study was to investigate whether the possible presence of SV40 in various European countries, as indicated either by molecular genetic evidence or previous exposure to SV40-contaminated vaccines, had any effect on pleural cancer rates in the respective countries. Methods We conducted a Medline search that covered the period from January 1969 to August 2005 for reports on the detection of SV40 DNA in human tissue samples. In addition, we collected all available information about the types of polio vaccines that had been used in these European countries and their SV40 contamination status. Results Our ecological analysis confirms that pleural cancer mortality in males, but not in females, correlates with the extent of asbestos exposure 25-30 years earlier. In contrast, neither the presence of SV40 DNA in tumor samples nor a previous vaccination exposure had any detectable influence on the cancer mortality rate in either males (asbestos-corrected rates) or females. Conclusion Using the currently existing data on SV40 prevalence, no association between SV40 prevalence and asbestos-corrected male pleural cancer can be demonstrated. PMID:17090323
Computer Aided Detection (CAD) Systems for Mammography and the Use of GRID in Medicine
NASA Astrophysics Data System (ADS)
Lauria, Adele
It is well known that the most effective way to defeat breast cancer is early detection, as surgery and medical therapies are more efficient when the disease is diagnosed at an early stage. The principal diagnostic technique for breast cancer detection is X-ray mammography. Screening programs have been introduced in many European countries to invite women to have periodic radiological breast examinations. In such screenings, radiologists are often required to examine large numbers of mammograms with a double reading, that is, two radiologists examine the images independently and then compare their results. In this way an increment in sensitivity (the rate of correctly identified images with a lesion) of up to 15% is obtained.1,2 In most radiological centres, it is a rarity to find two radiologists to examine each report. In recent years different Computer Aided Detection (CAD) systems have been developed as a support to radiologists working in mammography: one may hope that the "second opinion" provided by CAD might represent a lower cost alternative to improve the diagnosis. At present, four CAD systems have obtained FDA approval in the USA. Studies3,4 show an increment in sensitivity when CAD systems are used. Freer and Ulissey (2001)5 demonstrated that the use of a commercial CAD system (ImageChecker M1000, R2 Technology) increases the number of cancers detected by up to 19.5% with little increment in recall rate. Ciatto et al.5, in a study simulating a double reading with a commercial CAD system (SecondLook), showed a moderate increment in sensitivity while reducing specificity (the rate of correctly identified images without a lesion). Notwithstanding these optimistic results, there is an ongoing debate to define the advantages of the use of CAD as second reader: the main limits underlined, e.g., by Nishikawa6 are that retrospective studies are considered much too optimistic and that clinical studies must be performed to demonstrate a statistically significant benefit from the use of CAD.
NASA Astrophysics Data System (ADS)
Welcome, Menizibeya O.; Dane, Şenol; Mastorakis, Nikos E.; Pereverzev, Vladimir A.
2017-12-01
The term "metaplasticity" is a recent one, which means plasticity of synaptic plasticity. Correspondingly, neurometaplasticity simply means plasticity of neuroplasticity, indicating that a previous plastic event determines the current plasticity of neurons. Emerging studies suggest that neurometaplasticity underlie many neural activities and neurobehavioral disorders. In our previous work, we indicated that glucoallostasis is essential for the control of plasticity of the neural network that control error commission, detection and correction. Here we review recent works, which suggest that task precision depends on the modulatory effects of neuroplasticity on the neural networks of error commission, detection, and correction. Furthermore, we discuss neurometaplasticity and its role in error commission, detection, and correction.
Rapid diagnostic tests for malaria at sites of varying transmission intensity in Uganda.
Hopkins, Heidi; Bebell, Lisa; Kambale, Wilson; Dokomajilar, Christian; Rosenthal, Philip J; Dorsey, Grant
2008-02-15
In Africa, fever is often treated presumptively as malaria, resulting in misdiagnosis and the overuse of antimalarial drugs. Rapid diagnostic tests (RDTs) for malaria may allow improved fever management. We compared RDTs based on histidine-rich protein 2 (HRP2) and RDTs based on Plasmodium lactate dehydrogenase (pLDH) with expert microscopy and PCR-corrected microscopy for 7000 patients at sites of varying malaria transmission intensity across Uganda. When all sites were considered, the sensitivity of the HRP2-based test was 97% when compared with microscopy and 98% when corrected by PCR; the sensitivity of the pLDH-based test was 88% when compared with microscopy and 77% when corrected by PCR. The specificity of the HRP2-based test was 71% when compared with microscopy and 88% when corrected by PCR; the specificity of the pLDH-based test was 92% when compared with microscopy and >98% when corrected by PCR. Based on Plasmodium falciparum PCR-corrected microscopy, the positive predictive value (PPV) of the HRP2-based test was high (93%) at all but the site with the lowest transmission rate; the pLDH-based test and expert microscopy offered excellent PPVs (98%) for all sites. The negative predictive value (NPV) of the HRP2-based test was consistently high (>97%); in contrast, the NPV for the pLDH-based test dropped significantly (from 98% to 66%) as transmission intensity increased, and the NPV for expert microscopy decreased significantly (99% to 54%) because of increasing failure to detect subpatent parasitemia. Based on the high PPV and NPV, HRP2-based RDTs are likely to be the best diagnostic choice for areas with medium-to-high malaria transmission rates in Africa.
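The way predictive values swing with transmission intensity follows directly from Bayes' rule; the short Python helper below computes PPV and NPV from a test's sensitivity, specificity, and the malaria prevalence among tested patients. The two prevalence values are illustrative assumptions, not figures from the study.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values from Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    fn = (1.0 - sensitivity) * prevalence
    tn = specificity * (1.0 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# HRP2-based RDT vs PCR-corrected microscopy (sensitivity 98%, specificity 88%)
# evaluated at an assumed low- and high-transmission prevalence.
for prevalence in (0.05, 0.50):
    ppv, npv = predictive_values(0.98, 0.88, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.2f}, NPV {npv:.2f}")
```

With the same sensitivity and specificity, the PPV drops sharply at low prevalence while the NPV stays high, which mirrors the pattern reported across the Ugandan sites.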
Morris, Meghan D; Brown, Brandon; Allen, Scott A
2017-09-11
Purpose Worldwide efforts to identify individuals infected with the hepatitis C virus (HCV) focus almost exclusively on community healthcare systems, thereby failing to reach high-risk populations and those with poor access to primary care. In the USA, community-based HCV testing policies and guidelines overlook correctional facilities, where HCV rates are believed to be as high as 40 percent. This is a missed opportunity: more than ten million Americans move through correctional facilities each year. Herein, the purpose of this paper is to examine HCV testing practices in the US correctional system, California and describe how universal opt-out HCV testing could expand early HCV detection, improve public health in correctional facilities and communities, and prove cost-effective over time. Design/methodology/approach A commentary on the value of standardizing screening programs across facilities by mandating all facilities (universal) to implement opt-out testing policies for all prisoners upon entry to the correctional facilities. Findings Current variability in facility-level testing programs results in inconsistent testing levels across correctional facilities, and therefore makes estimating the actual number of HCV-infected adults in the USA difficult. The authors argue that universal opt-out testing policies ensure earlier diagnosis of HCV among a population most affected by the disease and is more cost-effective than selective testing policies. Originality/value The commentary explores the current limitations of selective testing policies in correctional systems and provides recommendations and implications for public health and correctional organizations.
Phase-noise limitations in continuous-variable quantum key distribution with homodyne detection
NASA Astrophysics Data System (ADS)
Corvaja, Roberto
2017-02-01
In continuous-variables quantum key distribution with coherent states, the advantage of performing the detection by using standard telecoms components is counterbalanced by the lack of a stable phase reference in homodyne detection due to the complexity of optical phase-locking circuits and to the unavoidable phase noise of lasers, which introduces a degradation on the achievable secure key rate. Pilot-assisted phase-noise estimation and postdetection compensation techniques are used to implement a protocol with coherent states where a local laser is employed and it is not locked to the received signal, but a postdetection phase correction is applied. Here the reduction of the secure key rate determined by the laser phase noise, for both individual and collective attacks, is analytically evaluated and a scheme of pilot-assisted phase estimation proposed, outlining the tradeoff in the system design between phase noise and spectral efficiency. The optimal modulation variance as a function of the phase-noise amount is derived.
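A toy numerical sketch of pilot-assisted phase estimation with postdetection correction (Python/NumPy): known pilot symbols are interleaved with data symbols, the channel applies a slowly drifting phase plus noise, the phase is estimated at the pilots and interpolated, and the measured quadratures are rotated back. Pilot spacing, drift rate, and noise level are made-up values unrelated to the parameters analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sym, pilot_every = 2000, 20

# Coherent-state symbols: Gaussian-modulated data, known unit-amplitude pilots.
data = (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)) / np.sqrt(2)
tx = data.copy()
pilot_idx = np.arange(0, n_sym, pilot_every)
tx[pilot_idx] = 1.0 + 0j

# Channel: slowly drifting phase (random walk) plus additive Gaussian noise.
phase = np.cumsum(rng.normal(scale=0.01, size=n_sym))
rx = tx * np.exp(1j * phase) + 0.05 * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))

# Pilot-assisted estimate: phase of each received pilot, interpolated over the block.
est_at_pilots = np.unwrap(np.angle(rx[pilot_idx]))        # pilots were sent as 1+0j
est = np.interp(np.arange(n_sym), pilot_idx, est_at_pilots)

corrected = rx * np.exp(-1j * est)
rms_before = np.sqrt(np.mean(np.abs(rx - tx) ** 2))
rms_after = np.sqrt(np.mean(np.abs(corrected - tx) ** 2))
print(f"RMS symbol error before correction: {rms_before:.3f}, after: {rms_after:.3f}")
```

Denser pilots track the drift better but cost spectral efficiency, which is the trade-off the paper quantifies.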
Implementation of continuous-variable quantum key distribution with discrete modulation
NASA Astrophysics Data System (ADS)
Hirano, Takuya; Ichikawa, Tsubasa; Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Namiki, Ryo; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro
2017-06-01
We have developed a continuous-variable quantum key distribution (CV-QKD) system that employs discrete quadrature-amplitude modulation and homodyne detection of coherent states of light. We experimentally demonstrated automated secure key generation with a rate of 50 kbps when a quantum channel is a 10 km optical fibre. The CV-QKD system utilises a four-state and post-selection protocol and generates a secure key against the entangling cloner attack. We used a pulsed light source of 1550 nm wavelength with a repetition rate of 10 MHz. A commercially available balanced receiver is used to realise shot-noise-limited pulsed homodyne detection. We used a non-binary LDPC code for error correction (reverse reconciliation) and the Toeplitz matrix multiplication for privacy amplification. A graphical processing unit card is used to accelerate the software-based post-processing.
Multimode optomechanical system in the quantum regime.
Nielsen, William Hvidtfelt Padkær; Tsaturyan, Yeghishe; Møller, Christoffer Bo; Polzik, Eugene S; Schliesser, Albert
2017-01-03
We realize a simple and robust optomechanical system with a multitude of long-lived (Q > 10^7) mechanical modes in a phononic-bandgap shielded membrane resonator. An optical mode of a compact Fabry-Perot resonator detects these modes' motion with a measurement rate (96 kHz) that exceeds the mechanical decoherence rates already at moderate cryogenic temperatures (10 K). Reaching this quantum regime entails, inter alia, quantum measurement backaction exceeding thermal forces and thus strong optomechanical quantum correlations. In particular, we observe ponderomotive squeezing of the output light mediated by a multitude of mechanical resonator modes, with quantum noise suppression up to -2.4 dB (-3.6 dB if corrected for detection losses) and bandwidths ≲90 kHz. The multimode nature of the membrane and Fabry-Perot resonators will allow multimode entanglement involving electromagnetic, mechanical, and spin degrees of freedom.
Identification of Carbon loss in the production of pilot-scale Carbon nanotube using gauze reactor
NASA Astrophysics Data System (ADS)
Wulan, P. P. D. K.; Purwanto, W. W.; Yeni, N.; Lestari, Y. D.
2018-03-01
Carbon loss of more than 65% was the major obstacle in Carbon Nanotube (CNT) production using a gauze pilot-scale reactor. The results showed that the initial carbon loss estimate is 27.64%. The carbon loss calculation was then refined with several correction parameters: product flow rate measurement error, feed flow rate changes, gas product composition by Gas Chromatography Flame Ionization Detector (GC FID), and carbon particulate captured on glass fiber filters. The error in the product flow rate due to measurement with bubble soap contributes a carbon loss calculation error of about ±4.14%. Changes in the feed flow rate due to CNT growth in the reactor reduce the carbon loss by 4.97%. The detection of secondary hydrocarbons with GC FID during the CNT production process reduces the carbon loss by 5.14%. Particulates carried by the product stream are very few and correct the carbon loss by only about 0.05%. Taking all these factors into account, the carbon loss in this study is (17.21 ± 4.14)%. Assuming that 4.14% of the carbon loss is due to the product flow rate measurement error, the carbon loss is 13.07%. This means that more than 57% of the carbon loss in this study has been identified.
Double ErrP Detection for Automatic Error Correction in an ERP-Based BCI Speller.
Cruz, Aniana; Pires, Gabriel; Nunes, Urbano J
2018-01-01
Brain-computer interface (BCI) is a useful device for people with severe motor disabilities. However, due to its low speed and low reliability, BCI still has very limited application in daily real-world tasks. This paper proposes a P300-based BCI speller combined with double error-related potential (ErrP) detection to automatically correct erroneous decisions. This novel approach introduces a second error detection to infer whether a wrong automatic correction also elicits a second ErrP. Thus, two single-trial responses, instead of one, contribute to the final selection, improving the reliability of error detection. Moreover, to increase error detection, the evoked potential detected as target by the P300 classifier is combined with the evoked error potential at the feature level. Discriminable error and positive potentials (response to correct feedback) were clearly identified. The proposed approach was tested on nine healthy participants and one tetraplegic participant. The online average accuracies for the first and second ErrPs were 88.4% and 84.8%, respectively. With automatic correction, we achieved an improvement of around 5%, reaching 89.9% spelling accuracy at an effective rate of 2.92 symbols/min. The proposed approach revealed that double ErrP detection can improve the reliability and speed of BCI systems.
Acoustic signal detection of manatee calls
NASA Astrophysics Data System (ADS)
Niezrecki, Christopher; Phillips, Richard; Meyer, Michael; Beusse, Diedrich O.
2003-04-01
The West Indian manatee (trichechus manatus latirostris) has become endangered partly because of a growing number of collisions with boats. A system to warn boaters of the presence of manatees, that can signal to boaters that manatees are present in the immediate vicinity, could potentially reduce these boat collisions. In order to identify the presence of manatees, acoustic methods are employed. Within this paper, three different detection algorithms are used to detect the calls of the West Indian manatee. The detection systems are tested in the laboratory using simulated manatee vocalizations from an audio compact disc. The detection method that provides the best overall performance is able to correctly identify ~=96% of the manatee vocalizations. However the system also results in a false positive rate of ~=16%. The results of this work may ultimately lead to the development of a manatee warning system that can warn boaters of the presence of manatees.
Acoustic detection of manatee vocalizations
NASA Astrophysics Data System (ADS)
Niezrecki, Christopher; Phillips, Richard; Meyer, Michael; Beusse, Diedrich O.
2003-09-01
The West Indian manatee (trichechus manatus latirostris) has become endangered partly because of a growing number of collisions with boats. A system to warn boaters of the presence of manatees, that can signal to boaters that manatees are present in the immediate vicinity, could potentially reduce these boat collisions. In order to identify the presence of manatees, acoustic methods are employed. Within this paper, three different detection algorithms are used to detect the calls of the West Indian manatee. The detection systems are tested in the laboratory using simulated manatee vocalizations from an audio compact disk. The detection method that provides the best overall performance is able to correctly identify ~96% of the manatee vocalizations. However, the system also results in a false alarm rate of ~16%. The results of this work may ultimately lead to the development of a manatee warning system that can warn boaters of the presence of manatees.
2010-01-01
Background Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to navigate this important information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch. PMID:20682041
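A greatly simplified sketch of the candidate-detection step (phase 2) in Python: a regular expression stands in for the finite-state-machine recognizers and flags runs of IUPAC nucleotide codes of primer-like length in free text. The length bounds, the example sentence, and the use of a regex instead of the paper's recognizers and rule-based refinement are assumptions for illustration.

```python
import re

# IUPAC nucleotide codes (ACGT/U plus ambiguity codes), primer-like length 15-40 nt.
CANDIDATE = re.compile(r"\b[ACGTURYSWKMBDHVN]{15,40}\b")

def candidate_sequences(text):
    """Return primer/probe candidate sequences found in a passage of text."""
    return CANDIDATE.findall(text.upper())

passage = ("The forward primer 5'-TGGGCTACACACGTGCTACAATG-3' and the probe "
           "CAACGAGCGCAACCC were used to amplify the 16S rRNA gene.")
print(candidate_sequences(passage))
# -> ['TGGGCTACACACGTGCTACAATG', 'CAACGAGCGCAACCC']
```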
Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W
2012-09-07
A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one-dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique greatly reduced or eliminated the background artifacts as well and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise which led to detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
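A bare-bones sketch of the general idea in Python/NumPy, not the published SVD-BC procedure: the leading left singular vectors of a blank (background-only) chromatogram matrix define a background subspace, and the sample matrix is corrected by subtracting its projection onto that subspace. The synthetic blank, the peak, and the choice of a single background component are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_t2, n_inj = 200, 40        # 2nd-dimension time points x 1st-dimension injections

# Synthetic background common to blank and sample: a decaying baseline that drifts
# slowly from injection to injection, plus detector noise.
t2 = np.linspace(0.0, 1.0, n_t2)[:, None]
drift = np.linspace(1.0, 1.5, n_inj)[None, :]
background = 0.5 * np.exp(-3.0 * t2) * drift
blank = background + 0.01 * rng.normal(size=(n_t2, n_inj))

# Sample = background + one chromatographic peak.
inj = np.arange(n_inj)[None, :]
peak = 2.0 * np.exp(-0.5 * ((t2 - 0.4) / 0.02) ** 2) \
           * np.exp(-0.5 * ((inj - 20) / 2.0) ** 2)
sample = background + peak + 0.01 * rng.normal(size=(n_t2, n_inj))

# Background subspace = leading left singular vectors of the blank.
k = 1
U, s, Vt = np.linalg.svd(blank, full_matrices=False)
B = U[:, :k]
corrected = sample - B @ (B.T @ sample)     # remove the projection onto the background

print(f"residual background away from the peak: {np.abs(corrected[t2[:, 0] > 0.7]).max():.3f}")
print(f"peak apex before / after correction: {sample.max():.2f} / {corrected.max():.2f}")
```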
Method of wavefront tilt correction for optical heterodyne detection systems under strong turbulence
NASA Astrophysics Data System (ADS)
Xiang, Jing-song; Tian, Xin; Pan, Le-chun
2014-07-01
Atmospheric turbulence decreases the heterodyne mixing efficiency of optical heterodyne detection systems. Wavefront tilt correction is often used to improve the optical heterodyne mixing efficiency, but the performance of traditional centroid tracking tilt correction is poor under strong turbulence conditions. In this paper, a tilt correction method that tracks the peak value of the laser spot on the focal plane is proposed. Simulation results show that, under strong turbulence conditions, the performance of peak value tracking tilt correction is distinctly better than that of the traditional centroid tracking method, and the phenomenon of a large antenna performing worse than a small antenna, which may occur with centroid tracking tilt correction, can also be avoided with peak value tracking tilt correction.
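The difference between the two tracking rules can be sketched in a few lines of Python/NumPy: for a focal-plane intensity image, centroid tracking steers on the intensity-weighted mean position, while peak-value tracking steers on the brightest (mildly smoothed) pixel. The synthetic spot broken up by a stray halo and the smoothing width are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def centroid_tilt(img):
    """Intensity-weighted mean position (classic centroid tracking)."""
    y, x = np.indices(img.shape)
    total = img.sum()
    return img.ravel() @ y.ravel() / total, img.ravel() @ x.ravel() / total

def peak_tilt(img, smooth_px=2.0):
    """Location of the brightest pixel after mild smoothing (peak-value tracking)."""
    smoothed = gaussian_filter(img, smooth_px)
    return np.unravel_index(np.argmax(smoothed), smoothed.shape)

# Synthetic focal-plane spot under strong turbulence: a bright compact speckle
# off-centre plus a faint, widely spread halo that biases the centroid.
rng = np.random.default_rng(3)
y, x = np.indices((64, 64))
img = 1.0 * np.exp(-((y - 30) ** 2 + (x - 34) ** 2) / (2 * 2.0 ** 2))   # bright speckle
img += 0.2 * np.exp(-((y - 50) ** 2 + (x - 10) ** 2) / (2 * 8.0 ** 2))  # stray halo
img += 0.01 * rng.random(img.shape)

print("centroid estimate (row, col):", tuple(round(float(v), 1) for v in centroid_tilt(img)))
print("peak estimate     (row, col):", peak_tilt(img))
# The tilt correction would steer the beam by the offset of the chosen
# estimate from the detector centre (32, 32); the centroid is pulled toward
# the halo, while the peak estimate stays on the bright speckle.
```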
Error Detection/Correction in Collaborative Writing
ERIC Educational Resources Information Center
Pilotti, Maura; Chodorow, Martin
2009-01-01
In the present study, we examined error detection/correction during collaborative writing. Subjects were asked to identify and correct errors in two contexts: a passage written by the subject (familiar text) and a passage written by a person other than the subject (unfamiliar text). A computer program inserted errors in function words prior to the…
Performance Analysis of a Pole and Tree Trunk Detection Method for Mobile Laser Scanning Data
NASA Astrophysics Data System (ADS)
Lehtomäki, M.; Jaakkola, A.; Hyyppä, J.; Kukko, A.; Kaartinen, H.
2011-09-01
Dense point clouds can be collected efficiently from large areas using mobile laser scanning (MLS) technology. Accurate MLS data can be used for detailed 3D modelling of the road surface and objects around it. The 3D models can be utilised, for example, in street planning and maintenance and noise modelling. Utility poles, traffic signs, and lamp posts can be considered an important part of road infrastructure. Poles and trees stand out from the environment and should be included in realistic 3D models. Detection of narrow vertical objects, such as poles and tree trunks, from MLS data was studied. MLS produces huge amounts of data and, therefore, processing methods should be as automatic as possible and for the methods to be practical, the algorithms should run in an acceptable time. The automatic pole detection method tested in this study is based on first finding point clusters that are good candidates for poles and then separating poles and tree trunks from other clusters using features calculated from the clusters and by applying a mask that acts as a model of a pole. The method achieved detection rates of 77.7% and 69.7% in the field tests while 81.0% and 86.5% of the detected targets were correct. Pole-like targets that were surrounded by other objects, such as tree trunks that were inside branches, were the most difficult to detect. Most of the false detections came from wall structures, which could be corrected in further processing.
Selig, L; Guedes, R; Kritski, A; Spector, N; Lapa E Silva, J R; Braga, J U; Trajman, A
2009-08-01
In 2006, 848 persons died from tuberculosis (TB) in Rio de Janeiro, Brazil, corresponding to a mortality rate of 5.4 per 100 000 population. No specific TB death surveillance actions are currently in place in Brazil. Two public general hospitals with large open emergency rooms in Rio de Janeiro City. To evaluate the contribution of TB death surveillance in detecting gaps in TB control. We conducted a survey of TB deaths from September 2005 to August 2006. Records of TB-related deaths and deaths due to undefined causes were investigated. Complementary data were gathered from the mortality and TB notification databases. Seventy-three TB-related deaths were investigated. Transmission hazards were identified among firefighters, health care workers and in-patients. Management errors included failure to isolate suspected cases, to confirm TB, to correct drug doses in underweight patients and to trace contacts. Following the survey, 36 cases that had not previously been notified were included in the national TB notification database and the outcome of 29 notified cases was corrected. TB mortality surveillance can contribute to TB monitoring and evaluation by detecting correctable and specific programme- and hospital-based care errors, and by improving the accuracy of TB database reporting. Specific local and programmatic interventions can be proposed as a result.
Paci, Eugenio; Miccinesi, Guido; Puliti, Donella; Baldazzi, Paola; De Lisi, Vincenzo; Falcini, Fabio; Cirilli, Claudia; Ferretti, Stefano; Mangone, Lucia; Finarelli, Alba Carola; Rosso, Stefano; Segnan, Nereo; Stracci, Fabrizio; Traina, Adele; Tumino, Rosario; Zorzi, Manuel
2006-01-01
Introduction Excess of incidence rates is the expected consequence of service screening. The aim of this paper is to estimate the quota attributable to overdiagnosis in the breast cancer screening programmes in Northern and Central Italy. Methods All patients with breast cancer diagnosed between 50 and 74 years who were resident in screening areas in the six years before and five years after the start of the screening programme were included. We calculated a corrected-for-lead-time number of observed cases for each calendar year. The number of observed incident cases was reduced by the number of screen-detected cases in that year and incremented by the estimated number of screen-detected cases that would have arisen clinically in that year. Results In total we included 13,519 and 13,999 breast cancer cases diagnosed in the pre-screening and screening years, respectively. In total, the excess ratio of observed to predicted in situ and invasive cases was 36.2%. After correction for lead time the excess ratio was 4.6% (95% confidence interval 2 to 7%) and for invasive cases only it was 3.2% (95% confidence interval 1 to 6%). Conclusion The remaining excess of cancers after individual correction for lead time was lower than 5%. PMID:17147789
New double-byte error-correcting codes for memory systems
NASA Technical Reports Server (NTRS)
Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.
1996-01-01
Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.
Yu, Zhihao; Miller, Haylea C; Puzon, Geoffrey J; Clowers, Brian H
2017-04-18
Despite comparatively low levels of infection, primary amoebic meningoencephalitis (PAM) induced by Naegleria fowleri is extremely lethal, with mortality rates above 95%. As a thermophile, this organism is often found in moderate-to-warm climates and has the potential to colonize drinking water distribution systems (DWDSs). Current detection approaches require days to obtain results, whereas swift corrective action can maximize the benefit of public health. Presently, there is little information regarding the underlying in situ metabolism for this amoeba but the potential exists to exploit differentially expressed metabolic signatures as a rapid detection technique. This research outlines the biochemical profiles of selected pathogenic and nonpathogenic Naegleria in vitro using an untargeted metabolomics approach to identify a panel of diagnostically meaningful compounds that may enable rapid detection of viable pathogenic N. fowleri and augment results from traditional monitoring approaches.
Railway obstacle detection algorithm using neural network
NASA Astrophysics Data System (ADS)
Yu, Mingyang; Yang, Peng; Wei, Sen
2018-05-01
Aiming at the difficulty of obstacle detection in outdoor railway scenes, a data-oriented method based on a neural network is proposed to detect image objects. First, we annotate objects in images (such as people, trains, and animals) acquired from the Internet, and then use residual learning units to build a Fast R-CNN framework. Then, the neural network is trained to learn the target image characteristics using a stochastic gradient descent algorithm. Finally, the trained model is used to identify objects in an outdoor railway image; if it includes trains or other objects, an alert is issued. Experiments show that the correct warning rate reached 94.85%.
Detection of defects on apple using B-spline lighting correction method
NASA Astrophysics Data System (ADS)
Li, Jiangbo; Huang, Wenqian; Guo, Zhiming
To effectively extract defective areas in fruits, the uneven intensity distribution that was produced by the lighting system or by part of the vision system in the image must be corrected. A methodology was used to convert non-uniform intensity distribution on spherical objects into a uniform intensity distribution. A basically plane image with the defective area having a lower gray level than this plane was obtained by using proposed algorithms. Then, the defective areas can be easily extracted by a global threshold value. The experimental results with a 94.0% classification rate based on 100 apple images showed that the proposed algorithm was simple and effective. This proposed method can be applied to other spherical fruits.
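A rough sketch of the flat-field idea in Python/SciPy, with a broad Gaussian low-pass standing in for the paper's B-spline illumination surface: the smoothed image approximates the uneven lighting, the image is divided by it to give a basically flat result, and the defect is pulled out with a single global threshold. The synthetic apple image, the smoothing width, and the threshold are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic apple image: strong spherical shading (bright centre, darker rim)
# plus a small dark defect patch.
y, x = np.indices((128, 128))
r2 = (y - 64) ** 2 + (x - 64) ** 2
img = 200.0 * np.exp(-r2 / (2 * 45.0 ** 2))      # non-uniform lighting on a sphere
img[84:92, 90:98] -= 60.0                        # 8 x 8 px defect, locally darker

# Estimate the slowly varying illumination; a broad Gaussian low-pass stands in
# for the smooth B-spline surface used in the paper.
illumination = gaussian_filter(img, sigma=15)

flat = img / np.maximum(illumination, 1e-6)      # roughly flat, defect stays darker
defect_mask = flat < 0.7                         # single global threshold

print("defect pixels segmented:", int(defect_mask.sum()), "(true defect: 64 px)")
```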
Hukerikar, Saurabh; Teranishi, Keita; Diniz, Pedro C.; ...
2017-02-11
In the presence of accelerated fault rates, which are projected to be the norm on future exascale systems, it will become increasingly difficult for high-performance computing (HPC) applications to accomplish useful computation. Due to the fault-oblivious nature of current HPC programming paradigms and execution environments, HPC applications are insufficiently equipped to deal with errors. We believe that HPC applications should be enabled with capabilities to actively search for and correct errors in their computations. The redundant multithreading (RMT) approach offers lightweight replicated execution streams of program instructions within the context of a single application process. Furthermore, the use of complete redundancy incurs significant overhead to the application performance.
Dubreuil, L; Behra-Miellet, J; Vouillot, C; Bland, S; Sedallian, A; Mory, F
2003-03-01
This study looked for beta-lactamase production in 100 Prevotella isolates. MICs were determined for amoxycillin, ticarcillin, amoxycillin+clavulanate, cephalothin, cefuroxime, cefixime, cefpodoxime and cefotaxime using the reference agar dilution method (standard M11 A4, NCCLS). Beta-lactamase activity was detected in 58 of the 100 isolates, 24 of 46 black-pigmented Prevotella and 34 of 54 non-pigmented Prevotella. All beta-lactamase-negative strains were susceptible to all beta-lactam antibiotics with the exception of cefuroxime and cefixime. Overall, resistance rates of Prevotella strains were lower for ticarcillin (8%) and cefotaxime (12%) than for the other cephalosporins. All Prevotella isolates were susceptible to amoxycillin and were all inhibited by 2 mg/l or less amoxycillin [corrected].
Protocol Processing for 100 Gbit/s and Beyond - A Soft Real-Time Approach in Hardware and Software
NASA Astrophysics Data System (ADS)
Büchner, Steffen; Lopacinski, Lukasz; Kraemer, Rolf; Nolte, Jörg
2017-09-01
100 Gbit/s wireless communication protocol processing stresses all parts of a communication system to their limits. The efficient use of upcoming 100 Gbit/s and beyond transmission technology requires rethinking the way protocols are processed by the communication endpoints. This paper summarizes the achievements of the project End2End100. We present a comprehensive soft real-time stream processing approach that allows the protocol designer to develop, analyze, and plan scalable protocols for ultra-high data rates of 100 Gbit/s and beyond. Furthermore, we present an ultra-low-power, adaptable, and massively parallelized FEC (Forward Error Correction) scheme that detects and corrects bit errors at line rate with an energy consumption between 1 pJ/bit and 13 pJ/bit. The evaluation results discussed in this publication show that our comprehensive approach allows end-to-end communication with a very low protocol processing overhead.
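The line-rate FEC itself is a hardware design and is not reproduced here, but the detect-and-correct principle can be illustrated with a textbook Hamming(7,4) code; the sketch below is purely illustrative and is unrelated to the project's actual coding scheme.

```python
# Single-error-correcting FEC illustration: Hamming(7,4), parity bits at positions 1, 2, 4.

def hamming74_encode(d):                 # d: list of 4 data bits
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # codeword positions 1..7

def hamming74_decode(c):                 # c: list of 7 received bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]       # syndrome bit covering positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]       # covers positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]       # covers positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3           # 0 means "no error detected"
    if pos:
        c = c.copy()
        c[pos - 1] ^= 1                  # correct the single flipped bit
    return [c[2], c[4], c[5], c[6]]      # recovered data bits

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[4] ^= 1                         # inject a single bit error
assert hamming74_decode(codeword) == data
```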
The pieces fit: Constituent structure and global coherence of visual narrative in RSVP.
Hagmann, Carl Erick; Cohn, Neil
2016-02-01
Recent research has shown that comprehension of visual narrative relies on the ordering and timing of sequential images. Here we tested if rapidly presented 6-image long visual sequences could be understood as coherent narratives. Half of the sequences were correctly ordered and half had two of the four internal panels switched. Participants reported whether the sequence was correctly ordered and rated its coherence. Accuracy in detecting a switch increased when panels were presented for 1 s rather than 0.5 s. Doubling the duration of the first panel did not affect results. When two switched panels were further apart, order was discriminated more accurately and coherence ratings were low, revealing that a strong local adjacency effect influenced order and coherence judgments. Switched panels at constituent boundaries or within constituents were most disruptive to order discrimination, indicating that the preservation of constituent structure is critical to visual narrative grammar. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
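A 16-bit CRC of the kind mentioned above can be sketched as follows; the polynomial 0x1021 and all-ones initial value follow the common CRC-16-CCITT convention, which is assumed (not verified here) to match the CCSDS recommendation.

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise 16-bit CRC (MSB-first); returns the checksum as an integer."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry frame payload"
checksum = crc16_ccitt(frame)
# The receiver recomputes the CRC over the received frame; a mismatch flags an error.
assert crc16_ccitt(frame) == checksum
assert crc16_ccitt(frame + b"!") != checksum
```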
Mühlematter, Urs J; Nagel, Hannes W; Becker, Anton; Mueller, Julian; Vokinger, Kerstin N; de Galiza Barbosa, Felipe; Ter Voert, Edwin E G T; Veit-Haibach, Patrick; Burger, Irene A
2018-05-31
Accurate attenuation correction (AC) is an inherent problem of positron emission tomography magnetic resonance imaging (PET/MRI) systems. Simulation studies showed that time-of-flight (TOF) detectors can reduce PET quantification errors in MRI-based AC. However, its impact on lesion detection in a clinical setting with 18F-choline has not yet been evaluated. Therefore, we compared TOF and non-TOF 18F-choline PET for absolute and relative difference in standard uptake values (SUV) and investigated the detection rate of metastases in prostate cancer patients. Non-TOF SUV was significantly lower compared to TOF in all osseous structures, except the skull, in primary lesions of the prostate, and in pelvic nodal and osseous metastasis. Concerning lymph node metastases, both experienced readers detected 16/19 (84%) on TOF PET, whereas on non-TOF PET readers 1 and 2 detected 11 (58%), and 14 (73%), respectively. With TOF PET readers 1 and 2 detected 14/15 (93%) and 11/15 (73%) bone metastases, respectively, whereas detection rate with non-TOF PET was 73% (11/15) for reader 1 and 53% (8/15) for reader 2. The interreader agreement was good for osseous metastasis detection on TOF (kappa 0.636, 95% confidence interval [CI] 0.453-0.810) and moderate on non-TOF (kappa = 0.600, CI 0.438-0.780). TOF reconstruction for 18F-choline PET/MRI shows higher SUV measurements compared to non-TOF reconstructions in physiological osseous structures as well as pelvic malignancies. Our results suggest that addition of TOF information has a positive impact on lesion detection rate for lymph node and bone metastasis in prostate cancer patients.
NASA Astrophysics Data System (ADS)
van Gastel, Mark; Balmaekers, Benoît; Bambang Oetomo, Sidarto; Verkruysse, Wim
2018-02-01
Currently, the cardiac activity of infants in the Neonatal Intensive Care Unit (NICU) is monitored with contact sensors. These techniques can cause injuries and infections, particularly in very premature infants with fragile skin. Recently, remote photoplethysmography (rPPG) showed its potential to measure cardiac activity with a camera without skin contact. The main limitations of this technique are its lack of robustness to subject motion and visible light requirements. The aim of this study is to investigate the feasibility of robust rPPG for NICU patients in near darkness. Video recordings using dedicated infrared illumination were made of 7 infants, age 30-33 weeks, at a NICU in Eindhoven, The Netherlands. The pulse rate can be detected with an average error of 1.5 BPM and 2.1 BPM when measured at the face and upper torso region, respectively. Overall, the correct pulse rate is detected for 87% of the time. A camera-based framework for robust pulse extraction in near darkness of NICU patients was proposed and successfully validated. The pulse rate could be reliably detected from all evaluated skin regions. Recordings with vigorous body movements, involving occlusion of the selected skin region, are still a challenge.
NASA Astrophysics Data System (ADS)
Sun, W.; Miura, S.; Sato, T.; Sugano, T.; Freymueller, J.; Kaufman, M.; Larsen, C. F.; Cross, R.; Inazu, D.
2010-12-01
For the past 300 years, southeastern Alaska has undergone rapid ice-melting and land uplift attributable to global warming. Corresponding crustal deformation (3 cm/yr) caused by the Little Ice Age retreat is detectable with modern geodetic techniques such as GPS and tide gauge measurements. Geodetic deformation provides useful information for assessing ice-melting rates, global warming effects, and subcrustal viscosity. Nevertheless, integrated geodetic observations, including gravity measurements, are important. To detect crustal deformation caused by glacial isostatic adjustment and to elucidate the viscosity structure in southeastern Alaska, Japanese and U.S. researchers began a joint 3-year project in 2006 using GPS, Earth tide, and absolute gravity measurements. A new absolute gravity network was established, comprising five sites around Glacier Bay, near Juneau, Alaska. This paper reports the network's gravity measurements during 2006-2008. The poor ocean tide model available for this area hindered the ocean loading correction: large tidal residuals remained in the observations, so accurate tidal correction required on-site tide observations. Results show high observation precision for all five stations: <1 μGal. The gravity rate of change was found to be -3.5 to -5.6 μGal/yr in the gravity network. Furthermore, gravity results obtained during the 3 years indicate a similar gravity change rate. These gravity data are expected to find application in geophysical studies of southeastern Alaska. Using gravity and vertical displacement data, we constructed a quantity to remove viscoelastic effects. The observations are thus useful to constrain present-day ice thickness changes. A gravity bias of about -13.2 ± 0.1 mGal exists between the Potsdam and current FG5 gravity data.
Fuel cell flooding detection and correction
DiPierno Bosco, Andrew; Fronk, Matthew Howard
2000-08-15
Method and apparatus for monitoring H2-O2 PEM fuel cells to detect and correct flooding. The pressure drop across a given H2 or O2 flow field is monitored and compared to predetermined thresholds of unacceptability. If the pressure drop exceeds a threshold of unacceptability, corrective measures are automatically initiated.
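A minimal sketch of this monitoring logic is given below; the threshold value, the units, and the corrective action are hypothetical placeholders, not figures from the patent.

```python
# Flooding-detection sketch: compare the flow-field pressure drop to a threshold.
FLOODING_DP_THRESHOLD_KPA = 12.0   # hypothetical limit of acceptability

def initiate_corrective_measures(dp):
    # Placeholder corrective action, e.g. temporarily purging the flow field.
    print(f"Flooding suspected (dP = {dp:.1f} kPa): purging flow field")

def check_flow_field(pressure_in_kpa, pressure_out_kpa):
    """Return True (and trigger correction) when the pressure drop exceeds the threshold."""
    dp = pressure_in_kpa - pressure_out_kpa
    if dp > FLOODING_DP_THRESHOLD_KPA:
        initiate_corrective_measures(dp)
        return True
    return False

check_flow_field(118.0, 104.5)
```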
Cowans, Nicholas J; Stamatopoulou, Anastasia; Maiz, Nerea; Spencer, Kevin; Nicolaides, Kypros H
2009-06-01
To investigate if fetal sex has an impact on 1st trimester combined screening for aneuploidy. We studied the first trimester PAPP-A, free beta-human chorionic gonadotropin (beta-hCG) and nuchal translucency levels in 56,024 normal, singleton pregnancies with known fetal sex at birth. We also examined the distributions in 722 pregnancies with trisomy 21 of known fetal sex. We found a 14.74% increase in first trimester maternal serum (MS) median free beta-hCG MoM, a 6.25% increase in PAPP-A and a 9.41% decrease in delta NT when the fetus was female. Analysis of the data showed that women carrying a female fetus were 1.084 times more likely to be in the 'at risk' group than those carrying a male fetus. In examining data from 722 pregnancies in which the fetus was affected by trisomy 21, we observed a similar 20.8% increase in free beta-hCG MoM, a 5.7% increase in PAPP-A and a 12% decrease in delta NT when the fetus was female. Amongst the trisomy 21 cases, 88.8% of male trisomy 21 cases were detected compared with 91.2% of female cases; this difference was not statistically significant. Correcting for fetal sex redressed the balance in screen-positive rate between the sexes and had a minimal impact on detection rate. Correcting for fetal sex may be a worthwhile consideration. A cost-benefit analysis would be required to determine if it is feasible to introduce fetal gender assignment into the routine first trimester scan for the purpose of marker correction and whether this would have any significant impact. (c) 2009 John Wiley & Sons, Ltd.
Use of standardized patients to assess quality of tuberculosis care: a pilot, cross-sectional study
Das, Jishnu; Kwan, Ada; Daniels, Ben; Satyanarayana, Srinath; Subbaraman, Ramnath; Bergkvist, Sofi; Das, Ranendra K.; Das, Veena; Pai, Madhukar
2015-01-01
SUMMARY Background: Existing studies on quality of tuberculosis care mostly reflect knowledge, not actual practice. Methods: We conducted a validation study on the use of standardized patients (SPs) for assessing quality of TB care. Four cases, two for presumed TB and one each for confirmed TB and suspected MDR-TB, were presented by 17 SPs, with 250 SP interactions among 100 consenting providers in Delhi, including qualified (29%), alternative medicine (40%) and informal providers (31%). Validation criteria were: (1) negligible risk and ability to avoid adverse events for providers and SPs; (2) low detection rates of SPs by providers, and (3) data accuracy across SPs and audio verification of SP recall. We used medical vignettes to assess provider knowledge for presumed TB. Correct case management was benchmarked using Standards for TB Care in India (STCI). Findings: SPs were deployed with low detection rates (4.7% of 232 interactions), high correlation of recall with audio recordings (r=0.63; 95% CI: 0.53 – 0.79), and no safety concerns. Average consultation length was 6 minutes with 6.2 questions/exams completed, representing 35% (95% confidence interval [CI]: 33%–38%) of essential checklist items. Across all cases, only 52 of 250 (21%; 95% CI: 16%–26%) were correctly managed. Correct management was higher among MBBS doctors (adjusted OR=2.41, 95% CI: 1.17–4.93) as compared to all others. Provider knowledge in the vignettes was markedly more consistent with STCI than their practice. Interpretation: The SP methodology can be successfully implemented to assess TB care. Our data suggest a big gap between provider knowledge and practice. PMID:26268690
Bliem, Rupert; Schauer, Sonja; Plicka, Helga; Obwaller, Adelheid; Sommer, Regina; Steinrigl, Adolf; Alam, Munirul; Reischer, Georg H.; Farnleitner, Andreas H.
2015-01-01
Vibrio cholerae is a severe human pathogen and a frequent member of aquatic ecosystems. Quantification of V. cholerae in environmental water samples is therefore fundamental for ecological studies and health risk assessment. Besides time-consuming cultivation techniques, quantitative PCR (qPCR) has the potential to provide reliable quantitative data and offers the opportunity to quantify multiple targets simultaneously. A novel triplex qPCR strategy was developed in order to simultaneously quantify toxigenic and nontoxigenic V. cholerae in environmental water samples. To obtain quality-controlled PCR results, an internal amplification control was included. The qPCR assay was specific, highly sensitive, and quantitative across the tested 5-log dynamic range down to a method detection limit of 5 copies per reaction. Repeatability and reproducibility were high for all three tested target genes. For environmental application, global DNA recovery (GR) rates were assessed for drinking water, river water, and water from different lakes. GR rates ranged from 1.6% to 76.4% and were dependent on the environmental background. Uncorrected and GR-corrected V. cholerae abundances were determined in two lakes with extremely high turbidity. Uncorrected abundances ranged from 4.6 × 10² to 2.3 × 10⁴ cell equivalents liter⁻¹, whereas GR-corrected abundances ranged from 4.7 × 10³ to 1.6 × 10⁶ cell equivalents liter⁻¹. GR-corrected qPCR results were in good agreement with an independent cell-based direct detection method but were up to 1.6 log higher than cultivation-based abundances. We recommend the newly developed triplex qPCR strategy as a powerful tool to simultaneously quantify toxigenic and nontoxigenic V. cholerae in various aquatic environments for ecological studies as well as for risk assessment programs. PMID:25724966
Optimizing the TESS Planet Finding Pipeline
NASA Astrophysics Data System (ADS)
Chitamitara, Aerbwong; Smith, Jeffrey C.; Tenenbaum, Peter; TESS Science Processing Operations Center
2017-10-01
The Transiting Exoplanet Survey Satellite (TESS) is a new NASA planet-finding all-sky survey that will observe stars within 200 light years that are 10-100 times brighter than those observed by the highly successful Kepler mission. TESS is expected to detect ~1000 planets smaller than Neptune and dozens of Earth-size planets. As in the Kepler mission, the Science Processing Operations Center (SPOC) processing pipeline at NASA Ames Research Center is tasked with calibrating the raw pixel data, generating systematic error corrected light curves and then detecting and validating transit signals. The Transiting Planet Search (TPS) component of the pipeline must be modified and tuned for the new data characteristics in TESS. For example, due to each sector being viewed for as little as 28 days, the pipeline will be identifying transiting planets based on a minimum of two transit signals rather than three, as in the Kepler mission. This may result in a significantly higher false positive rate. The study presented here measures the detection efficiency of the TESS pipeline using simulated data. Transiting planets identified by TPS are compared to transiting planets from the simulated transit model using the measured epochs, periods, transit durations and the expected detection statistic of injected transit signals (expected MES). From the comparisons, the recovery and false positive rates of TPS are measured. Measurements of recovery in TPS are then used to adjust TPS configuration parameters to maximize the planet recovery rate and minimize false detections. The improvements in recovery rate between initial TPS conditions and after various adjustments will be presented and discussed.
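The recovery-rate measurement could be sketched roughly as below, assuming injected and detected transits are matched on period and epoch within tolerances; the tolerances and numbers are illustrative, not the pipeline's actual matching criteria.

```python
# Toy injected/detected transit lists as (period [days], epoch [days]) pairs.
injected = [(3.21, 1325.40), (7.80, 1330.10), (13.5, 1327.90)]
detected = [(3.208, 1325.43), (27.0, 1326.00)]

def matches(inj, det, period_tol=0.01, epoch_tol=0.05):
    """A detection matches an injection if period and epoch agree within tolerances."""
    return abs(inj[0] - det[0]) < period_tol and abs(inj[1] - det[1]) < epoch_tol

recovered = sum(any(matches(i, d) for d in detected) for i in injected)
false_pos = sum(not any(matches(i, d) for i in injected) for d in detected)
print(f"recovery rate: {recovered / len(injected):.2f}, false positives: {false_pos}")
```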
A new leakage measurement method for damaged seal material
NASA Astrophysics Data System (ADS)
Wang, Shen; Yao, Xue Feng; Yang, Heng; Yuan, Li; Dong, Yi Feng
2018-07-01
In this paper, a new leakage measurement method based on the temperature field and temperature gradient field is proposed for detecting the leakage location and measuring the leakage rate in damaged seal material. First, a heat transfer leakage model is established, which can calculate the leakage rate based on the temperature gradient field near the damaged zone. Second, a finite element model of an infinite plate with a damaged zone is built to calculate the leakage rate, which fits the simulated leakage rate well. Finally, specimens in a tubular rubber seal with different damage shapes are used to conduct the leakage experiment, validating the correctness of this new measurement principle for the leakage rate and the leakage position. The results indicate the feasibility of the leakage measurement method for damaged seal material based on the temperature gradient field from infrared thermography.
NASA Astrophysics Data System (ADS)
Edler, Karl T.
The issue of eddy currents induced by the rapid switching of magnetic field gradients is a long-standing problem in magnetic resonance imaging. A new method for dealing with this problem is presented whereby spatial harmonic components of the magnetic field are continuously sensed, through their temporal rates of change, and corrected. In this way, the effects of the eddy currents on multiple spatial harmonic components of the magnetic field can be detected and corrections applied during the rise time of the gradients. Sensing the temporal changes in each spatial harmonic is made possible with specially designed detection coils. However to make the design of these coils possible, general relationships between the spatial harmonics of the field, scalar potential, and vector potential are found within the quasi-static approximation. These relationships allow the vector potential to be found from the field -- an inverse curl operation -- and may be of use beyond the specific problem of detection coil design. Using the detection coils as sensors, methods are developed for designing a negative feedback system to control the eddy current effects and optimizing that system with respect to image noise and distortion. The design methods are successfully tested in a series of proof-of-principle experiments which lead to a discussion of how to incorporate similar designs into an operational MRI. Keywords: magnetic resonance imaging, eddy currents, dynamic shimming, negative feedback, quasi-static fields, vector potential, inverse curl
Ji, Young-Yong; Kim, Chang-Jong; Lim, Kyo-Sun; Lee, Wanno; Chang, Hyon-Sock; Chung, Kun Ho
2017-10-01
To expand the application of dose rate spectroscopy to the environment, the method using an environmental radiation monitor (ERM) based on a 3' × 3' NaI(Tl) detector was used to perform real-time monitoring of the dose rate and radioactivity for detected gamma nuclides in the ground around an ERM. Full-energy absorption peaks in the energy spectrum for dose rate were first identified to calculate the individual dose rates of Bi, Ac, Tl, and K distributed in the ground through interference correction because of the finite energy resolution of the NaI(Tl) detector used in an ERM. The radioactivity of the four natural radionuclides was then calculated from the in situ calibration factor (that is, the dose rate per unit curie) of the ERM used for the geometry of the ground in infinite half-space, which was theoretically estimated by Monte Carlo simulation. By an intercomparison using a portable HPGe and samples taken from the ground around an ERM, this method to calculate the dose rate and radioactivity of four nuclides using an ERM was experimentally verified and finally applied to remotely monitor them in real-time in the area in which the ERM had been installed.
Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty
NASA Technical Reports Server (NTRS)
Mather, Janice L.; Taylor, Shawn C.
2015-01-01
In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
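The root-sum-square combination described above amounts to the following; the component names mirror the paper, but the magnitudes are invented for illustration.

```python
import math

# Illustrative uncertainty contributions, all expressed in the same leak-rate units.
components = {
    "resolution":    0.05e-9,
    "repeatability": 0.12e-9,
    "hysteresis":    0.04e-9,
    "drift":         0.06e-9,
    "cal_standard":  0.10e-9,
}

# Root-sum-square (RSS) combination of the individual contributions.
total_uncertainty = math.sqrt(sum(u ** 2 for u in components.values()))
print(f"combined (RSS) uncertainty: {total_uncertainty:.2e}")
```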
Multimode optomechanical system in the quantum regime
Nielsen, William Hvidtfelt Padkær; Tsaturyan, Yeghishe; Møller, Christoffer Bo; Polzik, Eugene S.; Schliesser, Albert
2017-01-01
We realize a simple and robust optomechanical system with a multitude of long-lived (Q > 10⁷) mechanical modes in a phononic-bandgap shielded membrane resonator. An optical mode of a compact Fabry–Perot resonator detects these modes' motion with a measurement rate (96 kHz) that exceeds the mechanical decoherence rates already at moderate cryogenic temperatures (10 K). Reaching this quantum regime entails, inter alia, quantum measurement backaction exceeding thermal forces and thus strong optomechanical quantum correlations. In particular, we observe ponderomotive squeezing of the output light mediated by a multitude of mechanical resonator modes, with quantum noise suppression up to −2.4 dB (−3.6 dB if corrected for detection losses) and bandwidths ≲90 kHz. The multimode nature of the membrane and Fabry–Perot resonators will allow multimode entanglement involving electromagnetic, mechanical, and spin degrees of freedom. PMID:27999182
Figueroa, Priscila I; Ziman, Alyssa; Wheeler, Christine; Gornbein, Jeffrey; Monson, Michael; Calhoun, Loni
2006-09-01
To detect miscollected (wrong blood in tube [WBIT]) samples, our institution requires a second independently drawn sample (check-type [CT]) on previously untyped, non-group O patients who are likely to require transfusion. During the 17-year period addressed by this report, 94 WBIT errors were detected: 57% by comparison with a historic blood type, 7% by the CT, and 35% by other means. The CT averted 5 potential ABO-incompatible transfusions. Our corrected WBIT error rate is 1 in 3,713 for verified samples tested between 2000 and 2003, the period for which actual number of CTs performed was available. The estimated rate of WBIT for the 17-year period is 1 in 2,262 samples. ABO-incompatible transfusions due to WBIT-type errors are avoided by comparison of current blood type results with a historic type, and the CT is an effective way to create a historic type.
Haase, Steven J; Fisk, Gary D
2011-08-01
A key problem in unconscious perception research is ruling out the possibility that weak conscious awareness of stimuli might explain the results. In the present study, signal detection theory was compared with the objective threshold/strategic model as explanations of results for detection and identification sensitivity in a commonly used unconscious perception task. In the task, 64 undergraduate participants detected and identified one of four briefly displayed, visually masked letters. Identification was significantly above baseline (i.e., proportion correct > .25) at the highest detection confidence rating. This result is most consistent with signal detection theory's continuum of sensory states and serves as a possible index of conscious perception. However, there was limited support for the other model in the form of a predicted "looker's inhibition" effect, which produced identification performance that was significantly below baseline. One additional result, an interaction between the target stimulus and type of mask, raised concerns for the generality of unconscious perception effects.
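For readers unfamiliar with the signal detection theory framework referenced here, sensitivity d' is computed from hit and false-alarm rates as sketched below; the example rates are illustrative, not the study's data.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Detection sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative numbers only (not values from the study).
print(round(d_prime(0.80, 0.30), 2))   # about 1.37
```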
Position and volume estimation of atmospheric nuclear detonations from video reconstruction
NASA Astrophysics Data System (ADS)
Schmitt, Daniel T.
Recent work in digitizing films of foundational atmospheric nuclear detonations from the 1950s provides an opportunity to perform deeper analysis on these historical tests. This work leverages multi-view geometry and computer vision techniques to provide an automated means to perform three-dimensional analysis of the blasts for several points in time. The accomplishment of this requires careful alignment of the films in time, detection of features in the images, matching of features, and multi-view reconstruction. Sub-explosion features can be detected with a 67% hit rate and 22% false alarm rate. Hotspot features can be detected with a 71.95% hit rate, 86.03% precision and a 0.015% false positive rate. Detected hotspots are matched across 57-109 degree viewpoints with 76.63% average correct matching by defining their location relative to the center of the explosion, rotating them to the alternative viewpoint, and matching them collectively. When 3D reconstruction is applied to the hotspot matching it completes an automated process that has been used to create 168 3D point clouds with 31.6 points per reconstruction with each point having an accuracy of 0.62 meters with 0.35, 0.24, and 0.34 meters of accuracy in the x-, y- and z-direction respectively. As a demonstration of using the point clouds for analysis, volumes are estimated and shown to be consistent with radius-based models and in some cases improve on the level of uncertainty in the yield calculation.
Error-correcting codes in computer arithmetic.
NASA Technical Reports Server (NTRS)
Massey, J. L.; Garcia, O. N.
1972-01-01
Summary of the most important results so far obtained in the theory of coding for the correction and detection of errors in computer arithmetic. Attempts to satisfy the stringent reliability demands upon the arithmetic unit are considered, and special attention is given to attempts to incorporate redundancy into the numbers themselves which are being processed so that erroneous results can be detected and corrected.
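One classic way to embed redundancy in the numbers themselves is an AN arithmetic code; the sketch below (with A = 3) is a generic illustration of the idea, not the specific codes analyzed by the authors.

```python
# AN arithmetic code with A = 3: operands are encoded by multiplying by A, so every
# correct sum remains divisible by A, while a single-bit error adds +/- 2**k and
# breaks divisibility, which the residue check detects.
A = 3

def encode(x):  return A * x
def decode(cx): return cx // A
def check(cx):  return cx % A == 0   # residue check detects single-bit errors

a, b = encode(25), encode(17)
s = a + b                     # arithmetic is performed directly on encoded numbers
assert check(s) and decode(s) == 42

s_faulty = s ^ (1 << 4)       # flip one bit of the result
assert not check(s_faulty)    # the error is detected by the residue check
```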
30 CFR 62.174 - Follow-up corrective measures when a standard threshold shift is detected.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Follow-up corrective measures when a standard threshold shift is detected. 62.174 Section 62.174 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR UNIFORM MINE HEALTH REGULATIONS OCCUPATIONAL NOISE EXPOSURE § 62.174 Follow-up corrective measures when a standard...
30 CFR 62.174 - Follow-up corrective measures when a standard threshold shift is detected.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Follow-up corrective measures when a standard threshold shift is detected. 62.174 Section 62.174 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR UNIFORM MINE HEALTH REGULATIONS OCCUPATIONAL NOISE EXPOSURE § 62.174 Follow-up corrective measures when a standard...
30 CFR 62.174 - Follow-up corrective measures when a standard threshold shift is detected.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Follow-up corrective measures when a standard threshold shift is detected. 62.174 Section 62.174 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR UNIFORM MINE HEALTH REGULATIONS OCCUPATIONAL NOISE EXPOSURE § 62.174 Follow-up corrective measures when a standard...
30 CFR 62.174 - Follow-up corrective measures when a standard threshold shift is detected.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Follow-up corrective measures when a standard threshold shift is detected. 62.174 Section 62.174 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR UNIFORM MINE HEALTH REGULATIONS OCCUPATIONAL NOISE EXPOSURE § 62.174 Follow-up corrective measures when a standard...
NASA Astrophysics Data System (ADS)
Park, S. H.; Park, W.; Jung, H. S.
2018-04-01
Forest fires are a major natural disaster that destroys forest areas and the natural environment. In order to minimize the damage caused by a forest fire, its location and start time must be known, and continuous monitoring is required until the fire is fully extinguished. We have tried to improve the forest fire detection algorithm by using a method that reduces the variability of surrounding pixels. We focused on the fact that the forest areas of East Asia, part of the Himawari-8 AHI coverage, are mostly located in mountainous areas. The proposed method was applied to forest fire detection in Samcheok city, Korea, from May 6 to 10, 2017.
NASA Astrophysics Data System (ADS)
Zhu, Yixiao; Jiang, Mingxuan; Ruan, Xiaoke; Chen, Zeyu; Li, Chenjia; Zhang, Fan
2018-05-01
We experimentally demonstrate 6.4 Tb/s wavelength division multiplexed (WDM) direct-detection transmission based on Nyquist twin-SSB modulation over 25 km SSMF with bit error rates (BERs) below the 20% hard-decision forward error correction (HD-FEC) threshold of 1.5 × 10⁻². The two sidebands of each channel are separately detected using a Kramers-Kronig receiver without MIMO equalization. We also carry out numerical simulations to evaluate the system robustness against I/Q amplitude imbalance, I/Q phase deviation and the extinction ratio of the modulator. Furthermore, we show in simulation that the requirement of a steep-edge optical filter can be relaxed if multi-input-multi-output (MIMO) equalization between the two sidebands is used.
Fusion of ultrasonic and infrared signatures for personnel detection by a mobile robot
NASA Astrophysics Data System (ADS)
Carroll, Matthew S.; Meng, Min; Cadwallender, William K.
1992-04-01
Passive infrared sensors used for intrusion detection, especially those used on mobile robots, are vulnerable to false alarms caused by clutter objects such as radiators, steam pipes, windows, etc., as well as deliberate false alarms caused by decoy objects. To overcome these sources of false alarms, we are now combining thermal and ultrasonic signals, the result being a more robust system for detecting personnel. Our paper will discuss the fusion strategies used for combining sensor information. Our first strategy uses a statistical classifier with features such as the sonar cross-section, the received thermal energy, and ultrasonic range. Our second strategy uses a 3-layered neural classifier trained by backpropagation. The probability of correct classification and the false alarm rate for both strategies will be presented in the paper.
NASA Astrophysics Data System (ADS)
Atta, Abdu; Yahaya, Sharipah; Zain, Zakiyah; Ahmed, Zalikha
2017-11-01
The control chart is established as one of the most powerful tools in Statistical Process Control (SPC) and is widely used in industry. Conventional control charts rely on the normality assumption, which is not always satisfied by industrial data. This paper proposes a new S control chart for monitoring process dispersion using the skewness correction method for skewed distributions, named the SC-S control chart. Its performance in terms of false alarm rate is compared with various existing control charts for monitoring process dispersion, such as the scaled weighted variance S chart (SWV-S); skewness correction R chart (SC-R); weighted variance R chart (WV-R); weighted variance S chart (WV-S); and standard S chart (STD-S). A comparison with the exact S control chart with regard to the probability of out-of-control detections is also carried out. The Weibull and gamma distributions adopted in this study are assessed along with the normal distribution. The simulation study shows that the proposed SC-S control chart provides good in-control probabilities (Type I error) at almost all skewness levels and sample sizes, n. In terms of the probability of detecting a shift, the proposed SC-S chart is closer to the exact S control chart than the existing charts for skewed distributions, except for the SC-R control chart. In general, the performance of the proposed SC-S control chart is better than all the existing control charts for monitoring process dispersion with respect to both Type I error and the probability of detecting a shift.
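A simulation of the kind described can be sketched as follows: the false-alarm (Type I error) rate of a standard S chart is estimated for gamma-distributed data, with the control-chart constants computed from the usual normal-theory formulas. The subgroup size and gamma shape are arbitrary choices, and this is not the authors' simulation code.

```python
import numpy as np
from scipy.special import gamma as G

rng = np.random.default_rng(0)
n = 5                                              # subgroup size
c4 = np.sqrt(2 / (n - 1)) * G(n / 2) / G((n - 1) / 2)
B3 = max(0.0, 1 - 3 * np.sqrt(1 - c4 ** 2) / c4)   # lower control-limit factor
B4 = 1 + 3 * np.sqrt(1 - c4 ** 2) / c4             # upper control-limit factor

shape = 2.0                                        # skewed (gamma) in-control process
subgroups = rng.gamma(shape, size=(200_000, n))
s = subgroups.std(axis=1, ddof=1)                  # subgroup standard deviations
s_bar = s.mean()                                   # centre line estimated in control
alarms = (s > B4 * s_bar) | (s < B3 * s_bar)
print(f"estimated Type I error: {alarms.mean():.4f}")
```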
NASA Technical Reports Server (NTRS)
Starr, Stanley O.
1998-01-01
NASA, at the John F. Kennedy Space Center (KSC), developed and operates a unique high-precision lightning location system to provide lightning-related weather warnings. These warnings are used to stop lightning- sensitive operations such as space vehicle launches and ground operations where equipment and personnel are at risk. The data is provided to the Range Weather Operations (45th Weather Squadron, U.S. Air Force) where it is used with other meteorological data to issue weather advisories and warnings for Cape Canaveral Air Station and KSC operations. This system, called Lightning Detection and Ranging (LDAR), provides users with a graphical display in three dimensions of 66 megahertz radio frequency events generated by lightning processes. The locations of these events provide a sound basis for the prediction of lightning hazards. This document provides the basis for the design approach and data analysis for a system of radio frequency receivers to provide azimuth and elevation data for lightning pulses detected simultaneously by the LDAR system. The intent is for this direction-finding system to correct and augment the data provided by LDAR and, thereby, increase the rate of valid data and to correct or discard any invalid data. This document develops the necessary equations and algorithms, identifies sources of systematic errors and means to correct them, and analyzes the algorithms for random error. This data analysis approach is not found in the existing literature and was developed to facilitate the operation of this Short Baseline LDAR (SBLDAR). These algorithms may also be useful for other direction-finding systems using radio pulses or ultrasonic pulse data.
Meyer, Michael G.; Hayenga, Jon; Neumann, Thomas; Katdare, Rahul; Presley, Chris; Steinhauer, David; Bell, Timothy; Lancaster, Christy; Nelson, Alan C.
2015-01-01
The war against cancer has yielded important advances in the early diagnosis and treatment of certain cancer types, but the poor detection rate and 5-year survival rate for lung cancer remains little changed over the past 40 years. Early detection through emerging lung cancer screening programs promises the most reliable means of improving mortality. Sputum cytology has been tried without success because sputum contains few malignant cells that are difficult for cytologists to detect. However, research has shown that sputum contains diagnostic malignant cells and could serve as a means of lung cancer detection if those cells could be detected and correctly characterized. Recently, the National Lung Cancer Screening Trial reported that screening by three consecutive low-dose X-ray CT scans provides a 20% reduction in lung cancer mortality compared to chest X-ray. This reduction in mortality, however, comes with an unacceptable false positive rate that increases patient risks and the overall cost of lung cancer screening. This article reviews the LuCED® test for detecting early lung cancer. LuCED is based on patient sputum that is enriched for bronchial epithelial cells. The enriched sample is then processed on the Cell-CT®, which images cells in three dimensions with sub-micron resolution. Algorithms are applied to the 3D cell images to extract morphometric features that drive a classifier to identify cells that have abnormal characteristics. The final status of these candidate abnormal cells is established by the pathologist's manual review. LuCED promotes accurate cell classification which could enable cost effective detection of lung cancer. PMID:26148817
Object detection in cinematographic video sequences for automatic indexing
NASA Astrophysics Data System (ADS)
Stauder, Jurgen; Chupeau, Bertrand; Oisel, Lionel
2003-06-01
This paper presents an object detection framework applied to cinematographic post-processing of video sequences. Post-processing is done after production and before editing. At the beginning of each shot of a video, a slate (also called clapperboard) is shown. The slate notably contains an electronic audio timecode that is necessary for audio-visual synchronization. This paper presents an object detection framework to detect slates in video sequences for automatic indexing and post-processing. It is based on five steps. The first two steps aim to drastically reduce the video data to be analyzed. They ensure a high recall rate but have low precision. The first step detects images at the beginning of a shot possibly showing a slate, while the second step searches these images for candidate regions with a color distribution similar to slates. The objective is to not miss any slate while eliminating long parts of video without slate appearance. The third and fourth steps are statistical classification and pattern matching to detect and precisely locate slates in candidate regions. These steps ensure a high recall rate and high precision. The objective is to detect slates with very few false alarms to minimize interactive corrections. In a last step, electronic timecodes are read from slates to automate audio-visual synchronization. The presented slate detector has a recall rate of 89% and a precision of 97.5%. By temporal integration, well over 89% of shots in dailies are detected. By timecode coherence analysis, the precision can be raised too. Issues for future work are to accelerate the system to be faster than real-time and to extend the framework to several slate types.
Effects of ocular aberrations on contrast detection in noise.
Liang, Bo; Liu, Rong; Dai, Yun; Zhou, Jiawei; Zhou, Yifeng; Zhang, Yudong
2012-08-06
We use adaptive optics (AO) techniques to manipulate the ocular aberrations and elucidate the effects of these aberrations on contrast detection in a noisy background. The detectability of sine wave gratings at frequencies of 4, 8, and 16 cycles per degree (cpd) was measured in a standard two-interval forced-choice staircase procedure against backgrounds of various levels of white noise. The observer's ocular aberrations were either corrected with AO or left uncorrected. In low levels of external noise, contrast detection thresholds are always lowered by AO correction, whereas in high levels of external noise, they are generally elevated by AO correction. Higher levels of external noise are required to make this threshold elevation observable when signal spatial frequencies increase from 4 to 16 cpd. The linear-amplifier-model fit shows that both sampling efficiency and equivalent noise generally decrease with AO correction. Our findings indicate that ocular aberrations could be beneficial for contrast detection in high-level noise. The implications of these findings are discussed.
Hill, Benjamin David; Womble, Melissa N; Rohling, Martin L
2015-01-01
This study utilized logistic regression to determine whether performance patterns on Concussion Vital Signs (CVS) could differentiate known groups with either genuine or feigned performance. For the embedded measure development group (n = 174), clinical patients and undergraduate students categorized as feigning obtained significantly lower scores on the overall test battery mean for the CVS, Shipley-2 composite score, and California Verbal Learning Test-Second Edition subtests than did genuinely performing individuals. The final full model of 3 predictor variables (Verbal Memory immediate hits, Verbal Memory immediate correct passes, and Stroop Test complex reaction time correct) was significant and correctly classified individuals in their known group 83% of the time (sensitivity = .65; specificity = .97) in a mixed sample of young-adult clinical cases and simulators. The CVS logistic regression function was applied to a separate undergraduate college group (n = 378) that was asked to perform genuinely and identified 5% as having possibly feigned performance indicating a low false-positive rate. The failure rate was 11% and 16% at baseline cognitive testing in samples of high school and college athletes, respectively. These findings have particular relevance given the increasing use of computerized test batteries for baseline cognitive testing and return-to-play decisions after concussion.
Kleinsorge, F; Smetanay, K; Rom, J; Hörmansdörfer, C; Hörmannsdörfer, C; Scharf, A; Schmidt, P
2010-12-01
In 2008, 2,351 first trimester screenings were calculated by a newly developed internet database ( http:// www.firsttrimester.net ) to evaluate the risk for the presence of Down's syndrome. All data were evaluated by the conventional first trimester screening according to Nicolaides (FTS), based on the previous JOY Software, and by the advanced first trimester screening (AFS). After the karyotype results were received, the rates of correct positives, correct negatives, false positives and false negatives, as well as the sensitivity and specificity, were calculated and compared. Overall, 255 cases that were analysed by both methods were investigated. These included 2 cases of Down's syndrome and one case of trisomy 18. The FTS and the AFS had a sensitivity of 100%. The specificity was 88.5% for the FTS and 93.0% for the AFS. As already shown in former studies, the higher specificity of the AFS is a result of a reduction of the false positive rate (28 to 17 cases). As a consequence, the AFS, with a detection rate of 100%, decreases the rate of further invasive diagnostics in pregnant women by yielding 39% fewer positive-tested women. © Georg Thieme Verlag KG Stuttgart · New York.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bahena, A.; Villasenor, L.
We describe a simple experimental setup to measure the rate of arrival of muons at the surface of the Earth by using a single water Cerenkov detector and home-made electronics. We find a strong anti-correlation between the muon rates averaged over one-hour periods and the atmospheric pressure, with a measured pressure coefficient of -0.67% per hPa. After applying this correction we achieve sufficient sensitivity to observe long-term (hours) variations in the averaged muon rates which are greater than 2%. Forbush decreases as big as 4% have been observed with muon detectors located at similar magnetic rigidities compared to Morelia, therefore our experimental setup will detect Forbush decreases as soon as the Sun enters into a more active phase.
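The barometric correction amounts to rescaling each hourly rate by the pressure deviation from a reference value, as sketched below; the reference pressure and example numbers are illustrative, and only the -0.67% per hPa coefficient comes from the text.

```python
BETA = -0.0067          # fractional rate change per hPa (from the measured coefficient)
P_REF = 780.0           # hypothetical reference pressure at the site, hPa

def pressure_corrected_rate(rate, pressure_hpa):
    """Remove the barometric modulation from an hourly averaged muon rate."""
    return rate / (1.0 + BETA * (pressure_hpa - P_REF))

# Higher pressure suppresses the observed rate, so the corrected rate comes out higher.
print(pressure_corrected_rate(1000.0, 786.0))
```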
An intelligent subtitle detection model for locating television commercials.
Huang, Yo-Ping; Hsu, Liang-Wei; Sandnes, Frode-Eika
2007-04-01
A strategy for locating television (TV) commercials in TV programs is proposed. Based on the observation that most TV commercials do not have subtitles, the first stage exploits six subtitle constraints and an adaptive neurofuzzy inference system model to determine whether a frame contains a subtitle or not. The second stage involves locating the mark-in/mark-out points using a genetic algorithm. An interactive user interface allows users to efficiently identify and fine-tune the exact boundaries separating the commercials from the program content. Furthermore, erroneous boundaries are manually corrected. Experimental results show that the precision and recall rates both exceed 90%.
Group discussion improves lie detection
Klein, Nadav; Epley, Nicholas
2015-01-01
Groups of individuals can sometimes make more accurate judgments than the average individual could make alone. We tested whether this group advantage extends to lie detection, an exceptionally challenging judgment with accuracy rates rarely exceeding chance. In four experiments, we find that groups are consistently more accurate than individuals in distinguishing truths from lies, an effect that comes primarily from an increased ability to correctly identify when a person is lying. These experiments demonstrate that the group advantage in lie detection comes through the process of group discussion, and is not a product of aggregating individual opinions (a “wisdom-of-crowds” effect) or of altering response biases (such as reducing the “truth bias”). Interventions to improve lie detection typically focus on improving individual judgment, a costly and generally ineffective endeavor. Our findings suggest a cheap and simple synergistic approach of enabling group discussion before rendering a judgment. PMID:26015581
NASA Astrophysics Data System (ADS)
Hervo, Maxime; Poltera, Yann; Haefele, Alexander
2016-07-01
Imperfections in a lidar's overlap function lead to artefacts in the background, range and overlap-corrected lidar signals. These artefacts can erroneously be interpreted as an aerosol gradient or, in extreme cases, as a cloud base leading to false cloud detection. A correct specification of the overlap function is hence crucial in the use of automatic elastic lidars (ceilometers) for the detection of the planetary boundary layer or of low cloud. In this study, an algorithm is presented to correct such artefacts. It is based on the assumption of a homogeneous boundary layer and a correct specification of the overlap function down to a minimum range, which must be situated within the boundary layer. The strength of the algorithm lies in a sophisticated quality-check scheme which allows the reliable identification of favourable atmospheric conditions. The algorithm was applied to 2 years of data from a CHM15k ceilometer from the company Lufft. Backscatter signals corrected for background, range and overlap were compared using the overlap function provided by the manufacturer and the one corrected with the presented algorithm. Differences between corrected and uncorrected signals reached up to 45 % in the first 300 m above ground. The amplitude of the correction turned out to be temperature dependent and was larger for higher temperatures. A linear model of the correction as a function of the instrument's internal temperature was derived from the experimental data. Case studies and a statistical analysis of the strongest gradient derived from corrected signals reveal that the temperature model is capable of a high-quality correction of overlap artefacts, in particular those due to diurnal variations. The presented correction method has the potential to significantly improve the detection of the boundary layer with gradient-based methods because it removes false candidates and hence simplifies the attribution of the detected gradients to the planetary boundary layer. A particularly significant benefit can be expected for the detection of shallow stable layers typical of night-time situations. The algorithm is completely automatic and does not require any on-site intervention but requires the definition of an adequate instrument-specific configuration. It is therefore suited for use in large ceilometer networks.
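The temperature model mentioned above is, in essence, a linear fit of the overlap correction against the instrument's internal temperature; the sketch below shows the idea with synthetic numbers, not CHM15k data.

```python
import numpy as np

# Synthetic (internal temperature, overlap correction) pairs for one range gate.
internal_temp_c = np.array([18.0, 22.0, 26.0, 30.0, 34.0])
overlap_corr    = np.array([0.04, 0.07, 0.11, 0.14, 0.18])   # relative signal correction

# Fit the linear temperature model and use it to predict the correction elsewhere.
slope, intercept = np.polyfit(internal_temp_c, overlap_corr, deg=1)
predict = lambda t: slope * t + intercept
print(f"predicted correction at 28 degC: {predict(28.0):.3f}")
```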
Fault Detection and Correction for the Solar Dynamics Observatory Attitude Control System
NASA Technical Reports Server (NTRS)
Starin, Scott R.; Vess, Melissa F.; Kenney, Thomas M.; Maldonado, Manuel D.; Morgenstern, Wendy M.
2007-01-01
The Solar Dynamics Observatory is an Explorer-class mission that will launch in early 2009. The spacecraft will operate in a geosynchronous orbit, sending data 24 hours a day to a dedicated ground station in White Sands, New Mexico. It will carry a suite of instruments designed to observe the Sun in multiple wavelengths at unprecedented resolution. The Atmospheric Imaging Assembly includes four telescopes with focal plane CCDs that can image the full solar disk in four different visible wavelengths. The Extreme-ultraviolet Variability Experiment will collect time-correlated data on the activity of the Sun's corona. The Helioseismic and Magnetic Imager will enable study of pressure waves moving through the body of the Sun. The attitude control system on Solar Dynamics Observatory is responsible for four main phases of activity. The physical safety of the spacecraft after separation must be guaranteed. Fine attitude determination and control must be sufficient for instrument calibration maneuvers. The mission science mode requires 2-arcsecond control according to error signals provided by guide telescopes on the Atmospheric Imaging Assembly, one of the three instruments to be carried. Lastly, accurate execution of linear and angular momentum changes to the spacecraft must be provided for momentum management and orbit maintenance. In this paper, single-fault-tolerant fault detection and correction of the Solar Dynamics Observatory attitude control system is described. The attitude control hardware suite for the mission is catalogued, with special attention to redundancy at the hardware level. Four reaction wheels are used where any three are satisfactory. Four pairs of redundant thrusters are employed for orbit change maneuvers and momentum management. Three two-axis gyroscopes provide full redundancy for rate sensing. A digital Sun sensor and two autonomous star trackers provide two-out-of-three redundancy for fine attitude determination. The use of software to maximize chances of recovery from any hardware or software fault is detailed. A generic fault detection and correction software structure is used, allowing additions, deletions, and adjustments to fault detection and correction rules. This software structure is fed by in-line fault tests that are also able to take appropriate actions to avoid corruption of the data stream.
Using Renyi entropy to detect early cardiac autonomic neuropathy.
Cornforth, David J; Tarvainen, Mika P; Jelinek, Herbert F
2013-01-01
Cardiac Autonomic Neuropathy (CAN) is a disease that involves nerve damage leading to abnormal control of heart rate. CAN affects the correct operation of the heart and in turn leads to associated arrhythmias and heart attack. An open question is to what extent this condition is detectable by the measurement of Heart Rate Variability (HRV). An even more desirable option is to detect CAN in its early, preclinical stage, to improve treatment and outcomes. In previous work we have shown a difference in the Renyi spectrum between participants identified with well-defined CAN and controls. In this work we applied the multi-scale Renyi entropy for identification of early CAN in diabetes patients. Results suggest that Renyi entropy derived from a 20 minute, Lead-II ECG recording, forms a useful contribution to the detection of CAN even in the early stages of the disease. The positive α parameters (1 ≤ α ≤ 5) associated with the Renyi distribution indicated a significant difference (p < 0.00004) between controls and early CAN as well as definite CAN. This is a significant achievement given the simple nature of the information collected, and raises prospects of a simple screening test and improved outcomes of patients.
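As an illustration of the measure used here, the Renyi entropy of order α can be computed from a discrete distribution as sketched below; the RR-interval data and binning are synthetic assumptions, not the study's recordings.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha for a discrete distribution p (alpha != 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p.sum()
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# Build a probability distribution from (synthetic) RR intervals and evaluate the
# Renyi spectrum over several positive orders (alpha = 1 is the Shannon limit and
# is excluded here because the closed form above is undefined there).
rng = np.random.default_rng(1)
rr_intervals = rng.normal(0.85, 0.05, size=1200)        # seconds, synthetic
p, _ = np.histogram(rr_intervals, bins=30)
spectrum = {a: renyi_entropy(p, a) for a in (2, 3, 4, 5)}
print(spectrum)
```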
Detection and diagnosis of bearing and cutting tool faults using hidden Markov models
NASA Astrophysics Data System (ADS)
Boutros, Tony; Liang, Ming
2011-08-01
Over the last few decades, the research for new fault detection and diagnosis techniques in machining processes and rotating machinery has attracted increasing interest worldwide. This development was mainly stimulated by the rapid advance in industrial technologies and the increase in complexity of machining and machinery systems. In this study, the discrete hidden Markov model (HMM) is applied to detect and diagnose mechanical faults. The technique is tested and validated successfully using two scenarios: tool wear/fracture and bearing faults. In the first case the model correctly detected the state of the tool (i.e., sharp, worn, or broken) whereas in the second application, the model classified the severity of the fault seeded in two different engine bearings. The success rate obtained in our tests for fault severity classification was above 95%. In addition to the fault severity, a location index was developed to determine the fault location. This index has been applied to determine the location (inner race, ball, or outer race) of a bearing fault with an average success rate of 96%. The training time required to develop the HMMs was less than 5 s in both the monitoring cases.
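The classification scheme described above can be sketched with a scaled forward algorithm that scores an observation sequence under competing discrete HMMs; the two toy models and the quantised symbol sequence below are invented for illustration.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """log P(obs | model) for a discrete HMM, via the scaled forward algorithm."""
    alpha = start * emit[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        c = alpha.sum()            # scaling factor, avoids numerical underflow
        log_lik += np.log(c)
        alpha = alpha / c
    return log_lik

# Two toy 2-state models over 3 quantised observation symbols (illustrative only).
start = np.array([0.6, 0.4])
healthy = dict(trans=np.array([[0.9, 0.1], [0.2, 0.8]]),
               emit=np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]]))
faulty = dict(trans=np.array([[0.5, 0.5], [0.4, 0.6]]),
              emit=np.array([[0.1, 0.3, 0.6], [0.2, 0.3, 0.5]]))

sequence = [0, 0, 1, 2, 2, 2, 1, 2]   # quantised vibration features from the machine
scores = {name: forward_log_likelihood(sequence, start, m["trans"], m["emit"])
          for name, m in (("healthy", healthy), ("faulty", faulty))}
print(max(scores, key=scores.get), scores)   # pick the most likely condition
```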
Error Detection and Correction in Spelling.
ERIC Educational Resources Information Center
Lydiatt, Steve
1984-01-01
Teachers can discover students' means of dealing with spelling as a problem through investigations of their error detection and correction skills. Approaches for measuring sensitivity and bias are described, as are means of developing appropriate instructional activities. (CL)
Peteye detection and correction
NASA Astrophysics Data System (ADS)
Yen, Jonathan; Luo, Huitao; Tretter, Daniel
2007-01-01
Redeyes are caused by the camera flash light reflecting off the retina. Peteyes refer to similar artifacts in the eyes of other mammals caused by camera flash. In this paper we present a peteye removal algorithm for detecting and correcting peteye artifacts in digital images. Peteye removal for animals is significantly more difficult than redeye removal for humans, because peteyes can be any of a variety of colors, and human face detection cannot be used to localize the animal eyes. In many animals, including dogs and cats, the retina has a special reflective layer that can cause a variety of peteye colors, depending on the animal's breed, age, or fur color, etc. This makes the peteye correction more challenging. We have developed a semi-automatic algorithm for peteye removal that can detect peteyes based on the cursor position provided by the user and correct them by neutralizing the colors with glare reduction and glint retention.
Correcting STIS CCD Point-Source Spectra for CTE Loss
NASA Technical Reports Server (NTRS)
Goudfrooij, Paul; Bohlin, Ralph C.; Maiz-Apellaniz, Jesus
2006-01-01
We review the on-orbit spectroscopic observations that are being used to characterize the Charge Transfer Efficiency (CTE) of the STIS CCD in spectroscopic mode. We parameterize the CTE-related loss for spectrophotometry of point sources in terms of dependencies on the brightness of the source, the background level, the signal in the PSF outside the standard extraction box, and the time of observation. Primary constraints on our correction algorithm are provided by measurements of the CTE loss rates for simulated spectra (images of a tungsten lamp taken through slits oriented along the dispersion axis) combined with estimates of CTE losses for actual spectra of spectrophotometric standard stars in the first-order CCD modes. For point-source spectra at the standard reference position at the CCD center, CTE losses as large as 30% are corrected to within approx. 1% RMS after application of the algorithm presented here, leaving the Poisson noise associated with the source detection itself as the dominant contributor to the total flux calibration uncertainty.
Assessing the utility of the Oxford Nanopore MinION for snake venom gland cDNA sequencing.
Hargreaves, Adam D; Mulley, John F
2015-01-01
Portable DNA sequencers such as the Oxford Nanopore MinION device have the potential to be truly disruptive technologies, facilitating new approaches and analyses and, in some cases, taking sequencing out of the lab and into the field. However, the capabilities of these technologies are still being revealed. Here we show that single-molecule cDNA sequencing using the MinION accurately characterises venom toxin-encoding genes in the painted saw-scaled viper, Echis coloratus. We find the raw sequencing error rate to be around 12%, improved to 0-2% with hybrid error correction and 3% with de novo error correction. Our corrected data provides full coding sequences and 5' and 3' UTRs for 29 of 33 candidate venom toxins detected, far superior to Illumina data (13/40 complete) and Sanger-based ESTs (15/29). We suggest that, should the current pace of improvement continue, the MinION will become the default approach for cDNA sequencing in a variety of species.
Assessing the utility of the Oxford Nanopore MinION for snake venom gland cDNA sequencing
Hargreaves, Adam D.
2015-01-01
Portable DNA sequencers such as the Oxford Nanopore MinION device have the potential to be truly disruptive technologies, facilitating new approaches and analyses and, in some cases, taking sequencing out of the lab and into the field. However, the capabilities of these technologies are still being revealed. Here we show that single-molecule cDNA sequencing using the MinION accurately characterises venom toxin-encoding genes in the painted saw-scaled viper, Echis coloratus. We find the raw sequencing error rate to be around 12%, improved to 0–2% with hybrid error correction and 3% with de novo error correction. Our corrected data provides full coding sequences and 5′ and 3′ UTRs for 29 of 33 candidate venom toxins detected, far superior to Illumina data (13/40 complete) and Sanger-based ESTs (15/29). We suggest that, should the current pace of improvement continue, the MinION will become the default approach for cDNA sequencing in a variety of species. PMID:26623194
Shen, Jim K; Faaborg, Daniel; Rouse, Glenn; Kelly, Isaac; Li, Roger; Alsyouf, Muhannad; Myklak, Kristene; Distelberg, Brian; Staack, Andrea
2017-09-01
Translabial ultrasound (TUS) is a useful tool for identifying and assessing synthetic slings. This study evaluates the ability of urology trainees to learn basic pelvic anatomy and sling assessment on TUS. Eight urology trainees (six residents and two medical students) received a lecture reviewing basic anatomy and sling assessment on TUS, followed by review of two training cases. Next, they underwent a 126-question examination assessing their ability to identify anatomic planes and structures in those planes, identify the presence of slings, and assess the location and intactness of a sling. The correct response rate was compared to that of an attending radiologist experienced in reading TUS. Non-parametric tests (Fisher's exact and chi-squared tests, with Yates correction) were used for statistical analysis, with P < 0.05 considered significant. 847/1008 (84.0%) of questions were answered correctly by the eight trainees compared to 119/126 (94.4%) by the radiologist (P = 0.001). The trainees' correct response rates and Fisher's exact test P values associated with the difference in correct answers between radiologist and trainee were as follows: identification of anatomic plane (94.4%; P = 0.599), identification of structure in sagittal view (80.6%; P = 0.201), identification of structure in transverse view (88.2%; P = 0.696), presence of synthetic sling (95.8%; P = 1.000), location of sling along the urethra (71.5%; P = 0.403), intactness of sling (82.6%; P = 0.311), and laterality of sling disruption (75.0%; P = 0.076). Urology trainees can quickly learn to identify anatomic landmarks and assess slings on TUS with reasonable proficiency compared to an experienced attending radiologist. © 2017 Wiley Periodicals, Inc.
Celik, Turgay; Lee, Hwee Kuan; Petznick, Andrea; Tong, Louis
2013-01-01
Background Infrared (IR) meibography is an imaging technique to capture the Meibomian glands in the eyelids. These ocular surface structures are responsible for producing the lipid layer of the tear film which helps to reduce tear evaporation. In a normal healthy eye, the glands have similar morphological features in terms of spatial width, in-plane elongation, length. On the other hand, eyes with Meibomian gland dysfunction show visible structural irregularities that help in the diagnosis and prognosis of the disease. However, currently there is no universally accepted algorithm for detection of these image features which will be clinically useful. We aim to develop a method of automated gland segmentation which allows images to be classified. Methods A set of 131 meibography images were acquired from patients from the Singapore National Eye Center. We used a method of automated gland segmentation using Gabor wavelets. Features of the imaged glands including orientation, width, length and curvature were extracted and the IR images enhanced. The images were classified as ‘healthy’, ‘intermediate’ or ‘unhealthy’, through the use of a support vector machine classifier (SVM). Half the images were used for training the SVM and the other half for validation. Independently of this procedure, the meibographs were classified by an expert clinician into the same 3 grades. Results The algorithm correctly detected 94% and 98% of mid-line pixels of gland and inter-gland regions, respectively, on healthy images. On intermediate images, correct detection rates of 92% and 97% of mid-line pixels of gland and inter-gland regions were achieved respectively. The true positive rate of detecting healthy images was 86%, and for intermediate images, 74%. The corresponding false positive rates were 15% and 31% respectively. Using the SVM, the proposed method has 88% accuracy in classifying images into the 3 classes. The classification of images into healthy and unhealthy classes achieved a 100% accuracy, but 7/38 intermediate images were incorrectly classified. Conclusions This technique of image analysis in meibography can help clinicians to interpret the degree of gland destruction in patients with dry eye and meibomian gland dysfunction.
Optimization of the open-loop liquid crystal adaptive optics retinal imaging system
NASA Astrophysics Data System (ADS)
Kong, Ningning; Li, Chao; Xia, Mingliang; Li, Dayu; Qi, Yue; Xuan, Li
2012-02-01
An open-loop adaptive optics (AO) system for retinal imaging was constructed using a liquid crystal spatial light modulator (LC-SLM) as the wavefront compensator. Due to the dispersion of the LC-SLM, there was only one illumination source for both aberration detection and retinal imaging in this system. To increase the field of view (FOV) for retinal imaging, a modified mechanical shutter was integrated into the illumination channel to control the size of the illumination spot on the fundus. The AO loop was operated in a pulsing mode, and the fundus was illuminated twice by two laser impulses in a single AO correction loop. As a result, the FOV for retinal imaging was increased to 1.7 deg without compromising the aberration detection accuracy. The correction precision of the open-loop AO system was evaluated in a closed-loop configuration; the residual error was approximately 0.0909λ (root mean square, RMS), and the Strehl ratio reached 0.7217. Two subjects with differing degrees of myopia (-3D and -5D) were tested. High-resolution images of capillaries and photoreceptors were obtained.
Exploring "psychic transparency" during pregnancy: a mixed-methods approach.
Oriol, Cécile; Tordjman, Sylvie; Dayan, Jacques; Poulain, Patrice; Rosenblum, Ouriel; Falissard, Bruno; Dindoyal, Asha; Naudet, Florian
2016-08-12
Psychic transparency is described as a psychic crisis occurring during pregnancy. The objective was to test whether it is clinically detectable. Seven primiparous and seven nulliparous subjects were recorded during 5 min of spontaneous speech about their dreams. Twenty-five raters from five groups (psychoanalysts, psychiatrists, general practitioners, pregnant women and medical students) listened to the audiotapes. They were asked to rate the probability of the women being pregnant or not. Their ability to discriminate the primiparous women was tested. The probability of being identified correctly or not was calculated for each woman. A qualitative analysis of the speech samples was performed. No group of raters was able to correctly classify pregnant and non-pregnant women. However, the raters' choices were not completely random. The wish to be pregnant or to have a baby could be linked to a primiparous classification, whereas job priorities could be linked to a nulliparous classification. It was not possible to detect psychic transparency in this study. The wish for a child might be easier to identify. In addition, the raters' choices seemed to be connected to social representations of motherhood.
Rositch, Anne F; Nowak, Rebecca G; Gravitt, Patti E
2014-07-01
Invasive cervical cancer incidence is thought to decline in women over 65 years old, the age at which cessation of routine cervical cancer screening is recommended. However, national cervical cancer incidence rates do not account for the high prevalence of hysterectomy in the United States. Using estimates of hysterectomy prevalence from the Behavioral Risk Factor Surveillance System (BRFSS), hysterectomy-corrected age-standardized and age-specific incidence rates of cervical cancer were calculated from the Surveillance, Epidemiology, and End Results (SEER) 18 registry in the United States from 2000 to 2009. Trends in corrected cervical cancer incidence across age were analyzed using Joinpoint regression. Unlike the relative decline in uncorrected rates, corrected rates continue to increase after age 35-39 (APC(CORRECTED) = 10.43), but at a slower rate than at ages 20-34 (APC(CORRECTED) = 161.29). The highest corrected incidence was among 65- to 69-year-old women, with a rate of 27.4 cases per 100,000 women, as opposed to the highest uncorrected rate of 15.6 cases per 100,000 among women aged 40 to 44 years. Correction for hysterectomy had the largest impact on older, black women given their high prevalence of hysterectomy. Correction for hysterectomy resulted in higher age-specific cervical cancer incidence rates, a shift in the peak incidence to older women, and an increase in the disparity in cervical cancer incidence between black and white women. Given the high and nondeclining rate of cervical cancer in women over the age of 60 to 65 years, when women are eligible to exit screening, risk and screening guidelines for cervical cancer in older women may need to be reconsidered. © 2014 American Cancer Society.
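The correction itself is simple arithmetic: women who have had a hysterectomy are removed from the population at risk, so the corrected rate uses a smaller denominator. A minimal sketch with illustrative numbers (not BRFSS or SEER values):

```python
def corrected_incidence(cases, population, hysterectomy_prevalence, per=100_000):
    """Incidence rate after removing hysterectomized women from the denominator."""
    at_risk = population * (1.0 - hysterectomy_prevalence)
    return cases / at_risk * per

# The same case count yields a much higher rate once a high hysterectomy
# prevalence (as in older women) is accounted for.
print(corrected_incidence(cases=150, population=1_000_000, hysterectomy_prevalence=0.0))
print(corrected_incidence(cases=150, population=1_000_000, hysterectomy_prevalence=0.45))
```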
Correcting for sequencing error in maximum likelihood phylogeny inference.
Kuhner, Mary K; McGill, James
2014-11-04
Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
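One standard way such a correction can enter a likelihood calculation (a sketch of the general idea, not necessarily the exact parameterization tested by the authors) is to replace the usual 0/1 tip partial likelihoods with the probability of the observed base given each possible true base and an assumed per-base error rate ε:

```python
import numpy as np

BASES = "ACGT"

def tip_partials(observed_base, error_rate):
    """P(observed base | true base) for each true base, assuming sequencing errors
    are equally likely to produce any of the three other bases."""
    partials = np.full(4, error_rate / 3.0)
    partials[BASES.index(observed_base)] = 1.0 - error_rate
    return partials

# Without error correction the tip vector for an observed 'A' would be [1, 0, 0, 0];
# with an assumed 1% error rate it becomes:
print(tip_partials("A", 0.01))
```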
NASA Astrophysics Data System (ADS)
Hu, Yu-chi; Xiong, Jian-ping; Cohan, Gilad; Zaider, Marco; Mageras, Gig; Zelefsky, Michael
2013-03-01
A fast knowledge-based radioactive seed localization method for brachytherapy was developed to automatically localize radioactive seeds in an intraoperative volumetric cone beam CT (CBCT) so that corrections, if needed, can be made during prostate implant surgery. A transrectal ultrasound (TRUS) scan is acquired for intraoperative treatment planning. Planned seed positions are transferred to the intraoperative CBCT following TRUS-to-CBCT registration using a reference CBCT scan of the TRUS probe as a template, in which the probe and its external fiducial markers are pre-segmented and their positions in TRUS are known. The transferred planned seeds and probe serve as an atlas to reduce the search space in CBCT. Candidate seed voxels are identified based on image intensity. Regions are grown from candidate voxels and overlapping regions are merged. Region volume and intensity variance are checked against the known seed volume and intensity profile. Regions meeting the above criteria are flagged as detected seeds; otherwise they are flagged as likely seeds and sorted by a score that is based on volume, intensity profile and distance to the closest planned seed. A graphical interface allows users to review and accept or reject likely seeds. Likely seeds with approximately twice the seed volume are automatically split. Five clinical cases were tested. Without any manual correction in seed detection, the method performed the localization in 5 seconds (excluding registration time) for a CBCT scan with 512×512×192 voxels. The average precision rate per case is 99% and the recall rate is 96% for a total of 416 seeds. All false negative seeds are found with 15 in likely seeds and 1 included in a detected seed. With the new method, the dose distribution can be recalculated during the procedure, thus facilitating evaluation and improvement of treatment quality.
NASA Astrophysics Data System (ADS)
Polycarpou, Irene; Tsoumpas, Charalampos; King, Andrew P.; Marsden, Paul K.
2014-02-01
The aim of this study is to investigate the impact of respiratory motion correction and spatial resolution on lesion detectability in PET as a function of lesion size and tracer uptake. Real respiratory signals describing different breathing types are combined with a motion model formed from real dynamic MR data to simulate multiple dynamic PET datasets acquired from a continuously moving subject. Lung and liver lesions were simulated with diameters ranging from 6 to 12 mm and lesion to background ratio ranging from 3:1 to 6:1. Projection data for 6 and 3 mm PET scanner resolution were generated using analytic simulations and reconstructed without and with motion correction. Motion correction was achieved using motion compensated image reconstruction. The detectability performance was quantified by a receiver operating characteristic (ROC) analysis obtained using a channelized Hotelling observer and the area under the ROC curve (AUC) was calculated as the figure of merit. The results indicate that respiratory motion limits the detectability of lung and liver lesions, depending on the variation of the breathing cycle length and amplitude. Patients with large quiescent periods had a greater AUC than patients with regular breathing cycles and patients with long-term variability in respiratory cycle or higher motion amplitude. In addition, small (less than 10 mm diameter) or low contrast (3:1) lesions showed the greatest improvement in AUC as a result of applying motion correction. In particular, after applying motion correction the AUC is improved by up to 42% with current PET resolution (i.e. 6 mm) and up to 51% for higher PET resolution (i.e. 3 mm). Finally, the benefit of increasing the scanner resolution is small unless motion correction is applied. This investigation indicates high impact of respiratory motion correction on lesion detectability in PET and highlights the importance of motion correction in order to benefit from the increased resolution of future PET scanners.
ITER Side Correction Coil Quench model and analysis
NASA Astrophysics Data System (ADS)
Nicollet, S.; Bessette, D.; Ciazynski, D.; Duchateau, J. L.; Gauthier, F.; Lacroix, B.
2016-12-01
Previous thermohydraulic studies performed for the ITER TF, CS and PF magnet systems have brought some important information on the detection and consequences of a quench as a function of the initial conditions (deposited energy, heated length). Even if the temperature margin of the Correction Coils is high, their behavior during a quench should also be studied, since a quench is likely to be triggered by potential anomalies in joints, a ground fault on the instrumentation wires, etc. A model has been developed with the SuperMagnet Code (Bagnasco et al., 2010) for a Side Correction Coil (SCC2) with four pancakes cooled in parallel, each of them represented by a Thea module (with the proper Cable In Conduit Conductor characteristics). All the other coils of the PF cooling loop, which are hydraulically connected in parallel (top/bottom correction coils and six Poloidal Field Coils), are modeled by Flower modules with equivalent hydraulic properties. The model and the analysis results are presented for five quench initiation cases with/without fast discharge: two quenches initiated by a heat input to the innermost turn of one pancake (case 1 and case 2) and two other quenches initiated at the innermost turns of four pancakes (case 3 and case 4). In the 5th case, the quench is initiated at the middle turn of one pancake. The impact on the cooling circuit, e.g. the exceedance of the opening pressure of the quench relief valves, is detailed in case of an undetected quench (i.e. no discharge of the magnet). Particular attention is also paid to a possible secondary quench detection system based on measured thermohydraulic signals (pressure, temperature and/or helium mass flow rate). The maximum cable temperature achieved in case of a fast current discharge (primary detection by voltage) is compared to the design hot spot criterion of 150 K, which includes the contribution of helium and jacket.
Qian, Bang-ping; Mao, Sai-hu; Zhu, Ze-zhang; Zhu, Feng; Liu, Zhen; Xu, Lei-lei; Wang, Bing; Yu, Yang; Qiu, Yong
2013-09-01
A computed tomography study. To identify the best scoliotic deformity components that show impact upon the spontaneous postoperative modulation of the deformed anterior chest wall contour in right convex thoracic adolescent idiopathic scoliosis. Spontaneous postoperative aggravation of the anterior concave costal projection was a common occurrence in adolescent idiopathic scoliosis, yet the risk factors that effectively bridged the gap between what the surgeons did in the interior and how the rib cages reacted on the exterior were still open to debate. Pre- and postoperative computed tomographic scans of 77 patients with right convex thoracic adolescent idiopathic scoliosis were retrieved and analyzed. According to the postoperative variation of anterior chest wall angle (CWA), the patients were divided into 2 groups with either aggravated or improved CWA. Multiple scoliotic deformity parameters and their surgical correction rates were evaluated, correlated, and then compared between the 2 groups. Moreover, patients with apex located at T9 were isolated and evaluated independently. A logistic regression analysis was used to determine the independent predictors of the spontaneous postoperative modulation of the anterior chest wall contour. The surgical correction rate of Cobb angle (supine), the rotational angle with respect to the sagittal plane (RAsag angle), the rotational angle with respect to the anterior midline of the body (RAml angle), the angle of lateral deviation of the apical vertebrae from the midline (MLdev angle), the posterior hemithorax ratio, the vertebral translation (VT), and the thoracic rotation averaged 64.6%, 19.5%, 30.8%, 39.2%, 15.0%, 41.2%, and 28.7%, respectively. Ratio of aggravated anterior chest wall contour was the highest at the T7 apex group (84.6%) as compared with T8 apex group (47.1%), T9 apex group (19.5%), and T10 apex group (0.0%). The preoperative CWA was significantly lower in the aggravated CWA group when compared with the improved group (2.1 ± 1.8°vs. 6.6 ± 2.4°, P < 0.001). Besides, in the aggravated CWA group, significantly greater surgical correction of VT and lesser correction of RAsag angle were demonstrated when compared with the improved CWA group (VT: 53.0% vs. 34.8%, P = 0.001; RAsag: 2.5% vs. 28.7%, P = 0.000). In the T9 subgroup, remarkably different correction rate of VT and RAsag were similarly observed (VT: 54.9% vs. 35.3%, P = 0.046; RAsag: 4.9% vs. 23.5%, P = 0.034). In terms of other deformity parameters, no significantly different correction rate was consistently detected. In the logistic regression analysis, apex location, CWA, and correction rate of RAsag were demonstrated to be independent factors predictive of the alteration of chest wall contour. In addition to the smaller preoperative CWA and higher apex location, lesser correction of vertebral rotation, if accompanied by great surgical correction of apical VT, could also largely result in a poor postoperative anterior chest wall contour.
Method for detection and correction of errors in speech pitch period estimates
NASA Technical Reports Server (NTRS)
Bhaskar, Udaya (Inventor)
1989-01-01
A method of detecting and correcting received values of a pitch period estimate of a speech signal for use in a speech coder or the like. An average is calculated of the nonzero values of the received pitch period estimates since the previous reset. If a current pitch period estimate is within a range of 0.75 to 1.25 times the average, it is assumed correct; if not, a correction process is carried out. If correction is required successively for more than a preset number of times, which will most likely occur when the speaker changes, the average is discarded and a new average is calculated.
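A direct transcription of the rule described above into code might look like the following sketch; the substitution used when an estimate is rejected and the reset threshold are assumptions where the abstract leaves details open.

```python
def correct_pitch_estimates(estimates, low=0.75, high=1.25, max_corrections=4):
    """Detect and correct outlying pitch period estimates.

    Keeps a running average of accepted non-zero estimates; values outside
    [low*avg, high*avg] are replaced by the average (assumed correction).
    After more than `max_corrections` consecutive corrections the average is
    reset, on the assumption that the speaker has changed.
    """
    corrected = []
    accepted = []          # non-zero values accepted since the last reset
    consecutive = 0
    for value in estimates:
        if value == 0:                          # unvoiced frame: pass through
            corrected.append(value)
            continue
        if not accepted:
            accepted.append(value)
            corrected.append(value)
            consecutive = 0
            continue
        avg = sum(accepted) / len(accepted)
        if low * avg <= value <= high * avg:
            accepted.append(value)
            corrected.append(value)
            consecutive = 0
        else:
            consecutive += 1
            if consecutive > max_corrections:   # likely a new speaker: restart
                accepted = [value]
                corrected.append(value)
                consecutive = 0
            else:
                corrected.append(avg)           # assumed correction: use the average
    return corrected

print(correct_pitch_estimates([80, 82, 81, 160, 83, 0, 84, 40, 41, 40, 42, 41, 40]))
```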
Shahbeig, Saleh; Pourghassem, Hossein
2013-01-01
Optic disc or optic nerve (ON) head extraction in retinal images has widespread applications in retinal disease diagnosis and human identification in biometric systems. This paper introduces a fast and automatic algorithm for detecting and extracting the ON region accurately from retinal images without the use of blood-vessel information. In this algorithm, to compensate for the destructive changes of the illumination and also enhance the contrast of the retinal images, we estimate the background illumination and apply an adaptive correction function to the curvelet transform coefficients of the retinal images. In other words, we eliminate the degrading factors and pave the way to extracting the ON region exactly. Then, we detect the ON region from the retinal images using morphology operators based on geodesic conversions, by applying a proper adaptive correction function to the reconstructed image's curvelet transform coefficients and a novel, powerful criterion. Finally, using local thresholding on the detected area of the retinal images, we extract the ON region. The proposed algorithm is evaluated on the available images of the DRIVE and STARE databases. The experimental results indicate that the proposed algorithm obtains accuracy rates of 100% and 97.53% for ON extraction on the DRIVE and STARE databases, respectively.
77 FR 37421 - Reimbursement Rates for Calendar Year 2012 Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-21
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Indian Health Service Reimbursement Rates for Calendar Year 2012 Correction AGENCY: Indian Health Service, HHS. ACTION: Notice; correction. SUMMARY: The Indian Health Service published a document in the Federal Register on June 6, 2012, concerning rates for...
Equipment for neutron measurements at VR-1 Sparrow training reactor.
Kolros, Antonin; Huml, Ondrej; Kríz, Martin; Kos, Josef
2010-01-01
The VR-1 Sparrow reactor is an experimental nuclear facility for training, student education and teaching purposes. It serves as an educational platform for basic experiments in reactor physics and dosimetry. The aim of this article is to describe the features of the new EMK310 experimental equipment and its possibilities for neutron detection with different gas-filled detectors at the VR-1 reactor. Typical attributes of the EMK310 equipment include precise set-up, simple control, resistance to electromagnetic interference, high throughput (counting rate), versatility and remote controllability. Methods for non-linearity correction of the pulse neutron detection system and a reactimeter application are presented. Copyright 2009. Published by Elsevier Ltd.
Paroxysmal atrial fibrillation recognition based on multi-scale Rényi entropy of ECG.
Xin, Yi; Zhao, Yizhang; Mu, Yuanhui; Li, Qin; Shi, Caicheng
2017-07-20
Atrial fibrillation (AF) is a common type of arrhythmia, which has a high morbidity and can lead to serious complications. The ability to detect and in turn prevent AF is extremely significant to the patient and clinician. The primary objective of this study is to use the ECG to detect AF and to develop a robust and effective algorithm for doing so. Some studies show that after AF occurs, the regulatory mechanisms of the vagus and sympathetic nerves change, and the R-R intervals become highly irregular. After studying the physiological mechanism of AF, we calculate the Rényi entropy of the wavelet coefficients of heart rate variability (HRV) in order to measure the complexity of paroxysmal atrial fibrillation (PAF) signals and to extract their multi-scale features. The data used in this study are obtained from the MIT-BIH PAF Prediction Challenge Database, and the correct rate in classifying PAF patients versus normal subjects is 92.48%. The results of this experiment show that AF can be detected using this method, which in turn can support clinical diagnosis.
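A rough sketch of the feature-extraction step described above, assuming PyWavelets is available; the wavelet family, decomposition depth, and histogram binning are placeholders rather than the authors' settings.

```python
import numpy as np
import pywt

def renyi(p, alpha):
    """Renyi entropy of order alpha for a discrete distribution p."""
    p = p[p > 0]
    if alpha == 1:
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def hrv_wavelet_renyi(rr_intervals, alpha=2, wavelet="db4", level=4, bins=20):
    """Renyi entropy of the wavelet coefficients of an HRV (R-R interval) series,
    one value per decomposition level, used as a multi-scale feature vector."""
    coeffs = pywt.wavedec(np.asarray(rr_intervals, dtype=float), wavelet, level=level)
    features = []
    for c in coeffs:
        counts, _ = np.histogram(c, bins=bins)
        features.append(renyi(counts / counts.sum(), alpha))
    return features

rr = 0.8 + 0.05 * np.random.randn(512)      # stand-in for a real R-R series
print(hrv_wavelet_renyi(rr))
```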
Continuous operation of four-state continuous-variable quantum key distribution system
NASA Astrophysics Data System (ADS)
Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Ichikawa, Tsubasa; Hirano, Takuya; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro
2016-10-01
We report on the development of a continuous-variable quantum key distribution (CV-QKD) system based on discrete quadrature amplitude modulation (QAM) and homodyne detection of coherent states of light. We use a pulsed light source whose wavelength is 1550 nm and whose repetition rate is 10 MHz. The CV-QKD system can continuously generate a secret key that is secure against an entangling cloner attack. The key generation rate is 50 kbps when the quantum channel is a 10 km optical fiber. The CV-QKD system we have developed utilizes the four-state and post-selection protocol [T. Hirano, et al., Phys. Rev. A 68, 042331 (2003)]; Alice randomly sends one of four states {|±α⟩, |±iα⟩}, and Bob randomly performs an x- or p-measurement by homodyne detection. A commercially available balanced receiver is used to realize shot-noise-limited pulsed homodyne detection. GPU cards are used to accelerate the software-based post-processing. We use a non-binary LDPC code for error correction (reverse reconciliation) and Toeplitz matrix multiplication for privacy amplification.
NASA Astrophysics Data System (ADS)
Roche, Nathan; Franzetti, Paolo; Garilli, Bianca; Zamorani, Giovanni; Cimatti, Andrea; Rossetti, Emanuel
2012-02-01
We investigate the prospects of extending observations of high-redshift quasi-stellar objects (QSOs) from the current z ~ 7 to z > 8 by means of a very wide-area near-infrared slitless spectroscopic survey, considering as an example the planned survey with the European Space Agency's Euclid telescope (scheduled for a 2019 launch). For any QSOs at z > 8.06, the strong Lyman α line will enter the wavelength range of the Euclid Near-Infrared Spectrometer and Imaging Photometer (NISP). We perform a detailed simulation of Euclid NISP slitless spectroscopy (with the parameters of the wide survey) in an artificial field containing QSO spectra at all redshifts up to z = 12 and to a faint limit of H = 22.5. QSO spectra are represented with a template based on a Sloan Digital Sky Survey composite spectrum, with the added effects of absorption from neutral hydrogen in the intergalactic medium. The spectra extracted from the simulation are analysed with an automated redshift finder, and a detection rate estimated as a function of H magnitude and redshift (defined as the proportion of spectra with both correct redshift measurements and classifications). We show that, as expected, spectroscopic identification of QSOs would reach deeper limits for the redshift ranges where either Hα (0.67 < z < 2.05) or Lyman α (z > 8.06) is visible. Furthermore, if photometrically selected z > 8 spectra can be re-examined and refitted to minimize the effects of spectral contamination, the QSO detection rate in the Lyman α window will be increased by an estimated ~60 per cent and will then be better here than at any other redshift, with an effective limit H ≃ 21.5. With an extrapolated rate of QSO evolution, we predict that the Euclid wide (15 000 deg²) spectroscopic survey will identify and measure spectroscopic redshifts for a total of 20-35 QSOs at z > 8.06 (reduced slightly to 19-33 if we apply a small correction for missed weak-lined QSOs). However, for a model with a faster rate of evolution, this prediction goes down to four or five. In any event, the survey will give important constraints on the evolution of QSOs at z > 8 and therefore on the formation of the first supermassive black holes. The z > 8.06 detections would be very luminous objects (with MB = -26 to -28) and many would also be detectable by the proposed Wide Field X-ray Telescope.
Murchie, P; Chowdhury, A; Smith, S; Campbell, N C; Lee, A J; Linden, D; Burton, C D
2015-05-26
Publicly available data show variation in GPs' use of urgent suspected cancer (USC) referral pathways. We investigated whether this could be due to small numbers of cancer cases and random case-mix, rather than due to true variation in performance. We analysed individual GP practice USC referral detection rates (proportion of the practice's cancer cases that are detected via USC) and conversion rates (proportion of the practice's USC referrals that prove to be cancer) in routinely collected data from GP practices in all of England (over 4 years) and northeast Scotland (over 7 years). We explored the effect of pooling data. We then modelled the effects of adding random case-mix to practice variation. Correlations between practice detection rate and conversion rate became less positive when data were aggregated over several years. Adding random case-mix to between-practice variation indicated that the median proportion of poorly performing practices correctly identified after 25 cancer cases were examined was 20% (IQR 17 to 24) and after 100 cases was 44% (IQR 40 to 47). Much apparent variation in GPs' use of suspected cancer referral pathways can be attributed to random case-mix. The methods currently used to assess the quality of GP-suspected cancer referral performance, and to compare individual practices, are misleading. These should no longer be used, and more appropriate and robust methods should be developed.
Gulliver, Kristina; Yoder, Bradley A
2018-05-09
To determine the effect of altitude correction on bronchopulmonary dysplasia (BPD) rates and to assess the validity of the NICHD "Neonatal BPD Outcome Estimator" for predicting BPD with and without altitude correction. This retrospective analysis included neonates born at <30 weeks gestational age (GA) between 2010 and 2016. "Effective" FiO2 requirements were determined at 36 weeks corrected GA. Altitude correction was performed via the ratio of barometric pressure (BP) in our unit to sea-level BP. The probability of death and/or moderate-to-severe BPD was calculated using the NICHD BPD Outcome Estimator. Five hundred and sixty-one infants were included. The rate of moderate-to-severe BPD decreased from 71 to 40% following altitude correction. Receiver-operating characteristic curves indicated high predictability of the BPD Outcome Estimator for the altitude-corrected moderate-to-severe BPD diagnosis. Correction for altitude reduced the moderate-to-severe BPD rate by almost 50%, to a rate consistent with recent published values. The NICHD BPD Outcome Estimator is a valid tool for predicting the risk of moderate-to-severe BPD following altitude correction.
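The altitude correction described above amounts to scaling the measured oxygen requirement by the ratio of local to sea-level barometric pressure; a minimal sketch, in which the local barometric pressure value is an illustrative assumption rather than the unit's measured value:

```python
def effective_fio2(measured_fio2, local_bp_mmhg, sea_level_bp_mmhg=760.0):
    """Sea-level-equivalent FiO2 given the local barometric pressure."""
    return measured_fio2 * local_bp_mmhg / sea_level_bp_mmhg

# e.g. an infant on 0.30 FiO2 at a unit where barometric pressure is ~640 mmHg
print(effective_fio2(0.30, 640.0))   # ~0.25 effective FiO2 at sea level
```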
Parallel Low-Loss Measurement of Multiple Atomic Qubits
NASA Astrophysics Data System (ADS)
Kwon, Minho; Ebert, Matthew F.; Walker, Thad G.; Saffman, M.
2017-11-01
We demonstrate low-loss measurement of the hyperfine ground state of rubidium atoms by state dependent fluorescence detection in a dipole trap array of five sites. The presence of atoms and their internal states are minimally altered by utilizing circularly polarized probe light and a strictly controlled quantization axis. We achieve mean state detection fidelity of 97% without correcting for imperfect state preparation or background losses, and 98.7% when corrected. After state detection and correction for background losses, the probability of atom loss due to the state measurement is <2 % and the initial hyperfine state is preserved with >98 % probability.
NASA Astrophysics Data System (ADS)
Enderlein, Joerg; Ruhlandt, Daja; Chithik, Anna; Ebrecht, René; Wouters, Fred S.; Gregor, Ingo
2016-02-01
Fluorescence lifetime imaging microscopy (FLIM) has become an important method of bioimaging, allowing not only intensity and spectral information but also lifetime information to be recorded across an image. One of the most widely used methods of FLIM is based on Time-Correlated Single Photon Counting (TCSPC). In TCSPC, one determines the fluorescence decay curve by exciting molecules with a periodic train of short laser pulses and then measuring the time delay between each exciting laser pulse and the first fluorescence photon recorded after it. An important technical detail of TCSPC measurements is the fact that the delay times between excitation laser pulses and resulting fluorescence photons are always measured between a laser pulse and the first fluorescence photon which is detected after that pulse. At high count rates, this leads to so-called pile-up: "early" photons eclipse long-delay photons, resulting in heavily skewed TCSPC histograms. To avoid pile-up, a rule of thumb is to perform TCSPC measurements at photon count rates which are at least a hundred times smaller than the laser-pulse excitation rate. The downside of this approach is that the fluorescence-photon count rate is restricted to a value below one hundredth of the laser-pulse excitation rate, reducing the overall speed with which a fluorescence signal can be measured. We present a new data evaluation method which provides pile-up-corrected fluorescence decay estimates from TCSPC measurements at high count rates, and we demonstrate our method on FLIM of fluorescently labeled cells.
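For context, a classical single-photon pile-up correction (often attributed to Coates) estimates, channel by channel, how many photons were lost because an earlier photon in the same excitation cycle was recorded first. The sketch below illustrates only that textbook-style idea under the stated assumptions; it is not the estimator developed in this work.

```python
import numpy as np

def coates_style_correction(histogram, n_excitation_cycles):
    """Pile-up-corrected expected counts per TCSPC channel.

    histogram           : recorded counts per time bin (first-photon histogram)
    n_excitation_cycles : total number of laser pulses during the measurement
    Assumes at most one detected photon per cycle and counts well below the
    number of still-available cycles in every bin.
    """
    k = np.asarray(histogram, dtype=float)
    corrected = np.empty_like(k)
    seen_before = 0.0                      # photons already recorded in earlier bins
    for i, k_i in enumerate(k):
        available = n_excitation_cycles - seen_before
        corrected[i] = -n_excitation_cycles * np.log(1.0 - k_i / available)
        seen_before += k_i
    return corrected

# At ~15% of the pulse rate the late bins receive a visible upward correction.
hist = np.array([5000, 4000, 3000, 2000, 1000])
print(coates_style_correction(hist, n_excitation_cycles=100_000))
```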
NASA Astrophysics Data System (ADS)
Gawlitza, Josephin; Reiss-Zimmermann, Martin; Thörmer, Gregor; Schaudinn, Alexander; Linder, Nicolas; Garnov, Nikita; Horn, Lars-Christian; Minh, Do Hoang; Ganzer, Roman; Stolzenburg, Jens-Uwe; Kahn, Thomas; Moche, Michael; Busse, Harald
2017-02-01
This work aims to assess the impact of an additional endorectal coil on image quality and cancer detection rate within the same patients. At a single academic medical center, this transversal study included 41 men who underwent T2- and diffusion-weighted imaging at 3 T using surface coils only or in combination with an endorectal coil in the same session. Two blinded readers (A and B) randomly evaluated all image data in separate sessions. Image quality with respect to localization and staging was rated on a five-point scale. Lesions were classified according to their prostate imaging reporting and data system (PIRADS) score version 1. Standard of reference was provided by whole-mount step-section analysis. Mean image quality scores averaged over all localization-related items were significantly higher with additional endorectal coil for both readers (p < 0.001), corresponding staging-related items were only higher for reader B (p < 0.001). With an endorectal coil, the rate of correctly detecting cancer per patient was significantly higher for reader B (p < 0.001) but not for reader A (p = 0.219). The numbers of histologically confirmed tumor lesions were rather similar for both settings. The subjectively rated 3-T image quality was improved with an endorectal coil. In terms of diagnostic performance, the use of an additional endorectal coil was not superior.
Gawlitza, Josephin; Reiss-Zimmermann, Martin; Thörmer, Gregor; Schaudinn, Alexander; Linder, Nicolas; Garnov, Nikita; Horn, Lars-Christian; Minh, Do Hoang; Ganzer, Roman; Stolzenburg, Jens-Uwe; Kahn, Thomas; Moche, Michael; Busse, Harald
2017-01-01
This work aims to assess the impact of an additional endorectal coil on image quality and cancer detection rate within the same patients. At a single academic medical center, this transversal study included 41 men who underwent T2- and diffusion-weighted imaging at 3 T using surface coils only or in combination with an endorectal coil in the same session. Two blinded readers (A and B) randomly evaluated all image data in separate sessions. Image quality with respect to localization and staging was rated on a five-point scale. Lesions were classified according to their prostate imaging reporting and data system (PIRADS) score version 1. Standard of reference was provided by whole-mount step-section analysis. Mean image quality scores averaged over all localization-related items were significantly higher with additional endorectal coil for both readers (p < 0.001), corresponding staging-related items were only higher for reader B (p < 0.001). With an endorectal coil, the rate of correctly detecting cancer per patient was significantly higher for reader B (p < 0.001) but not for reader A (p = 0.219). The numbers of histologically confirmed tumor lesions were rather similar for both settings. The subjectively rated 3-T image quality was improved with an endorectal coil. In terms of diagnostic performance, the use of an additional endorectal coil was not superior. PMID:28145525
Luo, Jiebo; Boutell, Matthew
2005-05-01
Automatic image orientation detection for natural images is a useful, yet challenging research topic. Humans use scene context and semantic object recognition to identify the correct image orientation. However, it is difficult for a computer to perform the task in the same way because current object recognition algorithms are extremely limited in their scope and robustness. As a result, existing orientation detection methods were built upon low-level vision features such as spatial distributions of color and texture. Discrepant detection rates have been reported for these methods in the literature. We have developed a probabilistic approach to image orientation detection via confidence-based integration of low-level and semantic cues within a Bayesian framework. Our current accuracy is 90 percent for unconstrained consumer photos, impressive given the findings of a psychophysical study conducted recently. The proposed framework is an attempt to bridge the gap between computer and human vision systems and is applicable to other problems involving semantic scene content understanding.
LEA Detection and Tracking Method for Color-Independent Visual-MIMO
Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo
2016-01-01
Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is deteriorated by light emitting array (LEA) detection and tracking errors in the received image because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement. PMID:27384563
LEA Detection and Tracking Method for Color-Independent Visual-MIMO.
Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo
2016-07-02
Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is deteriorated by light emitting array (LEA) detection and tracking errors in the received image because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement.
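A compressed sketch of the detection-and-tracking pipeline outlined above, assuming OpenCV is available; the ROI handling, Harris parameters, and constant-velocity motion model are illustrative assumptions rather than the authors' settings, and the perspective-correction step is omitted.

```python
import cv2
import numpy as np

def detect_lea_centroid(frame_bgr, roi_mask):
    """Harris-corner-based LEA detection restricted to a colour-space ROI mask."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = np.float32(gray) * (roi_mask > 0).astype(np.float32)
    response = cv2.cornerHarris(gray, 2, 3, 0.04)        # blockSize=2, ksize=3, k=0.04
    ys, xs = np.where(response > 0.01 * response.max())
    if len(xs) == 0:
        return None                                      # LEA not found in this frame
    return np.array([xs.mean(), ys.mean()])              # centroid of corner pixels

class ConstantVelocityKalman:
    """Tiny 2-D constant-velocity Kalman filter for tracking the LEA centroid."""
    def __init__(self, q=1e-2, r=1.0):
        self.x = np.zeros(4)                              # state: [x, y, vx, vy]
        self.P = np.eye(4) * 100.0
        self.F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * q
        self.R = np.eye(2) * r

    def step(self, measurement):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update only when the LEA was detected in this frame
        if measurement is not None:
            y = measurement - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                                 # predicted LEA position
```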
Swift J1822.3-1606: pre-outburst ROSAT limits (plus erratum)
NASA Astrophysics Data System (ADS)
Esposito, P.; Rea, N.; Israel, G. L.; Tieng, A.
2011-07-01
We report on a pre-outburst ROSAT PSPC observation of the new SGR discovered by Swift-BAT on 2011 July 14 (Cummings et al. Atel #3488). The PSPC observation was performed on 1993 September 12 for ~6.7ks. We find a source at: RA (2000) = 18 22 18.1 and Dec (2000)= -16 04 26.4, with a 5sigma detection significance. The count-rate (corrected for the PSPC PSF, sampling dead time, and vignetting) is about 0.012 counts/s.
Investigations of internal noise levels for different target sizes, contrasts, and noise structures
NASA Astrophysics Data System (ADS)
Han, Minah; Choi, Shinkook; Baek, Jongduk
2014-03-01
To describe internal noise levels for different target sizes, contrasts, and noise structures, Gaussian targets with four different sizes (i.e., standard deviations of 2, 4, 6, and 8) and three different noise structures (i.e., white, low-pass, and high-pass) were generated. The generated noise images were scaled to have a standard deviation of 0.15. For each noise type, target contrasts were adjusted to have the same detectability based on a non-prewhitening (NPW) observer, and the detectability of a channelized Hotelling observer (CHO) was calculated accordingly. For the human observer study, three trained observers performed two-alternative forced-choice (2AFC) detection tasks, and the proportion of correct responses, Pc, was calculated for each task. By adding an appropriate internal noise level to the numerical observers (i.e., NPW and CHO), the detectability of the human observers was matched with that of the numerical observers. Even though target contrasts were adjusted to give the same detectability for the NPW observer, the detectability of the human observers decreased as the target size increased. The internal noise level varies with target size, contrast, and noise structure, demonstrating that different internal noise levels should be used in numerical observers to predict the detection performance of human observers.
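For context, the standard link between detectability and proportion correct in a 2AFC task, and the usual way an additive internal-noise term degrades a model observer's detectability, can be sketched as follows; this is a textbook-style illustration, not the specific observer models used in the study.

```python
import numpy as np
from scipy.stats import norm

def pc_2afc(d_prime):
    """Proportion correct in a two-alternative forced-choice task."""
    return norm.cdf(d_prime / np.sqrt(2.0))

def d_prime_with_internal_noise(d_prime, internal_to_external_variance_ratio):
    """Model-observer detectability degraded by additive internal noise."""
    return d_prime / np.sqrt(1.0 + internal_to_external_variance_ratio)

d = 2.0
print(pc_2afc(d))                                    # noiseless model observer Pc
print(pc_2afc(d_prime_with_internal_noise(d, 1.0)))  # Pc after adding internal noise
```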
Gonzales, Gustavo F; Tapia, Vilma; Gasco, Manuel
2014-07-01
To determine whether correction of the haemoglobin cut-offs used to define anaemia at high altitudes affects rates of adverse perinatal outcomes. Data were obtained from 161,909 mothers and newborns whose births occurred between 1,000 and 4,500 m above sea level (masl). Anaemia was defined as haemoglobin (Hb) <11 g/dL, with or without correction of Hb for altitude. Correction of haemoglobin for altitude was performed according to guidelines from the World Health Organization. Rates of stillbirths and preterm births were also calculated. Stillbirth and preterm rates were significantly lower in cases of anaemia defined after correction of haemoglobin for altitude than in cases defined without Hb correction. At high altitudes (3,000-4,500 masl), after Hb correction, the rate of stillbirths was reduced from 37.7 to 18.3 per 1,000 live births (p < 0.01); similarly, preterm birth rates were reduced from 13.1 to 8.76% (p < 0.01). The odds ratios for stillbirths and for preterm births were also reduced after haemoglobin correction. At high altitude, correction of maternal haemoglobin should not be performed to assess the risks of preterm birth and stillbirth. In fact, using the low-altitude Hb cut-off is associated with better prediction of those at risk.
77 FR 36563 - Indian Health Service; Reimbursement Rates for Calendar Year 2012 Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-19
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Indian Health Service; Reimbursement Rates for Calendar Year 2012 Correction AGENCY: Indian Health Service, HHS. ACTION: Notice; correction. SUMMARY: The Indian Health Service published a document in the Federal Register on June 6, 2012, concerning rates for...
Identification accuracy of children versus adults: a meta-analysis.
Pozzulo, J D; Lindsay, R C
1998-10-01
Identification accuracy of children and adults was examined in a meta-analysis. Preschoolers (M = 4 years) were less likely than adults to make correct identifications. Children over the age of 5 did not differ significantly from adults with regard to correct identification rate. Children of all ages examined were less likely than adults to correctly reject a target-absent lineup. Even adolescents (M = 12-13 years) did not reach an adult rate of correct rejection. Compared to simultaneous lineup presentation, sequential lineups increased the child-adult gap for correct rejections. Providing child witnesses with identification practice or training did not increase their correct rejection rates. Suggestions for children's inability to correctly reject target-absent lineups are discussed. Future directions for identification research are presented.
Measuring the human psychophysiological conditions without contact
NASA Astrophysics Data System (ADS)
Scalise, L.; Casacanditella, L.; Cosoli, G.
2017-08-01
Heart Rate Variability (HRV) analysis studies the variations of cardiac rhythm caused by autonomic regulation. HRV analysis can be applied to the study of the effects of mental or physical stressors on psychophysiological conditions. The present work is a pilot study performed on a 23-year-old healthy subject. The measurement of HRV was performed by means of two sensors, namely an electrocardiograph and a Laser Doppler Vibrometer, which is a non-contact device able to detect the skin vibrations related to cardiac activity. The present study aims to evaluate the effects of a physical task on HRV parameters (in both the time and frequency domains), and consequently on autonomic regulation, and the capability of Laser Doppler Vibrometry to correctly detect the effects of stress on Heart Rate Variability. The results show a significant reduction of HRV parameters caused by the execution of the physical task (i.e. variations of 25-40% for time-domain parameters, and even larger in the frequency domain); this is consistent with the fact that stress reduces the capability of the organism to vary the heart rate (and, consequently, limits HRV). LDV was able to correctly detect this phenomenon in the time domain, while the parameters in the frequency domain showed significant deviations with respect to the gold-standard technique (i.e. ECG). This may be due to movement artefacts that substantially modified the shape of the vibration signal measured by means of LDV after the physical task was performed. In the future, in order to avoid this drawback, the LDV technique could be used to evaluate the effects of a mental task on HRV signals (i.e. the evaluation of mental stress).
Brunet-Benkhoucha, M; Verhaegen, F; Lassalle, S; Béliveau-Nadeau, D; Reniers, B; Donath, D; Taussky, D; Carrier, J-F
2008-07-01
To develop a tomosynthesis-based dose assessment procedure that can be performed after an I-125 prostate seed implantation, while the patient is still under anaesthesia on the treatment table. Our seed detection procedure involves the reconstruction of a volume of interest based on the backprojection of 7 seed-only binary images acquired over an angle of 60° with an isocentric imaging system. A binary seed-only volume is generated by simple thresholding of the volume of interest. Seed positions are extracted from this volume with a 3D connected component analysis and a statistical classifier that determines the number of seeds in each cluster of connected voxels. A graphical user interface (GUI) allows the user to visualize the result and to introduce corrections, if needed. A phantom study and a clinical study (24 patients) were carried out to validate the technique. The phantom study demonstrated a very good localization accuracy of (0.4+/-0.4) mm when compared to CT-based reconstruction. This leads to dosimetric errors on D90 and V100 of 0.5% and 0.1%, respectively. In a patient study with an average of 56 seeds per implant, the automatic tomosynthesis-based reconstruction yields a detection rate of 96% of the seeds and fewer than 1.5% false positives. With the help of the GUI, the user can achieve a 100% detection rate in an average of 3 minutes. This technique would make it possible to identify possible underdosage and to correct it by reimplanting additional seeds if needed. A more uniform dose coverage could then be achieved in LDR prostate brachytherapy. © 2008 American Association of Physicists in Medicine.
Dynamic balance abilities of collegiate men for the bench press.
Piper, Timothy J; Radlo, Steven J; Smith, Thomas J; Woodward, Ryan W
2012-12-01
This study investigated the dynamic balance detection ability of college men for the bench press exercise. Thirty-five college men (mean ± SD: age = 22.4 ± 2.76 years, bench press experience = 8.3 ± 2.79 years, and estimated 1RM = 120.1 ± 21.8 kg) completed 1 repetition of the bench press for each of 3 bar loading arrangements. In a randomized fashion, subjects performed the bench press with a 20-kg barbell loaded with one of the following: a balanced load, one 20-kg plate on each side; an imbalanced asymmetrical load, one 20-kg plate on one side and a 20-kg plate plus a 1.25-kg plate on the other side; or an imbalanced asymmetrical center of mass, one 20-kg plate on one side and sixteen 1.25-kg plates on the other side. Subjects were blindfolded and wore ear protection throughout all testing to decrease the ability to otherwise detect the loads. Binomial data analysis indicated that subjects correctly detected the imbalance of the imbalanced asymmetrical center of mass condition (p[correct detection] = 0.89, p < 0.01) but did not correctly detect the balanced condition (p[correct detection] = 0.46, p = 0.74) or the imbalanced asymmetrical condition (p[correct detection] = 0.60, p = 0.31). Although it appears that a substantial shift in the center of mass of the plates leads to the detection of barbell imbalance, a minor change such as the addition of 1.25 kg (2.5 lb) in the asymmetrical condition did not result in consistent detection. Our data indicate that the establishment of a biofeedback loop capable of supporting balance detection was only realized under a high degree of imbalance. Given that balance detection was not present in either the even or the slightly uneven loading condition, the inclusion of balance training for the upper body may be futile if exercises are unable to establish such a feedback loop and thus elicit an improvement in balance performance.
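The binomial analysis referred to above can be reproduced in outline as follows, testing whether the proportion of correct detections differs from the 0.5 expected by guessing; the counts are placeholders chosen only to be consistent with the reported proportion of 0.89 for 35 subjects, not the study's raw data.

```python
from scipy.stats import binomtest

# e.g. 31 of 35 lifters correctly detected the shifted-centre-of-mass load
result = binomtest(31, 35, p=0.5, alternative="two-sided")
print(31 / 35, result.pvalue)    # detection proportion ~0.89, p well below 0.01
```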
NASA Astrophysics Data System (ADS)
Cho, A.-Ra; Suh, Myoung-Seok
2013-08-01
The present study developed and assessed a correction technique (CSaTC: Correction based on Spatial and Temporal Continuity) for the detection and correction of contaminated Normalized Difference Vegetation Index (NDVI) time series data. Global Inventory Modeling and Mapping Studies (GIMMS) NDVI data from 1982 to 2006, with a 15-day period and an 8-km spatial resolution, were used. CSaTC utilizes the short-term continuity of vegetation to detect contaminated pixels and then corrects the detected pixels using the spatio-temporal continuity of vegetation. CSaTC was applied to the NDVI data over the East Asian region, which exhibits diverse seasonal and interannual variations in vegetation activity. The correction skill of CSaTC was compared to that of two previously applied methods, IDR (iterative Interpolation for Data Reconstruction) and Park et al. (2011), using GIMMS NDVI data. CSaTC reasonably resolved the overcorrection and spreading phenomenon caused by the excessive correction of Park et al. (2011). Validation using simulated NDVI time series data showed that CSaTC has systematically better correction skill in bias and RMSE, irrespective of vegetation phenology type and noise level. In general, CSaTC showed a good recovery of contaminated data appearing over short periods, at a level similar to that obtained using the IDR technique. In addition, it captured NDVI multi-peaks and the germination and defoliation patterns more accurately than IDR, which overcompensates in seasons with high temporal variation and where the NDVI data exhibit multiple peaks.
Ruiz-Gutierrez, Viviana; Hooten, Melvin B.; Campbell Grant, Evan H.
2016-01-01
Biological monitoring programmes are increasingly relying upon large volumes of citizen-science data to improve the scope and spatial coverage of information, challenging the scientific community to develop design and model-based approaches to improve inference. Recent statistical models in ecology have been developed to accommodate false-negative errors, although current work points to false-positive errors as equally important sources of bias. This is of particular concern for the success of any monitoring programme given that rates as small as 3% could lead to the overestimation of the occurrence of rare events by as much as 50%, and even small false-positive rates can severely bias estimates of occurrence dynamics. We present an integrated, computationally efficient Bayesian hierarchical model to correct for false-positive and false-negative errors in detection/non-detection data. Our model combines independent, auxiliary data sources with field observations to improve the estimation of false-positive rates, when a subset of field observations cannot be validated a posteriori or assumed as perfect. We evaluated the performance of the model across a range of occurrence rates, false-positive and false-negative errors, and quantity of auxiliary data. The model performed well under all simulated scenarios, and we were able to identify critical auxiliary data characteristics which resulted in improved inference. We applied our false-positive model to a large-scale, citizen-science monitoring programme for anurans in the north-eastern United States, using auxiliary data from an experiment designed to estimate false-positive error rates. Not correcting for false-positive rates resulted in biased estimates of occupancy in 4 of the 10 anuran species we analysed, leading to an overestimation of the average number of occupied survey routes by as much as 70%. The framework we present for data collection and analysis is able to efficiently provide reliable inference for occurrence patterns using data from a citizen-science monitoring programme. However, our approach is applicable to data generated by any type of research and monitoring programme, independent of skill level or scale, when effort is placed on obtaining auxiliary information on false-positive rates.
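The sensitivity of naive occupancy estimates to small false-positive rates, as quoted in the abstract, can be illustrated with a short simulation. This is a hedged sketch, not the authors' Bayesian hierarchical model; the parameter values are assumptions chosen to show how the occurrence of a rare species is overestimated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_visits = 1000, 3
psi_true = 0.05        # true occupancy of a rare species (assumption)
p_det = 0.5            # per-visit detection probability (false negatives)
p_fp = 0.03            # per-visit false-positive probability

occupied = rng.random(n_sites) < psi_true
detections = np.zeros((n_sites, n_visits), dtype=bool)
for v in range(n_visits):
    true_det = occupied & (rng.random(n_sites) < p_det)
    false_det = ~occupied & (rng.random(n_sites) < p_fp)
    detections[:, v] = true_det | false_det

# A naive estimate treats any site with at least one detection as occupied,
# so even a 3% false-positive rate inflates the estimate well above psi_true.
naive_occupancy = detections.any(axis=1).mean()
print(f"true occupancy = {psi_true:.3f}, naive estimate = {naive_occupancy:.3f}")
```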
Edge Detection Method Based on Neural Networks for COMS MI Images
NASA Astrophysics Data System (ADS)
Lee, Jin-Ho; Park, Eun-Bin; Woo, Sun-Hee
2016-12-01
Communication, Ocean And Meteorological Satellite (COMS) Meteorological Imager (MI) images are processed for radiometric and geometric correction from raw image data. When intermediate image data are matched and compared with reference landmark images in the geometrical correction process, various techniques for edge detection can be applied. It is essential to obtain a precise, correctly positioned edge image in this process, since its matching against the reference is directly related to the accuracy of the ground station output images. An edge detection method based on neural networks is applied in the ground processing of MI images to obtain sharp edges in the correct positions. The simulation results are analyzed and characterized by comparing them with the results of conventional methods, such as Sobel and Canny filters.
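For reference, the two conventional baselines named in the abstract (Sobel and Canny) can be computed with OpenCV as in the sketch below. The file name and thresholds are placeholder assumptions, and this is only the comparison baseline, not the neural-network method itself.

```python
import cv2
import numpy as np

# Hypothetical file name; any single-band intermediate MI image would do.
img = cv2.imread("mi_intermediate.png", cv2.IMREAD_GRAYSCALE)

# Sobel gradient magnitude (one conventional baseline), thresholded to edges.
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
sobel_mag = cv2.magnitude(gx, gy)
sobel_edges = (sobel_mag > 0.25 * sobel_mag.max()).astype(np.uint8) * 255

# Canny edges (the other baseline); threshold values are illustrative.
canny_edges = cv2.Canny(img, 50, 150)
```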
Damage Detection for Historical Architectures Based on Tls Intensity Data
NASA Astrophysics Data System (ADS)
Li, Q.; Cheng, X.
2018-04-01
TLS (Terrestrial Laser Scanner) has long been preferred in the cultural heritage field for 3D documentation of historical sites thanks to its ability to acquire geometric information without any physical contact. Besides the geometric information, most TLS systems also record intensity information, which is considered an important measurement of the spectral property of the scanned surface. Recent studies have shown the potential of using intensity for damage detection. However, the original intensity is affected by scanning geometry, such as range and incidence angle, and other factors, making the results less accurate. Therefore, in this paper, we present a method to detect certain damage areas using corrected intensity data. Firstly, two data-driven models are developed to correct the range and incidence angle effects. The corrected intensity is then used to generate 2D intensity images for classification. After the damage areas are detected, they are re-projected onto the 3D point cloud for better visual representation and further investigation. The experimental results indicate the feasibility and validity of using the corrected intensity for damage detection.
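A minimal sketch of the kind of range and incidence-angle normalisation described, assuming a Lambertian (cosine) angle model and an empirical range polynomial fitted to reference targets. The coefficients and function name are illustrative assumptions, not the paper's fitted data-driven models.

```python
import numpy as np

def correct_intensity(intensity, range_m, incidence_rad,
                      ref_range=10.0, range_poly=None):
    """Normalise raw TLS intensity for range and incidence-angle effects.

    A minimal sketch: the incidence-angle effect is removed with a Lambertian
    (cosine) model, and the range effect with an empirical polynomial fitted
    to reference-target measurements (coefficients below are assumptions).
    """
    if range_poly is None:
        # Hypothetical coefficients of a data-driven range-effect polynomial.
        range_poly = np.poly1d([0.002, -0.05, 1.0])
    corrected = intensity / np.cos(incidence_rad)            # angle normalisation
    corrected *= range_poly(ref_range) / range_poly(range_m)  # range normalisation
    return corrected
```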
NASA Technical Reports Server (NTRS)
Marthaler, J. G.; Heighway, J. E.
1979-01-01
An iceberg detection and identification system consisting of a moderate resolution Side Looking Airborne Radar (SLAR) interfaced with a Radar Image Processor (RIP) based on a ROLM 1664 computer with a 32K core memory expandable to 64K is described. The system can be operated in high- or low-resolution sampling modes. Specifically designed algorithms are applied to digitized signal returns to provide automatic target detection and location, geometrically correct video image display, and data recording. The real aperture Motorola AN/APS-94D SLAR operates in the X-band and is tunable between 9.10 and 9.40 GHz; its output power is 45 kW peak with a pulse repetition rate of 750 pulses per second. Schematic diagrams of the system are provided, together with preliminary test data.
Rotational relaxation of CF+(X1Σ) in collision with He(1S)
NASA Astrophysics Data System (ADS)
Denis-Alpizar, O.; Inostroza, N.; Castro Palacio, J. C.
2018-01-01
The carbon monofluoride cation (CF+) has been detected recently in Galactic and extragalactic regions. Therefore, excitation rate coefficients of this molecule in collision with He and H2 are necessary for a correct interpretation of the astronomical observations. The main goal of this work is to study the collision of CF+ with He in full dimensionality at the close-coupling level and to report a large set of rotational rate coefficients. New ab initio interaction energies at the CCSD(T)/aug-cc-pv5z level of theory were computed, and a three-dimensional potential energy surface was represented using a reproducing kernel Hilbert space. Close-coupling scattering calculations were performed at collisional energies up to 1600 cm-1 in the ground vibrational state. The vibrational quenching cross-sections were found to be at least three orders of magnitude lower than the pure rotational cross-sections. Also, the collisional rate coefficients were reported for the lowest 20 rotational states of CF+ and an even propensity rule was found to be in action only for j > 4. Finally, the hyperfine rate coefficients were explored. These data can be useful for the determination of the interstellar conditions where this molecule has been detected.
Fast radio burst event rate counts - I. Interpreting the observations
NASA Astrophysics Data System (ADS)
Macquart, J.-P.; Ekers, R. D.
2018-02-01
The fluence distribution of the fast radio burst (FRB) population (the 'source count' distribution, N(>F) ∝ F^α), is a crucial diagnostic of its distance distribution, and hence the progenitor evolutionary history. We critically reanalyse current estimates of the FRB source count distribution. We demonstrate that the Lorimer burst (FRB 010724) is subject to discovery bias, and should be excluded from all statistical studies of the population. We re-examine the evidence for flat, α > -1, source count estimates based on the ratio of single-beam to multiple-beam detections with the Parkes multibeam receiver, and show that current data imply only a very weak constraint of α ≲ -1.3. A maximum-likelihood analysis applied to the portion of the Parkes FRB population detected above the observational completeness fluence of 2 Jy ms yields α = -2.6^{+0.7}_{-1.3}. Uncertainties in the location of each FRB within the Parkes beam render estimates of the Parkes event rate uncertain in both normalizing survey area and the estimated post-beam-corrected completeness fluence; this uncertainty needs to be accounted for when comparing the event rate against event rates measured at other telescopes.
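The maximum-likelihood step mentioned in the abstract can be sketched with the standard Pareto estimator for the cumulative index, applied to fluences above the completeness limit. The example fluences below are made up purely for illustration.

```python
import numpy as np

def cumulative_index_mle(fluences, f_min):
    """Maximum-likelihood estimate of the cumulative source-count index alpha,
    N(>F) proportional to F**alpha, for fluences above the completeness limit
    f_min (standard Pareto/power-law estimator; alpha is negative for a
    declining source-count distribution)."""
    f = np.asarray(fluences, dtype=float)
    f = f[f >= f_min]
    alpha_hat = -f.size / np.sum(np.log(f / f_min))
    alpha_err = abs(alpha_hat) / np.sqrt(f.size)   # approximate 1-sigma error
    return alpha_hat, alpha_err

# Illustrative (made-up) fluences in Jy ms above a 2 Jy ms completeness cut.
example = [2.3, 3.1, 2.8, 7.5, 4.4, 2.1, 13.0, 5.6]
print(cumulative_index_mle(example, f_min=2.0))
```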
Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations.
Zala, Sarah M; Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J
2017-01-01
House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4-12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a 'gold standard' reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community.
Application of side-oblique image-motion blur correction to Kuaizhou-1 agile optical images.
Sun, Tao; Long, Hui; Liu, Bao-Cheng; Li, Ying
2016-03-21
Given the recent development of agile optical satellites for rapid-response land observation, side-oblique image-motion (SOIM) detection and blur correction have become increasingly essential for improving the radiometric quality of side-oblique images. The Chinese small-scale agile mapping satellite Kuaizhou-1 (KZ-1) was developed by the Harbin Institute of Technology and launched for multiple emergency applications. Like other agile satellites, KZ-1 suffers from SOIM blur, particularly in captured images with large side-oblique angles. SOIM detection and blur correction are critical for improving the image radiometric accuracy. This study proposes a SOIM restoration method based on segmental point spread function detection. The segment region width is determined by satellite parameters such as speed, height, integration time, and side-oblique angle. The corresponding algorithms and a matrix form are proposed for SOIM blur correction. Radiometric objective evaluation indices are used to assess the restoration quality. Beijing regional images from KZ-1 are used as experimental data. The radiometric quality is found to increase greatly after SOIM correction. Thus, the proposed method effectively corrects image motion for KZ-1 agile optical satellites.
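As a hedged illustration of motion-blur restoration (not the paper's segmental algorithm), the sketch below builds a linear-motion point spread function for a single segment and applies a frequency-domain Wiener filter. The blur length, orientation, and regulariser are assumptions.

```python
import numpy as np

def motion_psf(length_px, shape):
    """Horizontal linear-motion PSF of the given length, embedded in an array
    of the image shape (a stand-in for one segmental side-oblique PSF)."""
    psf = np.zeros(shape)
    psf[0, :length_px] = 1.0 / length_px
    return psf

def wiener_deblur(blurred, psf, k=0.01):
    """Frequency-domain Wiener filter; k is a noise-to-signal regulariser.
    Note: placing the PSF at the array origin introduces a circular shift in
    the output, which is acceptable for this illustrative sketch."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(W * G))

# Example usage on one image segment (blur length of 7 pixels assumed):
# restored = wiener_deblur(segment, motion_psf(7, segment.shape))
```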
On Motion Planning with Uncertainty. Revised.
1984-01-01
drift to the right, sticking at the right corner. See Fig. 1.6. Given the uncertainty in the position sensor, it is impossible to execute corrective... action once sticking is detected. This is because the corrective action depends on knowing the side at which sticking occurred. Worse than being... unable to correct errors should they occur, is the inability to detect success. In the given example, it is possible that the peg may move smoothly into
Hough transform for clustered microcalcifications detection in full-field digital mammograms
NASA Astrophysics Data System (ADS)
Fanizzi, A.; Basile, T. M. A.; Losurdo, L.; Amoroso, N.; Bellotti, R.; Bottigli, U.; Dentamaro, R.; Didonna, V.; Fausto, A.; Massafra, R.; Moschetta, M.; Tamborra, P.; Tangaro, S.; La Forgia, D.
2017-09-01
Many screening programs use mammography as the principal diagnostic tool for detecting breast cancer at a very early stage. Despite the efficacy of mammograms in highlighting breast diseases, the detection of some lesions remains doubtful for radiologists. In particular, the extremely minute and elongated salt-like particles of microcalcifications are sometimes no larger than 0.1 mm and represent approximately half of all cancers detected by means of mammograms. Hence the need for automatic tools able to support radiologists in their work. Here, we propose a computer-assisted diagnostic tool to support radiologists in identifying microcalcifications in full (native) digital mammographic images. The proposed CAD system consists of a pre-processing step, which improves contrast and reduces noise by applying a Sobel edge detection algorithm and a Gaussian filter, followed by a microcalcification detection step performed by exploiting the circular Hough transform. The procedure's performance was tested on 200 images from the Breast Cancer Digital Repository (BCDR), a publicly available database. The automatically detected clusters of microcalcifications were evaluated by skilled radiologists, who assessed the validity of the correctly identified regions of interest as well as the system error in cases of missed clustered microcalcifications. The system performance was evaluated in terms of sensitivity and false positives per image (FPi) rate and proved comparable to state-of-the-art approaches. The proposed model was able to accurately predict the microcalcification clusters, obtaining performance (sensitivity = 91.78% and FPi rate = 3.99) that compares favorably with other state-of-the-art approaches.
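A minimal sketch of the detection step using OpenCV's circular Hough transform, with Gaussian smoothing beforehand as in the described pre-processing. The file name and all parameter values are assumptions and would need tuning to the BCDR images.

```python
import cv2
import numpy as np

# Hypothetical file name for a full digital mammogram.
img = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)

# Pre-processing: noise reduction before the circular Hough transform.
smoothed = cv2.GaussianBlur(img, (5, 5), 0)

# HOUGH_GRADIENT applies a Canny edge detector internally (param1 is the
# upper Canny threshold); small radii target microcalcification-like spots.
circles = cv2.HoughCircles(smoothed, cv2.HOUGH_GRADIENT, 1, 8,
                           param1=120, param2=12, minRadius=1, maxRadius=6)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # Mark each candidate microcalcification on the original image.
        cv2.circle(img, (int(x), int(y)), int(r), 255, 1)
```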
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-12
... subsequent deployment of the oxygen masks. We are issuing this AD to detect and correct fatigue cracking of the fuselage...
Cobinamide-Based Cyanide Analysis by Multiwavelength Spectrometry in a Liquid Core Waveguide
Ma, Jian; Dasgupta, Purnendu K.; Blackledge, William; Boss, Gerry R.
2010-01-01
A novel cyanide analyzer based on sensitive cobinamide chemistry relies on simultaneous reagent and sample injection and detection in a 50 cm liquid core waveguide (LCW) flow cell illuminated by a white light emitting diode. The transmitted light is read by a fiber-optic charge coupled device (CCD) spectrometer. Alkaline cobinamide (orange, λmax = 510 nm) changes to violet (λmax = 583 nm) upon reaction with cyanide. Multiwavelength detection permits built-in correction for artifact responses intrinsic to a single-line flow injection system and corrects for drift. With optimum choice of the reaction medium, flow rate, and mixing coil length, the limit of detection (LOD, S/N = 3) is 30 nM and the linear dynamic range extends to 10 μM. The response base width for 1% carryover is <95 s, permitting a throughput of 38 samples/h. The relative standard deviations (rsd) for repetitive determinations at 0.15, 0.5, and 1 μM were 7.6% (n = 5), 3.2% (n = 7), and 1.7% (n = 6), respectively. Common ions at 250–80 000× concentrations do not interfere except for sulfide. For the determination of 2 μM CN−, the presence of 2, 5, 10, 20, 100, and 1000 μM HS− results in 22, 27, 48, 58, 88, and 154% overestimation of cyanide. The sulfide product actually has a different characteristic absorption, and in those samples where significant presence is likely, this can be corrected for. We demonstrate applicability by analyzing the hydrolytic cyanide extract of apple and pear seeds with orange seeds as control and also measure HCN in breath air samples. Spike recoveries in these sample extracts ranged from 91 to 108%. PMID:20560532
Custodio, Nilton; Lira, David; Herrera-Perez, Eder; Montesinos, Rosa; Castro-Suarez, Sheila; Cuenca-Alfaro, José; Valeriano-Lorenzo, Lucía
2017-01-01
Background/Aims: Short tests for the early detection of cognitive impairment are necessary in the primary care setting, particularly in populations with a low educational level. The aim of this study was to assess the performance of the Memory Alteration Test (M@T) in discriminating controls, patients with amnestic Mild Cognitive Impairment (aMCI) and patients with early Alzheimer's Dementia (AD) in a sample of individuals with a low level of education. Methods: Cross-sectional study to assess the performance of the M@T (study test), compared to the neuropsychological evaluation (gold standard test) scores, in 247 elderly subjects with a low education level from Lima, Peru. The cognitive evaluation included three sequential stages: (1) screening (to detect cases with cognitive impairment); (2) nosological diagnosis (to determine the specific disease); and (3) classification (to differentiate disease subtypes). Subjects with negative results for all stages were considered cognitively normal (controls). Test performance was assessed by means of the area under the receiver operating characteristic (ROC) curve. We calculated validity measures (sensitivity, specificity and percentage correctly classified), internal consistency (Cronbach's alpha coefficient), and concurrent validity (Pearson's correlation coefficient between the M@T and Clinical Dementia Rating (CDR) scores). Results: The Cronbach's alpha coefficient was 0.79 and Pearson's correlation coefficient was 0.79 (p < 0.01). The AUC of the M@T to discriminate between early AD and aMCI was 99.60% (sensitivity = 100.00%, specificity = 97.53% and correctly classified = 98.41%) and to discriminate between aMCI and controls was 99.56% (sensitivity = 99.17%, specificity = 91.11%, and correctly classified = 96.99%). Conclusions: The M@T is a short test with good performance in discriminating controls, aMCI and early AD in individuals with a low level of education from urban settings.
Alió Del Barrio, Jorge L; Tiveron, Mauro; Plaza-Puche, Ana B; Amesty, María A; Casanova, Laura; García, María J; Alió, Jorge L
2017-10-18
To evaluate the visual outcomes after femtosecond laser-assisted laser in situ keratomileusis (LASIK) surgery to correct primary compound hyperopic astigmatism with high cylinder using a fast repetition rate excimer laser platform with optimized aspheric profiles and cyclotorsion control. Eyes with primary simple or compound hyperopic astigmatism and a cylinder power ≥3.00 D had uneventful femtosecond laser-assisted LASIK with a fast repetition rate excimer laser ablation, aspheric profiles, and cyclotorsion control. Visual, refractive, and aberrometric results were evaluated at the 3- and 6-month follow-up. The astigmatic outcome was evaluated using the Alpins method and ASSORT software. This study enrolled 80 eyes at 3 months and 50 eyes at 6 months. The significant reduction in refractive sphere and cylinder 3 and 6 months postoperatively (p<0.01) was associated with an improved uncorrected distance visual acuity (p<0.01). A total of 23.75% required retreatment 3 months after surgery. Efficacy and safety indices at 6 months were 0.90 and 1.00, respectively. At 6 months, 80% of eyes had an SE within ±0.50 D and 96% within ±1.00 D. No significant differences were detected between the third and the sixth postoperative months in refractive parameters. A significant increase in the spherical aberration was detected, but not in coma. The correction index was 0.94 at 3 months. Laser in situ keratomileusis for primary compound hyperopic astigmatism with high cylinder (>3.00 D) using the latest excimer platforms with cyclotorsion control, fast repetition rate, and optimized aspheric profiles is safe, moderately effective, and predictable.
Improved forest change detection with terrain illumination corrected landsat images
USDA-ARS?s Scientific Manuscript database
An illumination correction algorithm has been developed to improve the accuracy of forest change detection from Landsat reflectance data. This algorithm is based on an empirical rotation model and was tested on the Landsat imagery pair over Cherokee National Forest, Tennessee, Uinta-Wasatch-Cache N...
Feasibility of the capnogram to monitor ventilation rate during cardiopulmonary resuscitation.
Aramendi, Elisabete; Elola, Andoni; Alonso, Erik; Irusta, Unai; Daya, Mohamud; Russell, James K; Hubner, Pia; Sterz, Fritz
2017-01-01
The rates of chest compressions (CCs) and ventilations are both important metrics to monitor the quality of cardiopulmonary resuscitation (CPR). Capnography permits monitoring ventilation, but the CCs provided during CPR corrupt the capnogram and compromise the accuracy of automatic ventilation detectors. The aim of this study was to evaluate the feasibility of an automatic algorithm based on the capnogram to detect ventilations and provide feedback on ventilation rate during CPR, specifically addressing intervals where CCs are delivered. The dataset used to develop and test the algorithm contained in-hospital and out-of-hospital cardiac arrest episodes. The method relies on adaptive thresholding to detect ventilations in the first derivative of the capnogram. The performance of the detector was reported in terms of sensitivity (SE) and positive predictive value (PPV). The overall performance was reported in terms of the rate error and errors in the hyperventilation alarms. Results were given separately for the intervals with CCs. A total of 83 episodes were considered, resulting in 4880 min and 46,740 ventilations (8741 during CCs). The method showed an overall SE/PPV above 99% and 97%, respectively, even in intervals with CCs. The error for the ventilation rate was below 1.8 min-1 in any group, and >99% of the ventilation alarms were correctly detected. A method to provide accurate feedback on ventilation rate using only the capnogram is proposed. Its accuracy was proven even in intervals where the capnography signal was severely corrupted by CCs. This algorithm could be integrated into monitor/defibrillators to provide reliable feedback on ventilation rate during CPR. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
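A simplified stand-in for the described detector: adaptive thresholding of the capnogram's first derivative, with a refractory period between detected breaths. The threshold rule and timing constants are assumptions, not the published values.

```python
import numpy as np

def detect_ventilations(capno, fs, min_sep_s=1.5):
    """Detect ventilation onsets in a capnogram by adaptive thresholding of its
    first derivative (a simplified stand-in for the algorithm in the paper).

    capno     : capnogram samples (e.g. mmHg)
    fs        : sampling rate in Hz
    min_sep_s : refractory period between breaths (assumption)
    Returns sample indices of detected insufflation onsets (sharp CO2 downstrokes).
    """
    d = np.diff(capno) * fs                  # first derivative (units/s)
    thr = -0.5 * np.std(d)                   # adaptive, signal-dependent threshold
    candidates = np.flatnonzero(d < thr)
    onsets, last = [], -np.inf
    for i in candidates:
        if i - last > min_sep_s * fs:        # enforce refractory period
            onsets.append(i)
            last = i
    return np.array(onsets)

# Ventilation rate in min^-1 over the analysed interval:
# rate = 60 * len(onsets) / (len(capno) / fs)
```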
Model-Based Building Detection from Low-Cost Optical Sensors Onboard Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Karantzalos, K.; Koutsourakis, P.; Kalisperakis, I.; Grammatikopoulos, L.
2015-08-01
The automated and cost-effective detection of buildings at ultra-high spatial resolution is of major importance for various engineering and smart-city applications. To this end, in this paper, a model-based building detection technique has been developed, able to extract and reconstruct buildings from UAV aerial imagery and low-cost imaging sensors. In particular, the developed approach, through advanced structure from motion, bundle adjustment and dense image matching, computes a DSM and a true orthomosaic from the numerous GoPro images, which are characterised by significant geometric distortions and a fish-eye effect. An unsupervised multi-region graph-cut segmentation and a rule-based classification are responsible for delivering the initial multi-class classification map. The DTM is then calculated based on an inpainting and mathematical morphology process. A data fusion process between the buildings detected from the DSM/DTM and the classification map feeds a grammar-based building reconstruction, and the scene buildings are extracted and reconstructed. Preliminary experimental results appear quite promising, with the quantitative evaluation indicating detection rates at object level of 88% regarding correctness and above 75% regarding detection completeness.
Hazardous sign detection for safety applications in traffic monitoring
NASA Astrophysics Data System (ADS)
Benesova, Wanda; Kottman, Michal; Sidla, Oliver
2012-01-01
The transportation of hazardous goods on public street systems can pose severe safety threats in the case of accidents. One of the solutions for these problems is the automatic detection and registration of vehicles which are marked with dangerous-goods signs. We present a prototype system which can detect a trained set of signs in high-resolution images under real-world conditions. This paper compares two different methods for the detection: a bag-of-visual-words (BoW) procedure and our approach, presented as pairs of visual words with Hough voting. The results of an extended series of experiments are provided in this paper. The experiments show that the size of the visual vocabulary is crucial and can significantly affect the recognition success rate. Different codebook sizes have been evaluated for this detection task. The best result of the first method, BoW, was 67% of hazardous signs successfully recognized, whereas the second method proposed in this paper, pairs of visual words with Hough voting, reached 94% correctly detected signs. The experiments are designed to verify the usability of the two proposed approaches in a real-world scenario.
Sunglass detection method for automation of video surveillance system
NASA Astrophysics Data System (ADS)
Sikandar, Tasriva; Samsudin, Wan Nur Azhani W.; Hawari Ghazali, Kamarul; Mohd, Izzeldin I.; Fazle Rabbi, Mohammad
2018-04-01
Wearing sunglasses to hide the face from surveillance cameras is a common activity in criminal incidents. Therefore, sunglass detection from surveillance video has become a pressing issue in the automation of security systems. In this paper we propose an image processing method to detect sunglasses in surveillance images. Specifically, a unique feature using facial height and width has been employed to identify the covered region of the face. The presence of an area covered by sunglasses is evaluated using the facial height-width ratio. A threshold value of the covered-area percentage is used to classify the glass-wearing face. Two different types of glasses have been considered, i.e., eyeglasses and sunglasses. The results of this study demonstrate that the proposed method is able to detect sunglasses under two different illumination conditions, namely room illumination as well as in the presence of sunlight. In addition, due to the multi-level checking in the facial region, this method has 100% accuracy in detecting sunglasses. However, in an exceptional case where fabric surrounding the face has a color similar to skin, the correct detection rate was found to be 93.33% for eyeglasses.
ERIC Educational Resources Information Center
McCane-Bowling, Sara J.; Strait, Andrea D.; Guess, Pamela E.; Wiedo, Jennifer R.; Muncie, Eric
2014-01-01
This study examined the predictive utility of five formative reading measures: words correct per minute, number of comprehension questions correct, reading comprehension rate, number of maze correct responses, and maze accurate response rate (MARR). Broad Reading cluster scores obtained via the Woodcock-Johnson III (WJ III) Tests of Achievement…
Impact of task-related changes in heart rate on estimation of hemodynamic response and model fit.
Hillenbrand, Sarah F; Ivry, Richard B; Schlerf, John E
2016-05-15
The blood oxygen level dependent (BOLD) signal, as measured using functional magnetic resonance imaging (fMRI), is widely used as a proxy for changes in neural activity in the brain. Physiological variables such as heart rate (HR) and respiratory variation (RV) affect the BOLD signal in a way that may interfere with the estimation and detection of true task-related neural activity. This interference is of particular concern when these variables themselves show task-related modulations. We first establish that a simple movement task reliably induces a change in HR but not RV. In group data, the effect of HR on the BOLD response was larger and more widespread throughout the brain than were the effects of RV or phase regressors. The inclusion of HR regressors, but not RV or phase regressors, had a small but reliable effect on the estimated hemodynamic response function (HRF) in M1 and the cerebellum. We next asked whether the inclusion of a nested set of physiological regressors combining phase, RV, and HR significantly improved the model fit in individual participants' data sets. There was a significant improvement from HR correction in M1 for the greatest number of participants, followed by RV and phase correction. These improvements were more modest in the cerebellum. These results indicate that accounting for task-related modulation of physiological variables can improve the detection and estimation of true neural effects of interest. Copyright © 2016 Elsevier Inc. All rights reserved.
Passive acoustic monitoring to detect spawning in large-bodied catostomids
Straight, Carrie A.; Freeman, Byron J.; Freeman, Mary C.
2014-01-01
Documenting timing, locations, and intensity of spawning can provide valuable information for conservation and management of imperiled fishes. However, deep, turbid or turbulent water, or occurrence of spawning at night, can severely limit direct observations. We have developed and tested the use of passive acoustics to detect distinctive acoustic signatures associated with spawning events of two large-bodied catostomid species (River Redhorse Moxostoma carinatum and Robust Redhorse Moxostoma robustum) in river systems in north Georgia. We deployed a hydrophone with a recording unit at four different locations on four different dates when we could both record and observe spawning activity. Recordings captured 494 spawning events that we acoustically characterized using dominant frequency, 95% frequency, relative power, and duration. We similarly characterized 46 randomly selected ambient river noises. Dominant frequency did not differ between redhorse species and ranged from 172.3 to 14,987.1 Hz. Duration of spawning events ranged from 0.65 to 11.07 s, River Redhorse having longer durations than Robust Redhorse. Observed spawning events had significantly higher dominant and 95% frequencies than ambient river noises. We additionally tested software designed to automate acoustic detection. The automated detection configurations correctly identified 80–82% of known spawning events, and falsely identified spawns 6–7% of the time when none occurred. These rates were combined over all recordings; rates were more variable among individual recordings. Longer spawning events were more likely to be detected. Combined with sufficient visual observations to ascertain species identities and to estimate detection error rates, passive acoustic recording provides a useful tool to study spawning frequency of large-bodied fishes that displace gravel during egg deposition, including several species of imperiled catostomids.
Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.
2015-01-01
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200
Towards automated assistance for operating home medical devices.
Gao, Zan; Detyniecki, Marcin; Chen, Ming-Yu; Wu, Wen; Hauptmann, Alexander G; Wactlar, Howard D
2010-01-01
To detect errors when subjects operate a home medical device, we observe them with multiple cameras. We then perform action recognition with a robust approach based on explicitly encoding motion information. This algorithm detects interest points and encodes not only their local appearance but also explicitly models their local motion. Our goal is to recognize individual human actions in the operation of a home medical device to see if the patient has correctly performed the required actions in the prescribed sequence. Using a specific infusion pump as a test case, requiring 22 operation steps from 6 action classes, our best classifier selects high-likelihood action estimates from 4 available cameras to obtain an average class recognition rate of 69%.
Jones, Sandra C
2004-01-01
Early detection of breast cancer by mammographic screening has the potential to dramatically reduce mortality rates, but many women do not comply with screening recommendations. The media are an important source of health information for many women, both through direct social marketing advertisements and through the indirect dissemination of information via editorial content. This study investigated the accuracy of breast cancer detection messages in items from the top-selling Australian women's magazines and three weekend newspapers, published in the six-month period from December 2000 to May 2001, that included any reference to breast cancer, and found that current coverage of breast cancer in the Australian print media conveys messages that are unlikely to encourage appropriate screening.
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. Cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the 'blocks' of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding, ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions, and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.
Position Corrections for Airspeed and Flow Angle Measurements on Fixed-Wing Aircraft
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2017-01-01
This report addresses position corrections made to airspeed and aerodynamic flow angle measurements on fixed-wing aircraft. These corrections remove the effects of angular rates, which contribute to the measurements when the sensors are installed away from the aircraft center of mass. Simplified corrections, which are routinely used in practice and assume small flow angles and angular rates, are reviewed. The exact, nonlinear corrections are then derived. The simplified corrections are sufficient in most situations; however, accuracy diminishes for smaller aircraft that incur higher angular rates, and for flight at high air flow angles. This is demonstrated using both flight test data and a nonlinear flight dynamics simulation of a subscale transport aircraft in a variety of low-speed, subsonic flight conditions.
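The exact lever-arm correction discussed in the report can be sketched as follows: the angular-rate contribution ω × r is removed from the measured relative-wind vector before airspeed and flow angles are recomputed. Sign conventions depend on how the sensor offset and the measured vector are defined, so treat this as an illustrative sketch rather than the report's exact formulation.

```python
import numpy as np

def correct_flow_measurements(v_meas_body, omega_body, r_sensor_body):
    """Remove angular-rate (lever-arm) contributions from air-data measurements.

    v_meas_body   : measured relative-wind vector [u, v, w] at the sensor, body axes
    omega_body    : angular rates [p, q, r] in rad/s
    r_sensor_body : sensor position relative to the centre of mass (m), body axes

    The sensor sees v_cm + omega x r, so the correction subtracts omega x r.
    """
    v_cm = np.asarray(v_meas_body, float) - np.cross(omega_body, r_sensor_body)
    u, v, w = v_cm
    V = np.linalg.norm(v_cm)       # corrected airspeed
    alpha = np.arctan2(w, u)       # corrected angle of attack
    beta = np.arcsin(v / V)        # corrected sideslip angle
    return V, alpha, beta

# Example: nose-boom sensor 2 m ahead of the CM during a 20 deg/s pitch rate.
print(correct_flow_measurements([50.0, 0.0, 2.0], [0.0, 0.35, 0.0], [2.0, 0.0, 0.0]))
```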
NASA Astrophysics Data System (ADS)
Bell, Michael Stephen
Sixty-four trained musicians listened to four-bar excerpts of selected chorales by J. S. Bach, which were presented both in four-part texture (harmonic context) and as a single voice part (melodic context). These digitally synthesized examples were created by combining the first twelve partials, and all voice parts had the same generic timbre. A within-subjects design was used, so subjects heard each example in both contexts. Included in the thirty-two excerpts for each subject were four soprano, four alto, four tenor, and four bass parts as the target voices. The intonation of the target voice was varied such that the voice stayed in tune or changed by a half cent, two cents, or eight cents per second (a cent is 1/100 of a half step). Although the direction of the deviation (sharp or flat) was not a significant factor in intonation perception, main effects for context (melodic vs. harmonic) and rate of deviation were highly significant, as was the interaction between rate of deviation and context. Specifically, selections that stayed in tune or changed only by half cents were not perceived differently; for larger deviations, the error was detected earlier and the intonation was judged to be worse in the harmonic contexts compared to the melodic contexts. Additionally, the direction of the error was correctly identified in the melodic context more often than the harmonic context only for the examples that mistuned at a rate of eight cents per second. Correct identification of the voice part that went out of tune in the four-part textures depended only on rate of deviation: the in-tune excerpts (no voice going out of tune) and the eight-cent deviations were correctly identified most often, the two-cent deviations were next, and the half-cent deviation excerpts were the least accurately identified.
Li, Xin; Varallyay, Csanad G; Gahramanov, Seymur; Fu, Rongwei; Rooney, William D; Neuwelt, Edward A
2017-11-01
Dynamic susceptibility contrast-magnetic resonance imaging (DSC-MRI) is widely used to obtain informative perfusion imaging biomarkers, such as the relative cerebral blood volume (rCBV). The related post-processing software packages for DSC-MRI are available from major MRI instrument manufacturers and third-party vendors. One unique aspect of DSC-MRI with low-molecular-weight gadolinium (Gd)-based contrast reagent (CR) is that CR molecules leak into the interstitial space and therefore confound the DSC signal detected. Several approaches to correct this leakage effect have been proposed throughout the years. Amongst the most popular is the Boxerman-Schmainda-Weisskoff (BSW) K2 leakage correction approach, in which the K2 pseudo-first-order rate constant quantifies the leakage. In this work, we propose a new method for the BSW leakage correction approach. Based on the pharmacokinetic interpretation of the data, the commonly adopted R2* expression accounting for contributions from both intravascular and extravasating CR components is transformed using a method mathematically similar to Gjedde-Patlak linearization. Then, the leakage rate constant (KL) can be determined as the slope of the linear portion of a plot of the transformed data. Using the DSC data of high-molecular-weight (~750 kDa), iron-based, intravascular Ferumoxytol (FeO), the pharmacokinetic interpretation of the new paradigm is empirically validated. The primary objective of this work is to empirically demonstrate that a linear portion often exists in the graph of the transformed data. This linear portion provides a clear definition of the Gd CR pseudo-leakage rate constant, which equals the slope derived from the linear segment. A secondary objective is to demonstrate that transformed points from the initial transient period during the CR wash-in often deviate from the linear trend of the linearized graph. The inclusion of these points will have a negative impact on the accuracy of the leakage rate constant, and even make it time dependent. Copyright © 2017 John Wiley & Sons, Ltd.
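The linearisation idea can be illustrated with a generic Gjedde-Patlak-style transform: plot the transformed data and fit the slope of the late, linear portion while excluding the early wash-in transient, as the abstract recommends. The exact transform applied to the DSC R2* data in the paper differs in detail, so this is only a conceptual sketch and the fit window is an assumption.

```python
import numpy as np

def patlak_slope(c_tissue, c_plasma, dt, fit_start=None):
    """Generic Gjedde-Patlak-style linearisation (illustrative of the idea
    behind a pseudo-leakage rate constant; not the paper's exact transform).

    x = integral(c_plasma) / c_plasma,  y = c_tissue / c_plasma;
    the slope of the late, linear portion estimates the leakage rate constant.
    Restrict the inputs to samples after contrast arrival so c_plasma is not
    near zero.
    """
    c_t = np.asarray(c_tissue, float)
    c_p = np.asarray(c_plasma, float)
    cum = np.cumsum(c_p) * dt
    x = cum / c_p
    y = c_t / c_p
    if fit_start is None:
        fit_start = len(x) // 2          # skip the early wash-in transient (assumption)
    slope, intercept = np.polyfit(x[fit_start:], y[fit_start:], 1)
    return slope, intercept
```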
76 FR 50726 - Integrated System Power Rates: Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-16
... DEPARTMENT OF ENERGY Southwestern Power Administration Integrated System Power Rates: Correction AGENCY: Southwestern Power Administration, DOE. ACTION: Notice of public review and comment; Correction. SUMMARY: Southwestern Power Administration published a document in the Federal Register (76 FR 48159) on...
75 FR 77796 - Airworthiness Directives; Saab AB, Saab Aerosystems Model SAAB 2000 Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-14
... of the horizontal stabilizer. Corrosion damage in these areas, if not detected and corrected, can... convoluted tubing on the harness, applying corrosion prevention compound to the inspected area, making sure...
Testing Moderating Detection Systems with 252Cf-Based Reference Neutron Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertel, Nolan E.; Sweezy, Jeremy; Sauber, Jeremiah S.
Calibration measurements were carried out on a probe designed to measure ambient dose equivalent in accordance with ICRP Pub 60 recommendations. It consists of a cylindrical 3He proportional counter surrounded by a 25-cm-diameter spherical polyethylene moderator. Its neutron response is optimized for dose rate measurements of neutrons between thermal energies and 20 MeV. The instrument was used to measure the dose rate in four separate neutron fields: unmoderated 252Cf, D2O-moderated 252Cf, polyethylene-moderated 252Cf, and a WEP neutron howitzer with 252Cf at its center. Dose equivalent measurements were performed at source-detector centerline distances from 50 to 200 cm. The ratio of air-scatter- and room-return-corrected ambient dose equivalent rates to ambient dose equivalent rates calculated with the code MCNP is tabulated.
A signal-detection-based diagnostic-feature-detection model of eyewitness identification.
Wixted, John T; Mickes, Laura
2014-04-01
The theoretical understanding of eyewitness identifications made from a police lineup has long been guided by the distinction between absolute and relative decision strategies. In addition, the accuracy of identifications associated with different eyewitness memory procedures has long been evaluated using measures like the diagnosticity ratio (the correct identification rate divided by the false identification rate). Framed in terms of signal-detection theory, both the absolute/relative distinction and the diagnosticity ratio are mainly relevant to response bias while remaining silent about the key issue of diagnostic accuracy, or discriminability (i.e., the ability to tell the difference between innocent and guilty suspects in a lineup). Here, we propose a signal-detection-based model of eyewitness identification, one that encourages the use of (and helps to conceptualize) receiver operating characteristic (ROC) analysis to measure discriminability. Recent ROC analyses indicate that the simultaneous presentation of faces in a lineup yields higher discriminability than the presentation of faces in isolation, and we propose a diagnostic feature-detection hypothesis to account for that result. According to this hypothesis, the simultaneous presentation of faces allows the eyewitness to appreciate that certain facial features (viz., those that are shared by everyone in the lineup) are non-diagnostic of guilt. To the extent that those non-diagnostic features are discounted in favor of potentially more diagnostic features, the ability to discriminate innocent from guilty suspects will be enhanced.
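ROC analysis of lineup data, as advocated here, amounts to cumulating correct and false identifications across confidence levels and measuring the (partial) area under the resulting curve. The sketch below uses made-up counts purely for illustration.

```python
import numpy as np

def lineup_roc(guilty_ids_by_conf, innocent_ids_by_conf, n_guilty, n_innocent):
    """Build ROC points for lineup identifications binned by confidence
    (highest confidence first), as used in ROC analyses of eyewitness memory.

    Returns cumulative correct-ID and false-ID rates plus the partial AUC
    (trapezoidal rule over the observed false-ID range).
    """
    hits = np.cumsum(guilty_ids_by_conf) / n_guilty       # correct ID rate
    fas = np.cumsum(innocent_ids_by_conf) / n_innocent    # false ID rate
    pauc = np.trapz(hits, fas)
    return hits, fas, pauc

# Illustrative (made-up) counts for 3 confidence levels, 600 lineups of each type.
hits, fas, pauc = lineup_roc([150, 90, 60], [10, 25, 45], 600, 600)
print(hits, fas, round(pauc, 4))
```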
Jessop, Maryam; Thompson, John D; Coward, Joanne; Sanderud, Audun; Jorge, José; de Groot, Martijn; Lança, Luís; Hogg, Peter
2015-03-01
Incidental findings on low-dose CT images obtained during hybrid imaging are an increasing phenomenon as CT technology advances. Understanding the diagnostic value of incidental findings along with the technical limitations is important when reporting image results and recommending follow-up, which may result in an additional radiation dose from further diagnostic imaging and an increase in patient anxiety. This study assessed lesions incidentally detected on CT images acquired for attenuation correction on two SPECT/CT systems. An anthropomorphic chest phantom containing simulated lesions of varying size and density was imaged on an Infinia Hawkeye 4 and a Symbia T6 using the low-dose CT settings applied for attenuation correction acquisitions in myocardial perfusion imaging. Twenty-two interpreters assessed 46 images from each SPECT/CT system (15 normal images and 31 abnormal images; 41 lesions). Data were evaluated using a jackknife alternative free-response receiver-operating-characteristic analysis (JAFROC). JAFROC analysis showed a significant difference (P < 0.0001) in lesion detection, with the figures of merit being 0.599 (95% confidence interval, 0.568, 0.631) and 0.810 (95% confidence interval, 0.781, 0.839) for the Infinia Hawkeye 4 and Symbia T6, respectively. Lesion detection on the Infinia Hawkeye 4 was generally limited to larger, higher-density lesions. The Symbia T6 allowed improved detection rates for midsized lesions and some lower-density lesions. However, interpreters struggled to detect small (5 mm) lesions on both image sets, irrespective of density. Lesion detection is more reliable on low-dose CT images from the Symbia T6 than from the Infinia Hawkeye 4. This phantom-based study gives an indication of potential lesion detection in the clinical context as shown by two commonly used SPECT/CT systems, which may assist the clinician in determining whether further diagnostic imaging is justified. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Yang, D
2015-06-15
Purpose: In the course of radiation therapy, the complex information processing workflow can result in errors such as incorrect or inaccurate patient setups. With automatic image checking and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, so as to allow automatic checking of positions and orientations for daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets and fused together to be used as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation and treatment site. Results: Five days' worth of clinically qualified portal images were gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. Of a total of 200 kV portal images, 182 were correctly detected, a correct detection rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically. It requires only the image intensity information in kV portal images. This method can be useful in the framework of Electronic Chart Check (ECCK) to reduce potential errors in the radiation therapy workflow and so improve patient safety. In addition, the auto-detection results, such as the patient treatment site position and patient orientation, could be used to guide subsequent image processing procedures, e.g. verification of daily patient setup accuracy. This work was partially supported by a research grant from Varian Medical Systems.
Comparative analysis of peak-detection techniques for comprehensive two-dimensional chromatography.
Latha, Indu; Reichenbach, Stephen E; Tao, Qingping
2011-09-23
Comprehensive two-dimensional gas chromatography (GC×GC) is a powerful technology for separating complex samples. The typical goal of GC×GC peak detection is to aggregate data points of analyte peaks based on their retention times and intensities. Two techniques commonly used for two-dimensional peak detection are the two-step algorithm and the watershed algorithm. A recent study [4] compared the performance of the two-step and watershed algorithms for GC×GC data with retention-time shifts in the second-column separations. In that analysis, the peak retention-time shifts were corrected while applying the two-step algorithm but the watershed algorithm was applied without shift correction. The results indicated that the watershed algorithm has a higher probability of erroneously splitting a single two-dimensional peak than the two-step approach. This paper reconsiders the analysis by comparing peak-detection performance for resolved peaks after correcting retention-time shifts for both the two-step and watershed algorithms. Simulations with wide-ranging conditions indicate that when shift correction is employed with both algorithms, the watershed algorithm detects resolved peaks with greater accuracy than the two-step method. Copyright © 2011 Elsevier B.V. All rights reserved.
An automated assay for the assessment of cardiac arrest in fish embryo.
Puybareau, Elodie; Genest, Diane; Barbeau, Emilie; Léonard, Marc; Talbot, Hugues
2017-02-01
Studies on fish embryo models are widely developed in research. They are used in several research fields including drug discovery or environmental toxicology. In this article, we propose an entirely automated assay to detect cardiac arrest in Medaka (Oryzias latipes) based on image analysis. We propose a multi-scale pipeline based on mathematical morphology. Starting from video sequences of entire wells in 24-well plates, we focus on the embryo, detect its heart, and ascertain whether or not the heart is beating based on intensity variation analysis. Our image analysis pipeline only uses commonly available operators. It has a low computational cost, allowing analysis at the same rate as acquisition. From an initial dataset of 3192 videos, 660 were discarded as unusable (20.7%), 655 of them correctly so (99.25%) and only 5 incorrectly so (0.75%). The 2532 remaining videos were used for our test. On these, 45 errors were made, leading to a success rate of 98.23%. Copyright © 2016 Elsevier Ltd. All rights reserved.
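The intensity-variation step can be sketched as follows: average the pixel intensity over the detected heart region frame by frame and look for a dominant spectral peak in a plausible cardiac frequency band. The band limits and signal-to-noise threshold are assumptions, not the published pipeline.

```python
import numpy as np

def heart_is_beating(frames, roi, fs, band=(0.5, 5.0), snr_thresh=4.0):
    """Decide whether a detected heart region is beating in a video clip.

    frames : array (n_frames, height, width)
    roi    : boolean mask of the heart region, shape (height, width)
    fs     : frame rate in Hz
    The mean ROI intensity is analysed in the frequency domain: a dominant
    spectral peak inside a plausible cardiac band indicates a beating heart.
    Band and threshold values are illustrative assumptions.
    """
    trace = frames[:, roi].mean(axis=1)
    trace = trace - trace.mean()
    spec = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak = spec[in_band].max()
    background = np.median(spec[1:]) + 1e-12
    return (peak / background) > snr_thresh
```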
Tracking employment shocks using mobile phone data
Toole, Jameson L.; Lin, Yu-Ru; Muehlegger, Erich; Shoag, Daniel; González, Marta C.; Lazer, David
2015-01-01
Can data from mobile phones be used to observe economic shocks and their consequences at multiple scales? Here we present novel methods to detect mass layoffs, identify individuals affected by them and predict changes in aggregate unemployment rates using call detail records (CDRs) from mobile phones. Using the closure of a large manufacturing plant as a case study, we first describe a structural break model to correctly detect the date of a mass layoff and estimate its size. We then use a Bayesian classification model to identify affected individuals by observing changes in calling behaviour following the plant's closure. For these affected individuals, we observe significant declines in social behaviour and mobility following job loss. Using the features identified at the micro level, we show that the same changes in these calling behaviours, aggregated at the regional level, can improve forecasts of macro unemployment rates. These methods and results highlight promise of new data resources to measure microeconomic behaviour and improve estimates of critical economic indicators. PMID:26018965
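A minimal stand-in for the structural-break step (see the sketch below): choose the breakpoint of a two-segment constant-mean model that minimises the residual sum of squares of, for example, a weekly call-count series; a mass layoff should appear as a negative jump at the detected date. The authors' model is richer, so this is illustrative only.

```python
import numpy as np

def structural_break(series):
    """Locate a single structural break in a time series of call counts by
    choosing the breakpoint that minimises the residual sum of squares of a
    two-segment constant-mean model (a simple stand-in for the paper's
    structural-break model)."""
    y = np.asarray(series, float)
    best_t, best_sse = None, np.inf
    for t in range(2, y.size - 2):
        sse = ((y[:t] - y[:t].mean()) ** 2).sum() + ((y[t:] - y[t:].mean()) ** 2).sum()
        if sse < best_sse:
            best_t, best_sse = t, sse
    drop = y[best_t:].mean() - y[:best_t].mean()   # layoff signature: a negative jump
    return best_t, drop

# Example with a simulated drop in weekly calls after week 30:
# t_hat, drop = structural_break(np.r_[np.full(30, 50.0), np.full(20, 35.0)])
```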
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojahn, Christopher K.
2015-10-20
This HDL code (hereafter referred to as "software") implements circuitry in Xilinx Virtex-5QV Field Programmable Gate Array (FPGA) hardware. This software allows the device to self-check the consistency of its own configuration memory for radiation-induced errors. The software then provides the capability to correct any single-bit errors detected in the memory using the device's inherent circuitry, or reload corrupted memory frames when larger errors occur that cannot be corrected with the device's built-in error correction and detection scheme.
Impact of color blindness on recognition of blood in body fluids.
Reiss, M J; Labowitz, D A; Forman, S; Wormser, G P
2001-02-12
Color blindness is a common hereditary X-linked disorder. To investigate whether color blindness affects the ability to detect the presence of blood in body fluids. Ten color-blind subjects and 20 sex- and age-matched control subjects were shown 94 photographs of stool, urine, or sputum. Frank blood was present in 57 (61%) of the photographs. Surveys were done to determine if board-certified internists had ever considered whether color blindness would affect detection of blood and whether an inquiry on color blindness was included in their standard medical interview. Color-blind subjects were significantly less able to identify correctly whether pictures of body fluids showed blood compared with non-color-blind controls (P =.001); the lowest rate of correct identifications occurred with pictures of stool (median of 26 [70%] of 37 for color-blind subjects vs 36.5 [99%] of 37 for controls; P<.001). The more severely color-blind subjects were significantly less accurate than those with less severe color deficiency (P =.009). Only 2 (10%) of the 21 physicians had ever considered the possibility that color blindness might affect the ability of patients to detect blood, and none routinely asked their patients about color blindness. Color blindness impairs recognition of blood in body fluids. Color-blind individuals and their health care providers need to be made aware of this limitation.
NASA Astrophysics Data System (ADS)
Wallace, Tess E.; Manavaki, Roido; Graves, Martin J.; Patterson, Andrew J.; Gilbert, Fiona J.
2017-01-01
Physiological fluctuations are expected to be a dominant source of noise in blood oxygenation level-dependent (BOLD) magnetic resonance imaging (MRI) experiments to assess tumour oxygenation and angiogenesis. This work investigates the impact of various physiological noise regressors: retrospective image correction (RETROICOR), heart rate (HR) and respiratory volume per unit time (RVT), on signal variance and the detection of BOLD contrast in the breast in response to a modulated respiratory stimulus. BOLD MRI was performed at 3 T in ten volunteers at rest and during cycles of oxygen and carbogen gas breathing. RETROICOR was optimized using F-tests to determine which cardiac and respiratory phase terms accounted for a significant amount of signal variance. A nested regression analysis was performed to assess the effect of RETROICOR, HR and RVT on the model fit residuals, temporal signal-to-noise ratio, and BOLD activation parameters. The optimized RETROICOR model accounted for the largest amount of signal variance (ΔR2adj = 3.3 ± 2.1%) and improved the detection of BOLD activation (P = 0.002). Inclusion of HR and RVT regressors explained additional signal variance, but had a negative impact on activation parameter estimation (P < 0.001). Fluctuations in HR and RVT appeared to be correlated with the stimulus and may contribute to apparent BOLD signal reactivity.
NASA Astrophysics Data System (ADS)
Fernández Pozo, Rubén; Blanco Murillo, Jose Luis; Hernández Gómez, Luis; López Gonzalo, Eduardo; Alcázar Ramírez, José; Toledano, Doroteo T.
2009-12-01
This study is part of an ongoing collaborative effort between the medical and the signal processing communities to promote research on applying standard Automatic Speech Recognition (ASR) techniques for the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment. Effective ASR-based detection could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we describe an acoustic search for distinctive apnoea voice characteristics. We also study abnormal nasalization in OSA patients by modelling vowels in nasal and nonnasal phonetic contexts using Gaussian Mixture Model (GMM) pattern recognition on speech spectra. Finally, we present experimental findings regarding the discriminative power of GMMs applied to severe apnoea detection. We have achieved an 81% correct classification rate, which is very promising and underpins the interest in this line of inquiry.
Early Detection of Severe Apnoea through Voice Analysis and Automatic Speaker Recognition Techniques
NASA Astrophysics Data System (ADS)
Fernández, Ruben; Blanco, Jose Luis; Díaz, David; Hernández, Luis A.; López, Eduardo; Alcázar, José
This study is part of an on-going collaborative effort between the medical and the signal processing communities to promote research on applying voice analysis and Automatic Speaker Recognition techniques (ASR) for the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment. Effective ASR-based diagnosis could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we present and discuss the possibilities of using generative Gaussian Mixture Models (GMMs), generally used in ASR systems, to model distinctive apnoea voice characteristics (i.e. abnormal nasalization). Finally, we present experimental findings regarding the discriminative power of speaker recognition techniques applied to severe apnoea detection. We have achieved an 81.25 % correct classification rate, which is very promising and underpins the interest in this line of inquiry.
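The following is a minimal sketch of the kind of GMM-based classification described in these two abstracts, using scikit-learn and synthetic feature matrices; the real studies' acoustic features, model orders, and decision rule are not specified here, and all names and values below are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical per-frame acoustic feature matrices (rows = frames, cols = features).
rng = np.random.default_rng(0)
X_healthy = rng.normal(0.0, 1.0, size=(500, 12))
X_apnoea = rng.normal(0.5, 1.2, size=(500, 12))

# One generative GMM per class, as in typical speaker-recognition pipelines.
gmm_healthy = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(X_healthy)
gmm_apnoea = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(X_apnoea)

def classify(features):
    """Average per-frame log-likelihood ratio between the two class models."""
    llr = gmm_apnoea.score(features) - gmm_healthy.score(features)
    return "apnoea" if llr > 0 else "healthy"

print(classify(rng.normal(0.5, 1.2, size=(200, 12))))
```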
Roy-Choudhury, Shuvro H; Gallacher, David J; Pilmer, John; Rankin, Sheila; Fowler, Geoff; Steers, Jeff; Dourado, Renato; Woodburn, Paul; Adam, Andreas
2007-11-01
The objective of our study was to determine the relative sensitivity and the lowest threshold of bleeding detectable with digital subtraction angiography (DSA) and with MDCT using an in vitro physiologic system. A closed pulsatile cardiopulmonary bypass circuit was connected to tubes traversing a water bath to simulate the abdominal aorta and inferior vena cava. Three smaller interconnecting acrylic plastic tubes were connected as branches to the aortic tubing to simulate branch vessels. One of the three tubes, the control, had no holes in it, one had a 100-microm hole, and one had a 280-microm hole. The leakage rates were predetermined with a cardiac output of 2 and 4 L/min and with a mean arterial pressure (MAP) ranging from 30 to 100 mm Hg for each hole size. The following studies were performed for each of the predetermined leakage rates. For study 1, 16-MDCT was performed using bolus tracking after 35 mL of contrast medium had been injected into a simulated peripheral vein. For study 2, DSA was performed using a 4-French straight catheter placed 10 cm proximal to the holes (selective first aortic branch cannulation). For study 3, DSA was performed with a catheter placed in the small branch at the site of the hole (highly superselective). For study 4, 16-MDCT was performed with a catheter placed as in study 2, 10 cm proximal to the holes, for the detection of lower leakage rates. Cine loops of MDCT and DSA images were examined by two blinded observers to detect extravasation from the holes in the tubes (i.e., the branch arteries). Interobserver agreement was studied using Cohen's kappa statistic. The threshold to detect bleeding was as follows for each study: For IV contrast-enhanced MDCT (study 1), it was 0.35 mL/min; DSA with a catheter 10 cm proximal to the holes (study 2), 0.96 mL/min; DSA with a catheter at the holes (study 3), 0.05 mL/min [corrected] or lower; and intraarterial selective MDCT (study 4), 0.05 mL/min [corrected] or lower. The ease of detection improved with increasing MAPs and larger volumes of leakage. Interobserver correlation was excellent. In vitro, i.v. contrast-enhanced MDCT is more sensitive than first-order aortic branch-selective DSA in detecting active hemorrhage unless the catheter position is highly superselective and is close to the bleeding artery. These results suggest that MDCT can be used as the initial imaging technique in the diagnosis of active hemorrhage if the clinical condition of the patient allows.
Adaptive error correction codes for face identification
NASA Astrophysics Data System (ADS)
Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.
2012-06-01
Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors as a result of extreme variation in recording conditions (e.g. illumination, poses or expressions) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performances. This paper aims to model template fuzziness as errors and investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECC) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from different image windows of differing sizes and for different recording conditions. By estimating statistical parameters for the intra-class and inter-class distributions of Hamming distances in each window, we encode with appropriate ECCs. The proposed approach is tested for binarised wavelet templates using two face databases: Extended Yale-B and Yale. We shall demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs yields significantly improved recognition results.
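As a rough illustration of the statistical step described above (estimating intra-class Hamming-distance behaviour per template block so an ECC of suitable strength can be chosen), here is a hedged Python sketch; the block size, template length, and error rate are invented for the example, and the actual BCH encoding step is omitted.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two equal-length binary vectors."""
    return int(np.sum(a != b))

def block_error_stats(templates, block_size):
    """For each block of the binary template, report the mean and maximum
    intra-class Hamming distance over all pairs of captures; the maximum
    suggests how many bit errors an ECC for that block should tolerate."""
    n_blocks = templates.shape[1] // block_size
    stats = []
    for b in range(n_blocks):
        blk = templates[:, b * block_size:(b + 1) * block_size]
        dists = [hamming(blk[i], blk[j])
                 for i in range(len(blk)) for j in range(i + 1, len(blk))]
        stats.append((float(np.mean(dists)), int(np.max(dists))))
    return stats

# Hypothetical binarised templates of one subject: 5 captures of a 64-bit
# template, each bit flipped with 5% probability relative to a base template.
rng = np.random.default_rng(1)
base = rng.integers(0, 2, 64)
captures = np.array([np.where(rng.random(64) < 0.05, 1 - base, base) for _ in range(5)])
print(block_error_stats(captures, block_size=16))
```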
Detection of eviscerated poultry spleen enlargement by machine vision
NASA Astrophysics Data System (ADS)
Tao, Yang; Shao, June J.; Skeeles, John K.; Chen, Yud-Ren
1999-01-01
The size of a poultry spleen is an indication of whether the bird is wholesome or has a virus-related disease. This study explored the possibility of detecting poultry spleen enlargement with a computer imaging system to assist human inspectors in food safety inspections. Images of 45-day-old hybrid turkey internal viscera were taken using fluorescent and UV lighting systems. Image processing algorithms including linear transformation, morphological operations, and statistical analyses were developed to distinguish the spleen from its surroundings and then to detect abnormal spleens. Experimental results demonstrated that the imaging method could effectively distinguish spleens from other organs and intestines. Based on a total sample of 57 birds, the classification rates were 92% from a self-test set and 95% from an independent test set for the correct detection of normal and abnormal birds. The methodology indicated the feasibility of using automated machine vision systems in the future to inspect internal organs and check the wholesomeness of poultry carcasses.
Chen, Yiwen; Zhang, Lahong; Hong, Liquan; Luo, Xian; Chen, Juping; Tang, Leiming; Chen, Jiahuan; Liu, Xia; Chen, Zhaojun
2018-06-01
Making a correct and rapid diagnosis is essential for managing pulmonary tuberculosis (PTB), particularly multidrug-resistant tuberculosis. We aimed to evaluate the efficacy of the combination of simultaneous amplification testing (SAT) and reverse dot blot (RDB) for the rapid detection of Mycobacterium tuberculosis (MTB) and drug-resistant mutants in respiratory samples. 225 suspected PTB and 32 non-TB pulmonary disease samples were collected. All sputum samples were sent for acid-fast bacilli smear, SAT, culture and drug susceptibility testing (DST) by the BACTEC™ MGIT™ 960 system. 53 PTB samples were tested by both RDB and DNA sequencing to identify drug resistance genes and mutated sites. The SAT positive rate (64.9%) was higher than the culture positive rate (55.1%), with a coincidence rate of 83.7%. The sensitivity and specificity of SAT for diagnosing PTB were 66.7% and 100%, respectively, while those for culture were 53.9% and 84.2%, respectively. RDB has high sensitivity and specificity in identifying drug resistance genes and mutated sites. The results of RDB correlated well with those of DST and DNA sequencing, with coincidence rates of 92.5% and 98.1%, respectively. The combination of SAT and RDB is promising for rapidly detecting PTB and monitoring drug resistance in clinical laboratories. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
An aerial sightability model for estimating ferruginous hawk population size
Ayers, L.W.; Anderson, S.H.
1999-01-01
Most raptor aerial survey projects have focused on numeric description of visibility bias without identifying the contributing factors or developing predictive models to account for imperfect detection rates. Our goal was to develop a sightability model for nesting ferruginous hawks (Buteo regalis) that could account for nests missed during aerial surveys and provide more accurate population estimates. Eighteen observers, all unfamiliar with nest locations in a known population, searched for nests within 300 m of flight transects via a Maule fixed-wing aircraft. Flight variables tested for their influence on nest-detection rates included aircraft speed, height, direction of travel, time of day, light condition, distance to nest, and observer experience level. Nest variables included status (active vs. inactive), condition (i.e., excellent, good, fair, poor, bad), substrate type, topography, and tree density. A multiple logistic regression model identified nest substrate type, distance to nest, and observer experience level as significant predictors of detection rates (P < 0.05). The overall model was significant (χ²₆ = 124.4, P < 0.001, n = 255 nest observations), and the correct classification rate was 78.4%. During 2 validation surveys, observers saw 23.7% (14/59) and 36.5% (23/63) of the actual population. Sightability model predictions, with 90% confidence intervals, captured the true population in both tests. Our results indicate standardized aerial surveys, when used in conjunction with the predictive sightability model, can provide unbiased population estimates for nesting ferruginous hawks.
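A hypothetical sketch of how such a logistic sightability model might be applied to correct aerial counts is shown below; the coefficients and predictor coding are placeholders, not the fitted values from the study.

```python
import numpy as np

def detection_probability(substrate, distance_m, experienced,
                          beta=(-0.5, 1.2, -0.004, 0.9)):
    """Hypothetical logistic sightability model with the predictors reported
    as significant above (substrate type, distance to nest, observer
    experience); the coefficients here are placeholders, not fitted values."""
    b0, b_sub, b_dist, b_exp = beta
    eta = b0 + b_sub * substrate + b_dist * distance_m + b_exp * experienced
    return 1.0 / (1.0 + np.exp(-eta))

# Horvitz-Thompson style correction: each detected nest counts as 1/p nests.
detections = [dict(substrate=1, distance_m=120, experienced=1),
              dict(substrate=0, distance_m=250, experienced=0)]
estimate = sum(1.0 / detection_probability(**d) for d in detections)
print(round(float(estimate), 1))
```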
Akachi, Yoko; Zumla, Alimuddin; Atun, Rifat
2012-05-15
To assess the impact of investment in national tuberculosis programs (NTPs) on NTP performance and tuberculosis burden in 22 high-burden countries, as determined by the World Health Organization (WHO). Estimates of annual tuberculosis burden and NTP performance indicators and control variables during 2002-2009 were obtained from the Organization for Economic Cooperation and Development, the WHO, the World Bank, and the Penn World Table for the 22 high-burden countries. Panel data analysis was performed using the outcome variables tuberculosis incidence, prevalence, and mortality and the key explanatory variables Partnership case detection rate and treatment success rate, controlling for gross domestic product per capita, population structure, and human immunodeficiency virus (HIV) prevalence. A $1 per capita (general population) higher NTP budget (including domestic and external sources) was associated with a 1.9% (95% confidence interval, .12%-3.6%) higher estimated case detection rate the following year for the 22 high-burden countries between 2002 and 2009. In the final models, which corrected for autocorrelation and heteroskedasticity, achieving the STOP TB Partnership case detection rate target of >70% was associated with significantly (P < .01) lower tuberculosis incidence, prevalence, and mortality the following year, even when controlling for general economic development and HIV prevalence as potential confounding variables. Increased investment in NTPs was significantly associated with improved performance and with a downward trend in the tuberculosis burden in the 22 high-burden countries during 2002-2009.
Vázquez-Avila, Isidro; Vera-Peralta, Jorge Manuel; Alvarez-Nemegyei, José; Rodríguez-Carvajal, Otilia
2007-01-01
In order to decrease the burden of suffering and the costs derived from confirmatory molecular assays, a better strategy is badly needed to decrease the rate of false positive results of the enzyme-linked immunoassay (ELISA) for detection of hepatitis C virus (HCV) antibodies (anti-HCV). The objective was to establish the best cutoff of the S/CO ratio in subjects with a positive result of a microparticle, third-generation ELISA assay for anti-HCV, for predicting viremia as detected by polymerase chain reaction (PCR) assay. Using the result of the PCR assay as "gold standard", a ROC curve was built from the S/CO ratio values in subjects with a positive ELISA HCV assay. Fifty-two subjects (30 male, 22 female, 40 +/- 12.5 years old) were included. Thirty-four (65.3%) had a positive HCV RNA PCR assay. The area under the curve was 0.99 (95% CI: 0.98-1.0). The optimal cutoff for the S/CO ratio was established at 29: sensitivity: 97%; specificity: 100%; PPV: 100%; NPV: 94%. Setting the S/CO cutoff at 29 results in a high predictive value for viremia as detected by PCR in subjects with a positive ELISA HCV assay. This knowledge may result in better decision making for the clinical follow-up of those subjects with a positive result in the ELISA screening assay for HCV infection.
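The cutoff selection described above can be illustrated with a short ROC sketch; the data below are simulated stand-ins for the 52 subjects, and the Youden index is used as one plausible rule, not necessarily the rule the authors applied.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Simulated stand-in data: S/CO ratios for ELISA-positive subjects and their
# PCR result (1 = viremic, 0 = not viremic).
rng = np.random.default_rng(2)
pcr_positive = np.r_[np.ones(34, dtype=int), np.zeros(18, dtype=int)]
s_co = np.r_[rng.normal(60, 15, 34), rng.normal(10, 6, 18)].clip(min=1)

fpr, tpr, thresholds = roc_curve(pcr_positive, s_co)
youden = tpr - fpr                           # one common rule for choosing a cutoff
best_cutoff = thresholds[np.argmax(youden)]
print("AUC:", round(roc_auc_score(pcr_positive, s_co), 3),
      "cutoff:", round(float(best_cutoff), 1))
```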
Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun
2017-08-01
The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of the two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with existing methods, we do not need to estimate a threshold associated with absolute phase values to determine fringe order errors, which makes the approach more reliable and avoids a search procedure when detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by experimental results.
Applying cognitive acuity theory to the development and scoring of situational judgment tests.
Leeds, J Peter
2017-11-09
The theory of cognitive acuity (TCA) treats the response options within items as signals to be detected and uses psychophysical methods to estimate the respondents' sensitivity to these signals. Such a framework offers new methods to construct and score situational judgment tests (SJT). Leeds (2012) defined cognitive acuity as the capacity to discern correctness and distinguish between correctness differences among simultaneously presented situation-specific response options. In this study, SJT response options were paired in order to offer the respondent a two-option choice. The contrast in correctness valence between the two options determined the magnitude of signal emission, with larger signals portending a higher probability of detection. A logarithmic relation was found between correctness valence contrast (signal stimulus) and its detectability (sensation response). Respondent sensitivity to such signals was measured and found to be related to the criterion variables. The linkage between psychophysics and elemental psychometrics may offer new directions for measurement theory.
Seeing the Errors You Feel Enhances Locomotor Performance but Not Learning.
Roemmich, Ryan T; Long, Andrew W; Bastian, Amy J
2016-10-24
In human motor learning, it is thought that the more information we have about our errors, the faster we learn. Here, we show that additional error information can lead to improved motor performance without any concomitant improvement in learning. We studied split-belt treadmill walking that drives people to learn a new gait pattern using sensory prediction errors detected by proprioceptive feedback. When we also provided visual error feedback, participants acquired the new walking pattern far more rapidly and showed accelerated restoration of the normal walking pattern during washout. However, when the visual error feedback was removed during either learning or washout, errors reappeared with performance immediately returning to the level expected based on proprioceptive learning alone. These findings support a model with two mechanisms: a dual-rate adaptation process that learns invariantly from sensory prediction error detected by proprioception and a visual-feedback-dependent process that monitors learning and corrects residual errors but shows no learning itself. We show that our voluntary correction model accurately predicted behavior in multiple situations where visual feedback was used to change acquisition of new walking patterns while the underlying learning was unaffected. The computational and behavioral framework proposed here suggests that parallel learning and error correction systems allow us to rapidly satisfy task demands without necessarily committing to learning, as the relative permanence of learning may be inappropriate or inefficient when facing environments that are liable to change. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.
2015-01-01
Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
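For reference, the sensitivity index these methods target can be computed from hit and false-alarm rates with the standard SDT formula; the snippet below is a generic illustration, not part of the qYN/qFC procedures themselves.

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Classical SDT sensitivity index: d' = z(hit rate) - z(false alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Example: a hit rate of 0.69 with a false-alarm rate of 0.31 gives d' close to 1,
# the sensitivity level targeted by the threshold estimates described above.
print(round(d_prime(0.69, 0.31), 2))
```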
Application of hidden Markov models to biological data mining: a case study
NASA Astrophysics Data System (ADS)
Yin, Michael M.; Wang, Jason T.
2000-04-01
In this paper we present an example of biological data mining: the detection of splicing junction acceptors in eukaryotic genes. Identification or prediction of transcribed sequences from within genomic DNA has been a major rate-limiting step in the pursuit of genes. Programs currently available are far from being powerful enough to elucidate the gene structure completely. Here we develop a hidden Markov model (HMM) to represent the degeneracy features of splicing junction acceptor sites in eukaryotic genes. The HMM system is fully trained using an expectation maximization (EM) algorithm and the system performance is evaluated using the 10-way cross-validation method. Experimental results show that our HMM system can correctly classify more than 94% of the candidate sequences (including true and false acceptor sites) into the right categories. About 90% of the true acceptor sites and 96% of the false acceptor sites in the test data are classified correctly. These results are very promising considering that only the local information in DNA is used. The proposed model will be a very important component of an effective and accurate gene structure detection system currently being developed in our lab.
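A toy illustration of the HMM machinery involved (here just the forward algorithm for scoring a sequence) is given below; the states, probabilities, and sequence are invented, and the actual acceptor-site model would be larger and trained with EM as described.

```python
import numpy as np

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Forward algorithm: log P(observation sequence | HMM parameters)."""
    alpha = start_p * emit_p[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, o]
    return float(np.log(alpha.sum()))

# Toy 2-state model over the DNA alphabet A, C, G, T (indices 0-3).
start = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
emit = np.array([[0.4, 0.1, 0.1, 0.4],   # state 0 favours A/T
                 [0.1, 0.4, 0.4, 0.1]])  # state 1 favours C/G
seq = [0, 3, 3, 1, 2, 2, 1]              # "ATTCGGC"
print(forward_log_likelihood(seq, start, trans, emit))
```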
ERIC Educational Resources Information Center
Huang, Jie; Francis, Andrea P.; Carr, Thomas H.
2008-01-01
A quantitative method is introduced for detecting and correcting artifactual signal changes in BOLD time series data arising from the magnetic field warping caused by motion of the articulatory apparatus when speaking aloud, with extensions to detection of subvocal articulatory activity during silent reading. Whole-head images allow the large,…
InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)
NASA Astrophysics Data System (ADS)
Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.
2018-04-01
Unwrapping errors are common in InSAR processing and can seriously degrade the accuracy of monitoring results. In this paper, a method for automatic correction of unwrapping errors is established, based on a gross error correction technique, quasi-accurate detection (QUAD). The method identifies and corrects unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method using simulated data. Results show that both methods can effectively suppress unwrapping errors when the proportion of unwrapping errors is low, and that the two methods can complement each other when the proportion of unwrapping errors is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that the new method can correct phase unwrapping errors successfully in practical applications.
Statistical modeling, detection, and segmentation of stains in digitized fabric images
NASA Astrophysics Data System (ADS)
Gururajan, Arunkumar; Sari-Sarraf, Hamed; Hequet, Eric F.
2007-02-01
This paper describes a novel, automated system based on a computer vision approach for objective evaluation of stain release on cotton fabrics. Digitized color images of the stained fabrics are obtained, and the pixel values in the color and intensity planes of these images are probabilistically modeled as a Gaussian Mixture Model (GMM). Stain detection is posed as a decision theoretic problem, where the null hypothesis corresponds to absence of a stain. The null hypothesis and the alternate hypothesis mathematically translate into a first-order GMM and a second-order GMM, respectively. The parameters of the GMM are estimated using a modified Expectation-Maximization (EM) algorithm. Minimum Description Length (MDL) is then used as the test statistic to decide the verity of the null hypothesis. The stain is then segmented by a decision rule based on the probability map generated by the EM algorithm. The proposed approach was tested on a dataset of 48 fabric images soiled with stains of ketchup, corn oil, mustard, Ragu sauce, Revlon makeup and grape juice. The decision theoretic part of the algorithm produced a correct detection rate (true positive) of 93% and a false alarm rate of 5% on this set of images.
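To illustrate the model-selection idea (first-order versus second-order GMM), the sketch below uses scikit-learn's BIC as a stand-in for the MDL statistic described in the abstract; the pixel data and threshold behaviour are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def looks_stained(pixels):
    """Prefer a 2-component GMM (stain present) over a 1-component GMM
    (no stain) only if the 2-component model has the lower BIC."""
    g1 = GaussianMixture(n_components=1, random_state=0).fit(pixels)
    g2 = GaussianMixture(n_components=2, random_state=0).fit(pixels)
    return g2.bic(pixels) < g1.bic(pixels)

rng = np.random.default_rng(3)
clean = rng.normal(0.6, 0.05, size=(5000, 1))                          # uniform fabric
stained = np.vstack([clean[:4500], rng.normal(0.3, 0.05, (500, 1))])   # darker patch
print(looks_stained(clean), looks_stained(stained))
```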
Exploring the Effects of Stellar Multiplicity on Exoplanet Occurrence Rates
NASA Astrophysics Data System (ADS)
Barclay, Thomas; Shabram, Megan
2017-06-01
Determining the frequency of habitable worlds is a key goal of the Kepler mission. During Kepler's four year investigation it detected thousands of transiting exoplanets with sizes varying from smaller than Mercury to larger than Jupiter. Finding planets was just the first step to determining frequency, and for the past few years the mission team has been modeling the reliability and completeness of the Kepler planet sample. One effect that has not typically been built into occurrence rate statistics is that of stellar multiplicity. If a planet orbits the primary star in a binary or triple star system then the transit depth will be somewhat diluted resulting in a modest underestimation in the planet size. However, if a detected planet orbits a fainter star then the error in measured planet radius can be very significant. We have taken a hypothetical star and planet population and passed that through a Kepler detection model. From this we have derived completeness corrections for a realistic case of a Universe with binary stars and compared that with a model Universe where all stars are single. We report on the impact that binaries have on exoplanet population statistics.
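The dilution effect mentioned above can be made concrete with a small calculation: if only a fraction f of the aperture flux comes from the planet host, the true depth is the observed depth divided by f, so the radius ratio scales as the square root. The helper below is a generic illustration, not the Kepler team's completeness model.

```python
import math

def corrected_radius_ratio(observed_depth, flux_fraction_of_host):
    """If a fraction f of the aperture flux comes from the planet's host star,
    the true depth is observed_depth / f, so Rp/Rs = sqrt(observed_depth / f)."""
    return math.sqrt(observed_depth / flux_fraction_of_host)

# A 1000 ppm transit diluted by an equal-brightness companion (f = 0.5) implies
# a radius ratio about 41% larger than the undiluted estimate.
print(corrected_radius_ratio(1e-3, 0.5) / corrected_radius_ratio(1e-3, 1.0))
```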
Prostate Brachytherapy Seed Reconstruction with Gaussian Blurring and Optimal Coverage Cost
Lee, Junghoon; Liu, Xiaofeng; Jain, Ameet K.; Song, Danny Y.; Burdette, E. Clif; Prince, Jerry L.; Fichtinger, Gabor
2009-01-01
Intraoperative dosimetry in prostate brachytherapy requires localization of the implanted radioactive seeds. A tomosynthesis-based seed reconstruction method is proposed. A three-dimensional volume is reconstructed from Gaussian-blurred projection images and candidate seed locations are computed from the reconstructed volume. A false positive seed removal process, formulated as an optimal coverage problem, iteratively removes “ghost” seeds that are created by tomosynthesis reconstruction. In an effort to minimize pose errors that are common in conventional C-arms, initial pose parameter estimates are iteratively corrected by using the detected candidate seeds as fiducials, which automatically “focuses” the collected images and improves successive reconstructed volumes. Simulation results imply that the implanted seed locations can be estimated with a detection rate of ≥ 97.9% and ≥ 99.3% from three and four images, respectively, when the C-arm is calibrated and the pose of the C-arm is known. The algorithm was also validated on phantom data sets successfully localizing the implanted seeds from four or five images. In a Phase-1 clinical trial, we were able to localize the implanted seeds from five intraoperative fluoroscopy images with 98.8% (STD=1.6) overall detection rate. PMID:19605321
Clevert, Djork-Arné; Mitterecker, Andreas; Mayr, Andreas; Klambauer, Günter; Tuefferd, Marianne; De Bondt, An; Talloen, Willem; Göhlmann, Hinrich; Hochreiter, Sepp
2011-07-01
Cost-effective oligonucleotide genotyping arrays like the Affymetrix SNP 6.0 are still the predominant technique to measure DNA copy number variations (CNVs). However, CNV detection methods for microarrays overestimate both the number and the size of CNV regions and, consequently, suffer from a high false discovery rate (FDR). A high FDR means that many CNVs are wrongly detected and therefore not associated with a disease in a clinical study, though correction for multiple testing takes them into account and thereby decreases the study's discovery power. For controlling the FDR, we propose a probabilistic latent variable model, 'cn.FARMS', which is optimized by a Bayesian maximum a posteriori approach. cn.FARMS controls the FDR through the information gain of the posterior over the prior. The prior represents the null hypothesis of copy number 2 for all samples from which the posterior can only deviate by strong and consistent signals in the data. On HapMap data, cn.FARMS clearly outperformed the two most prevalent methods with respect to sensitivity and FDR. The software cn.FARMS is publicly available as a R package at http://www.bioinf.jku.at/software/cnfarms/cnfarms.html.
Three New Methods for Analysis of Answer Changes
ERIC Educational Resources Information Center
Sinharay, Sandip; Johnson, Matthew S.
2017-01-01
In a pioneering research article, Wollack and colleagues suggested the "erasure detection index" (EDI) to detect test tampering. The EDI can be used with or without a continuity correction and is assumed to follow the standard normal distribution under the null hypothesis of no test tampering. When used without a continuity correction,…
Interference detection and correction applied to incoherent-scatter radar power spectrum measurement
NASA Technical Reports Server (NTRS)
Ying, W. P.; Mathews, J. D.; Rastogi, P. K.
1986-01-01
A median-filter-based interference detection and correction technique is evaluated, and its application to the Arecibo incoherent scatter radar D-region ionospheric power spectrum is discussed. The method can be extended to other kinds of data as long as the statistical assumptions involved remain valid.
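A minimal sketch of a median-filter-based interference detection and correction step is given below; the kernel size and threshold rule are illustrative assumptions, not the parameters used in the cited work.

```python
import numpy as np
from scipy.signal import medfilt

def remove_interference(spectrum, kernel=5, n_sigmas=4.0):
    """Flag bins deviating strongly from a running median and replace them
    with the median estimate; the threshold scales with the MAD of the residual."""
    smooth = medfilt(spectrum, kernel_size=kernel)
    residual = spectrum - smooth
    mad = np.median(np.abs(residual - np.median(residual)))
    bad = np.abs(residual) > n_sigmas * 1.4826 * mad
    return np.where(bad, smooth, spectrum), bad

rng = np.random.default_rng(4)
spec = rng.normal(1.0, 0.05, 512)
spec[100] += 3.0                     # an interference spike
cleaned, flags = remove_interference(spec)
print(int(flags.sum()))              # number of bins flagged and corrected
```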
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ji; Fischer, Debra A.; Xie, Ji-Wei
2014-08-20
Almost half of the stellar systems in the solar neighborhood are made up of multiple stars. In multiple-star systems, planet formation is under the dynamical influence of stellar companions, and the planet occurrence rate is expected to be different from that of single stars. There have been numerous studies on the planet occurrence rate of single star systems. However, to fully understand planet formation, the planet occurrence rate in multiple-star systems needs to be addressed. In this work, we infer the planet occurrence rate in multiple-star systems by measuring the stellar multiplicity rate for planet host stars. For a subsample of 56 Kepler planet host stars, we use adaptive optics (AO) imaging and the radial velocity (RV) technique to search for stellar companions. The combination of these two techniques results in high search completeness for stellar companions. We detect 59 visual stellar companions to 25 planet host stars with AO data. Three stellar companions are within 2'' and 27 within 6''. We also detect two possible stellar companions (KOI 5 and KOI 69) showing long-term RV acceleration. After correcting for a bias against planet detection in multiple-star systems due to flux contamination, we find that planet formation is suppressed in multiple-star systems with separations smaller than 1500 AU. Specifically, we find that compared to single star systems, planets in multiple-star systems occur 4.5 ± 3.2, 2.6 ± 1.0, and 1.7 ± 0.5 times less frequently when a stellar companion is present at a distance of 10, 100, and 1000 AU, respectively. This conclusion applies only to circumstellar planets; the planet occurrence rate for circumbinary planets requires further investigation.
NASA Astrophysics Data System (ADS)
Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.
2018-01-01
Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which have also only recently been formulated. The current paper presents an experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, ²⁵²Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. In order to assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for a broad range of count rates available in practical applications.
Estimation and correction of visibility bias in aerial surveys of wintering ducks
Pearse, A.T.; Gerard, P.D.; Dinsmore, S.J.; Kaminski, R.M.; Reinecke, K.J.
2008-01-01
Incomplete detection of all individuals leading to negative bias in abundance estimates is a pervasive source of error in aerial surveys of wildlife, and correcting that bias is a critical step in improving surveys. We conducted experiments using duck decoys as surrogates for live ducks to estimate bias associated with surveys of wintering ducks in Mississippi, USA. We found detection of decoy groups was related to wetland cover type (open vs. forested), group size (1-100 decoys), and interaction of these variables. Observers who detected decoy groups reported counts that averaged 78% of the decoys actually present, and this counting bias was not influenced by either covariate cited above. We integrated this sightability model into estimation procedures for our sample surveys with weight adjustments derived from probabilities of group detection (estimated by logistic regression) and count bias. To estimate variances of abundance estimates, we used bootstrap resampling of transects included in aerial surveys and data from the bias-correction experiment. When we implemented bias correction procedures on data from a field survey conducted in January 2004, we found bias-corrected estimates of abundance increased 36-42%, and associated standard errors increased 38-55%, depending on species or group estimated. We deemed our method successful for integrating correction of visibility bias in an existing sample survey design for wintering ducks in Mississippi, and we believe this procedure could be implemented in a variety of sampling problems for other locations and species.
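A simplified sketch of the bias-correction arithmetic (dividing observed group counts by an estimated detection probability and by the average counting bias) is shown below; the counts and probabilities are hypothetical, and the full procedure in the study also includes survey weights and bootstrap variance estimation.

```python
import numpy as np

def corrected_abundance(group_counts, p_detect, count_bias=0.78):
    """Divide each observed group count by its estimated detection probability
    and by the average counting bias (observers reported about 78% of the
    decoys present in the experiment described above)."""
    group_counts = np.asarray(group_counts, dtype=float)
    p_detect = np.asarray(p_detect, dtype=float)
    return float(np.sum(group_counts / (p_detect * count_bias)))

# Hypothetical survey: three detected duck groups with model-estimated
# probabilities of group detection.
print(round(corrected_abundance([40, 120, 15], [0.9, 0.95, 0.6]), 0))
```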
Wold, Jens Petter; Veiseth-Kent, Eva; Høst, Vibeke; Løvland, Atle
2017-01-01
The main objective of this work was to develop a method for rapid and non-destructive detection and grading of wooden breast (WB) syndrome in chicken breast fillets. Near-infrared (NIR) spectroscopy was chosen as detection method, and an industrial NIR scanner was applied and tested for large scale on-line detection of the syndrome. Two approaches were evaluated for discrimination of WB fillets: 1) Linear discriminant analysis based on NIR spectra only, and 2) a regression model for protein was made based on NIR spectra and the estimated concentrations of protein were used for discrimination. A sample set of 197 fillets was used for training and calibration. A test set was recorded under industrial conditions and contained spectra from 79 fillets. The classification methods obtained 99.5-100% correct classification of the calibration set and 100% correct classification of the test set. The NIR scanner was then installed in a commercial chicken processing plant and could detect incidence rates of WB in large batches of fillets. Examples of incidence are shown for three broiler flocks where a high number of fillets (9063, 6330 and 10483) were effectively measured. Prevalence of WB of 0.1%, 6.6% and 8.5% were estimated for these flocks based on the complete sample volumes. Such an on-line system can be used to alleviate the challenges WB represents to the poultry meat industry. It enables automatic quality sorting of chicken fillets to different product categories. Manual laborious grading can be avoided. Incidences of WB from different farms and flocks can be tracked and information can be used to understand and point out main causes for WB in the chicken production. This knowledge can be used to improve the production procedures and reduce today's extensive occurrence of WB.
Hautvast, Gilion L T F; Salton, Carol J; Chuang, Michael L; Breeuwer, Marcel; O'Donnell, Christopher J; Manning, Warren J
2012-05-01
Quantitative analysis of short-axis functional cardiac magnetic resonance images can be performed using automatic contour detection methods. The resulting myocardial contours must be reviewed and possibly corrected, which can be time-consuming, particularly when performed across all cardiac phases. We quantified the impact of manual contour corrections on both analysis time and quantitative measurements obtained from left ventricular short-axis cine images acquired from 1555 participants of the Framingham Heart Study Offspring cohort using computer-aided contour detection methods. The total analysis time for a single case was 7.6 ± 1.7 min for an average of 221 ± 36 myocardial contours per participant. This included 4.8 ± 1.6 min for manual contour correction of 2% of all automatically detected endocardial contours and 8% of all automatically detected epicardial contours. However, the impact of these corrections on global left ventricular parameters was limited, introducing differences of 0.4 ± 4.1 mL for end-diastolic volume, -0.3 ± 2.9 mL for end-systolic volume, 0.7 ± 3.1 mL for stroke volume, and 0.3 ± 1.8% for ejection fraction. We conclude that left ventricular functional parameters can be obtained under 5 min from short-axis functional cardiac magnetic resonance images using automatic contour detection methods. Manual correction more than doubles analysis time, with minimal impact on left ventricular volumes and ejection fraction. Copyright © 2011 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Weber, Bruce A.
2005-07-01
We have performed an experiment that compares the performance of human observers with that of a robust algorithm for the detection of targets in difficult, nonurban forward-looking infrared imagery. Our purpose was to benchmark the comparison and document performance differences for future algorithm improvement. The scale-insensitive detection algorithm, used as a benchmark by the Night Vision Electronic Sensors Directorate for algorithm evaluation, employed a combination of contrastlike features to locate targets. Detection receiver operating characteristic curves and observer-confidence analyses were used to compare human and algorithmic responses and to gain insight into differences. The test database contained ground targets, in natural clutter, whose detectability, as judged by human observers, ranged from easy to very difficult. In general, as compared with human observers, the algorithm detected most of the same targets, but correlated confidence with correct detections poorly and produced many more false alarms at any useful level of performance. Though characterizing human performance was not the intent of this study, results suggest that previous observational experience was not a strong predictor of human performance, and that combining individual human observations by majority vote significantly reduced false-alarm rates.
Repeat-aware modeling and correction of short read errors.
Yang, Xiao; Aluru, Srinivas; Dorman, Karin S
2011-02-15
High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In the case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content, and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors for genomes with high repeat content.
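The baseline frequency rule that the paper improves upon can be sketched in a few lines; the reads, k, and threshold below are toy values, and the paper's actual contribution (inferring genomic frequencies and modelling position-dependent errors) is not reproduced here.

```python
from collections import Counter

def kmer_counts(reads, k):
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def suspect_kmers(counts, threshold):
    """k-mers seen fewer times than the threshold are flagged as likely errors."""
    return {kmer for kmer, c in counts.items() if c < threshold}

reads = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACGAAC"]   # the last read contains an error
counts = kmer_counts(reads, k=5)
print(sorted(suspect_kmers(counts, threshold=2)))    # the error-containing k-mers
```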
Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations
Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J.
2017-01-01
House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4–12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a ‘gold standard’ reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community. PMID:28727808
Klinck, Holger; Mellinger, David K
2011-04-01
The energy ratio mapping algorithm (ERMA) was developed to improve the performance of energy-based detection of odontocete echolocation clicks, especially for application in environments with limited computational power and energy such as acoustic gliders. ERMA systematically evaluates many frequency bands for energy ratio-based detection of echolocation clicks produced by a target species in the presence of the species mix in a given geographic area. To evaluate the performance of ERMA, a Teager-Kaiser energy operator was applied to the series of energy ratios as derived by ERMA. A noise-adaptive threshold was then applied to the Teager-Kaiser function to identify clicks in data sets. The method was tested for detecting clicks of Blainville's beaked whales while rejecting echolocation clicks of Risso's dolphins and pilot whales. Results showed that the ERMA-based detector correctly identified 81.6% of the beaked whale clicks in an extended evaluation data set. Average false-positive detection rate was 6.3% (3.4% for Risso's dolphins and 2.9% for pilot whales).
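A sketch of the two ingredients named above, the discrete Teager-Kaiser energy operator and a noise-adaptive threshold, is given below; the window length and threshold factor are illustrative and not the values used in ERMA.

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]**2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_clicks(energy_ratio_series, window=200, k=3.0):
    """Flag samples whose Teager-Kaiser energy exceeds a noise-adaptive
    threshold (mean + k * std over a trailing window of the operator output)."""
    psi = teager_kaiser(energy_ratio_series)
    detections = []
    for n in range(window, len(psi)):
        noise = psi[n - window:n]
        if psi[n] > noise.mean() + k * noise.std():
            detections.append(n)
    return detections

rng = np.random.default_rng(5)
series = rng.normal(0.0, 0.1, 2000)
series[1200] += 5.0                   # a click-like transient
print(detect_clicks(series)[:3])
```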
Robust crop and weed segmentation under uncontrolled outdoor illumination.
Jeon, Hong Y; Tian, Lei F; Zhu, Heping
2011-01-01
An image processing algorithm for detecting individual weeds was developed and evaluated. Weed detection processes included were normalized excessive green conversion, statistical threshold value estimation, adaptive image segmentation, median filter, morphological feature calculation and Artificial Neural Network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illuminations. A machine vision implementing field robot captured field images under outdoor illuminations and the image processing algorithm automatically processed them without manual adjustment. The errors of the algorithm, when processing 666 field images, ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of crop plants from the identified plants, and considered the rest as weeds. However, the ANN identification rates for crop plants were improved up to 95.1% by addressing the error sources in the algorithm. The developed weed detection and image processing algorithm provides a novel method to identify plants against soil background under the uncontrolled outdoor illuminations, and to differentiate weeds from crop plants. Thus, the proposed new machine vision and processing algorithm may be useful for outdoor applications including plant specific direct applications (PSDA).
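As a rough illustration of the excess-green step of such a pipeline, the snippet below computes a normalized excess-green index and thresholds it; the fixed threshold stands in for the statistical threshold estimation described above, and the subsequent ANN classification is omitted.

```python
import numpy as np

def plant_mask(rgb, threshold=0.1):
    """Normalized excess-green index (2g - r - b on chromaticity-normalized
    channels) thresholded to separate vegetation from soil background."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2) + 1e-9
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2 * g - r - b
    return exg > threshold

# Tiny synthetic example: one green (plant-like) pixel, one brown (soil-like) pixel.
img = np.array([[[40, 120, 30], [120, 90, 60]]], dtype=np.uint8)
print(plant_mask(img))   # expect [[ True False]]
```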
Technologies of high-performance thermography systems
NASA Astrophysics Data System (ADS)
Breiter, R.; Cabanski, Wolfgang A.; Mauk, K. H.; Kock, R.; Rode, W.
1997-08-01
A family of two-dimensional detection modules based on 256 by 256 and 486 by 640 platinum silicide (PtSi) focal planes, or 128 by 128 and 256 by 256 mercury cadmium telluride (MCT) focal planes for applications in either the 3-5 micrometer (MWIR) or 8-10 micrometer (LWIR) range, was recently developed by AIM. A wide variety of applications is covered by the specific features unique to these two material systems. The PtSi units provide state-of-the-art correctability with long-term stable gain and offset coefficients. The MCT units provide extremely fast frame rates such as 400 Hz, with snapshot integration times as short as 250 microseconds and a thermal resolution NETD of less than 20 mK for, e.g., the 128 by 128 LWIR module. The design idea common to all of these modules is the exclusively digital interface, using 14-bit analog-to-digital conversion to provide state-of-the-art correctability, access to highly dynamic scenes without any loss of information, and simplified exchangeability of the units. Device-specific features such as bias voltages are identified during the final test and stored in a memory on the driving electronics. This concept allows an easy exchange of IDCAs of the same type without any need for tuning, or, for example, upgrading a PtSi-based unit to an MCT module by simply loading the suitable software. Miniaturized digital signal processor (DSP) based image correction units were developed for testing and operating the units with output data rates of up to 16 Mpixels/s. These boards provide freely programmable real-time functions such as two-point correction and various data manipulations in thermography applications.
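The two-point correction mentioned at the end can be sketched generically as a per-pixel gain/offset calibration derived from two uniform reference frames; the numbers below are invented, and this is not the vendor's implementation.

```python
import numpy as np

def two_point_nuc(raw, dark_ref, bright_ref, dark_level, bright_level):
    """Two-point non-uniformity correction: per-pixel gain and offset derived
    from two uniform reference frames (e.g. blackbody views at two
    temperatures) are applied to each raw frame."""
    gain = (bright_level - dark_level) / (bright_ref - dark_ref)
    offset = dark_level - gain * dark_ref
    return gain * raw + offset

# Hypothetical 2x2 detector with pixel-to-pixel gain and offset spread.
dark_ref = np.array([[100.0, 95.0], [105.0, 110.0]])
bright_ref = np.array([[900.0, 870.0], [940.0, 910.0]])
raw = np.array([[500.0, 480.0], [530.0, 505.0]])
print(two_point_nuc(raw, dark_ref, bright_ref, dark_level=0.0, bright_level=1000.0))
```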
2016-01-01
Reports an error in "A violation of the conditional independence assumption in the two-high-threshold model of recognition memory" by Tina Chen, Jeffrey J. Starns and Caren M. Rotello (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2015[Jul], Vol 41[4], 1215-1222). In the article, Chen et al. compared three models: a continuous signal detection model (SDT), a standard two-high-threshold discrete-state model in which detect states always led to correct responses (2HT), and a full-mapping version of the 2HT model in which detect states could lead to either correct or incorrect responses. After publication, Rani Moran (personal communication, April 21, 2015) identified two errors that impact the reported fit statistics for the Bayesian information criterion (BIC) metric of all models as well as the Akaike information criterion (AIC) results for the full-mapping model. The errors are described in the erratum. (The following abstract of the original article appeared in record 2014-56216-001.) The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are independent of the probability that an item yields a particular state (e.g., both strong and weak items that are detected as old have the same probability of producing a highest-confidence "old" response). We tested this conditional independence assumption by presenting nouns 1, 2, or 4 times. To maximize the strength of some items, "superstrong" items were repeated 4 times and encoded in conjunction with pleasantness, imageability, anagram, and survival processing tasks. The 2HT model failed to simultaneously capture the response rate data for all item classes, demonstrating that the data violated the conditional independence assumption. In contrast, a Gaussian signal detection model, which posits that the level of confidence that an item is "old" or "new" is a function of its continuous strength value, provided a good account of the data. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
78 FR 67951 - Price Cap Rules for Certain Postal Rate Adjustments; Corrections
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-13
... POSTAL REGULATORY COMMISSION 39 CFR Part 3010 [Docket No. RM2013-2; Order No. 1786] Price Cap Rules for Certain Postal Rate Adjustments; Corrections AGENCY: Postal Regulatory Commission. ACTION: Correcting amendments. SUMMARY: The Postal Regulatory Commission published a document in the Federal Register...
[Local recurrence following anterior rectum resection--manual versus stapler suture].
Metzger, U; Weber, W; Weber, E; Linggi, J; Buchmann, P; Largiadèr, F
1985-04-01
A retrospective study was carried out on 88 hand-sewn and 34 stapled anastomoses following anterior resection to evaluate the impact of suture technique on local recurrence rate. The patient groups were comparable with one exception: there were significantly more Dukes C lesions resected and sutured using the stapling gun (35% versus 15%, χ² = 6.33, p < 0.05). Stage-corrected recurrence rates were similar in both groups (Dukes A: 8%, Dukes B: 21%, Dukes C: 52%), with all recurrences detected within 24 months following operation. Significantly fewer protective colostomies were needed using the staple gun (15% versus 34%, χ² = 4.50, p < 0.05). Otherwise, no significant difference or benefit was observed comparing the two suture techniques.
NASA Astrophysics Data System (ADS)
Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.
2015-03-01
Traumatic brain injury (TBI) is a major cause of death and disability. The current front-line imaging modality for TBI detection is CT, which reliably detects intracranial hemorrhage (fresh blood contrast 30-50 HU, size down to 1 mm) in non-contrast-enhanced exams. Compared to CT, flat-panel detector (FPD) cone-beam CT (CBCT) systems offer lower cost, greater portability, and smaller footprint suitable for point-of-care deployment. We are developing FPD-CBCT to facilitate TBI detection at the point-of-care such as in emergent, ambulance, sports, and military applications. However, current FPD-CBCT systems generally face challenges in low-contrast, soft-tissue imaging. Model-based reconstruction can improve image quality in soft-tissue imaging compared to conventional filtered back-projection (FBP) by leveraging high-fidelity forward model and sophisticated regularization. In FPD-CBCT TBI imaging, measurement noise characteristics undergo substantial change following artifact correction, resulting in non-negligible noise amplification. In this work, we extend the penalized weighted least-squares (PWLS) image reconstruction to include the two dominant artifact corrections (scatter and beam hardening) in FPD-CBCT TBI imaging by correctly modeling the variance change following each correction. Experiments were performed on a CBCT test-bench using an anthropomorphic phantom emulating intra-parenchymal hemorrhage in acute TBI, and the proposed method demonstrated an improvement in blood-brain contrast-to-noise ratio (CNR = 14.2) compared to FBP (CNR = 9.6) and PWLS using conventional weights (CNR = 11.6) at fixed spatial resolution (1 mm edge-spread width at the target contrast). The results support the hypothesis that FPD-CBCT can fulfill the image quality requirements for reliable TBI detection, using high-fidelity artifact correction and statistical reconstruction with accurate post-artifact-correction noise models.
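A toy sketch of a penalized weighted least-squares solve is given below to make the idea concrete; it uses plain gradient descent, a quadratic penalty, and a tiny system matrix, whereas the reconstruction described above uses a full CT forward model, an edge-preserving regularizer, and variance weights updated after each artifact correction.

```python
import numpy as np

def pwls_reconstruct(A, y, weights, beta=0.1, n_iter=200, step=0.5):
    """Minimize ||y - A x||^2_W + beta * ||x||^2 by (scaled) gradient descent.
    The weights encode per-measurement reliability: measurements whose variance
    grows after artifact correction get smaller weights and less influence."""
    W = np.diag(np.asarray(weights, dtype=float))
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ W @ (A @ x - y) + beta * x
        x -= step * grad
    return x

# Tiny toy system: the third measurement is down-weighted (larger variance).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 2.5])
print(pwls_reconstruct(A, y, weights=[1.0, 1.0, 0.2]))
```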
Experimental results of 5-Gbps free-space coherent optical communications with adaptive optics
NASA Astrophysics Data System (ADS)
Chen, Mo; Liu, Chao; Rui, Daoman; Xian, Hao
2018-07-01
In a free-space optical communication system with fiber optical components, the received signal beam must be coupled into a single-mode fiber (SMF) before being amplified and detected. Analysis of the impact of tracking errors and wavefront distortion on SMF coupling shows that, under relatively strong turbulence, compensating tracking errors alone is not enough and the turbulence-induced wavefront aberration must also be corrected. Based on our previous study and design of an SMF coupling system with a 137-element continuous-surface deformable mirror AO unit, we perform an experiment with a 5-Gbps free-space coherent optical communication (FSCOC) system, in which the eye pattern and bit-error rate (BER) are displayed. The comparative results show that atmospheric turbulence is severely detrimental in FSCOC systems. The BER of coherent communication is below 10⁻⁶ with AO compensation, dropping significantly compared with the BER without AO correction.
NASA Astrophysics Data System (ADS)
Lin, D.; Jarzabek-Rychard, M.; Schneider, D.; Maas, H.-G.
2018-05-01
An automatic building façade thermal texture mapping approach, using uncooled thermal camera data, is proposed in this paper. First, a shutter-less radiometric thermal camera calibration method is implemented to remove the large offset deviations caused by the changing ambient environment. Then, a 3D façade model is generated from an RGB image sequence using structure-from-motion (SfM) techniques. Subsequently, for each triangle in the 3D model, the optimal texture is selected by taking into consideration local image scale, object incidence angle, image viewing angle, and occlusions. Afterwards, the selected textures can be further corrected using thermal radiant characteristics. Finally, a Gaussian filter outperforms the voted-texture strategy at smoothing seams, which, for instance, helps to reduce the false alarm rate in façade thermal leakage detection. Our approach is evaluated on a building row façade located in Dresden, Germany.
Diffusion scrubber-ion chromatography for the measurement of trace levels of atmospheric HCl
NASA Astrophysics Data System (ADS)
Lindgren, Per F.
A diffusion scrubber-ion chromatographic (DS-IC) instrument has been characterized and employed for the measurement of trace levels of gaseous HCl in the atmosphere. The instrument operates with a temporal resolution of 5 min and the detection limit is estimated to be 5 pptv. Collection efficiencies for HCl with two identical diffusion scrubbers were 28±2% and 20±2%, respectively, at a sampling flow rate of 2 SLPM. Instrument response decreases with increased relative humidity. An equation, correction factor = 2.45 × 10⁻⁷ × (%RH)³ + 1.00, is used to correct for the relative humidity dependency. The instrument was tested in ambient air studies by measuring background mixing ratios between 0.02 and 0.5 ppbv at a suburban sampling site. Calibration of the instrument was carried out with a novel source of gaseous HCl based on sublimation of ammonium chloride.
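Applied directly, the quoted empirical correction looks like the following; the assumption that the measured value is multiplied by the correction factor follows from the stated decrease of instrument response with humidity.

```python
def rh_corrected_hcl(measured_pptv, relative_humidity_percent):
    """Apply the quoted empirical correction: CF = 2.45e-7 * (%RH)**3 + 1.00.
    The corrected value is assumed to be the measured value multiplied by CF,
    since instrument response falls as relative humidity rises."""
    cf = 2.45e-7 * relative_humidity_percent ** 3 + 1.00
    return measured_pptv * cf

print(rh_corrected_hcl(100.0, 70.0))   # roughly 108.4 pptv at 70% RH
```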
Optimizing the rapid measurement of detection thresholds in infants
Jones, Pete R.; Kalwarowsky, Sarah; Braddick, Oliver J.; Atkinson, Janette; Nardini, Marko
2015-01-01
Accurate measures of perceptual threshold are difficult to obtain in infants. In a clinical context, the challenges are particularly acute because the methods must yield meaningful results quickly and within a single individual. The present work considers how best to maximize speed, accuracy, and reliability when testing infants behaviorally and suggests some simple principles for improving test efficiency. Monte Carlo simulations, together with empirical (visual acuity) data from 65 infants, are used to demonstrate how psychophysical methods developed with adults can produce misleading results when applied to infants. The statistical properties of an effective clinical infant test are characterized, and based on these, it is shown that (a) a reduced (false-positive) guessing rate can greatly increase test efficiency, (b) the ideal threshold to target is often below 50% correct, and (c) simply taking the max correct response can often provide the best measure of an infant's perceptual sensitivity. PMID:26237298
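A small Monte Carlo sketch (illustrative only, not the authors' simulation, with assumed psychometric-function parameters) shows the first point: repeated short tests yield tighter threshold estimates when the guessing rate is low than when it sits at chance level.

```python
import numpy as np
rng = np.random.default_rng(0)

def p_correct(stim, thresh, slope=1.5, guess=0.5, lapse=0.02):
    """Logistic psychometric function; the guess rate sets the lower asymptote."""
    return guess + (1 - guess - lapse) / (1 + np.exp(-slope * (stim - thresh)))

def estimate_threshold(guess, n_trials=40, true_thresh=0.0):
    """Simulate one short test and fit the threshold by grid-search likelihood."""
    stims = rng.choice(np.linspace(-3, 3, 9), n_trials)
    resp = rng.random(n_trials) < p_correct(stims, true_thresh, guess=guess)
    cands = np.linspace(-3, 3, 61)
    loglik = [np.sum(np.log(np.where(resp,
                                     p_correct(stims, c, guess=guess),
                                     1 - p_correct(stims, c, guess=guess))))
              for c in cands]
    return cands[int(np.argmax(loglik))]

for guess in (0.5, 0.1):   # chance-level guessing vs. a low false-positive rate
    estimates = [estimate_threshold(guess) for _ in range(200)]
    print(guess, round(float(np.std(estimates)), 3))   # lower guess -> tighter estimates
```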
Buset, Jonathan M; El-Sahn, Ziad A; Plant, David V
2012-06-18
We demonstrate an improved overlapped-subcarrier multiplexed (O-SCM) WDM PON architecture transmitting over a single feeder using cost-sensitive intensity modulation/direct detection transceivers, data re-modulation and simple electronics. Incorporating electronic equalization and Reed-Solomon forward-error correction codes helps to overcome the bandwidth limitation of a remotely seeded reflective semiconductor optical amplifier (RSOA)-based ONU transmitter. The O-SCM architecture yields greater spectral efficiency and higher bit rates than many other SCM techniques while maintaining resilience to upstream impairments. We demonstrate full-duplex 5 Gb/s transmission over 20 km and analyze BER performance as a function of transmitted and received power. The architecture provides flexibility to network operators by relaxing common design constraints and enabling full-duplex operation at BER ∼ 10⁻¹⁰ over a wide range of OLT launch powers from 3.5 to 8 dBm.
Stoolmiller, M; Eddy, J M; Reid, J B
2000-04-01
This study examined theoretical, methodological, and statistical problems involved in evaluating the outcome of aggression on the playground for a universal preventive intervention for conduct disorder. Moderately aggressive children were hypothesized most likely to benefit. Aggression was measured on the playground using observers blind to the group status of the children. Behavior was micro-coded in real time to minimize potential expectancy biases. The effectiveness of the intervention was strongly related to initial levels of aggressiveness. The most aggressive children improved the most. Models that incorporated corrections for low reliability (the ratio of variance due to true time-stable individual differences to total variance) and censoring (a floor effect in the rate data due to short periods of observation) obtained effect sizes 5 times larger than models without such corrections with respect to children who were initially 2 SDs above the mean on aggressiveness.
Katzenellenbogen, Judith M; Sanfilippo, Frank M; Hobbs, Michael S T; Briffa, Tom G; Ridout, Steve C; Knuiman, Matthew W; Dimer, Lyn; Taylor, Kate P; Thompson, Peter L; Thompson, Sandra C
2011-06-01
To investigate the impact of prevalence correction of population denominators on myocardial infarction (MI) incidence rates, rate ratios, and rate differences in Aboriginal vs. non-Aboriginal Western Australians aged 25-74 years during the study period 2000-2004. Person-based linked hospital and mortality data sets were used to estimate the number of prevalent and first-ever MI cases each year from 2000 to 2004 using a 15-year look-back period. Age-specific and -standardized MI incidence rates were calculated using both prevalence-corrected and -uncorrected population denominators, by sex and Aboriginality. The impact of prevalence correction on rates increased with age, was higher for men than women, and substantially greater for Aboriginal than non-Aboriginal people. Despite the systematic underestimation of incidence, prevalence correction had little impact on the Aboriginal to non-Aboriginal age-standardized rate ratios (6% and 4% underestimate in men and women, respectively), although the impact on rate differences was more marked (12% and 6%, respectively). The percentage underestimate of differentials was greater at older ages. Prevalence correction of denominators, while more accurate, is difficult to apply and may add modestly to the quantification of relative disparities in MI incidence between populations. Absolute incidence disparities using uncorrected denominators may have an error >10%. Copyright © 2011 Elsevier Inc. All rights reserved.
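The denominator correction itself is straightforward; the toy sketch below, with entirely made-up numbers, shows how removing prevalent cases from the population at risk raises the computed first-ever incidence rate:

```python
def incidence_rate(new_cases, population, prevalent_cases=0, per=100_000):
    """First-ever MI incidence with a prevalence-corrected denominator:
    people already living with MI are removed from the population at risk."""
    return per * new_cases / (population - prevalent_cases)

# hypothetical illustration only
print(round(incidence_rate(120, 40_000), 1))                          # uncorrected
print(round(incidence_rate(120, 40_000, prevalent_cases=1_500), 1))   # corrected (higher)
```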
Correcting reaction rates measured by saturation-transfer magnetic resonance spectroscopy
NASA Astrophysics Data System (ADS)
Gabr, Refaat E.; Weiss, Robert G.; Bottomley, Paul A.
2008-04-01
Off-resonance or spillover irradiation and incomplete saturation can introduce significant errors in the estimates of chemical rate constants measured by saturation-transfer magnetic resonance spectroscopy (MRS). Existing methods of correction are effective only over a limited parameter range. Here, a general approach of numerically solving the Bloch-McConnell equations to calculate exchange rates, relaxation times and concentrations for the saturation-transfer experiment is investigated, but found to require more measurements and higher signal-to-noise ratios than in vivo studies can practically afford. As an alternative, correction formulae for the reaction rate are provided which account for the expected parameter ranges and limited measurements available in vivo. The correction term is a quadratic function of experimental measurements. In computer simulations, the new formulae showed negligible bias and reduced the maximum error in the rate constants by about 3-fold compared to traditional formulae, and the error scatter by about 4-fold, over a wide range of parameters for conventional saturation transfer employing progressive saturation, and for the four-angle saturation-transfer method applied to the creatine kinase (CK) reaction in the human heart at 1.5 T. In normal in vivo spectra affected by spillover, the correction increases the mean calculated forward CK reaction rate by 6-16% over traditional and prior correction formulae.
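The underlying measurement can be sketched with the two-pool longitudinal Bloch-McConnell equations under ideal saturation of the receiving pool; the spillover and incomplete-saturation effects that the paper's correction formulae address are deliberately left out, and the parameter values are assumptions for illustration.

```python
import numpy as np

def saturation_transfer_ss(kf=0.32, T1a=2.0, M0a=1.0, t_end=10.0, dt=1e-3):
    """Longitudinal magnetization of pool A (e.g., PCr in the CK reaction) while
    pool B is held fully saturated (Mb = 0): dMa/dt = (M0a - Ma)/T1a - kf*Ma."""
    Ma = M0a
    for _ in range(int(t_end / dt)):
        Ma += dt * ((M0a - Ma) / T1a - kf * Ma)   # exchange loss to the saturated pool
    return Ma

Mss = saturation_transfer_ss()
kf_est = (1.0 / Mss - 1.0) / 2.0      # invert Mss = M0a/(1 + kf*T1a) with T1a = 2 s
print(round(kf_est, 3))               # ~0.32 s^-1 under ideal saturation
```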
Bates, Anthony; Miles, Kenneth
2017-12-01
To validate MR textural analysis (MRTA) for detection of transition zone (TZ) prostate cancer through comparison with co-registered prostate-specific membrane antigen (PSMA) PET-MR. Retrospective analysis was performed for 30 men who underwent simultaneous PSMA PET-MR imaging for staging of prostate cancer. Thirty texture features were derived from each manually contoured T2-weighted, transaxial, prostatic TZ using texture analysis software that applies a spatial band-pass filter and quantifies texture through histogram analysis. Texture features of the TZ were compared to PSMA expression on the corresponding PET images. The Benjamini-Hochberg correction controlled the false discovery rate at <5%. Eighty-eight T2-weighted images in 18 patients demonstrated abnormal PSMA expression within the TZ on PET-MR. 123 images were PSMA negative. Based on the corrected p-value of 0.005, significant differences between PSMA positive and negative slices were found for 16 texture parameters: standard deviation and mean of positive pixels for all spatial filters (p < 0.0001 for both at all spatial scaling factor (SSF) values) and mean intensity following filtration for SSF 3-6 mm (p = 0.0002-0.0018). Abnormal expression of PSMA within the TZ is associated with altered texture on T2-weighted MR, providing validation of MRTA for the detection of TZ prostate cancer. • Prostate transition zone (TZ) MR texture analysis may assist in prostate cancer detection. • Abnormal transition zone PSMA expression correlates with altered texture on T2-weighted MR. • TZ with abnormal PSMA expression demonstrates significantly reduced MI, SD and MPP.
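The Benjamini-Hochberg control applied here is the standard step-up procedure; a generic sketch (not tied to the study's software) is:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean mask of tests
    rejected while controlling the false discovery rate at level q."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])    # largest i with p_(i) <= i*q/m
        reject[order[:k + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.2], q=0.05))
```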
Wang, Bin; Wang, Xiaokai; Hua, Lin; Li, Juanjuan; Xiang, Qing
2017-04-01
Electromagnetic acoustic resonance (EMAR) is a valuable method for determining the mean grain size of metal materials with high precision. The basic ultrasonic attenuation theory used for mean grain size detection with EMAR comes from single-phase theory. In this paper, EMAR testing was carried out based on the ultrasonic attenuation theory. The results show that a double-peak phenomenon occurs in EMAR testing of DP590 steel plate, and the dual-phase structure of DP590 steel is the cause of this phenomenon. In response, a corrected EMAR method was put forward to detect the mean grain size of dual-phase steel. Compared with the traditional attenuation evaluation method and the uncorrected EMAR method, the corrected EMAR method shows great effectiveness and superiority for mean grain size detection of DP590 steel plate. Copyright © 2016. Published by Elsevier B.V.
Juan-Albarracín, Javier; Fuster-Garcia, Elies; Pérez-Girbés, Alexandre; Aparici-Robles, Fernando; Alberich-Bayarri, Ángel; Revert-Ventura, Antonio; Martí-Bonmatí, Luis; García-Gómez, Juan M
2018-06-01
Purpose To determine if preoperative vascular heterogeneity of glioblastoma is predictive of overall survival of patients undergoing standard-of-care treatment by using an unsupervised multiparametric perfusion-based habitat-discovery algorithm. Materials and Methods Preoperative magnetic resonance (MR) imaging including dynamic susceptibility-weighted contrast material-enhanced perfusion studies in 50 consecutive patients with glioblastoma were retrieved. Perfusion parameters of glioblastoma were analyzed and used to automatically draw four reproducible habitats that describe the tumor vascular heterogeneity: high-angiogenic and low-angiogenic regions of the enhancing tumor, potentially tumor-infiltrated peripheral edema, and vasogenic edema. Kaplan-Meier and Cox proportional hazard analyses were conducted to assess the prognostic potential of the hemodynamic tissue signature to predict patient survival. Results Cox regression analysis yielded a significant correlation between patients' survival and maximum relative cerebral blood volume (rCBV max ) and maximum relative cerebral blood flow (rCBF max ) in high-angiogenic and low-angiogenic habitats (P < .01, false discovery rate-corrected P < .05). Moreover, rCBF max in the potentially tumor-infiltrated peripheral edema habitat was also significantly correlated (P < .05, false discovery rate-corrected P < .05). Kaplan-Meier analysis demonstrated significant differences between the observed survival of populations divided according to the median of the rCBV max or rCBF max at the high-angiogenic and low-angiogenic habitats (log-rank test P < .05, false discovery rate-corrected P < .05), with an average survival increase of 230 days. Conclusion Preoperative perfusion heterogeneity contains relevant information about overall survival in patients who undergo standard-of-care treatment. The hemodynamic tissue signature method automatically describes this heterogeneity, providing a set of vascular habitats with high prognostic capabilities. © RSNA, 2018.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
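As one concrete piece of such a concatenated scheme, the innermost (n, n-16) CRC can be sketched as a bitwise CRC-16. The fragment assumes the CCITT generator x¹⁶ + x¹² + x⁵ + 1 with an all-ones register preset, which is the polynomial commonly associated with the CCSDS attached CRC; treat the exact parameters as an assumption rather than a quotation of the paper.

```python
def crc16_ccsds(data: bytes, init=0xFFFF, poly=0x1021) -> int:
    """Bitwise CRC-16 sketch (CCITT polynomial, register preset to all ones).
    The 16 parity bits are appended to the n-16 information bits on transmit;
    the receiver recomputes the CRC to detect residual errors."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = b"CCSDS transfer frame payload"
print(hex(crc16_ccsds(frame)))
```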
Brettschneider, Anna-Kristin; Schaffrath Rosario, Angelika; Kuhnert, Ronny; Schmidt, Steffen; Wiegand, Susanna; Ellert, Ute; Kurth, Bärbel-Maria
2015-11-06
The nationwide "German Health Interview and Examination Survey for Children and Adolescents" (KiGGS), conducted in 2003-2006, showed an increase in the prevalence rates of overweight and obesity compared to the early 1990s, indicating the need for regularly monitoring. Recently, a follow-up-KiGGS Wave 1 (2009-2012)-was carried out as a telephone-based survey, providing self-reported height and weight. Since self-reports lead to a bias in prevalence rates of weight status, a correction is needed. The aim of the present study is to obtain updated prevalence rates for overweight and obesity for 11- to 17-year olds living in Germany after correction for bias in self-reports. In KiGGS Wave 1, self-reported height and weight were collected from 4948 adolescents during a telephone interview. Participants were also asked about their body perception. From a subsample of KiGGS Wave 1 participants, measurements for height and weight were collected in a physical examination. In order to correct prevalence rates derived from self-reports, weight status categories based on self-reported and measured height and weight were used to estimate a correction formula according to an established procedure under consideration of body perception. The correction procedure was applied and corrected rates were estimated. The corrected prevalence of overweight, including obesity, derived from KiGGS Wave 1, showed that the rate has not further increased compared to the KiGGS baseline survey (18.9 % vs. 18.8 % based on the German reference). The rates of overweight still remain at a high level. The results of KiGGS Wave 1 emphasise the significance of this health issue and the need for prevention of overweight and obesity in children and adolescents.
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.
Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy
2016-01-01
Detecting changes, in performance, sales, markets, risks, social relations, or public opinions, constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected and nonchanges are erroneously perceived as increases when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are however confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.
Ianakiev, Kiril D [Los Alamos, NM; Hsue, Sin Tao [Santa Fe, NM; Browne, Michael C [Los Alamos, NM; Audia, Jeffrey M [Abiquiu, NM
2006-07-25
The present invention includes an apparatus and corresponding method for temperature correction and count rate expansion of inorganic scintillation detectors. A temperature sensor is attached to an inorganic scintillation detector. The inorganic scintillation detector, due to interaction with incident radiation, creates light pulse signals. A photoreceiver converts the light pulse signals to current signals. Temperature correction circuitry uses a fast light component signal, a slow light component signal, and the temperature signal from the temperature sensor to correct the inorganic scintillation detector signal output and expand the count rate.
77 FR 5728 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-06
... between bonding lead and the harness, due to over length of the bonding lead. As the affected wire is not... chafing of the wires, and corrective actions, if necessary. We are proposing this AD to detect and correct contact or chafing of wires and bonding leads which, if not detected could be a source of sparks in the...
Correcting Erroneous N+N Structures in the Productions of French Users of English
ERIC Educational Resources Information Center
Garnier, Marie
2012-01-01
This article presents the preliminary steps to the implementation of detection and correction strategies for the erroneous use of N+N structures in the written productions of French-speaking advanced users of English. This research is carried out as part of the grammar checking project "CorrecTools", in which errors are detected and corrected…
Censoring approach to the detection limits in X-ray fluorescence analysis
NASA Astrophysics Data System (ADS)
Pajek, M.; Kubala-Kukuś, A.
2004-10-01
We demonstrate that the effect of detection limits in X-ray fluorescence analysis (XRF), which limits the determination of very low concentrations of trace elements and results in the appearance of so-called "nondetects", can be accounted for using the statistical concept of censoring. More precisely, the results of such measurements can be viewed as left randomly censored data, which can further be analyzed using the Kaplan-Meier method correcting the data for the presence of nondetects. Using this approach, the results of measured, detection-limit-censored concentrations can be interpreted in a nonparametric manner including the correction for the nondetects, i.e. the measurements in which the concentrations were found to be below the actual detection limits. Moreover, using the Monte Carlo simulation technique we show that with the Kaplan-Meier approach the corrected mean concentrations for a population of samples can be estimated within a few percent uncertainty with respect to the simulated, uncensored data. This practically means that the final uncertainties of the estimated mean values are limited in fact by the number of studied samples and not by the correction procedure itself. The discussed random left-censoring approach was applied to analyze the XRF detection-limit-censored concentration measurements of trace elements in biomedical samples.
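A common way to apply Kaplan-Meier to nondetects is to flip left-censored concentrations into right-censored "survival times", estimate the survival curve, and flip back to obtain a censoring-corrected (restricted) mean. The sketch below is a generic illustration with made-up numbers, not the authors' code:

```python
import numpy as np

def km_left_censored_mean(values, detected):
    """Left-censored nondetects (reported at their detection limits, detected=False)
    are flipped to right-censored data; a Kaplan-Meier curve is estimated and the
    restricted mean is recovered by flipping back."""
    values = np.asarray(values, float)
    detected = np.asarray(detected, bool)
    flip = values.max() + 1.0
    t = flip - values                       # left-censoring -> right-censoring
    order = np.argsort(t)
    t, d = t[order], detected[order]
    n = len(t)
    at_risk = n - np.arange(n)              # n, n-1, ..., 1
    surv = np.cumprod(np.where(d, (at_risk - 1) / at_risk, 1.0))
    times = np.concatenate(([0.0], t))
    s_prev = np.concatenate(([1.0], surv[:-1]))
    mean_t = np.sum(np.diff(times) * s_prev)   # area under S(t), truncated at max obs.
    return flip - mean_t

conc = [1.2, 0.4, 0.4, 2.3, 0.9, 0.4]          # 0.4 = detection limit
det  = [True, False, False, True, True, False]  # False = nondetect
print(round(km_left_censored_mean(conc, det), 3))
```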
77 FR 2910 - Schedule for Rating Disabilities; Evaluation of Scars; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-20
...; Evaluation of Scars; Correction AGENCY: Department of Veterans Affairs. ACTION: Final rule; correction... that addresses the Skin, so that it more clearly reflected VA's policies concerning the evaluation of... Rating Disabilities that addresses the Skin, 38 CFR 4.118, by revising the criteria for the evaluation of...
Automatic red eye correction and its quality metric
NASA Astrophysics Data System (ADS)
Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho
2008-01-01
Red eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, and thereby making photos more pleasant for an observer, are important tasks. A novel, efficient technique for automatic red eye correction aimed at photo printers is proposed. The algorithm is independent of face orientation and capable of detecting paired red eyes as well as single red eyes. The approach is based on the application of 3D tables with typicalness levels for red eyes and human skin tones, and directional edge detection filters for processing of the redness image. Machine learning is applied for feature selection. For classification of red eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening, and blending with the initial image. Several implementation variants are possible, trading off detection and correction quality, processing time, and memory volume. A numeric quality criterion for automatic red eye correction is proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.
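For flavor, the sketch below shows one plausible redness map and a simplistic desaturate-darken-blend retouch; it is an illustrative stand-in, not the paper's 3D typicalness tables or AdaBoost cascade.

```python
import numpy as np

def redness_map(rgb):
    """One plausible redness measure for candidate detection (illustrative choice):
    emphasize pixels whose red channel dominates green and blue."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    redness = np.clip(r - (g + b) / 2.0, 0, None)
    return redness / max(redness.max(), 1e-6)

def retouch_red_eye(rgb, mask, strength=0.8):
    """Retouching sketch: desaturate and darken flagged pixels, then blend."""
    gray = rgb.mean(axis=-1, keepdims=True) * 0.8            # desaturate + darken
    out = np.where(mask[..., None], (1 - strength) * rgb + strength * gray, rgb)
    return out.astype(np.uint8)
```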
Woo, Jonghye; Tamarappoo, Balaji; Dey, Damini; Nakazato, Ryo; Le Meunier, Ludovic; Ramesh, Amit; Lazewatsky, Joel; Germano, Guido; Berman, Daniel S; Slomka, Piotr J
2011-11-01
The authors aimed to develop an image-based registration scheme to detect and correct patient motion in stress and rest cardiac positron emission tomography (PET)/CT images. The patient motion correction was of primary interest and the effects of patient motion with the use of flurpiridaz F 18 and (82)Rb were demonstrated. The authors evaluated stress/rest PET myocardial perfusion imaging datasets in 30 patients (60 datasets in total, 21 male and 9 female) using a new perfusion agent (flurpiridaz F 18) (n = 16) and (82)Rb (n = 14), acquired on a Siemens Biograph-64 scanner in list mode. Stress and rest images were reconstructed into 4 ((82)Rb) or 10 (flurpiridaz F 18) dynamic frames (60 s each) using standard reconstruction (2D attenuation weighted ordered subsets expectation maximization). Patient motion correction was achieved by an image-based registration scheme optimizing a cost function using modified normalized cross-correlation that combined global and local features. For comparison, visual scoring of motion was performed on a scale of 0 to 2 (no motion, moderate motion, and large motion) by two experienced observers. The proposed registration technique had a 93% success rate in removing left ventricular motion, as visually assessed. The maximum detected motion extent for stress and rest was 5.2 mm and 4.9 mm for flurpiridaz F 18 perfusion and 3.0 mm and 4.3 mm for (82)Rb perfusion studies, respectively. Motion extent (maximum frame-to-frame displacement) obtained for stress and rest was (2.2 ± 1.1, 1.4 ± 0.7, 1.9 ± 1.3) mm and (2.0 ± 1.1, 1.2 ± 0.9, 1.9 ± 0.9) mm for flurpiridaz F 18 perfusion studies and (1.9 ± 0.7, 0.7 ± 0.6, 1.3 ± 0.6) mm and (2.0 ± 0.9, 0.6 ± 0.4, 1.2 ± 1.2) mm for (82)Rb perfusion studies, respectively. A visually detectable patient motion threshold was established to be ≥2.2 mm, corresponding to visual user scores of 1 and 2. After motion correction, the average increases in contrast-to-noise ratio (CNR) over all frames with motion larger than this threshold were 16.2% in stress flurpiridaz F 18 and 12.2% in rest flurpiridaz F 18 studies. The average increases in CNR were 4.6% in stress (82)Rb studies and 4.3% in rest (82)Rb studies. Fully automatic motion correction of dynamic PET frames can be performed accurately, potentially allowing improved image quantification of cardiac PET data.
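The global part of such a cost function is easy to sketch; the fragment below computes a normalized cross-correlation and does a brute-force integer-shift search as a stand-in for the full registration optimizer (the paper's modified NCC also combines local, patch-wise terms, omitted here).

```python
import numpy as np

def normalized_cross_correlation(frame_a, frame_b, eps=1e-9):
    """Global NCC between two frames, in [-1, 1]."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    return float((a * b).sum() / (np.sqrt((a**2).sum() * (b**2).sum()) + eps))

def best_shift(reference, moving, max_shift=3):
    """Exhaustive integer-voxel search for the translation maximizing NCC."""
    best, best_dxy = -2.0, (0, 0)
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dx, axis=0), dy, axis=1)
            score = normalized_cross_correlation(reference, shifted)
            if score > best:
                best, best_dxy = score, (dx, dy)
    return best_dxy, best
```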
Inhibitory control differentiates rare target search performance in children.
Li, Hongting; Chan, John S Y; Cheung, Sui-Yin; Yan, Jin H
2012-02-01
Age-related differences in rare-target search are primarily explained by the speed-accuracy trade-off, primed responses, or decision making. The goal was to examine how motor inhibition influences visual search. Children pressed a key when a rare target was detected. On no-target trials, children withheld reactions. Response time (RT), hits, misses, correct rejection, and false alarms were measured. Tapping tests assessed motor control. Older children tapped faster, were more sensitive to rare targets (higher d'), and reacted more slowly than younger ones. Girls outperformed boys in search sensitivity but not in RT. Motor speed was closely associated with hit rate and RT. Results suggest that development of inhibitory control plays a key role in visual detection. The potential implications for cognitive-motor development and individual differences are discussed.
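The sensitivity measure d' reported above follows the standard signal-detection formula; a minimal sketch with a conventional 0.5-count correction for extreme rates:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

print(round(d_prime(hits=18, misses=2, false_alarms=3, correct_rejections=37), 2))
```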
Donovan, Michael S; Kassop, David; Liotta, Robert A; Hulten, Edward A
2015-01-01
Sinus venosus atrial septal defects (SV-ASD) have nonspecific clinical presentations and represent a diagnostic imaging challenge. Transthoracic echocardiography (TTE) remains the initial diagnostic imaging modality. However, detection rates have been as low as 12%. Transesophageal echocardiography (TEE) improves diagnostic accuracy though it may not detect commonly associated partial anomalous pulmonary venous return (PAPVR). Cardiac magnetic resonance (CMR) imaging provides a noninvasive, highly sensitive and specific imaging modality of SV-ASD. We describe a case of an adult male with exercise-induced, paroxysmal supraventricular tachycardia who presented with palpitations and dyspnea. Despite nondiagnostic imaging results on TTE, CMR proved to be instrumental in visualizing a hemodynamically significant SV-ASD with PAPVR that ultimately led to surgical correction.
Lockhart, M.; Henzlova, D.; Croft, S.; ...
2017-09-20
Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead-time correct higher order count rates (i.e., quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which have likewise only recently been formulated. Here, we discuss and present the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates encountered in practical applications.
Improved method for fluorescence cytometric immunohematology testing.
Roback, John D; Barclay, Sheilagh; Hillyer, Christopher D
2004-02-01
A method for accurate immunohematology testing by fluorescence cytometry (FC) was previously described. Nevertheless, the use of vacuum filtration to wash RBCs and a standard-flow cytometer for data acquisition hindered efforts to incorporate this method into an automated platform. A modified procedure was developed that used low-speed centrifugation of 96-well filter plates for RBC staining. Small-footprint benchtop capillary cytometers (PCA and PCA-96, Guava Technologies, Inc.) were used for data acquisition. Authentic clinical samples from hospitalized patients were tested for ABO group and the presence of D antigen (n = 749) as well as for the presence of RBC alloantibodies (n = 428). Challenging samples with mixed-field reactions and weak antibodies were included. Results were compared to those obtained by column agglutination technology (CAT), and discrepancies were resolved by standard tube methods. Detailed investigations of FC sensitivity and reproducibility were also performed. The modified FC method with the PCA determined the correct ABO group and D type for 98.7 percent of 520 samples, compared to 98.8 percent for CAT (p > 0.05). No-type-determined (NTD) rates were 1.2 percent for both methods. In testing for unexpected alloantibodies, FC determined the correct result for 98.6 percent of 215 samples, compared to 96.3 percent for CAT (p > 0.05). When samples were automatically acquired in the 96-well plate format with the PCA-96, 98.7 percent of 229 samples had correct ABO group and D type determined by FC, compared to 97.4 percent for CAT (p > 0.05). NTD rates were 0.9 and 2.6 percent, respectively. Antibody screens were accurate for 99.1 percent of 213 samples with the PCA-96, compared to 99.5 percent for CAT (p > 0.05). Further investigations demonstrated that FC with the PCA-96 was better than CAT at detecting weak anti-A (p < 0.0001) and alloantibodies. An improved method for FC immunohematology testing has been described. This assay was comparable in accuracy to standard CAT techniques, but had better sensitivity for detecting weak antibodies and was superior in detecting mixed-field reactions (p < 0.005). The FC method demonstrated excellent reproducibility. The compatibility of this assay with the PCA-96 capillary cytometer with plate-handling capabilities should simplify development of a completely automated platform.
MRI-Based Nonrigid Motion Correction in Simultaneous PET/MRI
Chun, Se Young; Reese, Timothy G.; Ouyang, Jinsong; Guerin, Bastien; Catana, Ciprian; Zhu, Xuping; Alpert, Nathaniel M.; El Fakhri, Georges
2014-01-01
Respiratory and cardiac motion is the most serious limitation to whole-body PET, resulting in spatial resolution close to 1 cm. Furthermore, motion-induced inconsistencies in the attenuation measurements often lead to significant artifacts in the reconstructed images. Gating can remove motion artifacts at the cost of increased noise. This paper presents an approach to respiratory motion correction using simultaneous PET/MRI to demonstrate initial results in phantoms, rabbits, and nonhuman primates and discusses the prospects for clinical application. Methods Studies with a deformable phantom, a free-breathing primate, and rabbits implanted with radioactive beads were performed with simultaneous PET/MRI. Motion fields were estimated from concurrently acquired tagged MR images using 2 B-spline nonrigid image registration methods and incorporated into a PET list-mode ordered-subsets expectation maximization algorithm. Using the measured motion fields to transform both the emission data and the attenuation data, we could use all the coincidence data to reconstruct any phase of the respiratory cycle. We compared the resulting SNR and the channelized Hotelling observer (CHO) detection signal-to-noise ratio (SNR) in the motion-corrected reconstruction with the results obtained from standard gating and uncorrected studies. Results Motion correction virtually eliminated motion blur without reducing SNR, yielding images with SNR comparable to those obtained by gating with 5–8 times longer acquisitions in all studies. The CHO study in dynamic phantoms demonstrated a significant improvement (166%–276%) in lesion detection SNR with MRI-based motion correction as compared with gating (P < 0.001). This improvement was 43%–92% for large motion compared with lesion detection without motion correction (P < 0.001). CHO SNR in the rabbit studies confirmed these results. Conclusion Tagged MRI motion correction in simultaneous PET/MRI significantly improves lesion detection compared with respiratory gating and no motion correction while reducing radiation dose. In vivo primate and rabbit studies confirmed the improvement in PET image quality and provide the rationale for evaluation in simultaneous whole-body PET/MRI clinical studies. PMID:22743250
Analysis of Solar Astrolabe Measurements during 20 Years
NASA Astrophysics Data System (ADS)
Poppe, P. C. R.; Leister, N. V.; Laclare, F.; Delmas, C.
1998-11-01
Recent observations of the Sun made between 1974 and 1995 at two observatories were examined to determine the constant and/or linear terms to the equinox and equator of the FK5 reference frame, the mean obliquity of the ecliptic, the mean longitude of the Sun, the mean eccentricity of the Earth's orbit, and the mean longitude of perihelion. The VSOP82 theory was used to reduce the data. The global solution of the weighted least-squares adjustment shows that the equinox of the FK5 requires a correction of +0.072" +/- 0.005" at the mean epoch 1987.24. The FK5 and dynamical equinox agree closely at J2000.0 (-0.040" +/- 0.020"), but an anomalous negative secular variation with respect to the dynamical equinox was detected: -0.881" +/- 0.116" century^-1. The FK5 equator requires a correction of +0.088" +/- 0.016", and there is no indication of a time rate of change. The corrections to the mean longitude of the Sun (-0.020" +/- 0.010") and to the mean obliquity of the ecliptic (-0.041" +/- 0.016") do appear to be statistically significant, although only marginally. The time rates of change for these quantities are not significant on the system to which the observations are referred. In spite of the short time span used in this analysis, the strong correlation between constant and linear terms was completely eliminated with the complete covering of the orbit by the data sets of both sites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Letant, S E; .Ortiz, J I; Tammero, L
2007-04-11
We have developed a nucleic acid-based assay that is rapid, sensitive, specific, and can be used for the simultaneous detection of 5 common human respiratory pathogens including influenza A, influenza B, parainfluenza type 1 and 3, respiratory syncytial virus, and adenovirus group B, C, and E. Typically, diagnosis on an un-extracted clinical sample can be provided in less than 3 hours, including sample collection, preparation, and processing, as well as data analysis. Such a multiplexed panel would enable rapid broad-spectrum pathogen testing on nasal swabs, and therefore allow implementation of infection control measures, and timely administration of antiviral therapies. This article presents a summary of the assay performance in terms of sensitivity and specificity. Limits of detection are provided for each targeted respiratory pathogen, and result comparisons are performed on clinical samples, our goal being to compare the sensitivity and specificity of the multiplexed assay to the combination of immunofluorescence and shell vial culture currently implemented at the UCDMC hospital. Overall, the use of the multiplexed RT-PCR assay reduced the rate of false negatives by 4% and reduced the rate of false positives by up to 10%. The assay correctly identified 99.3% of the clinical negatives, 97% of adenovirus, 95% of RSV, 92% of influenza B, and 77% of influenza A without any extraction performed on the clinical samples. The data also showed that extraction will be needed for parainfluenza virus, which was only identified correctly 24% of the time on un-extracted samples.
Friedrich, Udo; Naismith, Michèle M.; Altendorf, Karlheinz; Lipski, André
1999-01-01
Domain-, class-, and subclass-specific rRNA-targeted probes were applied to investigate the microbial communities of three industrial and three laboratory-scale biofilters. The set of probes also included a new probe (named XAN818) specific for the Xanthomonas branch of the class Proteobacteria; this probe is described in this study. The members of the Xanthomonas branch do not hybridize with previously developed rRNA-targeted oligonucleotide probes for the α-, β-, and γ-Proteobacteria. Bacteria of the Xanthomonas branch accounted for up to 4.5% of total direct counts obtained with 4′,6-diamidino-2-phenylindole. In biofilter samples, the relative abundance of these bacteria was similar to that of the γ-Proteobacteria. Actinobacteria (gram-positive bacteria with a high G+C DNA content) and α-Proteobacteria were the most dominant groups. Detection rates obtained with probe EUB338 varied between about 40 and 70%. For samples with high contents of gram-positive bacteria, these percentages were substantially improved when the calculations were corrected for the reduced permeability of gram-positive bacteria when formaldehyde was used as a fixative. The set of applied bacterial class- and subclass-specific probes yielded, on average, 58.5% (± a standard deviation of 23.0%) of the corrected eubacterial detection rates, thus indicating the necessity of additional probes for studies of biofilter communities. The Xanthomonas-specific probe presented here may serve as an efficient tool for identifying potential phytopathogens. In situ hybridization proved to be a practical tool for microbiological studies of biofiltration systems. PMID:10427047
Hyperspectral image segmentation using a cooperative nonparametric approach
NASA Astrophysics Data System (ADS)
Taher, Akar; Chehdi, Kacem; Cariou, Claude
2013-10-01
In this paper a new unsupervised nonparametric cooperative and adaptive hyperspectral image segmentation approach is presented. The hyperspectral images are partitioned band by band in parallel and intermediate classification results are evaluated and fused to get the final segmentation result. Two unsupervised nonparametric segmentation methods are used in parallel cooperation, namely the Fuzzy C-means (FCM) method and the Linde-Buzo-Gray (LBG) algorithm, to segment each band of the image. The originality of the approach relies firstly on its local adaptation to the type of regions in an image (textured, non-textured), and secondly on the introduction of several levels of evaluation and validation of intermediate segmentation results before obtaining the final partitioning of the image. For the management of similar or conflicting results issued from the two classification methods, we gradually introduced various assessment steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature extraction step up to the final classification results. This approach was first evaluated on a large number of monocomponent images constructed from the Brodatz album. Then it was evaluated on two real applications, using respectively a multispectral image for cedar tree detection in the region of Baabdat (Lebanon) and a hyperspectral image for identification of invasive and non-invasive vegetation in the region of Cieza (Spain). The correct classification rate (CCR) for the first application is over 97%, and for the second application the average correct classification rate (ACCR) is over 99%.
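One of the two cooperating classifiers, fuzzy C-means, admits a very compact implementation; the sketch below is a generic FCM (not the authors' adaptive, cooperative variant) applied to the pixel features of a single band.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Generic FCM: X is (n_pixels, n_features); returns hard labels and centers."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))              # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]        # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / d ** (2.0 / (m - 1.0))                     # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return U.argmax(axis=1), centers
```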
The single event upset environment for avionics at high latitude
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sims, A.J.; Dyer, C.S.; Peerless, C.L.
1994-12-01
Modern avionic systems for civil and military applications are becoming increasingly reliant upon embedded microprocessors and associated memory devices. The phenomenon of single event upset (SEU) is well known in space systems, and designers have generally been careful to use SEU tolerant devices or to implement error detection and correction (EDAC) techniques where appropriate. In the past, avionics designers have had no reason to consider SEU effects, but it is clear that the more prevalent use of memory devices combined with increasing levels of IC integration will make SEU mitigation an important design consideration for future avionic systems. To this end, it is necessary to work towards producing models of the avionics SEU environment which will permit system designers to choose components and EDAC techniques based on predictions of SEU rates correct to much better than an order of magnitude. Measurements of the high latitude SEU environment at avionics altitude have been made on board a commercial airliner. Results are compared with models of primary and secondary cosmic rays and atmospheric neutrons. Ground based SEU tests of static RAMs are used to predict rates in flight.
Soble, Jason R; Bain, Kathleen M; Bailey, K Chase; Kirton, Joshua W; Marceaux, Janice C; Critchfield, Edan A; McCoy, Karin J M; O'Rourke, Justin J F
2018-01-08
Embedded performance validity tests (PVTs) allow for continuous assessment of invalid performance throughout neuropsychological test batteries. This study evaluated the utility of the Wechsler Memory Scale-Fourth Edition (WMS-IV) Logical Memory (LM) Recognition score as an embedded PVT using the Advanced Clinical Solutions (ACS) for WAIS-IV/WMS-IV Effort System. This mixed clinical sample was comprised of 97 total participants, 71 of whom were classified as valid and 26 as invalid based on three well-validated, freestanding criterion PVTs. Overall, the LM embedded PVT demonstrated poor concordance with the criterion PVTs and unacceptable psychometric properties using ACS validity base rates (42% sensitivity/79% specificity). Moreover, 15-39% of participants obtained an invalid ACS base rate despite having a normatively intact age-corrected LM Recognition total score. Receiver operating characteristic curve analysis revealed that a Recognition total score cutoff of < 61% correct improved specificity (92%) while sensitivity remained weak (31%). Thus, results indicated the LM Recognition embedded PVT is not appropriate for use from an evidence-based perspective, and that clinicians may be faced with reconciling how a normatively intact cognitive performance on the Recognition subtest could simultaneously reflect invalid performance validity.
Performance of MIMO-OFDM using convolution codes with QAM modulation
NASA Astrophysics Data System (ADS)
Astawa, I. Gede Puja; Moegiharto, Yoedy; Zainudin, Ahmad; Salim, Imam Dui Agus; Anggraeni, Nur Annisa
2014-04-01
Performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error correction code) to detect and correct errors that occur during data transmission. One option is the convolutional code. This paper presents the performance of OFDM using the Space Time Block Code (STBC) diversity technique with QAM modulation and a code rate of 1/2. The evaluation is done by analyzing the Bit Error Rate (BER) versus the energy-per-bit to noise power spectral density ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath fading channel in the OFDM system. Achieving a BER of 10⁻³ requires an SNR of 10 dB in the SISO-OFDM scheme. The 2×2 MIMO-OFDM scheme requires 10 dB to achieve a BER of 10⁻³. The 4×4 MIMO-OFDM scheme requires 5 dB, while adding convolutional coding to the 4×4 MIMO-OFDM can improve performance down to 0 dB for the same BER. This proves a power saving of 3 dB relative to the 4×4 MIMO-OFDM system without coding, a power saving of 7 dB relative to 2×2 MIMO-OFDM, and significant power savings relative to the SISO-OFDM system.
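A rate-1/2 convolutional encoder is easy to sketch; the fragment below uses the widely used constraint-length-7 generator pair (171, 133 octal), which is an illustrative assumption since the paper does not state its generators.

```python
def conv_encode_r12_k7(bits, g1=0o171, g2=0o133):
    """Rate-1/2, constraint-length-7 convolutional encoder sketch:
    two parity streams per input bit from a 7-bit shift register."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0x7F           # shift in the new bit
        out.append(bin(state & g1).count("1") % 2)  # first generator parity
        out.append(bin(state & g2).count("1") % 2)  # second generator parity
    return out

print(conv_encode_r12_k7([1, 0, 1, 1, 0, 0, 1]))    # 2 coded bits per input bit
```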
Acoustic detection of cracks in the anvil of a large-volume cubic high-pressure apparatus
NASA Astrophysics Data System (ADS)
Yan, Zhaoli; Chen, Bin; Tian, Hao; Cheng, Xiaobin; Yang, Jun
2015-12-01
A large-volume cubic high-pressure apparatus with three pairs of tungsten carbide anvils is the most popular device for synthetic diamond production. Currently, the consumption of anvils is one of the important costs for the diamond production industry. If one of the anvils is fractured during the production process, the other five anvils in the apparatus may be endangered as a result of a sudden loss of pressure. It is of critical importance to detect and replace cracked anvils before they fracture for reduction of the cost of diamond production and safety. An acoustic detection method is studied in this paper. Two new features, nested power spectrum centroid and modified power spectrum variance, are proposed and combined with linear prediction coefficients to construct a feature vector. A support vector machine model is trained for classification. A sliding time window is proposed for decision-level information fusion. The experiments and analysis show that the recognition rate of anvil cracks is 95%, while the false-alarm rate is as low as 5.8 × 10⁻⁴ during a time window; this false-alarm rate indicates that at most one false alarm occurs every 2 months at a confidence level of 90%. An instrument to monitor anvil cracking was designed based on a digital signal processor and has been running for more than eight months in a diamond production field. In this time, two anvil-crack incidents occurred and were detected by the instrument correctly. In addition, no false alarms occurred.
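Two of the features are close to textbook spectral statistics; the sketch below computes a plain power-spectrum centroid and variance (the paper's "nested" and "modified" variants are not reproduced).

```python
import numpy as np

def spectral_features(signal, fs):
    """Power-spectrum centroid and variance of a 1-D acoustic frame."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    p = spec / spec.sum()                            # normalized spectral distribution
    centroid = float((freqs * p).sum())
    variance = float(((freqs - centroid) ** 2 * p).sum())
    return centroid, variance
```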
SU-E-T-458: Determining Threshold-Of-Failure for Dead Pixel Rows in EPID-Based Dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gersh, J; Wiant, D
Purpose: A pixel correction map is applied to all EPID-based applications on the TrueBeam (Varian Medical Systems, Palo Alto, CA). When dead pixels are detected, an interpolative smoothing algorithm is applied using neighboring-pixel information to supplement missing-pixel information. The vendor suggests that when the number of dead pixels exceeds 70,000, the panel should be replaced. It is common for entire detector rows to be dead, as well as their neighboring rows. Approximately 70 rows can be dead before the panel reaches this threshold. This study determines the number of neighboring dead-pixel rows that would create a large enough deviation in measured fluence to cause failures in portal dosimetry (PD). Methods: Four clinical two-arc VMAT plans were generated using Eclipse's AXB algorithm and PD plans were created using the PDIP algorithm. These plans were chosen to represent those commonly encountered in the clinic: prostate, lung, abdomen, and neck treatments. During each iteration of this study, an increasing number of dead-pixel rows are artificially applied to the correction map and a fluence QA is performed using the EPID (corrected with this map). To provide a worst-case scenario, the dead-pixel rows are chosen so that they present artifacts in the high-fluence region of the field. Results: For all eight arc-fields deemed acceptable via a 3%/3mm gamma analysis (pass rate greater than 99%), VMAT QA yielded identical results with a 5 pixel-width dead zone. When 10 dead rows were present, half of the fields had pass rates below the 99% level. With increasing dead rows, the pass rates were reduced substantially. Conclusion: While the vendor suggests requesting service at the point where 70,000 dead pixels are measured, the authors suggest that service should be requested when there are greater than 5 consecutive dead rows.
NASA Technical Reports Server (NTRS)
Richards, W. Lance
1996-01-01
Significant strain-gage errors may exist in measurements acquired in transient-temperature environments if conventional correction methods are applied. As heating or cooling rates increase, temperature gradients between the strain-gage sensor and substrate surface increase proportionally. These temperature gradients introduce strain-measurement errors that are currently neglected in both conventional strain-correction theory and practice. Therefore, the conventional correction theory has been modified to account for these errors. A new experimental method has been developed to correct strain-gage measurements acquired in environments experiencing significant temperature transients. The new correction technique has been demonstrated through a series of tests in which strain measurements were acquired for temperature-rise rates ranging from 1 to greater than 100 degrees F/sec. Strain-gage data from these tests have been corrected with both the new and conventional methods and then compared with an analysis. Results show that, for temperature-rise rates greater than 10 degrees F/sec, the strain measurements corrected with the conventional technique produced strain errors that deviated from analysis by as much as 45 percent, whereas results corrected with the new technique were in good agreement with analytical results.
Lin, Mu-Han; Veltchev, Iavor; Koren, Sion; Ma, Charlie; Li, Jinsgeng
2015-07-08
The robotic radiosurgery system has been increasingly employed for extracranial treatments. This work aims to study the feasibility of a cylindrical diode array and a planar ion chamber array for patient-specific QA with this robotic radiosurgery system and to compare their performance. Fiducial markers were implanted in both systems to enable image-based setup. An in-house program was developed to postprocess the movie file of the measurements and apply the beam-by-beam angular corrections for both systems. The impact of noncoplanar delivery was then assessed by evaluating the angles created by the incident beams with respect to the two detector arrangements and cross-comparing the planned dose distribution to the measured ones with/without the angular corrections. The sensitivity of detecting translational (1-3 mm) and rotational (1°-3°) delivery errors was also evaluated for both systems. Six extracranial patient plans (PTV 7-137 cm³) were measured with these two systems and compared with the calculated doses. The plan dose distributions were calculated with ray-tracing and the Monte Carlo (MC) method, respectively. With 0.8 by 0.8 mm² diodes, the output factors measured with the cylindrical diode array agree better with the commissioning data. The maximum angular correction for a given beam is 8.2% for the planar ion chamber array and 2.4% for the cylindrical diode array. The two systems demonstrate a comparable sensitivity for detecting translational targeting errors, while the cylindrical diode array is more sensitive to rotational targeting errors. The MC method is necessary for dose calculations in the cylindrical diode array phantom because the ray-tracing algorithm fails to handle the high-Z diodes and the acrylic phantom. For all the patient plans, the cylindrical diode array / planar ion chamber array demonstrate 100% / > 92% (3%/3 mm) and > 96% / ~ 80% (2%/2 mm) passing rates. The feasibility of using both systems for robotic radiosurgery patient-specific QA has been demonstrated. For gamma evaluation, 2%/2 mm criteria for the cylindrical diode array and 3%/3 mm criteria for the planar ion chamber array are suggested. The customized angular correction is necessary, as proven by the improved passing rate, especially with the planar ion chamber array system.
External quality assessment of dengue and chikungunya diagnostics in the Asia Pacific region, 2015
Soh, Li Ting; Squires, Raynal C; Tan, Li Kiang; Pok, Kwoon Yong; Yang, HuiTing; Liew, Christina; Shah, Aparna Singh; Aaskov, John; Abubakar, Sazaly; Hasabe, Futoshi; Ng, Lee Ching
2016-01-01
Objective To conduct an external quality assessment (EQA) of dengue and chikungunya diagnostics among national-level public health laboratories in the Asia Pacific region following the first round of EQA for dengue diagnostics in 2013. Methods Twenty-four national-level public health laboratories performed routine diagnostic assays on a proficiency testing panel consisting of two modules. Module A contained serum samples spiked with cultured dengue virus (DENV) or chikungunya virus (CHIKV) for the detection of nucleic acid and DENV non-structural protein 1 (NS1) antigen. Module B contained human serum samples for the detection of anti-DENV antibodies. Results Among 20 laboratories testing Module A, 17 (85%) correctly detected DENV RNA by reverse transcription polymerase chain reaction (RT–PCR), 18 (90%) correctly determined serotype and 19 (95%) correctly identified CHIKV by RT–PCR. Ten of 15 (66.7%) laboratories performing NS1 antigen assays obtained the correct results. In Module B, 18/23 (78.3%) and 20/20 (100%) of laboratories correctly detected anti-DENV IgM and IgG, respectively. Detection of acute/recent DENV infection by both molecular (RT–PCR) and serological methods (IgM) was available in 19/24 (79.2%) participating laboratories. Discussion Accurate laboratory testing is a critical component of dengue and chikungunya surveillance and control. This second round of EQA reveals good proficiency in molecular and serological diagnostics of these diseases in the Asia Pacific region. Further comprehensive diagnostic testing, including testing for Zika virus, should comprise future iterations of the EQA. PMID:27508088
Arif, Sania; Qudsia, Syeda; Urooj, Samina; Chaudry, Nazia; Arshad, Aneeqa; Andleeb, Saadia
2015-03-15
Breast cancer represents a significant health problem because of its high prevalence. Tests like mammography, which are used abundantly for the detection of breast cancer, suffer from serious limitations. Mammography correctly detects malignancy about 80-90% of the time, failing when (1) the tumor is small and at an early stage, (2) breast tissue is dense, or (3) the woman is less than 40 years old. Serum-based detection of biomarkers involves a risk of disease transfer, along with other concerns. These limitations compromise the early detection of breast cancer. Early detection of breast cancer is a crucial factor in enhancing patient survival rates. Development of regular screening tests for early diagnosis of breast cancer is a challenge. This review highlights the design of a handy, household biosensor device aimed at self-screening and early diagnosis of breast cancer. The design makes use of salivary autoantibodies for specificity to develop a noninvasive procedure, breast cancer specific biomarkers for precision in the development of the device, and biosensor technology for sensitivity to screen early cases of breast cancer more efficiently. Copyright © 2014 Elsevier B.V. All rights reserved.
Zuckerman, Samantha P.; Keller, Brad M.; Maidment, Andrew D. A.; Barufaldi, Bruno; Weinstein, Susan P.; Synnestvedt, Marie; McDonald, Elizabeth S.
2016-01-01
Purpose To evaluate the early implementation of synthesized two-dimensional (s2D) mammography in a population screened entirely with s2D and digital breast tomosynthesis (DBT) (referred to as s2D/DBT) and compare recall rates and cancer detection rates to historic outcomes of digital mammography combined with DBT (referred to as digital mammography/DBT) screening. Materials and Methods This was an institutional review board–approved and HIPAA-compliant retrospective interpretation of prospectively acquired data with waiver of informed consent. Compared were recall rates, biopsy rates, cancer detection rates, and radiation dose for 15 571 women screened with digital mammography/DBT from October 1, 2011, to February 28, 2013, and 5366 women screened with s2D/DBT from January 7, 2015, to June 30, 2015. Two-sample z tests of equal proportions were used to determine statistical significance. Results Recall rate for s2D/DBT versus digital mammography/DBT was 7.1% versus 8.8%, respectively (P < .001). Biopsy rate for s2D/DBT versus digital mammography/DBT decreased (1.3% vs 2.0%, respectively; P = .001). There was no significant difference in cancer detection rate for s2D/DBT versus digital mammography/DBT (5.03 of 1000 vs 5.45 of 1000, respectively; P = .72). The average glandular dose was 39% lower in s2D/DBT versus digital mammography/DBT (4.88 mGy vs 7.97 mGy, respectively; P < .001). Conclusion Screening with s2D/DBT in a large urban practice resulted in similar outcomes compared with digital mammography/DBT imaging. Screening with s2D/DBT allowed for the benefits of DBT with a decrease in radiation dose compared with digital mammography/DBT. © RSNA, 2016 An earlier incorrect version of this article appeared online. This article was corrected on August 11, 2016. PMID:27467468
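The comparisons above rely on two-sample z tests of equal proportions. As a minimal illustrative sketch (not the authors' statistical code), the recall-rate comparison can be reproduced approximately from the published rates and cohort sizes; the counts below are reconstructed from the rounded percentages and are therefore only approximate.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    """Two-sample z test of equal proportions with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))                  # two-sided p-value

# recall counts implied by the reported 7.1% (s2D/DBT) and 8.8% (DM/DBT) rates
z, p = two_proportion_z(round(0.071 * 5366), 5366, round(0.088 * 15571), 15571)
print(f"z = {z:.2f}, p = {p:.4g}")                 # consistent with P < .001
```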
77 FR 47582 - Great Lakes Pilotage Rates-2013 Annual Review and Adjust; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-09
... DEPARTMENT OF HOMELAND SECURITY Coast Guard 46 CFR Part 401 [Docket No. USCG-2012-0409] RIN 1625-AB89 Great Lakes Pilotage Rates--2013 Annual Review and Adjust; Correction AGENCY: Coast Guard, DHS. ACTION: Notice of proposed rulemaking; correction. SUMMARY: The Coast Guard published a Notice of...
Self-Correcting Electronically-Scanned Pressure Sensor
NASA Technical Reports Server (NTRS)
Gross, C.; Basta, T.
1982-01-01
High-data-rate sensor automatically corrects for temperature variations. Multichannel, self-correcting pressure sensor can be used in wind tunnels, aircraft, process controllers and automobiles. Offers data rates approaching 100,000 measurements per second with inaccuracies due to temperature shifts held below 0.25 percent (nominal) of full scale over a temperature span of 55 degrees C.
Evaluation of an integrated graphical display to promote acute change detection in ICU patients
Anders, Shilo; Albert, Robert; Miller, Anne; Weinger, Matthew B.; Doig, Alexa K.; Behrens, Michael; Agutter, Jim
2012-01-01
Objective The purpose of this study was to evaluate ICU nurses’ ability to detect patient change using an integrated graphical information display (IGID) versus a conventional tabular ICU patient information display (i.e. electronic chart). Design Using participants from two different sites, we conducted a repeated measures simulator-based experiment to assess ICU nurses’ ability to detect abnormal patient variables using a novel IGID versus a conventional tabular information display. Patient scenarios and display presentations were fully counterbalanced. Measurements We measured percent correct detection of abnormal patient variables, nurses’ perceived workload (NASA-TLX), and display usability ratings. Results 32 ICU nurses (87% female, median age of 29 years, and median ICU experience of 2.5 years) using the IGID detected more abnormal variables compared to the tabular display [F (1,119)=13.0, p < 0.05]. There was a significant main effect of site [F (1, 119)=14.2], with development site participants doing better. There were no significant differences in nurses’ perceived workload. The IGID display was rated as more usable than the conventional display, [F (1, 60)=31.7]. Conclusion Overall, nurses reported more important physiological information with the novel IGID than tabular display. Moreover, the finding of site differences may reflect local influences in work practice and involvement in iterative display design methodology. Information displays developed using user-centered design should accommodate the full diversity of the intended user population across use sites. PMID:22534099
Malakhov, V N; Dovgalev, A S; Astanina, S Iu; Serdiuk, A P
2014-01-01
In 2010-2013, the quality of microscopic detection of the causative agents of parasitic diseases in feces was assessed by specialists of the laboratories of therapeutic-and-prophylactic institutions (TPIs) and Hygiene and Epidemiology Centers, Russian Inspectorate for the Protection of Consumer Rights and Human Welfare, which are participants in the Federal System of External Quality Assessment of Clinical Laboratory Testing. Thirty-two specimens containing 16 species of human helminths and 4 species of enteric protozoa in different combinations were examined. The findings suggest that the quality of microscopic detection of the causative agents of parasitic diseases is low in the laboratories of health care facilities and that the specialists of the laboratories of TPIs and Hygiene and Epidemiology Centers, Russian Inspectorate for the Protection of Consumer Rights and Human Welfare, do not possess the knowledge and skills necessary to make a laboratory diagnosis of helminths and enteric protozoa. The average detection rates of helminths and protozoa were 64% and 36%, respectively. Among the correct results, helminths and protozoa accounted for 94.5% and 5.5%, respectively. According to the biological and epidemiological classification of helminths, detection rates were higher for contact-group parasites (Enterobius vermicularis and Hymenolepis nana) and geohelminths (Ascaris, Trichuris trichiura, and others). Biohelminths (Opisthorchis, tapeworms, and others) were detected somewhat less reliably.
2015-03-01
Figure 5.13: Probability of correct SC modulation detection for 95 OFDM bursts using sixth-order cumulants during interference techniques (probability of correct modulation detection plotted against Tx node RF gain).
ERIC Educational Resources Information Center
Abedi, Razie; Latifi, Mehdi; Moinzadeh, Ahmad
2010-01-01
This study tries to answer some long-standing questions in the field of writing about the most effective ways to give feedback on errors in students' writing by comparing the effect of error correction and error detection on the improvement of students' writing ability. In order to achieve this goal, 60 pre-intermediate English learners…
Code of Federal Regulations, 2010 CFR
2010-07-01
... or DLS/FF i. If you use a bag leak detection system, initiating corrective action within 1 hour of a bag leak detection system alarm and completing corrective actions in accordance with your OM&M plan... established during the performance test; and iv. If chemicals are added to the scrubber water, collecting the...
ERIC Educational Resources Information Center
Sherwood, David E.
2010-01-01
According to closed-loop accounts of motor control, movement errors are detected by comparing sensory feedback to an acquired reference state. Differences between the reference state and the movement-produced feedback result in an error signal that serves as a basis for a correction. The main question addressed in the current study was how…
Gustaf: Detecting and correctly classifying SVs in the NGS twilight zone.
Trappe, Kathrin; Emde, Anne-Katrin; Ehrlich, Hans-Christian; Reinert, Knut
2014-12-15
The landscape of structural variation (SV) including complex duplication and translocation patterns is far from resolved. SV detection tools usually exhibit low agreement, are often geared toward certain types or size ranges of variation and struggle to correctly classify the type and exact size of SVs. We present Gustaf (Generic mUlti-SpliT Alignment Finder), a sound generic multi-split SV detection tool that detects and classifies deletions, inversions, dispersed duplications and translocations of ≥ 30 bp. Our approach is based on a generic multi-split alignment strategy that can identify SV breakpoints with base pair resolution. We show that Gustaf correctly identifies SVs, especially in the range from 30 to 100 bp, which we call the next-generation sequencing (NGS) twilight zone of SVs, as well as larger SVs >500 bp. Gustaf performs better than similar tools in our benchmark and is furthermore able to correctly identify size and location of dispersed duplications and translocations, which otherwise might be wrongly classified, for example, as large deletions. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods, but few have applied them to LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. A background correction simulation indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting and the model-free method. All of these background correction methods acquire larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquires large SBR values, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods improve the quantitative results for Cu relative to those acquired before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the coefficients after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
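As a minimal sketch of the general idea, and not the authors' implementation (their anchor-point selection is not described here), the following example estimates a smooth background by passing a cubic spline through local minima of a synthetic LIBS-like spectrum and subtracting it; the spectrum, window size and anchor-point rule are assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelmin

def spline_background(wavelength, intensity, order=25):
    """Estimate a smooth continuous background by interpolating a cubic
    spline through local minima of the spectrum, then subtract it."""
    idx = argrelmin(intensity, order=order)[0]                 # candidate baseline points
    idx = np.concatenate(([0], idx, [len(intensity) - 1]))     # pin the endpoints
    bg = CubicSpline(wavelength[idx], intensity[idx])(wavelength)
    return np.maximum(intensity - bg, 0.0), bg

# synthetic spectrum: two emission lines on a slowly varying background
wl = np.linspace(200, 800, 3000)
background = 50 + 0.05 * (wl - 200)
lines = 400 * np.exp(-((wl - 324.7) / 0.3) ** 2) + 300 * np.exp(-((wl - 327.4) / 0.3) ** 2)
spectrum = background + lines + np.random.normal(0, 2, wl.size)
corrected, est_bg = spline_background(wl, spectrum)
```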
NASA Astrophysics Data System (ADS)
Sun, Lin; Liu, Xinyan; Yang, Yikun; Chen, TingTing; Wang, Quan; Zhou, Xueying
2018-04-01
Although Landsat 8 OLI is enhanced over prior Landsat instruments and can achieve very high cloud detection precision, the detection of cloud shadows still faces great challenges. Geometry-based cloud shadow detection methods are considered the most effective and are being improved constantly. The Function of Mask (Fmask) cloud shadow detection method is one of the most representative geometry-based methods that has been used for cloud shadow detection with Landsat 8 OLI. However, the Fmask method estimates cloud height employing fixed temperature rates, which are highly uncertain, and errors in estimating cloud height can cause large-area cloud shadow detection errors. This article improves the geometry-based cloud shadow detection method for Landsat OLI in the following two aspects. (1) Cloud height no longer depends on the brightness temperature of the thermal infrared band but uses a possible dynamic range from 200 m to 12,000 m. In this case, the cloud shadow is not a specific location but a possible range; further spectral analysis is carried out within that range to determine the cloud shadow location. This effectively avoids cloud shadow omissions caused by errors in the determination of cloud height. (2) Object-based and pixel-level spectral analyses are combined to detect cloud shadows, which realizes cloud shadow detection at both the target scale and the pixel scale. Based on an analysis of the spectral differences between cloud shadows and typical ground objects, the best cloud shadow detection bands of Landsat 8 OLI were determined. The combined use of spectrum and shape can effectively improve the detection precision of shadows produced by thin clouds. Several cloud shadow detection experiments were carried out, and the results were verified against manual interpretation. The results indicated that this method can identify cloud shadows in different regions with correct accuracy exceeding 80%; approximately 5% of the areas were wrongly identified, and approximately 10% of the cloud shadow areas were missed. The accuracy of this method is clearly higher than that of Fmask, whose correct accuracy is lower than 60% with approximately 40% of cloud shadow areas missed.
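To make the geometry concrete: for a given solar geometry, sweeping an assumed cloud height from 200 m to 12,000 m projects a cloud to a range of candidate shadow positions on the ground, and that range is what the spectral analysis then searches. A minimal sketch is given below; the sun angles, sign convention and flat-terrain assumption are illustrative, not taken from the paper.

```python
import numpy as np

def shadow_offsets(sun_elev_deg, sun_az_deg, h_min=200.0, h_max=12000.0, n=60):
    """Project a cloud to the ground for a sweep of assumed cloud heights.
    Returns rows of (height, dx, dy) ground offsets in metres; the union of
    offsets defines the search range for the true shadow position."""
    heights = np.linspace(h_min, h_max, n)
    dist = heights / np.tan(np.radians(sun_elev_deg))   # horizontal shadow distance
    az = np.radians(sun_az_deg)
    dx = -dist * np.sin(az)                              # shadow falls away from the sun
    dy = -dist * np.cos(az)
    return np.column_stack([heights, dx, dy])

print(shadow_offsets(45.0, 135.0)[:3])   # first few candidate offsets
```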
The effects of compressibility on the GIA in southeast Alaska
NASA Astrophysics Data System (ADS)
Tanaka, Yoshiyuki; Sato, Tadahiro; Ohta, Yusaku; Miura, Satoshi; Freymueller, Jeffrey T.; Klemann, Volker
2015-03-01
Recent theoretical simulations on the glacial isostatic adjustment (GIA) have revealed that the model differences arising from considering mantle compressibility are not necessarily negligible if compared with the observation accuracy of present-day deformation rates. In this study, a compressible model is constructed for the GIA in southeast Alaska, and the uplift rate is compared with GPS data and the incompressible case for the first time. It is shown that, for Maxwell rheology, the incompressible model potentially underestimates the mean uplift rate by approximately 27% (4 mm/yr) with respect to the compressible case and the difference is detectable given observational precision. This difference between the compressible and incompressible models is reduced to 10% by matching the flexural rigidity of both earth models. When carrying out an inversion using incompressible models, this adjustment is important to infer a physically more correct viscoelastic structure.
NASA Astrophysics Data System (ADS)
Bringmann, Torsten; Calore, Francesca; Galea, Ahmad; Garny, Mathias
2017-09-01
It is well known that the annihilation of Majorana dark matter into fermions is helicity suppressed. Here, we point out that the underlying mechanism is a subtle combination of two distinct effects, and we present a comprehensive analysis of how the suppression can be partially or fully lifted by the internal bremsstrahlung of an additional boson in the final state. As a concrete illustration, we compute analytically the full amplitudes and annihilation rates of supersymmetric neutralinos to final states that contain any combination of two standard model fermions, plus one electroweak gauge boson or one of the five physical Higgs bosons that appear in the minimal supersymmetric standard model. We classify the various ways in which these three-body rates can be large compared to the two-body rates, identifying cases that have not been pointed out before. In our analysis, we put special emphasis on how to avoid the double counting of identical kinematic situations that appear for two-body and three-body final states, in particular on how to correctly treat differential rates and the spectrum of the resulting stable particles that is relevant for indirect dark matter searches. We find that both the total annihilation rates and the yields can be significantly enhanced when taking into account the corrections computed here, in particular for models with somewhat small annihilation rates at tree-level which otherwise would not be testable with indirect dark matter searches. Even more importantly, however, we find that the resulting annihilation spectra of positrons, neutrinos, gamma-rays and antiprotons differ in general substantially from the model-independent spectra that are commonly adopted, for these final states, when constraining particle dark matter with indirect detection experiments.
76 FR 39006 - Medicare Program; Hospital Inpatient Value-Based Purchasing Program; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-05
... Pneumonia (PN) 30-Day .8818 Mortality Rate. 7. On page 26516, Table 7 is corrected to read as follows... Day Mortality Rate. MORT-30 PN Pneumonia (PN) 30-Day .9021 Mortality Rate. 8. On page 26527, in the...
Detection of trans–cis flips and peptide-plane flips in protein structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Touw, Wouter G., E-mail: wouter.touw@radboudumc.nl; Joosten, Robbie P.; Vriend, Gert, E-mail: wouter.touw@radboudumc.nl
A method is presented to detect peptide bonds that need either a trans–cis flip or a peptide-plane flip. A coordinate-based method is presented to detect peptide bonds that need correction either by a peptide-plane flip or by a trans–cis inversion of the peptide bond. When applied to the whole Protein Data Bank, the method predicts 4617 trans–cis flips and many thousands of hitherto unknown peptide-plane flips. A few examples are highlighted for which a correction of the peptide-plane geometry leads to a correction of the understanding of the structure–function relation. All data, including 1088 manually validated cases, are freely available and the method is available from a web server, a web-service interface and through WHAT-CHECK.
Priming effects under correct change detection and change blindness.
Caudek, Corrado; Domini, Fulvio
2013-03-01
In three experiments, we investigated the priming effects induced by an image change on a successive animate/inanimate decision task. We studied both perceptual (Experiments 1 and 2) and conceptual (Experiment 3) priming effects, under correct change detection and change blindness (CB). Under correct change detection, we found larger positive priming effects on congruent trials for probes representing animate entities than for probes representing artifactual objects. Under CB, we found performance impairment relative to a "no-change" baseline condition. This inhibition effect induced by CB was modulated by the semantic congruency between the changed item and the probe in the case of probe images, but not for probe words. We discuss our results in the context of the literature on the negative priming effect. Copyright © 2012 Elsevier Inc. All rights reserved.
Snapper, Leslie; Oranç, Cansu; Hawley-Dolan, Angelina; Nissel, Jenny; Winner, Ellen
2015-04-01
Can people with no special knowledge about art detect the skill, intentionality, and expressed meanings in non-representational art? Hawley-Dolan and Winner (2011) showed participants without training in art images of abstract expressionist paintings paired with superficially similar works by children or animals and asked them which they preferred and which was a better work of art. Participants selected the works by artists in response to both questions at a rate above chance. In Study 1, we used the same image pairs but asked a more direct question: which painting is by the artist rather than the child or animal? Individuals with no familiarity with abstract expressionism correctly identified the artists' works at a rate significantly above chance. In Study 2 participants saw each image singly and were asked whether it was by an artist or a child or animal. Participants unfamiliar with abstract expressionism again correctly identified the source of the works at a rate above chance. Study 3 demonstrated that this discrimination is made on the basis of perceived intentionality and perceived structure. People see more than they think they do in abstract art. These findings tell us something about the nature of non-figurative art. They also tell us something about the human tendency to ferret out intentionality. Copyright © 2014 Elsevier B.V. All rights reserved.
Evaluation of harmonic direction-finding systems for detecting locomotor activity
Boyarski, V.L.; Rodda, G.H.; Savidge, J.A.
2007-01-01
We conducted a physical simulation experiment to test the efficacy of harmonic direction finding for remotely detecting locomotor activity in animals. The ability to remotely detect movement helps to avoid disturbing natural movement behavior. Remote detection implies that the observer can sense only a change in signal bearing. In our simulated movements, small changes in bearing (<5.7°) were routinely undetectable. Detectability improved progressively with the size of the simulated animal movement. The average (±SD) of reflector tag movements correctly detected for 5 observers was 93.9 ± 12.8% when the tag was moved ≥11.5°; most observers correctly detected tag movements ≥20.1°. Given our data, one can assess whether the technique will be effective for detecting movements at an observation distance appropriate for the study organism. We recommend that both habitat and behavior of the organism be taken into consideration when contemplating use of this technique for detecting locomotion.
Continuous quantum error correction for non-Markovian decoherence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089
2007-08-15
We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.
Bronchial abnormalities found in a consecutive series of 40 brachycephalic dogs.
De Lorenzi, Davide; Bertoncello, Diana; Drigo, Michele
2009-10-01
To detect abnormalities of the lower respiratory tract (trachea, principal bronchi, and lobar bronchi) in brachycephalic dogs by use of endoscopy, evaluate the correlation between laryngeal collapse and bronchial abnormalities, and determine whether dogs with bronchial abnormalities have a less favorable postsurgical long-term outcome following correction of brachycephalic syndrome. Prospective case series study. 40 client-owned brachycephalic dogs with stertorous breathing and clinical signs of respiratory distress. Brachycephalic dogs anesthetized for pharyngoscopy and laryngoscopy between January 2007 and June 2008 underwent flexible bronchoscopy for systematic evaluation of the principal and lobar bronchi. For dogs that underwent surgical correction of any component of brachycephalic syndrome, owners rated surgical outcome during a follow-up telephone survey. Correlation between laryngeal collapse and bronchial abnormalities and association between bronchial abnormalities and long-term outcome were assessed. Pugs (n = 20), English Bulldogs (13), and French Bulldogs (7) were affected. A fixed bronchial collapse was recognized in 35 of 40 dogs with a total of 94 bronchial stenoses. Abnormalities were irregularly distributed between hemithoraces; 15 of 94 bronchial abnormalities were detected in the right bronchial system, and 79 of 94 were detected in the left. The left cranial bronchus was the most commonly affected structure, and Pugs were the most severely affected breed. Laryngeal collapse was significantly correlated with severe bronchial collapse; no significant correlation was found between severity of bronchial abnormalities and postsurgical outcome. Bronchial collapse was a common finding in brachycephalic dogs, and long-term postsurgical outcome was not affected by bronchial stenosis.
40 CFR 63.1382 - Emission standards
Code of Federal Regulations, 2010 CFR
2010-07-01
... complete corrective actions in a timely manner according to the procedures in the operations, maintenance... or operator must initiate corrective action within 1 hour of an alarm from a bag leak detection system and complete corrective actions in a timely manner according to the procedures in the operations...
Kepha, Stella; Kihara, Jimmy H.; Njenga, Sammy M.; Pullan, Rachel L.; Brooker, Simon J.
2014-01-01
Objectives This study evaluates the diagnostic accuracy and cost-effectiveness of the Kato-Katz and Mini-FLOTAC methods for detection of soil-transmitted helminths (STH) in a post-treatment setting in western Kenya. A cost analysis also explores the cost implications of collecting samples during school surveys when compared to household surveys. Methods Stool samples were collected from children (n = 652) attending 18 schools in Bungoma County and diagnosed by the Kato-Katz and Mini-FLOTAC coprological methods. Sensitivity and additional diagnostic performance measures were analyzed using Bayesian latent class modeling. Financial and economic costs were calculated for all survey and diagnostic activities, and cost per child tested, cost per case detected and cost per STH infection correctly classified were estimated. A sensitivity analysis was conducted to assess the impact of various survey parameters on cost estimates. Results Both diagnostic methods exhibited comparable sensitivity for detection of any STH species over single and consecutive day sampling: 52.0% for single day Kato-Katz; 49.1% for single-day Mini-FLOTAC; 76.9% for consecutive day Kato-Katz; and 74.1% for consecutive day Mini-FLOTAC. Diagnostic performance did not differ significantly between methods for the different STH species. Use of Kato-Katz with school-based sampling was the lowest cost scenario for cost per child tested ($10.14) and cost per case correctly classified ($12.84). Cost per case detected was lowest for Kato-Katz used in community-based sampling ($128.24). Sensitivity analysis revealed the cost of case detection for any STH decreased non-linearly as prevalence rates increased and was influenced by the number of samples collected. Conclusions The Kato-Katz method was comparable in diagnostic sensitivity to the Mini-FLOTAC method, but afforded greater cost-effectiveness. Future work is required to evaluate the cost-effectiveness of STH surveillance in different settings. PMID:24810593
Assefa, Liya M; Crellen, Thomas; Kepha, Stella; Kihara, Jimmy H; Njenga, Sammy M; Pullan, Rachel L; Brooker, Simon J
2014-05-01
This study evaluates the diagnostic accuracy and cost-effectiveness of the Kato-Katz and Mini-FLOTAC methods for detection of soil-transmitted helminths (STH) in a post-treatment setting in western Kenya. A cost analysis also explores the cost implications of collecting samples during school surveys when compared to household surveys. Stool samples were collected from children (n = 652) attending 18 schools in Bungoma County and diagnosed by the Kato-Katz and Mini-FLOTAC coprological methods. Sensitivity and additional diagnostic performance measures were analyzed using Bayesian latent class modeling. Financial and economic costs were calculated for all survey and diagnostic activities, and cost per child tested, cost per case detected and cost per STH infection correctly classified were estimated. A sensitivity analysis was conducted to assess the impact of various survey parameters on cost estimates. Both diagnostic methods exhibited comparable sensitivity for detection of any STH species over single and consecutive day sampling: 52.0% for single day Kato-Katz; 49.1% for single-day Mini-FLOTAC; 76.9% for consecutive day Kato-Katz; and 74.1% for consecutive day Mini-FLOTAC. Diagnostic performance did not differ significantly between methods for the different STH species. Use of Kato-Katz with school-based sampling was the lowest cost scenario for cost per child tested ($10.14) and cost per case correctly classified ($12.84). Cost per case detected was lowest for Kato-Katz used in community-based sampling ($128.24). Sensitivity analysis revealed the cost of case detection for any STH decreased non-linearly as prevalence rates increased and was influenced by the number of samples collected. The Kato-Katz method was comparable in diagnostic sensitivity to the Mini-FLOTAC method, but afforded greater cost-effectiveness. Future work is required to evaluate the cost-effectiveness of STH surveillance in different settings.
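For the cost indicators reported in both records above (cost per child tested, cost per case detected, cost per infection correctly classified), the arithmetic is simply the relevant total survey and diagnostic cost divided by the corresponding count. The sketch below illustrates this; the totals and counts are hypothetical values chosen only to be of the same order as the published figures, not data from the study.

```python
def cost_metrics(total_cost, n_tested, n_cases_detected, n_correctly_classified):
    """Survey-level cost indicators (illustrative arithmetic only)."""
    return {
        "cost_per_child_tested": total_cost / n_tested,
        "cost_per_case_detected": total_cost / n_cases_detected,
        "cost_per_correct_classification": total_cost / n_correctly_classified,
    }

# hypothetical figures of the same order as the school-based Kato-Katz scenario
print(cost_metrics(total_cost=6611.0, n_tested=652,
                   n_cases_detected=52, n_correctly_classified=515))
```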
Jain, Ram B
2017-07-01
Prevalence of smoking is needed to estimate the need for future public health resources. The objective was to compute and compare smoking prevalence rates using self-reported smoking status, two serum cotinine (SCOT)-based biomarker methods, and one urinary 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol (NNAL)-based biomarker method. These estimates were then used to develop correction factors applicable to self-reported prevalences to arrive at corrected smoking prevalence rates. Data from the National Health and Nutrition Examination Survey (NHANES) for 2007-2012 for those aged ≥20 years (N = 16826) were used. The self-reported prevalence rate for the total population was 21.6% when computed as the weighted number of self-reported smokers divided by the weighted number of all participants, and 24% when computed as the weighted number of self-reported smokers divided by the weighted number of self-reported smokers and nonsmokers. The corrected prevalence rate was found to be 25.8%. A 1% underestimate in smoking prevalence is equivalent to not being able to identify 2.2 million smokers in the US in a given year. This underestimation, if not corrected, could lead to a serious gap in the public health services available and needed to provide adequate preventive and corrective treatment to smokers.
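A minimal sketch of the corresponding arithmetic, assuming a simple multiplicative correction factor (the paper's exact derivation may differ) and an assumed US adult population for the 2.2-million figure:

```python
def corrected_prevalence(self_reported_rate, biomarker_rate):
    """Correction factor that rescales a self-reported smoking prevalence to
    match a biomarker-defined prevalence (assumed multiplicative form)."""
    factor = biomarker_rate / self_reported_rate
    return factor, self_reported_rate * factor

# rates from the abstract: 21.6% self-reported vs. 25.8% corrected
factor, corrected = corrected_prevalence(0.216, 0.258)
print(f"correction factor = {factor:.3f}, corrected prevalence = {corrected:.1%}")

# a 1-percentage-point underestimate is roughly 2.2 million unidentified smokers
adult_population = 220e6          # assumed US adult population, order of magnitude
print(f"{0.01 * adult_population / 1e6:.1f} million smokers per 1% underestimate")
```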
SU-E-T-472: Improvement of IMRT QA Passing Rate by Correcting Angular Dependence of MatriXX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Q; Watkins, W; Kim, T
2015-06-15
Purpose: Multi-channel planar detector arrays utilized for IMRT QA, such as the MatriXX, exhibit an incident-beam angular-dependent response which can result in false-positive gamma-based QA results, especially for helical tomotherapy plans, which encompass the full range of beam angles. Although the MatriXX can be used with a gantry angle sensor to apply angular corrections automatically, this sensor does not work with tomotherapy. The purpose of the study is to reduce IMRT-QA false positives by correcting for the MatriXX angular dependence. Methods: MatriXX angular dependence was characterized by comparing multiple fixed-angle irradiation measurements with corresponding TPS-computed doses. For 81 Tomo-helical IMRT-QA measurements, two different correction schemes were tested: (1) A Monte-Carlo dose engine was used to compute the MatriXX signal based on the angular-response curve. The computed signal was then compared with measurement. (2) The uncorrected computed signal was compared with measurements uniformly scaled to account for the average angular dependence. Three scaling factors (+2%, +2.5%, +3%) were tested. Results: The MatriXX response is 8% less than predicted for a PA beam even when the couch is fully accounted for. Without angular correction, only 67% of the cases pass the >90% points γ<1 (3%, 3 mm) criterion. After full angular correction, 96% of the cases pass the criterion. Of the three scaling factors, +2% gave the highest passing rate (89%), which is still less than the full angular correction method. With a stricter γ (2%, 3 mm) criterion, the full angular correction method was still able to achieve the 90% passing rate while the scaling method only gives a 53% passing rate. Conclusion: Correction for the MatriXX angular dependence reduced the false-positive rate of our IMRT-QA process. It is necessary to correct for the angular dependence to achieve the IMRT passing criteria specified in TG129.
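A much-simplified stand-in for the first correction scheme is sketched below: each beam's computed detector signal is scaled by the relative response at that beam's incidence angle before comparison with the measurement. The response curve, beam angles and signals are hypothetical; only the roughly 8% PA under-response is taken from the abstract.

```python
import numpy as np

def apply_angular_correction(computed_signal, beam_angles_deg, response_curve):
    """Scale the TPS-computed per-beam detector signal by the measured
    angular response so it can be compared with the array measurement.
    `response_curve` maps incidence angle (deg) -> relative response."""
    angles = np.asarray(sorted(response_curve))
    resp = np.asarray([response_curve[a] for a in angles])
    per_beam = np.interp(np.asarray(beam_angles_deg) % 360, angles, resp)
    # each beam's computed contribution is multiplied by the response it sees
    return computed_signal * per_beam[:, None]

# hypothetical response curve: ~8% under-response for a PA (180 deg) beam
curve = {0: 1.00, 90: 0.99, 180: 0.92, 270: 0.99, 360: 1.00}
beams = [10, 95, 182, 260]
doses = np.ones((4, 5))           # per-beam computed signal at 5 detector positions
print(apply_angular_correction(doses, beams, curve))
```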
Schmid, Karen Barros; Scherer, Luciene; Barcellos, Regina Bones; Kuhleis, Daniele; Prestes, Isaías Valente; Steffen, Ricardo Ewbank; Dalla Costa, Elis Regina; Rossetti, Maria Lucia Rosa
2014-12-16
Prison conditions can favor the spread of tuberculosis (TB). This study aimed to evaluate, in a Brazilian prison: the performance and accuracy of smear, culture and Detect-TB; the performance of smear plus culture and smear plus Detect-TB, according to different TB prevalence rates; and the cost-effectiveness of these procedures for pulmonary tuberculosis (PTB) diagnosis. This paper describes a cost-effectiveness study. A decision analytic model was developed to estimate the costs and cost-effectiveness of five routine diagnostic procedures for diagnosis of PTB using sputum specimens: a) smear alone, b) culture alone, c) Detect-TB alone, d) smear plus culture and e) smear plus Detect-TB. Cost-effectiveness was evaluated as the cost per correctly diagnosed TB case, and all procedure costs were based on the procedure costs adopted by the Brazilian Public Health System. A total of 294 spontaneous sputum specimens from patients suspected of having TB were analyzed. The sensitivity and specificity were calculated to be 47% and 100% for smear; 93% and 100% for culture; 74% and 95% for Detect-TB; 96% and 100% for smear plus culture; and 86% and 95% for smear plus Detect-TB. The negative and positive predictive values for smear plus Detect-TB, according to different TB prevalence rates, ranged from 83 to 99% and 48 to 96%, respectively. In the cost-effectiveness analysis, smear was both less costly and less effective than the other strategies. Culture and smear plus culture were more effective but more costly than the other strategies. Smear plus Detect-TB was the most cost-effective method. Detect-TB proved to be sensitive and effective for PTB diagnosis when applied with smear microscopy. Diagnostic methods should be improved to increase TB case detection. To support rational decisions about the implementation of such techniques, cost-effectiveness studies are essential, including in prisons, which are known for health care assessment problems.
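The prevalence-dependent predictive values quoted above follow directly from Bayes' rule applied to the reported sensitivity and specificity. A minimal sketch is shown below; the prevalence values are illustrative, not those used in the decision model.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values from Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# smear plus Detect-TB (sensitivity 86%, specificity 95%) across TB prevalences
for prev in (0.05, 0.10, 0.20, 0.40):
    ppv, npv = predictive_values(0.86, 0.95, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
```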
Fatigue Crack Growth Rate and Stress-Intensity Factor Corrections for Out-of-Plane Crack Growth
NASA Technical Reports Server (NTRS)
Forth, Scott C.; Herman, Dave J.; James, Mark A.
2003-01-01
Fatigue crack growth rate testing is performed by automated data collection systems that assume straight crack growth in the plane of symmetry and use standard polynomial solutions to compute crack length and stress-intensity factors from compliance or potential drop measurements. Visual measurements used to correct the collected data typically include only the horizontal crack length, which for cracks that propagate out-of-plane, under-estimates the crack growth rates and over-estimates the stress-intensity factors. The authors have devised an approach for correcting both the crack growth rates and stress-intensity factors based on two-dimensional mixed mode-I/II finite element analysis (FEA). The approach is used to correct out-of-plane data for 7050-T7451 and 2025-T6 aluminum alloys. Results indicate the correction process works well for high ΔK levels but fails to capture the mixed-mode effects at ΔK levels approaching threshold (da/dN ≈ 10^-10 m/cycle).
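Only the geometric part of such a correction is easy to illustrate: when a crack kinks out of plane, the true extension per cycle is the resultant of the horizontal and out-of-plane increments, so a horizontal-only measurement under-estimates da/dN. The sketch below shows this; the increments are hypothetical, and the paper's additional mixed mode-I/II stress-intensity correction from FEA is not reproduced here.

```python
import numpy as np

def corrected_growth(dx, dy):
    """Correct crack-extension increments for out-of-plane growth: the true
    increment is the resultant of the horizontal (dx) and out-of-plane (dy)
    components, so the horizontal length alone under-estimates da/dN."""
    return np.hypot(dx, dy)

dx = np.array([1.0e-7, 1.0e-7, 1.0e-7])   # horizontal increments, m/cycle
dy = np.array([0.0, 5.0e-8, 1.0e-7])      # out-of-plane increments, m/cycle
print(corrected_growth(dx, dy) / dx)       # ratio of corrected to horizontal-only rate
```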
Generic, scalable and decentralized fault detection for robot swarms.
Tarapore, Danesh; Christensen, Anders Lyhne; Timmis, Jon
2017-01-01
Robot swarms are large-scale multirobot systems with decentralized control, which means that each robot acts based only on local perception and on local coordination with neighboring robots. The decentralized approach to control confers a number of potential benefits. In particular, inherent scalability and robustness are often highlighted as key distinguishing features of robot swarms compared with systems that rely on traditional approaches to multirobot coordination. It has, however, been shown that swarm robotics systems are not always fault tolerant. To realize the robustness potential of robot swarms, it is thus essential to give systems the capacity to actively detect and accommodate faults. In this paper, we present a generic fault-detection system for robot swarms. We show how robots with limited and imperfect sensing capabilities are able to observe and classify the behavior of one another. In order to achieve this, the underlying classifier is an immune system-inspired algorithm that learns to distinguish between normal behavior and abnormal behavior online. Through a series of experiments, we systematically assess the performance of our approach in a detailed simulation environment. In particular, we analyze our system's capacity to correctly detect robots with faults, false positive rates, performance in a foraging task in which each robot exhibits a composite behavior, and performance under perturbations of the task environment. Results show that our generic fault-detection system is robust, that it is able to detect faults in a timely manner, and that it achieves a low false positive rate. The developed fault-detection system has the potential to enable long-term autonomy for robust multirobot systems, thus increasing the usefulness of robots for a diverse repertoire of upcoming applications in the area of distributed intelligent automation.
Behrens, A; Pech, O; Wuthnow, E; Manner, H; Pohl, J; May, A; Ell, C
2015-06-01
Detecting early neoplasias in Barrett's oesophagus (BE) is challenging. Recent publications have focused on improving the detection of such lesions during Barrett's surveillance. However, in a recently published Danish register study calculating the risk of cancer development in BE, two-thirds of the diagnosed tumours were identified during the first examination or within the first year. This means that index endoscopy might be more effective than surveillance in detecting early neoplasia in BE. In the period from January 2010 to April 2011, all patients who consecutively presented with a diagnosis of early neoplastic changes in BE were recorded prospectively. The analysis included data for 121 patients. In patients with short-segment BE (SSBE), neoplasia was only diagnosed in 6% of cases at a surveillance examination, compared with 44% of cases in long-segment BE (LSBE). The neoplastic lesion was identified visually in 43 patients (36%) during the external EGD. Type II tumours were detected in 40% (39/98) and were correctly assessed as neoplastic in 25% of cases (24/98). 1. In patients with SSBE, almost all early tumours are diagnosed by index endoscopy and not by Barrett's surveillance; 2. around 40% of all early neoplasias are endoscopically invisible and are only diagnosed using four-quadrant biopsies; 3. the macroscopic tumour type has a substantial influence on the detection rate for neoplasia. If efforts to increase the detection rate for early neoplasia in BE are focused solely on the Barrett's surveillance method, then only a minority of patients - 20% in the present group - will benefit from the measure. German clinical trials register, DRKS00 004 168. © Georg Thieme Verlag KG Stuttgart · New York.
Generic, scalable and decentralized fault detection for robot swarms
Christensen, Anders Lyhne; Timmis, Jon
2017-01-01
Robot swarms are large-scale multirobot systems with decentralized control, which means that each robot acts based only on local perception and on local coordination with neighboring robots. The decentralized approach to control confers a number of potential benefits. In particular, inherent scalability and robustness are often highlighted as key distinguishing features of robot swarms compared with systems that rely on traditional approaches to multirobot coordination. It has, however, been shown that swarm robotics systems are not always fault tolerant. To realize the robustness potential of robot swarms, it is thus essential to give systems the capacity to actively detect and accommodate faults. In this paper, we present a generic fault-detection system for robot swarms. We show how robots with limited and imperfect sensing capabilities are able to observe and classify the behavior of one another. In order to achieve this, the underlying classifier is an immune system-inspired algorithm that learns to distinguish between normal behavior and abnormal behavior online. Through a series of experiments, we systematically assess the performance of our approach in a detailed simulation environment. In particular, we analyze our system's capacity to correctly detect robots with faults, false positive rates, performance in a foraging task in which each robot exhibits a composite behavior, and performance under perturbations of the task environment. Results show that our generic fault-detection system is robust, that it is able to detect faults in a timely manner, and that it achieves a low false positive rate. The developed fault-detection system has the potential to enable long-term autonomy for robust multirobot systems, thus increasing the usefulness of robots for a diverse repertoire of upcoming applications in the area of distributed intelligent automation. PMID:28806756
Complete analog control of the carrier-envelope-phase of a high-power laser amplifier.
Feng, C; Hergott, J-F; Paul, P-M; Chen, X; Tcherbakoff, O; Comte, M; Gobert, O; Reduzzi, M; Calegari, F; Manzoni, C; Nisoli, M; Sansone, G
2013-10-21
In this work we demonstrate the development of a complete analog feedback loop for the control of the carrier-envelope phase (CEP) of a high-average power (20 W) laser operating at 10 kHz repetition rate. The proposed method combines a detection scheme working on a single-shot basis at the full-repetition-rate of the laser system with a fast actuator based either on an acousto-optic or on an electro-optic crystal. The feedback loop is used to correct the CEP fluctuations introduced by the amplification process demonstrating a CEP residual noise of 320 mrad measured on a single-shot basis. The comparison with a feedback loop operating at a lower sampling rate indicates an improvement up to 45% in the residual noise. The measurement of the CEP drift for different integration times clearly evidences the importance of the single-shot characterization of the residual CEP drift. The demonstrated scheme could be efficiently applied for systems approaching the 100 kHz repetition rate regime.
NASA Astrophysics Data System (ADS)
Tejedor, J.; Macias-Guarasa, J.; Martins, H. F.; Piote, D.; Pastor-Graells, J.; Martin-Lopez, S.; Corredera, P.; De Pauw, G.; De Smet, F.; Postvoll, W.; Ahlen, C. H.; Gonzalez-Herraez, M.
2017-04-01
This paper presents the first report on on-line and final blind field test results of a pipeline integrity threat surveillance system. The system integrates a machine+activity identification mode and a threat detection mode. Two different pipeline sections were selected for the blind tests: one close to the sensor position, and the other 35 km away from it. Results of the machine+activity identification mode showed that about 46% of the time the machine, the activity or both were correctly identified. For the threat detection mode, 8 out of 10 threats were correctly detected, with 1 false alarm.
Custodio, Nilton; Lira, David; Herrera-Perez, Eder; Montesinos, Rosa; Castro-Suarez, Sheila; Cuenca-Alfaro, José; Valeriano-Lorenzo, Lucía
2017-01-01
Background/Aims: Short tests for the early detection of cognitive impairment are needed in the primary care setting, particularly in populations with a low educational level. The aim of this study was to assess the performance of the Memory Alteration Test (M@T) in discriminating controls, patients with amnestic Mild Cognitive Impairment (aMCI) and patients with early Alzheimer's Dementia (AD) in a sample of individuals with a low level of education. Methods: Cross-sectional study to assess the performance of the M@T (study test), compared to the neuropsychological evaluation (gold standard test) scores, in 247 elderly subjects with a low education level from Lima, Peru. The cognitive evaluation included three sequential stages: (1) screening (to detect cases with cognitive impairment); (2) nosological diagnosis (to determine the specific disease); and (3) classification (to differentiate disease subtypes). Subjects with negative results for all stages were considered cognitively normal (controls). Test performance was assessed by means of the area under the receiver operating characteristic (ROC) curve. We calculated validity measures (sensitivity, specificity and percentage correctly classified), internal consistency (Cronbach's alpha coefficient), and concurrent validity (Pearson's correlation coefficient between the M@T and Clinical Dementia Rating (CDR) scores). Results: The Cronbach's alpha coefficient was 0.79 and Pearson's correlation coefficient was 0.79 (p < 0.01). The AUC of the M@T to discriminate between early AD and aMCI was 99.60% (sensitivity = 100.00%, specificity = 97.53%, correctly classified = 98.41%) and to discriminate between aMCI and controls was 99.56% (sensitivity = 99.17%, specificity = 91.11%, correctly classified = 96.99%). Conclusions: The M@T is a short test with good performance in discriminating controls, aMCI and early AD in individuals with a low level of education from urban settings. PMID:28878665
Wavefront detection method of a single-sensor based adaptive optics system.
Wang, Chongchong; Hu, Lifa; Xu, Huanyu; Wang, Yukun; Li, Dayu; Wang, Shaoxin; Mu, Quanquan; Yang, Chengliang; Cao, Zhaoliang; Lu, Xinghai; Xuan, Li
2015-08-10
In adaptive optics systems (AOS) for optical telescopes, the reported wavefront sensing strategy consists of two parts: a specific sensor for tip-tilt (TT) detection and another wavefront sensor for the detection of other distortions. Thus, part of the incident light has to be used for TT detection, which decreases the light energy available to the wavefront sensor and eventually reduces the precision of wavefront correction. In this paper, a single Shack-Hartmann wavefront sensor based wavefront measurement method is presented for measuring both large-amplitude TT and other distortions. Experiments were performed to test the presented wavefront measurement method and validate the wavefront detection and correction ability of the single-sensor based AOS. With adaptive correction, the root-mean-square of the residual TT was less than 0.2 λ, and a clear image was obtained in the lab. Equipped on a 1.23-meter optical telescope, binary stars with an angular separation of 0.6″ were clearly resolved using the AOS. This wavefront measurement method removes the separate TT sensor, which not only simplifies the AOS but also saves light energy for subsequent wavefront sensing and imaging, and eventually improves the detection and imaging capability of the AOS.
A median filter approach for correcting errors in a vector field
NASA Technical Reports Server (NTRS)
Schultz, H.
1985-01-01
Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
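As a generic illustration of the idea (the NSCAT-specific detection and replacement rules are not reproduced here), the sketch below flags vector components that deviate strongly from their local median and replaces them with that median; the threshold and window size are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def correct_vector_field(u, v, threshold=2.0, size=3):
    """Flag vector components that deviate from the local median by more than
    `threshold` times the local median absolute deviation, and replace the
    flagged values with the local median (one pass of a median filter)."""
    corrected = []
    for comp in (u, v):
        med = median_filter(comp, size=size)
        mad = median_filter(np.abs(comp - med), size=size) + 1e-12
        bad = np.abs(comp - med) > threshold * mad
        out = comp.copy()
        out[bad] = med[bad]
        corrected.append(out)
    return corrected  # [u_corrected, v_corrected]

# toy wind field with one spurious vector at the centre
u = np.full((5, 5), 8.0); v = np.full((5, 5), 2.0)
u[2, 2], v[2, 2] = -20.0, 15.0
u_c, v_c = correct_vector_field(u, v)
```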
Gilmore, Adam Matthew
2014-01-01
Contemporary spectrofluorimeters comprise exciting light sources, excitation and emission monochromators, and detectors that without correction yield data not conforming to an ideal spectral response. The correction of the spectral properties of the exciting and emission light paths first requires calibration of the wavelength and spectral accuracy. The exciting beam path can be corrected up to the sample position using a spectrally corrected reference detection system. The corrected reference response accounts for both the spectral intensity and drift of the exciting light source relative to emission and/or transmission detector responses. The emission detection path must also be corrected for the combined spectral bias of the sample compartment optics, emission monochromator, and detector. There are several crucial issues associated with both excitation and emission correction including the requirement to account for spectral band-pass and resolution, optical band-pass or neutral density filters, and the position and direction of polarizing elements in the light paths. In addition, secondary correction factors are described including (1) subtraction of the solvent's fluorescence background, (2) removal of Rayleigh and Raman scattering lines, as well as (3) correcting for sample concentration-dependent inner-filter effects. The importance of the National Institute of Standards and Technology (NIST) traceable calibration and correction protocols is explained in light of valid intra- and interlaboratory studies and effective spectral qualitative and quantitative analyses including multivariate spectral modeling.
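Of the secondary corrections listed, the inner-filter effect has a widely used closed-form approximation for a standard 1 cm cuvette with centred beam paths: the observed fluorescence is scaled by 10 raised to half the sum of the absorbances at the excitation and emission wavelengths. The sketch below applies that common approximation; it is illustrative and may differ in detail from the correction recommended in the chapter.

```python
import numpy as np

def inner_filter_correction(f_obs, a_ex, a_em):
    """Primary/secondary inner-filter correction for a 1 cm cuvette with
    centred excitation and emission paths:
    F_corr = F_obs * 10**((A_ex + A_em) / 2), where A_ex and A_em are the
    absorbances at the excitation and emission wavelengths."""
    return f_obs * 10.0 ** ((np.asarray(a_ex) + np.asarray(a_em)) / 2.0)

print(inner_filter_correction(1000.0, a_ex=0.10, a_em=0.05))   # ~1188.5
```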
Horn, Kevin M.
2013-07-09
A method reconstructs the charge collection from regions beneath opaque metallization of a semiconductor device, as determined from focused laser charge collection response images, and thereby derives a dose-rate dependent correction factor for subsequent broad-area, dose-rate equivalent, laser measurements. The position- and dose-rate dependencies of the charge-collection magnitude of the device are determined empirically and can be combined with a digital reconstruction methodology to derive an accurate metal-correction factor that permits subsequent absolute dose-rate response measurements to be derived from laser measurements alone. Broad-area laser dose-rate testing can thereby be used to accurately determine the peak transient current, dose-rate response of semiconductor devices to penetrating electron, gamma- and x-ray irradiation.
78 FR 53152 - Prescription Drug User Fee Rates for Fiscal Year 2014; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-28
...] Prescription Drug User Fee Rates for Fiscal Year 2014; Correction AGENCY: Food and Drug Administration, HHS... ``Prescription Drug User Fee Rates for Fiscal Year 2014'' that appeared in the Federal Register of August 2, 2013 (78 FR 46980). The document announced the Fiscal Year 2014 fee rates for the Prescription Drug User...
The NEEDS Data Base Management and Archival Mass Memory System
NASA Technical Reports Server (NTRS)
Bailey, G. A.; Bryant, S. B.; Thomas, D. T.; Wagnon, F. W.
1980-01-01
A Data Base Management System and an Archival Mass Memory System are being developed that will have a 10^12-bit on-line and a 10^13-bit off-line storage capacity. The integrated system will accept packetized data from the data staging area at 50 Mbps, create a comprehensive directory, provide for file management, record the data, perform error detection and correction, accept user requests, retrieve the requested data files and provide the data to multiple users at a combined rate of 50 Mbps. Stored and replicated data files will have a bit error rate of less than 10^-9 even after ten years of storage. The integrated system will be demonstrated to prove the technology late in 1981.
The use of a photoionization detector to detect harmful volatile chemicals by emergency personnel
Patel, Neil D; Fales, William D; Farrell, Robert N
2009-01-01
Objective The objective of this investigation was to determine if a photoionization detector (PID) could be used to detect the presence of a simulated harmful chemical on simulated casualties of a chemical release. Methods A screening protocol, based on existing radiation screening protocols, was developed for the purposes of the investigation. Three simulated casualties were contaminated with a simulated chemical agent and two groups of emergency responders were involved in the trials. The success–failure ratio of the participants was used to judge the performance of the PID in this application. Results A high success rate was observed when the screening protocol was properly adhered to (97.67%). Conversely, the success rate suffered when participants deviated from the protocol (86.31%). With one exception, all failures were noted to have been the result of a failure to correctly observe the established screening protocol. Conclusions The results of this investigation indicate that the PID may be an effective screening tool for emergency responders. However, additional study is necessary to both confirm the effectiveness of the PID and refine the screening protocol if necessary. PMID:27147829
Incidence of Speech-Correcting Surgery in Children With Isolated Cleft Palate.
Gustafsson, Charlotta; Heliövaara, Arja; Leikola, Junnu; Rautio, Jorma
2018-01-01
Speech-correcting surgeries (pharyngoplasty) are performed to correct velopharyngeal insufficiency (VPI). This study aimed to analyze the need for speech-correcting surgery in children with isolated cleft palate (ICP) and to determine differences among cleft extent, gender, and primary technique used. In addition, we assessed the timing and number of secondary procedures performed and the incidence of operated fistulas. This retrospective medical chart review study, based on hospital archives and electronic records, comprised 423 consecutive nonsyndromic children (157 males and 266 females) with ICP treated at the Cleft Palate and Craniofacial Center of Helsinki University Hospital during 1990 to 2016. The total incidence of VPI surgery was 33.3% and the fistula repair rate, 7.8%. Children with cleft of both the hard and soft palate (n = 300) had a VPI secondary surgery rate of 37.3% (fistula repair rate 10.7%), whereas children with only cleft of the soft palate (n = 123) had a corresponding rate of 23.6% (fistula repair rate 0.8%). Gender and primary palatoplasty technique were not significant factors in the need for VPI surgery. The majority of VPI surgeries were performed before school age. One fifth of patients receiving speech-correcting surgery had more than one subsequent procedure. The need for speech-correcting surgery and fistula repair was related to the severity of the cleft. Although the majority of the corrective surgeries were done before the age of 7 years, a considerable number were performed at a later stage, necessitating long-term observation.
Blauch, A J; Schiano, J L; Ginsberg, M D
2000-06-01
The performance of a nuclear resonance detection system can be quantified using binary detection theory. Within this framework, signal averaging increases the probability of a correct detection and decreases the probability of a false alarm by reducing the variance of the noise in the average signal. In conjunction with signal averaging, we propose another method based on feedback control concepts that further improves detection performance. By maximizing the nuclear resonance signal amplitude, feedback raises the probability of correct detection. Furthermore, information generated by the feedback algorithm can be used to reduce the probability of false alarm. We discuss the advantages afforded by feedback that cannot be obtained using signal averaging. As an example, we show how this method is applicable to the detection of explosives using nuclear quadrupole resonance. Copyright 2000 Academic Press.
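The detection-theory argument above can be illustrated numerically: averaging N independent acquisitions shrinks the noise standard deviation by sqrt(N), so the effective SNR grows and the probability of correct detection rises at a fixed false-alarm rate. The sketch below assumes a known signal in additive white Gaussian noise and standard SciPy routines; it is not the authors' NQR hardware or feedback algorithm.

    import numpy as np
    from scipy.stats import norm

    def detection_probability(snr_single, n_avg, p_false_alarm):
        """Probability of correct detection for a known signal in Gaussian noise.

        Averaging n_avg independent acquisitions reduces the noise standard
        deviation by sqrt(n_avg), so the effective SNR grows as sqrt(n_avg).
        The decision threshold is set from the desired false-alarm probability.
        """
        threshold = norm.ppf(1.0 - p_false_alarm)      # in units of noise sigma
        effective_snr = snr_single * np.sqrt(n_avg)
        return 1.0 - norm.cdf(threshold - effective_snr)

    for n in (1, 4, 16, 64):
        print(n, detection_probability(snr_single=0.5, n_avg=n, p_false_alarm=1e-3))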
NASA Astrophysics Data System (ADS)
Sun, Yankui; Li, Shan; Sun, Zhongyang
2017-01-01
We propose a framework for automated detection of dry age-related macular degeneration (AMD) and diabetic macular edema (DME) from retina optical coherence tomography (OCT) images, based on sparse coding and dictionary learning. The study aims to improve the classification performance of state-of-the-art methods. First, our method presents a general approach to automatically align and crop retina regions; then it obtains global representations of images by using sparse coding and a spatial pyramid; finally, a multiclass linear support vector machine classifier is employed for classification. We apply two datasets for validating our algorithm: Duke spectral domain OCT (SD-OCT) dataset, consisting of volumetric scans acquired from 45 subjects-15 normal subjects, 15 AMD patients, and 15 DME patients; and clinical SD-OCT dataset, consisting of 678 OCT retina scans acquired from clinics in Beijing-168, 297, and 213 OCT images for AMD, DME, and normal retinas, respectively. For the former dataset, our classifier correctly identifies 100%, 100%, and 93.33% of the volumes with DME, AMD, and normal subjects, respectively, and thus performs much better than the conventional method; for the latter dataset, our classifier leads to a correct classification rate of 99.67%, 99.67%, and 100.00% for DME, AMD, and normal images, respectively.
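A heavily simplified sketch of this kind of pipeline is given below using scikit-learn: a dictionary is learned from image patches, each image is sparse-coded and max-pooled into a global descriptor, and a linear SVM performs the multiclass decision. The synthetic patch data, dictionary size, and single-level pooling (standing in for the spatial pyramid) are assumptions for illustration only.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: 60 "OCT images", each represented by 100 patches of 64 pixels.
    patches = rng.normal(size=(60, 100, 64))
    labels = rng.integers(0, 3, size=60)        # 0=normal, 1=AMD, 2=DME (illustrative)

    # Learn a sparse dictionary from all training patches.
    dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
    dico.fit(patches.reshape(-1, 64))

    def image_descriptor(image_patches):
        # Sparse-code each patch, then max-pool the absolute codes over the image
        # (a single-level stand-in for the spatial pyramid used in the paper).
        codes = dico.transform(image_patches)
        return np.abs(codes).max(axis=0)

    X = np.vstack([image_descriptor(p) for p in patches])
    clf = LinearSVC(C=1.0).fit(X, labels)
    print("training accuracy:", clf.score(X, labels))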
Utility of the serum C-reactive protein for detection of occult bacterial infection in children.
Isaacman, Daniel J; Burke, Bonnie L
2002-09-01
To assess the utility of serum C-reactive protein (CRP) as a screen for occult bacterial infection in children. Febrile children ages 3 to 36 months who visited an urban children's hospital emergency department and received a complete blood cell count and blood culture as part of their evaluation were prospectively enrolled from February 2, 2000, through May 30, 2001. Informed consent was obtained for the withdrawal of an additional 1-mL aliquot of blood for use in CRP evaluation. Logistic regression and receiver operator characteristic (ROC) curves were modeled for each predictor to identify optimal test values, and were compared using likelihood ratio tests. Two hundred fifty-six patients were included in the analysis, with a median age of 15.3 months (range, 3.1-35.2 months) and median temperature at triage 40.0 degrees C (range, 39.0 degrees C-41.3 degrees C). Twenty-nine (11.3%) cases of occult bacterial infection (OBI) were identified, including 17 cases of pneumonia, 9 cases of urinary tract infection, and 3 cases of bacteremia. The median white blood cell count in this data set was 12.9 x 10^3/µL (range, 3.6-39.1 x 10^3/µL), the median absolute neutrophil count (ANC) was 7.12 x 10^3/L (range, 0.56-28.16 x 10^3/L), and the median CRP level was 1.7 mg/dL (range, 0.2-43.3 mg/dL). The optimal cut-off point for CRP in this data set (4.4 mg/dL) achieved a sensitivity of 63% and a specificity of 81% for detection of OBI in this population. Comparing models using cut-off values from individual laboratory predictors (ANC, white blood cell count, and CRP) that maximized sensitivity and specificity revealed that a model using an ANC of 10.6 x 10^3/L (sensitivity, 69%; specificity, 79%) was the best predictive model. Adding CRP to the model insignificantly increased sensitivity to 79%, while significantly decreasing specificity to 50%. Active monitoring of emergency department blood cultures drawn during the study period from children between 3 and 36 months of age showed an overall bacteremia rate of 1.1% during this period. An ANC cut-off point of 10.6 x 10^3/L offers the best predictive model for detection of occult bacterial infection using a single test. The addition of CRP to ANC adds little diagnostic utility. Furthermore, the lowered incidence of occult bacteremia in our population supports a decrease in the use of diagnostic screening in this population.
75 FR 11502 - Schedule of Water Charges; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-11
... DELAWARE RIVER BASIN COMMISSION 18 CFR Part 410 Schedule of Water Charges; Correction AGENCY: Delaware River Basin Commission. ACTION: Proposed rule; correction. SUMMARY: This document corrects the... of water charges. This correction clarifies that the amended rates are proposed to take effect in two...
Studying fish near ocean energy devices using underwater video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzner, Shari; Hull, Ryan E.; Harker-Klimes, Genevra EL
The effects of energy devices on fish populations are not well-understood, and studying the interactions of fish with tidal and instream turbines is challenging. To address this problem, we have evaluated algorithms to automatically detect fish in underwater video and propose a semi-automated method for ocean and river energy device ecological monitoring. The key contributions of this work are the demonstration of a background subtraction algorithm (ViBE) that detected 87% of human-identified fish events and is suitable for use in a real-time system to reduce data volume, and the demonstration of a statistical model to classify detections as fish or not fish that achieved a correct classification rate of 85% overall and 92% for detections larger than 5 pixels. Specific recommendations for underwater video acquisition to better facilitate automated processing are given. The recommendations will help energy developers put effective monitoring systems in place, and could lead to a standard approach that simplifies the monitoring effort and advances the scientific understanding of the ecological impacts of ocean and river energy devices.
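ViBE itself has no standard library implementation that can be assumed here, so the sketch below uses OpenCV's MOG2 background subtractor as a stand-in to show the same detect-then-filter-by-size workflow; the video path, subtractor parameters, and the 5-pixel area cutoff (echoing the reliability threshold reported above) are illustrative.

    import cv2

    cap = cv2.VideoCapture("underwater.mp4")     # path is illustrative
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                       # foreground mask
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        # Keep detections larger than 5 pixels; smaller blobs classify unreliably.
        detections = [stats[i] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 5]
        # Each entry holds (x, y, width, height, area) for one candidate fish event.

    cap.release()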
Shirah, Bader Hamza; Shirah, Hamza Asaad; Alhaidari, Wael Awad; Elraghi, Mohamed Ali; Chughtai, Mohammad Azam
2017-01-01
The diagnosis of acute appendicitis is mainly clinical and is correct in about 80% of patients, but 20-33% present with atypical findings, resulting in a negative appendectomy rate of 20-30%. The graded compression ultrasound method has been reported to have a sensitivity of 89% and a specificity of 95% in the diagnosis of acute appendicitis. In this study, we aim to evaluate graded compression ultrasonography in the diagnosis of acute appendicitis, its influence on the clinical judgment to operate, and its role in lowering the negative appendectomy rate. 1073 patients treated surgically for acute appendicitis between January 2005 and December 2014 were reviewed. Ultrasound findings, histopathological diagnosis, and positive or negative appendectomy rates were analyzed. 647 (60.3%) patients were males and 426 (39.7%) females. The mean age was 26.5 years. Positive ultrasound findings were recorded in 892 (83.13%), while negative findings were recorded in 181 (16.87%). Positive appendectomy was recorded in 983 (91.6%), while negative appendectomy was recorded in 90 (8.4%). The sensitivity was 83%, specificity was 100%, and the rate of negative appendectomy was 8.39%. The graded compression ultrasound technique is a useful modality, in addition to the clinical judgment of the surgeon and clinical findings, in detecting true positive cases of acute appendicitis, and thus reducing the negative appendectomy rate. Values of 100% specificity and an 8.4% negative appendectomy rate, or better, can be achieved when an experienced surgeon and a professional radiologist collaborate in the diagnosis of acute appendicitis.
Performance of the STIS CCD Dark Rate Temperature Correction
NASA Astrophysics Data System (ADS)
Branton, Doug; STScI STIS Team
2018-06-01
Since July 2001, the Space Telescope Imaging Spectrograph (STIS) onboard Hubble has operated on its Side-2 electronics due to a failure in the primary Side-1 electronics. While nearly identical, Side-2 lacks a functioning temperature sensor for the CCD, introducing a variability in the CCD operating temperature. Previous analysis utilized the CCD housing temperature telemetry to characterize the relationship between the housing temperature and the dark rate. It was found that a first-order 7%/°C uniform dark correction demonstrated a considerable improvement in the quality of dark subtraction on Side-2 era CCD data, and that value has been used on all Side-2 CCD darks since. In this report, we show how this temperature correction has performed historically. We compare the current 7%/°C value against the ideal first-order correction at a given time (which can vary between ~6%/°C and ~10%/°C) as well as against a more complex second-order correction that applies a unique slope to each pixel as a function of dark rate and time. At worst, the current correction has performed ~1% worse than the second-order correction. Additionally, we present initial evidence suggesting that the variability in pixel temperature-sensitivity is significant enough to warrant a temperature correction that considers pixels individually rather than correcting them uniformly.
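The first-order uniform correction described above amounts to scaling the dark rate by roughly 7% per degree Celsius of housing-temperature deviation from a reference. The sketch below illustrates that procedure only; the reference temperature, the default slope, and the function name are assumptions, not the pipeline's actual calibration constants.

    import numpy as np

    def correct_dark_rate(dark, t_housing, t_ref, slope=0.07):
        """Scale a measured dark frame to a reference housing temperature.

        dark      : measured dark-rate image (e-/s per pixel)
        t_housing : CCD housing temperature at the time of the dark (deg C)
        t_ref     : reference temperature the dark library is tied to (deg C)
        slope     : fractional dark-rate change per deg C (first-order, uniform)
        """
        return np.asarray(dark) / (1.0 + slope * (t_housing - t_ref))

    # A second-order scheme would instead fit a per-pixel slope as a function of
    # dark rate and time, replacing the single uniform `slope` above.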
Accuracy of contrast-enhanced ultrasound in the detection of bladder cancer
Nicolau, C; Bunesch, L; Peri, L; Salvador, R; Corral, J M; Mallofre, C; Sebastia, C
2011-01-01
Objective To assess the accuracy contrast-enhanced ultrasound (CEUS) in bladder cancer detection using transurethral biopsy in conventional cystoscopy as the reference standard and to determine whether CEUS improves the bladder cancer detection rate of baseline ultrasound. Methods 43 patients with suspected bladder cancer underwent conventional cystoscopy with transurethral biopsy of the suspicious lesions. 64 bladder cancers were confirmed in 33 out of 43 patients. Baseline ultrasound and CEUS were performed the day before surgery and the accuracy of both techniques for bladder cancer detection and number of detected tumours were analysed and compared with the final diagnosis. Results CEUS was significantly more accurate than ultrasound in determining presence or absence of bladder cancer: 88.37% vs 72.09%. Seven of eight uncertain baseline ultrasound results were correctly diagnosed using CEUS. CEUS sensitivity was also better than that of baseline ultrasound per number of tumours: 65.62% vs 60.93%. CEUS sensitivity for bladder cancer detection was very high for tumours larger than 5 mm (94.7%) but very low for tumours <5 mm (20%) and also had a very low negative predictive value (28.57%) in tumours <5 mm. Conclusion CEUS provided higher accuracy than baseline ultrasound for bladder cancer detection, being especially useful in non-conclusive baseline ultrasound studies. PMID:21123306
Schmidt, Jürgen; Laarousi, Rihab; Stolzmann, Wolfgang; Karrer-Gauß, Katja
2018-06-01
In this article, we examine the performance of different eye blink detection algorithms under various constraints. The goal of the present study was to evaluate the performance of an electrooculogram- and camera-based blink detection process in both manually and conditionally automated driving phases. A further comparison between alert and drowsy drivers was performed in order to evaluate the impact of drowsiness on the performance of blink detection algorithms in both driving modes. Data snippets from 14 monotonous manually driven sessions (mean 2 h 46 min) and 16 monotonous conditionally automated driven sessions (mean 2 h 45 min) were used. In addition to comparing two data-sampling frequencies for the electrooculogram measures (50 vs. 25 Hz) and four different signal-processing algorithms for the camera videos, we compared the blink detection performance of 24 reference groups. The analysis of the videos was based on very detailed definitions of eyelid closure events. The correct detection rates for the alert and manual driving phases (maximum 94%) decreased significantly in the drowsy (minus 2% or more) and conditionally automated (minus 9% or more) phases. Blinking behavior is therefore significantly impacted by drowsiness as well as by automated driving, resulting in less accurate blink detection.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
In order to improve the performance of the hard-decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed, supporting the reliability, stability, and high transmission rates required for 5G mobile communication. The algorithm builds on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bits corresponding to the erroneous code word are flipped repeatedly, searched in order of most likely error probability, until the correct code word is found. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm for different field orders by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
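The paper works with non-binary codes and soft reliability information; as a simpler illustration of the symbol/bit-flipping family it builds on, here is a plain binary Gallager-style bit-flipping decoder. The small parity-check matrix and the received word are toy values chosen for the example, not the authors' code or algorithm.

    import numpy as np

    def bit_flip_decode(H, r, max_iter=50):
        """Gallager-style hard-decision bit-flipping decoder.

        H : (m, n) binary parity-check matrix
        r : length-n hard-decision received word (0/1)
        Flips, at each iteration, the bits involved in the largest number of
        unsatisfied parity checks, until the syndrome is zero or max_iter is hit.
        """
        x = np.array(r, dtype=int) % 2
        for _ in range(max_iter):
            syndrome = H.dot(x) % 2
            if not syndrome.any():
                return x, True                   # valid codeword found
            fails = H.T.dot(syndrome)            # unsatisfied checks per bit
            x[fails == fails.max()] ^= 1         # flip the worst bits
        return x, False

    # Toy (7,4) Hamming-style example: codeword [1,0,1,1,0,1,0] with its third bit flipped.
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    received = np.array([1, 0, 0, 1, 0, 1, 0])
    print(bit_flip_decode(H, received))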
A novel method of forceps biopsy improves the diagnosis of proximal biliary malignancies.
Kulaksiz, Hasan; Strnad, Pavel; Römpp, Achim; von Figura, Guido; Barth, Thomas; Esposito, Irene; Schirmacher, Peter; Henne-Bruns, Doris; Adler, Guido; Stiehl, Adolf
2011-02-01
Tissue specimen collection represents a cornerstone in the diagnosis of proximal biliary tract malignancies, offering great specificity but only limited sensitivity. To improve the tumor detection rate, we developed a new method of forceps biopsy and compared it prospectively with endoscopic transpapillary brush cytology. 43 patients with proximal biliary stenoses suspicious for malignancy undergoing endoscopic retrograde cholangiography were prospectively recruited and subjected to both biopsy [using a double-balloon enteroscopy (DBE) forceps under the guidance of a pusher and guiding catheter with guidewire] and transpapillary brush cytology. The cytological/histological findings were compared with the final clinical diagnosis. 35 out of 43 patients had a malignant disease (33 cholangiocarcinomas, 1 hepatocellular carcinoma, 1 gallbladder carcinoma). The sensitivity of cytology and biopsy in these patients was 49 and 69%, respectively. The method with DBE forceps allowed a pinpoint biopsy of the biliary stenoses. Both methods had 100% specificity, and, when combined, 80% of malignant processes were detected. All patients with non-malignant conditions were correctly assigned by both methods. No clinically relevant complications were observed. The combination of forceps biopsy and transpapillary brush cytology is safe and offers superior detection rates compared to either method alone, and therefore represents a promising approach in the evaluation of proximal biliary tract processes.
Tanaka, Kenichi; Kajimoto, Tsuyoshi; Hayashi, Takahiro; Asanuma, Osamu; Hori, Masakazu; Kamo, Ken-Ichi; Sumida, Iori; Takahashi, Yutaka; Tateoka, Kunihiko; Bengua, Gerard; Sakata, Koh-Ichi; Endo, Satoru
2018-04-11
This study aims to demonstrate the feasibility of a method for estimating the strength of a moving brachytherapy source during implantation in a patient. Experiments were performed under the same conditions as in the actual treatment, except that the source was not implanted in a patient. The brachytherapy source selected for this study was 125I with an air kerma strength of 0.332 U (µGy·m²·h⁻¹), and the detector used was a plastic scintillator with dimensions of 10 cm × 5 cm × 5 cm. A calibration factor to convert the counting rate of the detector to the source strength was measured, and then the accuracy of the proposed method was investigated for a manually driven source. The accuracy was found to be under 10% when the shielding effect of additional needles for implantation at other positions was corrected, and about 30% when the shielding was not corrected. Even without shielding correction, the proposed method can detect a dead or dropped source, implantation of a source with the wrong strength, and a mistake in the number of sources implanted. Furthermore, when the correction was applied, the achieved accuracy came close to the 7% required to identify an Oncoseed 6711 125I seed with an unintended strength among the commercially supplied values of 0.392, 0.462 and 0.533 U.
Classification of ring artifacts for their effective removal using type adaptive correction schemes.
Anas, Emran Mohammad Abu; Lee, Soo Yeol; Hasan, Kamrul
2011-06-01
High resolution tomographic images acquired with a digital X-ray detector are often degraded by so-called ring artifacts. In this paper, a detailed analysis including the classification, detection and correction of these ring artifacts is presented. At first, a novel idea for classifying rings into two categories, namely type I and type II rings, is proposed based on their statistical characteristics. Defective detector elements and dusty scintillator screens result in type I rings, and mis-calibrated detector elements lead to type II rings. Unlike conventional approaches, we emphasize separate detection and correction schemes for each type of ring for their effective removal. For the detection of type I rings, the histogram of the responses of the detector elements is used, and a modified fast image inpainting algorithm is adopted to correct the responses of the defective pixels. On the other hand, to detect type II rings, first a simple filtering scheme based on the fast Fourier transform (FFT) is presented to smooth the sum curve derived from the type I ring corrected projection data. The difference between the sum curve and its smoothed version is then used to detect their positions. Then, to remove the constant bias with view angle suffered by the responses of the mis-calibrated detector elements, an estimated dc shift is subtracted from them. The performance of the proposed algorithm is evaluated using real micro-CT images and is compared with three recently reported algorithms. Simulation results demonstrate superior performance of the proposed technique as compared to the techniques reported in the literature. Copyright © 2011 Elsevier Ltd. All rights reserved.
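A minimal sketch of the type-II step described above: build the detector-wise sum curve of the projection data, low-pass it with an FFT filter, and subtract the residual (the constant dc shift of mis-calibrated detector elements) from the affected columns. The cutoff fraction and the sigma threshold are illustrative parameters, not values from the paper.

    import numpy as np

    def correct_type2_rings(sinogram, cutoff=0.05, threshold=3.0):
        """Detect and correct constant-bias (type II) ring artifacts in a sinogram.

        sinogram : (n_views, n_detectors) projection data
        cutoff   : fraction of Fourier coefficients kept when smoothing the sum curve
        threshold: dc shifts larger than `threshold` sigma are treated as rings
        """
        sino = np.asarray(sinogram, dtype=float)
        sum_curve = sino.sum(axis=0)                     # response summed over views

        # Smooth the sum curve by keeping only low spatial frequencies.
        spectrum = np.fft.rfft(sum_curve)
        keep = max(1, int(cutoff * spectrum.size))
        spectrum[keep:] = 0.0
        smooth = np.fft.irfft(spectrum, n=sum_curve.size)

        # The difference exposes mis-calibrated detector elements (constant bias).
        dc_shift = (sum_curve - smooth) / sino.shape[0]
        bad = np.abs(dc_shift) > threshold * dc_shift.std()
        corrected = sino.copy()
        corrected[:, bad] -= dc_shift[bad]               # remove the estimated bias
        return corrected, bad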
NASA Astrophysics Data System (ADS)
Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli
2018-06-01
Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, which effectively makes determining the zero velocity interval play a key role during normal walking. However, as walking gaits are complicated, and vary from person to person, it is difficult to detect walking gaits with a fixed threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of analyzing the characteristics of the pedestrian walk, a single direction angular rate gyro output is used to classify gait features. The angular rate data are modeled into a univariate Gaussian mixture model with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm and then the sliding window Viterbi algorithm is used to decode the gait. Walking data are collected through eight subjects walking along the same route at three different speeds; the leave-one-subject-out cross validation method is conducted to test the model. Experimental results show that the proposed algorithm can accurately detect different walking gaits of zero velocity interval. The location experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% when compared to the angular rate threshold method.
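A compact sketch of the model described above, assuming the hmmlearn package: a four-state left-right continuous HMM with a three-component Gaussian mixture per state, trained with Baum-Welch (fit) and decoded with Viterbi (predict). The placeholder gyro data and the specific left-right initialization are illustrative assumptions.

    import numpy as np
    from hmmlearn import hmm

    # Single-axis gyro samples from one walking trial (illustrative placeholder data).
    rng = np.random.default_rng(1)
    angular_rate = rng.normal(size=(2000, 1))

    # Four-state left-right CHMM, three Gaussian mixture components per state.
    model = hmm.GMMHMM(n_components=4, n_mix=3, covariance_type="diag",
                       n_iter=50, init_params="mcw", params="stmcw", random_state=1)

    # Left-right topology: each state may only stay or advance to the next state,
    # with the last state wrapping to the first so the gait cycle can repeat.
    model.startprob_ = np.array([1.0, 0.0, 0.0, 0.0])
    model.transmat_ = np.array([[0.9, 0.1, 0.0, 0.0],
                                [0.0, 0.9, 0.1, 0.0],
                                [0.0, 0.0, 0.9, 0.1],
                                [0.1, 0.0, 0.0, 0.9]])

    model.fit(angular_rate)                  # Baum-Welch parameter estimation
    states = model.predict(angular_rate)     # Viterbi decoding of the gait phases
    # Zero-velocity updates would be triggered in the state identified as stance.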
Detection of pre-symptomatic rose powdery-mildew and gray-mold diseases based on thermal vision
NASA Astrophysics Data System (ADS)
Jafari, M.; Minaei, S.; Safaie, N.
2017-09-01
Roses are the most important plants in ornamental horticulture. Roses are susceptible to a number of phytopathogenic diseases. Among the most serious diseases of rose, powdery mildew (Podosphaera pannosa var. rosae) and gray mold (Botrytis cinerea) are widespread which require considerable attention. In this study, the potential of implementing thermal imaging to detect the pre-symptomatic appearance of these fungal diseases was investigated. Effects of powdery mildew and gray mold diseases on rose plants (Rosa hybrida L.) were examined by two experiments conducted in a growth chamber. To classify the healthy and infected plants, feature selection was carried out and the best extracted thermal features with the largest linguistic hedge values were chosen. Two neuro-fuzzy classifiers were trained to distinguish between the healthy and infected plants. Best estimation rates of 92.55% and 92.3% were achieved in training and testing the classifier with 8 clusters in order to identify the leaves infected with powdery mildew. In addition, the best estimation rates of 97.5% and 92.59% were achieved in training and testing the classifier with 4 clusters to identify the gray mold disease on flowers. Performance of the designed neuro-fuzzy classifiers were evaluated with the thermal images captured using an automatic imaging setup. Best correct estimation rates of 69% and 80% were achieved (on the second day post-inoculation) for pre-symptomatic appearance detection of powdery mildew and gray mold diseases, respectively.
Abele-Horn, Marianne; Hommers, Leif; Trabold, René; Frosch, Matthias
2006-01-01
We evaluated the ability of the new VITEK 2 version 4.01 software to identify and detect glycopeptide-resistant enterococci compared to that of the reference broth microdilution method and to classify them into the vanA, vanB, vanC1, and vanC2 genotypes. Moreover, the accuracy of antimicrobial susceptibility testing with agents with improved potencies against glycopeptide-resistant enterococci was determined. A total of 121 enterococci were investigated. The new VITEK 2 software was able to identify 114 (94.2%) enterococcal strains correctly to the species level and to classify 119 (98.3%) enterococci correctly to the glycopeptide resistance genotype level. One Enterococcus casseliflavus strain and six Enterococcus faecium vanA strains with low-level resistance to vancomycin were identified with low discrimination, requiring additional tests. One of the vanA strains was misclassified as the vanB type, and one glycopeptide-susceptible E. facium wild type was misclassified as the vanA type. The overall essential agreements for antimicrobial susceptibility testing results were 94.2% for vancomycin, 95.9% for teicoplanin, 100% for quinupristin-dalfopristin and moxifloxacin, and 97.5% for linezolid. The rates of minor errors were 9% for teicoplanin and 5% for the other antibiotic agents. The identification and susceptibility data were produced within 4 h to 6 h 30 min and 8 h 15 min to 12 h 15 min. In conclusion, use of VITEK 2 version 4.01 software appears to be a reliable method for the identification and detection of glycopeptide-resistant enterococci as well as an improvement over the use of the former VITEK 2 database. However, a significant reduction in the detection time would be desirable. PMID:16390951
Lu, Hongwei; Zhang, Chenxi; Sun, Ying; Hao, Zhidong; Wang, Chunfang; Tian, Jiajia
2015-08-01
Predicting the termination of paroxysmal atrial fibrillation (AF) may provide a signal to decide whether timely intervention in the AF is needed. We proposed a novel RdR RR-interval scatter plot in our study. The abscissa of the RdR scatter plot was set to the RR interval and the ordinate to the difference between successive RR intervals. The RdR scatter plot thus combines RR-interval information with the difference between successive RR intervals, capturing more heart rate variability (HRV) information. RdR scatter plot analysis of one minute of RR intervals for 50 segments with non-terminating AF and immediately terminating AF showed that the points in the RdR scatter plot of non-terminating AF were more dispersed than those of immediately terminating AF. By dividing the RdR scatter plot into uniform grids and counting the number of non-empty grids, non-terminating AF and immediately terminating AF segments were differentiated. Using 49 RR intervals, 17 of 20 segments in the learning set and 20 of 30 segments in the test set were correctly detected; using 66 RR intervals, 16 of 18 segments in the learning set and 20 of 28 segments in the test set were detected. The results demonstrated that during the last minute before the termination of paroxysmal AF, the variance of the RR intervals and the difference between neighboring RR intervals became smaller. The termination of paroxysmal AF could be successfully predicted using the RdR scatter plot, although the prediction accuracy needs further improvement.
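A minimal sketch of the RdR construction and the non-empty-grid count used to separate non-terminating from terminating AF; the grid size, the decision threshold, and the synthetic RR series are illustrative assumptions rather than values taken from the study.

    import numpy as np

    def rdr_nonempty_grids(rr_intervals, grid_size=0.025):
        """Count non-empty grid cells in the RdR scatter plot.

        Abscissa: RR interval (s); ordinate: difference between successive RR
        intervals (s). More occupied cells means more dispersed points, which the
        study associates with non-terminating AF.
        """
        rr = np.asarray(rr_intervals, dtype=float)
        x = rr[1:]                      # RR interval
        y = np.diff(rr)                 # difference between successive RR intervals
        ix = np.floor(x / grid_size).astype(int)
        iy = np.floor(y / grid_size).astype(int)
        return len(set(zip(ix.tolist(), iy.tolist())))

    rr = 0.45 + 0.12 * np.random.default_rng(2).random(50)   # 49 RR-interval pairs
    count = rdr_nonempty_grids(rr)
    terminating_soon = count < 30       # threshold would be learned from the training set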
ASCS online fault detection and isolation based on an improved MPCA
NASA Astrophysics Data System (ADS)
Peng, Jianxin; Liu, Haiou; Hu, Yuhui; Xi, Junqiang; Chen, Huiyan
2014-09-01
Multi-way principal component analysis (MPCA) has received considerable attention and been widely used in process monitoring. A traditional MPCA algorithm unfolds multiple batches of historical data into a two-dimensional matrix and cuts the matrix along the time axis to form subspaces. However, low efficiency of the subspaces and difficult fault isolation are common disadvantages of the principal component model. This paper presents a new subspace construction method based on a kernel density estimation function that can effectively reduce the storage required for the subspace information. The MPCA model and the knowledge base are built on the new subspace. Then, fault detection and isolation with the squared prediction error (SPE) statistic and Hotelling's T² statistic are realized in process monitoring. When a fault occurs, fault isolation based on the SPE statistic is achieved by residual contribution analysis of the different variables. For fault isolation of a subspace based on the T² statistic, the relationship between the statistic indicator and the state variables is constructed, and constraint conditions are presented to check the validity of the fault isolation. Then, to improve the robustness of fault isolation to unexpected disturbances, a statistical method is adopted to relate single subspaces to multiple subspaces and increase the rate of correct fault isolation. Finally, fault detection and isolation based on the improved MPCA is used to monitor the automatic shift control system (ASCS) to prove the correctness and effectiveness of the algorithm. The research proposes a new subspace construction method to reduce the required storage capacity and improve the robustness of the principal component model, and establishes the relationship between the state variables and the fault detection indicators for fault isolation.
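As a generic sketch of the SPE and Hotelling's T² monitoring statistics named above, the snippet below uses ordinary PCA rather than the paper's kernel-density-based multi-way subspaces; the synthetic training data, the number of components, and the fault offset are illustrative, and control limits are omitted.

    import numpy as np
    from sklearn.decomposition import PCA

    # Normal-operation training data: rows are samples, columns are state variables.
    rng = np.random.default_rng(3)
    X_train = rng.normal(size=(500, 8))

    mean, std = X_train.mean(axis=0), X_train.std(axis=0)
    Z = (X_train - mean) / std
    pca = PCA(n_components=3).fit(Z)

    def monitoring_statistics(x_new):
        """Return (T2, SPE, per-variable contributions) for one new observation."""
        z = (np.asarray(x_new) - mean) / std
        scores = pca.transform(z.reshape(1, -1))[0]
        t2 = np.sum(scores ** 2 / pca.explained_variance_)       # Hotelling's T^2
        residual = z - pca.inverse_transform(scores.reshape(1, -1))[0]
        spe = np.sum(residual ** 2)                               # squared prediction error
        contributions = residual ** 2    # point to the variables driving a fault
        return t2, spe, contributions

    t2, spe, contrib = monitoring_statistics(
        rng.normal(size=8) + np.array([0, 0, 4, 0, 0, 0, 0, 0]))  # fault on variable 3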
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knill, C; Wayne State University School of Medicine, Detroit, MI; Snyder, M
Purpose: PTW’s Octavius 1000 SRS array performs IMRT QA measurements with liquid filled ionization chambers (LICs). Collection efficiencies of LICs have been shown to change during IMRT delivery as a function of LINAC pulse frequency and pulse dose, which affects QA results. In this study, two methods were developed to correct changes in collection efficiencies during IMRT QA measurements, and the effects of these corrections on QA pass rates were compared. Methods: For the first correction, Matlab software was developed that calculates pulse frequency and pulse dose for each detector, using measurement and DICOM RT Plan files. Pulse information is converted to collection efficiency, and measurements are corrected by multiplying detector dose by ratios of calibration to measured collection efficiencies. For the second correction, MU/min in daily 1000 SRS calibration was chosen to match the average MU/min of the VMAT plan. Usefulness of the derived corrections was evaluated using 6MV and 10FFF SBRT RapidArc plans delivered to the OCTAVIUS 4D system using a TrueBeam equipped with an HD-MLC. Effects of the two corrections on QA results were examined by performing 3D gamma analysis comparing predicted to measured dose, with and without corrections. Results: After complex Matlab corrections, average 3D gamma pass rates improved by [0.07%,0.40%,1.17%] for 6MV and [0.29%,1.40%,4.57%] for 10FFF using [3%/3mm,2%/2mm,1%/1mm] criteria. Maximum changes in gamma pass rates were [0.43%,1.63%,3.05%] for 6MV and [1.00%,4.80%,11.2%] for 10FFF using [3%/3mm,2%/2mm,1%/1mm] criteria. On average, pass rates of simple daily calibration corrections were within 1% of complex Matlab corrections. Conclusion: Ion recombination effects can potentially be clinically significant for OCTAVIUS 1000 SRS measurements, especially for higher pulse dose unflattened beams when using tighter gamma tolerances. Matching daily 1000 SRS calibration MU/min to average planned MU/min is a simple correction that greatly reduces ion recombination effects, improving measurement accuracy and gamma pass rates. This work was supported by PTW.
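The per-detector correction described amounts to multiplying the measured dose by the ratio of calibration to measurement collection efficiencies. The sketch below shows only that final step with hypothetical efficiency values; a real implementation would first derive the efficiencies from pulse dose and pulse frequency as the abstract describes.

    def corrected_dose(measured_dose, ce_calibration, ce_measurement):
        """Correct a liquid-ionization-chamber reading for recombination losses.

        measured_dose  : dose reported by the detector during the IMRT delivery
        ce_calibration : collection efficiency under daily-calibration conditions
        ce_measurement : collection efficiency under the plan's pulse dose/frequency
        """
        return measured_dose * (ce_calibration / ce_measurement)

    # Example: plan delivered with lower collection efficiency than the calibration.
    print(corrected_dose(2.00, ce_calibration=0.975, ce_measurement=0.955))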
On the independence of visual awareness and metacognition: a signal detection theoretic analysis.
Jachs, Barbara; Blanco, Manuel J; Grantham-Hill, Sarah; Soto, David
2015-04-01
Classically, visual awareness and metacognition are thought to be intimately linked, with our knowledge of the correctness of perceptual choices (henceforth metacognition) being dependent on the level of stimulus awareness. Here we used a signal detection theoretic approach involving a Gabor orientation discrimination task in conjunction with trial-by-trial ratings of perceptual awareness and response confidence in order to gauge estimates of type-1 (perceptual) orientation sensitivity and type-2 (metacognitive) sensitivity at different levels of stimulus awareness. Data from three experiments indicate that while the level of stimulus awareness had a profound impact on type-1 perceptual sensitivity, the awareness effect on type-2 metacognitive sensitivity was far lower by comparison. The present data pose a challenge for signal detection theoretic models in which both type-1 (perceptual) and type-2 (metacognitive) processes are assumed to operate on the same input. More broadly, the findings challenge the commonly held view that metacognition is tightly coupled to conscious states. (c) 2015 APA, all rights reserved.
Age group classification and gender detection based on forced expiratory spirometry.
Cosgun, Sema; Ozbek, I Yucel
2015-08-01
This paper investigates the utility of forced expiratory spirometry (FES) test with efficient machine learning algorithms for the purpose of gender detection and age group classification. The proposed method has three main stages: feature extraction, training of the models and detection. In the first stage, some features are extracted from volume-time curve and expiratory flow-volume loop obtained from FES test. In the second stage, the probabilistic models for each gender and age group are constructed by training Gaussian mixture models (GMMs) and Support vector machine (SVM) algorithm. In the final stage, the gender (or age group) of test subject is estimated by using the trained GMM (or SVM) model. Experiments have been evaluated on a large database from 4571 subjects. The experimental results show that average correct classification rate performance of both GMM and SVM methods based on the FES test is more than 99.3 % and 96.8 % for gender and age group classification, respectively.
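A sketch of the GMM branch described above: fit one Gaussian mixture per class on spirometry features and assign a test subject to the class whose model yields the highest log-likelihood. The feature values, class means, and mixture size are illustrative assumptions; the SVM branch would simply swap in an sklearn SVC.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    # Illustrative FES features (e.g., FEV1, FVC, PEF, FEV1/FVC) for two classes.
    X_male = rng.normal(loc=[4.0, 5.0, 9.0, 0.8], scale=0.4, size=(200, 4))
    X_female = rng.normal(loc=[3.0, 3.8, 7.0, 0.8], scale=0.4, size=(200, 4))

    models = {
        "male": GaussianMixture(n_components=3, random_state=0).fit(X_male),
        "female": GaussianMixture(n_components=3, random_state=0).fit(X_female),
    }

    def classify(features):
        # Pick the class whose mixture assigns the highest log-likelihood.
        scores = {label: m.score_samples(features.reshape(1, -1))[0]
                  for label, m in models.items()}
        return max(scores, key=scores.get)

    print(classify(np.array([3.9, 5.1, 8.8, 0.79])))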
Kelly, J F Daniel; Downey, Gerard
2005-05-04
Fourier transform infrared spectroscopy and attenuated total reflection sampling have been used to detect adulteration of single strength apple juice samples. The sample set comprised 224 authentic apple juices and 480 adulterated samples. Adulterants used included partially inverted cane syrup (PICS), beet sucrose (BS), high fructose corn syrup (HFCS), and a synthetic solution of fructose, glucose, and sucrose (FGS). Adulteration was carried out on individual apple juice samples at levels of 10, 20, 30, and 40% w/w. Spectral data were compressed by principal component analysis and analyzed using k-nearest neighbors and partial least squares regression techniques. Prediction results for the best classification models achieved an overall (authentic plus adulterated) correct classification rate of 96.5, 93.9, 92.2, and 82.4% for PICS, BS, HFCS, and FGS adulterants, respectively. This method shows promise as a rapid screening technique for the detection of a broad range of potential adulterants in apple juice.
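A generic sketch of the PCA-compression plus k-nearest-neighbours classification step applied to the ATR-FTIR spectra; synthetic spectra stand in for the juice data, and the component count, neighbour count, and artificial offset are illustrative. The other reported technique, partial least squares regression, is available in scikit-learn as PLSRegression.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(6)
    # Synthetic "spectra": 200 authentic and 200 adulterated samples, 600 wavenumbers.
    authentic = rng.normal(size=(200, 600))
    adulterated = rng.normal(size=(200, 600)) + 0.3    # small systematic offset
    X = np.vstack([authentic, adulterated])
    y = np.array([0] * 200 + [1] * 200)                # 0 = authentic, 1 = adulterated

    clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))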
2014-01-01
Background Osmotic demyelination syndrome (ODS) may be observed as a result of a rapid change in serum osmolarity, such as that induced by an overly rapid correction of serum sodium levels in hyponatraemic patients. Case presentation We describe the case of a 21-year-old woman who was hospitalized at week 10 of gestation because of severe hyperemesis. At admission the patient appeared restless and confused, and severe hyponatraemia (serum sodium 107 mmol/L) and hypokalemia (serum potassium 1.1 mmol/L) were detected. Active and simultaneous correction of these imbalances led to an overly rapid increase of serum sodium levels (17 mmol/L in the first 24 hours). Isotonic saline solution was stopped and replaced by 5% dextrose solution infusion. However, the neurological alterations worsened and the radiological features were consistent with the diagnosis of extra-pontine ODS. Steroids were administered intravenously with progressive improvement of biochemical and clinical abnormalities. At the time of discharge, 20 days later, the patient was able to walk and eat autonomously with only minimal external support. Conclusions This report illustrates an unusual case of ODS that occurred after an excessive rate of correction of hyponatraemia obtained with isotonic saline infusion. Hypokalemia and its active correction very likely played a crucial role in facilitating the onset of ODS. This interesting aspect will be explained in detail in the article. A more cautious and thoughtful correction of electrolyte alterations would probably have prevented the onset of ODS in this patient. Physicians should be aware of the possibly fatal consequences that an exceedingly rapid change of serum osmolarity may have and should strictly follow the known safety measures to prevent it from occurring. PMID:24725751
The Anglo-Australian Planet Search XXIV: The Frequency of Jupiter Analogs
NASA Astrophysics Data System (ADS)
Wittenmyer, Robert A.; Butler, R. P.; Tinney, C. G.; Horner, Jonathan; Carter, B. D.; Wright, D. J.; Jones, H. R. A.; Bailey, J.; O'Toole, Simon J.
2016-03-01
We present updated simulations of the detectability of Jupiter analogs by the 17-year Anglo-Australian Planet Search. The occurrence rate of Jupiter-like planets that have remained near their formation locations beyond the ice line is a critical datum necessary to constrain the details of planet formation. It is also vital in our quest to fully understand how common (or rare) planetary systems like our own are in the Galaxy. From a sample of 202 solar-type stars, and correcting for imperfect detectability on a star-by-star basis, we derive a frequency of 6.2 (+2.8/-1.6)% for giant planets in orbits from 3 to 7 au. When a consistent definition of “Jupiter analog” is used, our results are in agreement with those from other legacy radial-velocity surveys.
Defining the best quality-control systems by design and inspection.
Hinckley, C M
1997-05-01
Not all of the many approaches to quality control are equally effective. Nonconformities in laboratory testing are caused basically by excessive process variation and mistakes. Statistical quality control can effectively control process variation, but it cannot detect or prevent most mistakes. Because mistakes or blunders are frequently the dominant source of nonconformities, we conclude that statistical quality control by itself is not effective. I explore the 100% inspection methods essential for controlling mistakes. Unlike the inspection techniques that Deming described as ineffective, the new "source" inspection methods can detect mistakes and enable corrections before nonconformities are generated, achieving the highest degree of quality at a fraction of the cost of traditional methods. Key relationships between task complexity and nonconformity rates are also described, along with cultural changes that are essential for implementing the best quality-control practices.
GNSS Signal Authentication Via Power and Distortion Monitoring
NASA Astrophysics Data System (ADS)
Wesson, Kyle D.; Gross, Jason N.; Humphreys, Todd E.; Evans, Brian L.
2018-04-01
We propose a simple low-cost technique that enables civil Global Positioning System (GPS) receivers and other civil global navigation satellite system (GNSS) receivers to reliably detect carry-off spoofing and jamming. The technique, which we call the Power-Distortion detector, classifies received signals as interference-free, multipath-afflicted, spoofed, or jammed according to observations of received power and correlation function distortion. It does not depend on external hardware or a network connection and can be readily implemented on many receivers via a firmware update. Crucially, the detector can with high probability distinguish low-power spoofing from ordinary multipath. In testing against over 25 high-quality empirical data sets yielding over 900,000 separate detection tests, the detector correctly alarms on all malicious spoofing or jamming attacks while maintaining a <0.6% single-channel false alarm rate.
Gilhuley, Kathleen; Cianciminio-Bordelon, Diane; Tang, Yi-Wei
2012-01-01
We compared the performance characteristics of culture and the Cepheid Xpert vanA assay for routine surveillance of vancomycin-resistant enterococci (VRE) from rectal swabs in patients at high risk for VRE carriage. The Cepheid Xpert vanA assay had a limit of detection of 100 CFU/ml and correctly detected 101 well-characterized clinical VRE isolates with no cross-reactivity in 27 non-VRE and related culture isolates. The clinical sensitivity, specificity, positive predictive value, and negative predictive value of the Xpert vanA PCR assay were 100%, 96.9%, 91.3%, and 100%, respectively, when tested on 300 consecutively collected rectal swabs. This assay provides excellent predictive values for prompt identification of VRE-colonized patients in hospitals with relatively high rates of VRE carriage. PMID:22972822
Imaging the fetal central nervous system
De Keersmaecker, B.; Claus, F.; De Catte, L.
2011-01-01
The low prevalence of fetal central nervous system anomalies results in a restricted level of exposure and limited experience for most of the obstetricians involved in prenatal ultrasound. Sonographic guidelines for screening the fetal brain in a systematic way will probably increase the detection rate and facilitate correct referral to a tertiary care center, where the patient can be offered a multidisciplinary approach to the condition. This paper aims to elaborate on prenatal sonographic and magnetic resonance imaging (MRI) diagnosis and outcome of various central nervous system malformations. Detailed neurosonographic investigation has become available through high resolution vaginal ultrasound probes and the development of a variety of 3D ultrasound modalities, e.g. ultrasound tomographic imaging. In addition, fetal MRI is particularly helpful in the detection of gyration and neurulation anomalies and disorders of the gray and white matter. PMID:24753859
Electronic Nose: A Promising Tool For Early Detection Of Alicyclobacillus spp In Soft Drinks
NASA Astrophysics Data System (ADS)
Concina, I.; Bornšek, M.; Baccelliere, S.; Falasconi, M.; Sberveglieri, G.
2009-05-01
In the present work we investigate the potential use of the Electronic Nose EOS835 (SACMI scarl, Italy) for early detection of Alicyclobacillus spp. in two flavoured soft drinks. These bacteria have been acknowledged by producer companies as major quality control target microorganisms because of their ability to survive commercial pasteurization processes and produce taint compounds in the final product. The Electronic Nose was able to distinguish between uncontaminated and contaminated products before the taint metabolites were identifiable by an untrained panel. Classification tests showed an excellent rate of correct classification for both drinks (from 86% up to 100%). High performance liquid chromatography analyses showed no presence of the main metabolite at a level of 200 ppb, thus confirming the ability of the Electronic Nose technology to perform an actual early diagnosis of contamination.
Combinatorial pulse position modulation for power-efficient free-space laser communications
NASA Technical Reports Server (NTRS)
Budinger, James M.; Vanderaar, M.; Wagner, P.; Bibyk, Steven
1993-01-01
A new modulation technique called combinatorial pulse position modulation (CPPM) is presented as a power-efficient alternative to quaternary pulse position modulation (QPPM) for direct-detection, free-space laser communications. The special case of 16C4PPM is compared to QPPM in terms of data throughput and bit error rate (BER) performance for similar laser power and pulse duty cycle requirements. The increased throughput from CPPM enables the use of forward error corrective (FEC) encoding for a net decrease in the amount of laser power required for a given data throughput compared to uncoded QPPM. A specific, practical case of coded CPPM is shown to reduce the amount of power required to transmit and receive a given data sequence by at least 4.7 dB. Hardware techniques for maximum likelihood detection and symbol timing recovery are presented.
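A quick check of the throughput gain behind this scheme: QPPM carries 2 bits per 4-slot symbol, while 16C4 combinatorial PPM carries floor(log2(C(16,4))) usable bits per 16-slot symbol at the same 1-pulse-in-4-slots duty cycle. The snippet simply computes both figures; the specific coding overhead quoted in the abstract is not reproduced here.

    from math import comb, log2, floor

    qppm_bits_per_slot = 2 / 4                           # 1 pulse in 4 slots, 2 bits/symbol

    n_slots, n_pulses = 16, 4                            # 16C4 combinatorial PPM
    symbols = comb(n_slots, n_pulses)                    # 1820 distinct pulse patterns
    cppm_bits_per_slot = floor(log2(symbols)) / n_slots  # 10 usable bits per 16 slots

    print(qppm_bits_per_slot, cppm_bits_per_slot)        # 0.5 vs 0.625 bits per slot
    # The extra capacity can carry forward-error-correction parity at no cost in
    # laser duty cycle, which is where the net power saving comes from.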
AESA diagnostics in operational environments
NASA Astrophysics Data System (ADS)
Hull, W. P.
The author discusses some possible solutions to AESA (active electronically scanned array) diagnostics in the operational environment using built-in testing (BIT), which can play a key role in reducing life-cycle cost if accurately implemented. He notes that it is highly desirable to detect and correct in the operational environment all degradation that impairs mission performance. This degradation must be detected with a low false alarm rate and the appropriate action initiated consistent with low life-cycle cost. Mutual coupling is considered as a BIT signal injection method and is shown to have potential. However, the limits of the diagnostic capability using this method clearly depend on its stability and on the level of multipath for a specific application. BIT using mutual coupling may need to be supplemented on the ground by an externally mounted passive antenna that interfaces with onboard avionics.
Rowlands, Derek J
2012-01-01
The QT interval on the electrocardiogram is an increasingly important measurement, especially in relation to drug action and interaction. The QT interval varies inversely as the heart rate and numerous rate correction formulae have been proposed. It is difficult to compare the effect of applying different formulae at different heart rates and for different measured QT intervals. A simple graphical display of the results from different formulae is proposed. This display is dependent on the concept of the absolute correction factor. This graphical presentation is useful (a) in comparing the effect of the application of different formulae and (b) in directly reading the correction produced by any individual formula. Copyright © 2012 Elsevier Inc. All rights reserved.
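The sketch below compares several common rate-correction formulae and reports the absolute correction (QTc − QT) that the proposed display is built around; the formula set is standard (Bazett, Fridericia, Framingham, Hodges), but the graphical display itself is not reproduced here.

    import numpy as np

    def qtc(qt_s, rr_s):
        """Return the rate-corrected QT (seconds) from several common formulae.

        qt_s : measured QT interval in seconds
        rr_s : RR interval in seconds (60 / heart rate in beats per minute)
        """
        return {
            "Bazett":     qt_s / np.sqrt(rr_s),
            "Fridericia": qt_s / np.cbrt(rr_s),
            "Framingham": qt_s + 0.154 * (1.0 - rr_s),
            "Hodges":     qt_s + 0.00175 * (60.0 / rr_s - 60.0),
        }

    qt, rr = 0.36, 0.75                     # QT of 360 ms at 80 beats/min
    for name, value in qtc(qt, rr).items():
        print(f"{name:11s} QTc={value*1000:5.0f} ms  "
              f"absolute correction={(value - qt)*1000:4.0f} ms")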
The Impact of Traumatic Brain Injury on Prison Health Services and Offender Management.
Piccolino, Adam L; Solberg, Kenneth B
2014-07-01
A large percentage of incarcerated offenders report a history of traumatic brain injury (TBI) with concomitant neuropsychiatric and social sequelae. However, research looking at the relationship between TBI and delivery of correctional health services and offender management is limited. In this study, the relationships between TBI and use of correctional medical/psychological services, chemical dependency (CD) treatment completion rates, in-prison rule infractions, and recidivism were investigated. Findings indicated that TBI history has a statistically significant association with increased usage of correctional medical/psychological services, including crisis interventions services, and with higher recidivism rates. Results also showed a trend toward offenders with TBI incurring higher rates of in-prison rule infractions and lower rates of CD treatment completion. Implications and future directions for correctional systems are discussed. © The Author(s) 2014.
Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.
Dixit, Purushottam D; Dill, Ken A
2018-02-13
Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C2M2), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.
Haeussinger, F B; Dresler, T; Heinzel, S; Schecklmann, M; Fallgatter, A J; Ehlis, A-C
2014-07-15
Functional near-infrared spectroscopy (fNIRS) is an optical neuroimaging method that detects temporal concentration changes of oxygenated and deoxygenated hemoglobin within the cortex, so that neural activation can be inferred. However, even though fNIRS is a very practical and well-tolerated method with several advantages particularly in methodically challenging measurement situations (e.g., during tasks involving movement or open speech), it has been shown to be confounded by systemic signal components of non-cerebral, extra-cranial origin (e.g. changes in blood pressure, heart rate). In particular, event-related signal patterns induced by dilation or constriction of superficial forehead and temple veins impair the detection of frontal brain activation elicited by cognitive tasks. To further investigate this phenomenon, we conducted a simultaneous fNIRS-fMRI study applying a working memory paradigm (n-back). Extra-cranial signals were obtained by extracting the BOLD signal from fMRI voxels within the skin. To develop a filter method that corrects for extra-cranial skin blood flow, particularly intended for fNIRS data sets recorded by widely used continuous wave systems with fixed optode distances, we identified channels over the forehead with probable major extra-cranial signal contributions. The averaged signal from these channels was then subtracted from all fNIRS channels of the probe set. Additionally, the data were corrected for motion and non-evoked systemic artifacts. Applying these filters, we show that measurement of brain activation in frontal brain areas with fNIRS was substantially improved. The resulting signal resembled the fMRI parameters more closely than before the correction. Future fNIRS studies measuring functional brain activation in the forehead region need to consider the use of different filter options to correct for interfering extra-cranial signals. Copyright © 2014 Elsevier Inc. All rights reserved.
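The core of the correction is a common-signal subtraction: average the channels judged to carry mainly extra-cranial (skin) signal and subtract that average from every channel. A minimal sketch follows; the channel indices and data shapes are illustrative, and the motion and non-evoked systemic artifact steps mentioned above are omitted.

    import numpy as np

    def subtract_skin_signal(hb_signals, skin_channels):
        """Remove the averaged extra-cranial (skin blood flow) signal from fNIRS data.

        hb_signals    : (n_channels, n_samples) oxy- or deoxy-hemoglobin time courses
        skin_channels : indices of forehead channels dominated by extra-cranial signal
        """
        hb = np.asarray(hb_signals, dtype=float)
        skin_mean = hb[skin_channels].mean(axis=0)    # average extra-cranial course
        return hb - skin_mean                         # subtracted from every channel

    rng = np.random.default_rng(4)
    data = rng.normal(size=(20, 1000))                # 20 channels, illustrative
    cleaned = subtract_skin_signal(data, skin_channels=[0, 3, 7])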
NASA Technical Reports Server (NTRS)
Challa, M. S.; Natanson, G. A.; Baker, D. F.; Deutschmann, J. K.
1994-01-01
This paper describes real-time attitude determination results for the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX), a gyroless spacecraft, using a Kalman filter/Euler equation approach denoted the real-time sequential filter (RTSF). The RTSF is an extended Kalman filter whose state vector includes the attitude quaternion and corrections to the rates, which are modeled as Markov processes with small time constants. The rate corrections impart a significant robustness to the RTSF against errors in modeling the environmental and control torques, as well as errors in the initial attitude and rates, while maintaining a small state vector. SAMPEX flight data from various mission phases are used to demonstrate the robustness of the RTSF against a priori attitude and rate errors of up to 90 deg and 0.5 deg/sec, respectively, as well as a sensitivity of 0.0003 deg/sec in estimating rate corrections in torque computations. In contrast, it is shown that the RTSF attitude estimates without the rate corrections can degrade rapidly. RTSF advantages over single-frame attitude determination algorithms are also demonstrated through (1) substantial improvements in attitude solutions during sun-magnetic field coalignment and (2) magnetic-field-only attitude and rate estimation during the spacecraft's sun-acquisition mode. A robust magnetometer-only attitude-and-rate determination method is also developed to provide for the contingency when both sun data as well as a priori knowledge of the spacecraft state are unavailable. This method includes a deterministic algorithm used to initialize the RTSF with coarse estimates of the spacecraft attitude and rates. The combined algorithm has been found effective, yielding accuracies of 1.5 deg in attitude and 0.01 deg/sec in the rates and convergence times as little as 400 sec.
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence, the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
Carotid Flow Time Test Performance for the Detection of Dehydration in Children With Diarrhea.
Mackenzie, David C; Nasrin, Sabiha; Atika, Bita; Modi, Payal; Alam, Nur H; Levine, Adam C
2018-06-01
Unstructured clinical assessments of dehydration in children are inaccurate. Point-of-care ultrasound is a noninvasive diagnostic tool that can help evaluate the volume status; the corrected carotid artery flow time has been shown to predict volume depletion in adults. We sought to determine the ability of the corrected carotid artery flow time to identify dehydration in a population of children presenting with acute diarrhea in Dhaka, Bangladesh. Children presenting with acute diarrhea were recruited and rehydrated according to hospital protocols. The corrected carotid artery flow time was measured at the time of presentation. The percentage of weight change with rehydration was used to categorize each child's dehydration as severe (>9%), some (3%-9%), or none (<3%). A receiver operating characteristic curve was constructed to test the performance of the corrected carotid artery flow time for detecting severe dehydration. Linear regression was used to model the relationship between the corrected carotid artery flow time and percentage of dehydration. A total of 350 children (0-60 months) were enrolled. The mean corrected carotid artery flow time was 326 milliseconds (interquartile range, 295-351 milliseconds). The area under the receiver operating characteristic curve for the detection of severe dehydration was 0.51 (95% confidence interval, 0.42, 0.61). Linear regression modeling showed a weak association between the flow time and dehydration. The corrected carotid artery flow time was a poor predictor of severe dehydration in this population of children with diarrhea. © 2017 by the American Institute of Ultrasound in Medicine.
Specific NIST projects in support of the NIJ Concealed Weapon Detection and Imaging Program
NASA Astrophysics Data System (ADS)
Paulter, Nicholas G.
1998-12-01
The Electricity Division of the National Institute of Standards and Technology is developing revised performance standards for hand-held (HH) and walk-through (WT) metal weapon detectors, test procedures and systems for these detectors, and a detection/imaging system for finding concealed weapons. The revised standards will replace the existing National Institute of Justice (NIJ) standards for HH and WT devices and will include detection performance specifications as well as system specifications (environmental conditions, mechanical strength and safety, response reproducibility and repeatability, quality assurance, test reporting, etc.). These system requirements were obtained from the Law Enforcement and Corrections Technology Advisory Council, an advisory council for the NIJ. Reproducible and repeatable test procedures and appropriate measurement systems will be developed for evaluating HH and WT detection performance. A guide to the technology and application of non-eddy-current-based detection/imaging methods (such as acoustic, passive millimeter-wave and microwave, active millimeter-wave and terahertz-wave, x-ray, etc.) will be developed. The Electricity Division is also researching the development of a high-frequency/high-speed (300 GHz to 1 THz) pulse-illuminated, stand-off, video-rate, concealed weapons/contraband imaging system.
Bernard, Florian; Deuter, Christian Eric; Gemmar, Peter; Schachinger, Hartmut
2013-10-01
Using the positions of the eyelids is an effective and contact-free way to measure startle-induced eye-blinks, which play an important role in human psychophysiological research. To the best of our knowledge, no conveniently usable methods exist for psychophysiological researchers to efficiently detect and track the exact eyelid contours in image sequences captured at high speed. In this publication, a semi-automatic model-based eyelid contour detection and tracking algorithm for the analysis of high-speed video recordings from an eye tracker is presented. Because a large number of images had been acquired prior to method development, it was important that our technique be able to deal with images recorded without any special parametrisation of the eye tracker. The method entails pupil detection, specular reflection removal and dynamic model adaptation. In a proof-of-concept study we achieved a correct detection rate of 90.6%. With this approach, we provide a feasible method to accurately assess eye-blinks from high-speed video recordings. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
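A minimal sketch of the kind of preprocessing the abstract names (specular reflection removal followed by pupil detection) is given below, assuming OpenCV 4; the thresholds and file name are placeholders, and this is not the authors' algorithm.

import cv2

def preprocess_eye_frame(path="frame.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # Specular reflections appear as small, very bright blobs: mask and inpaint them.
    _, spec_mask = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY)
    cleaned = cv2.inpaint(gray, spec_mask, 3, cv2.INPAINT_TELEA)

    # The pupil is the darkest large region: threshold and fit an ellipse.
    _, pupil_mask = cv2.threshold(cleaned, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(pupil_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return cleaned, None
    largest = max(contours, key=cv2.contourArea)
    ellipse = cv2.fitEllipse(largest) if len(largest) >= 5 else None
    return cleaned, ellipse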
Robust Crop and Weed Segmentation under Uncontrolled Outdoor Illumination
Jeon, Hong Y.; Tian, Lei F.; Zhu, Heping
2011-01-01
An image processing algorithm for detecting individual weeds was developed and evaluated. The weed detection process included normalized excessive green conversion, statistical threshold value estimation, adaptive image segmentation, median filtering, morphological feature calculation and an artificial neural network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illumination. A field robot with an onboard machine vision system captured field images under outdoor illumination, and the image processing algorithm processed them automatically without manual adjustment. The errors of the algorithm when processing 666 field images ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of crop plants among the identified plants and considered the rest to be weeds. However, the ANN identification rate for crop plants improved to as high as 95.1% after the error sources in the algorithm were addressed. The developed weed detection and image processing algorithm provides a novel method to identify plants against a soil background under uncontrolled outdoor illumination and to differentiate weeds from crop plants. Thus, the proposed machine vision and processing algorithm may be useful for outdoor applications including plant-specific direct applications (PSDA). PMID:22163954
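The normalized excess-green step can be sketched as follows; the fixed threshold here is a placeholder, whereas the paper estimates the threshold statistically from each image and follows it with adaptive segmentation and an ANN classifier.

import numpy as np

def excess_green_mask(rgb, threshold=0.05):
    """Segment vegetation from soil with a normalized excess-green index."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-9             # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2.0 * g - r - b                      # normalized excess-green index
    return exg > threshold                     # True where the pixel looks like a plant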
NASA Astrophysics Data System (ADS)
Chen, Shih-Hao; Chow, Chi-Wai
2015-01-01
A multiple-input multiple-output (MIMO) scheme can extend the transmission capacity of light-emitting-diode (LED) based visible light communication (VLC) systems. A MIMO VLC system that uses a mobile-phone camera as the optical receiver (Rx) to receive the MIMO signal from an n×n red-green-blue (RGB) LED array is desirable. The key step in decoding this signal is to detect the signal direction. If the LED transmitter (Tx) is rotated, the Rx may not recognize the rotation and transmission errors can occur. In this work, we propose and demonstrate a novel hierarchical transmission scheme that can reduce the computational complexity of rotation detection in an LED array VLC system. We use the n×n RGB LED array as the MIMO Tx. In our study, a novel two-dimensional Hadamard coding scheme is proposed. By using the different LED color layers to indicate the rotation, a low-complexity rotation detection method can be applied to improve the quality of the received signal. The correct detection rate is above 95% over typical indoor usage distances. Experimental results confirm the feasibility of the proposed scheme.
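One plausible way to form a two-dimensional Hadamard cell pattern for an n×n LED array is the outer product of two Hadamard rows, sketched below; the paper's exact construction and its color-layer rotation marker are not reproduced here, so this is an assumption for illustration only.

import numpy as np
from scipy.linalg import hadamard

def hadamard_pattern(n=8, row_i=1, row_j=2):
    """Build an n x n on/off pattern from two rows of a Hadamard matrix."""
    H = hadamard(n)                      # entries are +1/-1; n must be a power of 2
    pattern = np.outer(H[row_i], H[row_j])
    return (pattern > 0).astype(int)     # map +1 -> LED on, -1 -> LED off

print(hadamard_pattern())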
Law enforcement suicide: a national analysis.
Violanti, John M; Robinson, Cynthia F; Shen, Rui
2013-01-01
Previous research suggests that there is an elevated risk of suicide among workers within law enforcement occupations. The present study examined the proportionate mortality for suicide in law enforcement in comparison to the US working population during 1999, 2003-2004, and 2007, based on the Centers for Disease Control and Prevention's National Institute for Occupational Safety and Health National Occupational Mortality Surveillance data. We analyzed data for all law enforcement occupations and focused on two specific law enforcement occupational categories: detectives/criminal investigators/police and corrections officers. Suicides were also explored by race, gender and ethnicity. The results of the study showed that proportionate mortality ratios (PMRs) for suicide were significantly high for all races and sexes combined (all law enforcement--PMR = 169, 95% CI = 150-191, p < 0.01, 264 deaths; detectives/criminal investigators/police--PMR = 182, 95% CI = 150-218, p < 0.01, 115 deaths; and corrections officers--PMR = 141, 95% CI = 111-178, p < 0.01, 73 deaths). Detectives/criminal investigators/police had a higher suicide risk (an 82% increase) than corrections officers (a 41% increase). When analyzed by race and sex, suicide PMRs for Caucasian males were significantly high for both occupations: detectives/criminal investigators/police (PMR = 133; 95% CI = 108-162, p < 0.01) and corrections officers (PMR = 134, 95% CI = 102-173, p < 0.01). A significantly high ratio (PMR = 244, p < 0.01, 95% CI = 147-380) was found among Hispanic males in the combined law enforcement category, and a similarly high PMR was found among Hispanic detectives/criminal investigators/police (PMR = 388, p < 0.01, 95% CI = 168-765). There were small numbers of deaths among female and African American officers. The results showed a significantly increased risk for suicide among detectives/criminal investigators/police and corrections officers, which suggests that additional study could provide better data to inform preventive action.
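For readers unfamiliar with the statistic, a proportionate mortality ratio and an approximate 95% confidence interval can be computed as in the sketch below; the expected count in the example is hypothetical, chosen only to mirror the order of magnitude of the reported all-law-enforcement PMR, and is not from the study.

import math

def pmr_with_ci(observed, expected):
    """PMR (x100) with an approximate 95% CI using Byar's Poisson approximation."""
    pmr = 100.0 * observed / expected
    lower = observed * (1 - 1/(9*observed) - 1.96/(3*math.sqrt(observed)))**3
    upper = (observed + 1) * (1 - 1/(9*(observed + 1)) + 1.96/(3*math.sqrt(observed + 1)))**3
    return pmr, 100.0 * lower / expected, 100.0 * upper / expected

# 264 observed suicide deaths (from the abstract); expected count is hypothetical.
print(pmr_with_ci(observed=264, expected=156))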
Detecting long-term growth trends using tree rings: a critical evaluation of methods.
Peters, Richard L; Groenendijk, Peter; Vlam, Mart; Zuidema, Pieter A
2015-05-01
Tree-ring analysis is often used to assess long-term trends in tree growth. A variety of growth-trend detection methods (GDMs) exist to disentangle age/size trends in growth from long-term growth changes. However, these detrending methods strongly differ in approach, with possible implications for their output. Here, we critically evaluate the consistency, sensitivity, reliability and accuracy of the four most widely used GDMs: conservative detrending (CD) applies mathematical functions to correct for decreasing ring widths with age; basal area correction (BAC) transforms diameter into basal area growth; regional curve standardization (RCS) detrends individual tree-ring series using average age/size trends; and size class isolation (SCI) calculates growth trends within separate size classes. First, we evaluated whether these GDMs produce consistent results when applied to an empirical tree-ring data set of Melia azedarach, a tropical tree species from Thailand. Three GDMs yielded similar results - a growth decline over time - but the widely used CD method did not detect any change. Second, we assessed the sensitivity (probability of correct growth-trend detection), reliability (100% minus the probability of detecting false trends) and accuracy (whether the strength of imposed trends is correctly detected) of these GDMs by applying them to simulated growth trajectories with different imposed trends: no trend, strong trends (-6% and +6% change per decade) and weak trends (-2%, +2%). All methods except CD showed high sensitivity, reliability and accuracy in detecting strong imposed trends; however, these were considerably lower in the weak-trend and no-trend scenarios. BAC showed good sensitivity and accuracy, but low reliability, indicating uncertainty of trend detection using this method. Our study reveals that the choice of GDM influences the results of growth-trend studies. We recommend applying multiple methods when analysing trends and encourage performing sensitivity and reliability analyses. Finally, we recommend SCI and RCS, as these methods showed the highest reliability for detecting long-term growth trends. © 2014 John Wiley & Sons Ltd.
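The basal area correction (BAC) idea can be sketched in a few lines: converting ring widths (radial increments) into annual basal area increments removes the purely geometric decline of ring width as a stem enlarges. The ring widths below are invented for illustration.

import numpy as np

def basal_area_increments(ring_widths_mm):
    """Convert annual ring widths into annual basal area increments."""
    radii = np.cumsum(ring_widths_mm)          # cumulative stem radius per year
    basal_area = np.pi * radii**2              # cross-sectional area per year
    return np.diff(basal_area, prepend=0.0)    # annual basal area increment

print(basal_area_increments(np.array([2.0, 1.8, 1.6, 1.5, 1.4])))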
General test plan redundant sensor strapdown IMU evaluation program
NASA Technical Reports Server (NTRS)
Hartwell, T.; Irwin, H. A.; Miyatake, Y.; Wedekind, D. E.
1971-01-01
The general test plan for a redundant sensor strapdown inertial measuring unit evaluation program is presented. The inertial unit contains six gyros and three orthogonal accelerometers. The software incorporates failure detection and correction logic and a land vehicle navigation program. The principal objective of the test is a demonstration of the practicability, reliability, and performance of the inertial measuring unit with failure detection and correction in operational environments.
Pok, Kwoon Yong; Squires, Raynal C; Tan, Li Kiang; Takasaki, Tomohiko; Abubakar, Sazaly; Hasebe, Futoshi; Partridge, Jeffrey; Lee, Chin Kei; Lo, Janice; Aaskov, John; Ng, Lee Ching; Konings, Frank
2015-01-01
Accurate laboratory testing is a critical component of dengue surveillance and control. The objective of this programme was to assess dengue diagnostic proficiency among national-level public health laboratories in the World Health Organization (WHO) Western Pacific Region. Nineteen national-level public health laboratories performed routine dengue diagnostic assays on a proficiency testing panel consisting of two modules: one containing commercial serum samples spiked with cultured dengue viruses for the detection of nucleic acid and non-structural protein 1 (NS1) (Module A) and one containing human serum samples for the detection of anti-dengue virus antibodies (Module B). A review of logistics arrangements was also conducted. All 16 laboratories testing Module A performed reverse transcriptase polymerase chain reaction (RT-PCR) for both RNA and serotype detection. Of these, 15 had correct results for RNA detection and all 16 correctly serotyped the viruses. All nine laboratories performing NS1 antigen detection obtained the correct results. Sixteen of the 18 laboratories using IgM assays in Module B obtained the correct results as did the 13 laboratories that performed IgG assays. Detection of ongoing/recent dengue virus infection by both molecular (RT-PCR) and serological methods (IgM) was available in 15/19 participating laboratories. This first round of external quality assessment of dengue diagnostics was successfully conducted in national-level public health laboratories in the WHO Western Pacific Region, revealing good proficiency in both molecular and serological testing. Further comprehensive diagnostic testing for dengue virus and other priority pathogens in the Region will be assessed during future rounds.
Wagner, John H; Miskelly, Gordon M
2003-05-01
The combination of photographs taken at wavelengths at and bracketing the peak of a narrow absorbance band can lead to enhanced visualization of the substance causing that band. This concept can be used to detect putative bloodstains by dividing a linear photographic image taken at or near 415 nm by an image obtained by averaging linear photographs taken at or near 395 and 435 nm. Nonlinear images can also be background corrected by substituting subtraction for the division. This paper details experimental applications and limitations of this technique, including wavelength selection at both the illuminant and the camera. Characterization of a digital camera to be used in such a study is also detailed. Detection limits for blood using the three-wavelength correction method under optimum conditions were determined to be as low as a 1 in 900 dilution, although on strongly patterned substrates blood diluted more than twenty-fold is difficult to detect. Using only the 435 nm photograph to estimate the background in the 415 nm image led to a twofold improvement in detection limit on unpatterned substrates compared with the three-wavelength method, with the particular camera and lighting system used, but it gave poorer background correction on patterned substrates.
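A sketch of the three-wavelength correction follows: the 415 nm image is divided by the average of the 395 and 435 nm images so that broad-band background largely cancels, leaving the narrow Soret-band absorbance of blood; for nonlinear images the division is replaced by subtraction, as noted above. Array names are placeholders.

import numpy as np

def three_wavelength_correction(img_415, img_395, img_435, linear=True):
    """Background-correct a 415 nm image using bracketing 395/435 nm images."""
    img_415 = img_415.astype(np.float64)
    background = 0.5 * (img_395.astype(np.float64) + img_435.astype(np.float64))
    if linear:
        return img_415 / (background + 1e-9)   # division for linear (radiance-proportional) images
    return img_415 - background                # subtraction for nonlinear images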
The First Fermi-GBM Terrestrial Gamma Ray Flash Catalog
NASA Astrophysics Data System (ADS)
Roberts, O. J.; Fitzpatrick, G.; Stanbro, M.; McBreen, S.; Briggs, M. S.; Holzworth, R. H.; Grove, J. E.; Chekhtman, A.; Cramer, E. S.; Mailyan, B. G.
2018-05-01
We present the first Fermi Space Telescope Gamma Ray Burst Monitor (GBM) catalog of 4,144 terrestrial gamma ray flashes (TGFs), detected from launch on 11 July 2008 through 31 July 2016. We discuss the updates and improvements to the triggered data and off-line search algorithms, comparing the improved detection rate of ~800 TGFs per year with event rates from previously published TGF catalogs from other missions. A Bayesian block algorithm calculated the temporal and spectral properties of the TGFs, revealing a delay between the hard (>300 keV) and soft (≤300 keV) photons of around 27 μs. Detector count rates for "low-fluence" events were found to exceed 150 kHz on average. Searching the World-Wide Lightning Location Network data for radio sferics within ±5 min of each TGF yielded a clean sample of 1,314 World-Wide Lightning Location Network locations, which were used to accurately locate TGF-producing storms. It also revealed lightning and storm activity for specific regions, as well as seasonal and daily variations of global lightning patterns. Correcting for the orbit of Fermi, we quantitatively find a marginal excess of TGFs produced by storms over land near oceans (i.e., narrow isthmuses and small islands). No difference was observed between the duration of TGFs over the ocean and over land. The distributions of TGFs with local solar time for predefined American, Asian, and African regions were confirmed to correlate well with known regional lightning rates.
Bunch mode specific rate corrections for PILATUS3 detectors
Trueb, P.; Dejoie, C.; Kobas, M.; ...
2015-04-09
PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel.
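As context for the dead-time problem the article addresses, the sketch below shows the textbook correction for an ideal non-paralyzable counter, n = m / (1 - m·tau); the bunch-mode-specific Monte Carlo described in the article goes well beyond this simple formula, and the tau value here is only an illustrative assumption.

def deadtime_correct(measured_rate_hz, tau_s=120e-9):
    """Non-paralyzable dead-time correction; tau is an illustrative value, not a PILATUS3 spec."""
    if measured_rate_hz * tau_s >= 1.0:
        raise ValueError("measured rate is at or beyond saturation for this tau")
    return measured_rate_hz / (1.0 - measured_rate_hz * tau_s)

print(deadtime_correct(5e6))   # ~5 MHz measured -> corrected true rate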
Buschmann, Tilo; Zhang, Rong; Brash, Douglas E; Bystrykh, Leonid V
2014-08-07
DNA barcodes are short unique sequences used to label DNA or RNA-derived samples in multiplexed deep sequencing experiments. During the demultiplexing step, barcodes must be detected and their position identified. In some cases (e.g., with PacBio SMRT), the position of the barcode and DNA context is not well defined. Many reads start inside the genomic insert so that adjacent primers might be missed. The matter is further complicated by coincidental similarities between barcode sequences and reference DNA. Therefore, a robust strategy is required in order to detect barcoded reads and avoid a large number of false positives or negatives. For mass inference problems such as this one, false discovery rate (FDR) methods are powerful and balanced solutions. Since existing FDR methods cannot be applied to this particular problem, we present an adapted FDR method that is suitable for the detection of barcoded reads, as well as suggest possible improvements. In our analysis, barcode sequences showed high rates of coincidental similarity with the Mus musculus reference DNA. This problem became more acute as the length of the barcode sequence decreased and the number of barcodes in the set increased. The method presented in this paper controls the tail area-based false discovery rate to distinguish between barcoded and unbarcoded reads. This method helps to establish the highest acceptable minimal distance between reads and barcode sequences. In a proof-of-concept experiment we correctly detected barcodes in 83% of the reads with a precision of 89%. Sensitivity improved to 99% at 99% precision when the adjacent primer sequence was incorporated in the analysis. The analysis was further improved using a paired-end strategy. Following an analysis of the data for sequence variants induced in the Atp1a1 gene of C57BL/6 murine melanocytes by ultraviolet light and conferring resistance to ouabain, we found no evidence of cross-contamination of DNA material between samples. Our method offers a proper quantitative treatment of the problem of detecting barcoded reads in a noisy sequencing environment. It is based on false discovery rate statistics that allow a proper trade-off between sensitivity and precision to be chosen.
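A toy version of the tail-area FDR idea, choosing the largest acceptable read-to-barcode distance, might look like the sketch below; it assumes an empirical null distribution of distances (for example, from barcode-free reads) whose class dominates the mixture, so the FDR estimate is crude, and the simulated distances are not from the paper.

import numpy as np

def max_distance_threshold(obs_dists, null_dists, q=0.05):
    """Largest distance cutoff d whose crude tail-area FDR estimate is <= q."""
    best = None
    for d in np.unique(obs_dists):
        kept = np.mean(obs_dists <= d)            # fraction of reads accepted at cutoff d
        false_rate = np.mean(null_dists <= d)     # acceptance rate under the null
        # Crude FDR estimate assuming the null class dominates (pi0 ~ 1)
        if kept > 0 and false_rate / kept <= q:
            best = d
    return best

rng = np.random.default_rng(1)
obs = rng.poisson(2, 5000)     # simulated: mostly true barcodes, small distances
null = rng.poisson(8, 5000)    # simulated: unbarcoded reads, larger distances
print(max_distance_threshold(obs, null, q=0.05))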
Detection and correction of patient movement in prostate brachytherapy seed reconstruction
NASA Astrophysics Data System (ADS)
Lam, Steve T.; Cho, Paul S.; Marks, Robert J., II; Narayanan, Sreeram
2005-05-01
Intraoperative dosimetry of prostate brachytherapy can help optimize the dose distribution and potentially improve clinical outcome. Evaluation of dose distribution during the seed implant procedure requires the knowledge of 3D seed coordinates. Fluoroscopy-based seed localization is a viable option. From three x-ray projections obtained at different gantry angles, 3D seed positions can be determined. However, when local anaesthesia is used for prostate brachytherapy, the patient movement during fluoroscopy image capture becomes a practical problem. If uncorrected, the errors introduced by patient motion between image captures would cause seed mismatches. Subsequently, the seed reconstruction algorithm would either fail to reconstruct or yield erroneous results. We have developed an algorithm that permits detection and correction of patient movement that may occur between fluoroscopy image captures. The patient movement is decomposed into translational shifts along the tabletop and rotation about an axis perpendicular to the tabletop. The property of spatial invariance of the co-planar imaging geometry is used for lateral movement correction. Cranio-caudal movement is corrected by analysing the perspective invariance along the x-ray axis. Rotation is estimated by an iterative method. The method can detect and correct for the range of patient movement commonly seen in the clinical environment. The algorithm has been implemented for routine clinical use as the preprocessing step for seed reconstruction.
Toward detecting deception in intelligent systems
NASA Astrophysics Data System (ADS)
Santos, Eugene, Jr.; Johnson, Gregory, Jr.
2004-08-01
Contemporary decision makers often must choose a course of action using knowledge from several sources. Knowledge may be provided from many diverse sources, including electronic sources such as knowledge-based diagnostic or decision support systems, or through data mining techniques. As the decision maker becomes more dependent on these electronic information sources, detecting deceptive information from these sources becomes vital to making a correct, or at least more informed, decision. This applies to unintentional misinformation as well as intentional disinformation. Our ongoing research focuses on applying models of deception and deception detection from the fields of psychology and cognitive science to these systems, as well as implementing deception detection algorithms for probabilistic intelligent systems. The deception detection algorithms are used to detect, classify and correct attempts at deception. Algorithms for detecting unexpected information rely upon a prediction algorithm from the collaborative filtering domain to predict agent responses in a multi-agent system.
Real time health monitoring and control system methodology for flexible space structures
NASA Astrophysics Data System (ADS)
Jayaram, Sanjay
This dissertation is concerned with near real-time autonomous health monitoring of flexible space structures. The dynamics of multi-body flexible systems are uncertain due to factors such as high non-linearity, the need to consider higher modal frequencies, high dimensionality, multiple inputs and outputs, operational constraints, and unexpected failures of sensors and/or actuators. Hence, a systematic framework for developing a high-fidelity dynamic model of a flexible structural system needs to be established. The fault detection mechanism, which will be an integral part of the autonomous health monitoring system, comprises detecting abnormalities in the sensors and/or actuators and correcting the detected faults (if possible). Actuator faults are rectified by applying a robust control law and robust measures capable of detecting and recovering or replacing the failed actuators. The fault-tolerant concept applied to the sensors will take the form of an Extended Kalman Filter (EKF). The EKF will weight the information coming from multiple sensors (redundant sensors used to measure the same quantity), automatically identify the faulty sensors, and form the best estimate from the remaining sensors. The mechanization comprises instrumenting flexible deployable panels (a solar array) with multiple angular position and rate sensors connected to the data acquisition system. The sensors will give position and rate information for the solar panel in all three axes (i.e., roll, pitch and yaw). The position data correspond to the steady-state response, and the rate data will give better insight into the transient response of the system. This is a critical factor for real-time autonomous health monitoring. MATLAB (and/or C++) software will be used for high-fidelity modeling and the fault-tolerant mechanism.
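A hedged sketch of the redundant-sensor fusion and fault-flagging idea is given below: readings of the same quantity are combined by inverse-variance weighting, and a sensor whose residual against the fused estimate exceeds a 3-sigma test is declared faulty. The variances, threshold and sample readings are illustrative assumptions; this is a simplification, not the dissertation's EKF.

import numpy as np

def fuse_redundant_sensors(readings, variances, n_sigma=3.0):
    """Inverse-variance fusion with iterative rejection of outlying sensors."""
    readings = np.asarray(readings, dtype=float)
    variances = np.asarray(variances, dtype=float)
    healthy = np.ones(readings.size, dtype=bool)

    for _ in range(readings.size):                    # iteratively drop the worst outlier
        w = healthy / variances
        estimate = np.sum(w * readings) / np.sum(w)   # inverse-variance weighted mean
        residuals = np.abs(readings - estimate) / np.sqrt(variances)
        worst = np.argmax(residuals * healthy)
        if healthy[worst] and residuals[worst] > n_sigma:
            healthy[worst] = False                    # declare this sensor faulty
        else:
            break
    return estimate, healthy

# Three consistent rate readings and one stuck/faulty sensor (rad/s), equal variances.
est, ok = fuse_redundant_sensors([0.101, 0.099, 0.102, 0.35], [1e-4, 1e-4, 1e-4, 1e-4])
print(est, ok)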