Sample records for average true positive

  1. 76 FR 59410 - Government-Owned Inventions; Availability for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-26

    ... tumors. The experimental results (average of 91.4% true positive volume fraction and 4.0% of false... test for the tumor marker CA-125. The CA-125 test only returns a true positive result for about 50% of...

  2. Accurate decisions in an uncertain world: collective cognition increases true positives while decreasing false positives.

    PubMed

    Wolf, Max; Kurvers, Ralf H J M; Ward, Ashley J W; Krause, Stefan; Krause, Jens

    2013-04-07

    In a wide range of contexts, including predator avoidance, medical decision-making and security screening, decision accuracy is fundamentally constrained by the trade-off between true and false positives. Increased true positives are possible only at the cost of increased false positives; conversely, decreased false positives are associated with decreased true positives. We use an integrated theoretical and experimental approach to show that a group of decision-makers can overcome this basic limitation. Using a mathematical model, we show that a simple quorum decision rule enables individuals in groups to simultaneously increase true positives and decrease false positives. The results from a predator-detection experiment that we performed with humans are in line with these predictions: (i) after observing the choices of the other group members, individuals both increase true positives and decrease false positives, (ii) this effect gets stronger as group size increases, (iii) individuals use a quorum threshold set between the average true- and false-positive rates of the other group members, and (iv) individuals adjust their quorum adaptively to the performance of the group. Our results have broad implications for our understanding of the ecology and evolution of group-living animals and lend themselves for applications in the human domain such as the design of improved screening methods in medical, forensic, security and business applications.
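
    As an illustration of the quorum mechanism described above, the short Python sketch below (a toy with assumed parameters, not the authors' model) treats group members as independent detectors with identical, hypothetical individual rates and shows how a quorum threshold set between the individual true- and false-positive rates raises the group true-positive rate while lowering the false-positive rate.

```python
# Toy illustration (assumed parameters, not the authors' model): independent
# group members with identical individual true/false-positive rates, and a
# quorum rule "respond if at least q of n members respond".
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

individual_tp, individual_fp = 0.60, 0.30   # hypothetical individual rates
n = 11                                      # group size
# quorum threshold set between the individual true- and false-positive rates
q = round(n * (individual_tp + individual_fp) / 2)

group_tp = binom_tail(n, q, individual_tp)
group_fp = binom_tail(n, q, individual_fp)
print(f"individual: TP={individual_tp:.2f}  FP={individual_fp:.2f}")
print(f"group (n={n}, quorum={q}): TP={group_tp:.2f}  FP={group_fp:.2f}")
```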

  3. Accurate decisions in an uncertain world: collective cognition increases true positives while decreasing false positives

    PubMed Central

    Wolf, Max; Kurvers, Ralf H. J. M.; Ward, Ashley J. W.; Krause, Stefan; Krause, Jens

    2013-01-01

    In a wide range of contexts, including predator avoidance, medical decision-making and security screening, decision accuracy is fundamentally constrained by the trade-off between true and false positives. Increased true positives are possible only at the cost of increased false positives; conversely, decreased false positives are associated with decreased true positives. We use an integrated theoretical and experimental approach to show that a group of decision-makers can overcome this basic limitation. Using a mathematical model, we show that a simple quorum decision rule enables individuals in groups to simultaneously increase true positives and decrease false positives. The results from a predator-detection experiment that we performed with humans are in line with these predictions: (i) after observing the choices of the other group members, individuals both increase true positives and decrease false positives, (ii) this effect gets stronger as group size increases, (iii) individuals use a quorum threshold set between the average true- and false-positive rates of the other group members, and (iv) individuals adjust their quorum adaptively to the performance of the group. Our results have broad implications for our understanding of the ecology and evolution of group-living animals and lend themselves for applications in the human domain such as the design of improved screening methods in medical, forensic, security and business applications. PMID:23407830

  4. On the Use of a Cumulative Distribution as a Utility Function in Educational or Employment Selection.

    DTIC Science & Technology

    1981-02-01

    monotonic increasing function of true ability or performance score. A cumulative probability function is then very convenient for describing one’s...possible outcomes such as test scores, grade-point averages or other common outcome variables. Utility is usually a monotonic increasing function of true ...r(θ) is negative for θ < μ and positive for θ > μ, U(θ) is risk-prone for low θ values and risk-averse for high θ values. This property is true for

  5. Cluster-level statistical inference in fMRI datasets: The unexpected behavior of random fields in high dimensions.

    PubMed

    Bansal, Ravi; Peterson, Bradley S

    2018-06-01

    Identifying regional effects of interest in MRI datasets usually entails testing a priori hypotheses across many thousands of brain voxels, requiring control for false positive findings in these multiple hypotheses testing. Recent studies have suggested that parametric statistical methods may have incorrectly modeled functional MRI data, thereby leading to higher false positive rates than their nominal rates. Nonparametric methods for statistical inference when conducting multiple statistical tests, in contrast, are thought to produce false positives at the nominal rate, which has thus led to the suggestion that previously reported studies should reanalyze their fMRI data using nonparametric tools. To understand better why parametric methods may yield excessive false positives, we assessed their performance when applied both to simulated datasets of 1D, 2D, and 3D Gaussian Random Fields (GRFs) and to 710 real-world, resting-state fMRI datasets. We showed that both the simulated 2D and 3D GRFs and the real-world data contain a small percentage (<6%) of very large clusters (on average 60 times larger than the average cluster size), which were not present in 1D GRFs. These unexpectedly large clusters were deemed statistically significant using parametric methods, leading to empirical familywise error rates (FWERs) as high as 65%: the high empirical FWERs were not a consequence of parametric methods failing to model spatial smoothness accurately, but rather of these very large clusters that are inherently present in smooth, high-dimensional random fields. In fact, when discounting these very large clusters, the empirical FWER for parametric methods was 3.24%. Furthermore, even an empirical FWER of 65% would yield on average less than one of those very large clusters in each brain-wide analysis. Nonparametric methods, in contrast, estimated distributions from those large clusters, and therefore, by construct rejected the large clusters as false positives at the nominal FWERs. Those rejected clusters were outlying values in the distribution of cluster size but cannot be distinguished from true positive findings without further analyses, including assessing whether fMRI signal in those regions correlates with other clinical, behavioral, or cognitive measures. Rejecting the large clusters, however, significantly reduced the statistical power of nonparametric methods in detecting true findings compared with parametric methods, which would have detected most true findings that are essential for making valid biological inferences in MRI data. Parametric analyses, in contrast, detected most true findings while generating relatively few false positives: on average, less than one of those very large clusters would be deemed a true finding in each brain-wide analysis. We therefore recommend the continued use of parametric methods that model nonstationary smoothness for cluster-level, familywise control of false positives, particularly when using a Cluster Defining Threshold of 2.5 or higher, and subsequently assessing rigorously the biological plausibility of the findings, even for large clusters. Finally, because nonparametric methods yielded a large reduction in statistical power to detect true positive findings, we conclude that the modest reduction in false positive findings that nonparametric analyses afford does not warrant a re-analysis of previously published fMRI studies using nonparametric techniques. Copyright © 2018 Elsevier Inc. All rights reserved.
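
    As a rough illustration of the cluster-extent logic above (not the paper's pipeline; the field size, smoothness, cluster-defining threshold, extent cutoff, and number of simulations are arbitrary assumptions), the sketch below estimates the null distribution of the maximum supra-threshold cluster size in smooth 2D Gaussian random fields and the empirical FWER of a chosen cluster-size cutoff.

```python
# Monte Carlo sketch (illustrative assumptions only): null distribution of the
# maximum supra-threshold cluster size in smooth 2D Gaussian random fields, and
# the empirical familywise error rate (FWER) of a fixed cluster-extent cutoff.
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(0)
shape, fwhm, cdt = (128, 128), 8.0, 2.5           # field size, smoothness, cluster-defining threshold (z)
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def max_cluster_size(field, threshold):
    labels, n = label(field > threshold)
    return 0 if n == 0 else np.bincount(labels.ravel())[1:].max()

max_sizes = []
for _ in range(500):                               # null simulations (no true signal)
    field = gaussian_filter(rng.standard_normal(shape), sigma)
    field /= field.std()                           # re-standardize after smoothing
    max_sizes.append(max_cluster_size(field, cdt))

cluster_extent_cutoff = 50                         # hypothetical cluster-size threshold (voxels)
fwer = np.mean(np.array(max_sizes) >= cluster_extent_cutoff)
print(f"empirical FWER at extent cutoff {cluster_extent_cutoff}: {fwer:.3f}")
print(f"95th percentile of max cluster size: {np.percentile(max_sizes, 95):.0f}")
```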

  6. Exploring the Binary Nature of STF 2128 Using Separation and Position Angle Measurements

    NASA Astrophysics Data System (ADS)

    Minarik, Holly; Helm, Victoria; Asquith, Ezra; Hoffman, Andrew; Fielding, Alec; Gaytan, Humberto; Marvier, Kevin; Warneke, Walter; Howell, James; Rowe, David; Freed, Rachel; Genet, Russell

    2018-01-01

    A team of nine students from Cuesta College studied double star STF 2128 (WDS 17033+5935) using ten CCD images obtained at the Sierra Remote Observatories. Calculations of these ten observations yielded an average separation of 12.203" and an average position angle of 42.957°. By comparing these values with past observations from the Washington Double Star Catalogue, we concluded that STF 2128 is likely a true binary system.

  7. SU-E-J-85: The Effect of Different Imaging Modalities On the Delineation of the True Spinal Cord for Spinal Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goddard, L; Brodin, P; Mani, K

    Purpose: SBRT allows the delivery of high dose radiation treatments to localized tumors while minimizing dose to surrounding tissues. Due to the large doses delivered, accurate contouring of organs at risk is essential. In this study, differences between the true spinal cord as seen using MRI and CT myelogram (CTM) have been assessed in patients with spinal metastases treated using SBRT. Methods: Ten patients were identified who had both a CTM and an MRI. Using rigid registration tools, the MRI was fused to the CTM. The thecal sac and true cord were contoured using each imaging modality. Images were exported and analyzed for similarity by computing the Dice similarity coefficient and the modified Hausdorff distance (greatest distance from a point in one set to the closest point in the other set). Results: The Dice coefficient was calculated for the thecal sac (0.81 ±0.06) and true cord (0.63 ±0.13). These two measures are correlated; however, some points show a low true cord overlap despite a high overlap for the thecal sac. The Hausdorff distance for structure comparisons was also calculated. For thecal sac structures, the average value, 1.6mm (±1.1), indicates good overlap. For true cord comparison, the average value, 0.3mm (±0.16), indicates very good overlap. The minimum Hausdorff distance between the true cord and thecal sac was on average 1.6mm (±0.9). Conclusion: The true cord position as seen in MRI and CTM is fairly constant, although care should be taken as large differences can be seen in individual patients. Avoiding the true cord in spine SBRT is critical, so the ability to visualize the true cord before performing SBRT to the vertebrae is essential. Here, CT myelogram appears to be an excellent, robust option that can be obtained on the day of treatment planning and is unaffected by uncertainties in image fusion.
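
    For reference, the following minimal sketch (illustrative only, not the study's code) computes the two agreement measures used above: the Dice similarity coefficient for two binary masks, and the directed Hausdorff distance as defined in the abstract (greatest distance from a point in one set to the closest point in the other set). The toy masks are assumptions.

```python
# Illustrative sketch: Dice coefficient and directed Hausdorff distance for two
# binary masks / point sets (toy data, not the study's contours).
import numpy as np

def dice(mask_a, mask_b):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def directed_hausdorff(points_a, points_b):
    # max over points in A of the distance to the closest point in B
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return d.min(axis=1).max()

# toy example: two slightly shifted squares on a 20x20 grid
a = np.zeros((20, 20), bool); a[5:15, 5:15] = True
b = np.zeros((20, 20), bool); b[6:16, 5:15] = True
pa, pb = np.argwhere(a).astype(float), np.argwhere(b).astype(float)
print(f"Dice = {dice(a, b):.3f}")
print(f"directed Hausdorff A->B = {directed_hausdorff(pa, pb):.2f} pixels")
```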

  8. Cardiac phase detection in intravascular ultrasound images

    NASA Astrophysics Data System (ADS)

    Matsumoto, Monica M. S.; Lemos, Pedro Alves; Yoneyama, Takashi; Furuie, Sergio Shiguemi

    2008-03-01

    Image gating is related to image modalities that involve quasi-periodic moving organs. Therefore, during intravascular ultrasound (IVUS) examination, there is cardiac movement interference. In this paper, we aim to obtain IVUS gated images based on the images themselves. This would allow the reconstruction of 3D coronaries with temporal accuracy for any cardiac phase, which is an advantage over ECG-gated acquisition, which shows a single phase. It is also important for retrospective studies, as existing IVUS databases contain no additional reference signals (ECG). From the images, we calculated signals based on average intensity (AI) and, from consecutive frames, average intensity difference (AID), cross-correlation coefficient (CC) and mutual information (MI). The process includes a wavelet-based filter step and ascendant zero-cross detection in order to obtain the phase information. First, we tested 90 simulated sequences with 1025 frames each. Our method was able to achieve more than 95.0% true positives and less than 2.3% false positives for all signals. Afterwards, we tested it on a real examination, with 897 frames and ECG as the gold standard. We achieved 97.4% true positives (CC and MI) and 2.5% false positives. For future work, the methodology should be tested on a wider range of IVUS examinations.
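
    The sketch below is a rough, assumption-based illustration (not the authors' implementation) of three of the frame-derived gating signals named above: average intensity (AI) per frame and, between consecutive frames, average intensity difference (AID) and the cross-correlation coefficient (CC). Mutual information is omitted, and the random frames merely stand in for IVUS images.

```python
# Rough sketch of frame-derived gating signals (AI, AID, CC); `frames` is assumed
# to be an array of shape (n_frames, height, width).
import numpy as np

def gating_signals(frames):
    frames = frames.astype(float)
    ai = frames.mean(axis=(1, 2))                          # average intensity
    aid = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))  # average intensity difference
    cc = np.array([np.corrcoef(frames[i].ravel(), frames[i + 1].ravel())[0, 1]
                   for i in range(len(frames) - 1)])       # cross-correlation coefficient
    return ai, aid, cc

# toy usage with random frames standing in for IVUS images
rng = np.random.default_rng(1)
frames = rng.random((10, 64, 64))
ai, aid, cc = gating_signals(frames)
print(ai.shape, aid.shape, cc.shape)   # (10,), (9,), (9,)
```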

  9. Towards Development of a 3-State Self-Paced Brain-Computer Interface

    PubMed Central

    Bashashati, Ali; Ward, Rabab K.; Birch, Gary E.

    2007-01-01

    Most existing brain-computer interfaces (BCIs) detect specific mental activity in a so-called synchronous paradigm. Unlike synchronous systems which are operational at specific system-defined periods, self-paced (asynchronous) interfaces have the advantage of being operational at all times. The low-frequency asynchronous switch design (LF-ASD) is a 2-state self-paced BCI that detects the presence of a specific finger movement in the ongoing EEG. Recent evaluations of the 2-state LF-ASD show an average true positive rate of 41% at the fixed false positive rate of 1%. This paper proposes two designs for a 3-state self-paced BCI that is capable of handling idle brain state. The two proposed designs aim at detecting right- and left-hand extensions from the ongoing EEG. They are formed of two consecutive detectors. The first detects the presence of a right- or a left-hand movement and the second classifies the detected movement as a right or a left one. In an offline analysis of the EEG data collected from four able-bodied individuals, the 3-state brain-computer interface shows a comparable performance with a 2-state system and significant performance improvement if used as a 2-state BCI, that is, in detecting the presence of a right- or a left-hand movement (regardless of the type of movement). It has an average true positive rate of 37.5% and 42.8% (at false positives rate of 1%) in detecting right- and left-hand extensions, respectively, in the context of a 3-state self-paced BCI and average detection rate of 58.1% (at false positive rate of 1%) in the context of a 2-state self-paced BCI. PMID:18288260
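
    The evaluation idea above (reporting the true-positive rate at a fixed 1% false-positive rate) can be sketched as follows; the score distributions are synthetic stand-ins, not LF-ASD outputs.

```python
# Sketch: choose the detector threshold that gives a fixed false-positive rate on
# idle-state scores, then report the true-positive rate on movement-state scores.
import numpy as np

def tpr_at_fixed_fpr(idle_scores, movement_scores, target_fpr=0.01):
    threshold = np.quantile(idle_scores, 1.0 - target_fpr)   # 1% of idle scores exceed it
    tpr = np.mean(movement_scores > threshold)
    return threshold, tpr

rng = np.random.default_rng(0)
idle = rng.normal(0.0, 1.0, 10000)          # synthetic scores during the idle state
movement = rng.normal(1.5, 1.0, 1000)       # synthetic scores during attempted movements
thr, tpr = tpr_at_fixed_fpr(idle, movement)
print(f"threshold = {thr:.2f}, TPR at 1% FPR = {tpr:.2%}")
```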

  10. Digital servo control of random sound fields

    NASA Technical Reports Server (NTRS)

    Nakich, R. B.

    1973-01-01

    It is necessary to place a number of sensors at different positions in the sound field to determine the actual sound intensities to which the test object is subjected. It is then possible to determine whether the specification is being met adequately or exceeded. Since the excitation is of a random nature, the signals are essentially coherent and it is impossible to obtain a true average.

  11. Epileptic Seizures Prediction Using Machine Learning Methods

    PubMed Central

    Usman, Syed Muhammad

    2017-01-01

    Epileptic seizures occur due to a disorder in brain functionality which can affect a patient's health. Prediction of epileptic seizures before the onset begins is quite useful for preventing the seizure by medication. Machine learning techniques and computational methods are used for predicting epileptic seizures from electroencephalogram (EEG) signals. However, preprocessing of EEG signals for noise removal and feature extraction are two major issues that have an adverse effect on both anticipation time and true positive prediction rate. Therefore, we propose a model that provides reliable methods of both preprocessing and feature extraction. Our model predicts epileptic seizures sufficiently far ahead of seizure onset and provides a better true positive rate. We have applied empirical mode decomposition (EMD) for preprocessing and have extracted time and frequency domain features for training a prediction model. The proposed model detects the start of the preictal state, the state that starts a few minutes before the onset of the seizure, with a higher true positive rate (92.23%) compared to traditional methods, a maximum anticipation time of 33 minutes, and an average prediction time of 23.6 minutes on the scalp EEG CHB-MIT dataset of 22 subjects. PMID:29410700

  12. Performance analysis of newly developed point-of-care hemoglobinometer (TrueHb) against an automated hematology analyzer (Sysmex XT 1800i) in terms of precision in hemoglobin measurement.

    PubMed

    Srivastava, A; Koul, V; Dwivedi, S N; Upadhyaya, A D; Ahuja, A; Saxena, R

    2015-08-01

    The aim of this study was to evaluate the performance of the newly developed handheld hemoglobinometer (TrueHb) by comparing its performance against an automated five-part hematology analyzer, the Sysmex XT 1800i (Sysmex). Two hundred venous blood samples were subjected to total hemoglobin evaluation on each device three times. The average of the three readings on each device was considered as the respective device value, that is, the TrueHb value and the Sysmex value. The two sets of values were comparatively analyzed. The repeatability of the performance of TrueHb was also evaluated against the Sysmex values. The scatter plot of TrueHb values and Sysmex values showed a linear distribution with a positive correlation (r = 0.99). The intraclass correlation (ICC) value between the two sets of values was found to be 0.995. The regression coefficient through the origin, β, was found to be 0.995, with a 95% confidence interval (CI) ranging between 0.9900 and 1.0000. The mean difference in Bland-Altman plots of TrueHb values against the Sysmex values was found to be -0.02, with limits of agreement between -0.777 and 0.732 g/dL. Statistical analysis suggested good repeatability in the results of TrueHb, with a low mean CV of 2.22 (95% CI 1.99-2.44) against 4.44 (95% CI 3.85-5.03) for the Sysmex values. These results suggested a strong positive correlation between the two measurement devices. It is thus concluded that TrueHb is a good point-of-care testing tool for estimating hemoglobin. © 2014 John Wiley & Sons Ltd.
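
    For illustration, the sketch below re-computes the kinds of agreement statistics reported above (Pearson r, regression through the origin, and Bland-Altman mean difference with limits of agreement) on made-up paired readings; none of the values are the study's data.

```python
# Agreement statistics on invented paired hemoglobin readings (g/dL).
import numpy as np

rng = np.random.default_rng(2)
sysmex = rng.uniform(6.0, 17.0, 200)                 # reference analyzer values
truehb = sysmex + rng.normal(-0.02, 0.38, 200)       # point-of-care values with small bias/noise

r = np.corrcoef(truehb, sysmex)[0, 1]                # Pearson correlation
beta = np.sum(truehb * sysmex) / np.sum(sysmex**2)   # regression coefficient through the origin

diff = truehb - sysmex                               # Bland-Altman statistics
mean_diff = diff.mean()
loa = (mean_diff - 1.96 * diff.std(ddof=1), mean_diff + 1.96 * diff.std(ddof=1))

print(f"r = {r:.3f}, beta (through origin) = {beta:.3f}")
print(f"mean difference = {mean_diff:.2f} g/dL, limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
```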

  13. The heterogeneity statistic I² can be biased in small meta-analyses.

    PubMed

    von Hippel, Paul T

    2015-04-14

    Estimated effects vary across studies, partly because of random sampling error and partly because of heterogeneity. In meta-analysis, the fraction of variance that is due to heterogeneity is estimated by the statistic I². We calculate the bias of I², focusing on the situation where the number of studies in the meta-analysis is small. Small meta-analyses are common; in the Cochrane Library, the median number of studies per meta-analysis is 7 or fewer. We use Mathematica software to calculate the expectation and bias of I². I² has a substantial bias when the number of studies is small. The bias is positive when the true fraction of heterogeneity is small, but the bias is typically negative when the true fraction of heterogeneity is large. For example, with 7 studies and no true heterogeneity, I² will overestimate heterogeneity by an average of 12 percentage points, but with 7 studies and 80 percent true heterogeneity, I² can underestimate heterogeneity by an average of 28 percentage points. Biases of 12-28 percentage points are not trivial when one considers that, in the Cochrane Library, the median I² estimate is 21 percent. The point estimate I² should be interpreted cautiously when a meta-analysis has few studies. In small meta-analyses, confidence intervals should supplement or replace the biased point estimate I².
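
    The standard computation of I² from Cochran's Q, which the bias analysis above concerns, can be sketched as follows; the example effect sizes and variances are invented.

```python
# I^2 from study effect sizes and their variances via Cochran's Q (invented data).
import numpy as np

def i_squared(effects, variances):
    w = 1.0 / np.asarray(variances)
    effects = np.asarray(effects)
    pooled = np.sum(w * effects) / np.sum(w)          # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)           # Cochran's Q
    df = len(effects) - 1
    return max(0.0, (q - df) / q) if q > 0 else 0.0   # I^2, truncated at zero

effects = [0.30, 0.10, 0.45, 0.20, 0.05, 0.35, 0.25]  # 7 hypothetical studies
variances = [0.02, 0.03, 0.025, 0.04, 0.03, 0.02, 0.05]
print(f"I^2 = {i_squared(effects, variances):.1%}")
```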

  14. Comparing diagnostic tests on benefit-risk.

    PubMed

    Pennello, Gene; Pantoja-Galicia, Norberto; Evans, Scott

    2016-01-01

    Comparing diagnostic tests on accuracy alone can be inconclusive. For example, a test may have better sensitivity than another test yet worse specificity. Comparing tests on benefit risk may be more conclusive because clinical consequences of diagnostic error are considered. For benefit-risk evaluation, we propose diagnostic yield, the expected distribution of subjects with true positive, false positive, true negative, and false negative test results in a hypothetical population. We construct a table of diagnostic yield that includes the number of false positive subjects experiencing adverse consequences from unnecessary work-up. We then develop a decision theory for evaluating tests. The theory provides additional interpretation to quantities in the diagnostic yield table. It also indicates that the expected utility of a test relative to a perfect test is a weighted accuracy measure, the average of sensitivity and specificity weighted for prevalence and relative importance of false positive and false negative testing errors, also interpretable as the cost-benefit ratio of treating non-diseased and diseased subjects. We propose plots of diagnostic yield, weighted accuracy, and relative net benefit of tests as functions of prevalence or cost-benefit ratio. Concepts are illustrated with hypothetical screening tests for colorectal cancer with test positive subjects being referred to colonoscopy.
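
    As a hedged illustration of the diagnostic-yield idea above, the sketch below tabulates expected TP/FP/TN/FN counts per 100,000 screened subjects from prevalence, sensitivity, and specificity, and computes a weighted-accuracy summary; the weighting used here is one plausible choice, not necessarily the paper's exact formulation, and all numbers are hypothetical.

```python
# Expected diagnostic yield and a weighted-accuracy summary (hypothetical inputs).
def diagnostic_yield(prevalence, sensitivity, specificity, n=100_000):
    diseased = prevalence * n
    healthy = n - diseased
    return {"TP": sensitivity * diseased, "FN": (1 - sensitivity) * diseased,
            "TN": specificity * healthy, "FP": (1 - specificity) * healthy}

def weighted_accuracy(sensitivity, specificity, prevalence, fp_fn_cost_ratio):
    # one plausible weighting: weight sensitivity by the cost-adjusted share of
    # diseased subjects; the paper derives its own weights from prevalence and
    # the relative importance of FP vs FN errors
    w = prevalence / (prevalence + (1 - prevalence) * fp_fn_cost_ratio)
    return w * sensitivity + (1 - w) * specificity

print(diagnostic_yield(prevalence=0.005, sensitivity=0.92, specificity=0.87))
print(diagnostic_yield(prevalence=0.005, sensitivity=0.74, specificity=0.96))
print(f"weighted accuracy, test A: {weighted_accuracy(0.92, 0.87, 0.005, 0.1):.3f}")
print(f"weighted accuracy, test B: {weighted_accuracy(0.74, 0.96, 0.005, 0.1):.3f}")
```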

  15. Variability of the inclination of anatomic horizontal reference planes of the craniofacial complex in relation to the true horizontal line in orthognathic patients.

    PubMed

    Zebeib, Ameen M; Naini, Farhad B

    2014-12-01

    The purpose of this study was to assess the reliability of the Frankfort horizontal (FH), sella-nasion horizontal, and optic planes in terms of their variabilities in relation to a true horizontal line in orthognathic surgery patients. Thirty-six consecutive presurgical orthognathic patients (13 male, 23 female; age range, 16-35 years; 30 white, 6 African Caribbean) had lateral cephalometric radiographs taken in natural head position, with a plumb line orientating the true vertical line, and the true horizontal line perpendicular to the true vertical. The inclinations of the anatomic reference planes were compared with the true horizontal. The FH plane was found to be on average closest to the true horizontal, with a mean of -1.6° (SD, 3.4°), whereas the sella-nasion horizontal and the optic plane had means of 2.1° (SD, 5.1°) and 3.2° (SD, 4.7°), respectively. The FH showed the least variability of the 3 anatomic planes. The ranges of variability were high for all anatomic planes: -8° to 8° for the FH, -8° to 15° for the sella-nasion horizontal, and -6° to 13° for the optic plane. No significant differences were found in relation to patients' sex, skeletal patterns, or ethnic backgrounds. The clinically significant variability in the inclinations of anatomic reference planes in relation to the true horizontal plane makes their use unreliable in orthognathic patients. Copyright © 2014 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  16. Comparison of Online 6 Degree-of-Freedom Image Registration of Varian TrueBeam Cone-Beam CT and BrainLab ExacTrac X-Ray for Intracranial Radiosurgery.

    PubMed

    Li, Jun; Shi, Wenyin; Andrews, David; Werner-Wasik, Maria; Lu, Bo; Yu, Yan; Dicker, Adam; Liu, Haisong

    2017-06-01

    The study was aimed to compare online 6 degree-of-freedom image registrations of TrueBeam cone-beam computed tomography and BrainLab ExacTrac X-ray imaging systems for intracranial radiosurgery. Phantom and patient studies were performed on a Varian TrueBeam STx linear accelerator (version 2.5), which is integrated with a BrainLab ExacTrac imaging system (version 6.1.1). The phantom study was based on a Rando head phantom and was designed to evaluate isocenter location dependence of the image registrations. Ten isocenters at various locations representing clinical treatment sites were selected in the phantom. Cone-beam computed tomography and ExacTrac X-ray images were taken when the phantom was located at each isocenter. The patient study included 34 patients. Cone-beam computed tomography and ExacTrac X-ray images were taken at each patient's treatment position. The 6 degree-of-freedom image registrations were performed on cone-beam computed tomography and ExacTrac, and residual errors calculated from cone-beam computed tomography and ExacTrac were compared. In the phantom study, the average residual error differences (absolute values) between cone-beam computed tomography and ExacTrac image registrations were 0.17 ± 0.11 mm, 0.36 ± 0.20 mm, and 0.25 ± 0.11 mm in the vertical, longitudinal, and lateral directions, respectively. The average residual error differences in the rotation, roll, and pitch were 0.34° ± 0.08°, 0.13° ± 0.09°, and 0.12° ± 0.10°, respectively. In the patient study, the average residual error differences in the vertical, longitudinal, and lateral directions were 0.20 ± 0.16 mm, 0.30 ± 0.18 mm, 0.21 ± 0.18 mm, respectively. The average residual error differences in the rotation, roll, and pitch were 0.40°± 0.16°, 0.17° ± 0.13°, and 0.20° ± 0.14°, respectively. Overall, the average residual error differences were <0.4 mm in the translational directions and <0.5° in the rotational directions. ExacTrac X-ray image registration is comparable to TrueBeam cone-beam computed tomography image registration in intracranial treatments.

  17. In-situ EBSD study of deformation behavior of retained austenite in a low-carbon quenching and partitioning steel via uniaxial tensile tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Wan-song

    Using in-situ electron back-scattered diffraction and uniaxial tensile tests, this work mainly focuses on the deformation behavior of retained austenite (RA) in a low-carbon quenching and partitioning (Q&P) steel. In this paper, three different types of RA can be distinguished by location: RA grains at the triple edges, twinned austenite, and RA grains positioned between martensite. The results have shown that grains at the triple edges and twinned austenite could transform easily with increasing strain, i.e. are less stable when compared with RA grains distributed between martensite, which could resist a larger plastic deformation. Meanwhile, the strain leads to rotations of RA grains distributed at the triple edges and between martensite. Moreover, RA grains with a similar orientation underwent similar rotations at the same true strain. These RA grains rotated along a specific slip plane and slip direction, and the grain rotation is taken as a significant factor in improving the ductility of the steel. In addition, grain sizes of RA decreased gradually with an increase of true strain, and smaller (0–0.2 μm) grains were more capable of resisting the deformation. According to kernel average misorientation (KAM) analysis, strain distribution is preferentially localized near martensite–austenite phase boundaries and in the interior of martensite. The average KAM values increased continuously with increasing true strain. - Highlights: •The in-situ and ex-situ tensile specimens differ to some extent in mechanical properties. •Retained austenite grains at the triple edges and twinned austenite transformed easily at the early stage of true strain. •Film-like retained austenite grains only rotated prior to the transformation during straining. •Retained austenite grains with a similar orientation experienced similar rotations during the same true strain.

  18. Biological false-positive venereal disease research laboratory test in cerebrospinal fluid in the diagnosis of neurosyphilis - a case-control study.

    PubMed

    Zheng, S; Lin, R J; Chan, Y H; Ngan, C C L

    2018-03-01

    There is no clear consensus on the diagnosis of neurosyphilis. The Venereal Disease Research Laboratory (VDRL) test from cerebrospinal fluid (CSF) has traditionally been considered the gold standard for diagnosing neurosyphilis but is widely known to be insensitive. In this study, we compared the clinical and laboratory characteristics of true-positive VDRL-CSF cases with biological false-positive VDRL-CSF cases. We retrospectively identified cases of true and false-positive VDRL-CSF across a 3-year period received by the Immunology and Serology Laboratory, Singapore General Hospital. A biological false-positive VDRL-CSF is defined as a reactive VDRL-CSF with a non-reactive Treponema pallidum particle agglutination (TPPA)-CSF and/or negative Line Immuno Assay (LIA)-CSF IgG. A true-positive VDRL-CSF is a reactive VDRL-CSF with a concordant reactive TPPA-CSF and/or positive LIA-CSF IgG. During the study period, a total of 1254 specimens underwent VDRL-CSF examination. Amongst these, 60 specimens from 53 patients tested positive for VDRL-CSF. Of the 53 patients, 42 (79.2%) were true-positive cases and 11 (20.8%) were false-positive cases. In our setting, a positive non-treponemal serology has 97.6% sensitivity, 100% specificity, 100% positive predictive value and 91.7% negative predictive value for a true-positive VDRL-CSF based on our laboratory definition. HIV seropositivity was an independent predictor of a true-positive VDRL-CSF. Biological false-positive VDRL-CSF is common in a setting where patients are tested without first establishing a serological diagnosis of syphilis. Serological testing should be performed prior to CSF evaluation for neurosyphilis. © 2017 European Academy of Dermatology and Venereology.

  19. Automatic estimation of detector radial position for contoured SPECT acquisition using CT images on a SPECT/CT system.

    PubMed

    Liu, Ruijie Rachel; Erwin, William D

    2006-08-01

    An algorithm was developed to estimate noncircular orbit (NCO) single-photon emission computed tomography (SPECT) detector radius on a SPECT/CT imaging system using the CT images, for incorporation into collimator resolution modeling for iterative SPECT reconstruction. Simulated male abdominal (arms up), male head and neck (arms down) and female chest (arms down) anthropomorphic phantom, and ten patient, medium-energy SPECT/CT scans were acquired on a hybrid imaging system. The algorithm simulated inward SPECT detector radial motion and object contour detection at each projection angle, employing the calculated average CT image and a fixed Hounsfield unit (HU) threshold. Calculated radii were compared to the observed true radii, and optimal CT threshold values, corresponding to patient bed and clothing surfaces, were found to be between -970 and -950 HU. The algorithm was constrained by the 45 cm CT field-of-view (FOV), which limited the detected radii to < or = 22.5 cm and led to occasional radius underestimation in the case of object truncation by CT. Two methods incorporating the algorithm were implemented: physical model (PM) and best fit (BF). The PM method computed an offset that produced maximum overlap of calculated and true radii for the phantom scans, and applied that offset as a calculated-to-true radius transformation. For the BF method, the calculated-to-true radius transformation was based upon a linear regression between calculated and true radii. For the PM method, a fixed offset of +2.75 cm provided maximum calculated-to-true radius overlap for the phantom study, which accounted for the camera system's object contour detect sensor surface-to-detector face distance. For the BF method, a linear regression of true versus calculated radius from a reference patient scan was used as a calculated-to-true radius transform. Both methods were applied to ten patient scans. For -970 and -950 HU thresholds, the combined overall average root-mean-square (rms) error in radial position for eight patient scans without truncation were 3.37 cm (12.9%) for PM and 1.99 cm (8.6%) for BF, indicating BF is superior to PM in the absence of truncation. For two patient scans with truncation, the rms error was 3.24 cm (12.2%) for PM and 4.10 cm (18.2%) for BF. The slightly better performance of PM in the case of truncation is anomalous, due to FOV edge truncation artifacts in the CT reconstruction, and thus is suspect. The calculated NCO contour for a patient SPECT/CT scan was used with an iterative reconstruction algorithm that incorporated compensation for system resolution. The resulting image was qualitatively superior to the image obtained by reconstructing the data using the fixed radius stored by the scanner. The result was also superior to the image reconstructed using the iterative algorithm provided with the system, which does not incorporate resolution modeling. These results suggest that, under conditions of no or only mild lateral truncation of the CT scan, the algorithm is capable of providing radius estimates suitable for iterative SPECT reconstruction collimator geometric resolution modeling.

  20. Optimization of OT-MACH Filter Generation for Target Recognition

    NASA Technical Reports Server (NTRS)

    Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

    An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak to side lobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter quicker and more reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance where the true positive rate increased for the same average false positives per image.
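
    A generic sketch of the tuning loop described above is given below: adaptive-step gradient ascent over (alpha, beta, gamma) using finite differences of a composite feedback score. The composite_score function is a placeholder stand-in, not the JPL OT-MACH implementation.

```python
# Adaptive-step gradient ascent on three parameters via finite differences of a
# black-box score; composite_score is a placeholder (in the paper it would be
# built from correlation peak height and peak-to-sidelobe ratio).
import numpy as np

def composite_score(params):
    # stand-in objective with a maximum at (0.3, 0.5, 0.8)
    target = np.array([0.3, 0.5, 0.8])
    return -np.sum((np.asarray(params) - target) ** 2)

def adaptive_gradient_ascent(score, start, step=0.2, eps=1e-3, iters=100):
    params, current = np.asarray(start, float), score(start)
    for _ in range(iters):
        grad = np.array([(score(params + eps * e) - current) / eps
                         for e in np.eye(len(params))])
        candidate = params + step * grad
        new = score(candidate)
        if new > current:                 # accept the move and grow the step
            params, current, step = candidate, new, step * 1.2
        else:                             # reject the move and shrink the step
            step *= 0.5
    return params, current

best, value = adaptive_gradient_ascent(composite_score, start=[1.0, 1.0, 1.0])
print("alpha, beta, gamma =", np.round(best, 3), "score =", round(value, 5))
```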

  1. Automatic choroid cells segmentation and counting based on approximate convexity and concavity of chain code in fluorescence microscopic image

    NASA Astrophysics Data System (ADS)

    Lu, Weihua; Chen, Xinjian; Zhu, Weifang; Yang, Lei; Cao, Zhaoyuan; Chen, Haoyu

    2015-03-01

    In this paper, we proposed a method based on the Freeman chain code to segment and count rhesus choroid-retinal vascular endothelial cells (RF/6A) automatically in fluorescence microscopy images. The proposed method consists of four main steps. First, a threshold filter and morphological transform were applied to reduce the noise. Second, the boundary information was used to generate the Freeman chain codes. Third, the concave points were found based on the relationship between the difference of the chain code and the curvature. Finally, cell segmentation and counting were completed based on the number of concave points and the area and shape of the cells. The proposed method was tested on 100 fluorescence microscopic cell images; the average true positive rate (TPR) is 98.13% and the average false positive rate (FPR) is 4.47%. The preliminary results showed the feasibility and efficiency of the proposed method.

  2. Combining forecast weights: Why and how?

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim

    2012-09-01

    This paper proposes a procedure called forecast weight averaging, a specific combination of the forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and a simulation study, we have shown that model averaging methods such as variance model averaging, simple model averaging and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true, marginally, when applied to empirical business and economic data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI) and Average Lending Rate (ALR) of Malaysia.
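
    The general idea of averaging forecast weights obtained from different weighting schemes can be sketched as follows; the two schemes used here (equal weights and inverse-MSE weights) and the synthetic series are illustrative assumptions, not the paper's estimators or data.

```python
# Sketch: build forecast weights from two simple schemes, average the weight
# vectors ("forecast weight averaging"), and compare mean squared forecast errors.
import numpy as np

rng = np.random.default_rng(3)
T = 200
y = np.cumsum(rng.normal(0.2, 1.0, T))                     # synthetic series standing in for GDP/CPI/ALR
forecasts = np.vstack([y + rng.normal(0.0, 0.8, T),        # three hypothetical models' forecasts
                       y + rng.normal(0.3, 0.6, T),
                       y + rng.normal(-0.2, 1.2, T)])

train, test = slice(0, 150), slice(150, T)
mse_train = ((forecasts[:, train] - y[train]) ** 2).mean(axis=1)

w_equal = np.full(3, 1 / 3)                                # scheme 1: simple model averaging
w_inv_mse = (1 / mse_train) / np.sum(1 / mse_train)        # scheme 2: inverse-MSE weights
w_combined = (w_equal + w_inv_mse) / 2                     # forecast weight averaging

def msfe(w):
    return ((w @ forecasts[:, test] - y[test]) ** 2).mean()

for name, w in [("equal", w_equal), ("inverse-MSE", w_inv_mse), ("averaged weights", w_combined)]:
    print(f"{name:>16}: MSFE = {msfe(w):.3f}")
```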

  3. Positive self-statements: power for some, peril for others.

    PubMed

    Wood, Joanne V; Perunovic, W Q Elaine; Lee, John W

    2009-07-01

    Positive self-statements are widely believed to boost mood and self-esteem, yet their effectiveness has not been demonstrated. We examined the contrary prediction that positive self-statements can be ineffective or even harmful. A survey study confirmed that people often use positive self-statements and believe them to be effective. Two experiments showed that among participants with low self-esteem, those who repeated a positive self-statement ("I'm a lovable person") or who focused on how that statement was true felt worse than those who did not repeat the statement or who focused on how it was both true and not true. Among participants with high self-esteem, those who repeated the statement or focused on how it was true felt better than those who did not, but to a limited degree. Repeating positive self-statements may benefit certain people, but backfire for the very people who "need" them the most.

  4. A soft kinetic data structure for lesion border detection.

    PubMed

    Kockara, Sinan; Mete, Mutlu; Yip, Vincent; Lee, Brendan; Aydin, Kemal

    2010-06-15

    Medical imaging and image processing techniques, ranging from microscopic to macroscopic, have become one of the main components of diagnostic procedures to assist dermatologists in their medical decision-making processes. Computer-aided segmentation and border detection on dermoscopic images is one of the core components of diagnostic procedures and therapeutic interventions for skin cancer. Automated assessment tools for dermoscopic images have become an important research field mainly because of inter- and intra-observer variations in human interpretations. In this study, a novel approach, the graph spanner, for automatic border detection in dermoscopic images is proposed. In this approach, a proximity graph representation of dermoscopic images is presented in order to detect regions and borders of skin lesions. The graph spanner approach is examined on a set of 100 dermoscopic images whose borders, manually drawn by a dermatologist, are used as the ground truth. Error rates, false positives and false negatives, along with true positives and true negatives, are quantified by digitally comparing results with the manually determined borders from a dermatologist. The results show that the highest precision and recall rates obtained in determining lesion boundaries are 100%. However, the accuracy of assessment averages out at 97.72% and the mean border error is 2.28% for the whole dataset.

  5. The Influence of Averageness on Adults' Perceptions of Attractiveness: The Effect of Early Visual Deprivation.

    PubMed

    Vingilis-Jaremko, Larissa; Maurer, Daphne; Rhodes, Gillian; Jeffery, Linda

    2016-08-03

    Adults who missed early visual input because of congenital cataracts later have deficits in many aspects of face processing. Here we investigated whether they make normal judgments of facial attractiveness. In particular, we studied whether their perceptions are affected normally by a face's proximity to the population mean, as is true of typically developing adults, who find average faces to be more attractive than most other faces. We compared the judgments of facial attractiveness of 12 cataract-reversal patients to norms established from 36 adults with normal vision. Participants viewed pairs of adult male and adult female faces that had been transformed 50% toward and 50% away from their respective group averages, and selected which face was more attractive. Averageness influenced patients' judgments of attractiveness, but to a lesser extent than controls. The results suggest that cataract-reversal patients are able to develop a system for representing faces with a privileged position for an average face, consistent with evidence from identity aftereffects. However, early visual experience is necessary to set up the neural architecture necessary for averageness to influence perceptions of attractiveness with its normal potency. © The Author(s) 2016.

  6. Nuclear IHC enumeration: A digital phantom to evaluate the performance of automated algorithms in digital pathology.

    PubMed

    Niazi, Muhammad Khalid Khan; Abas, Fazly Salleh; Senaras, Caglar; Pennell, Michael; Sahiner, Berkman; Chen, Weijie; Opfer, John; Hasserjian, Robert; Louissaint, Abner; Shana'ah, Arwa; Lozanski, Gerard; Gurcan, Metin N

    2018-01-01

    Automatic and accurate detection of positive and negative nuclei from images of immunostained tissue biopsies is critical to the success of digital pathology. The evaluation of most nuclei detection algorithms relies on manually generated ground truth prepared by pathologists, which is unfortunately time-consuming and suffers from inter-pathologist variability. In this work, we developed a digital immunohistochemistry (IHC) phantom that can be used for evaluating computer algorithms for enumeration of IHC positive cells. Our phantom development consists of two main steps, 1) extraction of the individual as well as nuclei clumps of both positive and negative nuclei from real WSI images, and 2) systematic placement of the extracted nuclei clumps on an image canvas. The resulting images are visually similar to the original tissue images. We created a set of 42 images with different concentrations of positive and negative nuclei. These images were evaluated by four board certified pathologists in the task of estimating the ratio of positive to total number of nuclei. The resulting concordance correlation coefficients (CCC) between the pathologist and the true ratio range from 0.86 to 0.95 (point estimates). The same ratio was also computed by an automated computer algorithm, which yielded a CCC value of 0.99. Reading the phantom data with known ground truth, the human readers show substantial variability and lower average performance than the computer algorithm in terms of CCC. This shows the limitation of using a human reader panel to establish a reference standard for the evaluation of computer algorithms, thereby highlighting the usefulness of the phantom developed in this work. Using our phantom images, we further developed a function that can approximate the true ratio from the area of the positive and negative nuclei, hence avoiding the need to detect individual nuclei. The predicted ratios of 10 held-out images using the function (trained on 32 images) are within ±2.68% of the true ratio. Moreover, we also report the evaluation of a computerized image analysis method on the synthetic tissue dataset.
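
    For reference, here is a minimal sketch of the concordance correlation coefficient (CCC) used above to compare estimated and true positive-nuclei ratios; the paired values are invented.

```python
# Lin's concordance correlation coefficient on invented paired ratios.
import numpy as np

def concordance_cc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

true_ratio = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85])
estimated  = np.array([0.12, 0.22, 0.43, 0.52, 0.73, 0.80])
print(f"CCC = {concordance_cc(true_ratio, estimated):.3f}")
```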

  7. SU-F-J-42: Comparison of Varian TrueBeam Cone-Beam CT and BrainLab ExacTrac X-Ray for Cranial Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J; Shi, W; Andrews, D

    2016-06-15

    Purpose: To compare online image registrations of TrueBeam cone-beam CT (CBCT) and BrainLab ExacTrac x-ray imaging systems for cranial radiotherapy. Method: Phantom and patient studies were performed on a Varian TrueBeam STx linear accelerator (Version 2.5), which is integrated with a BrainLab ExacTrac imaging system (Version 6.1.1). The phantom study was based on a Rando head phantom, which was designed to evaluate isocenter-location dependence of the image registrations. Ten isocenters were selected at various locations in the phantom, which represented clinical treatment sites. CBCT and ExacTrac x-ray images were taken when the phantom was located at each isocenter. The patient study included thirteen patients. CBCT and ExacTrac x-ray images were taken at each patient’s treatment position. Six-dimensional image registrations were performed on CBCT and ExacTrac, and residual errors calculated from CBCT and ExacTrac were compared. Results: In the phantom study, the average residual-error differences between CBCT and ExacTrac image registrations were: 0.16±0.10 mm, 0.35±0.20 mm, and 0.21±0.15 mm, in the vertical, longitudinal, and lateral directions, respectively. The average residual-error differences in the rotation, roll, and pitch were: 0.36±0.11 degree, 0.14±0.10 degree, and 0.12±0.10 degree, respectively. In the patient study, the average residual-error differences in the vertical, longitudinal, and lateral directions were: 0.13±0.13 mm, 0.37±0.21 mm, 0.22±0.17 mm, respectively. The average residual-error differences in the rotation, roll, and pitch were: 0.30±0.10 degree, 0.18±0.11 degree, and 0.22±0.13 degree, respectively. Larger residual-error differences (up to 0.79 mm) were observed in the longitudinal direction in the phantom and patient studies where isocenters were located in or close to frontal lobes, i.e., located superficially. Conclusion: Overall, the average residual-error differences were within 0.4 mm in the translational directions and were within 0.4 degree in the rotational directions.

  8. Automated Detection of Atrial Fibrillation Based on Time-Frequency Analysis of Seismocardiograms.

    PubMed

    Hurnanen, Tero; Lehtonen, Eero; Tadi, Mojtaba Jafari; Kuusela, Tom; Kiviniemi, Tuomas; Saraste, Antti; Vasankari, Tuija; Airaksinen, Juhani; Koivisto, Tero; Pankaala, Mikko

    2017-09-01

    In this paper, a novel method to detect atrial fibrillation (AFib) from a seismocardiogram (SCG) is presented. The proposed method is based on linear classification of the spectral entropy and a heart rate variability index computed from the SCG. The performance of the developed algorithm is demonstrated on data gathered from 13 patients in clinical setting. After motion artifact removal, in total 119 min of AFib data and 126 min of sinus rhythm data were considered for automated AFib detection. No other arrhythmias were considered in this study. The proposed algorithm requires no direct heartbeat peak detection from the SCG data, which makes it tolerant against interpersonal variations in the SCG morphology, and noise. Furthermore, the proposed method relies solely on the SCG and needs no complementary electrocardiography to be functional. For the considered data, the detection method performs well even on relatively low quality SCG signals. Using a majority voting scheme that takes five randomly selected segments from a signal and classifies these segments using the proposed algorithm, we obtained an average true positive rate of [Formula: see text] and an average true negative rate of [Formula: see text] for detecting AFib in leave-one-out cross-validation. This paper facilitates adoption of microelectromechanical sensor based heart monitoring devices for arrhythmia detection.
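
    One of the two features named above, the spectral entropy of a signal segment, can be sketched as below from the normalized FFT power spectrum; the synthetic SCG-like segments and sampling rate are assumptions.

```python
# Spectral entropy of a signal segment from its normalized power spectrum.
import numpy as np

def spectral_entropy(x, normalize=True):
    psd = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    p = psd / psd.sum()                                # normalized power distribution
    p = p[p > 0]
    h = -np.sum(p * np.log2(p))                        # Shannon entropy in bits
    return h / np.log2(len(psd)) if normalize else h   # optionally scale to [0, 1]

fs = 200.0
t = np.arange(0, 10, 1 / fs)
regular = np.sin(2 * np.pi * 1.2 * t)                  # rhythmic segment: low entropy
irregular = regular + 0.8 * np.random.default_rng(4).standard_normal(t.size)  # noisier: higher entropy
print(f"spectral entropy (regular):   {spectral_entropy(regular):.3f}")
print(f"spectral entropy (irregular): {spectral_entropy(irregular):.3f}")
```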

  9. 40 CFR 89.111 - Averaging, banking, and trading of exhaust emissions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Averaging, banking, and trading of... ENGINES Emission Standards and Certification Provisions § 89.111 Averaging, banking, and trading of exhaust emissions. Regulations regarding the availability of an averaging, banking, and trading program...

  10. 40 CFR 91.103 - Averaging, banking, and trading of exhaust emission credits.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Averaging, banking, and trading of... Standards and Certification Provisions § 91.103 Averaging, banking, and trading of exhaust emission credits. Regulations regarding averaging, banking, and trading provisions along with applicable recordkeeping...

  11. Measuring true localization accuracy in super resolution microscopy with DNA-origami nanostructures

    NASA Astrophysics Data System (ADS)

    Reuss, Matthias; Fördős, Ferenc; Blom, Hans; Öktem, Ozan; Högberg, Björn; Brismar, Hjalmar

    2017-02-01

    A common method to assess the performance of (super resolution) microscopes is to use the localization precision of emitters as an estimate for the achieved resolution. Naturally, this is widely used in super resolution methods based on single molecule stochastic switching. This concept suffers from the fact that it is hard to calibrate measures against a real sample (a phantom), because true absolute positions of emitters are almost always unknown. For this reason, resolution estimates are potentially biased in an image since one is blind to true position accuracy, i.e. deviation in position measurement from true positions. We have solved this issue by imaging nanorods fabricated with DNA-origami. The nanorods used are designed to have emitters attached at each end in a well-defined and highly conserved distance. These structures are widely used to gauge localization precision. Here, we additionally determined the true achievable localization accuracy and compared this figure of merit to localization precision values for two common super resolution microscope methods STED and STORM.

  12. Comparison of Newer IOL Power Calculation Methods for Eyes With Previous Radial Keratotomy

    PubMed Central

    Ma, Jack X.; Tang, Maolong; Wang, Li; Weikert, Mitchell P.; Huang, David; Koch, Douglas D.

    2016-01-01

    Purpose To evaluate the accuracy of the optical coherence tomography–based (OCT formula) and Barrett True K (True K) intraocular lens (IOL) calculation formulas in eyes with previous radial keratotomy (RK). Methods In 95 eyes of 65 patients, using the actual refraction following cataract surgery as target refraction, the predicted IOL power for each method was calculated. The IOL prediction error (PE) was obtained by subtracting the predicted IOL power from the implanted IOL power. The arithmetic IOL PE and median refractive PE were calculated and compared. Results All formulas except the True K produced hyperopic IOL PEs at 1 month, which decreased at ≥4 months (all P < 0.05). For the double-K Holladay 1, OCT formula, True K, and average of these three formulas (Average), the median absolute refractive PEs were, respectively, 0.78 diopters (D), 0.74 D, 0.60 D, and 0.59 D at 1 month; 0.69 D, 0.77 D, 0.77 D, and 0.61 D at 2 to 3 months; and 0.34 D, 0.65 D, 0.69 D, and 0.46 D at ≥4 months. The Average produced significantly smaller refractive PE than did the double-K Holladay 1 at 1 month (P < 0.05). There were no significant differences in refractive PEs among formulas at 4 months. Conclusions The OCT formula and True K were comparable to the double-K Holladay 1 method on the ASCRS (American Society of Cataract and Refractive Surgery) calculator. The Average IOL power on the ASCRS calculator may be considered when selecting the IOL power. Further improvements in the accuracy of IOL power calculation in RK eyes are desirable. PMID:27409468

  13. A real-time approach for heart rate monitoring using a Hilbert transform in seismocardiograms.

    PubMed

    Jafari Tadi, Mojtaba; Lehtonen, Eero; Hurnanen, Tero; Koskinen, Juho; Eriksson, Jonas; Pänkäälä, Mikko; Teräs, Mika; Koivisto, Tero

    2016-11-01

    Heart rate monitoring helps in assessing the functionality and condition of the cardiovascular system. We present a new real-time applicable approach for estimating beat-to-beat time intervals and heart rate in seismocardiograms acquired from a tri-axial microelectromechanical accelerometer. Seismocardiography (SCG) is a non-invasive method for heart monitoring which measures the mechanical activity of the heart. Measuring true beat-to-beat time intervals from SCG could be used for monitoring of the heart rhythm, for heart rate variability analysis and for many other clinical applications. In this paper we present the Hilbert adaptive beat identification technique for the detection of heartbeat timings and inter-beat time intervals in SCG from healthy volunteers in three different positions, i.e. supine, left and right recumbent. Our method is electrocardiogram (ECG) independent, as it does not require any ECG fiducial points to estimate the beat-to-beat intervals. The performance of the algorithm was tested against standard ECG measurements. The average true positive rate, positive prediction value and detection error rate for the different positions were, respectively, supine (95.8%, 96.0% and ≃0.6%), left (99.3%, 98.8% and ≃0.001%) and right (99.53%, 99.3% and ≃0.01%). High correlation and agreement was observed between SCG and ECG inter-beat intervals (r  >  0.99) for all positions, which highlights the capability of the algorithm for SCG heart monitoring from different positions. Additionally, we demonstrate the applicability of the proposed method in smartphone based SCG. In conclusion, the proposed algorithm can be used for real-time continuous unobtrusive cardiac monitoring, smartphone cardiography, and in wearable devices aimed at health and well-being applications.
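
    The sketch below is a simplified illustration inspired by the approach above (not the authors' algorithm): it takes the Hilbert-transform envelope of a band-passed synthetic SCG-like signal, detects envelope peaks, and reports beat-to-beat intervals and heart rate. The signal and all filter settings are assumptions.

```python
# Hilbert-envelope beat detection on a synthetic SCG-like signal (illustrative only).
import numpy as np
from scipy.signal import hilbert, find_peaks, butter, filtfilt

fs = 250.0
t = np.arange(0, 10, 1 / fs)
beat_times = np.arange(0.5, 10, 0.85)                      # ~70 bpm synthetic beats
scg = sum(np.exp(-((t - bt) ** 2) / (2 * 0.02 ** 2)) * np.sin(2 * np.pi * 20 * t)
          for bt in beat_times) + 0.05 * np.random.default_rng(5).standard_normal(t.size)

b, a = butter(4, [10, 40], btype="band", fs=fs)            # band-pass around systolic SCG content
envelope = np.abs(hilbert(filtfilt(b, a, scg)))
peaks, _ = find_peaks(envelope, distance=int(0.4 * fs),    # refractory period ~0.4 s
                      height=0.3 * envelope.max())

ibi = np.diff(peaks) / fs                                  # inter-beat intervals (s)
print(f"detected beats: {len(peaks)}, mean IBI: {ibi.mean():.3f} s, HR ~ {60 / ibi.mean():.1f} bpm")
```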

  14. Unbiased mean direction of paleomagnetic data and better estimate of paleolatitude

    NASA Astrophysics Data System (ADS)

    Hatakeyama, T.; Shibuya, H.

    2010-12-01

    In paleomagnetism, when we obtain only paleodirection data without paleointensities, we calculate Fisher-mean directions (I, D) and Fisher-mean VGP positions as the description of the mean field. However, Kono (1997) and Hatakeyama and Kono (2001) indicated that these averaged directions do not give unbiased estimates of the mean directions derived from the time-averaged field (TAF). Hatakeyama and Kono (2002) calculated TAF and paleosecular variation (PSV) models for the past 5 My, taking into account the biases due to the averaging of nonlinear functions, such as the summation of unit vectors in the Fisher statistics process. Here we will show a zonal TAF model based on the Hatakeyama and Kono TAF model. Moreover, we will introduce the bias angles in the mean direction due to the PSV and a method for determining true paleolatitudes, which represent the TAF, from paleodirections. This method will help tectonic studies, especially in the accurate estimation of paleolatitude in middle-latitude regions.
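
    For reference, the standard Fisher (vector-sum) mean direction that the abstract refers to can be computed as follows for directions given as declination and inclination in degrees; the sample directions are made up.

```python
# Fisher (vector-sum) mean direction from (declination, inclination) pairs in degrees.
import numpy as np

def fisher_mean(decs_deg, incs_deg):
    d, i = np.radians(decs_deg), np.radians(incs_deg)
    x, y, z = np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)  # unit vectors
    rx, ry, rz = x.sum(), y.sum(), z.sum()
    r = np.sqrt(rx**2 + ry**2 + rz**2)                                 # resultant length
    mean_dec = np.degrees(np.arctan2(ry, rx)) % 360.0
    mean_inc = np.degrees(np.arcsin(rz / r))
    return mean_dec, mean_inc, r

decs = [355.0, 10.0, 5.0, 350.0, 2.0]   # made-up declinations
incs = [48.0, 55.0, 60.0, 52.0, 58.0]   # made-up inclinations
print("Fisher mean (D, I, R):", tuple(round(v, 2) for v in fisher_mean(decs, incs)))
```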

  15. Rapid automated method for screening of enteric pathogens from stool specimens.

    PubMed Central

    Villasante, P A; Agulla, A; Merino, F J; Pérez, T; Ladrón de Guevara, C; Velasco, A C

    1987-01-01

    A total of 800 colonies suggestive of Salmonella, Shigella, or Yersinia species isolated on stool differential agar media were inoculated onto both conventional biochemical test media (triple sugar iron agar, urea agar, and phenylalanine agar) and Entero Pathogen Screen cards of the AutoMicrobic system (Vitek Systems, Inc., Hazelwood, Mo.). Based on the conventional tests, the AutoMicrobic system method yielded the following results: 587 true-negatives, 185 true-positives, 2 false-negatives, and 26 false-positives (sensitivity, 99%; specificity, 96%). Both true-positive and true-negative results were achieved considerably earlier than false results (P less than 0.001). The Entero Pathogen Screen card method is a fast, easy, and sensitive method for screening for Salmonella, Shigella, or Yersinia species. The impossibility of screening for oxidase-positive pathogens is a minor disadvantage of this method. PMID:3553230
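
    The sensitivity and specificity quoted above follow directly from the reported counts (185 true-positives, 2 false-negatives, 587 true-negatives, 26 false-positives), as the small re-computation below shows.

```python
# Sensitivity and specificity from the reported confusion counts.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=185, fn=2, tn=587, fp=26)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # ~98.9% and ~95.8%, i.e. 99% and 96%
```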

  16. Comparative evaluation of paired blood culture (aerobic/aerobic) and single blood culture, along with clinical importance in catheter versus peripheral line at a tertiary care hospital.

    PubMed

    Tarai, B; Das, P; Kumar, D; Budhiraja, S

    2012-01-01

    Paired blood culture (PBC) is uncommon practice in hospitals in India, leading to delayed and inadequate diagnosis. Also contamination remains a critical determinant in hampering the definitive diagnosis. To establish the need of PBC over single blood culture (SBC) along with the degree of contamination, this comparative retrospective study was initiated. We processed 2553 PBC and 4350 SBC in BacT/ALERT 3D (bioMerieux) between October 2010 and June 2011. The positive cultures were identified in VITEK 2 Compact (bioMerieux). True positivity and contaminants were also analyzed in 486 samples received from catheter and peripheral line. Out of 2553 PBC samples, positivity was seen in 350 (13.70%). In 4350 SBC samples, positivity was seen in 200 samples (4.59%). In PBC true pathogens were 267 (10.45%) and contaminants were 83 (3.25%), whereas in SBC 153 (3.51%) were true positives and contaminants were 47 (1.08%). Most of the blood cultures (99.27 %) grew within 72 h and 95.8% were isolated within 48 h. In 486 PBCs received from catheter/periphery (one each), catheter positivity was found in 85 (true positives were 48, false positives 37). In peripheral samples true positives were 50 and false positives were 8. Significantly higher positive rates were seen in PBCs compared with SBCs. Automated blood culture and identification methods significantly reduced the time required for processing of samples and also facilitated yield of diverse/rare organisms. Blood culture from catheter line had higher false positives than peripheral blood culture. Thus every positive result from a catheter must be correlated with clinical findings and requires further confirmation.

  17. A novel onset detection technique for brain-computer interfaces using sound-production related cognitive tasks in simulated-online system

    NASA Astrophysics Data System (ADS)

    Song, YoungJae; Sepulveda, Francisco

    2017-02-01

    Objective. Self-paced EEG-based BCIs (SP-BCIs) have traditionally been avoided due to two sources of uncertainty: (1) precisely when an intentional command is sent by the brain, i.e., the command onset detection problem, and (2) how different the intentional command is when compared to non-specific (or idle) states. Performance evaluation is also a problem and there are no suitable standard metrics available. In this paper we attempted to tackle these issues. Approach. Self-paced covert sound-production cognitive tasks (i.e., high pitch and siren-like sounds) were used to distinguish between intentional commands (IC) and idle states. The IC states were chosen for their ease of execution and negligible overlap with common cognitive states. Band power and a digital wavelet transform were used for feature extraction, and the Davies-Bouldin index was used for feature selection. Classification was performed using linear discriminant analysis. Main results. Performance was evaluated under offline and simulated-online conditions. For the latter, a performance score called true-false-positive (TFP) rate, ranging from 0 (poor) to 100 (perfect), was created to take into account both classification performance and onset timing errors. Averaging the results from the best performing IC task for all seven participants, a 77.7% true-positive (TP) rate was achieved in offline testing. For simulated-online analysis the best IC average TFP score was 76.67% (87.61% TP rate, 4.05% false-positive rate). Significance. Results were promising when compared to previous IC onset detection studies using motor imagery, in which best TP rates were reported as 72.0% and 79.7%, and which, crucially, did not take timing errors into account. Moreover, based on our literature review, there is no previous covert sound-production onset detection system for SP-BCIs. Results showed that the proposed onset detection technique and TFP performance metric have good potential for use in SP-BCIs.

  18. Object detectability at increased ambient lighting conditions.

    PubMed

    Pollard, Benjamin J; Chawla, Amarpreet S; Delong, David M; Hashimoto, Noriyuki; Samei, Ehsan

    2008-06-01

    Under typical dark conditions encountered in diagnostic reading rooms, a reader's pupils will contract and dilate as the visual focus intermittently shifts between the high luminance display and the darker background wall, resulting in increased visual fatigue and the degradation of diagnostic performance. A controlled increase of ambient lighting may, however, reduce the severity of these pupillary adjustments by minimizing the difference between the luminance level to which the eyes adapt while viewing an image (L(adp)) and the luminance level of diffusely reflected light from the area surrounding the display (L(s)). Although ambient lighting in reading rooms has conventionally been kept at a minimum to maintain the perceived contrast of film images, proper Digital Imaging and Communications in Medicine (DICOM) calibration of modern medical-grade liquid crystal displays can compensate for minor lighting increases with very little loss of image contrast. This paper describes two psychophysical studies developed to evaluate and refine optimum reading room ambient lighting conditions through the use of observational tasks intended to simulate real clinical practices. The first study utilized the biologic contrast response of the human visual system to determine a range of representative L(adp) values for typical medical images. Readers identified low contrast horizontal objects in circular foregrounds of uniform luminance (5, 12, 20, and 30 cd/m2) embedded within digitized mammograms. The second study examined the effect of increased ambient lighting on the detection of subtle objects embedded in circular foregrounds of uniform luminance (5, 12, and 35 cd/m2) centered within a constant background of 12 cd/m2 luminance. The images were displayed under a dark room condition (1 lux) and an increased ambient lighting level (50 lux) such that the luminance level of the diffusely reflected light from the background wall was approximately equal to the image L(adp) value of 12 cd/m2. Results from the first study demonstrated that observer true positive and false positive detection rates and true positive detection times were considerably better while viewing foregrounds at 12 and 20 cd/m2 than at the other foreground luminance levels. Results from the second study revealed that under increased room illuminance, the average true positive detection rate improved a statistically significant amount from 39.3% to 55.6% at 5 cd/m2 foreground luminance. Additionally, the true positive rate increased from 46.4% to 56.6% at 35 cd/m2 foreground luminance, and decreased slightly from 90.2% to 87.5% at 12 cd/m2 foreground luminance. False positive rates at all foreground luminance levels remained approximately constant with increased ambient lighting. Furthermore, under increased room illuminance, true positive detection times declined at every foreground luminance level, with the most considerable decrease (approximately 500 ms) at the 5 cd/m2 foreground luminance. The first study suggests that L(adp) of typical mammograms lies between 12 and 20 cd/m2, leading to an optimum reading room illuminance of approximately 50-80 lux. Findings from the second study provide psychophysical evidence that ambient lighting may be increased to a level within this range, potentially improving radiologist comfort, without deleterious effects on diagnostic performance.

  19. Statistics provide guidance for indigenous organic carbon detection on Mars missions.

    PubMed

    Sephton, Mark A; Carter, Jonathan N

    2014-08-01

    Data from the Viking and Mars Science Laboratory missions indicate the presence of organic compounds that are not definitively martian in origin. Both contamination and confounding mineralogies have been suggested as alternatives to indigenous organic carbon. Intuitive thought suggests that we are repeatedly obtaining data that confirms the same level of uncertainty. Bayesian statistics may suggest otherwise. If an organic detection method has a true positive to false positive ratio greater than one, then repeated organic matter detection progressively increases the probability of indigeneity. Bayesian statistics also reveal that methods with higher ratios of true positives to false positives give higher overall probabilities and that detection of organic matter in a sample with a higher prior probability of indigenous organic carbon produces greater confidence. Bayesian statistics, therefore, provide guidance for the planning and operation of organic carbon detection activities on Mars. Suggestions for future organic carbon detection missions and instruments are as follows: (i) On Earth, instruments should be tested with analog samples of known organic content to determine their true positive to false positive ratios. (ii) On the mission, for an instrument with a true positive to false positive ratio above one, it should be recognized that each positive detection of organic carbon will result in a progressive increase in the probability of indigenous organic carbon being present; repeated measurements, therefore, can overcome some of the deficiencies of a less-than-definitive test. (iii) For a fixed number of analyses, the highest true positive to false positive ratio method or instrument will provide the greatest probability that indigenous organic carbon is present. (iv) On Mars, analyses should concentrate on samples with highest prior probability of indigenous organic carbon; intuitive desires to contrast samples of high prior probability and low prior probability of indigenous organic carbon should be resisted.
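
    A minimal sketch of the Bayesian updating argument, assuming independent analyses that share a fixed true-positive to false-positive ratio (likelihood ratio); the prior and the rates below are illustrative, not mission values.

```python
def posterior_after_detections(prior, tp_rate, fp_rate, n_detections):
    """Posterior probability of indigenous organic carbon after n positive detections,
    assuming independent analyses with the same likelihood ratio tp_rate / fp_rate."""
    lr = tp_rate / fp_rate
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * lr ** n_detections   # each positive detection multiplies the odds by LR
    return post_odds / (1.0 + post_odds)

# Illustrative values: a weak prior and a modest instrument (LR = 2)
for n in range(0, 5):
    print(n, round(posterior_after_detections(prior=0.1, tp_rate=0.6, fp_rate=0.3, n_detections=n), 3))
```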

  20. Evaluation of surveillance case definition in the diagnosis of leptospirosis, using the Microscopic Agglutination Test: a validation study.

    PubMed

    Dassanayake, Dinesh L B; Wimalaratna, Harith; Agampodi, Suneth B; Liyanapathirana, Veranja C; Piyarathna, Thibbotumunuwe A C L; Goonapienuwala, Bimba L

    2009-04-22

    Leptospirosis is endemic in both urban and rural areas of Sri Lanka and there have been many outbreaks in the recent past. This study was aimed at validating the leptospirosis surveillance case definition, using the Microscopic Agglutination Test (MAT). The study population consisted of patients with undiagnosed acute febrile illness who were admitted to the medical wards of the Teaching Hospital Kandy, from 1st July 2007 to 31st July 2008. The subjects were screened to diagnose leptospirosis according to the leptospirosis case definition. MAT was performed on blood samples taken from each patient on the 7th day of fever. The leptospirosis case definition was evaluated with regard to sensitivity, specificity and predictive values, using a MAT titre ≥ 1:800 for confirming leptospirosis. A total of 123 patients were initially recruited, of which 73 had clinical features compatible with the surveillance case definition. Out of the 73, only 57 had a positive MAT result (true positives), leaving 16 as false positives. Out of the 50 who did not have clinical features compatible with the case definition, 45 had a negative MAT as well (true negatives); therefore 5 were false negatives. The total number of MAT positives was 62 out of 123. According to these results the test sensitivity was 91.94%, specificity 73.77%, and the positive and negative predictive values were 78.08% and 90%, respectively. Diagnostic accuracy of the test was 82.93%. This study confirms that the surveillance case definition has a very high sensitivity and negative predictive value, with an average specificity, in diagnosing leptospirosis, based on a MAT titre of ≥ 1:800.
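
    The reported performance figures can be reproduced from the two-by-two table implied by the abstract; a minimal arithmetic check:

```python
# Counts from the abstract (case definition as the test, MAT titre >= 1:800 as the reference)
tp, fp, tn, fn = 57, 16, 45, 5

sensitivity = tp / (tp + fn)                     # 57/62  ≈ 91.94%
specificity = tn / (tn + fp)                     # 45/61  ≈ 73.77%
ppv         = tp / (tp + fp)                     # 57/73  ≈ 78.08%
npv         = tn / (tn + fn)                     # 45/50  = 90.00%
accuracy    = (tp + tn) / (tp + fp + tn + fn)    # 102/123 ≈ 82.93%

for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                    ("PPV", ppv), ("NPV", npv), ("accuracy", accuracy)]:
    print(f"{name}: {100 * value:.2f}%")
```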

  1. WE-H-BRC-05: Catastrophic Error Metrics for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, S; Molloy, J

    Purpose: Intuitive evaluation of complex radiotherapy treatments is impractical, while data transfer anomalies create the potential for catastrophic treatment delivery errors. Contrary to prevailing wisdom, logical scrutiny can be applied to patient-specific machine settings. Such tests can be automated, applied at the point of treatment delivery and can be dissociated from prior states of the treatment plan, potentially revealing errors introduced early in the process. Methods: Analytical metrics were formulated for conventional and intensity modulated RT (IMRT) treatments. These were designed to assess consistency between monitor unit settings, wedge values, prescription dose and leaf positioning (IMRT). Institutional metric averages for 218 clinical plans were stratified over multiple anatomical sites. Treatment delivery errors were simulated using a commercial treatment planning system and metric behavior assessed via receiver-operator-characteristic (ROC) analysis. A positive result was returned if the erred plan metric value exceeded a given number of standard deviations, e.g. 2. The finding was declared true positive if the dosimetric impact exceeded 25%. ROC curves were generated over a range of metric standard deviations. Results: Data for the conventional treatment metric indicated standard deviations of 3%, 12%, 11%, 8%, and 5% for brain, pelvis, abdomen, lung and breast sites, respectively. Optimum error declaration thresholds yielded true positive rates (TPR) between 0.7 and 1, and false positive rates (FPR) between 0 and 0.2. Two proposed IMRT metrics possessed standard deviations of 23% and 37%. The superior metric returned TPR and FPR of 0.7 and 0.2, respectively, when both leaf position and MUs were modelled. Isolation to only leaf position errors yielded TPR and FPR values of 0.9 and 0.1. Conclusion: Logical tests can reveal treatment delivery errors and prevent large, catastrophic errors. Analytical metrics are able to identify errors in monitor units, wedging and leaf positions with favorable sensitivity and specificity. In part by Varian.
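
    A minimal sketch of the thresholding scheme described (flag a plan whose metric deviates from the institutional mean by more than k standard deviations, then sweep k to trace an ROC curve); the simulated metric values and the 25% dosimetric-impact labels are illustrative stand-ins for the clinical plans.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated metric values: most plans are consistent (mean 0, sd 1);
# a few "erred" plans have a shifted metric and a large dosimetric impact.
normal_metrics = rng.normal(0.0, 1.0, size=500)
erred_metrics  = rng.normal(4.0, 1.0, size=25)
metrics = np.concatenate([normal_metrics, erred_metrics])
impact_exceeds_25pct = np.concatenate([np.zeros(500, bool), np.ones(25, bool)])

mu, sigma = normal_metrics.mean(), normal_metrics.std()

def roc_point(k):
    """TPR/FPR when a plan is flagged if its metric deviates by more than k sigma."""
    flagged = np.abs(metrics - mu) > k * sigma
    tpr = np.mean(flagged[impact_exceeds_25pct])
    fpr = np.mean(flagged[~impact_exceeds_25pct])
    return tpr, fpr

for k in (1.0, 2.0, 3.0):
    tpr, fpr = roc_point(k)
    print(f"k = {k}: TPR = {tpr:.2f}, FPR = {fpr:.2f}")
```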

  2. PREDICTION OF SOLAR FLARE SIZE AND TIME-TO-FLARE USING SUPPORT VECTOR MACHINE REGRESSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boucheron, Laura E.; Al-Ghraibah, Amani; McAteer, R. T. James

    We study the prediction of solar flare size and time-to-flare using 38 features describing magnetic complexity of the photospheric magnetic field. This work uses support vector regression to formulate a mapping from the 38-dimensional feature space to a continuous-valued label vector representing flare size or time-to-flare. When we consider flaring regions only, we find an average error in estimating flare size of approximately half a geostationary operational environmental satellite (GOES) class. When we additionally consider non-flaring regions, we find an increased average error of approximately three-fourths a GOES class. We also consider thresholding the regressed flare size for the experiment containing both flaring and non-flaring regions and find a true positive rate of 0.69 and a true negative rate of 0.86 for flare prediction. The results for both of these size regression experiments are consistent across a wide range of predictive time windows, indicating that the magnetic complexity features may be persistent in appearance long before flare activity. This is supported by our larger error rates of some 40 hr in the time-to-flare regression problem. The 38 magnetic complexity features considered here appear to have discriminative potential for flare size, but their persistence in time makes them less discriminative for the time-to-flare problem.
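
    A minimal sketch of support vector regression from a 38-dimensional feature vector to a continuous flare-size label, using scikit-learn; the feature matrix and labels below are random placeholders, not the magnetic-complexity features of the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 38))                         # stand-in for 38 magnetic-complexity features
y = X[:, 0] * 0.5 + rng.normal(scale=0.5, size=300)    # stand-in continuous flare-size label

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print("mean absolute error:", -scores.mean())
```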

  3. 40 CFR 63.1975 - How do I calculate the 3-hour block average used to demonstrate compliance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 12 2010-07-01 2010-07-01 true How do I calculate the 3-hour block average used to demonstrate compliance? 63.1975 Section 63.1975 Protection of Environment ENVIRONMENTAL... block average used to demonstrate compliance? Averages are calculated in the same way as they are...

  4. New recording package for VACM provides sensor flexibility

    USGS Publications Warehouse

    Strahle, William J.; Worrilow, S. E.; Fucile, S. E.; Martini, Marinna A.

    1994-01-01

    For the past three decades, the VACM has been a standard for ocean current measurements. A VACM is a true vector-averaging instrument that computes north and east current vectors and averages temperature continuously over a specified interval. It keeps a running total of rotor counts, and records one-shot samples of compass, vane position and time. Adding peripheral sensors to the data stream was easy. In today's economy, it seems imperative that operational centers concentrate on upgrading present inventory rather than purchasing newer instruments that often fall short of the flexible measurement platforms with high data capacities required by most researchers today. PCMCIA cards are rapidly becoming an industry standard with a wide range of storage capacities. By upgrading the VACM to a PCMCIA storage system with a flexible microprocessor, the VACM should continue to be a viable instrument into the next century.
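
    A minimal sketch of the vector-averaging idea: each current sample is decomposed into north and east components, the components are averaged, and the mean vector is recombined. The sampling scheme of the actual VACM electronics (rotor counts, one-shot compass and vane samples) is not reproduced here.

```python
import math

def vector_average(speeds, directions_deg):
    """Vector-average current samples (speed, direction toward which the flow moves, degrees true)."""
    n = len(speeds)
    north = sum(s * math.cos(math.radians(d)) for s, d in zip(speeds, directions_deg)) / n
    east  = sum(s * math.sin(math.radians(d)) for s, d in zip(speeds, directions_deg)) / n
    speed = math.hypot(north, east)
    direction = math.degrees(math.atan2(east, north)) % 360.0
    return north, east, speed, direction

# two samples of equal speed from opposite directions largely cancel
print(vector_average([10.0, 10.0, 10.0], [0.0, 180.0, 90.0]))
```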

  5. The value of a transformation zone component in anal cytology to detect HSIL.

    PubMed

    Roberts, Jennifer M; Jin, Fengyi; Thurloe, Julia K; Ekman, Deborah; Adams, Marjorie K; McDonald, Ross L; Biro, Clare; Poynten, I Mary; Grulich, Andrew E; Farnsworth, Annabelle

    2016-08-01

    In a cytology-based screening program intended to prevent anal cancer, the anal transformation zone (TZ) should be adequately sampled because it is the site most susceptible to the development of the cancer precursor, high-grade squamous intraepithelial lesion (HSIL). An adequate TZ component is defined as comprising at least 10 rectal columnar or squamous metaplastic cells. In the current study, the authors examined whether the presence of a TZ component in anal cytology correlated with the detection of histological HSIL. In a natural history study of anal human papillomavirus infection in homosexual men, all participants underwent liquid-based cytology and high-resolution anoscopy (HRA) with or without biopsy at each visit. True-negative cytology (negative cytology with non-HSIL biopsy or negative HRA), false-negative cytology (negative cytology with HSIL biopsy), and true-positive cytology (abnormal cytology with HSIL biopsy) were compared with regard to the presence or absence of a TZ component. Of 617 participants, baseline results included 155 true-positive results, 191 true-negative results, and 31 false-negative results. The absence of an adequate TZ component was found to be significantly higher for false-negative (32.3%) than for either true-positive (11.0%; P = .0034) or true-negative (13.1%; P = .0089) results. Significantly more false-negative cases lacked a TZ component compared with either true-positive or true-negative cases. TZ cells may be an important indicator of sample quality for anal cytology because, unlike cervical sampling, the anal canal is not visualized during cytology sampling. Cancer Cytopathol 2016;124:596-601. © 2016 American Cancer Society.

  6. Comparison of the egg flotation and egg candling techniques for estimating incubation day of Canada Goose nests

    USGS Publications Warehouse

    Reiter, M.E.; Andersen, D.E.

    2008-01-01

    Both egg flotation and egg candling have been used to estimate incubation day (often termed nest age) in nesting birds, but little is known about the relative accuracy of these two techniques. We used both egg flotation and egg candling to estimate incubation day for Canada Geese (Branta canadensis interior) nesting near Cape Churchill, Manitoba, from 2000 to 2007. We modeled variation in the difference between estimates of incubation day using each technique as a function of true incubation day, as well as variation in error rates with each technique as a function of true incubation day. We also evaluated the effect of error in the estimated incubation day on estimates of daily survival rate (DSR) and nest success using simulations. The mean difference between concurrent estimates of incubation day based on egg flotation minus egg candling at the same nest was 0.85 ± 0.06 (SE) days. The positive difference in favor of egg flotation and the magnitude of the difference in estimates of incubation day did not vary as a function of true incubation day. Overall, both egg flotation and egg candling overestimated incubation day early in incubation and underestimated incubation day later in incubation. The average difference between true hatch date and estimated hatch date did not differ from zero (days) for egg flotation, but egg candling overestimated true hatch date by about 1 d (true - estimated; days). Our simulations suggested that error associated with estimating the incubation day of nests and subsequently exposure days using either egg candling or egg flotation would have minimal effects on estimates of DSR and nest success. Although egg flotation was slightly less biased, both methods provided comparable and accurate estimates of incubation day and subsequent estimates of hatch date and nest success throughout the entire incubation period. © 2008 Association of Field Ornithologists.

  7. Case-control studies in neurosurgery.

    PubMed

    Nesvick, Cody L; Thompson, Clinton J; Boop, Frederick A; Klimo, Paul

    2014-08-01

    Observational studies, such as cohort and case-control studies, are valuable instruments in evidence-based medicine. Case-control studies, in particular, are becoming increasingly popular in the neurosurgical literature due to their low cost and relative ease of execution; however, no one has yet systematically assessed these types of studies for quality in methodology and reporting. The authors performed a literature search using PubMed/MEDLINE to identify all studies that explicitly identified themselves as "case-control" and were published in the JNS Publishing Group journals (Journal of Neurosurgery, Journal of Neurosurgery: Pediatrics, Journal of Neurosurgery: Spine, and Neurosurgical Focus) or Neurosurgery. Each paper was evaluated for 22 descriptive variables and then categorized as having either met or missed the basic definition of a case-control study. All studies that evaluated risk factors for a well-defined outcome were considered true case-control studies. The authors sought to identify key features or phrases that were or were not predictive of a true case-control study. Those papers that satisfied the definition were further evaluated using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist. The search detected 67 papers that met the inclusion criteria, of which 32 (48%) represented true case-control studies. The frequency of true case-control studies has not changed with time. Use of odds ratios (ORs) and logistic regression (LR) analysis were strong positive predictors of true case-control studies (for odds ratios, OR 15.33 and 95% CI 4.52-51.97; for logistic regression analysis, OR 8.77 and 95% CI 2.69-28.56). Conversely, negative predictors included focus on a procedure/intervention (OR 0.35, 95% CI 0.13-0.998) and use of the word "outcome" in the Results section (OR 0.23, 95% CI 0.082-0.65). After exclusion of nested case-control studies, the negative correlation between focus on a procedure/intervention and true case-control studies was strengthened (OR 0.053, 95% CI 0.0064-0.44). There was a trend toward a negative association between the use of survival analysis or Kaplan-Meier curves and true case-control studies (OR 0.13, 95% CI 0.015-1.12). True case-control studies were no more likely than their counterparts to use a potential study design "expert" (OR 1.50, 95% CI 0.57-3.95). The overall average STROBE score was 72% (range 50-86%). Examples of reporting deficiencies were reporting of bias (28%), missing data (55%), and funding (44%). The results of this analysis show that the majority of studies in the neurosurgical literature that identify themselves as "case-control" studies are, in fact, labeled incorrectly. Positive and negative predictors were identified. The authors provide several recommendations that may reverse the incorrect and inappropriate use of the term "case-control" and improve the quality of design and reporting of true case-control studies in neurosurgery.

  8. Diurnal rhythm and concordance between objective and subjective hot flashes: the Hilo Women's Health Study.

    PubMed

    Sievert, Lynnette L; Reza, Angela; Mills, Phoebe; Morrison, Lynn; Rahberg, Nichole; Goodloe, Amber; Sutherland, Michael; Brown, Daniel E

    2010-01-01

    The aims of this study were to test for a diurnal pattern in hot flashes in a multiethnic population living in a hot, humid environment and to examine the rates of concordance between objective and subjective measures of hot flashes using ambulatory and laboratory measures. Study participants aged 45 to 55 years were recruited from the general population of Hilo, HI. Women wore a Biolog hot flash monitor (UFI, Morro Bay, CA), kept a diary for 24 hours, and also participated in 3-hour laboratory measures (n = 199). Diurnal patterns were assessed using polynomial regression. For each woman, objectively recorded hot flashes that matched subjective experience were treated as true-positive readings. Subjective hot flashes were considered the standard for computing false-positive and false-negative readings. True-positive, false-positive, and false-negative readings were compared across ethnic groups by chi-square analyses. Frequencies of sternal, nuchal, and subjective hot flashes peaked at 15:00 ± 1 hour with no difference by ethnicity. Laboratory results supported the pattern seen in ambulatory monitoring. Sternal and nuchal monitoring showed the same frequency of true-positive measures, but nonsternal electrodes picked up more false-positive readings. Laboratory monitoring showed very low frequencies of false negatives. There were no ethnic differences in the frequency of true-positive or false-positive measures. Women of European descent were more likely to report hot flashes that were not objectively demonstrated (false-negative measures). The diurnal pattern and peak in hot flash occurrence in the hot humid environment of Hilo were similar to results from more temperate environments. Lack of variation in sternal versus nonsternal measures and in true-positive measures across ethnicities suggests no appreciable effect of population variation in sweating patterns.

  9. Diurnal rhythm and concordance between objective and subjective hot flashes: The Hilo Women’s Health Study

    PubMed Central

    Sievert, Lynnette L.; Reza, Angela; Mills, Phoebe; Morrison, Lynn; Rahberg, Nichole; Goodloe, Amber; Sutherland, Michael; Brown, Daniel E.

    2010-01-01

    Objective To test for a diurnal pattern in hot flashes in a multi-ethnic population living in a hot, humid environment. To examine rates of concordance between objective and subjective measures of hot flashes using ambulatory and laboratory measures. Methods Study participants aged 45–55 were recruited from the general population of Hilo, Hawaii. Women wore a Biolog hot flash monitor, kept a diary for 24-hours, and also participated in 3-hour laboratory measures (n=199). Diurnal patterns were assessed using polynomial regression. For each woman, objectively recorded hot flashes that matched subjective experience were treated as true positive readings. Subjective hot flashes were considered the standard for computing false positive and false negative readings. True positive, false positive, and false negative readings were compared across ethnic groups by chi-square analyses. Results Frequencies of sternal, nuchal and subjective hot flashes peaked at 15:00 ± 1 hour with no difference by ethnicity. Laboratory results supported the pattern seen in ambulatory monitoring. Sternal and nuchal monitoring showed the same frequency of true positive measures, but non-sternal electrodes picked up more false positive readings. Laboratory monitoring showed very low frequencies of false negatives. There were no ethnic differences in the frequency of true positive or false positive measures. Women of European descent were more likely to report hot flashes that were not objectively demonstrated (false negative measures). Conclusions The diurnal pattern and peak in hot flash occurrence in the hot humid environment of Hilo was similar to results from more temperate environments. Lack of variation in sternal vs. non-sternal measures, and in true positive measures across ethnicities suggests no appreciable effect of population variation in sweating patterns. PMID:20220538

  10. True self-alienation positively predicts reports of mindwandering.

    PubMed

    Vess, Matthew; Leal, Stephanie A; Hoeldtke, Russell T; Schlegel, Rebecca J; Hicks, Joshua A

    2016-10-01

    Two studies assessed the relationship between feelings of uncertainty about who one truly is (i.e., true self-alienation) and self-reported task-unrelated thoughts (i.e., mindwandering) during performance tasks. Because true self-alienation is conceptualized as the subjective disconnect between conscious awareness and actual experience, we hypothesized that greater feelings of true self-alienation would positively relate to subjective reports of mindwandering. Two convergent studies supported this hypothesis. Moreover, this relationship could not consistently be accounted for by the independent influence of other aspects of authenticity, negative mood, mindfulness, or broad personality dimensions. These findings suggest that individual differences in true self-alienation are reliably associated with subjective reports of mindwandering. The implications of these findings for the true self-alienation construct, the ways that personality relates to mindwandering, and future research directions focused on curtailing mindwandering and improving performance and achievement are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Chronic bacterial osteomyelitis: prospective comparison of (18)F-FDG imaging with a dual-head coincidence camera and (111)In-labelled autologous leucocyte scintigraphy.

    PubMed

    Meller, J; Köster, G; Liersch, T; Siefker, U; Lehmann, K; Meyer, I; Schreiber, K; Altenvoerde, G; Becker, W

    2002-01-01

    Indium-111-labelled white blood cells ((111)In-WBCs) are currently considered the tracer of choice in the diagnostic work-up of suspected active chronic osteomyelitis (COM). Previous studies in a limited number of patients, performed with dedicated PET systems, have shown that [(18)F]2'-deoxy-2-fluoro-D-glucose (FDG) imaging may offer at least similar diagnostic accuracy. The aim of this prospective study was to compare FDG imaging with a dual-head coincidence camera (DHCC) and (111)In-WBC imaging in patients with suspected COM. Thirty consecutive non-diabetic patients with possible COM underwent combined skeletal scintigraphy (30/30 patients), (111)In-WBC imaging (28/30 patients) and FDG-PET with a DHCC (30/30 patients). During diagnostic work-up, COM was proven in 11/36 regions of suspected skeletal infection and subsequently excluded in 25/36 regions. In addition, soft tissue infection was present in five patients and septic arthritis in three. (111)In-WBC imaging in 28 patients was true positive in 2/11 regions with proven COM and true negative in 21/23 regions without further evidence of COM. False-positive results occurred in two regions and false-negative results in nine regions suspected for COM. Most of the false-negative results (7/9) occurred in the central skeleton. If the analysis was restricted to the 18 regions with available histology (n=17) or culture (n=1), (111)In-WBC imaging was true positive in 2/18 regions, true negative in 8/18 regions, false negative in 7/18 regions and false positive in 1/18 regions. FDG-DHCC imaging was true positive in 11/11 regions with proven COM and true negative in 23/25 regions without further evidence of COM. False-positive results occurred in two regions. If the analysis was restricted to the 19 regions with available histology (n=18) or culture (n=1), FDG-DHCC imaging was true positive in 9/9 regions with proven COM and true negative in 10/10 regions without further evidence of COM. It is concluded that FDG-DHCC imaging is superior to (111)In-WBC scintigraphy in the diagnosis of COM in the central skeleton and therefore should be considered the method of choice for this indication. This seems to hold true for peripheral lesions as well, but in our series the number of cases with proven infection was too small to permit a final conclusion.

  12. Low statistical power in biomedical science: a review of three human research domains.

    PubMed

    Dumas-Mallet, Estelle; Button, Katherine S; Boraud, Thomas; Gonon, Francois; Munafò, Marcus R

    2017-02-01

    Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0-10% or 11-20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation.
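
    A minimal sketch of the kind of power calculation underlying the review, using a normal approximation for a two-sided two-sample test; the effect size and sample sizes below are illustrative, not values taken from the meta-analyses.

```python
from math import sqrt
from scipy.stats import norm

def two_sample_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for standardized effect size d."""
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    ncp = effect_size * sqrt(n_per_group / 2.0)   # non-centrality of the test statistic
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

# A small effect studied with a modest sample is badly underpowered:
print(round(two_sample_power(effect_size=0.2, n_per_group=30), 2))   # ~0.12
print(round(two_sample_power(effect_size=0.2, n_per_group=400), 2))  # ~0.81
```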

  13. Low statistical power in biomedical science: a review of three human research domains

    PubMed Central

    Dumas-Mallet, Estelle; Button, Katherine S.; Boraud, Thomas; Gonon, Francois

    2017-01-01

    Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0–10% or 11–20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation. PMID:28386409

  14. On the average configuration of the geomagnetic tail

    NASA Technical Reports Server (NTRS)

    Fairfield, D. H.

    1978-01-01

    Over 3000 hours of IMP-6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5 minute averages of B_Z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks than near midnight. The tail field projected in the solar magnetospheric equatorial plane deviates from the X axis due to flaring and solar wind aberration by an angle alpha = -0.9 Y_SM - 1.7, where Y_SM is in Earth radii and alpha is in degrees. After removing these effects, the Y component of the tail field is found to depend on interplanetary sector structure. During an away sector the B_Y component of the tail field is on average 0.5 gamma greater than that during a toward sector, a result that is true in both tail lobes and is independent of location across the tail.

  15. The use of an essay examination in evaluating medical students during the surgical clerkship.

    PubMed

    Smart, Blair J; Rinewalt, Daniel; Daly, Shaun C; Janssen, Imke; Luu, Minh B; Myers, Jonathan A

    2016-01-01

    Third-year medical students are graded according to subjective performance evaluations and standardized tests written by the National Board of Medical Examiners (NBME). Many "poor" standardized test takers believe the heavily weighted NBME does not evaluate their true fund of knowledge and would prefer a more open-ended forum to display their individualized learning experiences. Our study examined the use of an essay examination as part of the surgical clerkship evaluation. We retrospectively examined the final surgical clerkship grades of 781 consecutive medical students enrolled in a large urban academic medical center from 2005 to 2011. We examined final grades with and without the inclusion of the essay examination for all students using a paired t test and then sought any relationship between the essay and NBME using Pearson correlations. Final average with and without the essay examination was 72.2% vs 71.3% (P < .001), with the essay examination increasing average scores by 0.4, 1.8, and 2.5 points for those receiving high pass, pass, and fail, respectively. The essay decreased the average score by 0.4 points for those earning honors. Essay scores were found to overall positively correlate with the NBME (r = 0.32, P < .001). The inclusion of an essay examination as part of the third-year surgical core clerkship final did increase the final grade a modest degree, especially for those with lower scores who may identify themselves as "poor" standardized test takers. A more open-ended forum may allow these students an opportunity to overcome this deficiency and reveal their true fund of surgical knowledge. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. A framework for automatic information quality ranking of diabetes websites.

    PubMed

    Belen Sağlam, Rahime; Taskaya Temizel, Tugba

    2015-01-01

    Objective: When searching for particular medical information on the internet, the challenge lies in distinguishing the websites that are relevant to the topic and contain accurate information. In this article, we propose a framework that automatically identifies and ranks diabetes websites according to their relevance and information quality based on the website content. Design: The proposed framework ranks diabetes websites according to their content quality, relevance and evidence-based medicine. The framework combines information retrieval techniques with a lexical resource based on SentiWordNet, making it possible to work with biased and untrusted websites while, at the same time, ensuring content relevance. Measurement: The evaluation measurements used were Pearson correlation, true positives, false positives and accuracy. We tested the framework with a benchmark data set consisting of 55 websites with varying degrees of information quality problems. Results: The proposed framework gives good results that are comparable with the non-automated information quality measuring approaches in the literature. The correlation between the results of the proposed automated framework and the ground truth is 0.68 on average (p < 0.001), which is greater than that of the other automated methods proposed in the literature (average r score of 0.33).

  17. Guilty, but not ashamed: "true" self-conceptions influence affective responses to personal shortcomings.

    PubMed

    Vess, Matthew; Schlegel, Rebecca J; Hicks, Joshua A; Arndt, Jamie

    2014-06-01

    The current research examined how true self-conceptions (who a person believes he or she truly is) influence negative self-relevant emotions in response to shortcomings. In Study 1 (N = 83), an Internet sample of adults completed a measure of authenticity, reflected on a shortcoming or positive life event, and completed state shame and guilt measures. In Study 2 (N = 49), undergraduates focused on true versus other determined self-attributes, received negative performance feedback, and completed state shame and guilt measures. In Study 3 (N = 138), undergraduates focused on self-determined versus other determined self-aspects, reflected on a shortcoming or neutral event, and completed state shame, guilt, and self-esteem measures. In Study 4 (N = 75), undergraduates thought about true self-attributes, an achievement, or an ordinary event; received positive or negative performance feedback; and completed state shame and guilt measures. In Study 1, differences in true self-expression positively predicted shame-free guilt (but not guilt-free shame) following reminders of a shortcoming. Studies 2-4 found that experimental activation of true self-conceptions increased shame-free guilt and generally decreased guilt-free shame in response to negative evaluative experiences. The findings offer novel insights into true self-conceptions by revealing their impact on negative self-conscious emotions. © 2013 Wiley Periodicals, Inc.

  18. Radio structure effects on the optical and radio representations of the ICRF

    NASA Astrophysics Data System (ADS)

    Andrei, A. H.; da Silva Neto, D. N.; Assafin, M.; Vieira Martins, R.

    Silva Neto et al. (2002) show that, when the standard radio positions of the ICRF Ext.1 sources (Ma et al. 1998) are compared against their optical counterpart positions (Zacharias et al. 1999, Monet et al., 1998), a systematic pattern appears that depends on the radio structure index (Fey and Charlot, 2000). The optical-to-radio offsets produce a distribution suggesting that the coincidence of the optical and radio centroids is worse for the radio-extended than for the radio-compact sources. On average, the coincidence between the optical and radio centroids is found to be 7.9±1.1 mas smaller for the compact than for the extended sources. Such an effect is reasonably large, and certainly much too large to be due to errors in the VLBI radio positions. On the other hand, it is too small to be attributed to the errors in the optical positions, which moreover should be independent of the radio structure. Thus, other than a true pattern of centroid non-coincidence, the remaining explanation is that of a chance result. This paper summarizes the several statistical tests used to discard the chance explanation.

  19. Relationship between dairy cow genetic merit and profit on commercial spring calving dairy farms.

    PubMed

    Ramsbottom, G; Cromie, A R; Horan, B; Berry, D P

    2012-07-01

    Because not all animal factors influencing profitability can be included in total merit breeding indices for profitability, the association between animal total merit index and true profitability, taking cognisance of all factors associated with costs and revenues, is generally not known. One method to estimate such associations is at the herd level, associating herd average genetic merit with herd profitability. The objective of this study was to primarily relate herd average genetic merit for a range of traits, including the Irish total merit index, with indicators of performance, including profitability, using correlation and multiple regression analyses. Physical, genetic and financial performance data from 1131 Irish seasonal calving pasture-based dairy farms were available following edits; data on some herds were available for more than 1 year of the 3-year study period (2007 to 2009). Herd average economic breeding index (EBI) was associated with reduced herd average phenotypic milk yield but with greater milk composition, resulting in higher milk prices. Moderate positive correlations (0.26 to 0.61) existed between genetic merit for an individual trait and average herd performance for that trait (e.g. genetic merit for milk yield and average per cow milk yield). Following adjustment for year, stocking rate, herd size and quantity of purchased feed in the multiple regression analysis, average herd EBI was positively and linearly associated with net margin per cow and per litre as well as gross revenue output per cow and per litre. The change in net margin per cow per unit change in the total merit index was €1.94 (s.e. = 0.42), which was not different from the expectation of €2. This study, based on a large data set of commercial herds with accurate information on profitability and genetic merit, confirms that, after accounting for confounding factors, the change in herd profitability per unit change in herd genetic merit for the total merit index is within expectations.

  20. Deep mantle structure as a reference frame for movements in and on the Earth

    PubMed Central

    Torsvik, Trond H.; van der Voo, Rob; Doubrovine, Pavel V.; Burke, Kevin; Steinberger, Bernhard; Ashwal, Lewis D.; Trønnes, Reidar G.; Webb, Susan J.; Bull, Abigail L.

    2014-01-01

    Earth’s residual geoid is dominated by a degree-2 mode, with elevated regions above large low shear-wave velocity provinces on the core–mantle boundary beneath Africa and the Pacific. The edges of these deep mantle bodies, when projected radially to the Earth’s surface, correlate with the reconstructed positions of large igneous provinces and kimberlites since Pangea formed about 320 million years ago. Using this surface-to-core–mantle boundary correlation to locate continents in longitude and a novel iterative approach for defining a paleomagnetic reference frame corrected for true polar wander, we have developed a model for absolute plate motion back to earliest Paleozoic time (540 Ma). For the Paleozoic, we have identified six phases of slow, oscillatory true polar wander during which the Earth’s axis of minimum moment of inertia was similar to that of Mesozoic times. The rates of Paleozoic true polar wander (<1°/My) are compatible with those in the Mesozoic, but absolute plate velocities are, on average, twice as high. Our reconstructions generate geologically plausible scenarios, with large igneous provinces and kimberlites sourced from the margins of the large low shear-wave velocity provinces, as in Mesozoic and Cenozoic times. This absolute kinematic model suggests that a degree-2 convection mode within the Earth’s mantle may have operated throughout the entire Phanerozoic. PMID:24889632

  1. Deep mantle structure as a reference frame for movements in and on the Earth.

    PubMed

    Torsvik, Trond H; van der Voo, Rob; Doubrovine, Pavel V; Burke, Kevin; Steinberger, Bernhard; Ashwal, Lewis D; Trønnes, Reidar G; Webb, Susan J; Bull, Abigail L

    2014-06-17

    Earth's residual geoid is dominated by a degree-2 mode, with elevated regions above large low shear-wave velocity provinces on the core-mantle boundary beneath Africa and the Pacific. The edges of these deep mantle bodies, when projected radially to the Earth's surface, correlate with the reconstructed positions of large igneous provinces and kimberlites since Pangea formed about 320 million years ago. Using this surface-to-core-mantle boundary correlation to locate continents in longitude and a novel iterative approach for defining a paleomagnetic reference frame corrected for true polar wander, we have developed a model for absolute plate motion back to earliest Paleozoic time (540 Ma). For the Paleozoic, we have identified six phases of slow, oscillatory true polar wander during which the Earth's axis of minimum moment of inertia was similar to that of Mesozoic times. The rates of Paleozoic true polar wander (<1°/My) are compatible with those in the Mesozoic, but absolute plate velocities are, on average, twice as high. Our reconstructions generate geologically plausible scenarios, with large igneous provinces and kimberlites sourced from the margins of the large low shear-wave velocity provinces, as in Mesozoic and Cenozoic times. This absolute kinematic model suggests that a degree-2 convection mode within the Earth's mantle may have operated throughout the entire Phanerozoic.

  2. The Effect of atmospheric humidity level to the determination of Islamic Fajr/morning prayer time and twilight appearance

    NASA Astrophysics Data System (ADS)

    Rohmah, Nihayatur

    2016-11-01

    Islamic prayer times are based on the astronomical position of the Sun in the sky. One of them is the Fajr prayer. Its start is marked by the morning twilight, a white light spreading along the eastern horizon. However, determining the true time of twilight can be difficult. One of the reasons is the effect of atmospheric humidity on the appearance of morning twilight: the higher the humidity, the redder the twilight sky appears. This paper discusses this effect. It is shown that, even for the same solar position, the sky color can vary considerably. Observations at various solar dip angles have been made to study this effect, and visibility at different angles changes accordingly. From morning twilight images, we obtained an average solar dip for the Fajr prayer of -18°39'29.4".

  3. Abstract Interface Specifications for the A-7E Device Interface Module.

    DTIC Science & Technology

    1980-11-20

    Excerpt from the specification's dictionary of terms: access program +GSINSAGE+ with parameters p1: time (!+SINS attitude age+!), p2: time (!+SINS position age+!), p3: time (!+SINS velocity age+!); undesired event: %SINS not enabled%. !+SINS attitude age+!: the elapsed time since new valid attitude data was provided by the SINS hardware. !+SINS attitude valid+!: true iff SINS attitude data is valid ... horizontal plane. !+SINS position age+!: the elapsed time since new valid position data was provided by the SINS hardware. !+SINS position valid+!: true iff ...

  4. Cumulative risk of false positive test in relation to breast symptoms in mammography screening: a historical prospective cohort study.

    PubMed

    Singh, Deependra; Pitkäniemi, Janne; Malila, Nea; Anttila, Ahti

    2016-09-01

    Mammography has been found effective as the primary screening test for breast cancer. We estimated the cumulative probability of false positive screening test results with respect to symptom history reported at screen. A historical prospective cohort study was done using individual screening data from 413,611 women aged 50-69 years with 2,627,256 invitations for mammography screening between 1992 and 2012 in Finland. Symptoms (lump, retraction, and secretion) were reported at 56,805 visits, and 48,873 visits resulted in a false positive mammography result. Generalized linear models were used to estimate the probability of at least one false positive test and true positive at screening visits. The estimates were compared among women with and without symptoms history. The estimated cumulative probabilities were 18 and 6 % for false positive and true positive results, respectively. In women with a history of a lump, the cumulative probabilities of false positive test and true positive were 45 and 16 %, respectively, compared to 17 and 5 % with no reported lump. In women with a history of any given symptom, the cumulative probabilities of false positive test and true positive were 38 and 13 %, respectively. Likewise, women with a history of a 'lump and retraction' had the cumulative false positive probability of 56 %. The study showed higher cumulative risk of false positive tests and more cancers detected in women who reported symptoms compared to women who did not report symptoms at screen. The risk varies substantially, depending on symptom types and characteristics. Information on breast symptoms influences the balance of absolute benefits and harms of screening.
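
    As a simple illustration of what a cumulative false-positive probability means, assume a constant, independent per-round false-positive probability over repeated screening rounds; this is only an illustrative simplification, not the generalized linear models fitted to the individual-level data in the study.

```python
def cumulative_false_positive(per_screen_fp, n_screens):
    """Probability of at least one false positive over n independent screening rounds."""
    return 1.0 - (1.0 - per_screen_fp) ** n_screens

# e.g. a 2% per-round false-positive probability over 10 biennial rounds
print(round(cumulative_false_positive(0.02, 10), 3))   # ≈ 0.183, of the same order as the 18% reported
```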

  5. A Comparative Analysis of the Snort and Suricata Intrusion-Detection Systems

    DTIC Science & Technology

    2011-09-01

    Category: Test Rules. Test #6: Simple LFI Attack. Snort True Positive: Snort generated an alert based on the '/etc/passwd' string passed through an HTTP command. Suricata True Positive: Suricata generated an alert based on the '/etc/passwd' string passed through an HTTP command.

  6. Comparison of two data mining techniques in labeling diagnosis to Iranian pharmacy claim dataset: artificial neural network (ANN) versus decision tree model.

    PubMed

    Rezaei-Darzi, Ehsan; Farzadfar, Farshad; Hashemi-Meshkini, Amir; Navidi, Iman; Mahmoudi, Mahmoud; Varmaghani, Mehdi; Mehdipour, Parinaz; Soudi Alamdari, Mahsa; Tayefi, Batool; Naderimagham, Shohreh; Soleymani, Fatemeh; Mesdaghinia, Alireza; Delavari, Alireza; Mohammad, Kazem

    2014-12-01

    This study aimed to evaluate and compare the prediction accuracy of two data mining techniques, decision tree and artificial neural network (ANN) models, in labeling diagnoses to gastrointestinal prescriptions in Iran. The study was conducted in three phases: data preparation, training, and testing. A sample from a database consisting of 23 million pharmacy insurance claim records from 2004 to 2011 was used, in which a total of 330 prescriptions were assessed and used to train and test the models simultaneously. In the training phase, the selected prescriptions were assessed separately by a physician and a pharmacist and assigned a diagnosis. To test the performance of each model, k-fold stratified cross-validation was conducted in addition to measuring sensitivity and specificity. Overall, the two methods had very similar accuracies. Considering the weighted average of the true positive rate (sensitivity) and true negative rate (specificity), the decision tree showed slightly higher classification accuracy (83.3% and 96% versus 80.3% and 95.1%, respectively). However, when the weighted average of the ROC area (AUC between each class and all other classes) was measured, the ANN displayed higher accuracy in predicting the diagnosis (93.8% compared with 90.6%). According to the results of this study, the artificial neural network and decision tree models offer similar accuracy in labeling diagnoses to GI prescriptions.
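
    A minimal sketch of the comparison described, a decision tree versus a feed-forward neural network evaluated with stratified k-fold cross-validation in scikit-learn; the synthetic data stands in for the labeled prescription records.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder for prescription features labelled with a diagnosis class
X, y = make_classification(n_samples=330, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000,
                                                  random_state=0)),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy = {acc.mean():.3f}")
```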

  7. A fast automatic recognition and location algorithm for fetal genital organs in ultrasound images.

    PubMed

    Tang, Sheng; Chen, Si-ping

    2009-09-01

    Severe sex ratio imbalance at birth is now becoming an important issue in several Asian countries. Its leading immediate cause is prenatal sex-selective abortion following illegal sex identification by ultrasound scanning. In this paper, a fast automatic recognition and location algorithm for fetal genital organs is proposed as an effective method to help prevent ultrasound technicians from unethically and illegally identifying the sex of the fetus. This automatic recognition algorithm can be divided into two stages. In the 'rough' stage, a few pixels in the image, which are likely to represent the genital organs, are automatically chosen as points of interest (POIs) according to certain salient characteristics of fetal genital organs. In the 'fine' stage, a specifically supervised learning framework, which fuses an effective feature data preprocessing mechanism into the multiple classifier architecture, is applied to every POI. The basic classifiers in the framework are selected from three widely used classifiers: radial basis function network, backpropagation network, and support vector machine. The classification results of all the POIs are then synthesized to determine whether the fetal genital organ is present in the image, and to locate the genital organ within the positive image. Experiments were designed and carried out based on an image dataset comprising 658 positive images (images with fetal genital organs) and 500 negative images (images without fetal genital organs). The experimental results showed true-positive (TP) and true-negative (TN) rates of 80.5% (265 of 329) and 83.0% (415 of 500), respectively. The average computation time was 453 ms per image.
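
    A minimal sketch of the two-stage idea, a cheap screening rule that keeps candidate points of interest followed by a fused decision from several classifiers; the salience rule, the features, and the scikit-learn classifiers below are generic stand-ins for the RBF, backpropagation, and SVM classifiers of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Placeholder patch features; label 1 = patch contains the target structure.
X, y = make_classification(n_samples=1000, n_features=16, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Stage 1 ("rough"): keep only candidate patches whose first feature exceeds a cheap salience threshold.
candidates = X_test[:, 0] > -0.5

# Stage 2 ("fine"): fused decision of several classifiers, applied only to the candidates.
fused = VotingClassifier([
    ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
    ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
], voting="soft").fit(X_train, y_train)

pred = np.zeros(len(y_test), dtype=int)           # non-candidates default to negative
pred[candidates] = fused.predict(X_test[candidates])
tp_rate = np.mean(pred[y_test == 1] == 1)
tn_rate = np.mean(pred[y_test == 0] == 0)
print(f"TP rate = {tp_rate:.2f}, TN rate = {tn_rate:.2f}")
```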

  8. Assessment of real-time PCR cycle threshold values in Microsporum canis culture-positive and culture-negative cats in an animal shelter: a field study.

    PubMed

    Jacobson, Linda S; McIntyre, Lauren; Mykusz, Jenny

    2018-02-01

    Objectives Real-time PCR provides quantitative information, recorded as the cycle threshold (Ct) value, about the number of organisms detected in a diagnostic sample. The Ct value correlates with the number of copies of the target organism in an inversely proportional and exponential relationship. The aim of the study was to determine whether Ct values could be used to distinguish between culture-positive and culture-negative samples. Methods This was a retrospective analysis of Ct values from dermatophyte PCR results in cats with suspicious skin lesions or suspected exposure to dermatophytosis. Results One hundred and thirty-two samples were included. Using culture as the gold standard, 28 were true positives, 12 were false positives and 92 were true negatives. The area under the curve for the pretreatment time point was 96.8% (95% confidence interval [CI] 94.2-99.5) compared with 74.3% (95% CI 52.6-96.0) for pooled data during treatment. Before treatment, a Ct cut-off of <35.7 (approximate DNA count 300) provided a sensitivity of 92.3% and specificity of 95.2%. There was no reliable cut-off Ct value between culture-positive and culture-negative samples during treatment. Ct values prior to treatment differed significantly between the true-positive and false-positive groups (P = 0.0056). There was a significant difference between the pretreatment and first and second negative culture time points (P = 0.0002 and P < 0.0001, respectively). However, there was substantial overlap between Ct values for true positives and true negatives, and for pre- and intra-treatment time points. Conclusions and relevance Ct values had limited usefulness for distinguishing between culture-positive and culture-negative cases when field study samples were analyzed. In addition, Ct values were less reliable than fungal culture for determining mycological cure.
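
    A minimal sketch of applying a Ct cut-off against culture as the reference standard, remembering that a lower Ct corresponds to more target DNA; the Ct values below are synthetic, not the shelter data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
# Synthetic pretreatment Ct values: culture-positive cats tend to have lower Ct (more DNA).
ct_pos = rng.normal(30.0, 3.0, size=28)    # culture-positive
ct_neg = rng.normal(38.0, 2.0, size=104)   # culture-negative
ct = np.concatenate([ct_pos, ct_neg])
culture_pos = np.concatenate([np.ones(28, bool), np.zeros(104, bool)])

cutoff = 35.7
pcr_pos = ct < cutoff                       # lower Ct = positive PCR call
sensitivity = np.mean(pcr_pos[culture_pos])
specificity = np.mean(~pcr_pos[~culture_pos])
auc = roc_auc_score(culture_pos, -ct)       # negate Ct so larger score = more likely positive
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}, AUC = {auc:.2f}")
```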

  9. Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.

    PubMed

    Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack

    2017-06-01

    In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to retain the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
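
    The cross-validation baseline that PROMISE is compared against can be illustrated with a short scikit-learn sketch: tune the lasso penalty by CV on simulated data and count how many of the selected markers are true signals. The simulation settings are assumptions, and the sketch does not reproduce PROMISE itself, which additionally layers stability selection on top of CV.

        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        n, p, k = 100, 500, 10                  # samples, markers, true signals
        X = rng.standard_normal((n, p))
        beta = np.zeros(p)
        beta[:k] = 1.0                          # only the first k markers matter
        y = X @ beta + rng.standard_normal(n)

        model = LassoCV(cv=5).fit(X, y)
        selected = np.flatnonzero(model.coef_ != 0)
        true_pos = int(np.sum(selected < k))
        print(f"CV-chosen penalty: {model.alpha_:.4f}")
        print(f"selected: {selected.size}  true positives: {true_pos}  "
              f"false positives: {selected.size - true_pos}")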

  10. Evidence of Non-Coincidence between Radio and Optical Positions of ICRF Sources.

    NASA Astrophysics Data System (ADS)

    Andrei, A. H.; da Silva, D. N.; Assafin, M.; Vieira Martins, R.

    2003-11-01

    Silva Neto et al. (SNAAVM: 2002) showed that when the standard ICRF Ext1 radio positions (Ma et al., 1998) are compared against their optical counterpart positions (ZZHJVW: Zacharias et al., 1999; USNO A2.0: Monet et al., 1998), a systematic pattern appears that depends on the radio structure index (Fey and Charlot, 2000). The optical-to-radio offsets show a distribution suggesting that the coincidence of the optical and radio centroids is worse for the radio-extended than for the radio-compact sources. On average, the offset between the optical and radio centroids is found to be 7.9 +/- 1.1 mas smaller for the compact than for the extended sources. Such an effect is reasonably large, and certainly much too large to be due to errors in the VLBI radio positions. On the other hand, it is too small to be attributed to errors in the optical positions, which moreover should be independent of the radio structure. Thus, apart from a true pattern of centroid non-coincidence, the only remaining explanation is a chance result. This paper summarizes the several statistical tests used to rule out the chance explanation.

  11. Effect of obesity on preterm delivery prediction by transabdominal recording of uterine electromyography.

    PubMed

    Lucovnik, Miha; Chambliss, Linda R; Blumrick, Richard; Balducci, James; Gersak, Ksenija; Garfield, Robert E

    2016-10-01

    It has been shown that noninvasive uterine electromyography (EMG) can identify true preterm labor more accurately than methods available to clinicians today. The objective of this study was to evaluate the effect of body mass index (BMI) on the accuracy of uterine EMG in predicting preterm delivery. Predictive values of uterine EMG for preterm delivery were compared in obese versus overweight/normal-BMI patients. The Hanley-McNeil test was used to compare receiver operating characteristic curves in these groups. Previously reported EMG cutoffs were used to define groups with false positive/false negative and true positive/true negative EMG results. BMI in these groups was compared with Student's t test (p < 0.05 considered significant). A total of 88 patients were included: 20 obese, 64 overweight, and four with normal BMI. EMG predicted preterm delivery within 7 days with an area under the curve of 0.95 in the normal/overweight group and 1.00 in the obese group (p = 0.08). Six patients in true preterm labor (delivering within 7 days of the EMG measurement) had low EMG values (false negative group). There were no false positive results. No significant differences in patients' BMI were noted between the false negative group and either the preterm labor patients with high EMG values (true positive group) or the nonlabor patients with low EMG values (true negative group; p = 0.32). Accuracy of noninvasive uterine EMG monitoring and its predictive value for preterm delivery are not affected by obesity. Copyright © 2016. Published by Elsevier B.V.

  12. A Retrospective Analysis of Urine Drugs of Abuse Immunoassay True Positive Rates at a National Reference Laboratory.

    PubMed

    Johnson-Davis, Kamisha L; Sadler, Aaron J; Genzen, Jonathan R

    2016-03-01

    Urine drug screens are commonly performed to identify drug use or to monitor adherence to drug therapy. The purpose of this retrospective study was to evaluate the true positive and false positive rates of one of our in-house urine drug screen panels. The urine drugs-of-abuse panel studied consists of an immunoassay screen; positive immunoassay results were then confirmed by mass spectrometry. Reagents from Syva and Microgenics were used for the immunoassay screen. The screen was performed on a Beckman AU5810 random access automated clinical analyzer. The percentage of true positives for each immunoassay was determined. Agreement with previously validated GC-MS or LC-MS-MS confirmatory methods was also evaluated. There were 8,825 de-identified screening results for each of the drugs in the panel, except for alcohol (N = 2,296). The percentages of samples that screened positive were: 10.0% for amphetamine/methamphetamine/3,4-methylenedioxy-methamphetamine (MDMA), 12.8% for benzodiazepines, 43.7% for opiates (including oxycodone) and 20.3% for tetrahydrocannabinol (THC). The false positive rate was ∼14% for amphetamine/methamphetamine, ∼34% for opiates (excluding oxycodone), 25% for propoxyphene and 100% for the phencyclidine and MDMA immunoassays. Based on the results of this retrospective study, the true positive rate for THC use among adults was similar to the rate of illicit drug use in young adults reported in the 2013 National Survey; however, our positivity rate for cocaine was higher than in the National Survey. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
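
    The true positive rate reported here for each assay is simply the fraction of immunoassay-positive screens that the confirmatory mass spectrometry method verifies. A small helper makes that explicit; the counts in the example are hypothetical, not the study's data.

        def immunoassay_true_positive_rate(n_screen_positive, n_confirmed):
            """Fraction of positive screens confirmed by GC-MS or LC-MS-MS."""
            if n_screen_positive == 0:
                return float("nan")
            return n_confirmed / n_screen_positive

        tp_rate = immunoassay_true_positive_rate(1130, 970)   # hypothetical counts
        print(f"true positive rate: {tp_rate:.1%}  false positive rate: {1 - tp_rate:.1%}")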

  13. Healthcare-associated pneumonia with positive respiratory methicillin-resistant Staphylococcus aureus culture: Predictors of the true pathogenicity.

    PubMed

    Enomoto, Yasunori; Yokomura, Koshi; Hasegawa, Hirotsugu; Ozawa, Yuichi; Matsui, Takashi; Suda, Takafumi

    2017-03-01

    Although methicillin-resistant Staphylococcus aureus (MRSA) is commonly isolated from respiratory specimens in healthcare-associated pneumonia (HCAP), it is difficult to determine the causative pathogen because of the possibilities of contamination/colonization. The present study aimed to identify clinical predictors of the true pathogenicity of MRSA in HCAP. Patients with HCAP with positive MRSA cultures in the sputum or endotracheal aspirates who were admitted to Seirei Mikatahara General Hospital, Hamamatsu, Japan, from 2009 to 2014 were enrolled. According to the administered drugs and the treatment outcomes, patients with true MRSA pneumonia (MP) and those with contamination/colonization of MRSA (false MP) were identified. Baseline characteristics were compared between groups, and clinical predictors of true MP were evaluated by logistic regression analyses. A total of 93 patients (mean age 78.7 ± 12.6 years) were identified and classified into the true MP (n = 16) or false MP (n = 77) groups. Although baseline characteristics were broadly similar between groups, the true MP group had significantly more patients with PaO2 ≤ 60 Torr/pulse oximetry saturation ≤90% and those with MRSA single cultivation. Both variables were significant predictors of true MP in multivariate analysis (odds ratio of PaO2 ≤ 60 Torr/pulse oximetry saturation ≤90%: 5.64, 95% confidence interval 1.17-27.32; odds ratio of MRSA single cultivation: 4.76, 95% confidence interval 1.22-18.60). Poor oxygenation and MRSA single cultivation imply the true pathogenicity of MRSA in HCAP with positive respiratory MRSA cultures. The present results might be helpful for the proper use of anti-MRSA drugs in this population. Geriatr Gerontol Int 2017; 17: 456-462. © 2016 Japan Geriatrics Society.

  14. Bone mineral density referral for dual-energy X-ray absorptiometry using quantitative ultrasound as a prescreening tool in postmenopausal women from the general population: a cost-effectiveness analysis.

    PubMed

    Marín, F; López-Bastida, J; Díez-Pérez, A; Sacristán, J A

    2004-03-01

    The aim of our study was to assess, from the perspective of the National Health Services in Spain, the cost-effectiveness of quantitative ultrasound (QUS) as a prescreen referral method for bone mineral density (BMD) assessment by dual-energy X-ray absorptiometry (DXA) in postmenopausal women of the general population. Using femoral neck DXA and heel QUS, we evaluated 267 consecutive postmenopausal women aged 65 years and older attending primary care physician offices for any medical reason. Subjects were classified as osteoporotic or nonosteoporotic (normal or osteopenic) using the WHO definition for DXA. Effectiveness was assessed in terms of the sensitivity and specificity of the referral decisions based on the QUS measurement. Local costs were estimated from health services data and actual resources used. Cost-effectiveness was evaluated in terms of the expected cost per true positive osteoporotic case detected. Baseline prevalence of osteoporosis evaluated by DXA was 55.8%. The sensitivity and specificity for the diagnosis of osteoporosis by QUS, using the optimal cutoff thresholds for the estimated heel BMD T-score, were 97% and 94%, respectively. The average cost per osteoporotic case detected based on DXA measurement alone was 23.85 euros. The average cost per osteoporotic case detected using QUS as a prescreen was 22.00 euros. The incremental cost-effectiveness of DXA versus QUS was 114.00 euros per true positive case detected. Our results suggest that screening for osteoporosis with QUS while applying strict cutoff values in postmenopausal women of the general population is not substantially more cost-effective than DXA alone for the diagnosis of osteoporosis. However, the screening strategy with QUS may be an option in circumstances where the diagnosis of osteoporosis is deficient because of difficulty in accessing DXA equipment.
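
    The cost-effectiveness figures above boil down to two simple ratios: total cost divided by true positives for each strategy, and the incremental cost per additional true positive when comparing strategies. The sketch below uses invented totals chosen only to reproduce the 23.85 and 22.00 euro per-case averages; it does not reproduce the study's reported 114 euro increment, whose underlying counts are not given in the abstract.

        def cost_per_true_positive(total_cost, n_true_positive):
            """Average cost per osteoporotic case detected."""
            return total_cost / n_true_positive

        def incremental_cost_effectiveness(cost_a, tp_a, cost_b, tp_b):
            """Extra cost of strategy A per additional true positive versus B."""
            return (cost_a - cost_b) / (tp_a - tp_b)

        # Hypothetical totals (euros, cases) for DXA alone vs QUS prescreening.
        print(cost_per_true_positive(3554.0, 149))                       # ~23.85 per case
        print(cost_per_true_positive(3146.0, 143))                       # ~22.00 per case
        print(incremental_cost_effectiveness(3554.0, 149, 3146.0, 143))  # cost per extra case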

  15. The BMPix and PEAK Tools: New Methods for Automated Laminae Recognition and Counting - Application to Glacial Varves From Antarctic Marine Sediment

    NASA Astrophysics Data System (ADS)

    Weber, M. E.; Reichelt, L.; Kuhn, G.; Thurow, J. W.; Ricken, W.

    2009-12-01

    We present software-based tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray-scale curves from images at ultrahigh (pixel) resolution. The PEAK tool uses the gray-scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum-count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a Gaussian-smoothed gray-scale curve. The zero-crossing algorithm counts every positive and negative halfway passage of the gray-scale curve through a wide moving average. Hence, the record is separated into bright and dark intervals (seasonal resolution). The same is true for the frequency-truncation method, which uses Fourier transformation to decompose the gray-scale curve into its frequency components before the positive and negative passages are counted. We applied the new methods successfully to tree rings and to well-dated, already manually counted marine varves from Saanich Inlet before we adapted the tools to the more complex marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that the laminations from three Weddell Sea sites represent true varves that were deposited on sediment ridges over several millennia during the last glacial maximum (LGM). There are apparently two seasonal layers of terrigenous composition: a coarser-grained bright layer and a finer-grained dark layer. The new tools offer several advantages over previous approaches. The counting procedures are based on a moving average generated from gray-scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Since PEAK associates counts with a specific depth, the thickness of each year or each season is also measured, which is an important prerequisite for later spectral analysis. Since all information required to conduct the analysis is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
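
    The zero-crossing idea is easy to prototype: subtract a wide moving average from the gray-scale curve and count sign changes, each positive- or negative-going passage bounding a bright or dark (seasonal) interval. The sketch below is not the PEAK implementation; the window widths and the synthetic test curve are assumptions, and the input is lightly pre-smoothed to suppress noise-induced spurious crossings.

        import numpy as np

        def zero_crossings(gray, window=101):
            """Count passages of the gray-scale curve through a wide moving average."""
            baseline = np.convolve(gray, np.ones(window) / window, mode="same")
            sign = np.sign(gray - baseline)
            sign = sign[sign != 0]                      # ignore exact ties
            return int(np.sum(sign[1:] != sign[:-1]))

        # Synthetic signal: ~50 annual couplets (bright/dark pairs) plus noise.
        rng = np.random.default_rng(1)
        x = np.linspace(0, 50 * 2 * np.pi, 5000)
        gray = np.sin(x) + 0.1 * rng.standard_normal(x.size)
        gray = np.convolve(gray, np.ones(15) / 15, mode="same")   # light pre-smoothing
        crossings = zero_crossings(gray)
        print(f"{crossings} half-passages ~ {crossings // 2} couplets (years)")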

  16. Estimating rotavirus gastroenteritis hospitalisations by using hospital episode statistics before and after the introduction of rotavirus vaccine in Australia.

    PubMed

    Jayasinghe, Sanjay; Macartney, Kristine

    2013-01-30

    Hospital discharge records and laboratory data have shown a substantial early impact of the rotavirus vaccination program that commenced in 2007 in Australia. However, these assessments are affected by the validity and reliability of hospital discharge coding and stool testing as measures of the true incidence of hospitalised disease. The aim of this study was to assess the validity of these data sources for disease estimation, both before and after vaccine introduction. All hospitalisations at a major paediatric centre in children aged <5 years from 2000 to 2009 containing acute gastroenteritis (AGE) ICD-10-AM diagnosis codes were linked to hospital laboratory stool testing data. The validity of the rotavirus-specific diagnosis code (A08.0) and the incidence of hospitalisations attributable to rotavirus, by both direct estimation and with adjustments for non-testing and miscoding, were calculated for pre- and post-vaccination periods. A laboratory record of stool testing was available for 36% of all AGE hospitalisations (n=4948). The rotavirus code had high specificity (98.4%; 95% CI, 97.5-99.1%) and positive predictive value (96.8%; 94.8-98.3%), and modest sensitivity (61.6%; 58-65.1%). Of all rotavirus test-positive hospitalisations, only a third had a rotavirus code. The estimated annual average number of rotavirus hospitalisations, following adjustment for non-testing and miscoding, was 5- and 6-fold higher than identified from testing and coding alone, respectively. Direct and adjusted estimates yielded similar percentage reductions in annual average rotavirus hospitalisations of over 65%. Owing to the limited use of stool testing and the poor sensitivity of the rotavirus-specific diagnosis code, routine hospital discharge and laboratory data substantially underestimate the true incidence of rotavirus hospitalisations and the absolute vaccine impact. However, these data can still be used to monitor vaccine impact, as the effects of miscoding and under-testing appear to be comparable between pre- and post-vaccination periods. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. The effect of mood on false memory for emotional DRM word lists.

    PubMed

    Zhang, Weiwei; Gross, Julien; Hayne, Harlene

    2017-04-01

    In the present study, we investigated the effect of participants' mood on true and false memories of emotional word lists in the Deese-Roediger-McDermott (DRM) paradigm. In Experiment 1, we constructed DRM word lists in which all the studied words and corresponding critical lures reflected a specified emotional valence. In Experiment 2, we used these lists to assess mood-congruent true and false memory. Participants were randomly assigned to one of three induced-mood conditions (positive, negative, or neutral) and were presented with word lists comprised of positive, negative, or neutral words. For both true and false memory, there was a mood-congruent effect in the negative mood condition; this effect was due to a decrease in true and false recognition of the positive and neutral words. These findings are consistent with both spreading-activation and fuzzy-trace theories of DRM performance and have practical implications for our understanding of the effect of mood on memory.

  18. Long-term benefit of liposuction in patients with lipoedema: a follow-up study after an average of 4 and 8 years.

    PubMed

    Baumgartner, A; Hueppe, M; Schmeller, W

    2016-05-01

    Long-term results following liposuction in patients with lipoedema are available only for an average period of 4 years. To find out whether the improvement of complaints persists for a further 4 years. In a single-centre study, 85 patients with lipoedema had already been examined after 4 years. A mail questionnaire - often in combination with clinical controls - was repeated after another 4 years (8 years after liposuction). Compared with the results after 4 years, the improvement in spontaneous pain, sensitivity to pressure, oedema, bruising and restriction of movement persisted. The same held true for patient self-assessment of cosmetic appearance, quality of life and overall impairment. Eight years after surgery, the reduction in the amount of conservative treatment (combined decongestive therapy, compression garments) was similar to that observed 4 years earlier. These results demonstrate for the first time the long-lasting positive effects of liposuction in patients with lipoedema. © 2015 British Association of Dermatologists.

  19. BlackOPs: increasing confidence in variant detection through mappability filtering.

    PubMed

    Cabanski, Christopher R; Wilkerson, Matthew D; Soloway, Matthew; Parker, Joel S; Liu, Jinze; Prins, Jan F; Marron, J S; Perou, Charles M; Hayes, D Neil

    2013-10-01

    Identifying variants using high-throughput sequencing data is currently a challenge because true biological variants can be indistinguishable from technical artifacts. One source of technical artifact results from incorrectly aligning experimentally observed sequences to their true genomic origin ('mismapping') and inferring differences in mismapped sequences to be true variants. We developed BlackOPs, an open-source tool that simulates experimental RNA-seq and DNA whole exome sequences derived from the reference genome, aligns these sequences by custom parameters, detects variants and outputs a blacklist of positions and alleles caused by mismapping. Blacklists contain thousands of artifact variants that are indistinguishable from true variants and, for a given sample, are expected to be almost completely false positives. We show that these blacklist positions are specific to the alignment algorithm and read length used, and BlackOPs allows users to generate a blacklist specific to their experimental setup. We queried the dbSNP and COSMIC variant databases and found numerous variants indistinguishable from mapping errors. We demonstrate how filtering against blacklist positions reduces the number of potential false variants using an RNA-seq glioblastoma cell line data set. In summary, accounting for mapping-caused variants tuned to experimental setups reduces false positives and, therefore, improves genome characterization by high-throughput sequencing.
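
    Applying a BlackOPs-style blacklist reduces to a set lookup keyed on chromosome, position and alternate allele. The sketch below assumes a simplified tab-separated blacklist and an in-memory list of calls; the actual tool's file formats and columns may differ.

        import csv

        def load_blacklist(path):
            """Read a tab-separated file of chrom, pos, alt into a set of keys."""
            with open(path, newline="") as fh:
                return {(row[0], int(row[1]), row[2])
                        for row in csv.reader(fh, delimiter="\t")}

        def filter_calls(calls, blacklist):
            """Drop calls whose (chrom, pos, alt) matches a blacklisted artifact."""
            return [c for c in calls
                    if (c["chrom"], c["pos"], c["alt"]) not in blacklist]

        calls = [{"chrom": "chr1", "pos": 12345, "ref": "A", "alt": "G"},
                 {"chrom": "chr2", "pos": 67890, "ref": "C", "alt": "T"}]
        blacklist = {("chr2", 67890, "T")}        # e.g. built by load_blacklist(...)
        print(filter_calls(calls, blacklist))     # only the chr1 call survives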

  20. Use of data mining techniques to classify soil CO2 emission induced by crop management in sugarcane field.

    PubMed

    Farhate, Camila Viana Vieira; Souza, Zigomar Menezes de; Oliveira, Stanley Robson de Medeiros; Tavares, Rose Luiza Moraes; Carvalho, João Luís Nunes

    2018-01-01

    Soil CO2 emissions are regarded as one of the largest flows of the global carbon cycle, and small changes in their magnitude can have a large effect on the CO2 concentration in the atmosphere. Thus, a better understanding of this attribute would enable the identification of promoters and the development of strategies to mitigate the risks of climate change. Therefore, our study aimed at using data mining techniques to predict the soil CO2 emission induced by crop management in sugarcane areas in Brazil. To do so, we used different variable selection methods (correlation, chi-square, wrapper) and classification methods (decision tree, Bayesian models, neural networks, support vector machine, bagging with logistic regression), and finally we tested the efficiency of the different approaches through the Receiver Operating Characteristic (ROC) curve. The original dataset consisted of 19 variables (18 independent variables and one dependent (or response) variable). The combination of cover crops and minimum tillage is an effective strategy to promote the mitigation of soil CO2 emissions, with average CO2 emissions of 63 kg ha-1 day-1. The variables soil moisture, soil temperature (Ts), rainfall, pH, and organic carbon were most frequently selected for soil CO2 emission classification using the different methods for attribute selection. According to the results of the ROC curve, the best approaches for soil CO2 emission classification were the following: (I) the Multilayer Perceptron classifier with attribute selection through the wrapper method, which presented a false positive rate of 13.5%, a true positive rate of 94.2%, and an area under the curve (AUC) of 89.9%; (II) the Bagging classifier with logistic regression with attribute selection through the chi-square method, which presented a false positive rate of 13.5%, a true positive rate of 94.2%, and an AUC of 89.9%. However, approach (I) stands out in relation to (II) for its higher positive-class accuracy (high CO2 emission) and lower computational cost.
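
    The reported false positive rate, true positive rate and AUC can be computed for any of the listed classifiers with a few lines of scikit-learn; the sketch below does so for a multilayer perceptron on simulated data with 18 predictors. The data, network size and train/test split are assumptions, not the study's setup.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.metrics import confusion_matrix, roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        X, y = make_classification(n_samples=600, n_features=18, n_informative=6,
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                            random_state=0).fit(X_tr, y_tr)
        scores = clf.predict_proba(X_te)[:, 1]
        tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
        print(f"FP rate {fp / (fp + tn):.1%}  TP rate {tp / (tp + fn):.1%}  "
              f"AUC {roc_auc_score(y_te, scores):.1%}")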

  1. Use of data mining techniques to classify soil CO2 emission induced by crop management in sugarcane field

    PubMed Central

    de Souza, Zigomar Menezes; Oliveira, Stanley Robson de Medeiros; Tavares, Rose Luiza Moraes; Carvalho, João Luís Nunes

    2018-01-01

    Soil CO2 emissions are regarded as one of the largest flows of the global carbon cycle, and small changes in their magnitude can have a large effect on the CO2 concentration in the atmosphere. Thus, a better understanding of this attribute would enable the identification of promoters and the development of strategies to mitigate the risks of climate change. Therefore, our study aimed at using data mining techniques to predict the soil CO2 emission induced by crop management in sugarcane areas in Brazil. To do so, we used different variable selection methods (correlation, chi-square, wrapper) and classification methods (decision tree, Bayesian models, neural networks, support vector machine, bagging with logistic regression), and finally we tested the efficiency of the different approaches through the Receiver Operating Characteristic (ROC) curve. The original dataset consisted of 19 variables (18 independent variables and one dependent (or response) variable). The combination of cover crops and minimum tillage is an effective strategy to promote the mitigation of soil CO2 emissions, with average CO2 emissions of 63 kg ha-1 day-1. The variables soil moisture, soil temperature (Ts), rainfall, pH, and organic carbon were most frequently selected for soil CO2 emission classification using the different methods for attribute selection. According to the results of the ROC curve, the best approaches for soil CO2 emission classification were the following: (I) the Multilayer Perceptron classifier with attribute selection through the wrapper method, which presented a false positive rate of 13.5%, a true positive rate of 94.2%, and an area under the curve (AUC) of 89.9%; (II) the Bagging classifier with logistic regression with attribute selection through the chi-square method, which presented a false positive rate of 13.5%, a true positive rate of 94.2%, and an AUC of 89.9%. However, approach (I) stands out in relation to (II) for its higher positive-class accuracy (high CO2 emission) and lower computational cost. PMID:29513765

  2. True left-sided gallbladder: A case report and comparison with the literature for the different techniques of laparoscopic cholecystectomy for such anomalies.

    PubMed

    Saafan, Tamer; Hu, James Yi; Mahfouz, Ahmed-Emad; Abdelaal, Abdelrahman

    2018-01-01

    True left-sided gallbladder (LSG) is a rare finding that may present with symptoms similar to those of a normally positioned gallbladder. Moreover, it may be missed by preoperative imaging studies such as ultrasound, computed tomography (CT), magnetic resonance imaging (MRI), or endoscopic ultrasound. True left-sided gallbladder is a surgical challenge, and the surgical technique may need to be modified for the completion of laparoscopic cholecystectomy. In this case report, we present a case of true left-sided gallbladder that produced right-sided abdominal symptoms. Ultrasound of the abdomen failed to show the left-sided position of the gallbladder. MRI showed the gallbladder located to the left of the ligamentum teres underneath segment III of the liver. Intraoperatively, the gallbladder was grasped and retracted to the right under the falciform ligament, and it was removed using classical right-sided ports with no modification to the technique. No complications were encountered intraoperatively or postoperatively. True LSG is a rare anomaly that may present with right-sided symptoms like a normally positioned gallbladder. It may be missed in preoperative imaging studies and discovered only intraoperatively. Modification of the laparoscopic ports, a change in the patient's position and/or the surgeon's position, or conversion to open cholecystectomy may be needed for safe removal of the gallbladder. The classical technique of laparoscopic cholecystectomy is feasible for a left-sided gallbladder. However, if the anatomy is not clear, modifications of the surgical technique may be necessary for the safe dissection of the gallbladder. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Estimating past and current attendance at winter sports areas...a pilot study

    Treesearch

    Richard L. Bury; James W. Hall

    1963-01-01

    Routine business records of towlift tickets or restaurant receipts provided estimates of total attendance over a 2-month period within 8 percent of true attendance, and attendance on an average day within 18 to 24 percent of true attendance. The chances were that estimates would fall within these limits 2 out of 3 times. Guides for field use can be worked out after...

  4. Vision Screening of School Children by Teachers as a Community Based Strategy to Address the Challenges of Childhood Blindness.

    PubMed

    Kaur, Gurvinder; Koshy, Jacob; Thomas, Satish; Kapoor, Harpreet; Zachariah, Jiju George; Bedi, Sahiba

    2016-04-01

    Early detection and treatment of vision problems in children is imperative to meet the challenges of childhood blindness. Considering the problems of inequitable distribution of trained manpower and limited access to quality eye care services for the majority of the population, innovative community-based strategies such as teacher training in vision screening need to be developed for effective utilization of the available human resources. The aim was to evaluate the effectiveness of introducing teachers as first-level vision screeners. Teacher training programs were conducted for school teachers to educate them about childhood ocular disorders and the importance of their early detection. Teachers from government and semi-government schools located in Ludhiana were given training in vision screening. These teachers then conducted vision screening of children in their schools. Subsequently, an ophthalmology team visited these schools for re-evaluation of children identified with low vision. Refraction was performed for all children identified with refractive errors and spectacles were prescribed. Children requiring further evaluation were referred to the base hospital. The project was done in two phases. True positives, false positives, true negatives and false negatives were calculated for evaluation. In phase 1, teachers from 166 schools underwent training in vision screening. The teachers screened 30,205 children and reported eye problems in 4,523 (14.97%) children. Subsequently, the ophthalmology team examined 4,150 children and confirmed eye problems in 2,137 children. Thus, the teachers correctly identified eye problems (true positives) in 47.25% of children, and only 13.69% of children had to be examined by the ophthalmology team, reducing its workload. Similarly, in phase 2, 46.22% of children were correctly identified as having eye problems (true positives) by the teachers. By random sampling, 95.65% of children were correctly identified as normal (true negatives) by the teachers. Considering the high true negative rates, the reasonably good true positive rates and the wider coverage provided by the program, vision screening in schools by teachers is an effective method of identifying children with low vision. This strategy is also valuable in reducing the workload of the eye care staff.

  5. 40 CFR 91.206 - Trading.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Trading. 91.206 Section 91.206... EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Averaging, Banking, and Trading Provisions § 91.206 Trading. (a... manufacturers in trading. These credits must be used in the same averaging set as generated. (b) Credits for...

  6. Mixed group validation: a method to address the limitations of criterion group validation in research on malingering detection.

    PubMed

    Frederick, R I

    2000-01-01

    Mixed group validation (MGV) is offered as an alternative to criterion group validation (CGV) to estimate the true positive and false positive rates of tests and other diagnostic signs. CGV requires perfect confidence about each research participant's status with respect to the presence or absence of pathology. MGV determines diagnostic efficiencies based on group data; knowing an individual's status with respect to pathology is not required. MGV can use relatively weak indicators to validate better diagnostic signs, whereas CGV requires perfect diagnostic signs to avoid error in computing true positive and false positive rates. The process of MGV is explained, and a computer simulation demonstrates the soundness of the procedure. MGV of the Rey 15-Item Memory Test (Rey, 1958) for 723 pre-trial criminal defendants resulted in higher estimates of true positive rates and lower estimates of false positive rates as compared with prior research conducted with CGV. The author demonstrates how MGV addresses all the criticisms Rogers (1997b) outlined for differential prevalence designs in malingering detection research. Copyright 2000 John Wiley & Sons, Ltd.
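
    One common way to formalise MGV is as a small linear system: in each group, the observed rate of positive signs equals the base rate times the true positive rate plus (1 minus the base rate) times the false positive rate, so two (or more) groups with different, approximately known base rates pin down both rates without classifying any individual. The sketch below assumes that formulation; the base rates and observed rates are invented for illustration.

        import numpy as np

        def mgv_rates(base_rates, positive_rates):
            """Solve p_g = pi_g * TPR + (1 - pi_g) * FPR for (TPR, FPR)."""
            pi = np.asarray(base_rates, dtype=float)
            p = np.asarray(positive_rates, dtype=float)
            A = np.column_stack([pi, 1.0 - pi])
            tpr, fpr = np.linalg.lstsq(A, p, rcond=None)[0]
            return tpr, fpr

        # Group 1: 60% base rate of malingering, 55% flagged by the sign.
        # Group 2: 10% base rate, 20% flagged.
        tpr, fpr = mgv_rates([0.60, 0.10], [0.55, 0.20])
        print(f"estimated true positive rate {tpr:.2f}, false positive rate {fpr:.2f}")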

  7. An effective rate equation approach to reaction kinetics in small volumes: theory and application to biochemical reactions in nonequilibrium steady-state conditions.

    PubMed

    Grima, R

    2010-07-21

    Chemical master equations provide a mathematical description of stochastic reaction kinetics in well-mixed conditions. They are a valid description over length scales that are larger than the reactive mean free path and thus describe kinetics in compartments of mesoscopic and macroscopic dimensions. The trajectories of the stochastic chemical processes described by the master equation can be ensemble-averaged to obtain the average number density of chemical species, i.e., the true concentration, at any spatial scale of interest. For macroscopic volumes, the true concentration is very well approximated by the solution of the corresponding deterministic and macroscopic rate equations, i.e., the macroscopic concentration. However, this equivalence breaks down for mesoscopic volumes. These deviations are particularly significant for open systems and cannot be calculated via the Fokker-Planck or linear-noise approximations of the master equation. We utilize the system-size expansion including terms of order Ω^(-1/2) to derive a set of differential equations whose solution approximates the true concentration as given by the master equation. These equations are valid in any open or closed chemical reaction network and at both the mesoscopic and macroscopic scales. In the limit of large volumes, the effective mesoscopic rate equations become precisely equal to the conventional macroscopic rate equations. We compare the three formalisms of effective mesoscopic rate equations, conventional rate equations, and chemical master equations by applying them to several biochemical reaction systems (homodimeric and heterodimeric protein-protein interactions, series of sequential enzyme reactions, and positive feedback loops) in nonequilibrium steady-state conditions. In all cases, we find that the effective mesoscopic rate equations can predict very well the true concentration of a chemical species. This provides a useful method by which one can quickly determine the regions of parameter space in which there are maximum differences between the solutions of the master equation and the corresponding rate equations. We show that these differences depend sensitively on the Fano factors and on the inherent structure and topology of the chemical network. The theory of effective mesoscopic rate equations generalizes the conventional rate equations of physical chemistry to describe kinetics in systems of mesoscopic size such as biological cells.

  8. Clinical evaluation of a medical high dynamic range display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchessoux, Cedric, E-mail: cedric.marchessoux@ba

    Purpose: Recent new medical displays have higher contrast and higher luminance but do not have a High Dynamic Range (HDR). HDR implies a minimum luminance value close to zero. A medical HDR display prototype based on two liquid crystal layers has been developed. The goal of this study is to evaluate the potential clinical benefit of such a display in comparison with a low dynamic range (LDR) display. Methods: The study evaluated the clinical performance of the displays in a search and detection task. Eight radiologists read chest x-ray images, some of which contained simulated lung nodules. The study used a JAFROC (Jackknife Free Receiver Operating Characteristic) approach for analyzing FROC data. The calculated figure of merit (FoM) is the probability that a lesion is rated higher than all rated nonlesions on all images. Time per case and accuracy for locating the center of the nodules were also compared. The nodules were simulated using Samei's model. 214 CR and DR images [half were "healthy" images (chest nodule-free) and half "diseased" images] were used, resulting in a total of 199 nodules: 25 images with 1 nodule, 51 images with 2 nodules, and 24 images with 3 nodules. A dedicated software interface was designed for visualizing the images in each session. For the JAFROC1 statistical analysis, the study was done per nodule category: all nodules, difficult nodules, and very difficult nodules. Results: For all nodules, the averaged FoM_HDR is slightly higher than FoM_LDR, with a difference of 0.09%. For the difficult nodules, the averaged FoM_HDR is slightly higher than FoM_LDR, with a difference of 1.38%. For the very difficult nodules, the averaged FoM_HDR is slightly higher than FoM_LDR, with a difference of 0.71%. For the true positive fraction (TPF), both displays (the HDR and the LDR one) have similar TPF for all nodules, but looking at difficult and very difficult nodules, there are more true positives for the HDR display. The true positive fraction was also computed as a function of the local average luminance around the nodules. For the lowest luminance range, the difference is more than 30% in favor of the HDR display. For the highest luminance range, the difference is less than 6% in favor of the LDR display. Conclusions: This study shows the potential benefit of using an HDR display in radiology.

  9. SU-F-E-19: A Novel Method for TrueBeam Jaw Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corns, R; Zhao, Y; Huang, V

    2016-06-15

    Purpose: A simple jaw calibration method is proposed for the Varian TrueBeam using an EPID-encoder combination that gives accurate field sizes and a homogeneous junction dose. This benefits clinical applications such as mono-isocentric half-beam block breast cancer or head and neck cancer treatment with junction/field matching. Methods: We use an EPID imager with pixel size 0.392 mm × 0.392 mm to determine the radiation jaw position as measured from radio-opaque markers aligned with the crosshair. We acquire two images with different symmetric field sizes and record each individual jaw's encoder values. A linear relationship between each jaw's position and its encoder value is established, from which we predict the encoder values that produce the jaw positions required by TrueBeam's calibration procedure. During TrueBeam's jaw calibration procedure, we move the jaw with the pendant to set the jaw into position using the predicted encoder value. The overall accuracy is under 0.1 mm. Results: Our in-house software analyses the images and provides sub-pixel accuracy in determining the field centre and radiation edges (50% dose of the profile). We verified that the TrueBeam encoder provides a reliable linear relationship for each individual jaw position (R^2 > 0.9999), from which the encoder values necessary to set the jaw calibration points (1 cm and 19 cm) are predicted. Junction-matching dose inhomogeneities were improved from >±20% to <±6% using this new calibration protocol. However, one technical challenge exists for junction matching if the collimator walkout is large. Conclusion: Our new TrueBeam jaw calibration method can systematically calibrate the jaws to the crosshair within sub-pixel accuracy and provides both good junction doses and field sizes. This method does not compensate for a larger collimator walkout, but can be used as the underlying foundation for addressing the walkout issue.

  10. Accuracy for detection of simulated lesions: comparison of fluid-attenuated inversion-recovery, proton density--weighted, and T2-weighted synthetic brain MR imaging

    NASA Technical Reports Server (NTRS)

    Herskovits, E. H.; Itoh, R.; Melhem, E. R.

    2001-01-01

    OBJECTIVE: The objective of our study was to determine the effects of MR sequence (fluid-attenuated inversion-recovery [FLAIR], proton density-weighted, and T2-weighted) and of lesion location on sensitivity and specificity of lesion detection. MATERIALS AND METHODS: We generated FLAIR, proton density-weighted, and T2-weighted brain images with 3-mm lesions using published parameters for acute multiple sclerosis plaques. Each image contained from zero to five lesions that were distributed among cortical-subcortical, periventricular, and deep white matter regions; on either side; and anterior or posterior in position. We presented images of 540 lesions, distributed among 2592 image regions, to six neuroradiologists. We constructed a contingency table for image regions with lesions and another for image regions without lesions (normal). Each table included the following: the reviewer's number (1-6); the MR sequence; the side, position, and region of the lesion; and the reviewer's response (lesion present or absent [normal]). We performed chi-square and log-linear analyses. RESULTS: The FLAIR sequence yielded the highest true-positive rates (p < 0.001) and the highest true-negative rates (p < 0.001). Regions also differed in reviewers' true-positive rates (p < 0.001) and true-negative rates (p = 0.002). The true-positive rate model generated by log-linear analysis contained an additional sequence-location interaction. The true-negative rate model generated by log-linear analysis confirmed these associations, but no higher order interactions were added. CONCLUSION: We developed software with which we can generate brain images of a wide range of pulse sequences and that allows us to specify the location, size, shape, and intrinsic characteristics of simulated lesions. We found that the use of FLAIR sequences increases detection accuracy for cortical-subcortical and periventricular lesions over that associated with proton density- and T2-weighted sequences.

  11. Feedback, the various tasks of the doctor, and the feedforward alternative.

    PubMed

    Kluger, Avraham N; Van Dijk, Dina

    2010-12-01

    This study aims to alert users of feedback to its dangers, explain some of its complexities and offer the feedforward alternative. We review the damage that feedback may cause to both motivation and performance. We provide an initial solution to the puzzle of the feedback sign (positive versus negative) using the concepts of promotion focus and prevention focus. We discuss additional open questions pertaining to feedback sign and consider implications for health care systems. Feedback that threatens the self is likely to debilitate recipients and, on average, positive and negative feedback are similar in their effects on performance. Positive feedback contributes to motivation and performance under promotion focus, but the same is true for negative feedback under prevention focus. We offer an alternative to feedback, the feedforward interview, and describe a brief protocol and suggestions on how it might be used in medical education. Feedback is a double-edged sword; its effective application includes careful consideration of regulatory focus and of threats to the self. Feedforward may be a good substitute for feedback in many settings. © Blackwell Publishing Ltd 2010.

  12. The Effect of Surface Electrical Stimulation on Vocal Fold Position

    PubMed Central

    Humbert, Ianessa A.; Poletto, Christopher J.; Saxon, Keith G.; Kearney, Pamela R.; Ludlow, Christy L.

    2008-01-01

    Objectives/Hypothesis: Closure of the true and false vocal folds is a normal part of airway protection during swallowing. Individuals with reduced or delayed true vocal fold closure can be at risk for aspiration and benefit from intervention to ameliorate the problem. Surface electrical stimulation is currently used during therapy for dysphagia, despite limited knowledge of its physiological effects. Design: Prospective single effects study. Methods: The immediate physiological effect of surface stimulation on true vocal fold angle was examined at rest in 27 healthy adults using ten different electrode placements on the submental and neck regions. Fiberoptic nasolaryngoscopic recordings during passive inspiration were used to measure change in true vocal fold angle with stimulation. Results: Vocal fold angles changed only to a small extent during two electrode placements (p ≤ 0.05). When two sets of electrodes were placed vertically on the neck, the mean true vocal fold abduction was 2.4 degrees, while horizontal placements of electrodes in the submental region produced a mean adduction of 2.8 degrees (p = 0.03). Conclusions: Surface electrical stimulation to the submental and neck regions does not produce immediate true vocal fold adduction adequate for airway protection during swallowing, and one position may produce a slight increase in true vocal fold opening. PMID:18043496

  13. True navigation and magnetic maps in spiny lobsters.

    PubMed

    Boles, Larry C; Lohmann, Kenneth J

    2003-01-02

    Animals are capable of true navigation if, after displacement to a location where they have never been, they can determine their position relative to a goal without relying on familiar surroundings, cues that emanate from the destination, or information collected during the outward journey. So far, only a few animals, all vertebrates, have been shown to possess true navigation. Those few invertebrates that have been carefully studied return to target areas using path integration, landmark recognition, compass orientation and other mechanisms that cannot compensate for displacements into unfamiliar territory. Here we report, however, that the spiny lobster Panulirus argus oriented reliably towards a capture site when displaced 12-37 km to unfamiliar locations, even when deprived of all known orientation cues en route. Little is known about how lobsters and other animals determine position during true navigation. To test the hypothesis that lobsters derive positional information from the Earth's magnetic field, lobsters were exposed to fields replicating those that exist at specific locations in their environment. Lobsters tested in a field north of the capture site oriented themselves southwards, whereas those tested in a field south of the capture site oriented themselves northwards. These results imply that true navigation in spiny lobsters, and perhaps in other animals, is based on a magnetic map sense.

  14. First-line sonographic diagnosis of pneumothorax in major trauma: accuracy of e-FAST and comparison with multidetector computed tomography.

    PubMed

    Ianniello, Stefania; Di Giacomo, Vincenza; Sessa, Barbara; Miele, Vittorio

    2014-09-01

    Combined clinical examination and supine chest radiography have shown low accuracy in the assessment of pneumothorax in unstable patients with major chest trauma during the primary survey in the emergency room. The aim of our study was to evaluate the diagnostic accuracy of extended-focused assessment with sonography in trauma (e-FAST), in the diagnosis of pneumothorax, compared with the results of multidetector computed tomography (MDCT) and of invasive interventions (thoracostomy tube placement). This was a retrospective case series involving 368 consecutive unstable adult patients (273 men and 95 women; average age, 25 years; range, 16-68 years) admitted to our hospital's emergency department between January 2011 and December 2012 for major trauma (Injury Severity Score ≥ 15). We evaluated the accuracy of thoracic ultrasound in the detection of pneumothorax compared with the results of MDCT and invasive interventions (thoracostomy tube placement). Institutional review board approval was obtained prior to commencement of this study. Among the 736 lung fields included in the study, 87 pneumothoraces were detected with thoracic CT scans (23.6%). e-FAST detected 67/87 and missed 20 pneumothoraces (17 mild, 3 moderate). The diagnostic performance of ultrasound was: sensitivity 77% (74% in 2011 and 80% in 2012), specificity 99.8%, positive predictive value 98.5%, negative predictive value 97%, accuracy 97.2% (67 true positive; 668 true negative; 1 false positive; 20 false negative); 17 missed mild pneumothoraces were not immediately life-threatening (thickness less than 5 mm). Thoracic ultrasound (e-FAST) is a rapid and accurate first-line, bedside diagnostic modality for the diagnosis of pneumothorax in unstable patients with major chest trauma during the primary survey in the emergency room.
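
    The diagnostic performance figures quoted above follow, up to rounding, from the four counts reported (67 TP, 668 TN, 1 FP, 20 FN). A small helper makes the arithmetic explicit.

        def diagnostic_metrics(tp, tn, fp, fn):
            """Standard screening metrics from a 2x2 confusion matrix."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "positive predictive value": tp / (tp + fp),
                "negative predictive value": tn / (tn + fn),
                "accuracy": (tp + tn) / (tp + tn + fp + fn),
            }

        for name, value in diagnostic_metrics(tp=67, tn=668, fp=1, fn=20).items():
            print(f"{name}: {value:.1%}")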

  15. 99mTc-EDDA/HYNIC-Tyr(3)-octreotide for staging and follow-up of patients with neuroendocrine gastro-entero-pancreatic tumors.

    PubMed

    Gabriel, M; Muehllechner, P; Decristoforo, C; von Guggenberg, E; Kendler, D; Prommegger, R; Profanter, C; Moncayo, R; Virgolini, I

    2005-09-01

    To evaluate the use of 99mTc-EDDA-hydrazinonicotinyl-Tyr3-octreotide (Tc-TOC) for staging and follow-up of neuroendocrine gastro-entero-pancreatic (GEP) tumors with special focus on the acquisition protocol including single photon emission computed tomography (SPECT). Eighty-eight patients (37 female, 51 male; age range: 16 to 81 years; mean age: 56.3 years) were studied: 42 patients for staging after initial histological confirmation and 46 patients during post-therapy follow-up. An average activity of 400 MBq of the radiopharmaceutical was injected. All tumors originated from neuroendocrine tissue of the gastroenteropancreatic tract. Whole body scintigrams at 4 h postinjection and SPECT of the abdomen were obtained in all patients. Additional planar images of the abdomen were acquired at 2 h after injection in 68 patients. The Tc-TOC scan result was true-positive in 56 patients, true-negative in 17, false-negative in 14, and false-positive in 1 patient. The false-positive finding was caused by a colonic adenoma. Overall, a scan sensitivity of 80% (56/70 patients), specificity of 94.4% (17/18 patients) and accuracy of 82.9% (73/88 patients) were calculated on patient basis. In total, Tc-TOC detected 357 foci in 69 patients. In 7 patients equivocal findings were observed in the bowel at 4 h postinjection without corresponding tracer uptake in the scan 2 h earlier, meaning that these abnormal findings were correctly classified as non-malignant. In addition to planar views, SPECT revealed further 62 lesions. Tc-TOC with one-day, dual-time acquisition protocol is an accurate staging procedure in patients with neuroendocrine GEP tumors. SPECT shows high sensitivity for detection of abdominal lesions, while earlier images improve the reliability of abnormal abdominal findings.

  16. Bone marrow cells stained by azide-conjugated Alexa fluors in the absence of an alkyne label.

    PubMed

    Lin, Guiting; Ning, Hongxiu; Banie, Lia; Qiu, Xuefeng; Zhang, Haiyang; Lue, Tom F; Lin, Ching-Shwun

    2012-09-01

    The thymidine analog 5-ethynyl-2'-deoxyuridine (EdU) has recently been introduced as an alternative to 5-bromo-2'-deoxyuridine (BrdU) for cell labeling and tracking. Incorporation of EdU into replicating DNA can be detected by azide-conjugated fluors (e.g., Alexa-azide) through a Cu(I)-catalyzed click reaction between EdU's alkyne moiety and the azide. While this cell labeling method has proven to be valuable for tracking transplanted stem cells in various tissues, we have found that some bone marrow cells could be stained by Alexa-azide in the absence of the EdU label. In intact rat femoral bone marrow, ~3% of nucleated cells were false-positively stained, and in isolated bone marrow cells, ~13%. In contrast to true-positive stains, which localize in the nucleus, the false-positive stains were cytoplasmic. Furthermore, while true-positive staining requires Cu(I), false-positive staining does not. Reducing the click reaction time or reducing the Alexa-azide concentration failed to improve the distinction between true- and false-positive staining. Hematopoietic and mesenchymal stem cell markers CD34 and Stro-1 did not co-localize with the false-positively stained cells, and these cells' identity remains unknown.

  17. Recursive regularization for inferring gene networks from time-course gene expression profiles

    PubMed Central

    Shimamura, Teppei; Imoto, Seiya; Yamaguchi, Rui; Fujita, André; Nagasaki, Masao; Miyano, Satoru

    2009-01-01

    Background: Inferring gene networks from time-course microarray experiments with a vector autoregressive (VAR) model is the process of identifying functional associations between genes through multivariate time series. This problem can be cast as a variable selection problem in statistics. One of the promising methods for variable selection is the elastic net proposed by Zou and Hastie (2005). However, VAR modeling with the elastic net increases the number of true positives but also increases the number of false positives. Results: By incorporating the relative importance of the VAR coefficients into the elastic net, we propose a new class of regularization, called the recursive elastic net, to increase the capability of the elastic net and estimate gene networks based on the VAR model. The recursive elastic net can reduce the number of false positives gradually by updating the importance. Numerical simulations and comparisons demonstrate that the proposed method succeeds in reducing the number of false positives drastically while keeping a high number of true positives in the network inference, and achieves a true discovery rate (the proportion of true positives among the selected edges) two or more times higher than the competing methods even when the number of time points is small. We also compared our method with various reverse-engineering algorithms on experimental data of MCF-7 breast cancer cells stimulated with two ErbB ligands, EGF and HRG. Conclusion: The recursive elastic net is a powerful tool for inferring gene networks from time-course gene expression profiles. PMID:19386091
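
    The importance-updating idea can be sketched generically: fit an elastic net, derive per-predictor weights from the absolute coefficients, rescale the predictors so unimportant ones are penalised more heavily, and repeat. This is only a rough analogue in a plain regression setting, not the authors' recursive elastic net for VAR models; the weighting rule, penalty settings and simulated data are all assumptions.

        import numpy as np
        from sklearn.linear_model import ElasticNet

        rng = np.random.default_rng(0)
        n, p = 200, 50
        X = rng.standard_normal((n, p))
        coef = np.zeros(p)
        coef[:5] = 2.0                                   # five true predictors
        y = X @ coef + rng.standard_normal(n)

        weights = np.ones(p)
        for _ in range(3):                               # a few reweighting passes
            model = ElasticNet(alpha=0.1, l1_ratio=0.7, max_iter=5000).fit(X * weights, y)
            importance = np.abs(model.coef_ * weights)   # back on the original scale
            weights = importance / (importance.max() + 1e-12)

        print("selected predictors:", np.flatnonzero(importance > 1e-8))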

  18. Environmental Assessment: Convert Slow Routes 300 and 301 to Instrument Routes

    DTIC Science & Technology

    2007-07-01

    ...in the number of eggs, nestlings, or successful fledglings per nest. Table 4-3 summarizes the success and productivity results from the study: average eggs per nest, 3.47 and 3.56; average nestlings per nest, 2.27 and 2.28; average young per occupied nest, 1.84 and 1.80; average young per successful nest, ...

  19. Experimental Analysis of Heat Transfer Characteristics and Pressure Drop through Screen Regenerative Heat Exchangers

    DTIC Science & Technology

    1993-12-01

    ...of fluid; T1, initial temperature of matrix and fluid; Tf1, average inlet temperature after the step change; Tii, average inlet temperature before the step... respectively, of the regenerator. The horizontal distances shown with Tf1, Tj, and T,2 illustrate the time interval for which the average values were... temperature was not a true step function, the investigator made an approximation. The approximation was based on an average temperature. Tf1 was the

  20. Nearby kinematic wiggles from LEGUE

    NASA Astrophysics Data System (ADS)

    Carlin, J. L.; Newberg, H. J.; Deng, L.; Delaunay, J.; Gole, D.; Grabowski, K.; Liu, C.; Xu, Y.; Yang, F.; Zhang, H.

    2014-01-01

    In its first two observing seasons, the LEGUE (LAMOST Experiment for Galactic Understanding and Exploration; Deng et al., Zhao et al. 2012) survey has obtained ~1.7 million science-quality spectra. We apply corrections to the PPMXL proper motions (PMs; Roeser et al. 2010) as a function of position, as determined from the measured PMs of extragalactic objects discovered in LAMOST spectra (see Fig. 1, left and center panels). LAMOST radial velocities and corrected PMs are used to derive 3D space velocities for ~480,000 F-stars (assuming M_V = 4 to derive distances). The right panel of Fig. 1 shows the radial component of the Galactic cylindrical velocities (V_R) for stars between 7.80° (3rd Galactic quadrant), but not for θ < 0°. The velocities are also asymmetric across the Galactic plane for θ < 0° (2nd quadrant), with mean V_R > 0 (radially outward) at most positions above the disk and mean V_R < 0 below the disk. Similar structure to this apparent "shearing" motion has been seen in RAVE (e.g., Williams et al. 2013; Siebert et al. 2012) and in SDSS (Widrow et al. 2012).

  1. Groundspeed filtering for CTAS

    NASA Technical Reports Server (NTRS)

    Slater, Gary L.

    1994-01-01

    Ground speed is one of the radar observables which is obtained along with position and heading from NASA Ames Center radar. Within the Center TRACON Automation System (CTAS), groundspeed is converted into airspeed using the wind speeds which CTAS obtains from the NOAA weather grid. This airspeed is then used in the trajectory synthesis logic which computes the trajectory for each individual aircraft. The time history of the typical radar groundspeed data is generally quite noisy, with high frequency variations on the order of five knots, and occasional 'outliers' which can be significantly different from the probable true speed. To smooth out these speeds and make the ETA estimate less erratic, filtering of the ground speed is done within CTAS. In its base form, the CTAS filter is a 'moving average' filter which averages the last ten radar values. In addition, there is separate logic to detect and correct for 'outliers', and acceleration logic which limits the groundspeed change in adjacent time samples. As will be shown, these additional modifications do cause significant changes in the actual groundspeed filter output. The conclusion is that the current ground speed filter logic is unable to accurately track the speed variations observed on many aircraft. The Kalman filter logic, however, appears to be an improvement over the current algorithm used to smooth ground speed variations, while being simpler and more efficient to implement. Additional logic to test for true 'outliers' can easily be added by looking at the difference between the a priori and a posteriori Kalman estimates, and not updating if the difference in these quantities is too large.
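
    The two smoothing strategies discussed can be contrasted in a few lines: a ten-sample moving average with a crude outlier gate, and a scalar random-walk Kalman filter. Neither is the actual CTAS logic; the window length, gate and noise variances are illustrative assumptions.

        import numpy as np

        def moving_average_filter(speeds, window=10, gate=20.0):
            """Average the last `window` accepted samples, gating jumps > gate knots."""
            out, buf = [], []
            for s in speeds:
                if buf and abs(s - np.mean(buf)) > gate:
                    s = float(np.mean(buf))          # treat as outlier: reuse estimate
                buf = (buf + [s])[-window:]
                out.append(np.mean(buf))
            return np.array(out)

        def kalman_filter(speeds, q=1.0, r=25.0):
            """Scalar Kalman filter: q = process variance, r = measurement variance."""
            x, p, out = speeds[0], r, []
            for z in speeds:
                p += q                               # predict
                k = p / (p + r)                      # gain
                x += k * (z - x)                     # update
                p *= 1 - k
                out.append(x)
            return np.array(out)

        rng = np.random.default_rng(2)
        true_speed = 250 + np.cumsum(rng.normal(0, 0.5, 120))   # slowly varying, knots
        radar = true_speed + rng.normal(0, 5, 120)              # ~5 kt radar noise
        radar[40] += 60                                         # one outlier sample
        print("mean abs error, moving average:",
              np.abs(moving_average_filter(radar) - true_speed).mean())
        print("mean abs error, Kalman:",
              np.abs(kalman_filter(radar) - true_speed).mean())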

  2. A nuclear phylogenetic analysis: SNPs, indels and SSRs deliver new insights into the relationships in the ‘true citrus fruit trees’ group (Citrinae, Rutaceae) and the origin of cultivated species

    PubMed Central

    Garcia-Lor, Andres; Curk, Franck; Snoussi-Trifa, Hager; Morillon, Raphael; Ancillo, Gema; Luro, François; Navarro, Luis; Ollitrault, Patrick

    2013-01-01

    Background and Aims Despite differences in morphology, the genera representing ‘true citrus fruit trees’ are sexually compatible, and their phylogenetic relationships remain unclear. Most of the important commercial ‘species’ of Citrus are believed to be of interspecific origin. By studying polymorphisms of 27 nuclear genes, the average molecular differentiation between species was estimated and some phylogenetic relationships between ‘true citrus fruit trees’ were clarified. Methods Sanger sequencing of PCR-amplified fragments from 18 genes involved in metabolite biosynthesis pathways and nine putative genes for salt tolerance was performed for 45 genotypes of Citrus and relatives of Citrus to mine single nucleotide polymorphisms (SNPs) and indel polymorphisms. Fifty nuclear simple sequence repeats (SSRs) were also analysed. Key Results A total of 16 238 kb of DNA was sequenced for each genotype, and 1097 single nucleotide polymorphisms (SNPs) and 50 indels were identified. These polymorphisms were more valuable than SSRs for inter-taxon differentiation. Nuclear phylogenetic analysis revealed that Citrus reticulata and Fortunella form a cluster that is differentiated from the clade that includes three other basic taxa of cultivated citrus (C. maxima, C. medica and C. micrantha). These results confirm the taxonomic subdivision between the subgenera Metacitrus and Archicitrus. A few genes displayed positive selection patterns within or between species, but most of them displayed neutral patterns. The phylogenetic inheritance patterns of the analysed genes were inferred for commercial Citrus spp. Conclusions Numerous molecular polymorphisms (SNPs and indels), which are potentially useful for the analysis of interspecific genetic structures, have been identified. The nuclear phylogenetic network for Citrus and its sexually compatible relatives was consistent with the geographical origins of these genera. The positive selection observed for a few genes will help future work to analyse the molecular basis of the variability of the associated traits. This study presents new insights into the origin of C. sinensis. PMID:23104641

  3. A nuclear phylogenetic analysis: SNPs, indels and SSRs deliver new insights into the relationships in the 'true citrus fruit trees' group (Citrinae, Rutaceae) and the origin of cultivated species.

    PubMed

    Garcia-Lor, Andres; Curk, Franck; Snoussi-Trifa, Hager; Morillon, Raphael; Ancillo, Gema; Luro, François; Navarro, Luis; Ollitrault, Patrick

    2013-01-01

    Despite differences in morphology, the genera representing 'true citrus fruit trees' are sexually compatible, and their phylogenetic relationships remain unclear. Most of the important commercial 'species' of Citrus are believed to be of interspecific origin. By studying polymorphisms of 27 nuclear genes, the average molecular differentiation between species was estimated and some phylogenetic relationships between 'true citrus fruit trees' were clarified. Sanger sequencing of PCR-amplified fragments from 18 genes involved in metabolite biosynthesis pathways and nine putative genes for salt tolerance was performed for 45 genotypes of Citrus and relatives of Citrus to mine single nucleotide polymorphisms (SNPs) and indel polymorphisms. Fifty nuclear simple sequence repeats (SSRs) were also analysed. A total of 16 238 kb of DNA was sequenced for each genotype, and 1097 single nucleotide polymorphisms (SNPs) and 50 indels were identified. These polymorphisms were more valuable than SSRs for inter-taxon differentiation. Nuclear phylogenetic analysis revealed that Citrus reticulata and Fortunella form a cluster that is differentiated from the clade that includes three other basic taxa of cultivated citrus (C. maxima, C. medica and C. micrantha). These results confirm the taxonomic subdivision between the subgenera Metacitrus and Archicitrus. A few genes displayed positive selection patterns within or between species, but most of them displayed neutral patterns. The phylogenetic inheritance patterns of the analysed genes were inferred for commercial Citrus spp. Numerous molecular polymorphisms (SNPs and indels), which are potentially useful for the analysis of interspecific genetic structures, have been identified. The nuclear phylogenetic network for Citrus and its sexually compatible relatives was consistent with the geographical origins of these genera. The positive selection observed for a few genes will help future work to analyse the molecular basis of the variability of the associated traits. This study presents new insights into the origin of C. sinensis.

  4. Hyperedge bundling: A practical solution to spurious interactions in MEG/EEG source connectivity analyses.

    PubMed

    Wang, Sheng H; Lobier, Muriel; Siebenhühner, Felix; Puoliväli, Tuomas; Palva, Satu; Palva, J Matias

    2018-06-01

    Inter-areal functional connectivity (FC), neuronal synchronization in particular, is thought to constitute a key systems-level mechanism for coordination of neuronal processing and communication between brain regions. Evidence to support this hypothesis has been gained largely using invasive electrophysiological approaches. In humans, neuronal activity can be non-invasively recorded only with magneto- and electroencephalography (MEG/EEG), which have been used to assess FC networks with high temporal resolution and whole-scalp coverage. However, even in source-reconstructed MEG/EEG data, signal mixing, or "source leakage", is a significant confounder for FC analyses and network localization. Signal mixing leads to two distinct kinds of false-positive observations: artificial interactions (AI) caused directly by mixing and spurious interactions (SI) arising indirectly from the spread of signals from true interacting sources to nearby false loci. To date, several interaction metrics have been developed to solve the AI problem, but the SI problem has remained largely intractable in MEG/EEG all-to-all source connectivity studies. Here, we advance a novel approach for correcting SIs in FC analyses using source-reconstructed MEG/EEG data. Our approach is to bundle observed FC connections into hyperedges by their adjacency in signal mixing. Using realistic simulations, we show here that bundling yields hyperedges with good separability of true positives and little loss in the true positive rate. Hyperedge bundling thus significantly decreases graph noise by minimizing the false-positive to true-positive ratio. Finally, we demonstrate the advantage of edge bundling in the visualization of large-scale cortical networks with real MEG data. We propose that hypergraphs yielded by bundling represent well the set of true cortical interactions that are detectable and dissociable in MEG/EEG connectivity analysis. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  5. A Dempster-Shafer Method for Multi-Sensor Fusion

    DTIC Science & Technology

    2012-03-01

    position results for DST Mean Mod (or DST Mean for Case 1) to follow the same general path as the true aircraft. Also, if the points diverged from the true...of the aircraft. Its results were similar to those for DST True and Kalman. Also DST Mean had the same clustering of points as the others specifically...DST Mean values increasingly diverged as time t increased. Then Run 27 was very similar to Case 2 Run 5. Instead of following the true aircraft path

  6. True status of smear-positive pulmonary tuberculosis defaulters in Malawi.

    PubMed Central

    Kruyt, M. L.; Kruyt, N. D.; Boeree, M. J.; Harries, A. D.; Salaniponi, F. M.; van Noord, P. A.

    1999-01-01

    The article reports the results of a study to determine the true outcome of 8 months of treatment received by smear-positive pulmonary tuberculosis (PTB) patients who had been registered as defaulters in the Queen Elizabeth Central Hospital (QECH) and Mlambe Mission Hospital (MMH), Blantyre, Malawi. The treatment outcomes were documented from the tuberculosis registers of all patients registered between 1 October 1994 and 30 September 1995. The true treatment outcome for patients who had been registered as defaulters was determined by making personal inquiries at the treatment units and the residences of patients or relatives and, in a few cases, by writing to the appropriate postal address. Interviews were carried out with patients who had defaulted and were still alive and with matched, fully compliant PTB patients who had successfully completed the treatment to determine the factors associated with defaulter status. Of the 1099 patients, 126 (11.5%) had been registered as defaulters, and the true treatment outcome was determined for 101 (80%) of the latter; only 22 were true defaulters, 31 had completed the treatment, 31 had died during the treatment period, and 17 had left the area. A total of 8 of the 22 true defaulters were still alive and were compared with the compliant patients. Two characteristics were significantly associated with defaulting: the defaulters were unmarried, and they did not know the correct duration of antituberculosis treatment. Many of the smear-positive tuberculosis patients who had been registered as defaulters in the Blantyre district were found to have different treatment outcomes, without defaulting. The quality of reporting in the health facilities must therefore be improved in order to exclude individuals who are not true defaulters. PMID:10361755

  7. Children's False Memory and True Disclosure in the Face of Repeated Questions

    ERIC Educational Resources Information Center

    Schaaf, Jennifer M.; Alexander, Kristen Weede; Goodman, Gail S.

    2008-01-01

    The current study was designed to investigate children's memory and suggestibility for events differing in valence (positive or negative) and veracity (true or false). A total of 82 3- and 5-year-olds were asked repeated questions about true and false events, either in a grouped order (i.e., all questions about a certain event asked consecutively)…

  8. Why do young adults develop a passion for Internet activities? The associations among personality, revealing "true self" on the Internet, and passion for the Internet.

    PubMed

    Tosun, Leman Pinar; Lajunen, Timo

    2009-08-01

    This study examines the associations of harmonious passion (HP) and obsessive passion (OP) for Internet activities with Eysenckian personality dimensions in a sample of 421 university students. Results show that psychoticism correlates positively with both HP and OP; extroversion correlates positively with HP only; and neuroticism has no correlation with passion for Internet activities. Additionally, the study examines participants' tendency to express their "true self" on the Internet, and the results reveal that this tendency has a positive association with psychoticism, neuroticism, and both types of passion for the Internet. Moreover, the relationship between psychoticism and passion (both harmonious and obsessive) is mediated by the tendency to express true self on the Internet. The results were interpreted from the media dependency perspective.

  9. Transformation of apparent ocean wave spectra observed from an aircraft sensor platform

    NASA Technical Reports Server (NTRS)

    Poole, L. R.

    1976-01-01

    The problem considered was transformation of a unidirectional apparent ocean wave spectrum observed from an aircraft sensor platform into the true spectrum that would be observed from a stationary platform. Spectral transformation equations were developed in terms of the linear wave dispersion relationship and the wave group speed. An iterative solution to the equations was outlined and used to transform reference theoretical apparent spectra for several assumed values of average water depth. Results show that changing the average water depth leads to a redistribution of energy density among the various frequency bands of the transformed spectrum. This redistribution is most severe when much of the energy density is expected, a priori, to reside at relatively low true frequencies.
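
    For reference, the transformation referred to above is commonly written as follows (a sketch under the stated assumptions of a unidirectional sea, linear dispersion in water of depth h, and platform speed U along the wave direction; the paper's sign conventions and notation may differ). The apparent (encounter) frequency and the dispersion relation are

    $$ \omega_a = \omega - kU, \qquad \omega^{2} = gk\,\tanh(kh), $$

    and conservation of variance between corresponding frequency bands, $S(\omega)\,d\omega = S_a(\omega_a)\,d\omega_a$, gives

    $$ S(\omega) = S_a(\omega_a)\,\left|\,1 - \frac{U}{c_g(\omega)}\right|, \qquad c_g = \frac{d\omega}{dk}, $$

    which generally must be solved iteratively because $\omega_a(\omega)$ cannot be inverted in closed form. The water depth enters through $\tanh(kh)$ and hence through $c_g$, which is why changing the assumed average depth redistributes energy density among the frequency bands of the transformed spectrum.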

  10. Phantom studies investigating extravascular density imaging for partial volume correction of 3-D PET ¹⁸FDG studies

    NASA Astrophysics Data System (ADS)

    Wassenaar, R. W.; Beanlands, R. S. B.; deKemp, R. A.

    2004-02-01

    Limited scanner resolution and cardiac motion contribute to partial volume (PV) averaging of cardiac PET images. An extravascular (EV) density image, created from the subtraction of a blood pool scan from a transmission image, has been used to correct for PV averaging in H₂¹⁵O studies using 2-D imaging but not with 3-D imaging of other tracers such as ¹⁸FDG. A cardiac phantom emulating the left ventricle was used to characterize the method for use in 3-D PET studies. Measurement of the average myocardial activity showed PV losses of 32% below the true activity (p<0.001). Initial application of the EV density correction still yielded a myocardial activity 13% below the true value (p<0.001). This failure of the EV density image was due to the 1.66 mm thick plastic barrier separating the myocardial and ventricular chambers within the phantom. Upon removal of this artifact by morphological dilation of the blood pool, the corrected myocardial value was within 2% of the true value (p=ns). Spherical ROIs (diameter of 2 to 10 mm), evenly distributed about the myocardium, were also used to calculate the average activity. The EV density image was able to account for PV averaging throughout the range of diameters to within 5% accuracy; however, a small bias was seen as the size of the ROIs increased. This indicated a slight mismatch between the emission and transmission image resolutions, a result of the difference in data acquisitions (i.e., span and ring difference) and default smoothing. These results show that the use of an EV density image to correct for PV averaging is possible with 3-D PET. A method of correcting barrier effects in phantoms has been presented, as well as a process for evaluating resolution mismatch.
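
    As a rough sketch of the correction described above (the exact normalization used by the authors may differ), the EV density image estimates the fraction of each voxel occupied by extravascular tissue, and the measured activity is divided by that fraction:

    $$ D_{EV}(x) = D_{T}(x) - D_{BP}(x), \qquad C_{corr}(x) = \frac{C_{meas}(x)}{D_{EV}(x)}, $$

    where $D_{T}$ is the density-normalized transmission image, $D_{BP}$ the blood-pool image, $C_{meas}$ the measured (partial-volume-affected) activity concentration, and $C_{corr}$ the corrected estimate. Under this reading, the plastic barrier adds non-radioactive density to $D_{EV}$ near the chamber wall, biasing $C_{corr}$ low, and dilating the blood pool before the subtraction removes that contribution.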

  11. Relative Packing Groups in Template-Based Structure Prediction: Cooperative Effects of True Positive Constraints

    PubMed Central

    Day, Ryan; Qu, Xiaotao; Swanson, Rosemarie; Bohannan, Zach; Bliss, Robert

    2011-01-01

    Abstract Most current template-based structure prediction methods concentrate on finding the correct backbone conformation and then packing sidechains within that backbone. Our packing-based method derives distance constraints from conserved relative packing groups (RPGs). In our refinement approach, the RPGs provide a level of resolution that restrains global topology while allowing conformational sampling. In this study, we test our template-based structure prediction method using 51 prediction units from CASP7 experiments. RPG-based constraints are able to substantially improve approximately two-thirds of starting templates. Upon deeper investigation, we find that true positive spatial constraints, especially those non-local in sequence, derived from the RPGs were important to building nearer native models. Surprisingly, the fraction of incorrect or false positive constraints does not strongly influence the quality of the final candidate. This result indicates that our RPG-based true positive constraints sample the self-consistent, cooperative interactions of the native structure. The lack of such reinforcing cooperativity explains the weaker effect of false positive constraints. Generally, these findings are encouraging indications that RPGs will improve template-based structure prediction. PMID:21210729

  12. Income-related health inequalities across regions in Korea

    PubMed Central

    2011-01-01

    Introduction In addition to economic inequalities, there has been growing concern over socioeconomic inequalities in health across income levels and/or regions. This study measures income-related health inequalities within and between regions and assesses the possibility of convergence of socioeconomic inequalities in health as regional incomes converge. Methods We considered a total of 45,233 subjects (≥ 19 years) drawn from the four waves of the Korean National Health and Nutrition Examination Survey (KNHANES). We considered true health as a latent variable following a lognormal distribution. We obtained ill-health scores by matching self-rated health (SRH) to its distribution and used the Gini Coefficient (GC) and an income-related ill-health Concentration Index (CI) to examine inequalities in income and health, respectively. Results The GC estimates were 0.3763 and 0.0657 for overall and spatial inequalities, respectively. The overall CI was -0.1309, and the spatial CI was -0.0473. The spatial GC and CI estimates were smaller than their counterparts, indicating substantial inequalities in income (from 0.3199 in Daejeon to 0.4233 in Chungnam) and income-related health inequalities (from -0.1596 in Jeju to -0.0844 in Ulsan) within regions. The results indicate a positive relationship between the GC and the average ill-health and a negative relationship between the CI and the average ill-health. Those regions with a low level of health tended to show an unequal distribution of income and health. In addition, there was a negative relationship between the GC and the CI, that is, the larger the income inequalities, the larger the health inequalities were. The GC was negatively related to the average regional income, indicating that an increase in a region's average income reduced income inequalities in the region. On the other hand, the CI showed a positive relationship, indicating that an increase in a region's average income reduced health inequalities in the region. Conclusion The results suggest that reducing health inequalities across regions requires a more equitable distribution of income and a higher level of average income and that the higher the region's average income, the smaller its health inequalities are. PMID:21967804
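
    For reference, the GC and the income-related ill-health CI used above are often computed with the "convenient covariance" formulas below (a standard textbook form, not necessarily the exact estimator used in the study), where $y_i$ is income, $h_i$ the ill-health score, $R_i$ the fractional income rank of individual $i$, and $\mu_y$, $\mu_h$ the corresponding means:

    $$ GC = \frac{2}{\mu_y}\,\mathrm{Cov}(y_i, R_i), \qquad CI = \frac{2}{\mu_h}\,\mathrm{Cov}(h_i, R_i). $$

    A negative CI, as reported for every region here, indicates that ill health is concentrated among lower-income individuals.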

  13. Personalized Offline and Pseudo-Online BCI Models to Detect Pedaling Intent

    PubMed Central

    Rodríguez-Ugarte, Marisol; Iáñez, Eduardo; Ortíz, Mario; Azorín, Jose M.

    2017-01-01

    The aim of this work was to design a personalized BCI model to detect pedaling intention through EEG signals. The approach sought to select the best among many possible BCI models for each subject. The choice was between different processing windows, feature extraction algorithms and electrode configurations. Moreover, data was analyzed offline and pseudo-online (in a way suitable for real-time applications), with a preference for the latter case. A process for selecting the best BCI model was described in detail. Results for the pseudo-online processing with the best BCI model of each subject were, on average, a true positive rate of 76.7%, 4.94 false positives per minute, and an accuracy of 55.1%. The personalized BCI model approach was also found to be significantly advantageous when compared to the typical approach of using a fixed feature extraction algorithm and electrode configuration. The resulting approach could be used to more robustly interface with lower limb exoskeletons in the context of the rehabilitation of stroke patients. PMID:28744212

  14. Taxonomic Position and Species Identity of the Cultivated Yeongji 'Ganoderma lucidum' in Korea

    PubMed Central

    Kwon, O-Chul; Park, Young-Jin; Kim, Hong-Il; Kong, Won-Sik; Cho, Jae-Han

    2016-01-01

    Ganoderma lucidum has a long history of use as a traditional medicine in Asian countries. However, the taxonomy of Ganoderma species remains controversial, since they were initially classified on the basis of their morphological characteristics. Recently, it was proposed that G. lucidum from China be renamed as G. sichuanense or G. lingzhi. In the present study, phylogenetic analysis using the internal transcribed spacer region rDNA sequences of the Ganoderma species indicated that all strains of the Korean 'G. lucidum' clustered into one group together with G. sichuanense and G. lingzhi from China. However, strains from Europe and North America, which were regarded as true G. lucidum, were positioned in a clearly different group. In addition, the average size of the basidiospores from the Korean cultivated Yeongji strains was similar to that of G. lingzhi. Based on these results, we propose that the Korean cultivated Yeongji strains of 'G. lucidum' should be renamed as G. lingzhi. PMID:27103848

  15. Taxonomic Position and Species Identity of the Cultivated Yeongji 'Ganoderma lucidum' in Korea.

    PubMed

    Kwon, O-Chul; Park, Young-Jin; Kim, Hong-Il; Kong, Won-Sik; Cho, Jae-Han; Lee, Chang-Soo

    2016-03-01

    Ganoderma lucidum has a long history of use as a traditional medicine in Asian countries. However, the taxonomy of Ganoderma species remains controversial, since they were initially classified on the basis of their morphological characteristics. Recently, it was proposed that G. lucidum from China be renamed as G. sichuanense or G. lingzhi. In the present study, phylogenetic analysis using the internal transcribed spacer region rDNA sequences of the Ganoderma species indicated that all strains of the Korean 'G. lucidum' clustered into one group together with G. sichuanense and G. lingzhi from China. However, strains from Europe and North America, which were regarded as true G. lucidum, were positioned in a clearly different group. In addition, the average size of the basidiospores from the Korean cultivated Yeongji strains was similar to that of G. lingzhi. Based on these results, we propose that the Korean cultivated Yeongji strains of 'G. lucidum' should be renamed as G. lingzhi.

  16. Towards SSVEP-based, portable, responsive Brain-Computer Interface.

    PubMed

    Kaczmarek, Piotr; Salomon, Pawel

    2015-08-01

    A Brain-Computer Interface in a motion control application requires high system responsiveness and accuracy. An SSVEP interface consisting of 2-8 stimuli and a 2-channel EEG amplifier is presented in this paper. The observed stimulus is recognized based on a canonical correlation calculated in a 1-second window, ensuring high interface responsiveness. A threshold classifier with hysteresis (T-H) was proposed for recognition purposes. The obtained results suggest that the T-H classifier significantly increases classification performance (resulting in an accuracy of 76%, while maintaining an average false-positive detection rate of 2-13% for stimuli other than the observed one, depending on stimulus frequency). It was shown that the parameters of the T-H classifier that maximize the true positive rate can be estimated by gradient-based search, since a single maximum was observed. Moreover, the preliminary results, obtained on a test group (N=4), suggest that for the T-H classifier there exists a set of parameters for which the system accuracy is similar to the accuracy obtained for a user-trained classifier.
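
    The sketch below illustrates the threshold-with-hysteresis idea described above (illustrative only; the paper's thresholds and the canonical-correlation feature extraction are not reproduced here). Detection for a stimulus switches on when its per-window score exceeds an upper threshold and stays on until the score falls below a lower one, which suppresses rapid toggling around a single cut-off.

```python
import numpy as np

def hysteresis_detect(scores, t_on=0.55, t_off=0.45):
    """Threshold classifier with hysteresis for one stimulus channel.
    `scores` holds per-window canonical-correlation values; the thresholds
    are illustrative assumptions, with t_off < t_on."""
    active, states = False, []
    for s in scores:
        if not active and s >= t_on:
            active = True
        elif active and s < t_off:
            active = False
        states.append(active)
    return np.array(states)

# Toy usage: a noisy correlation trace with a genuine burst in the middle.
rng = np.random.default_rng(1)
scores = np.concatenate([rng.uniform(0.1, 0.4, 30),
                         rng.uniform(0.6, 0.9, 30),
                         rng.uniform(0.1, 0.4, 30)])
print(hysteresis_detect(scores).astype(int))
```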

  17. Progress toward the determination of correct classification rates in fire debris analysis.

    PubMed

    Waddell, Erin E; Song, Emma T; Rinke, Caitlin N; Williams, Mary R; Sigman, Michael E

    2013-07-01

    Principal components analysis (PCA), linear discriminant analysis (LDA), and quadratic discriminant analysis (QDA) were used to develop a multistep classification procedure for determining the presence of ignitable liquid residue in fire debris and assigning any ignitable liquid residue present into the classes defined under the American Society for Testing and Materials (ASTM) E 1618-10 standard method. A multistep classification procedure was tested by cross-validation based on model data sets comprised of the time-averaged mass spectra (also referred to as total ion spectra) of commercial ignitable liquids and pyrolysis products from common building materials and household furnishings (referred to simply as substrates). Fire debris samples from laboratory-scale and field test burns were also used to test the model. The optimal model's true-positive rate was 81.3% for cross-validation samples and 70.9% for fire debris samples. The false-positive rate was 9.9% for cross-validation samples and 8.9% for fire debris samples. © 2013 American Academy of Forensic Sciences.
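
    The sketch below shows the general shape of such a PCA-plus-discriminant step on total ion spectra (a minimal illustration, not the authors' multistep ASTM E 1618 procedure; the array sizes and random data are placeholders).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((120, 300))      # 120 time-averaged mass spectra, 300 m/z bins
y = rng.integers(0, 2, 120)     # 1 = ignitable liquid residue, 0 = substrate only

# Reduce the spectra with PCA, then separate the two classes with LDA;
# cross-validation gives the kind of true/false-positive accounting above.
model = make_pipeline(StandardScaler(), PCA(n_components=10),
                      LinearDiscriminantAnalysis())
print(cross_val_score(model, X, y, cv=5).mean())
```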

  18. Personalized Offline and Pseudo-Online BCI Models to Detect Pedaling Intent.

    PubMed

    Rodríguez-Ugarte, Marisol; Iáñez, Eduardo; Ortíz, Mario; Azorín, Jose M

    2017-01-01

    The aim of this work was to design a personalized BCI model to detect pedaling intention through EEG signals. The approach sought to select the best among many possible BCI models for each subject. The choice was between different processing windows, feature extraction algorithms and electrode configurations. Moreover, data was analyzed offline and pseudo-online (in a way suitable for real-time applications), with a preference for the latter case. A process for selecting the best BCI model was described in detail. Results for the pseudo-online processing with the best BCI model of each subject were, on average, a true positive rate of 76.7%, 4.94 false positives per minute, and an accuracy of 55.1%. The personalized BCI model approach was also found to be significantly advantageous when compared to the typical approach of using a fixed feature extraction algorithm and electrode configuration. The resulting approach could be used to more robustly interface with lower limb exoskeletons in the context of the rehabilitation of stroke patients.

  19. Environmental invariants in the representation of motion: Implied dynamics and representational momentum, gravity, friction, and centripetal force.

    PubMed

    Hubbard, T L

    1995-09-01

    Memory for the final position of a moving target is often shifted or displaced from the true final position of that target. Early studies of this memory shift focused on parallels between the momentum of the target and the momentum of the representation of the target and called this displacement representational momentum, but many factors other than momentum contribute to the memory shift. A consideration of the empirical literature on representational momentum and related types of displacement suggests there are at least four different types of factors influencing the direction and magnitude of such memory shifts: stimulus characteristics (e.g., target direction, target velocity), implied dynamics and environmental invariants (e.g., implied momentum, gravity, friction, centripetal force), memory averaging of target and nontarget context (e.g., biases toward previous target locations or nontarget context), and observers' expectations (both tacit and conscious) regarding future target motion and target/context interactions. Several theories purporting to account for representational momentum and related types of displacement are also considered.

  20. Methods for consistent forewarning of critical events across multiple data channels

    DOEpatents

    Hively, Lee M.

    2006-11-21

    This invention teaches further method improvements to forewarn of critical events via phase-space dissimilarity analysis of data from biomedical equipment, mechanical devices, and other physical processes. One improvement involves conversion of time-serial data into equiprobable symbols. A second improvement is a method to maximize the channel-consistent total-true rate of forewarning from a plurality of data channels over multiple data sets from the same patient or process. This total-true rate requires resolution of the forewarning indications into true positives, true negatives, false positives and false negatives. A third improvement is the use of various objective functions, as derived from the phase-space dissimilarity measures, to give the best forewarning indication. A fourth improvement uses various search strategies over the phase-space analysis parameters to maximize said objective functions. A fifth improvement shows the usefulness of the method for various biomedical and machine applications.
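
    Assuming the "total-true rate" is the usual fraction of correct indications (the summary above does not spell out the exact definition), it can be written as

    $$ \text{total-true rate} = \frac{TP + TN}{TP + TN + FP + FN}, $$

    and the search strategies mentioned above then look for phase-space analysis parameters that maximize this quantity consistently across all data channels and data sets.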

  1. Some interesting examples of binormal degeneracy and analysis using a contaminated binormal ROC model

    NASA Astrophysics Data System (ADS)

    Berbaum, Kevin S.; Dorfman, Donald D.

    2001-06-01

    Receiver operating characteristic (ROC) data with false positive fractions of zero are often difficult to fit with standard ROC methodology, and are sometimes discarded. Some extreme examples of such data were analyzed. A new ROC model is proposed that assumes that for a proportion of abnormalities, no signal information is captured and that those abnormalities have the same distribution as noise along the latent decision axis. Rating reports of fracture for single view ankle radiographs were also analyzed with the binormal ROC model and two proper ROC models. The conventional models gave ROC area close to one, implying a true positive fraction close to one. The data contained no such fractions. When all false positive fractions were zero, conventional ROC areas gave little or no hint of unmistakable differences in true positive fractions. In contrast, the new model can fit ROC data in which some or all of the ROC points have false positive fractions of zero and true positive fractions less than one without concluding perfect performance. These data challenge the validity and robustness of conventional ROC models, but the contaminated binormal model accounts for these data. This research has been published for a different audience.
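
    In a common parameterization of the contaminated binormal model (written here as a sketch; the paper's notation may differ), a fraction $1-\alpha$ of the abnormal cases carries no signal information and follows the noise distribution, taken as standard normal, while the remaining fraction $\alpha$ follows a normal distribution shifted by $\mu$:

    $$ \mathrm{FPF}(c) = \Phi(-c), \qquad \mathrm{TPF}(c) = \alpha\,\Phi(\mu - c) + (1-\alpha)\,\Phi(-c). $$

    For well-separated signal (large $\mu$) the curve rises steeply and then levels off near $\mathrm{TPF} \approx \alpha + (1-\alpha)\,\mathrm{FPF}$, which is how the model can fit operating points with false positive fractions of zero and true positive fractions below one without implying perfect performance.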

  2. VarBin, a novel method for classifying true and false positive variants in NGS data

    PubMed Central

    2013-01-01

    Background Variant discovery for rare genetic diseases using Illumina genome or exome sequencing involves screening of up to millions of variants to find only the one or few causative variant(s). Sequencing or alignment errors create "false positive" variants, which are often retained in the variant screening process. Methods to remove false positive variants often retain many false positive variants. This report presents VarBin, a method to prioritize variants based on a false positive variant likelihood prediction. Methods VarBin uses the Genome Analysis Toolkit variant calling software to calculate the variant-to-wild type genotype likelihood ratio at each variant change and position divided by read depth. The resulting Phred-scaled, likelihood-ratio by depth (PLRD) was used to segregate variants into 4 Bins with Bin 1 variants most likely true and Bin 4 most likely false positive. PLRD values were calculated for a proband of interest and 41 additional Illumina HiSeq, exome and whole genome samples (proband's family or unrelated samples). At variant sites without apparent sequencing or alignment error, wild type/non-variant calls cluster near -3 PLRD and variant calls typically cluster above 10 PLRD. Sites with systematic variant calling problems (evident by variant quality scores and biases as well as displayed on the iGV viewer) tend to have higher and more variable wild type/non-variant PLRD values. Depending on the separation of a proband's variant PLRD value from the cluster of wild type/non-variant PLRD values for background samples at the same variant change and position, the VarBin method's classification is assigned to each proband variant (Bin 1 to Bin 4). Results To assess VarBin performance, Sanger sequencing was performed on 98 variants in the proband and background samples. True variants were confirmed in 97% of Bin 1 variants, 30% of Bin 2, and 0% of Bin 3/Bin 4. Conclusions These data indicate that VarBin correctly classifies the majority of true variants as Bin 1 and Bin 3/4 contained only false positive variants. The "uncertain" Bin 2 contained both true and false positive variants. Future work will further differentiate the variants in Bin 2. PMID:24266885
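
    The sketch below gives one plausible reading of the scoring and binning just described (hypothetical and illustrative only: the Phred-style scaling and the z-score cut-offs are assumptions, not VarBin's published thresholds).

```python
import math
from statistics import mean, stdev

def plrd(l_variant, l_wildtype, depth):
    """Phred-scaled variant vs. wild-type likelihood ratio per unit read depth
    (one plausible reading of the PLRD score described above)."""
    return 10.0 * (math.log10(l_variant) - math.log10(l_wildtype)) / depth

def assign_bin(proband_plrd, background_plrds, cuts=(6.0, 3.0, 1.0)):
    """Bin 1 (most likely true) .. Bin 4 (most likely false positive), based on
    how far the proband PLRD sits above the background wild-type cluster,
    measured in that cluster's own standard deviations. Cut-offs are assumed."""
    mu = mean(background_plrds)
    sd = stdev(background_plrds) or 1e-6
    z = (proband_plrd - mu) / sd
    if z >= cuts[0]:
        return 1
    if z >= cuts[1]:
        return 2
    if z >= cuts[2]:
        return 3
    return 4

# Toy usage: background samples clustering near -3 PLRD, proband well above.
background = [-3.2, -2.9, -3.1, -2.8, -3.0]
print(assign_bin(12.0, background))   # -> 1
```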

  3. Control System of a Three DOF Spacecraft Simulator by Vectorable Thrusters and Control Moment Gyros

    DTIC Science & Technology

    2006-12-01

    [Search-result snippet consisting of SIMULINK block-diagram labels: Figure 42, "Controller SIMULINK Model", together with Thruster Firing Logic and CMG Steering Logic blocks and the commanded forces Fx, Fy and torque Tc.] As an initial step in the... ...to slew and fire independently, MSGCMG position is used to generate a

  4. Spreadsheet Simulation of the Law of Large Numbers

    ERIC Educational Resources Information Center

    Boger, George

    2005-01-01

    If larger and larger samples are successively drawn from a population and a running average calculated after each sample has been drawn, the sequence of averages will converge to the mean, [mu], of the population. This remarkable fact, known as the law of large numbers, holds true if samples are drawn from a population of discrete or continuous…
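
    The convergence described above is easy to reproduce outside a spreadsheet; the short simulation below (Python, using a fair die as the discrete population) recomputes the running average after each draw.

```python
import random

random.seed(42)
mu, n = 3.5, 10_000          # population mean of a fair die, number of draws
total = 0.0
for i in range(1, n + 1):
    total += random.randint(1, 6)            # draw one sample
    if i in (10, 100, 1_000, 10_000):        # report the running average
        print(f"after {i:>6} draws: running average = {total / i:.4f} (mu = {mu})")
```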

  5. Scaling effect of fraction of vegetation cover retrieved by algorithms based on linear mixture model

    NASA Astrophysics Data System (ADS)

    Obata, Kenta; Miura, Munenori; Yoshioka, Hiroki

    2010-08-01

    Differences in spatial resolution among sensors have been a source of error among satellite data products, known as a scaling effect. This study investigates the mechanism of the scaling effect on the fraction of vegetation cover (FVC) retrieved by a linear mixture model which employs NDVI as one of the constraints. The scaling effect is induced by differences in texture and by differences between the true endmember spectra and the endmember spectra assumed during retrievals. The mechanism of the scaling effect was analyzed by focusing on the monotonic behavior of spatially averaged FVC as a function of spatial resolution. The number of endmembers is limited to two so that the investigation can proceed analytically. Although the spatially averaged NDVI varies monotonically with spatial resolution, the corresponding FVC values do not always vary monotonically. The conditions under which the averaged FVC varies monotonically for a certain sequence of spatial resolutions were derived analytically. The increasing or decreasing trend of the monotonic behavior can be predicted from the true and assumed endmember spectra of the vegetation and non-vegetation classes, regardless of the distribution of the vegetation class within a fixed area. The results imply that the scaling effect on FVC is more complicated than that on NDVI, since, unlike NDVI, FVC becomes non-monotonic under a certain condition determined by the true and assumed endmember spectra.
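
    For orientation, the NDVI-constrained two-endmember linear mixture retrieval referred to above is commonly written as follows (a generic form; the paper's exact formulation may differ), with $\mathrm{NDVI}_v$ and $\mathrm{NDVI}_s$ the assumed vegetation and non-vegetation endmember values:

    $$ \mathrm{NDVI} = f_v\,\mathrm{NDVI}_v + (1 - f_v)\,\mathrm{NDVI}_s \;\Longrightarrow\; f_v = \frac{\mathrm{NDVI} - \mathrm{NDVI}_s}{\mathrm{NDVI}_v - \mathrm{NDVI}_s}. $$

    When the assumed endmembers differ from the true ones, or when NDVI is averaged over coarser pixels before the retrieval, the retrieved $f_v$ depends on spatial resolution, which is the scaling effect whose monotonicity is analyzed above.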

  6. Skin irritation, false positives and the local lymph node assay: a guideline issue?

    PubMed

    Basketter, David A; Kimber, Ian

    2011-10-01

    Since the formal validation and regulatory acceptance of the local lymph node assay (LLNA) there have been commentaries suggesting that the irritant properties of substances can give rise to false positives. As toxicology aspires to progress rapidly towards the age of in vitro alternatives, it is of increasing importance that issues relating to assay selectivity and performance are understood fully, and that true false positive responses are distinguished clearly from those that are simply unpalatable. In the present review, we have focused on whether skin irritation per se is actually a direct cause of true false positive results in the LLNA. The body of published work has been examined critically and considered in relation to our current understanding of the mechanisms of skin irritation and skin sensitisation. From these analyses it is very clear that, of itself, skin irritation is not a cause of false positive results. The corollary is, therefore, that limiting test concentrations in the LLNA for the purpose of avoiding skin irritation may lead, unintentionally, to false negatives. Where a substance is a true false positive in the LLNA, the classic example being sodium lauryl sulphate, explanations for that positivity will have to reach beyond the seductive, but incorrect, recourse to its skin irritation potential. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. Computerized mass detection in whole breast ultrasound images: reduction of false positives using bilateral subtraction technique

    NASA Astrophysics Data System (ADS)

    Ikedo, Yuji; Fukuoka, Daisuke; Hara, Takeshi; Fujita, Hiroshi; Takada, Etsuo; Endo, Tokiko; Morita, Takako

    2007-03-01

    The comparison of left and right mammograms is a common technique used by radiologists for the detection and diagnosis of masses. In mammography, computer-aided detection (CAD) schemes using bilateral subtraction technique have been reported. However, in breast ultrasonography, there are no reports on CAD schemes using comparison of left and right breasts. In this study, we propose a scheme of false positive reduction based on bilateral subtraction technique in whole breast ultrasound images. Mass candidate regions are detected by using the information of edge directions. Bilateral breast images are registered with reference to the nipple positions and skin lines. A false positive region is detected based on a comparison of the average gray values of a mass candidate region and a region with the same position and same size as the candidate region in the contralateral breast. In evaluating the effectiveness of the false positive reduction method, three normal and three abnormal bilateral pairs of whole breast images were employed. These abnormal breasts included six masses larger than 5 mm in diameter. The sensitivity was 83% (5/6) with 13.8 (165/12) false positives per breast before applying the proposed reduction method. By applying the method, false positives were reduced to 4.5 (54/12) per breast without removing a true positive region. This preliminary study indicates that the bilateral subtraction technique is effective for improving the performance of a CAD scheme in whole breast ultrasound images.

  8. Segmentation of Retinal Blood Vessels Based on Cake Filter

    PubMed Central

    Bao, Xi-Rong; Ge, Xin; She, Li-Huang; Zhang, Shi

    2015-01-01

    Segmentation of retinal blood vessels is significant to the diagnosis and evaluation of ocular diseases like glaucoma and systemic diseases such as diabetes and hypertension. Retinal blood vessel segmentation for small and low-contrast vessels is still a challenging problem. To solve this problem, a new method based on the cake filter is proposed. Firstly, a quadrature filter bank, called the cake filter bank, is constructed in the Fourier domain. Then real-component fusion is used to separate the blood vessels from the background. Finally, the blood vessel network is obtained by an adaptive threshold. The experiments implemented on the STARE database indicate that the new method has better performance than the traditional ones on small-vessel extraction, average accuracy rate, and true- and false-positive rates. PMID:26636095

  9. A corrected formulation for marginal inference derived from two-part mixed models for longitudinal semi-continuous data

    PubMed Central

    Su, Li; Farewell, Vernon T

    2013-01-01

    For semi-continuous data which are a mixture of true zeros and continuously distributed positive values, the use of two-part mixed models provides a convenient modelling framework. However, deriving population-averaged (marginal) effects from such models is not always straightforward. Su et al. presented a model that provided convenient estimation of marginal effects for the logistic component of the two-part model but the specification of marginal effects for the continuous part of the model presented in that paper was based on an incorrect formulation. We present a corrected formulation and additionally explore the use of the two-part model for inferences on the overall marginal mean, which may be of more practical relevance in our application and more generally. PMID:24201470
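
    Written generically (a sketch of the standard two-part setup rather than the authors' exact notation), part one is a mixed-effects logistic model for the probability of a positive value and part two a mixed model for the positive values, so the overall marginal mean combines the two factors:

    $$ E[Y_{ij}] = \Pr(Y_{ij} > 0)\; E\!\left[\,Y_{ij} \mid Y_{ij} > 0\,\right], $$

    with population-averaged versions of both factors obtained by integrating over the random effects rather than setting them to zero; the correction discussed above concerns how this is done for the continuous part.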

  10. Assessment of Isometric Trunk Strength - The Relevance of Body Position and Relationship between Planes of Movement.

    PubMed

    Kocjan, Andrej; Sarabon, Nejc

    2014-05-01

    The aim of the study was to assess the differences in maximal isometric trunk extension and flexion strength during standing, sitting and kneeling. Additionally, we were interested in correlations between the maximal strength in sagittal, frontal and transverse plane, measured in the sitting position. Sixty healthy subjects (24 male, 36 female; age 41.3 ± 15.1 yrs; body height 1.70 ± 0.09 m; body mass 72.7 ± 13.3 kg) performed maximal voluntary isometric contractions of the trunk flexor and extensor muscles in standing, sitting and kneeling position. The subjects also performed lateral flexions and rotations in the sitting position. Each task was repeated three times and average of maximal forces was used for data analysis. RANOVA with post-hoc testing was applied to the flexion and extension data. The level of statistical significance was set to p < 0.05. Overall, in both genders together, the highest average force for trunk extension was recorded in sitting posture (910.5 ± 271.5 N), followed by kneeling (834.3 ± 242.9 N) and standing (504.0 ± 165.4 N), compared with flexion, where we observed the opposite trend (508.5 ± 213.0 N, 450.9 ± 165.7 N and 443.4 ± 153.1 N, respectively). Post-hoc tests showed significant differences in all extension positions (p < 0.0001) and between sitting/standing (p = 0.018) and kneeling/standing (p = 0.033) flexion exertions. The extension/flexion ratio for sitting was 2.1 ± 0.4, for kneeling 1.9 ± 0.4, followed by standing, where motion forward approximately equals motion backward (1.1 ± 0.6). Trunk sagittal-transverse strength showed the strongest correlation, followed by frontal-transverse and sagittal-frontal plane correlation pairs (R² = 0.830, 0.712 and 0.657). The baseline trunk isometric strength data provided by this study should help further strength diagnostics, more precisely, the prevention of low back disorders. Key points: Maximal voluntary isometric force of the trunk extensors increased with the angle at the hips (highest in sitting, medium in kneeling and lowest in upright standing). The opposite trend was true for isometric MVC force of trunk flexors (both genders together and men only). In the sitting position, the strongest correlation between MVC forces was found between sagittal (average flexion/extension) and transverse plane (average left/right rotation). In order to increase the validity of trunk strength testing, the latter should include specific warm-up, good pelvic fixation and visual feedback.

  11. Assessment of Isometric Trunk Strength – The Relevance of Body Position and Relationship between Planes of Movement

    PubMed Central

    Kocjan, Andrej; Sarabon, Nejc

    2014-01-01

    The aim of the study was to assess the differences in maximal isometric trunk extension and flexion strength during standing, sitting and kneeling. Additionally, we were interested in correlations between the maximal strength in sagittal, frontal and transverse plane, measured in the sitting position. Sixty healthy subjects (24 male, 36 female; age 41.3 ± 15.1 yrs; body height 1.70 ± 0.09 m; body mass 72.7 ± 13.3 kg) performed maximal voluntary isometric contractions of the trunk flexor and extensor muscles in standing, sitting and kneeling position. The subjects also performed lateral flexions and rotations in the sitting position. Each task was repeated three times and average of maximal forces was used for data analysis. RANOVA with post-hoc testing was applied to the flexion and extension data. The level of statistical significance was set to p < 0.05. Overall, in both genders together, the highest average force for trunk extension was recorded in sitting posture (910.5 ± 271.5 N), followed by kneeling (834.3 ± 242.9 N) and standing (504.0 ± 165.4 N), compared with flexion, where we observed the opposite trend (508.5 ± 213.0 N, 450.9 ± 165.7 N and 443.4 ± 153.1 N, respectively). Post-hoc tests showed significant differences in all extension positions (p < 0.0001) and between sitting/standing (p = 0.018) and kneeling/standing (p = 0.033) flexion exertions. The extension/flexion ratio for sitting was 2.1 ± 0.4, for kneeling 1.9 ± 0.4, followed by standing, where motion forward approximately equals motion backward (1.1 ± 0.6). Trunk sagittal-transverse strength showed the strongest correlation, followed by frontal-transverse and sagittal-frontal plane correlation pairs (R2 = 0.830, 0.712 and 0.657). The baseline trunk isometric strength data provided by this study should help further strength diagnostics, more precisely, the prevention of low back disorders. Key points: Maximal voluntary isometric force of the trunk extensors increased with the angle at the hips (highest in sitting, medium in kneeling and lowest in upright standing). The opposite trend was true for isometric MVC force of trunk flexors (both genders together and men only). In the sitting position, the strongest correlation between MVC forces was found between sagittal (average flexion/extension) and transverse plane (average left/right rotation). In order to increase the validity of trunk strength testing, the latter should include specific warm-up, good pelvic fixation and visual feedback. PMID:24790491

  12. Spatial averaging of fields from half-wave dipole antennas and corresponding SAR calculations in the NORMAN human voxel model between 65 MHz and 2 GHz.

    PubMed

    Findlay, R P; Dimbylow, P J

    2009-04-21

    If an antenna is located close to a person, the electric and magnetic fields produced by the antenna will vary in the region occupied by the human body. To obtain a mean value of the field for comparison with reference levels, the Institute of Electrical and Electronic Engineers (IEEE) and the International Commission on Non-Ionizing Radiation Protection (ICNIRP) recommend spatially averaging the squares of the field strength over the height of the body. This study attempts to assess the validity and accuracy of spatial averaging when used for half-wave dipoles at frequencies between 65 MHz and 2 GHz and distances of lambda/2, lambda/4 and lambda/8 from the body. The differences between mean electric field values calculated using ten field measurements and the true averaged value were approximately 15% in the 600 MHz to 2 GHz range. The results presented suggest that the use of modern survey equipment, which takes hundreds rather than tens of measurements, is advisable to arrive at a sufficiently accurate mean field value. Whole-body averaged and peak localized SAR values, normalized to calculated spatially averaged fields, were calculated for the NORMAN voxel phantom. It was found that the reference levels were conservative for all whole-body SAR values, but not for localized SAR, particularly in the 1-2 GHz region when the dipole was positioned very close to the body. However, if the maximum field is used for normalization of calculated SAR as opposed to the lower spatially averaged value, the reference levels provide a conservative estimate of the localized SAR basic restriction for all frequencies studied.
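
    Concretely, "spatially averaging the squares of the field strength" over N measurement points along the body height amounts to a root-mean-square value (a standard reading of the guidance; the ten-point comparison above corresponds to N = 10):

    $$ E_{\mathrm{avg}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} E_i^{2}}. $$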

  13. Assimilation of concentration measurements for retrieving multiple point releases in atmosphere: A least-squares approach to inverse modelling

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Rani, Raj

    2015-10-01

    The study addresses the identification of multiple point sources, emitting the same tracer, from their limited set of merged concentration measurements. The identification, here, refers to the estimation of locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from an initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using the real data from multiple (two, three and four) releases conducted during Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three to the true release rates. The average deviations in retrieval of source locations are observed relatively large in two release trials in comparison to three and four release trials.

  14. Biologically Inspired Network (BiONet) Authentication using Logical and Pathological RF DNA Credential Pairs

    DTIC Science & Technology

    2017-09-14

    e.g. 000111) may be emitted along an ultra-high frequency (UHF) communications path as a possible waveform state generated by some circuit... [acronym-list fragment: ...Positive Rate; TN, True Negative; TNR, True Negative Rate; TVR, True Verification Rate; Tx, Transmitter; UHF, Ultra High Frequency] ...otherwise healthy RF networks. More specifically, a representative miniaturized ultra-high frequency (UHF) CubeSat uplink access boundary, protected

  15. Joint correction of respiratory motion artifact and partial volume effect in lung/thoracic PET/CT imaging.

    PubMed

    Chang, Guoping; Chang, Tingting; Pan, Tinsu; Clark, John W; Mawlawi, Osama R

    2010-12-01

    Respiratory motion artifacts and partial volume effects (PVEs) are two degrading factors that affect the accuracy of image quantification in PET/CT imaging. In this article, the authors propose a joint motion and PVE correction approach (JMPC) to improve PET quantification by simultaneously correcting for respiratory motion artifacts and PVE in patients with lung/thoracic cancer. The objective of this article is to describe this approach and evaluate its performance using phantom and patient studies. The proposed joint correction approach incorporates a model of motion blurring, PVE, and object size/shape. A motion blurring kernel (MBK) is then estimated from the deconvolution of the joint model, while the activity concentration (AC) of the tumor is estimated from the normalization of the derived MBK. To evaluate the performance of this approach, two phantom studies and eight patient studies were performed. In the phantom studies, two motion waveforms-a linear sinusoidal and a circular motion-were used to control the motion of a sphere, while in the patient studies, all participants were instructed to breathe regularly. For the phantom studies, the resultant MBK was compared to the true MBK by measuring a correlation coefficient between the two kernels. The measured sphere AC derived from the proposed method was compared to the true AC as well as the ACs in images exhibiting PVE only and images exhibiting both PVE and motion blurring. For the patient studies, the resultant MBK was compared to the motion extent derived from a 4D-CT study, while the measured tumor AC was compared to the AC in images exhibiting both PVE and motion blurring. For the phantom studies, the estimated MBK approximated the true MBK with an average correlation coefficient of 0.91. The tumor ACs following the joint correction technique were similar to the true AC with an average difference of 2%. Furthermore, the tumor ACs on the PVE only images and images with both motion blur and PVE effects were, on average, 75% and 47.5% (10%) of the true AC, respectively, for the linear (circular) motion phantom study. For the patient studies, the maximum and mean AC/SUV on the PET images following the joint correction are, on average, increased by 125.9% and 371.6%, respectively, when compared to the PET images with both PVE and motion. The motion extents measured from the derived MBK and 4D-CT exhibited an average difference of 1.9 mm. The proposed joint correction approach can improve the accuracy of PET quantification by simultaneously compensating for the respiratory motion artifacts and PVE in lung/thoracic PET/CT imaging.

  16. Pilot Study: Detection of Gastric Cancer From Exhaled Air Analyzed With an Electronic Nose in Chinese Patients.

    PubMed

    Schuermans, Valérie N E; Li, Ziyu; Jongen, Audrey C H M; Wu, Zhouqiao; Shi, Jinyao; Ji, Jiafu; Bouvy, Nicole D

    2018-06-01

    The aim of this pilot study is to investigate the ability of an electronic nose (e-nose) to distinguish malignant gastric histology from healthy controls in exhaled breath. In a period of 3 weeks, all preoperative gastric carcinoma (GC) patients (n = 16) in the Beijing Oncology Hospital were asked to participate in the study. The control group (n = 28) consisted of family members screened by endoscopy and healthy volunteers. The e-nose consists of 3 sensors with which volatile organic compounds in the exhaled air react. Real-time analysis takes place within the e-nose, and binary data are exported and interpreted by an artificial neuronal network. This is a self-learning computational system. The inclusion rate of the study was 100%. Baseline characteristics differed significantly only for age: the average age of the patient group was 57 years and that of the healthy control group 37 years (P = .000). Weight loss was the only significantly different symptom (P = .040). A total of 16 patients and 28 controls were included; 13 proved to be true positive and 20 proved to be true negative. The receiver operating characteristic curve showed a sensitivity of 81% and a specificity of 71%, with an accuracy of 75%. These results give a positive predictive value of 62% and a negative predictive value of 87%. This pilot study shows that the e-nose has the capability of diagnosing GC based on exhaled air, with promising predictive values for a screening purpose.
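
    The quoted figures follow directly from the reported counts (16 patients, 28 controls, 13 true positives, 20 true negatives); the short calculation below re-derives them.

```python
# Re-deriving the diagnostic metrics quoted above from the reported counts.
tp, tn = 13, 20
patients, controls = 16, 28
fn = patients - tp                      # 3 patients missed
fp = controls - tn                      # 8 controls flagged

sensitivity = tp / (tp + fn)                      # 13/16 ~ 0.81
specificity = tn / (tn + fp)                      # 20/28 ~ 0.71
accuracy = (tp + tn) / (patients + controls)      # 33/44 = 0.75
ppv = tp / (tp + fp)                              # 13/21 ~ 0.62
npv = tn / (tn + fn)                              # 20/23 ~ 0.87
print(sensitivity, specificity, accuracy, ppv, npv)
```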

  17. MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.

    PubMed

    Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne

    2014-01-01

    When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. A physician leader who is interested in catalyzing performance improvement through focused, effective physician performance improvement is well advised to consider the value of incorporating reliability adjustments into their performance measurement system.
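
    As a generic illustration of reliability adjustment (a common shrinkage form, not necessarily the exact Bayesian method used in the case study), a physician's adjusted rate is pulled toward the provider-group average in proportion to the measurement's reliability:

    $$ \hat{\theta}_{\mathrm{adj}} = R\,\bar{y}_{\mathrm{physician}} + (1 - R)\,\bar{y}_{\mathrm{group}}, \qquad R = \frac{\sigma^{2}_{\mathrm{between}}}{\sigma^{2}_{\mathrm{between}} + \sigma^{2}_{\mathrm{within}}/n}, $$

    where $n$ is the physician's case volume. Small samples give low reliability $R$, so the estimate is shrunk heavily toward the group average and fewer chance departures are flagged as true signals.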

  18. Object motion computation for the initiation of smooth pursuit eye movements in humans.

    PubMed

    Wallace, Julian M; Stone, Leland S; Masson, Guillaume S

    2005-04-01

    Pursuing an object with smooth eye movements requires an accurate estimate of its two-dimensional (2D) trajectory. This 2D motion computation requires that different local motion measurements are extracted and combined to recover the global object-motion direction and speed. Several combination rules have been proposed, such as vector averaging (VA), intersection of constraints (IOC), or 2D feature tracking (2DFT). To examine this computation, we investigated the time course of smooth pursuit eye movements driven by simple objects of different shapes. For a type II diamond (where the direction of true object motion is dramatically different from the vector average of the one-dimensional edge motions, i.e., VA ≠ IOC = 2DFT), ocular tracking is initiated in the vector average direction. Over a period of less than 300 ms, the eye-tracking direction converges on the true object motion. The reduction of the tracking error starts before the closing of the oculomotor loop. For type I diamonds (where the direction of true object motion is identical to the vector average direction, i.e., VA = IOC = 2DFT), there is no such bias. We quantified this effect by calculating the direction error between responses to types I and II and measuring its maximum value and time constant. At low contrast and high speeds, the initial bias in tracking direction is larger and takes longer to converge onto the actual object-motion direction. This effect is attenuated with the introduction of more 2D information to the extent that it was totally obliterated with a texture-filled type II diamond. These results suggest a flexible 2D computation for motion integration, which combines all available one-dimensional (edge) and 2D (feature) motion information to refine the estimate of object-motion direction over time.
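
    A worked sketch of the two combination rules makes the type II bias concrete: each edge yields only the velocity component along its normal, the vector average of those components points away from the true direction when both normals fall on the same side, while the intersection of constraints recovers the true motion exactly. The edge-normal angles below are illustrative assumptions, not the stimuli used in the study.

```python
# Worked sketch of the two combination rules discussed above, for a "type II"
# configuration in which both edge normals lie on the same side of the true
# motion. The edge angles (20° and 70°) are illustrative assumptions.
import numpy as np

v_true = np.array([1.0, 0.0])                      # true object velocity
angles = np.deg2rad([20.0, 70.0])                  # edge-normal directions
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
speeds = normals @ v_true                          # 1D normal speeds seen locally

# Vector average (VA): mean of the 1D component vectors.
va = np.mean(speeds[:, None] * normals, axis=0)

# Intersection of constraints (IOC): solve v·n_i = s_i exactly.
ioc = np.linalg.solve(normals, speeds)

deg = lambda v: np.degrees(np.arctan2(v[1], v[0]))
print(f"true direction {deg(v_true):5.1f}°")       # 0.0°
print(f"VA  direction  {deg(va):5.1f}°")           # ≈ 32.7° (biased, like early pursuit)
print(f"IOC direction  {deg(ioc):5.1f}°")          # 0.0° (matches true object motion)
```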

  19. A qubit coupled with confined phonons: The interplay between true and fake decoherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pouthier, Vincent

    2013-08-07

    The decoherence of a qubit coupled with the phonons of a finite-size lattice is investigated. The confined phonons no longer behave as a reservoir. They remain sensitive to the qubit so that the origin of the decoherence is twofold. First, a qubit-phonon entanglement yields an incomplete true decoherence. Second, the qubit renormalizes the phonon frequency resulting in fake decoherence when a thermal average is performed. To account for the initial thermalization of the lattice, the quantum Langevin theory is applied so that the phonons are viewed as an open system coupled with a thermal bath of harmonic oscillators. Consequently, it is shown that the finite lifetime of the phonons does not modify fake decoherence but strongly affects true decoherence. Depending on the values of the model parameters, the interplay between fake and true decoherence yields a very rich dynamics with various regimes.

  20. Effectiveness of Computer-Aided Detection in Community Mammography Practice

    PubMed Central

    Abraham, Linn; Taplin, Stephen H.; Geller, Berta M.; Carney, Patricia A.; D’Orsi, Carl; Elmore, Joann G.; Barlow, William E.

    2011-01-01

    Background Computer-aided detection (CAD) is applied during screening mammography for millions of US women annually, although it is uncertain whether CAD improves breast cancer detection when used by community radiologists. Methods We investigated the association between CAD use during film-screen screening mammography and specificity, sensitivity, positive predictive value, cancer detection rates, and prognostic characteristics of breast cancers (stage, size, and node involvement). Records from 684 956 women who received more than 1.6 million film-screen mammograms at Breast Cancer Surveillance Consortium facilities in seven states in the United States from 1998 to 2006 were analyzed. We used random-effects logistic regression to estimate associations between CAD and specificity (true-negative examinations among women without breast cancer), sensitivity (true-positive examinations among women with breast cancer diagnosed within 1 year of mammography), and positive predictive value (breast cancer diagnosed after positive mammograms) while adjusting for mammography registry, patient age, time since previous mammography, breast density, use of hormone replacement therapy, and year of examination (1998–2002 vs 2003–2006). All statistical tests were two-sided. Results Of 90 total facilities, 25 (27.8%) adopted CAD and used it for an average of 27.5 study months. In adjusted analyses, CAD use was associated with statistically significantly lower specificity (OR = 0.87, 95% confidence interval [CI] = 0.85 to 0.89, P < .001) and positive predictive value (OR = 0.89, 95% CI = 0.80 to 0.99, P = .03). A non-statistically significant increase in overall sensitivity with CAD (OR = 1.06, 95% CI = 0.84 to 1.33, P = .62) was attributed to increased sensitivity for ductal carcinoma in situ (OR = 1.55, 95% CI = 0.83 to 2.91; P = .17), although sensitivity for invasive cancer was similar with or without CAD (OR = 0.96, 95% CI = 0.75 to 1.24; P = .77). CAD was not associated with higher breast cancer detection rates or more favorable stage, size, or lymph node status of invasive breast cancer. Conclusion CAD use during film-screen screening mammography in the United States is associated with decreased specificity but not with improvement in the detection rate or prognostic characteristics of invasive breast cancer. PMID:21795668

  1. Comparisons of stuttering frequency during and after speech initiation in unaltered feedback, altered auditory feedback and choral speech conditions.

    PubMed

    Saltuklaroglu, Tim; Kalinowski, Joseph; Robbins, Mary; Crawcour, Stephen; Bowers, Andrew

    2009-01-01

    Stuttering is prone to strike during speech initiation more so than at any other point in an utterance. The use of altered auditory feedback (AAF) has been found to produce robust decreases in stuttering frequency by creating an electronic rendition of choral speech (i.e., speaking in unison). However, AAF requires users to self-initiate speech before it can go into effect and, therefore, it might not be as helpful as true choral speech during speech initiation. The aim was to examine how AAF and choral speech differentially enhance fluency during speech initiation and in subsequent portions of utterances. Ten participants who stuttered read passages without altered feedback (NAF), under four AAF conditions and under a true choral speech condition. Each condition was blocked into ten 10 s trials separated by 5 s intervals so each trial required 'cold' speech initiation. In the first analysis, comparisons of stuttering frequencies were made across conditions. A second, finer-grained analysis involved examining stuttering frequencies on the initial syllable, the subsequent four syllables produced and the five syllables produced immediately after the midpoint of each trial. On average, AAF reduced stuttering by approximately 68% relative to the NAF condition. Stuttering frequencies on the initial syllables were considerably higher than on the other syllables analysed (0.45 and 0.34 for NAF and AAF conditions, respectively). After the first syllable was produced, stuttering frequencies dropped precipitously and remained stable. However, this drop in stuttering frequency was significantly greater (approximately 84%) in the AAF conditions than in the NAF condition (approximately 66%) with frequencies on the last nine syllables analysed averaging 0.15 and 0.05 for NAF and AAF conditions, respectively. In the true choral speech condition, stuttering was virtually (approximately 98%) eliminated across all utterances and all syllable positions. Altered auditory feedback effectively inhibits stuttering immediately after speech has been initiated. However, unlike a true choral signal, which is exogenously initiated and offers the most complete fluency enhancement, AAF requires speech to be initiated by the user and 'fed back' before it can directly inhibit stuttering. It is suggested that AAF can be a viable clinical option for those who stutter and should often be used in combination with therapeutic techniques, particularly those that aid speech initiation. The substantially higher rate of stuttering occurring on initiation supports a hypothesis that overt stuttering events help 'release' and 'inhibit' central stuttering blocks. This perspective is examined in the context of internal models and mirror neurons.

  2. A perfect correlate does not a surrogate make

    PubMed Central

    Baker, Stuart G; Kramer, Barnett S

    2003-01-01

    Background There is a common belief among some medical researchers that if a potential surrogate endpoint is highly correlated with a true endpoint, then a positive (or negative) difference in potential surrogate endpoints between randomization groups would imply a positive (or negative) difference in unobserved true endpoints between randomization groups. We investigate this belief when the potential surrogate and unobserved true endpoints are perfectly correlated within each randomization group. Methods We use a graphical approach. The vertical axis is the unobserved true endpoint and the horizontal axis is the potential surrogate endpoint. Perfect correlation within each randomization group implies that, for each randomization group, potential surrogate and true endpoints are related by a straight line. In this scenario the investigator does not know the slopes or intercepts. We consider a plausible example where the slope of the line is higher for the experimental group than for the control group. Results In our example with unknown lines, a decrease in mean potential surrogate endpoints from control to experimental groups corresponds to an increase in mean true endpoint from control to experimental groups. Thus the potential surrogate endpoints give the wrong inference. Similar results hold for binary potential surrogate and true outcomes (although the notion of correlation does not apply). The potential surrogate endpoint would give the correct inference if either (i) the unknown lines for the two groups coincided, which means that the distribution of the true endpoint conditional on the potential surrogate endpoint does not depend on treatment group (the Prentice Criterion), or (ii) one could accurately predict the lines based on data from prior studies. Conclusion Perfect correlation between potential surrogate and unobserved true outcomes within randomized groups does not guarantee correct inference based on a potential surrogate endpoint. Even in early phase trials, investigators should not base conclusions on potential surrogate endpoints in which the only validation is high correlation with the true endpoint within a group. PMID:12962545

  3. Sustainable diversity. This year's Top 25 minority Executives in Healthcare highlights leadership diversity, but some question the true level of inclusion.

    PubMed

    Kirchheimer, Barbara

    2008-04-07

    While more minority candidates are moving into top healthcare positions, many challenges remain. One is to address true inclusion, an expert says. Andrea Price, selected as one of Modern Healthcare's Top 25 Minority Executives in Healthcare, says minorities can have a hard time rising to top positions. "The industry needs to understand that everyone doesn't start off on the same playing field," she says.

  4. Evaluation of a risk-based environmental hot spot delineation algorithm.

    PubMed

    Sinha, Parikhit; Lambert, Michael B; Schew, William A

    2007-10-22

    Following remedial investigations of hazardous waste sites, remedial strategies may be developed that target the removal of "hot spots," localized areas of elevated contamination. For a given exposure area, a hot spot may be defined as a sub-area that causes risks for the whole exposure area to be unacceptable. The converse of this statement may also apply: when a hot spot is removed from within an exposure area, risks for the exposure area may drop below unacceptable thresholds. The latter is the motivation for a risk-based approach to hot spot delineation, which was evaluated using Monte Carlo simulation. Random samples taken from a virtual site ("true site") were used to create an interpolated site. The latter was gridded and concentrations from the center of each grid box were used to calculate 95% upper confidence limits on the mean site contaminant concentration and corresponding hazard quotients for a potential receptor. Grid cells with the highest concentrations were removed and hazard quotients were recalculated until the site hazard quotient dropped below the threshold of 1. The grid cells removed in this way define the spatial extent of the hot spot. For each of the 100,000 Monte Carlo iterations, the delineated hot spot was compared to the hot spot in the "true site." On average, the algorithm was able to delineate hot spots that were collocated with and equal to or greater in size than the "true hot spot." When delineated hot spots were mapped onto the "true site," setting contaminant concentrations in the mapped area to zero, the hazard quotients for these "remediated true sites" were on average within 5% of the acceptable threshold of 1.
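
    The delineation rule described above lends itself to a compact sketch: strip the highest-concentration grid cells one at a time and recompute the exposure-area hazard quotient until it falls below 1. The Student-t 95% UCL and the reference concentration used to form the hazard quotient below are simplifying assumptions, not the study's exposure model.

```python
# Minimal sketch of the risk-based hot spot delineation rule described above:
# iteratively remove the highest-concentration grid cells until the hazard
# quotient for the exposure area, based on the 95% UCL of the mean, drops
# below 1. The t-based UCL and the reference concentration `c_ref` that turns
# a mean concentration into a hazard quotient are assumptions here.
import numpy as np
from scipy import stats

def ucl95(x):
    """One-sided 95% upper confidence limit on the mean (Student-t)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return x.mean() + stats.t.ppf(0.95, n - 1) * x.std(ddof=1) / np.sqrt(n)

def delineate_hot_spot(grid_conc, c_ref):
    """Return indices of grid cells whose removal brings HQ below 1."""
    remaining = list(np.argsort(grid_conc)[::-1])   # highest concentration first
    removed = []
    while remaining and ucl95(grid_conc[remaining]) / c_ref >= 1.0:
        removed.append(remaining.pop(0))            # strip the next-highest cell
    return removed

rng = np.random.default_rng(0)
conc = rng.lognormal(mean=1.0, sigma=0.5, size=100)
conc[:5] += 30.0                                    # a localized area of elevated contamination
print("cells in delineated hot spot:", delineate_hot_spot(conc, c_ref=5.0))
```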

  5. Exposure measurement error in PM2.5 health effects studies: A pooled analysis of eight personal exposure validation studies

    PubMed Central

    2014-01-01

    Background Exposure measurement error is a concern in long-term PM2.5 health studies using ambient concentrations as exposures. We assessed error magnitude by estimating calibration coefficients as the association between personal PM2.5 exposures from validation studies and typically available surrogate exposures. Methods Daily personal and ambient PM2.5 measurements, and sulfate measurements when available, were compiled from nine cities over 2 to 12 days. True exposure was defined as personal exposure to PM2.5 of ambient origin. Since PM2.5 of ambient origin could only be determined for five cities, personal exposure to total PM2.5 was also considered. Surrogate exposures were estimated as ambient PM2.5 at the nearest monitor or predicted outside subjects’ homes. We estimated calibration coefficients by regressing true on surrogate exposures in random effects models. Results When monthly-averaged personal PM2.5 of ambient origin was used as the true exposure, calibration coefficients equaled 0.31 (95% CI: 0.14, 0.47) for nearest monitor and 0.54 (95% CI: 0.42, 0.65) for outdoor home predictions. Between-city heterogeneity was not found for outdoor home PM2.5 for either true exposure. Heterogeneity was significant for nearest monitor PM2.5, for both true exposures, but not after adjusting for city-average motor vehicle number for total personal PM2.5. Conclusions Calibration coefficients were <1, consistent with previously reported chronic health risks using nearest monitor exposures being underestimated when ambient concentrations are the exposure of interest. Calibration coefficients were closer to 1 for outdoor home predictions, likely reflecting less spatial error. Further research is needed to determine how our findings can be incorporated in future health studies. PMID:24410940
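
    In its simplest form, a calibration coefficient of this kind is the slope from regressing the true exposure on the surrogate. The sketch below shows that single-city, ordinary-least-squares version on simulated data, whereas the study pooled cities with random-effects models; all numbers are illustrative.

```python
# Minimal sketch of estimating a calibration coefficient as the slope from
# regressing "true" exposure (personal PM2.5 of ambient origin) on a surrogate
# (nearest-monitor ambient PM2.5). The single-city OLS setup and the simulated
# data are assumptions; the study used random-effects models across cities.
import numpy as np

rng = np.random.default_rng(1)
ambient = rng.normal(12.0, 4.0, size=200)                  # surrogate exposure (µg/m³)
personal = 0.5 * ambient + rng.normal(0.0, 2.0, size=200)  # attenuated "true" exposure

slope, intercept = np.polyfit(ambient, personal, deg=1)
print(f"calibration coefficient ≈ {slope:.2f}")            # < 1 implies risk attenuation
```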

  6. Evaluation of lens absorbed dose with Cone Beam IGRT procedures.

    PubMed

    Palomo, R; Pujades, M C; Gimeno-Olmos, J; Carmona, V; Lliso, F; Candela-Juan, C; Vijande, J; Ballester, F; Perez-Calatayud, J

    2015-12-01

    The purpose of this work is to evaluate the absorbed dose to the eye lenses due to the cone beam computed tomography (CBCT) system used to accurately position the patient during head-and-neck image-guided procedures. The on-board imaging (OBI) systems (v.1.5) of Clinac iX and TrueBeam (Varian) accelerators were used to evaluate the imparted dose to the eye lenses and some additional points of the head. All CBCT scans were acquired with the Standard-Dose Head protocol from Varian. Doses were measured using thermoluminescence dosimeters (TLDs) placed in an anthropomorphic phantom. TLDs were calibrated at the beam quality used to reduce their energy dependence. The average doses to the lens due to the OBI systems of the Clinac iX and the TrueBeam were 0.71 ± 0.07 mGy/CBCT and 0.70 ± 0.08 mGy/CBCT, respectively. The extra absorbed dose received by the eye lenses due to one CBCT acquisition with the studied protocol is far below the 500 mGy threshold established by ICRP for cataract formation (ICRP 2011 Statement on Tissue Reactions). However, the incremental effect of several CBCT acquisitions during the whole treatment should be taken into account.

  7. Temporal response improvement for computed tomography fluoroscopy

    NASA Astrophysics Data System (ADS)

    Hsieh, Jiang

    1997-10-01

    Computed tomography fluoroscopy (CTF) has attracted significant attention recently. This is mainly due to the growing clinical application of CTF in interventional procedures, such as guided biopsy. Although many studies have been conducted on its clinical efficacy, little attention has been paid to the temporal response and the inherent limitations of the CTF system. For example, during a biopsy operation, when the needle is inserted at a relatively high speed, the true needle position will not be correctly depicted in the CTF image due to the time delay. This could result in an overshoot or misplacement of the biopsy needle by the operator. In this paper, we first perform a detailed analysis of the temporal response of the CTF by deriving a set of equations to describe the average location of a moving object observed by the CTF system. The accuracy of the equations is verified by computer simulations and experiments. We show that the CT reconstruction process acts as a low-pass filter on the motion function. As a result, there is an inherent time delay in the CTF depiction of the true biopsy needle motion and location. Based on this study, we propose a generalized underscan weighting scheme which significantly improves the performance of CTF in terms of time lag and delay.

  8. Frequency of false positive rapid HIV serologic tests in African men and women receiving PrEP for HIV prevention: implications for programmatic roll-out of biomedical interventions.

    PubMed

    Ndase, Patrick; Celum, Connie; Kidoguchi, Lara; Ronald, Allan; Fife, Kenneth H; Bukusi, Elizabeth; Donnell, Deborah; Baeten, Jared M

    2015-01-01

    Rapid HIV assays are the mainstay of HIV testing globally. Delivery of effective biomedical HIV prevention strategies such as antiretroviral pre-exposure prophylaxis (PrEP) requires periodic HIV testing. Because rapid tests have high (>95%) but imperfect specificity, they are expected to generate some false positive results. We assessed the frequency of true and false positive rapid results in the Partners PrEP Study, a randomized, placebo-controlled trial of PrEP. HIV testing was performed monthly using 2 rapid tests done in parallel with HIV enzyme immunoassay (EIA) confirmation following all positive rapid tests. A total of 99,009 monthly HIV tests were performed; 98,743 (99.7%) were dual-rapid HIV negative. Of the 266 visits with ≥1 positive rapid result, 99 (37.2%) had confirmatory positive EIA results (true positives), 155 (58.3%) had negative EIA results (false positives), and 12 (4.5%) had discordant EIA results. In the active PrEP arms, over two-thirds of visits with positive rapid test results were false positive results (69.2%, 110 of 159), although false positive results occurred at <1% (110/65,945) of total visits. When HIV prevalence or incidence is low due to effective HIV prevention interventions, rapid HIV tests result in a high number of false relative to true positive results, although the absolute number of false results will be low. Program roll-out for effective interventions should plan for quality assurance of HIV testing, mechanisms for confirmatory HIV testing, and counseling strategies for persons with positive rapid test results.
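
    The arithmetic behind this conclusion is Bayes' rule: as prevalence falls, the assay's fixed false-positive rate is applied to an ever larger pool of uninfected visits, so false positives come to dominate the positive results even though they remain rare in absolute terms. The sensitivity and specificity values in the sketch below are illustrative placeholders, not the rapid tests' published characteristics.

```python
# Why low prevalence inflates the share of false positives among positive
# rapid tests (Bayes' rule). The sensitivity/specificity values below are
# illustrative, not the assays' published characteristics.

def ppv(prevalence, sensitivity=0.99, specificity=0.99):
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    return tp / (tp + fp)

for prev in (0.10, 0.01, 0.001):
    print(f"prevalence {prev:>5.1%}: PPV = {ppv(prev):.1%}")
# At 0.1% prevalence, even a 99%-specific test yields roughly 9% PPV: most
# positives are false, while false positives stay rare in absolute terms.
```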

  9. True detection limits in an experimental linearly heteroscedastic system. Part 1

    NASA Astrophysics Data System (ADS)

    Voigtman, Edward; Abraham, Kevin T.

    2011-11-01

    Using a lab-constructed laser-excited filter fluorimeter deliberately designed to exhibit linearly heteroscedastic, additive Gaussian noise, it has been shown that accurate estimates may be made of the true theoretical Currie decision levels (YC and XC) and true Currie detection limits (YD and XD) for the detection of rhodamine 6G tetrafluoroborate in ethanol. The obtained experimental values, for 5% probability of false positives and 5% probability of false negatives, were YC = 56.1 mV, YD = 125. mV, XC = 0.132 μg/mL and XD = 0.294 μg/mL. For 5% probability of false positives and 1% probability of false negatives, the obtained detection limits were YD = 158. mV and XD = 0.372 μg/mL. These decision levels and corresponding detection limits were shown to pass the ultimate test: they resulted in observed probabilities of false positives and false negatives that were statistically equivalent to the a priori specified values.
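
    For a linearly heteroscedastic system, the Currie detection limit exceeds twice the decision level because the noise standard deviation grows with the response. A minimal sketch of that relationship, assuming a noise model sigma(Y) = sigma0 + eta*Y with illustrative parameter values (not the instrument's fitted values), is given below.

```python
# Sketch of Currie decision level and detection limit under a linearly
# heteroscedastic noise model, sigma(Y) = sigma0 + eta*Y. The blank noise
# sigma0 and slope eta below are illustrative assumptions, not the fitted
# instrument parameters from the paper.
from scipy.stats import norm

def currie_limits(sigma0, eta, alpha=0.05, beta=0.05):
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(1 - beta)
    y_c = z_a * sigma0                      # decision level: alpha false positives on blanks
    # Detection limit: lowest true response giving only `beta` false negatives,
    # i.e. Y_D - z_b*sigma(Y_D) = Y_C  =>  Y_D = (Y_C + z_b*sigma0) / (1 - z_b*eta)
    y_d = (y_c + z_b * sigma0) / (1.0 - z_b * eta)
    return y_c, y_d

print(currie_limits(sigma0=34.0, eta=0.06))   # heteroscedastic: Y_D > 2*Y_C
print(currie_limits(sigma0=34.0, eta=0.0))    # homoscedastic limit: Y_D = 2*Y_C
```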

  10. 25 CFR 30.116 - If a school fails to achieve its annual measurable objectives, what other methods may it use to...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... participated in the assessment. (b) Method B—Uniform Averaging Procedure. A school may use uniform averaging... (Title 25, Indians; revised as of 2012-04-01) ... Aggregating Data for Determining Adequate Yearly Progress, § 30.116: If a school fails to achieve its annual measurable objectives, what other...

  11. Micro-Droplet Detection Method for Measuring the Concentration of Alkaline Phosphatase-Labeled Nanoparticles in Fluorescence Microscopy

    PubMed Central

    Li, Rufeng; Wang, Yibei; Xu, Hong; Fei, Baowei; Qin, Binjie

    2017-01-01

    This paper developed and evaluated a quantitative image analysis method to measure the concentration of the nanoparticles on which alkaline phosphatase (AP) was immobilized. These AP-labeled nanoparticles are widely used as signal markers for tagging biomolecules at nanometer and sub-nanometer scales. The AP-labeled nanoparticle concentration measurement can then be directly used to quantitatively analyze the biomolecular concentration. Micro-droplets are mono-dispersed micro-reactors that can be used to encapsulate and detect AP-labeled nanoparticles. Micro-droplets include both empty micro-droplets and fluorescent micro-droplets, while fluorescent micro-droplets are generated from the fluorescence reaction between the APs adhering to a single nanoparticle and corresponding fluorogenic substrates within droplets. By detecting micro-droplets and calculating the proportion of fluorescent micro-droplets to the overall micro-droplets, we can calculate the AP-labeled nanoparticle concentration. The proposed micro-droplet detection method includes the following steps: (1) Gaussian filtering to remove the noise of overall fluorescent targets, (2) contrast-limited adaptive histogram equalization to enhance the contrast of weakly luminescent micro-droplets, (3) a red maximizing inter-class variance thresholding method (OTSU) to segment the enhanced image and obtain the binary map of the overall micro-droplets, (4) a circular Hough transform (CHT) method to detect overall micro-droplets and (5) an intensity-mean-based thresholding segmentation method to extract the fluorescent micro-droplets. The experimental results of fluorescent micro-droplet images show that the average accuracy of our micro-droplet detection method is 0.9586; the average true positive rate is 0.9502; and the average false positive rate is 0.0073. The detection method can be successfully applied to measure AP-labeled nanoparticle concentration in fluorescence microscopy. PMID:29160812
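
    The five steps map naturally onto standard image-processing primitives. The sketch below strings them together with OpenCV as one possible reading of the pipeline; every numeric parameter (blur kernel, CLAHE clip limit, Hough radii, the fluorescence cut-off) and the function name are placeholders, not values or code from the paper.

```python
# Compressed OpenCV sketch of the five-step droplet pipeline listed above.
# All numeric parameters are placeholders, not the paper's tuned values.
import cv2
import numpy as np

def droplet_fluorescent_fraction(gray, intensity_cutoff=80):
    # (1) Gaussian filtering to suppress noise.
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)
    # (2) Contrast-limited adaptive histogram equalization (CLAHE) to lift
    #     weakly luminescent droplets.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(smoothed)
    # (3) Otsu thresholding (maximizing inter-class variance) for a binary map.
    _, binary = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # (4) Circular Hough transform to detect all micro-droplets.
    circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1, minDist=15,
                               param1=50, param2=20, minRadius=5, maxRadius=40)
    if circles is None:
        return 0.0
    circles = np.round(circles[0]).astype(int)
    # (5) Intensity-mean thresholding: droplets whose mean brightness exceeds
    #     the cut-off are counted as fluorescent (nanoparticle-containing).
    fluorescent = 0
    for x, y, r in circles:
        mask = np.zeros_like(gray, dtype=np.uint8)
        cv2.circle(mask, (x, y), r, 255, thickness=-1)
        if cv2.mean(gray, mask=mask)[0] > intensity_cutoff:
            fluorescent += 1
    # Proportion of fluorescent droplets -> proxy for nanoparticle concentration.
    return fluorescent / len(circles)
```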

  12. Micro-Droplet Detection Method for Measuring the Concentration of Alkaline Phosphatase-Labeled Nanoparticles in Fluorescence Microscopy.

    PubMed

    Li, Rufeng; Wang, Yibei; Xu, Hong; Fei, Baowei; Qin, Binjie

    2017-11-21

    This paper developed and evaluated a quantitative image analysis method to measure the concentration of the nanoparticles on which alkaline phosphatase (AP) was immobilized. These AP-labeled nanoparticles are widely used as signal markers for tagging biomolecules at nanometer and sub-nanometer scales. The AP-labeled nanoparticle concentration measurement can then be directly used to quantitatively analyze the biomolecular concentration. Micro-droplets are mono-dispersed micro-reactors that can be used to encapsulate and detect AP-labeled nanoparticles. Micro-droplets include both empty micro-droplets and fluorescent micro-droplets, while fluorescent micro-droplets are generated from the fluorescence reaction between the APs adhering to a single nanoparticle and corresponding fluorogenic substrates within droplets. By detecting micro-droplets and calculating the proportion of fluorescent micro-droplets to the overall micro-droplets, we can calculate the AP-labeled nanoparticle concentration. The proposed micro-droplet detection method includes the following steps: (1) Gaussian filtering to remove the noise of overall fluorescent targets, (2) contrast-limited adaptive histogram equalization to enhance the contrast of weakly luminescent micro-droplets, (3) a red maximizing inter-class variance thresholding method (OTSU) to segment the enhanced image and obtain the binary map of the overall micro-droplets, (4) a circular Hough transform (CHT) method to detect overall micro-droplets and (5) an intensity-mean-based thresholding segmentation method to extract the fluorescent micro-droplets. The experimental results of fluorescent micro-droplet images show that the average accuracy of our micro-droplet detection method is 0.9586; the average true positive rate is 0.9502; and the average false positive rate is 0.0073. The detection method can be successfully applied to measure AP-labeled nanoparticle concentration in fluorescence microscopy.

  13. Fast EEG spike detection via eigenvalue analysis and clustering of spatial amplitude distribution

    NASA Astrophysics Data System (ADS)

    Fukami, Tadanori; Shimada, Takamasa; Ishikawa, Bunnoshin

    2018-06-01

    Objective. In the current study, we tested a proposed method for fast spike detection in electroencephalography (EEG). Approach. We performed eigenvalue analysis in two-dimensional space spanned by gradients calculated from two neighboring samples to detect high-amplitude negative peaks. We extracted the spike candidates by imposing restrictions on parameters regarding spike shape and eigenvalues reflecting detection characteristics of individual medical doctors. We subsequently performed clustering, classifying detected peaks by considering the amplitude distribution at 19 scalp electrodes. Clusters with a small number of candidates were excluded. We then defined a score for eliminating spike candidates for which the pattern of detected electrodes differed from the overall pattern in a cluster. Spikes were detected by setting the score threshold. Main results. Based on visual inspection by a psychiatrist experienced in EEG, we evaluated the proposed method using two statistical measures of precision and recall with respect to detection performance. We found that precision and recall exhibited a trade-off relationship. The average recall value was 0.708 in eight subjects with the score threshold that maximized the F-measure, with 58.6 ± 36.2 spikes per subject. Under this condition, the average precision was 0.390, corresponding to a false positive rate 2.09 times higher than the true positive rate. Analysis of the required processing time revealed that, using a general-purpose computer, our method could be used to perform spike detection in 12.1% of the recording time. The process of narrowing down spike candidates based on shape occupied most of the processing time. Significance. Although the average recall value was comparable with that of other studies, the proposed method significantly shortened the processing time.

  14. A corrected formulation for marginal inference derived from two-part mixed models for longitudinal semi-continuous data.

    PubMed

    Tom, Brian Dm; Su, Li; Farewell, Vernon T

    2016-10-01

    For semi-continuous data which are a mixture of true zeros and continuously distributed positive values, the use of two-part mixed models provides a convenient modelling framework. However, deriving population-averaged (marginal) effects from such models is not always straightforward. Su et al. presented a model that provided convenient estimation of marginal effects for the logistic component of the two-part model but the specification of marginal effects for the continuous part of the model presented in that paper was based on an incorrect formulation. We present a corrected formulation and additionally explore the use of the two-part model for inferences on the overall marginal mean, which may be of more practical relevance in our application and more generally. © The Author(s) 2013.

  15. Can integrated 18F-FDG PET/MR replace sentinel lymph node resection in malignant melanoma?

    PubMed

    Schaarschmidt, Benedikt Michael; Grueneisen, Johannes; Stebner, Vanessa; Klode, Joachim; Stoffels, Ingo; Umutlu, Lale; Schadendorf, Dirk; Heusch, Philipp; Antoch, Gerald; Pöppel, Thorsten Dirk

    2018-06-06

    To compare the sensitivity and specificity of 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT), 18F-FDG PET/magnetic resonance (18F-FDG PET/MR) and 18F-FDG PET/MR including diffusion weighted imaging (DWI) in the detection of sentinel lymph node metastases in patients suffering from malignant melanoma. Fifty-two patients with malignant melanoma (female: n = 30, male: n = 22, mean age 50.5 ± 16.0 years, mean tumor thickness 2.28 ± 1.97 mm) who underwent 18F-FDG PET/CT and subsequent PET/MR & DWI for distant metastasis staging were included in this retrospective study. After hybrid imaging, lymphoscintigraphy including single photon emission computed tomography/CT (SPECT/CT) was performed to identify the sentinel lymph node prior to sentinel lymph node biopsy (SLNB). In a total of 87 sentinel lymph nodes in 64 lymph node basins visible on SPECT/CT, 17 lymph node metastases were detected by histopathology. In separate sessions PET/CT, PET/MR, and PET/MR & DWI were assessed for sentinel lymph node metastases by two independent readers. Discrepant results were resolved in a consensus reading. Sensitivities, specificities, positive predictive values and negative predictive values were calculated with histopathology following SPECT/CT guided SLNB as a reference standard. Compared with histopathology, lymph nodes were true positive in three cases, true negative in 65 cases, false positive in three cases and false negative in 14 cases in PET/CT. PET/MR was true positive in four cases, true negative in 63 cases, false positive in two cases and false negative in 13 cases. Hence, we observed a sensitivity, specificity, positive predictive value and negative predictive value of 17.7, 95.6, 50.0 and 82.3% for PET/CT and 23.5, 96.9, 66.7 and 82.3% for PET/MR. In DWI, 56 sentinel lymph node basins could be analyzed. Here, the additional analysis of DWI led to two additional false positive findings, while the number of true positive findings could not be increased. In conclusion, integrated 18F-FDG PET/MR does not reliably differentiate N-positive from N-negative melanoma patients. Additional DWI does not increase the sensitivity of 18F-FDG PET/MR. Hence, sentinel lymph node biopsy cannot be replaced by 18F-FDG PET/MR or 18F-FDG PET/CT.

  16. Lack of Utility of the Lysis-Centrifugation Blood Culture Method for Detection of Fungemia in Immunocompromised Cancer Patients

    PubMed Central

    Creger, Richard J.; Weeman, Kisa E.; Jacobs, Michael R.; Morrissey, Anne; Parker, Pamela; Fox, Robert M.; Lazarus, Hillard M.

    1998-01-01

    We retrospectively compared the utility of a fungal isolation device (Isolator) versus conventional techniques for recovering fungal organisms from blood cultures obtained from neutropenic cancer patients. Positive cultures were deemed true pathogens, possible pathogens, or contaminants according to laboratory and clinical criteria. Fifty-three patients had 66 positive blood cultures for fungi, nine on multiple occasions. In 20 episodes true pathogens were recovered, 6 from broth medium alone, 4 from the Isolator system alone, and 10 from both systems. False-negative cultures were noted in 4 of 20 (20%) cases in which broth medium was used and in 6 of 20 (30%) cases in which the Isolator system was used. Possible pathogens were detected in 4 of 66 blood culture-positive cases. Forty-two positive cultures were considered contaminants, 1 collected from standard medium and 41 of 42 (98%) which grew only in Isolators. Eleven of 18 patients with true fungal infections expired as a result of infection, while 4 of 33 patients with a contaminant expired, none from a fungal cause. We do not advocate the routine use of Isolator tubes in the evaluation of the febrile, neutropenic patient due to the high rates of false positives and of contamination. PMID:9431970

  17. The Psychological Benefits of Being Authentic on Facebook.

    PubMed

    Grieve, Rachel; Watkinson, Jarrah

    2016-07-01

    Having others acknowledge and validate one's true self is associated with better psychological health. Existing research indicates that an individual's true self may be more readily expressed on Facebook than in person. This study brought together these two premises by investigating for the first time the psychosocial outcomes associated with communicating one's true self on Facebook. Participants (n = 164) completed a personality assessment once as their true self and once as the self they present on Facebook (Facebook self), as well as measures of social connectedness, subjective well-being, depression, anxiety, and stress. Euclidean distances quantified the difference between one's true self and the Facebook self. Hypotheses received partial support. Better coherence between the true self and the Facebook self was associated with better social connectedness and less stress. Two models provided evidence of mediation effects. Findings highlight that authentic self-presentation on Facebook can be associated with positive psychological outcomes.

  18. The Addition of Enhanced Capabilities to NATO GMTIF STANAG 4607 to Support RADARSAT-2 GMTI Data

    DTIC Science & Technology

    2007-12-01

    However, the cost is a loss in the accuracy of the position specification and its dependence on the particular ellipsoid and/or geoid models used in... platform provides these parameters. Table B-3, Reference Coordinate Systems (coordinate system, value): Unidentified = 0; GEI: Geocentric Equatorial... Inertial, also known as True Equator and True Equinox of Date, True of Date (TOD), ECI, or GCI = 1; J2000: Geocentric Equatorial Inertial for epoch J2000.0

  19. Bacterial screening of apheresis platelets with a rapid test: a 113-month single center experience.

    PubMed

    Ruby, Kristen N; Thomasson, Reggie R; Szczepiorkowski, Zbigniew M; Dunbar, Nancy M

    2018-04-17

    The 2016 Food and Drug Administration draft guidance describes the use of a rapid test (RT) to enhance platelet transfusion safety and availability. This study reports a 113-month experience of screening of apheresis platelets (APs) by RT. From July 2008 to October 2015, all APs underwent an RT on Day 4. Day 6 and 7 units were transfused with transfusion medicine physician approval. Any units remaining on Day 8 had a second RT performed. From November 2015 to November 2017, APs underwent an RT on Day 5 with a repeat RT on Days 6 and 7. During both periods, positive RTs underwent confirmatory testing with culture when repeat testing was positive. A total of 9009 APs underwent an RT on Day 4 or 5. Of these, 45 (0.5%) were RT positive, with no true positives. A total of 754 underwent a second RT on Day 8, with no positives. Since November 2015, 1152 platelets have undergone a second RT on Day 6; 391 have undergone a third RT on Day 7. Of these, five (0.4%) were RT positive on Day 6, with no true positives. There were no septic transfusion reactions identified by passive surveillance at our institution during either study period. To date, we have not detected any true positives after performing 11,306 tests on 9009 APs. A total of 1906 underwent testing twice, and 391 underwent testing three times. We did not identify any conversions from negative to positive on repeat testing. © 2018 AABB.

  20. Mass Detection in Mammographic Images Using Wavelet Processing and Adaptive Threshold Technique.

    PubMed

    Vikhe, P S; Thool, V R

    2016-04-01

    Detection of masses in mammograms for early diagnosis of breast cancer is a significant task in the reduction of the mortality rate. However, in some cases, screening for masses is difficult for the radiologist, due to variation in contrast, fuzzy edges and noisy mammograms. Masses and micro-calcifications are the distinctive signs for diagnosis of breast cancer. This paper presents a method for mass enhancement in mammographic images using a piecewise linear operator in combination with wavelet processing. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images with 90.9% and 91% True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives Per Image (FP/I) from two different databases, namely the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The obtained results show that the proposed technique gives improved diagnosis in early breast cancer detection.

  1. A spectroscopic tool for identifying sources of origin for materials of military interest

    NASA Astrophysics Data System (ADS)

    Miziolek, Andrzej W.; De Lucia, Frank C.

    2014-05-01

    There is a need to identify the source of origin for many items of military interest, including ammunition and weapons that may be circulated and traded in illicit markets. Both fieldable systems (man-portable or handheld) and benchtop systems in field and home-base laboratories are desired for screening and attribution purposes. Laser Induced Breakdown Spectroscopy (LIBS) continues to show significant capability as a promising new tool for materials identification, matching, and provenance. With the use of broadband, high-resolution spectrometer systems, the LIBS devices can not only determine the elemental inventory of the sample, but they are also capable of elemental fingerprinting to signify sources of origin of various materials. We present the results of an initial study to differentiate and match spent cartridges from different manufacturers and countries. We have found that using Partial Least Squares Discriminant Analysis (PLS-DA) we are able to achieve on average 93.3% True Positives and 5.3% False Positives. These results add to the large body of publications that have demonstrated that LIBS is a particularly suitable tool for source of origin determinations.
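
    PLS-DA of the kind used here is commonly implemented as a PLS regression onto one-hot class labels, with each spectrum assigned to the class with the largest predicted score. The sketch below illustrates that recipe on simulated stand-in "spectra"; it is not the authors' model, data, or tuning.

```python
# Minimal PLS-DA sketch in the spirit of the source-matching analysis above:
# PLS regression onto one-hot class labels, with samples assigned to the
# class of the largest predicted score. The simulated "spectra" are purely
# illustrative stand-ins for LIBS broadband spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_channels, n_classes = 60, 300, 3
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_channels))
               for c in range(n_classes)])                 # fake spectra per "source"
y = np.repeat(np.arange(n_classes), n_per_class)
Y = np.eye(n_classes)[y]                                   # one-hot targets

X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(X, Y, y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
pred = pls.predict(X_te).argmax(axis=1)                    # class = largest score
print("classification accuracy:", (pred == y_te).mean())
```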

  2. True detection limits in an experimental linearly heteroscedastic system. Part 2

    NASA Astrophysics Data System (ADS)

    Voigtman, Edward; Abraham, Kevin T.

    2011-11-01

    Despite much different processing of the experimental fluorescence detection data presented in Part 1, essentially the same estimates were obtained for the true theoretical Currie decision levels (YC and XC) and true Currie detection limits (YD and XD). The obtained experimental values, for 5% probability of false positives and 5% probability of false negatives, were YC = 56.0 mV, YD = 125. mV, XC = 0.132 μg/mL and XD = 0.293 μg/mL. For 5% probability of false positives and 1% probability of false negatives, the obtained detection limits were YD = 158. mV and XD = 0.371 μg/mL. Furthermore, by using bootstrapping methodology on the experimental data for the standards and the analytical blank, it was possible to validate previously published experimental domain expressions for the decision levels (yC and xC) and detection limits (yD and xD). This was demonstrated by testing the generated decision levels and detection limits for their performance in regard to false positives and false negatives. In every case, the obtained numbers of false negatives and false positives were as specified a priori.

  3. Determination of Dynamic Recrystallization Process by Equivalent Strain

    NASA Astrophysics Data System (ADS)

    Qin, Xiaomei; Deng, Wei

    Based on Tpнoвckiй's displacement field, an expression for the equivalent strain was derived. According to the dynamic recrystallization (DRX) critical strain, the DRX process was then determined from the equivalent strain. It was found that the equivalent strain distribution in the deformed specimen is inhomogeneous, and that it increases with increasing true strain. At a given true strain, the equivalent strains at the center, at the demisemi radius, or on the tangential plane just below the surface of the specimen are higher than the true strain. Thus, micrographs at those positions cannot exactly reflect the true microstructures at that true strain. With increasing strain rate, the start and finish times of DRX decrease. The frozen microstructures of 20Mn23AlV steel under the experimental conditions validate the feasibility of predicting the DRX process from the equivalent strain.

  4. [Predictive factors of contamination in a blood culture with bacterial growth in an Emergency Department].

    PubMed

    Hernández-Bou, S; Trenchs Sainz de la Maza, V; Esquivel Ojeda, J N; Gené Giralt, A; Luaces Cubells, C

    2015-06-01

    The aim of this study is to identify predictive factors of bacterial contamination in positive blood cultures (BC) collected in an emergency department. A prospective, observational and analytical study was conducted on febrile children aged one to 36 months, who had no risk factors of bacterial infection, and had a BC collected in the Emergency Department between November 2011 and October 2013 in which bacterial growth was detected. The potential BC contamination predicting factors analysed were: maximum temperature, time to positivity, initial Gram stain result, white blood cell count, absolute neutrophil count, band count, and C-reactive protein (CRP). Bacteria grew in 169 BC. Thirty (17.8%) were finally considered true positives and 139 (82.2%) false positives. All potential BC contamination predicting factors analysed, except maximum temperature, showed significant differences between true positives and false positives. CRP value, time to positivity, and initial Gram stain result are the best predictors of false positives in BC. The positive predictive values of a CRP value ≤30 mg/L, a BC time to positivity ≥16 h, and an initial Gram stain suggestive of a contaminant in predicting a false positive are 95.1%, 96.9% and 97.5%, respectively. When all 3 conditions are applied, their positive predictive value is 100%. Four (8.3%) patients with a false positive BC who had been discharged home were re-evaluated in the Emergency Department. The majority of positive BC obtained in the Emergency Department were finally considered false positives. Initial Gram stain, time to positivity, and CRP results are valuable diagnostic tests in distinguishing between true positives and false positives in BC. The early detection of false positives will allow minimising their negative consequences. Copyright © 2014 Asociación Española de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.

  5. [Acute lumbago due to the manual lifting of patients in wards: prevalence and incidence data].

    PubMed

    Colombini, D; Cianci, E; Panciera, D; Martinelli, M; Venturi, E; Giammartini, P; Ricci, M G; Menoni, O; Battevi, N

    1999-01-01

    The aim of the study was to measure the occurrence (prevalence and incidence) of episodes of acute low back pain (definite effect) in a wide sample of health workers assisting disabled patients. A questionnaire was used to study both true acute low back pain and episodes of ingravescent low back pain controlled pharmacologically at onset. The questionnaire identified overall acute and pharmacologically controlled episodes occurring in the previous 12 months, both in the course of work and over the whole life of the subject. Appropriately trained operators administered the questionnaire to 551 subjects; 481 valid answer cards were obtained from 372 females and 109 males working in medical, orthopaedic and geriatric departments. 75.4% of the sample had high exposure index levels for patient lifting. The prevalence of true acute low back pain was 9% in males and 11% in females over the previous 12 months. Taking true acute and pharmacologically controlled low back pain together, the prevalences rose to 13.8% in males and 26.9% in females. Data from the reference populations showed that acute low back pain did not exceed 3% on average in the previous year. Since work seniority in the hospital wards was known, the incidences were calculated, giving 7.9% in females and 5.29% in males for acute low back pain, and 19% in females and 3.49% in males for pharmacologically controlled low back pain. Considering the number of episodes per 100 workers/year, acute low back pain alone reached prevalences of 13-14%. This therefore appears to confirm the positive association between episodes of low back pain and duties involving assistance to disabled patients.

  6. An effect of loudness of advisory speech on a choice response task

    NASA Astrophysics Data System (ADS)

    Utsuki, Narisuke; Takeuchi, Yoshinori; Nomiyama, Takenori

    1995-03-01

    Recent technologies have realized talking advisory/guidance systems in which machines give advice and guidance to operators in speech. However, nonverbal aspects of spoken messages may have significant effects on an operator's behavior. Twelve subjects participated in a TV game-like choice response task where they were asked to choose a 'true' target from three invader-like figures displayed on a CRT screen. Before each choice, the subjects received prerecorded advice designating either the left, center, or right target as the one that would be true. The position of the 'true' targets and the advice were preprogrammed in pseudorandom sequences. In other words, there was no way for the subjects to predict the 'true' target and there was no relationship between spoken advice and the true target position. The subjects tended to make more choices corresponding to the presented messages when the messages were presented in a louder voice than in a softer voice. Choice response time was significantly shorter when the response was the same as the advice indicated. The shortening of response time was slightly greater when advice was presented in a louder voice. This study demonstrates that spoken advice may result in faster and less deliberate responses in accordance with the messages given by talking guidance systems.

  7. "Trust the researchers": flying in the face of evidence.

    PubMed

    Noble, John H

    2017-01-01

    There are always rival hypotheses to explain away the one that is posited as the most likely to be true. Context and Occam's razor - the principle that among competing hypotheses, the one with the fewest assumptions should be selected - ultimately point to which hypothesis is the most likely to be true.

  8. Radiographic identification of the anterior and posterior root attachments of the medial and lateral menisci.

    PubMed

    James, Evan W; LaPrade, Christopher M; Ellman, Michael B; Wijdicks, Coen A; Engebretsen, Lars; LaPrade, Robert F

    2014-11-01

    Anatomic root placement is necessary to restore native meniscal function during meniscal root repair. Radiographic guidelines for anatomic root placement are essential to improve the accuracy and consistency of anatomic root repair and to optimize outcomes after surgery. To define quantitative radiographic guidelines for identification of the anterior and posterior root attachments of the medial and lateral menisci on anteroposterior (AP) and lateral radiographic views. Descriptive laboratory study. The anterior and posterior roots of the medial and lateral menisci were identified in 12 human cadaveric specimens (average age, 51.3 years; age range, 39-65 years) and labeled using 2-mm radiopaque spheres. True AP and lateral radiographs were obtained, and 2 raters independently measured blinded radiographs in relation to pertinent landmarks and radiographic reference lines. On AP radiographs, the anteromedial and posteromedial roots were, on average, 31.9 ± 5.0 mm and 36.3 ± 3.5 mm lateral to the edge of the medial tibial plateau, respectively. The anterolateral and posterolateral roots were, on average, 37.9 ± 5.2 mm and 39.3 ± 3.8 mm medial to the edge of the lateral tibial plateau, respectively. On lateral radiographs, the anteromedial and anterolateral roots were, on average, 4.8 ± 3.7 mm and 20.5 ± 4.3 mm posterior to the anterior margin of the tibial plateau, respectively. The posteromedial and posterolateral roots were, on average, 18.0 ± 2.8 mm and 19.8 ± 3.5 mm anterior to the posterior margin of the tibial plateau, respectively. The intrarater and interrater intraclass correlation coefficients (ICCs) were >0.958, demonstrating excellent reliability. The meniscal root attachment sites were quantitatively and reproducibly defined with respect to anatomic landmarks and superimposed radiographic reference lines. The high ICCs indicate that the measured radiographic relationships are a consistent means for evaluating meniscal root positions. This study demonstrated consistent and reproducible radiographic guidelines for the location of the meniscal roots. These measurements may be used to assess root positions on intraoperative fluoroscopy and postoperative radiographs. © 2014 The Author(s).

  9. Extended focused assessment with sonography for trauma (EFAST) in the diagnosis of pneumothorax: experience at a community based level I trauma center.

    PubMed

    Nandipati, Kalyana C; Allamaneni, Shyam; Kakarla, Ravindra; Wong, Alfredo; Richards, Neil; Satterfield, James; Turner, James W; Sung, Kae-Jae

    2011-05-01

    Early identification of pneumothorax is crucial to reduce mortality in critically injured patients. The objective of our study is to investigate the utility of surgeon-performed extended focused assessment with sonography for trauma (EFAST) in the diagnosis of pneumothorax. We prospectively analysed 204 trauma patients who underwent EFAST in our level I trauma center over a 12-month period (06/2007-05/2008). The patients' demographics, type of injury, clinical examination findings (decreased air entry), CXR, EFAST and CT scan findings were entered into the database. Sensitivity, specificity, and positive (PPV) and negative predictive values (NPV) were calculated. Of 204 patients (mean age 43.01 ± 19.5 years; 152 male, 52 female), 21 (10.3%) had pneumothorax. Of the 21 patients with pneumothorax, 12 were due to blunt trauma and 9 to penetrating trauma. The diagnosis of pneumothorax in 204 patients demonstrated the following: clinical examination was positive in 17 patients (true positive in 13/21, 62%; 4 false positive and 8 false negative), CXR was positive in 16 patients (true positive in 15/19, 79%; 1 false positive, 4 missed and 2 CXR not performed before chest tube) and EFAST was positive in 21 patients (20 true positive [95.2%], 1 false positive and 1 false negative). In diagnosing pneumothorax, EFAST has significantly higher sensitivity compared with CXR (P=0.02). Surgeon-performed trauma room extended FAST is simple and has higher sensitivity than chest X-ray and clinical examination in detecting pneumothorax. Published by Elsevier Ltd.

  10. Casein infusion rate influences feed intake differently depending on metabolizable protein balance in dairy cows: A multilevel meta-analysis.

    PubMed

    Martineau, R; Ouellet, D R; Kebreab, E; Lapierre, H

    2016-04-01

    The effects of casein infusion have been investigated extensively in ruminant species. Its effect on responses in dry matter intake (DMI) has been reviewed previously, with no significant effect indicated. The literature reviewed in the current meta-analysis is more extensive and limited to dairy cows fed ad libitum. A total of 51 studies were included in the meta-analysis and data were fitted to a multilevel model adjusting for the correlated nature of some studies. The effect size was the mean difference calculated by subtracting the means for the control from the casein-infused group. Overall, casein infusion [average of 333 g of dry matter (DM)/d; range: 91 to 1,092 g of DM/d] tended to increase responses in DMI by 0.18 kg/d (n=48 studies; 3 outliers). However, an interaction was observed between the casein infusion rate (IR) and the initial metabolizable protein (MP) balance [i.e., supply minus requirements (NRC, 2001)]. When control cows were in negative MP balance (n=27 studies), responses in DMI averaged 0.28 kg/d at mean MP balance (-264 g/d) and casein IR (336 g/d), and a 100 g/d increment in the casein IR from its mean further increased responses by 0.14 kg/d (MP balance being constant), compared with cows not infused with casein. In contrast, when control cows were in positive MP balance (n=22 studies; 2 outliers), responses in DMI averaged -0.20 kg/d at mean casein IR (339 g/d), and a 100 g/d increment in the casein IR from its mean further decreased responses by 0.33 kg/d, compared with cows not infused with casein. Responses in milk true protein yield at mean casein IR were greater (109 vs. 65 g/d) for cows in negative vs. positive MP balance, respectively, and the influence of the casein IR on responses was significant only for cows in negative MP balance. A 100 g/d increment in the casein IR from its mean further increased responses in milk true protein yield by 25 g/d, compared with cows not infused with casein. Responses in blood urea concentration increased in casein studies (+0.59 mM) and the influence of the casein IR was greatest for cows in positive MP balance (0.26 vs. 0.11 mM per 100 g/d increment). Responses in DMI were also correlated negatively with responses in blood urea concentration only for cows in positive MP balance. Together, these results suggest an association between satiety and deamination and oxidation of AA supplied in excess of requirements for cows in positive MP balance. Therefore, casein stimulated appetite in cows fed MP-deficient diets possibly via the supply of orexigenic AA or through a pull effect in response to an increased metabolic demand. Conversely, casein induced satiety in cows fed diets supplying MP in excess of requirements. Not precluding other factors involved in satiety (e.g., insulin, gut peptides), casein could have increased the supply of AA (e.g., Ser, Thr, Tyr), which might depress appetite at the brain level or increase the deamination and the oxidation of AA in oversupply in agreement with the hepatic oxidation theory. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  11. Comparing Recalibration Strategies for Electroencephalography-Based Decoders of Movement Intention in Neurological Patients with Motor Disability.

    PubMed

    López-Larraz, Eduardo; Ibáñez, Jaime; Trincado-Alonso, Fernando; Monge-Pereira, Esther; Pons, José Luis; Montesano, Luis

    2017-12-17

    Motor rehabilitation based on the association of electroencephalographic (EEG) activity and proprioceptive feedback has been demonstrated as a feasible therapy for patients with paralysis. To promote long-lasting motor recovery, these interventions have to be carried out across several weeks or even months. The success of these therapies partly relies on the performance of the system decoding movement intentions, which normally has to be recalibrated to deal with the nonstationarities of the cortical activity. Minimizing recalibration time is important to reduce setup preparation and maximize the effective therapy time. To date, a systematic analysis of the effect of recalibration strategies in EEG-driven interfaces for motor rehabilitation has not yet been performed. Data from patients with stroke (4 patients, 8 sessions) and spinal cord injury (SCI) (4 patients, 5 sessions) undergoing two different paradigms (self-paced and cue-guided, respectively) are used to study the performance of the EEG-based classification of motor intentions. Four calibration schemes are compared, considering different combinations of training datasets from previous sessions and/or the validation session. The results show significant differences in classifier performance in terms of true positive (TP) and false positive (FP) rates. Combining training data from previous sessions with data from the validation session provides the best compromise between the amount of data needed for calibration and the classifier performance. With this scheme, the average true (false) positive rates obtained are 85.3% (17.3%) and 72.9% (30.3%) for the self-paced and the cue-guided protocols, respectively. These results suggest that the use of optimal recalibration schemes for EEG-based classifiers of motor intentions leads to enhanced performance of these technologies, while not requiring long calibration phases prior to starting the intervention.
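
    The recalibration question reduces to which data are used to fit the decoder before a new session. The sketch below contrasts training on previous sessions alone with pooling them with a short calibration block from the current session; the LDA classifier and the simulated drifting features are stand-ins for the study's decoder and EEG data, not its actual pipeline.

```python
# Sketch of the recalibration scheme found to work best above: pool training
# data from previous sessions with a small calibration block from the current
# session. The LDA classifier and the simulated band-power-like features are
# stand-ins, not the study's decoder or data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)

def session(n, shift):
    """Simulated 2-class features (rest vs. movement intention) with session drift."""
    X0 = rng.normal(0.0 + shift, 1.0, size=(n, 8))
    X1 = rng.normal(1.0 + shift, 1.0, size=(n, 8))
    return np.vstack([X0, X1]), np.r_[np.zeros(n), np.ones(n)]

X_prev, y_prev = session(200, shift=0.0)       # previous sessions
X_cal,  y_cal  = session(20,  shift=0.5)       # short calibration block today
X_test, y_test = session(200, shift=0.5)       # rest of today's session

schemes = {"previous only": (X_prev, y_prev),
           "previous + calibration": (np.vstack([X_prev, X_cal]),
                                      np.r_[y_prev, y_cal])}
for name, (X, y) in schemes.items():
    clf = LinearDiscriminantAnalysis().fit(X, y)
    print(f"{name:24s} accuracy = {clf.score(X_test, y_test):.2f}")
```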

  12. Automated detection of pulmonary embolism (PE) in computed tomographic pulmonary angiographic (CTPA) images: multiscale hierarchical expectation-maximization segmentation of vessels and PEs

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Patel, Smita; Cascade, Philip N.; Sahiner, Berkman; Wei, Jun; Ge, Jun; Kazerooni, Ella A.

    2007-03-01

    CT pulmonary angiography (CTPA) has been reported to be an effective means for clinical diagnosis of pulmonary embolism (PE). We are developing a computer-aided detection (CAD) system to assist radiologists in PE detection in CTPA images. 3D multiscale filters in combination with a newly designed response function derived from the eigenvalues of Hessian matrices are used to enhance vascular structures including the vessel bifurcations and suppress non-vessel structures such as the lymphoid tissues surrounding the vessels. A hierarchical EM estimation is then used to segment the vessels by extracting the high response voxels at each scale. The segmented vessels are pre-screened for suspicious PE areas using a second adaptive multiscale EM estimation. A rule-based false positive (FP) reduction method was designed to identify the true PEs based on the features of PE and vessels. 43 CTPA scans were used as an independent test set to evaluate the performance of PE detection. Experienced chest radiologists identified the PE locations which were used as "gold standard". 435 PEs were identified in the artery branches, of which 172 and 263 were subsegmental and proximal to the subsegmental, respectively. The computer-detected volume was considered true positive (TP) when it overlapped with 10% or more of the gold standard PE volume. Our preliminary test results show that, at an average of 33 and 24 FPs/case, the sensitivities of our PE detection method were 81% and 78%, respectively, for proximal PEs, and 79% and 73%, respectively, for subsegmental PEs. The study demonstrates the feasibility of accurate automated PE identification on CTPA images. Further study is underway to improve the sensitivity and reduce the FPs.

  13. Can emergency physicians accurately and reliably assess acute vertigo in the emergency department?

    PubMed

    Vanni, Simone; Nazerian, Peiman; Casati, Carlotta; Moroni, Federico; Risso, Michele; Ottaviani, Maddalena; Pecci, Rudi; Pepe, Giuseppe; Vannucchi, Paolo; Grifoni, Stefano

    2015-04-01

    To validate a clinical diagnostic tool, used by emergency physicians (EPs), to diagnose the central cause of patients presenting with vertigo, and to determine interrater reliability of this tool. A convenience sample of adult patients presenting to a single academic ED with isolated vertigo (i.e. vertigo without other neurological deficits) was prospectively evaluated with STANDING (SponTAneous Nystagmus, Direction, head Impulse test, standiNG) by five trained EPs. The first step focused on the presence of spontaneous nystagmus, the second on the direction of nystagmus, the third on head impulse test and the fourth on gait. The local standard practice, senior audiologist evaluation corroborated by neuroimaging when deemed appropriate, was considered the reference standard. Sensitivity and specificity of STANDING were calculated. On the first 30 patients, inter-observer agreement among EPs was also assessed. Five EPs with limited experience in nystagmus assessment volunteered to participate in the present study enrolling 98 patients. Their average evaluation time was 9.9 ± 2.8 min (range 6-17). Central acute vertigo was suspected in 16 (16.3%) patients. There were 13 true positives, three false positives, 81 true negatives and one false negative, with a high sensitivity (92.9%, 95% CI 70-100%) and specificity (96.4%, 95% CI 93-98%) for central acute vertigo according to senior audiologist evaluation. The Cohen's kappas of the first, second, third and fourth steps of the STANDING were 0.86, 0.93, 0.73 and 0.78, respectively. The whole test showed a good inter-observer agreement (k = 0.76, 95% CI 0.45-1). In the hands of EPs, STANDING showed a good inter-observer agreement and accuracy validated against the local standard of care. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
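
    The sensitivity and specificity quoted above follow directly from the reported two-by-two counts. A small worked example in Python (the predictive values are added here only for illustration and are not reported in the abstract):

      tp, fp, tn, fn = 13, 3, 81, 1
      sensitivity = tp / (tp + fn)   # 13/14 ≈ 0.929 -> 92.9%
      specificity = tn / (tn + fp)   # 81/84 ≈ 0.964 -> 96.4%
      ppv = tp / (tp + fp)           # 13/16 ≈ 0.813
      npv = tn / (tn + fn)           # 81/82 ≈ 0.988
      print(f"Se={sensitivity:.1%}  Sp={specificity:.1%}  PPV={ppv:.1%}  NPV={npv:.1%}")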

  14. Why GPS makes distances bigger than they are

    PubMed Central

    Ranacher, Peter; Brunauer, Richard; Trutschnig, Wolfgang; Van der Spek, Stefan; Reich, Siegfried

    2016-01-01

    Global navigation satellite systems such as the Global Positioning System (GPS) are among the most important sensors for movement analysis. GPS is widely used to record the trajectories of vehicles, animals and human beings. However, all GPS movement data are affected by both measurement and interpolation errors. In this article we show that measurement error causes a systematic bias in distances recorded with a GPS; the distance between two points recorded with a GPS is – on average – bigger than the true distance between these points. This systematic ‘overestimation of distance’ becomes relevant if the influence of interpolation error can be neglected, which in practice is the case for movement sampled at high frequencies. We provide a mathematical explanation of this phenomenon and illustrate that it functionally depends on the autocorrelation of GPS measurement error (C). We argue that C can be interpreted as a quality measure for movement data recorded with a GPS. If there is a strong autocorrelation between any two consecutive position estimates, they have very similar error. This error cancels out when average speed, distance or direction is calculated along the trajectory. Based on our theoretical findings we introduce a novel approach to determine C in real-world GPS movement data sampled at high frequencies. We apply our approach to pedestrian trajectories and car trajectories. We found that the measurement error in the data was strongly spatially and temporally autocorrelated, and we give a quality estimate of the data. Most importantly, our findings are not limited to GPS alone. The systematic bias and its implications are bound to occur in any movement data collected with absolute positioning if interpolation error can be neglected. PMID:27019610
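
    The overestimation effect is easy to reproduce numerically: adding independent (uncorrelated, C = 0) position errors to a densely sampled track always inflates the summed step lengths. A minimal simulation sketch, with a hypothetical 100 m straight track and a hypothetical 0.5 m per-axis error; with strongly autocorrelated errors the bias largely cancels, as the article argues:

      import numpy as np

      rng = np.random.default_rng(0)
      track = np.column_stack([np.linspace(0.0, 100.0, 101), np.zeros(101)])   # 100 m straight line, 1 m sampling
      true_dist = np.sum(np.linalg.norm(np.diff(track, axis=0), axis=1))

      sigma = 0.5                                          # hypothetical per-axis GPS error (m)
      noisy = track + rng.normal(0.0, sigma, track.shape)  # uncorrelated errors (C = 0)
      measured_dist = np.sum(np.linalg.norm(np.diff(noisy, axis=0), axis=1))

      print(true_dist, measured_dist)   # the measured distance is systematically larger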

  15. Performance evaluation of neuro-PET using silicon photomultipliers

    NASA Astrophysics Data System (ADS)

    Jung, Jiwoong; Choi, Yong; Jung, Jin Ho; Kim, Sangsu; Im, Ki Chun

    2016-05-01

    Recently, we have developed the second prototype Silicon photomultiplier (SiPM) based positron emission tomography (PET) scanner for human brain imaging. The PET system comprised a detector block, which consisted of 4×4 SiPMs and 4×4 Lutetium Yttrium Orthosilicate arrays, a charge signal transmission method, a high-density position decoder circuit and FPGA-embedded ADC boards. The purpose of this study was to evaluate the performance of the newly developed neuro-PET system. The energy resolution, timing resolution, spatial resolution, sensitivity, stability of the photo-peak position and count rate performance were measured. A tomographic image of a 3D Hoffman brain phantom was also acquired to evaluate the imaging capability of the neuro-PET. The average energy and timing resolutions measured for 511 keV gamma rays were 17±0.1% and 3±0.3 ns, respectively. Spatial resolution and sensitivity at the center of the field of view (FOV) were 3.1 mm and 0.8%, respectively. The average scatter fraction was 0.4 with an energy window of 350-650 keV. The maximum true count rate and maximum NECR were measured as 43.3 kcps and 6.5 kcps at activity concentrations of 16.7 kBq/ml and 5.5 kBq/ml, respectively. Long-term stability results show that there was no significant change in the photo-peak position, energy resolution and count rate for 60 days. Phantom imaging studies were performed and they demonstrated the feasibility for high quality brain imaging. The performance tests and imaging results indicate that the newly developed PET is useful for brain imaging studies, if the axial FOV is extended to improve the system sensitivity.

  16. Maternity care and maternal serum screening. Do male and female family physicians care for women differently?

    PubMed

    Woodward, C A; Carroll, J C; Ryan, G; Reid, A J; Permaul-Woods, J A; Arbitman, S; Domb, S B; Fallis, B; Kilthei, J

    1997-06-01

    To examine whether male and female family physicians practise maternity care differently, particularly regarding the maternal serum screening (MSS) program. Mailed survey fielded between October 1994 and March 1995. Ontario family practices. Random sample of 2000 members of the College of Family Physicians of Canada who care for pregnant women. More than 90% of eligible physicians responded. Attitudes toward, knowledge about, and behaviour toward MSS. Women physicians were more likely than men to practise part time, in groups, and in larger communities. Men physicians were more likely to perform deliveries; women were more likely to do shared care. Despite a shorter work week, on average, female physicians cared for more pregnant women than male physicians did. Among those providing intrapartum care, women performed more deliveries, on average, than men. Women physicians were more likely than men to offer MSS to all pregnant patients. Although average time spent discussing MSS before the test was similar, women physicians had better knowledge of when best to do the test and its true-positive rate. All differences reported were statistically significant (P < or = 0.001). Among family physicians caring for pregnant women, women physicians cared for more pregnant women than men did. Both spent similar time discussing MSS with their patients before offering screening, but more women physicians offered MSS to all their patients and were more knowledgeable about MSS than men physicians.

  17. 25 CFR 38.4 - Education positions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true Education positions. 38.4 Section 38.4 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION EDUCATION PERSONNEL § 38.4 Education positions. (a) The Director shall establish the kinds of positions required to carry out the Bureau's education...

  18. Thyroglobulin levels and thyroglobulin doubling time independently predict a positive 18F-FDG PET/CT scan in patients with biochemical recurrence of differentiated thyroid carcinoma.

    PubMed

    Giovanella, Luca; Trimboli, Pierpaolo; Verburg, Frederik A; Treglia, Giorgio; Piccardo, Arnoldo; Foppiani, Luca; Ceriani, Luca

    2013-06-01

    To assess the relationship between serum thyroglobulin (Tg) levels, Tg doubling time (Tg-DT) and the diagnostic performance of (18)F-FDG PET/CT in detecting recurrences of (131)I-negative differentiated thyroid carcinoma (DTC). Included in the present study were 102 patients with DTC. All patients were treated by thyroid ablation (e.g. thyroidectomy and (131)I), and underwent (18)F-FDG PET/CT due to detectable Tg levels and negative conventional imaging. Consecutive serum Tg measurements performed before the (18)F-FDG PET/CT examination were used for Tg-DT calculation. The (18)F-FDG PET/CT results were assessed as true or false after histological and/or clinical follow-up. Serum Tg levels were higher in patients with a positive (18)F-FDG PET/CT scan (median 6.7 ng/mL, range 0.7-73.6 ng/mL) than in patients with a negative scan (median 1.8 ng/mL, range 0.5-4.9 ng/mL; P < 0.001). In 43 (88 %) of 49 patients with a true-positive (18)F-FDG PET/CT scan, the Tg levels were >5.5 ng/mL, and in 31 (74 %) of 42 patients with a true-negative (18)F-FDG PET/CT scan, the Tg levels were ≤5.5 ng/mL. A Tg-DT of <1 year was found in 46 of 49 patients (94 %) with a true-positive (18)F-FDG PET/CT scan, and 40 of 42 patients (95 %) with a true-negative scan had a stable or increased Tg-DT. Moreover, combining Tg levels and Tg-DT as selection criteria correctly distinguished between patients with a positive and a negative scan (P<0.0001). The accuracy of (18)F-FDG PET/CT significantly improves when the serum Tg level is above 5.5 ng/mL during levothyroxine treatment or when the Tg-DT is less than 1 year, independent of the absolute value.
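
    For readers who want to apply the reported cut-offs, a minimal sketch of the two quantities involved; the two-point doubling-time formula below is a common simplification (the study used consecutive Tg measurements), and the decision rule simply restates the thresholds suggested in the abstract:

      import math

      def tg_doubling_time_years(tg1, tg2, delta_t_years):
          # Two-point thyroglobulin doubling time; a negative value means Tg is falling.
          return math.log(2) * delta_t_years / math.log(tg2 / tg1)

      def fdg_pet_likely_informative(tg_ng_ml, tg_dt_years):
          # Selection rule suggested by the abstract: Tg > 5.5 ng/mL on levothyroxine,
          # or a doubling time shorter than 1 year.
          return tg_ng_ml > 5.5 or (0 < tg_dt_years < 1.0)

      print(round(tg_doubling_time_years(2.0, 6.0, 1.0), 2))   # ≈ 0.63 years
      print(fdg_pet_likely_informative(3.0, 0.6))              # True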

  19. Evaluation of musculoskeletal sepsis with indium-111 white blood cell imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouzounian, T.J.; Thompson, L.; Grogan, T.J.

    The detection of musculoskeletal sepsis, especially following joint replacement, continues to be a challenging problem. Often, even with invasive diagnostic evaluation, the diagnosis of infection remains uncertain. This is a report on the first 55 Indium-111 white blood cell (WBC) images performed in 39 patients for the evaluation of musculoskeletal sepsis. There were 40 negative and 15 positive Indium-111 WBC images. These were correlated with operative culture and tissue pathology, aspiration culture, and clinical findings. Thirty-eight images were performed for the evaluation of possible total joint sepsis (8 positive and 30 negative images); 17 for the evaluation of nonarthroplasty-related musculoskeletal sepsis (7 positive and 10 negative images). Overall, there were 13 true-positive, 39 true-negative, two false-positive, and one false-negative images. Indium-111 WBC imaging is a sensitive and specific means of evaluating musculoskeletal sepsis, especially following total joint replacement.

  20. CT-scout based, semi-automated vertebral morphometry after digital image enhancement.

    PubMed

    Glinkowski, Wojciech M; Narloch, Jerzy

    2017-09-01

    Radiographic diagnosis of osteoporotic vertebral fracture is necessary to reduce its substantial associated morbidity. Computed tomography (CT) scout has recently been demonstrated as a reliable technique for vertebral fracture diagnosis. Software assistance may help to overcome some limitations of this diagnostic approach. We aimed to evaluate whether digital image enhancement improved the capacity of one of the existing software packages to detect fractures semi-automatically. CT scanograms of patients suffering from osteoporosis, with or without vertebral fractures, were analyzed. The original set of CT scanograms was triplicated and digitally modified to improve edge detection using three different techniques: SHARPENING, UNSHARP MASKING, and CONVOLUTION. The manual morphometric analysis identified 1485 vertebrae, 200 of which were classified as fractured. Unadjusted morphometry (AUTOMATED with no digital enhancement) found 63 fractures, 33 of which were true positive (i.e., 52% of its detections were true positives); SHARPENING detected 57 fractures (30 true positives, 53%); UNSHARP MASKING yielded 30 (13 true positives, 43%); and CONVOLUTION found 24 fractures (9 true positives, 38%). The intra-reader reliability for height ratios did not significantly improve with image enhancement (kappa ranged 0.22-0.41 for adjusted measurements and 0.16-0.38 for unadjusted). Similarly, the inter-reader agreement for prevalent fractures did not significantly improve with image enhancement (kappa 0.29-0.56 and -0.01 to 0.23 for adjusted and unadjusted measurements, respectively). Our results suggest that digital image enhancement does not improve software-assisted vertebral fracture detection by CT scout. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. False-Positive Rate of AKI Using Consensus Creatinine-Based Criteria

    PubMed Central

    Lin, Jennie; Fernandez, Hilda; Shashaty, Michael G.S.; Negoianu, Dan; Testani, Jeffrey M.; Berns, Jeffrey S.; Parikh, Chirag R.

    2015-01-01

    Background and objectives Use of small changes in serum creatinine to diagnose AKI allows for earlier detection but may increase diagnostic false–positive rates because of inherent laboratory and biologic variabilities of creatinine. Design, setting, participants, & measurements We examined serum creatinine measurement characteristics in a prospective observational clinical reference cohort of 2267 adult patients with AKI by Kidney Disease Improving Global Outcomes creatinine criteria and used these data to create a simulation cohort to model AKI false–positive rates. We simulated up to seven successive blood draws on an equal population of hypothetical patients with unchanging true serum creatinine values. Error terms generated from laboratory and biologic variabilities were added to each simulated patient’s true serum creatinine value to obtain the simulated measured serum creatinine for each blood draw. We determined the proportion of patients who would be erroneously diagnosed with AKI by Kidney Disease Improving Global Outcomes creatinine criteria. Results Within the clinical cohort, 75.0% of patients received four serum creatinine draws within at least one 48-hour period during hospitalization. After four simulated creatinine measurements that accounted for laboratory variability calculated from assay characteristics and 4.4% of biologic variability determined from the clinical cohort and publicly available data, the overall false–positive rate for AKI diagnosis was 8.0% (interquartile range =7.9%–8.1%), whereas patients with true serum creatinine ≥1.5 mg/dl (representing 21% of the clinical cohort) had a false–positive AKI diagnosis rate of 30.5% (interquartile range =30.1%–30.9%) versus 2.0% (interquartile range =1.9%–2.1%) in patients with true serum creatinine values <1.5 mg/dl (P<0.001). Conclusions Use of small serum creatinine changes to diagnose AKI is limited by high false–positive rates caused by inherent variability of serum creatinine at higher baseline values, potentially misclassifying patients with CKD in AKI studies. PMID:26336912

  2. False-Positive Rate of AKI Using Consensus Creatinine-Based Criteria.

    PubMed

    Lin, Jennie; Fernandez, Hilda; Shashaty, Michael G S; Negoianu, Dan; Testani, Jeffrey M; Berns, Jeffrey S; Parikh, Chirag R; Wilson, F Perry

    2015-10-07

    Use of small changes in serum creatinine to diagnose AKI allows for earlier detection but may increase diagnostic false-positive rates because of inherent laboratory and biologic variabilities of creatinine. We examined serum creatinine measurement characteristics in a prospective observational clinical reference cohort of 2267 adult patients with AKI by Kidney Disease Improving Global Outcomes creatinine criteria and used these data to create a simulation cohort to model AKI false-positive rates. We simulated up to seven successive blood draws on an equal population of hypothetical patients with unchanging true serum creatinine values. Error terms generated from laboratory and biologic variabilities were added to each simulated patient's true serum creatinine value to obtain the simulated measured serum creatinine for each blood draw. We determined the proportion of patients who would be erroneously diagnosed with AKI by Kidney Disease Improving Global Outcomes creatinine criteria. Within the clinical cohort, 75.0% of patients received four serum creatinine draws within at least one 48-hour period during hospitalization. After four simulated creatinine measurements that accounted for laboratory variability calculated from assay characteristics and 4.4% of biologic variability determined from the clinical cohort and publicly available data, the overall false-positive rate for AKI diagnosis was 8.0% (interquartile range =7.9%-8.1%), whereas patients with true serum creatinine ≥1.5 mg/dl (representing 21% of the clinical cohort) had a false-positive AKI diagnosis rate of 30.5% (interquartile range =30.1%-30.9%) versus 2.0% (interquartile range =1.9%-2.1%) in patients with true serum creatinine values <1.5 mg/dl (P<0.001). Use of small serum creatinine changes to diagnose AKI is limited by high false-positive rates caused by inherent variability of serum creatinine at higher baseline values, potentially misclassifying patients with CKD in AKI studies. Copyright © 2015 by the American Society of Nephrology.
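
    The simulation logic described above can be sketched in a few lines: hold the true creatinine constant, add random measurement noise to repeated draws, and count how often the familiar 0.3 mg/dl rise criterion fires. The analytic (laboratory) coefficient of variation below is a hypothetical placeholder, all draws are assumed to fall inside one 48-hour window, and the resulting rates are illustrative only, not the paper's estimates:

      import numpy as np

      rng = np.random.default_rng(1)

      def false_positive_rate(true_scr, n_draws=4, biologic_cv=0.044, lab_cv=0.03, n_patients=100_000):
          # Fraction of simulated patients with constant true creatinine who are still
          # flagged as AKI by a >=0.3 mg/dl rise over an earlier value.
          cv = np.sqrt(biologic_cv**2 + lab_cv**2)                     # combined coefficient of variation
          measured = true_scr * (1.0 + rng.normal(0.0, cv, size=(n_patients, n_draws)))
          running_min = np.minimum.accumulate(measured, axis=1)        # lowest value seen so far
          rise = (measured - running_min).max(axis=1)                  # largest rise above that minimum
          return float(np.mean(rise >= 0.3))

      for scr in (0.8, 1.5, 2.5):
          print(scr, false_positive_rate(scr))   # false-positive rate grows with baseline creatinine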

  3. Imaging in gynaecology: How good are we in identifying endometriomas?

    PubMed Central

    Van Holsbeke, C.; Van Calster, B.; Guerriero, S.; Savelli, L.; Leone, F.; Fischerova, D; Czekierdowski, A.; Fruscio, R.; Veldman, J.; Van de Putte, G.; Testa, A.C.; Bourne, T.; Valentin, L.; Timmerman, D.

    2009-01-01

    Aim: To evaluate the performance of subjective evaluation of ultrasound findings (pattern recognition) to discriminate endometriomas from other types of adnexal masses and to compare the demographic and ultrasound characteristics of the true positive cases with those cases that were presumed to be an endometrioma but proved to have a different histology (false positive cases) and the endometriomas missed by pattern recognition (false negative cases). Methods: All patients in the International Ovarian Tumor Analysis (IOTA ) studies were included for analysis. In the IOTA studies, patients with an adnexal mass that were preoperatively examined by expert sonologists following the same standardized ultrasound protocol were prospectively included in 21 international centres. Sensitivity and specificity to discriminate endometriomas from other types of adnexal masses using pattern recognition were calculated. Ultrasound and some demographic variables of the masses presumed to be an endometrioma were analysed (true positives and false positives) and compared with the variables of the endometriomas missed by pattern recognition (false negatives) as well as the true negatives. Results: IOTA phase 1, 1b and 2 included 3511 patients of which 2560 were benign (73%) and 951 malignant (27%). The dataset included 713 endometriomas. Sensitivity and specificity for pattern recognition were 81% (577/713) and 97% (2723/2798). The true positives were more often unilocular with ground glass echogenicity than the masses in any other category. Among the 75 false positive cases, 66 were benign but 9 were malignant (5 borderline tumours, 1 rare primary invasive tumour and 3 endometrioid adenocarcinomas). The presumed diagnosis suggested by the sonologist in case of a missed endometrioma was mostly functional cyst or cystadenoma. Conclusion: Expert sonologists can quite accurately discriminate endometriomas from other types of adnexal masses, but in this dataset 1% of the masses that were classified as endometrioma by pattern recognition proved to be malignancies. PMID:25478066

  4. Computer-aided detection of artificial pulmonary nodules using an ex vivo lung phantom: influence of exposure parameters and iterative reconstruction.

    PubMed

    Wielpütz, Mark O; Wroblewski, Jacek; Lederlin, Mathieu; Dinkel, Julien; Eichinger, Monika; Koenigkam-Santos, M; Biederer, Jürgen; Kauczor, Hans-Ulrich; Puderbach, Michael U; Jobst, Bertram J

    2015-05-01

    To evaluate the influence of exposure parameters and raw-data based iterative reconstruction (IR) on the performance of computer-aided detection (CAD) of pulmonary nodules on chest multidetector computed tomography (MDCT). Seven porcine lung explants were inflated in a dedicated ex vivo phantom shell and prepared with n=162 artificial nodules of a clinically relevant volume and maximum diameter (46-1063 μl, and 6.2-21.5 mm). n=118 nodules were solid and n=44 part-solid. MDCT was performed with different combinations of 120 and 80 kV with 120, 60, 30 and 12 mA*s, and reconstructed with both filtered back projection (FBP) and IR. Subsequently, 16 datasets per lung were subjected to dedicated CAD software. The rate of true positive, false negative and false positive CAD marks was measured for each reconstruction. The rate of true positive findings ranged between 88.9-91.4% for FBP and 88.3-90.1% for IR (n.s.) with most exposure settings, but was significantly lower with the combination of 80 kV and 12 mA*s (80.9% and 81.5%, respectively, p<0.05). False positive findings ranged between 2.3-8.1 annotations per lung. For nodule volumes <200 μl the rate of true positives was significantly lower than for >300 μl (p<0.05). Similarly, it was significantly lower for diameters <12 mm compared to ≥12 mm (p<0.05). The rate of true positives for solid and part-solid nodules was similar. Nodule CAD on chest MDCT is robust over a wide range of exposure settings. Noise reduction by IR is not detrimental for CAD, and may be used to improve image quality in the setting of low-dose MDCT for lung cancer screening. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Ground potential rise monitor

    DOEpatents

    Allen, Zachery W [Mandan, ND]; Zevenbergen, Gary A [Arvada, CO]

    2012-04-03

    A device and method for detecting ground potential rise (GPR) comprising positioning a first electrode and a second electrode at a distance from each other into the earth. The voltage of the first electrode and second electrode is attenuated by an attenuation factor creating an attenuated voltage. The true RMS voltage of the attenuated voltage is determined creating an attenuated true RMS voltage. The attenuated true RMS voltage is then multiplied by the attenuation factor creating a calculated true RMS voltage. If the calculated true RMS voltage is greater than a first predetermined voltage threshold, a first alarm is enabled at a local location. If user input is received at a remote location acknowledging the first alarm, a first alarm acknowledgment signal is transmitted. The first alarm acknowledgment signal is then received at which time the first alarm is disabled.
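
    The arithmetic of the monitoring step is straightforward: compute the RMS of the attenuated waveform and scale it back up by the attenuation factor before comparing it with the alarm threshold. A minimal sketch, in which the attenuation factor, the 60 Hz test waveform and the threshold are hypothetical values rather than figures from the patent:

      import math

      def calculated_true_rms(attenuated_samples, attenuation_factor):
          # RMS of the attenuated waveform, scaled back up by the attenuation factor.
          rms = math.sqrt(sum(v * v for v in attenuated_samples) / len(attenuated_samples))
          return rms * attenuation_factor

      # 60 Hz sinusoid, 0.02 V amplitude after a hypothetical 1000:1 attenuation
      samples = [0.02 * math.sin(2 * math.pi * 60 * t / 1000) for t in range(1000)]
      volts = calculated_true_rms(samples, attenuation_factor=1000)

      ALARM_THRESHOLD_V = 10.0     # hypothetical first alarm threshold
      print(round(volts, 2), volts > ALARM_THRESHOLD_V)   # 14.14 True -> first alarm enabled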

  6. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    NASA Astrophysics Data System (ADS)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
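
    The idea of a hybrid-weighted variance estimate can be illustrated with a toy calculation in which the true error variance fluctuates from case to case and the ensemble only supplies a noisy sample variance. This is not the paper's formula or data assimilation system, only a sketch of why an intermediate weight on the static (climatological) variance reduces the error of the variance estimate:

      import numpy as np

      def hybrid_variance(sample_var, clim_var, weight):
          # Weighted average of the static (climatological) variance and the
          # flow-dependent ensemble sample variance.
          return weight * clim_var + (1.0 - weight) * sample_var

      rng = np.random.default_rng(2)
      k = 9                                                         # hypothetical ensemble degrees of freedom
      true_var = rng.gamma(4.0, 0.25, 100_000)                      # flow-dependent true error variances (mean 1)
      clim_var = true_var.mean()                                    # static climatological variance
      sample_var = true_var * rng.chisquare(k, true_var.size) / k   # noisy ensemble sample variances

      for w in (0.0, 0.5, 1.0):
          mse = np.mean((hybrid_variance(sample_var, clim_var, w) - true_var) ** 2)
          print(w, round(float(mse), 3))   # the intermediate weight gives the smallest error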

  7. Experimental verification of an interpolation algorithm for improved estimates of animal position

    NASA Astrophysics Data System (ADS)

    Schell, Chad; Jaffe, Jules S.

    2004-07-01

    This article presents experimental verification of an interpolation algorithm that was previously proposed in Jaffe [J. Acoust. Soc. Am. 105, 3168-3175 (1999)]. The goal of the algorithm is to improve estimates of both target position and target strength by minimizing a least-squares residual between noise-corrupted target measurement data and the output of a model of the sonar's amplitude response to a target at a set of known locations. Although this positional estimator was shown to be a maximum likelihood estimator, in principle, experimental verification was desired because of interest in understanding its true performance. Here, the accuracy of the algorithm is investigated by analyzing the correspondence between a target's true position and the algorithm's estimate. True target position was measured by precise translation of a small test target (bead) or from the analysis of images of fish from a coregistered optical imaging system. Results with the stationary spherical test bead in a high signal-to-noise environment indicate that a large increase in resolution is possible, while results with commercial aquarium fish indicate a smaller increase is obtainable. However, in both experiments the algorithm provides improved estimates of target position over those obtained by simply accepting the angular positions of the sonar beam with maximum output as target position. In addition, increased accuracy in target strength estimation is possible by considering the effects of the sonar beam patterns relative to the interpolated position. A benefit of the algorithm is that it can be applied "ex post facto" to existing data sets from commercial multibeam sonar systems when only the beam intensities have been stored after suitable calibration.
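
    The core of such an interpolation scheme is a least-squares match between the measured beam amplitudes and a modelled response for candidate target bearings. The sketch below uses a hypothetical Gaussian beam-pattern fan, not the model in Jaffe (1999); it only illustrates how a bearing finer than the beam spacing falls out of the residual minimisation:

      import numpy as np

      beam_angles = np.radians(np.arange(-30, 31, 5))   # hypothetical fan of 13 beams
      beam_width = np.radians(4.0)

      def beam_response(theta, strength):
          # Modelled amplitude of every beam for a point target at bearing theta.
          return strength * np.exp(-0.5 * ((beam_angles - theta) / beam_width) ** 2)

      rng = np.random.default_rng(4)
      true_theta, true_strength = np.radians(7.3), 2.0
      measured = beam_response(true_theta, true_strength) + rng.normal(0.0, 0.05, beam_angles.size)

      def residual(theta):
          b = beam_response(theta, 1.0)
          s = (measured @ b) / (b @ b)          # best-fit target strength for this bearing
          return np.sum((measured - s * b) ** 2)

      candidates = np.radians(np.linspace(-30, 30, 2401))
      best = min(candidates, key=residual)
      print(round(float(np.degrees(best)), 2))  # close to 7.3 deg, finer than the 5 deg beam spacing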

  8. Sherlock Holmes and child psychopathology assessment approaches: the case of the false-positive.

    PubMed

    Jensen, P S; Watanabe, H

    1999-02-01

    To explore the relative value of various methods of assessing childhood psychopathology, the authors compared 4 groups of children: those who met criteria for one or more DSM diagnoses and scored high on parent symptom checklists, those who met psychopathology criteria on either one of these two assessment approaches alone, and those who met no psychopathology assessment criterion. Parents of 201 children completed the Child Behavior Checklist (CBCL), after which children and parents were administered the Diagnostic Interview Schedule for Children (version 2.1). Children and parents also completed other survey measures and symptom report inventories. The 4 groups of children were compared against "external validators" to examine the merits of "false-positive" and "false-negative" cases. True-positive cases (those that met DSM criteria and scored high on the CBCL) differed significantly from the true-negative cases on most external validators. "False-positive" and "false-negative" cases had intermediate levels of most risk factors and external validators. "False-positive" cases were not normal per se because they scored significantly above the true-negative group on a number of risk factors and external validators. A similar but less marked pattern was noted for "false-negatives." Findings call into question whether cases with high symptom checklist scores despite no formal diagnoses should be considered "false-positive." Pending the availability of robust markers for mental illness, researchers and clinicians must resist the tendency to reify diagnostic categories or to engage in arcane debates about the superiority of one assessment approach over another.

  9. Clinical experience with ((111)In) indium chloride scanning in inflammatory diseases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dayem, H.M.; Breen, J.; Leslie, E.V.

    1978-05-01

    Forty-eight patients were scanned with (111)In-chloride in an attempt to identify the cause of fever. Fifteen true positive scans, 30 true negatives, and 3 false negatives were found. Of the 15 true positives, 7 cases of abdominal or pelvic abscess, and 8 cases of alcoholic hepatitis were detected. The 3 false negatives included: (1) an abscess in the anterior abdominal wall; (2) an abscess in the right upper quadrant at the site of a necrotic gallbladder; and (3) a tuberculous abscess of the lumbar spine. Examples from the different categories, pitfalls in interpretation and advantages and disadvantages of scanning with (111)In-chloride will be presented. These studies indicate that (111)In-chloride is a safe, reliable scanning agent for abscesses below the diaphragm especially in patients who cannot undergo adequate bowel preparation.

  10. Characterization of twin boundaries in an Fe–17.5Mn–0.56C twinning induced plasticity steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patterson, Erin E., E-mail: erin.diedrich@yahoo.com; Field, David P., E-mail: dfield@wsu.edu; Zhang, Yudong, E-mail: yudong.zhang@univ-metz.fr

    2013-11-15

    A twinning-induced plasticity steel of composition Fe–17.5 wt.% Mn–0.56 wt.% C–1.39 wt.% Al–0.24 wt.% Si was analyzed for the purpose of characterizing the relationship between tensile strain and deformation twinning. Tensile samples achieved a maximum of 0.46 true strain at failure, and a maximum ultimate tensile strength of 1599 MPa. Electron backscatter diffraction (EBSD) analysis showed that the grain orientation rotated heavily to < 111 > parallel to the tensile axis above 0.3 true strain. Sigma 3 misorientations, as identified by EBSD orientation measurements, and the image quality maps were used to quantify the number of twins present in the scanned areas of the samples. The image quality method yielded a distinct positive correlation between the twin area density and deformation, but the orientation measurements were unreliable in quantifying twin density in these structures. Quantitative analysis of the twin fraction is limited from orientation information because of the poor spatial resolution of EBSD in relation to the twin thickness. The EBSD orientation maps created for a thin foil sample showed some improvement in the resolution of the twins, but not enough to be significant. Measurements of the twins in the transmission electron microscopy micrographs yielded an average thickness of 23 nm, which is near the resolution capabilities of EBSD on this material for the instrumentation used. Electron channeling contrast imaging performed on one bulk tensile specimen of 0.34 true strain, using a method of controlled diffraction, yielded several images of twinning, dislocation structures and strain fields. A twin thickness of 66 nm was measured by the same method used for the transmission electron microscopy measurement. It is apparent that the results obtained by electron channeling contrast imaging were better than those by EBSD but did not capture all information on the twin boundaries such as was observed by transmission electron microscopy. - Highlights: • Performed tensile tests to assess mechanical performance of TWIP alloy • Analyzed tensile specimens using EBSD, TEM, and ECCI • EBSD showed that most twinning occurred at or near the < 111 >//TA orientation. • EBSD, TEM and ECCI were used to measure average twin density. • Compared spatial resolution of EBSD, ECCI and TEM for the instrumentation used.

  11. Evidence of Polyandry for Aedes aegypti in Semifield Enclosures

    PubMed Central

    Helinski, Michelle E. H.; Valerio, Laura; Facchinelli, Luca; Scott, Thomas W.; Ramsey, Janine; Harrington, Laura C.

    2012-01-01

    Female Aedes aegypti are assumed to be primarily monandrous (i.e., mate only once in their lifetime), but true estimates of mating frequency have not been determined outside the laboratory. To assess polyandry in Ae. aegypti with first-generation progeny from wild mosquitoes, stable isotope semen-labeled males (15N or 13C) were allowed to mate with unlabeled females in semifield enclosures (22.5 m3) in a dengue-endemic area in southern Mexico. On average, 14% of females were positive for both labels, indicating that they received semen from more than one male. Our results provide evidence of a small but potentially significant rate of multiple mating within a 48-hour period and provide an approach for future open-field studies of polyandry in this species. Polyandry has implications for understanding mosquito ecology, evolution, and reproductive behavior as well as genetic strategies for mosquito control. PMID:22492148

  12. Material characterization using ultrasound tomography

    NASA Astrophysics Data System (ADS)

    Falardeau, Timothe; Belanger, Pierre

    2018-04-01

    Characterization of material properties can be performed using a wide array of methods, e.g. X-ray diffraction or tensile testing. Each method leads to a limited set of material properties. This paper is interested in using ultrasound tomography to map the speed of sound inside a material sample. The velocity inside the sample is directly related to its elastic properties. Recent developments in ultrasound diffraction tomography have enabled velocity mapping of high velocity contrast objects using a combination of bent-ray time-of-flight tomography and diffraction tomography. In this study, ultrasound diffraction tomography was investigated using simulations in human bone phantoms. A finite element model was developed to assess the influence of the frequency, the number of transduction positions and the distance from the sample, as well as to adapt the imaging algorithm. The average velocity in both regions of the bone phantoms was within 5% of the true value.

  13. Boosting CNN performance for lung texture classification using connected filtering

    NASA Astrophysics Data System (ADS)

    Tarando, Sebastián Roberto; Fetita, Catalin; Kim, Young-Wouk; Cho, Hyoun; Brillet, Pierre-Yves

    2018-02-01

    Infiltrative lung diseases describe a large group of irreversible lung disorders requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status imposes the development of automated classification tools for lung texture. This paper presents an original image pre-processing framework based on locally connected filtering applied in multiresolution, which helps improve the learning process and boosts the performance of CNNs for lung texture classification. By removing the dense vascular network from images used by the CNN for lung classification, locally connected filters provide a better discrimination between different lung patterns and help regularize the classification output. The approach was tested in a preliminary evaluation on a 10-patient database of various lung pathologies, showing an increase of 10% in true positive rate (on average for all the cases) with respect to the state-of-the-art cascade of CNNs for this task.

  14. How does negative emotion cause false memories?

    PubMed

    Brainerd, C J; Stein, L M; Silveira, R A; Rohenkohl, G; Reyna, V F

    2008-09-01

    Remembering negative events can stimulate high levels of false memory, relative to remembering neutral events. In experiments in which the emotional valence of encoded materials was manipulated with their arousal levels controlled, valence produced a continuum of memory falsification. Falsification was highest for negative materials, intermediate for neutral materials, and lowest for positive materials. Conjoint-recognition analysis produced a simple process-level explanation: As one progresses from positive to neutral to negative valence, false memory increases because (a) the perceived meaning resemblance between false and true items increases and (b) subjects are less able to use verbatim memories of true items to suppress errors.

  15. Elders Point East Marsh Island Restoration Monitoring Data Analysis

    DTIC Science & Technology

    2017-09-21

    Report front-matter excerpt (figure and table listings recovered from the source document): Figure 13. Average biomass comparison between fertilizer treatment and non-fertilizer treatment at Elders East; Table 5. Count of benthic organisms; Table 6. Benthic Community Indices: True Taxa Richness, Total Organism Count.

  16. 26 CFR 1.1311(b)-1 - Maintenance of an inconsistent position.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., or nonrecognition, as the case may be, in the year of the error, and (ii) such inconsistent position... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Maintenance of an inconsistent position. 1.1311....1311(b)-1 Maintenance of an inconsistent position. (a) In general. Under the circumstances stated in...

  17. A Virtual Sensor for Online Fault Detection of Multitooth-Tools

    PubMed Central

    Bustillo, Andres; Correa, Maritza; Reñones, Anibal

    2011-01-01

    The installation of suitable sensors close to the tool tip on milling centres is not possible in industrial environments. It is therefore necessary to design virtual sensors for these machines to perform online fault detection in many industrial tasks. This paper presents a virtual sensor for online fault detection of multitooth tools based on a Bayesian classifier. The device that performs this task applies mathematical models that function in conjunction with physical sensors. Only two experimental variables are collected from the milling centre that performs the machining operations: the electrical power consumption of the feed drive and the time required for machining each workpiece. The task of achieving reliable signals from a milling process is especially complex when multitooth tools are used, because each kind of cutting insert in the milling centre only works on each workpiece during a certain time window. Great effort has gone into designing a robust virtual sensor that can avoid re-calibration due to, e.g., maintenance operations. The virtual sensor developed as a result of this research is successfully validated under real conditions on a milling centre used for the mass production of automobile engine crankshafts. Recognition accuracy, calculated with k-fold cross-validation, averaged a true positive rate of 0.957 and a true negative rate of 0.986. Moreover, measured accuracy was 98%, which suggests that the virtual sensor correctly identifies new cases. PMID:22163766

  18. A virtual sensor for online fault detection of multitooth-tools.

    PubMed

    Bustillo, Andres; Correa, Maritza; Reñones, Anibal

    2011-01-01

    The installation of suitable sensors close to the tool tip on milling centres is not possible in industrial environments. It is therefore necessary to design virtual sensors for these machines to perform online fault detection in many industrial tasks. This paper presents a virtual sensor for online fault detection of multitooth tools based on a Bayesian classifier. The device that performs this task applies mathematical models that function in conjunction with physical sensors. Only two experimental variables are collected from the milling centre that performs the machining operations: the electrical power consumption of the feed drive and the time required for machining each workpiece. The task of achieving reliable signals from a milling process is especially complex when multitooth tools are used, because each kind of cutting insert in the milling centre only works on each workpiece during a certain time window. Great effort has gone into designing a robust virtual sensor that can avoid re-calibration due to, e.g., maintenance operations. The virtual sensor developed as a result of this research is successfully validated under real conditions on a milling centre used for the mass production of automobile engine crankshafts. Recognition accuracy, calculated with k-fold cross-validation, averaged a true positive rate of 0.957 and a true negative rate of 0.986. Moreover, measured accuracy was 98%, which suggests that the virtual sensor correctly identifies new cases.
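
    The reported figures are fold-averaged true positive and true negative rates. A minimal, self-contained sketch of that evaluation loop, using synthetic stand-ins for the two monitored variables and scikit-learn's GaussianNB as a generic stand-in for the paper's Bayesian classifier (the data-generating parameters are invented for illustration):

      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import StratifiedKFold

      rng = np.random.default_rng(3)
      n = 600
      y = rng.integers(0, 2, n)                     # 1 = faulty insert, 0 = healthy
      X = np.column_stack([
          rng.normal(5.0 + 0.8 * y, 0.5, n),        # feed-drive power consumption (kW), shifts when faulty
          rng.normal(30.0 + 2.0 * y, 1.5, n),       # machining time per workpiece (s)
      ])

      tpr, tnr = [], []
      for train, test in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
          pred = GaussianNB().fit(X[train], y[train]).predict(X[test])
          tpr.append(np.mean(pred[y[test] == 1] == 1))   # fold true positive rate
          tnr.append(np.mean(pred[y[test] == 0] == 0))   # fold true negative rate

      print(round(float(np.mean(tpr)), 3), round(float(np.mean(tnr)), 3))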

  19. Comparison of newer IOL power calculation methods for post-corneal refractive surgery eyes

    PubMed Central

    Wang, Li; Tang, Maolong; Huang, David; Weikert, Mitchell P.; Koch, Douglas D.

    2015-01-01

    Objective To compare the newer formulae, the optical coherence tomography based intraocular lens (IOL) power formula (OCT formula) and the Barrett True-K formula (True-K), to the methods on the ASCRS calculator in eyes with previous myopic LASIK/PRK. Design Prospective case series. Participants One-hundred and four eyes of 80 patients who had previous myopic LASIK/PRK and subsequent cataract surgery and IOL implantation. Methods Using the actual refraction following cataract surgery as target refraction, predicted IOL power for each method was calculated. The IOL prediction error (PE) was obtained by subtracting the predicted IOL power from the power of IOL implanted. Main outcome measures Arithmetic IOL PEs, variances of mean arithmetic IOL PE, median refractive PE and percent of eyes within 0.5 D and 1.0 D of refractive PE. Results OCT produced smaller variance of IOL PE than did Wang-Koch-Maloney, and Shammas (P<0.05). With the OCT, True-K No History, Wang-Koch-Maloney, Shammas, Haigis-L, and Average of these 5 formulas, respectively, the median refractive PEs were 0.35 D, 0.42 D, 0.51 D, 0.48 D, 0.39 D, and 0.35 D, and the % of eyes within 0.5 D of refractive PE were 68.3%, 58.7%, 50.0%, 52.9%, 55.8%, and 67.3%, and within 1.0 D of RPE, 92.3%, 90.4%, 86.9%, 88.5%, 90.4%, and 94.2%, respectively. The OCT formula had smaller refractive PE compared to Wang-Koch-Maloney and Shammas, and the Average approach produced significantly smaller refractive PE than did all methods except OCT (all P<0.05). Conclusions The OCT and True-K No History are promising formulas. The ASCRS IOL calculator has been updated to include the OCT and Barrett True K formulas. Trial registration Intraocular Lens Power Calculation After Laser Refractive Surgery Based on Optical Coherence Tomography (OCT IOL); Identifier: NCT00532051; www.ClinicalTrials.gov PMID:26459996

  20. Automated segmentation of serous pigment epithelium detachment in SD-OCT images

    NASA Astrophysics Data System (ADS)

    Sun, Zhuli; Shi, Fei; Xiang, Dehui; Chen, Haoyu; Chen, Xinjian

    2015-03-01

    Pigment epithelium detachment (PED) is an important clinical manifestation of multiple chorio-retinal disease processes, which can cause the loss of central vision. A 3-D method is proposed to automatically segment serous PED in SD-OCT images. The proposed method consists of five steps: first, a curvature anisotropic diffusion filter is applied to remove speckle noise. Second, the graph search method is applied for abnormal retinal layer segmentation associated with retinal pigment epithelium (RPE) deformation. During this process, Bruch's membrane, which is not visible in the SD-OCT images, is estimated with the convex hull algorithm. Third, the foreground and background seeds are automatically obtained from the retinal layer segmentation result. Fourth, the serous PED is segmented based on the graph cut method. Finally, a post-processing step is applied to remove false positive regions based on mathematical morphology. The proposed method was tested on 20 SD-OCT volumes from 20 patients diagnosed with serous PED. The average true positive volume fraction (TPVF), false positive volume fraction (FPVF), Dice similarity coefficient (DSC) and positive predictive value (PPV) are 97.19%, 0.03%, 96.34% and 95.59%, respectively. Linear regression analysis shows a strong correlation (r = 0.975) comparing the segmented PED volumes with the ground truth labeled by an ophthalmology expert. The proposed method can provide clinicians with accurate quantitative information, including the shape, size and position of the PED regions, which can assist diagnosis and treatment.
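
    The overlap metrics quoted above are simple voxel counts once a binary segmentation and its ground truth are available. A small sketch on a synthetic volume; note that the false positive volume fraction is normalised here by the background volume, which may differ from the normalisation used in the paper:

      import numpy as np

      def overlap_metrics(seg, truth):
          # Voxel-overlap metrics for a binary 3-D segmentation against ground truth.
          seg, truth = seg.astype(bool), truth.astype(bool)
          tp = np.logical_and(seg, truth).sum()
          fp = np.logical_and(seg, ~truth).sum()
          tpvf = tp / truth.sum()                       # true positive volume fraction
          fpvf = fp / (~truth).sum()                    # false positive volume fraction (background-normalised)
          dsc = 2 * tp / (seg.sum() + truth.sum())      # Dice similarity coefficient
          ppv = tp / seg.sum()                          # positive predictive value
          return tpvf, fpvf, dsc, ppv

      truth = np.zeros((40, 40, 40), bool)
      truth[10:30, 10:30, 10:30] = True
      seg = np.zeros_like(truth)
      seg[11:31, 10:30, 10:30] = True                   # same cube, shifted by one voxel
      print(overlap_metrics(seg, truth))                # TPVF 0.95, FPVF ~0.007, DSC 0.95, PPV 0.95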

  1. Prevalence and distribution of paratuberculosis (Johne's disease) in cattle herds in Ireland

    PubMed Central

    2009-01-01

    A simple random survey was conducted in Ireland during 2005 to estimate the ELISA-prevalence of paratuberculosis, commonly called Johne's disease (JD), in the cattle population. Serum samples were collected from all 20,322 females/breeding bulls over 12 months-of-age in 639 herds. All samples were tested using a commercially available absorbed ELISA. The overall prevalence of infected herds, based on the presence of at least one ELISA-positive animal, was 21.4% (95% CI 18.4%-24.9%). Herd prevalence amongst dairy herds (mean 31.5%; 95% CI: 24.6%, 39.3%) was higher than amongst beef herds (mean 17.9%; 95% CI: 14.6%-21.8%). However, the animal-level prevalence was similar. The true prevalence among all animals tested was calculated to be 2.86% (95%CI: 2.76, 2.97) and for animals >= 2 yrs, it was 3.30% (95%CI: 3.17, 3.43). For animals in beef herds, true prevalence was 3.09% (95%CI: 2.93, 3.24), and for those in dairy herds, 2.74% (95%CI: 2.59, 2.90). The majority of herds had only one ELISA-positive infected animal. Only 6.4% (95% CI 4.7%-8.7%) of all herds had more than one ELISA-positive infected animal; 13.3% (CI 8.7%-19.7%) of dairy herds had from two to eight ELISA-positive infected animals; and 3.9% of beef herds (CI 2.4%-6.2%) had from two to five ELISA-positive infected animals. The true prevalence of herds infected and shedding Mycobacterium avium subspecies paratuberculosis is estimated to be 9.5% for all herd types; 20.6% for dairy herds; and 7.6% for beef herds. If ELISA-positive animals <2-years-of-age are excluded, the true herd prevalence reduces to: 9.3% for all herd types; 19.6% for dairy herds; and 6.3% for beef herds, based on a test specificity (Sp) of 99.8% and test sensitivity (Se) (i.e., ability to detect culture-positive, infected animals shedding at any level) of 27.8-28.9%. PMID:21851740
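
    Converting an apparent (test) prevalence into the true prevalence with a known test sensitivity and specificity is usually done with the Rogan-Gladen estimator. A small sketch using the sensitivity and specificity stated in the abstract; the apparent prevalence value below is a hypothetical input chosen only to land near the reported animal-level figure, not a number taken from the study:

      def true_prevalence(apparent_prevalence, se, sp):
          # Rogan-Gladen estimator: corrects apparent prevalence for imperfect Se and Sp.
          return (apparent_prevalence + sp - 1.0) / (se + sp - 1.0)

      ap = 0.010                                                # hypothetical apparent ELISA prevalence
      print(round(true_prevalence(ap, se=0.28, sp=0.998), 4))   # ≈ 0.029, close to the ~2.9-3.3% reported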

  2. The Test Validation Summary

    ERIC Educational Resources Information Center

    Frederick, Richard I.; Bowden, Stephen C.

    2009-01-01

    Common rates employed in classificatory testing are the true positive rate (TPR), false positive rate (FPR), positive predictive power (PPP), and negative predictive power (NPP). FPR and TPR are estimated from research samples representing populations to be distinguished by classificatory testing. PPP and NPP are used by clinicians to classify…
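
    The key point of the summary, that TPR and FPR come from research samples while PPP and NPP depend on the base rate in the clinician's own setting, can be made concrete with Bayes' rule. A small sketch with a hypothetical test (TPR = 0.90, FPR = 0.10):

      def predictive_values(tpr, fpr, base_rate):
          # Positive and negative predictive power from the research-sample rates
          # (TPR, FPR) and the base rate in the setting where the test is used.
          tp = tpr * base_rate
          fp = fpr * (1.0 - base_rate)
          fn = (1.0 - tpr) * base_rate
          tn = (1.0 - fpr) * (1.0 - base_rate)
          return tp / (tp + fp), tn / (tn + fn)   # (PPP, NPP)

      print(predictive_values(0.90, 0.10, base_rate=0.50))   # PPP = NPP = 0.90
      print(predictive_values(0.90, 0.10, base_rate=0.05))   # PPP drops to ≈ 0.32, NPP ≈ 0.99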

  3. The efficacy and cost of alternative strategies for systematic screening for type 2 diabetes in the U.S. population 45-74 years of age.

    PubMed

    Johnson, Susan L; Tabaei, Bahman P; Herman, William H

    2005-02-01

    To simulate the outcomes of alternative strategies for screening the U.S. population 45-74 years of age for type 2 diabetes. We simulated screening with random plasma glucose (RPG) and cut points of 100, 130, and 160 mg/dl and a multivariate equation including RPG and other variables. Over 15 years, we simulated screening at intervals of 1, 3, and 5 years. All positive screening tests were followed by a diagnostic fasting plasma glucose or an oral glucose tolerance test. Outcomes include the numbers of false-negative, true-positive, and false-positive screening tests and the direct and indirect costs. At year 15, screening every 3 years with an RPG cut point of 100 mg/dl left 0.2 million false negatives, an RPG of 130 mg/dl or the equation left 1.3 million false negatives, and an RPG of 160 mg/dl left 2.8 million false negatives. Over 15 years, the absolute difference between the most sensitive and most specific screening strategy was 4.5 million true positives and 476 million false-positives. Strategies using RPG cut points of 130 mg/dl or the multivariate equation every 3 years identified 17.3 million true positives; however, the equation identified fewer false-positives. The total cost of the most sensitive screening strategy was $42.7 billion and that of the most specific strategy was $6.9 billion. Screening for type 2 diabetes every 3 years with an RPG cut point of 130 mg/dl or the multivariate equation provides good yield and minimizes false-positive screening tests and costs.

  4. The True Self: A Psychological Concept Distinct From the Self.

    PubMed

    Strohminger, Nina; Knobe, Joshua; Newman, George

    2017-07-01

    A long tradition of psychological research has explored the distinction between characteristics that are part of the self and those that lie outside of it. Recently, a surge of research has begun examining a further distinction. Even among characteristics that are internal to the self, people pick out a subset as belonging to the true self. These factors are judged as making people who they really are, deep down. In this paper, we introduce the concept of the true self and identify features that distinguish people's understanding of the true self from their understanding of the self more generally. In particular, we consider recent findings that the true self is perceived as positive and moral and that this tendency is actor-observer invariant and cross-culturally stable. We then explore possible explanations for these findings and discuss their implications for a variety of issues in psychology.

  5. Physical losses could partially explain modest carotenoid retention in dried food products from biofortified cassava

    PubMed Central

    Tomlins, Keith Ian; Chijioke, Ugo; Westby, Andrew

    2018-01-01

    Gari, a fermented and dried semolina made from cassava, is one of the most common foods in West Africa. Recently introduced biofortified yellow cassava containing provitamin A carotenoids could help tackle vitamin A deficiency prevalent in those areas. However there are concerns because of the low retention of carotenoids during gari processing compared to other processes (e.g. boiling). The aim of the study was to assess the levels of true retention in trans–β-carotene during gari processing and investigate the causes of low retention. Influence of processing step, processor (3 commercial processors) and variety (TMS 01/1371; 01/1368 and 01/1412) were assessed. It was shown that low true retention (46% on average) during gari processing may be explained by not only chemical losses (i.e. due to roasting temperature) but also by physical losses (i.e. due to leaching of carotenoids in discarded liquids): true retention in the liquid lost from grating negatively correlated with true retention retained in the mash (R = -0.914). Moreover, true retention followed the same pattern as lost water at the different processing steps (i.e. for the commercial processors). Variety had a significant influence on true retention, carotenoid content, and trans-cis isomerisation but the processor type had little effect. It is the first time that the importance of physical carotenoid losses was demonstrated during processing of biofortified crops. PMID:29561886

  6. Physical losses could partially explain modest carotenoid retention in dried food products from biofortified cassava.

    PubMed

    Bechoff, Aurélie; Tomlins, Keith Ian; Chijioke, Ugo; Ilona, Paul; Westby, Andrew; Boy, Erick

    2018-01-01

    Gari, a fermented and dried semolina made from cassava, is one of the most common foods in West Africa. Recently introduced biofortified yellow cassava containing provitamin A carotenoids could help tackle vitamin A deficiency prevalent in those areas. However there are concerns because of the low retention of carotenoids during gari processing compared to other processes (e.g. boiling). The aim of the study was to assess the levels of true retention in trans-β-carotene during gari processing and investigate the causes of low retention. Influence of processing step, processor (3 commercial processors) and variety (TMS 01/1371; 01/1368 and 01/1412) were assessed. It was shown that low true retention (46% on average) during gari processing may be explained by not only chemical losses (i.e. due to roasting temperature) but also by physical losses (i.e. due to leaching of carotenoids in discarded liquids): true retention in the liquid lost from grating negatively correlated with true retention retained in the mash (R = -0.914). Moreover, true retention followed the same pattern as lost water at the different processing steps (i.e. for the commercial processors). Variety had a significant influence on true retention, carotenoid content, and trans-cis isomerisation but the processor type had little effect. It is the first time that the importance of physical carotenoid losses was demonstrated during processing of biofortified crops.
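
    True retention is a mass-balance quantity: the carotenoid recovered in the processed product divided by the carotenoid in the raw material that went into it, so it captures physical losses (material discarded with liquids) as well as chemical degradation. A small sketch with hypothetical gari figures chosen only to land near the ~46% average reported above:

      def true_retention(c_processed, w_processed, c_raw, w_raw):
          # Carotenoid recovered in the processed food as a fraction of the carotenoid
          # present in the raw material actually used (concentration x mass basis).
          return (c_processed * w_processed) / (c_raw * w_raw)

      # Hypothetical example: 1 kg fresh root at 6 ug/g trans-beta-carotene
      # yields 0.25 kg gari at 11 ug/g.
      print(round(true_retention(11.0, 0.25, 6.0, 1.0), 2))   # 0.46, i.e. ~46% retained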

  7. Statistical controversies in clinical research: building the bridge to phase II-efficacy estimation in dose-expansion cohorts.

    PubMed

    Boonstra, P S; Braun, T M; Taylor, J M G; Kidwell, K M; Bellile, E L; Daignault, S; Zhao, L; Griffith, K A; Lawrence, T S; Kalemkerian, G P; Schipper, M J

    2017-07-01

    Regulatory agencies and others have expressed concern about the uncritical use of dose expansion cohorts (DECs) in phase I oncology trials. Nonetheless, by several metrics (prevalence, size, and number), their popularity is increasing. Although early efficacy estimation in defined populations is a common primary endpoint of DECs, the types of designs best equipped to identify efficacy signals have not been established. We conducted a simulation study of six phase I design templates with multiple DECs: three dose-assignment/adjustment mechanisms crossed with two analytic approaches for estimating efficacy after the trial is complete. We also investigated the effect of sample size and interim futility analysis on trial performance. Identifying populations in which the treatment is efficacious (true positives) and weeding out inefficacious treatment/populations (true negatives) are competing goals in these trials. Thus, we estimated true and false positive rates for each design. Adaptively updating the maximum tolerated dose (MTD) during the DEC improved true positive rates by 8-43% compared with fixing the dose during the DEC phase, while maintaining false positive rates. Inclusion of an interim futility analysis decreased the number of patients treated under inefficacious DECs without hurting performance. A substantial gain in efficiency is obtainable using a design template that statistically models toxicity and efficacy against dose level during expansion. Design choices for dose expansion should be motivated by and based upon expected performance. Similar to the common practice in single-arm phase II trials, cohort sample sizes should be justified with respect to their primary aim and include interim analyses to allow for early stopping. © The Author 2017. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  8. Stochastic resonance-enhanced laser-based particle detector.

    PubMed

    Dutta, A; Werner, C

    2009-01-01

    This paper presents a laser-based particle detector whose response was enhanced by modulating the laser diode with a white-noise generator. A laser sheet was generated to cast a shadow of the object on a 200 dots per inch, 512 x 1 pixel linear sensor array. The laser diode was modulated with a white-noise generator to achieve stochastic resonance. The white-noise generator essentially amplified the wide-bandwidth (several hundred MHz) noise produced by a reverse-biased zener diode operating in junction-breakdown mode. The gain of the amplifier in the white-noise generator was set such that the receiver operating characteristic plot provided the best discriminability. A monofiber 40 AWG (approximately 80 µm) wire was detected with an approximately 88% true-positive rate and an approximately 19% false-positive rate in the presence of white-noise modulation, and with an approximately 71% true-positive rate and an approximately 15% false-positive rate in its absence.
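
    The following sketch illustrates, under assumed numbers, how an ROC-based criterion could be used to pick the white-noise gain with the best discriminability. Here discriminability is scored with Youden's J (TPR - FPR), which is one common choice but not necessarily the criterion the authors used; the gain settings and rates are made up.

    ```python
    # A minimal sketch (not the authors' code): pick the white-noise gain whose
    # ROC operating point gives the best discriminability, scored with
    # Youden's J = TPR - FPR. Gain values and rates below are illustrative.
    roc_points = {
        # gain setting: (true_positive_rate, false_positive_rate)
        0.0: (0.71, 0.15),   # no noise modulation
        0.5: (0.80, 0.17),
        1.0: (0.88, 0.19),   # noise level that helps detection most in this toy set
        1.5: (0.86, 0.25),
    }

    best_gain = max(roc_points, key=lambda g: roc_points[g][0] - roc_points[g][1])
    tpr, fpr = roc_points[best_gain]
    print(f"best gain={best_gain}, TPR={tpr:.2f}, FPR={fpr:.2f}")
    ```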

  9. Test-based exclusion diets in gastro-esophageal reflux disease patients: a randomized controlled pilot trial.

    PubMed

    Caselli, Michele; Zuliani, Giovanni; Cassol, Francesca; Fusetti, Nadia; Zeni, Elena; Lo Cascio, Natalina; Soavi, Cecilia; Gullini, Sergio

    2014-12-07

    To investigate the clinical response of gastro-esophageal reflux disease (GERD) symptoms to exclusion diets based on food intolerance tests. A double-blind, randomized, controlled pilot trial was performed in 38 GERD patients who were partial or complete non-responders to proton pump inhibitor (PPI) treatment. Fasting blood samples were obtained from each patient; a leukocytotoxic test was performed by incubating the blood with a panel of 60 food items to be tested. The reaction of leukocytes (rounding, vacuolization, lack of movement, flattening, fragmentation or disintegration of the cell wall) was then evaluated by optical microscopy and rated as follows: level 0 = negative, level 1 = slightly positive, level 2 = moderately positive, and level 3 = highly positive. A "true" diet excluding food items inducing moderate-severe reactions, and a "control" diet including them, was developed for each patient. Twenty patients then received the "true" diet and 18 the "control" diet; after one month (T1) symptom severity was scored by the GERD impact scale (GIS). Patients in the "control" group were then switched to the "true" diet, and symptom severity was re-assessed after three months (T2). At baseline (T0) the mean GIS global score was 6.68 (range: 5-12) with no difference between the "true" and control groups (6.6 ± 1.19 vs 6.7 ± 1.7). All patients reacted moderately/severely to at least 1 food (range: 5-19), with a significantly greater number of food substances inducing reaction in controls compared with the "true" diet group (11.6 vs 7.0, P < 0.001). The food items most frequently involved were milk, lettuce, brewer's yeast, pork, coffee, rice, sole, asparagus, and tuna, followed by eggs, tomato, grain, shrimps, and chemical yeast. At T1 both groups displayed a reduction of the GIS score ("true" group 3.3 ± 1.7, -50%, P = 0.001; control group 4.9 ± 2.8, -26.9%, P = 0.02), although the GIS score was significantly lower in the "true" vs the "control" group (P = 0.04). At T2, after the diet switch, the "control" group showed a further reduction in GIS score (2.7 ± 1.9, -44.9%, P = 0.01), while the "true" group did not (2.6 ± 1.8, -21.3%, P = 0.19), so that the GIS scores did not differ between the two groups. Our results suggest that food intolerance may play a role in the development of GERD symptoms, and leukocytotoxic test-based exclusion diets may be a possible therapeutic approach when PPIs are not effective or indicated.

  10. Are overreferrals on developmental screening tests really a problem?

    PubMed

    Glascoe, F P

    2001-01-01

    Developmental screening tests, even those meeting standards for screening test accuracy, produce numerous false-positive results for 15% to 30% of children. This is thought to produce unnecessary referrals for diagnostic testing or special services and increase the cost of screening programs. To explore whether children who pass screening tests differ in important ways from those who do not and to determine whether children overreferred for testing benefit from the scrutiny of diagnostic testing and treatment planning. Subjects were a national sample of 512 parents and their children (age range of the children, 7 months to 8 years) who participated in validation studies of various screening tests. Psychological examiners adhering to standardized directions obtained informed consent and administered at least 2 developmental screening measures (the Brigance Screens, the Battelle Developmental Inventory Screening Test, the Denver-II, and the Parents' Evaluations of Developmental Status) and a concurrent battery of diagnostic measures, including tests of intelligence, language, and academic achievement (for children aged 2½ years and older). The performance on diagnostic measures of children who failed screening but were not found to have a disability (false positives) was compared with that of children who passed screening and did not have a disability on diagnostic testing (true negatives). Children with false-positive scores performed significantly (P<.001) lower on diagnostic measures than did children with true-negative scores. The false-positive group had scores in adaptive behavior, language, intelligence, and academic achievement that were 9 to 14 points lower than the scores of those in the true-negative group. When viewing the likelihood of scoring below the 25th percentile on diagnostic measures, children with false-positive scores had a relative risk of 2.6 in adaptive behavior (95% confidence interval [CI], 1.67-4.21), 3.1 in language skills (95% CI, 1.90-5.20), 6.7 on intelligence tests (95% CI, 3.28-13.50), and 4.9 on academic measures (95% CI, 2.61-9.28). Overall, 151 (70%) of the children with false-positive results scored below the 25th percentile on 1 or more diagnostic measures (the point at which most children have difficulty benefiting from typical classroom instruction) in contrast with 64 (29%) of the children with true-negative scores (odds ratio, 5.6; 95% CI, 3.73-8.49). Children with false-positive scores were also more likely to be nonwhite and to have parents who had not graduated from high school. Performance differences between children with true-negative scores and children with false-positive scores continued to be significant (P<.001) even after adjusting for sociodemographic differences between groups. Children overreferred for diagnostic testing by developmental screens perform substantially lower than children with true-negative scores on measures of intelligence, language, and academic achievement, the 3 best predictors of school success. These children also carry more psychosocial risk factors, such as limited parental education and minority status. Thus, children with false-positive screening results are an at-risk group for whom diagnostic testing may not be an unnecessary expense but rather a beneficial and needed service that can help focus intervention efforts.
Although such testing will not indicate a need for special education placement, it can be useful in identifying children's needs for other programs known to improve language, cognitive, and academic skills, such as Head Start, Title I services, tutoring, private speech-language therapy, and quality day care.

  11. A study on photodegradation of methadone, EDDP, and other drugs of abuse in hair exposed to controlled UVB radiation.

    PubMed

    Favretto, Donata; Tucci, Marianna; Monaldi, Alice; Ferrara, Santo Davide; Miolo, Giorgia

    2014-06-01

    The drug content of hair may be affected by washing, chemical or thermal treatments, the use of cosmetics, or exposure to the environment. Knowledge concerning the effect of natural or artificial light on drug content in hair can be helpful to the forensic toxicologist, in particular when investigating drug concentrations above or below pre-determined cut-offs. The photodegradation of methadone and its metabolite, 2-ethyl-1,5-dimethyl-3,3-diphenylpyrrolidine (EDDP), was studied in authentic positive hair samples by comparing drug concentrations determined by liquid chromatography-high resolution mass spectrometry before and after exposure to UVB light (in vivo study). The same approach was applied in order to investigate the light sensitivity of opiates (6-monoacetylmorphine and morphine) and cocainics (cocaine and benzoylecgonine) in true positive hair. The yields of photodegradation were calculated for each drug class in eight different positive hair samples irradiated by UVB at 300 J/cm², obtaining averages, ranges and standard deviations. In parallel, the photostability of all the compounds as 10⁻⁵-10⁻⁴ M standard solutions in methanol was examined by means of UVB light irradiation in the range 0-100 J/cm², followed by UV/Vis spectroscopic analysis and direct infusion electrospray ionization-high resolution mass spectrometry (in vitro study). In hair, methadone was shown to be significantly affected by light (photodegradation of 55% on average), while its metabolite EDDP proved to be more photostable (17%). 6-Monoacetylmorphine, morphine, benzoylecgonine, and cocaine were more photostable than methadone in vivo (on average, 21%, 17%, 20%, and 11% degradation, respectively). When irradiated in standard solutions, the target molecules exhibited larger photodegradation than in vivo, with the exception of cocaine (photodegradation for methadone up to 70%, 6-monoacetylmorphine and morphine up to 90%, benzoylecgonine up to 67%, cocaine up to 15%). Some factors possibly affecting the yields of photodegradation in hair and partially explaining the differences observed between the in vivo and the in vitro studies were also investigated, such as the colour of hair (the role of melanin) and the integrity of the keratin matrix. Copyright © 2014 John Wiley & Sons, Ltd.

  12. Position feedback control system

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-01-01

    Disclosed is a system and method for independently evaluating the spatial positional performance of a machine having a movable member, comprising an articulated coordinate measuring machine comprising: a first revolute joint; a probe arm, having a proximal end rigidly attached to the first joint, and having a distal end with a probe tip attached thereto, wherein the probe tip is pivotally mounted to the movable machine member; a second revolute joint; a first support arm serially connecting the first joint to the second joint; and coordinate processing means, operatively connected to the first and second revolute joints, for calculating the spatial coordinates of the probe tip; means for kinematically constraining the articulated coordinate measuring machine to a working surface; and comparator means, in operative association with the coordinate processing means and with the movable machine, for comparing the true position of the movable machine member, as measured by the true position of the probe tip, with the desired position of the movable machine member.

  13. What's Living in Your World?

    ERIC Educational Resources Information Center

    Powell, Karen; Stiller, John

    2005-01-01

    The average biotech catalog contains a dizzying array of kits offering tried-and-true protocols in molecular biology and biochemistry. Prepackaged experiences, ranging from DNA fingerprinting to protein purification, are now available to high school students. Although commercial kits designed for education provide excellent hands-on experiences in…

  14. SU-E-T-109: Development of An End-To-End Test for the Varian TrueBeam™ with a Novel Multiple-Dosimetric Modality H&N Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakjevskii, V; Knill, C; Rakowski, J

    2014-06-01

    Purpose: To develop a comprehensive end-to-end test for Varian's TrueBeam linear accelerator for head and neck (H&N) IMRT using a custom phantom designed to utilize multiple dosimetry devices. Methods: The initial end-to-end test and custom H&N phantom were designed to yield maximum information in anatomical regions significant to H&N plans with respect to: i) geometric accuracy, ii) dosimetric accuracy, and iii) treatment reproducibility. The phantom was designed in collaboration with Integrated Medical Technologies. A CT image was taken with a 1 mm slice thickness. The CT was imported into Varian's Eclipse treatment planning system, where the OARs and the PTV were contoured. A clinical template was used to create an eight-field static gantry angle IMRT plan. After optimization, dose was calculated using the Analytic Anisotropic Algorithm with inhomogeneity correction. Plans were delivered with a TrueBeam equipped with a high definition MLC. Preliminary end-to-end results were measured using film and ion chambers. Ion chamber dose measurements were compared to the TPS. Films were analyzed with FilmQAPro using the composite gamma index. Results: Film analysis for the initial end-to-end plan with a geometrically simple PTV showed average gamma pass rates >99% with a passing criterion of 3% / 3 mm. Film analysis of a plan with a more realistic, i.e. complex, PTV yielded pass rates >99% in clinically important regions containing the PTV, spinal cord and parotid glands. Ion chamber measurements were on average within 1.21% of the calculated dose for both plans. Conclusion: Trials have demonstrated that our end-to-end testing methods provide baseline values for the dosimetric and geometric accuracy of Varian's TrueBeam system.

  15. Microstructure and critical strain of dynamic recrystallization of 6082 aluminum alloy in thermal deformation

    NASA Astrophysics Data System (ADS)

    Ren, W. W.; Xu, C. G.; Chen, X. L.; Qin, S. X.

    2018-05-01

    Using high-temperature compression experiments, true stress-true strain curves of 6082 aluminium alloy were obtained at temperatures of 460 °C-560 °C and strain rates of 0.01 s⁻¹-10 s⁻¹. The effects of deformation temperature and strain rate on the microstructure were investigated, and (-∂lnθ/∂ε)-ε curves were plotted from the σ-ε curves. The critical strains for dynamic recrystallization of 6082 aluminium alloy were obtained. The results showed that lower strain rates were beneficial for increasing the volume fraction of recrystallization, although the average recrystallized grain size was coarse; high strain rates refined the average grain size, but the volume fraction of dynamically recrystallized grains was smaller than at low strain rates. High temperature reduced the dislocation density and provided less driving force for recrystallization, so that coarse grains remained. The dynamic recrystallization critical strain model and the thermal experiment results can effectively predict the recrystallization critical point of 6082 aluminium alloy during thermal deformation.
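
    As a rough illustration of the construction described above, the sketch below builds a toy true stress-true strain curve, computes the work-hardening rate θ = dσ/dε, and reads the critical strain off the minimum of the (-∂lnθ/∂ε)-ε curve. The synthetic curve and its parameters are illustrative assumptions, not the paper's data.

    ```python
    import numpy as np

    # Minimal sketch (illustrative, not the paper's data or code) of the
    # construction described above: theta = d(sigma)/d(epsilon) is taken from a
    # true stress-true strain curve and the critical strain for dynamic
    # recrystallization is read off at the minimum of -d(ln theta)/d(epsilon).
    eps = np.linspace(0.01, 0.6, 600)
    eps_c_true = 0.22                               # critical strain built into the toy curve
    theta_syn = np.exp(6.0 - 3.0 * (eps - eps_c_true) - 40.0 * (eps - eps_c_true) ** 3)
    # integrate theta to get a synthetic flow curve sigma(eps), as if it were measured data
    sigma = np.concatenate(([0.0], np.cumsum(0.5 * (theta_syn[1:] + theta_syn[:-1]) * np.diff(eps))))

    theta = np.gradient(sigma, eps)                 # recover the work-hardening rate from sigma
    neg_dlntheta = -np.gradient(np.log(theta), eps)
    eps_c_est = eps[np.argmin(neg_dlntheta)]
    print(f"estimated critical strain ~ {eps_c_est:.3f} (built-in value {eps_c_true})")
    ```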

  16. Saturated laser fluorescence in turbulent sooting flames at high pressure

    NASA Technical Reports Server (NTRS)

    King, G. B.; Carter, C. D.; Laurendeau, N. M.

    1984-01-01

    The primary objective was to develop a quantitative, single-pulse, laser-saturated fluorescence (LSF) technique for the measurement of radical species concentrations in practical flames. The species of immediate interest was the hydroxyl radical. Measurements were made in both turbulent premixed and diffusion flames at pressures between 1 and 20 atm. Interferences from Mie scattering were assessed by doping with particles or by controlling soot loading through variation of equivalence ratio and fuel type. The efficacy of the LSF method at high pressure was addressed by comparing fluorescence and absorption measurements in a premixed, laminar flat flame at 1-20 atm. Signal averaging over many laser shots is sufficient to determine the local concentration of radical species in laminar flames. However, for turbulent flames, single-pulse measurements are more appropriate, since a statistically significant number of laser pulses is needed to determine the probability density function (PDF). PDFs can be analyzed to give true average properties and true local kinetics in turbulent, chemically reactive flows.

  17. Amplitude gating for a coached breathing approach in respiratory gated 10 MV flattening filter‐free VMAT delivery

    PubMed Central

    Lee, Richard; Gete, Ermias; Duzenli, Cheryl

    2015-01-01

    The purpose of this study was to investigate amplitude gating combined with a coached breathing strategy for 10 MV flattening filter‐free (FFF) volumetric‐modulated arc therapy (VMAT) on the Varian TrueBeam linac. Ten patient plans for VMAT SABR liver were created using the Eclipse treatment planning system (TPS). The verification plans were then transferred to a CT‐scanned Quasar phantom and delivered on a TrueBeam linac using a 10 MV FFF beam and Varian's real‐time position management (RPM) system for respiratory gating based on breathing amplitude. Breathing traces were acquired from ten patients using two kinds of breathing patterns: free breathing and an interrupted (~5 s pause) end of exhale coached breathing pattern. Ion chamber and Gafchromic film measurements were acquired for a gated delivery while the phantom moved under the described breathing patterns, as well as for a nongated stationary phantom delivery. The gate window was set to obtain a range of residual target motion from 2–5 mm. All gated deliveries on a moving phantom have been shown to be dosimetrically equivalent to the nongated deliveries on a static phantom, with differences in point dose measurements under 1% and average gamma 2%/2 mm agreement above 98.7%. Comparison with the treatment planning system also resulted in good agreement, with differences in point‐dose measurements under 2.5% and average gamma 3%/3 mm agreement of 97%. The use of a coached breathing pattern significantly increases the duty cycle, compared with free breathing, and allows for shorter treatment times. Patients' free‐breathing patterns contain considerable variability and, although dosimetric results for gated delivery may be acceptable, it is difficult to achieve efficient treatment delivery. A coached breathing pattern combined with a 5 mm amplitude gate, resulted in both high‐quality dose distributions and overall shortest gated beam delivery times. PACS number: 87.55.Qr PMID:26219000

  18. MO-FG-BRA-08: A Preliminary Study of Gold Nanoparticles Enhanced Diffuse Optical Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, K; Dogan, N; Yang, Y

    2015-06-15

    Purpose: To develop an imaging method using gold nanoparticles (GNP) to enhance diffuse optical tomography (DOT) for better tumor detection. Methods: Experiments were performed on a tissue-simulating cylindrical optical phantom (30 mm diameter, 60 mm length). The GNP used were gold nanorods (10 nm diameter, 44 nm length) with peak light absorption at 840 nm. 0.085 ml of GNP colloid at 96 nM concentration was loaded into a 6 mm diameter cylindrical hole in the phantom. An 856 nm laser beam (14 mW) was used as the light source to irradiate the phantom at multiple locations by rotating and elevating the phantom. A CCD camera captured the light transmission through the phantom for each irradiation, with a total of 40 projections (8 rotation angles in 45-degree steps and 5 elevations 3 mm apart). Cone beam CT of the phantom was used to generate the three-dimensional mesh for DOT reconstruction and to identify the true location of the GNP volume. A forward simulation was performed with known phantom optical properties to establish a relationship between the absorption coefficient and the concentration of the GNP by matching the simulated and measured transmission. DOT image reconstruction was performed to restore the GNP within the phantom. In addition, a region-constrained reconstruction was performed by confining the solutions within the GNP volume detected from CT. Results: The position of the GNP volume was reconstructed with <2 mm error. The reconstructed average GNP concentration within an identical volume was 104 nM, an 8% difference from the truth. When the CT was used as a priori information, the reconstructed average GNP concentration was 239 nM, about 2.5 times the true concentration. Conclusion: This study is the first to demonstrate GNP-enhanced DOT with phantom imaging. The GNP can be differentiated from their surrounding background. However, the reconstruction method needs to be improved for better spatial and quantification accuracy.

  19. Using electronic data to predict the probability of true bacteremia from positive blood cultures.

    PubMed

    Wang, S J; Kuperman, G J; Ohno-Machado, L; Onderdonk, A; Sandige, H; Bates, D W

    2000-01-01

    As part of a project to help physicians make more appropriate treatment decisions, we implemented a clinical prediction rule that computes the probability of true bacteremia for positive blood cultures and displays this information when culture results are viewed online. Prior to implementing the rule, we performed a revalidation study to verify the accuracy of the previously published logistic regression model. We randomly selected 114 cases of positive blood cultures from a recent one-year period and performed a paper chart review with the help of infectious disease experts to determine whether the cultures were true positives or contaminants. Based on the results of this revalidation study, we updated the probabilities reported by the model and made additional enhancements to improve the accuracy of the rule. Next, we implemented the rule into our hospital's laboratory computer system so that the probability information was displayed with all positive blood culture results. We displayed the prediction rule information on approximately half of the 2184 positive blood cultures at our hospital that were randomly selected during a 6-month period. During the study, we surveyed 54 housestaff to obtain their opinions about the usefulness of this intervention. Fifty percent (27/54) indicated that the information had influenced their belief of the probability of bacteremia in their patients, and in 28% (15/54) of cases it changed their treatment decision. Almost all (98% (53/54)) indicated that they wanted to continue receiving this information. We conclude that the probability information provided by this clinical prediction rule is considered useful to physicians when making treatment decisions.
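
    A minimal sketch of how such a prediction rule can turn logistic-regression coefficients into a displayed probability is shown below. The predictor names and coefficients are hypothetical placeholders, not the published model.

    ```python
    import math

    # Hypothetical predictors and coefficients for illustration only; the
    # published model used different variables and estimates.
    COEFFICIENTS = {
        "intercept": -2.1,
        "temperature_ge_38_5": 0.9,
        "indwelling_catheter": 0.7,
        "gram_negative_rods": 1.6,
        "single_positive_bottle": -1.2,
    }

    def probability_true_bacteremia(features):
        """Return P(true bacteremia) for a dict of 0/1 predictor values."""
        z = COEFFICIENTS["intercept"] + sum(
            COEFFICIENTS[name] * value for name, value in features.items()
        )
        return 1.0 / (1.0 + math.exp(-z))

    p = probability_true_bacteremia(
        {"temperature_ge_38_5": 1, "indwelling_catheter": 0,
         "gram_negative_rods": 1, "single_positive_bottle": 0}
    )
    print(f"Estimated probability of true bacteremia: {p:.0%}")
    ```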

  20. Multilingual vocal emotion recognition and classification using back propagation neural network

    NASA Astrophysics Data System (ADS)

    Kayal, Apoorva J.; Nirmal, Jagannath

    2016-03-01

    This work implements classification of different emotions in different languages using Artificial Neural Networks (ANN). Mel Frequency Cepstral Coefficients (MFCC) and Short Term Energy (STE) have been considered for creation of feature set. An emotional speech corpus consisting of 30 acted utterances per emotion has been developed. The emotions portrayed in this work are Anger, Joy and Neutral in each of English, Marathi and Hindi languages. Different configurations of Artificial Neural Networks have been employed for classification purposes. The performance of the classifiers has been evaluated by False Negative Rate (FNR), False Positive Rate (FPR), True Positive Rate (TPR) and True Negative Rate (TNR).
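
    The evaluation metrics listed above can be computed per emotion in a one-vs-rest fashion from a confusion matrix; the sketch below shows one way to do this (the labels and predictions are made up, and this is not the authors' code).

    ```python
    from collections import Counter

    # Per-class rates for a multi-class classifier, one-vs-rest:
    # TPR = TP/(TP+FN), TNR = TN/(TN+FP), FPR = 1-TNR, FNR = 1-TPR.
    def one_vs_rest_rates(y_true, y_pred, positive_class):
        counts = Counter()
        for t, p in zip(y_true, y_pred):
            actual = "P" if t == positive_class else "N"
            predicted = "P" if p == positive_class else "N"
            counts[(actual, predicted)] += 1
        tp, fn = counts[("P", "P")], counts[("P", "N")]
        tn, fp = counts[("N", "N")], counts[("N", "P")]
        tpr = tp / (tp + fn) if tp + fn else 0.0
        tnr = tn / (tn + fp) if tn + fp else 0.0
        return {"TPR": tpr, "FNR": 1 - tpr, "TNR": tnr, "FPR": 1 - tnr}

    y_true = ["anger", "joy", "neutral", "anger", "joy", "neutral"]
    y_pred = ["anger", "joy", "joy",     "anger", "neutral", "neutral"]
    print(one_vs_rest_rates(y_true, y_pred, "joy"))
    ```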

  1. Neural events that underlie remembering something that never happened.

    PubMed

    Gonsalves, B; Paller, K A

    2000-12-01

    We induced people to experience a false-memory illusion by first asking them to visualize common objects when cued with the corresponding word; on some trials, a photograph of the object was presented 1800 ms after the cue word. We then tested their memory for the photographs. Posterior brain potentials in response to words at encoding were more positive if the corresponding object was later falsely remembered as a photograph. Similar brain potentials during the memory test were more positive for true than for false memories. These results implicate visual imagery in the generation of false memories and provide neural correlates of processing differences between true and false memories.

  2. Addendum to the article: Misuse of null hypothesis significance testing: Would estimation of positive and negative predictive values improve certainty of chemical risk assessment?

    PubMed

    Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf

    2015-03-01

    We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: They inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results or more extreme results if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R). The present document discusses challenges of estimating R.
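
    Assuming the usual definitions, PPV and NPV can be computed from the a priori probability R together with the significance level α and the statistical power, as in the sketch below; the formulas are the standard ones and the parameter values are illustrative.

    ```python
    # With prior probability R that a real effect exists, significance level
    # alpha and statistical power (1 - beta):
    #   PPV = power*R / (power*R + alpha*(1 - R))
    #   NPV = (1 - alpha)*(1 - R) / ((1 - alpha)*(1 - R) + (1 - power)*R)
    def ppv_npv(r, alpha=0.05, power=0.8):
        ppv = (power * r) / (power * r + alpha * (1.0 - r))
        npv = ((1.0 - alpha) * (1.0 - r)) / ((1.0 - alpha) * (1.0 - r) + (1.0 - power) * r)
        return ppv, npv

    for r in (0.1, 0.5, 0.9):  # illustrative a priori probabilities
        ppv, npv = ppv_npv(r)
        print(f"R={r:.1f}: PPV={ppv:.2f}, NPV={npv:.2f}")
    ```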

  3. Automated detection of new impact sites on Martian surface from HiRISE images

    NASA Astrophysics Data System (ADS)

    Xin, Xin; Di, Kaichang; Wang, Yexin; Wan, Wenhui; Yue, Zongyu

    2017-10-01

    In this study, an automated method for Martian new impact site detection from single images is presented. It first extracts dark areas in the full high-resolution image, then detects new impact craters within the dark areas using a cascade classifier that combines local binary pattern features and Haar-like features, trained by an AdaBoost machine learning algorithm. Experimental results using 100 HiRISE images show that the overall detection rate of the proposed method is 84.5%, with a true positive rate of 86.9%. The detection rate and true positive rate in flat regions are 93.0% and 91.5%, respectively.

  4. Serial testing for latent tuberculosis using QuantiFERON-TB Gold In-Tube: A Markov model.

    PubMed

    Moses, Mark W; Zwerling, Alice; Cattamanchi, Adithya; Denkinger, Claudia M; Banaei, Niaz; Kik, Sandra V; Metcalfe, John; Pai, Madhukar; Dowdy, David

    2016-07-29

    Healthcare workers (HCWs) in low-incidence settings are often serially tested for latent TB infection (LTBI) with the QuantiFERON-TB Gold In-Tube (QFT) assay, which exhibits frequent conversions and reversions. The clinical impact of such variability on serial testing remains unknown. We used a microsimulation Markov model that accounts for major sources of variability to project diagnostic outcomes in a simulated North American HCW cohort. Serial testing using a single QFT with the recommended conversion cutoff (IFN-γ > 0.35 IU/mL) resulted in 24.6% (95% uncertainty range, UR: 23.8-25.5) of the entire population testing false-positive over ten years. Raising the cutoff to >1.0 IU/mL or confirming initial positive results with a (presumed independent) second test reduced this false-positive percentage to 2.3% (95%UR: 2.0-2.6%) or 4.1% (95%UR: 3.7-4.5%), but also reduced the proportion of true incident infections detected within the first year of infection from 76.5% (95%UR: 66.3-84.6%) to 54.8% (95%UR: 44.6-64.5%) or 61.5% (95%UR: 51.6-70.9%), respectively. Serial QFT testing of HCWs in North America may result in tremendous over-diagnosis and over-treatment of LTBI, with nearly thirty false-positives for every true infection diagnosed. Using higher cutoffs for conversion or confirmatory tests (for initial positives) can mitigate these effects, but will also diagnose fewer true infections.
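
    A toy microsimulation in the spirit of the model described above is sketched below; the annual infection risk and spurious-conversion probability are illustrative assumptions, not the published parameter estimates, so the resulting ratio only loosely mirrors the reported one.

    ```python
    import random

    # Illustrative assumptions only, not the study's parameter values.
    P_TRUE_INFECTION_PER_YEAR = 0.001     # annual risk of new LTBI
    P_FALSE_CONVERSION_PER_TEST = 0.02    # spurious QFT conversion at the chosen cutoff
    YEARS, N_HCW = 10, 100_000

    random.seed(1)
    false_pos = true_pos = 0
    for _ in range(N_HCW):
        for _ in range(YEARS):
            if random.random() < P_TRUE_INFECTION_PER_YEAR:
                true_pos += 1         # assume the new infection is detected that year
                break                 # worker labelled positive; serial testing stops
            if random.random() < P_FALSE_CONVERSION_PER_TEST:
                false_pos += 1        # spurious conversion on an uninfected worker
                break
    print(f"false positives per true infection detected: {false_pos / max(true_pos, 1):.1f}")
    ```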

  5. Emotionally Negative Pictures Enhance Gist Memory

    PubMed Central

    Bookbinder, S. H.; Brainerd, C. J.

    2016-01-01

    In prior work on how true and false memory are influenced by emotion, valence and arousal have often been conflated. Thus, it is difficult to say which specific effects are due to valence and which are due to arousal. In the present research, we used a picture-memory paradigm that allowed emotional valence to be manipulated with arousal held constant. Negatively-valenced pictures elevated both true and false memory, relative to positive and neutral pictures. Conjoint recognition modeling revealed that negative valence (a) reduced erroneous suppression of true memories and (b) increased the familiarity of the semantic content of both true and false memories. Overall, negative valence impaired the verbatim side of episodic memory but enhanced the gist side, and these effects persisted even after a week-long delay. PMID:27454002

  6. Investigation into Text Classification With Kernel Based Schemes

    DTIC Science & Technology

    2010-03-01

    Excerpt recovered from the report front matter and text: TDM, term-document matrix; TMG, Text to Matrix Generator; TN, true negative; TP, true positive; VSM, vector space model. Documents are represented as a term-document matrix; common evaluation metrics and the Text to Matrix Generator (TMG) software package are described, and a chapter introduces the indexing capabilities of the TMG Toolbox.

  7. Endovascular treatment of ruptured true posterior communicating artery aneurysms.

    PubMed

    Yang, Yonglin; Su, Wandong; Meng, Qinghai

    2015-01-01

    Although true posterior communicating artery (PCoA) aneurysms are rare, they are of vital importance. We reviewed 9 patients with this fatal disease, who were treated with endovascular embolization, and discussed the meaning of endovascular embolization for the treatment of true PCoA aneurysms. From September 2006 to May 2012, 9 patients with digital subtraction angiography (DSA) confirmed true PCoA aneurysms were treated with endovascular embolization. Patients were followed up with a minimal duration of 17 months and assessed by Glasgow Outcome Scale (GOS) score. All the patients presented with spontaneous subarachnoid hemorrhage from the ruptured aneurysms. The ratio of males to females was 1:2, and the average age of onset was 59.9 (ranging from 52 to 72) years. The preoperative Hunt-Hess grade scores were I to III. All patients recovered satisfactorily. No permanent neurological deficits were left. Currently, endovascular embolization can be recommended as the top choice for the treatment of most true PCoA aneurysms, due to its advanced technique, especially the application of the stent-assisted coiling technique, combined with its advantages of minimal invasiveness and quick recovery. However, the choice of treatment method should be based on the clinical and anatomical characteristics of the aneurysm and the skillfulness of the surgeon.

  8. The True- and Eccentric-Anomaly Parameterizations of the Perturbed Kepler Motion

    NASA Astrophysics Data System (ADS)

    Gergely, László Á.; Perjés, Zoltán I.; Vasúth, Mátyás

    2000-01-01

    The true- and eccentric-anomaly parameterizations of the Kepler motion are generalized to quasi-periodic orbits, by considering perturbations of the radial part of the kinetic energy in the form of a series of negative powers of the orbital radius. A toolbox of methods for averaging observables as functions of the energy E and angular momentum L is developed. A broad range of systems governed by the generic Brumberg force, as well as recent applications in the theory of gravitational radiation, involve integrals of these functions over a period of motion. These integrals are evaluated by using the residue theorem. In the course of this work, two important questions emerge: (1) When do the true- and eccentric-anomaly parameters exist? (2) Under what circumstances, and why, are the poles in the origin? The purpose of this paper is to find the answer to these queries.

  9. Contact Prediction for Beta and Alpha-Beta Proteins Using Integer Linear Optimization and its Impact on the First Principles 3D Structure Prediction Method ASTRO-FOLD

    PubMed Central

    Rajgaria, R.; Wei, Y.; Floudas, C. A.

    2010-01-01

    An integer linear optimization model is presented to predict residue contacts in β, α + β, and α/β proteins. The total energy of a protein is expressed as sum of a Cα – Cα distance dependent contact energy contribution and a hydrophobic contribution. The model selects contacts that assign lowest energy to the protein structure while satisfying a set of constraints that are included to enforce certain physically observed topological information. A new method based on hydrophobicity is proposed to find the β-sheet alignments. These β-sheet alignments are used as constraints for contacts between residues of β-sheets. This model was tested on three independent protein test sets and CASP8 test proteins consisting of β, α + β, α/β proteins and was found to perform very well. The average accuracy of the predictions (separated by at least six residues) was approximately 61%. The average true positive and false positive distances were also calculated for each of the test sets and they are 7.58 Å and 15.88 Å, respectively. Residue contact prediction can be directly used to facilitate the protein tertiary structure prediction. This proposed residue contact prediction model is incorporated into the first principles protein tertiary structure prediction approach, ASTRO-FOLD. The effectiveness of the contact prediction model was further demonstrated by the improvement in the quality of the protein structure ensemble generated using the predicted residue contacts for a test set of 10 proteins. PMID:20225257

  10. Results of a Multi-Institutional Benchmark Test for Cranial CT/MR Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulin, Kenneth; Urie, Marcia M.; Cherlow, Joel M.

    2010-08-01

    Purpose: Variability in computed tomography/magnetic resonance imaging (CT/MR) cranial image registration was assessed using a benchmark case developed by the Quality Assurance Review Center to credential institutions for participation in Children's Oncology Group Protocol ACNS0221 for treatment of pediatric low-grade glioma. Methods and Materials: Two DICOM image sets, an MR and a CT of the same patient, were provided to each institution. A small target in the posterior occipital lobe was readily visible on two slices of the MR scan and not visible on the CT scan. Each institution registered the two scans using whatever software system and method it ordinarily uses for such a case. The target volume was then contoured on the two MR slices, and the coordinates of the center of the corresponding target in the CT coordinate system were reported. The average of all submissions was used to determine the true center of the target. Results: Results are reported from 51 submissions representing 45 institutions and 11 software systems. The average error in the position of the center of the target was 1.8 mm (1 standard deviation = 2.2 mm). The least variation in position was in the lateral direction. Manual registration gave significantly better results than did automatic registration (p = 0.02). Conclusion: When MR and CT scans of the head are registered with currently available software, there is an inherent uncertainty of approximately 2 mm (1 standard deviation), which should be considered when defining planning target volumes and PRVs for organs at risk on registered image sets.

  11. Controlled treatment of primary hypertension with propranolol and spironolactone. A crossover study with special reference to initial plasma renin activity.

    PubMed

    Karlberg, B E; Kågedal, B; Tegler, L; Tolagen, K; Bergman, B

    1976-03-31

    Twenty-seven patients with hypertension were randomly allocated to a 10-month crossover study. Treatment consisted of spironolactone (200 mg/day for 2 months), propranolol (320 mg/day for 2 months) and combined administration of both drugs at half the dosage. Between treatment periods placebo was given for 2 months. Fourteen patients were previously untreated. The average pretreatment blood pressure for the entire group was 188/114 ± 16/7 (mean ± standard deviation) mm Hg supine and 188/118 ± 20/9 mm Hg standing. Both spironolactone and propranolol reduced blood pressure significantly in both the supine and standing positions. Upright plasma renin activity was determined by radioimmunoassay of angiotensin I. The average initial level was 1.9 ± 1.2 (range 0.4 to 5.0) ng/ml/hr. There was a close correlation between plasma renin activity and the effects of the drugs: with increasing renin level the response to propranolol was better, whereas the opposite was true for spironolactone. The combination of spironolactone and propranolol decreased the blood pressure still further in the supine and standing positions, irrespective of initial plasma renin activity. All patients achieved a normal supine pressure. Blood pressure and plasma renin activity returned toward pretreatment values during placebo administration. It is concluded that pretreatment levels of plasma renin activity can predict the antihypertensive response to propranolol and spironolactone. The combination of the two drugs, which have different modes of action, will effectively reduce blood pressure in hypertension. The results support the concept that the renin-angiotensin-aldosterone system may be involved in primary hypertension.

  12. Combining multiple ChIP-seq peak detection systems using combinatorial fusion.

    PubMed

    Schweikert, Christina; Brown, Stuart; Tang, Zuojian; Smith, Phillip R; Hsu, D Frank

    2012-01-01

    Due to the recent rapid development of ChIP-seq technology, which uses high-throughput next-generation DNA sequencing to identify the targets of chromatin immunoprecipitation, an increasing amount of sequencing data is being generated, providing greater opportunity to analyze genome-wide protein-DNA interactions. In particular, we are interested in evaluating and enhancing computational and statistical techniques for locating protein binding sites. Many peak detection systems have been developed; in this study, we utilize the following six: CisGenome, MACS, PeakSeq, QuEST, SISSRs, and TRLocator. We define two methods to merge and rescore the regions of two peak detection systems and analyze the performance based on average precision and coverage of transcription start sites. The results indicate that ChIP-seq peak detection can be improved by fusion using score or rank combination. Our method of combination and fusion analysis would provide a means for generic assessment of available technologies and systems and assist researchers in choosing an appropriate system (or fusion method) for analyzing ChIP-seq data. This analysis offers an alternative approach for increasing true positive rates while decreasing false positive rates, and hence improving the ChIP-seq peak identification process.
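
    The score and rank combination idea can be sketched as follows for two peak callers; this is a simplified illustration (real ChIP-seq fusion would first merge overlapping intervals), and the region identifiers and scores are made up.

    ```python
    # Combine two peak detection systems by averaging min-max normalized scores
    # and by averaging ranks; regions are keyed by an id string here.
    def normalize(scores):
        lo, hi = min(scores.values()), max(scores.values())
        return {k: (v - lo) / (hi - lo) if hi > lo else 0.0 for k, v in scores.items()}

    def rank(scores):
        ordering = sorted(scores, key=scores.get, reverse=True)
        return {k: i + 1 for i, k in enumerate(ordering)}

    def fuse(system_a, system_b):
        a_n, b_n = normalize(system_a), normalize(system_b)
        a_r, b_r = rank(system_a), rank(system_b)
        regions = set(system_a) | set(system_b)
        score_comb = {r: (a_n.get(r, 0.0) + b_n.get(r, 0.0)) / 2 for r in regions}
        rank_comb = {r: (a_r.get(r, len(a_r) + 1) + b_r.get(r, len(b_r) + 1)) / 2 for r in regions}
        return score_comb, rank_comb

    macs = {"chr1:100-600": 95.0, "chr1:900-1400": 60.0, "chr2:50-500": 30.0}
    cisgenome = {"chr1:100-600": 0.9, "chr2:50-500": 0.7, "chr3:10-400": 0.2}
    score_comb, rank_comb = fuse(macs, cisgenome)
    best = max(score_comb, key=score_comb.get)
    print(best, round(score_comb[best], 2), rank_comb[best])
    ```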

  13. Fast and Analytical EAP Approximation from a 4th-Order Tensor.

    PubMed

    Ghosh, Aurobrata; Deriche, Rachid

    2012-01-01

    Generalized diffusion tensor imaging (GDTI) was developed to model complex apparent diffusivity coefficient (ADC) using higher-order tensors (HOTs) and to overcome the inherent single-peak shortcoming of DTI. However, the geometry of a complex ADC profile does not correspond to the underlying structure of fibers. This tissue geometry can be inferred from the shape of the ensemble average propagator (EAP). Though interesting methods for estimating a positive ADC using 4th-order diffusion tensors were developed, GDTI in general was overtaken by other approaches, for example, the orientation distribution function (ODF), since it is considerably difficult to recuperate the EAP from a HOT model of the ADC in GDTI. In this paper, we present a novel closed-form approximation of the EAP using Hermite polynomials from a modified HOT model of the original GDTI-ADC. Since the solution is analytical, it is fast, differentiable, and the approximation converges well to the true EAP. This method also makes the effort of computing a positive ADC worthwhile, since now both the ADC and the EAP can be used and have closed forms. We demonstrate our approach with 4th-order tensors on synthetic data and in vivo human data.

  15. Revisiting the definition of the electronic chemical potential, chemical hardness, and softness at finite temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franco-Pérez, Marco; Gázquez, José L.

    We extend the definition of the electronic chemical potential (μₑ) and chemical hardness (ηₑ) to finite temperatures by considering a reactive chemical species as a true open system to the exchange of electrons, working exclusively within the framework of the grand canonical ensemble. As in the zero temperature derivation of these descriptors, the response of a chemical reagent to electron-transfer is determined by the response of the (average) electronic energy of the system, and not by intrinsic thermodynamic properties like the chemical potential of the electron-reservoir which is, in general, different from the electronic chemical potential, μₑ. Although the dependence of the electronic energy on electron number qualitatively resembles the piecewise-continuous straight-line profile for low electronic temperatures (up to ca. 5000 K), the introduction of the temperature as a free variable smoothens this profile, so that derivatives (of all orders) of the average electronic energy with respect to the average electron number exist and can be evaluated analytically. Assuming a three-state ensemble, well-known results for the electronic chemical potential at negative (−I), positive (−A), and zero values of the fractional charge (−(I + A)/2) are recovered. Similarly, in the zero temperature limit, the chemical hardness is formally expressed as a Dirac delta function in the particle number and satisfies the well-known reciprocity relation with the global softness.

  16. Social support and relationship satisfaction in bipolar disorder.

    PubMed

    Boyers, Grace B; Simpson Rowe, Lorelei

    2018-06-01

    Social support is positively associated with individual well-being, particularly if an intimate partner provides that support. However, despite evidence that individuals with bipolar disorder (BPD) are at high risk for relationship discord and are especially vulnerable to low or inadequate social support, little research has explored the relationship between social support and relationship quality among couples in which a partner has BPD. The current study addresses this gap in the literature by examining the association between social support and relationship satisfaction in a weekly diary study. Thirty-eight opposite-sex couples who were married or living together for at least one year and in which one partner met diagnostic criteria for BPD completed up to 26 weekly diaries measuring social support and relationship satisfaction, as well as psychiatric symptoms. Results revealed that greater social support on average was associated with higher average relationship satisfaction for individuals with BPD and their partners, and that more support than usual in any given week was associated with higher relationship satisfaction that week. The converse was also true, with greater-than-average relationship satisfaction and more satisfaction than usual associated with greater social support. The results emphasize the week-to-week variability of social support and relationship satisfaction and the probable reciprocal relationship between support and satisfaction among couples in which a partner has BPD. Thus, social support may be important for maintaining relationship satisfaction and vice versa, even after controlling for concurrent mood symptoms. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Response time as a discriminator between true- and false-positive responses in suprathreshold perimetry.

    PubMed

    Artes, Paul H; McLeod, David; Henson, David B

    2002-01-01

    To report on differences between the latency distributions of responses to stimuli and to false-positive catch trials in suprathreshold perimetry. To describe an algorithm for defining response time windows and to report on its performance in discriminating between true- and false-positive responses on the basis of response time (RT). A sample of 435 largely inexperienced patients underwent suprathreshold visual field examination on a perimeter that was modified to record RTs. Data were analyzed from 60,500 responses to suprathreshold stimuli and from 523 false-positive responses to catch trials. False-positive responses had much more variable latencies than responses to suprathreshold stimuli. An algorithm defining RT windows on the basis of z-transformed individual latency samples correctly identified more than 70% of false-positive responses to catch trials, whereas fewer than 3% of responses to suprathreshold stimuli were classified as false-positive responses. Latency analysis can be used to detect a substantial proportion of false-positive responses in suprathreshold perimetry. Rejection of such responses may increase the reliability of visual field screening by reducing variability and bias in a small but clinically important proportion of patients.
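
    A simple version of such a response-time window is sketched below: latencies are z-transformed against the patient's own distribution and responses far outside the window are flagged as likely false positives. The exact published algorithm may differ, and the latencies shown are made up.

    ```python
    import statistics

    # Flag responses whose latency lies far outside the patient's own
    # response-time distribution (z-score window); this is a simplified
    # stand-in for the published windowing algorithm.
    def flag_false_positives(latencies_ms, z_limit=2.5):
        mean = statistics.mean(latencies_ms)
        sd = statistics.stdev(latencies_ms)
        flags = []
        for rt in latencies_ms:
            z = (rt - mean) / sd
            flags.append(abs(z) > z_limit)     # True -> likely false-positive response
        return flags

    rts = [420, 450, 390, 460, 430, 445, 410, 455, 400, 435, 1500]  # made-up latencies in ms
    print(flag_false_positives(rts))  # only the 1500 ms response is flagged
    ```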

  18. False memory for context and true memory for context similarly activate the parahippocampal cortex.

    PubMed

    Karanian, Jessica M; Slotnick, Scott D

    2017-06-01

    The role of the parahippocampal cortex is currently a topic of debate. One view posits that the parahippocampal cortex specifically processes spatial layouts and sensory details (i.e., the visual-spatial processing view). In contrast, the other view posits that the parahippocampal cortex more generally processes spatial and non-spatial contexts (i.e., the general contextual processing view). A large number of studies have found that true memories activate the parahippocampal cortex to a greater degree than false memories, which would appear to support the visual-spatial processing view as true memories are typically associated with greater visual-spatial detail than false memories. However, in previous studies, contextual details were also greater for true memories than false memories. Thus, such differential activity in the parahippocampal cortex may have reflected differences in contextual processing, which would challenge the visual-spatial processing view. In the present functional magnetic resonance imaging (fMRI) study, we employed a source memory paradigm to investigate the functional role of the parahippocampal cortex during true memory and false memory for contextual information to distinguish between the visual-spatial processing view and the general contextual processing view. During encoding, abstract shapes were presented to the left or right of fixation. During retrieval, old shapes were presented at fixation and participants indicated whether each shape was previously on the "left" or "right" followed by an "unsure", "sure", or "very sure" confidence rating. The conjunction of confident true memories for context and confident false memories for context produced activity in the parahippocampal cortex, which indicates that this region is associated with contextual processing. Furthermore, the direct contrast of true memory and false memory produced activity in the visual cortex but did not produce activity in the parahippocampal cortex. The present evidence suggests that the parahippocampal cortex is associated with general contextual processing rather than only being associated with visual-spatial processing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Can We Forget What We Know in a False-Belief Task? An Investigation of the True-Belief Default.

    PubMed

    Rubio-Fernández, Paula

    2017-01-01

    It has been generally assumed in the Theory of Mind literature of the past 30 years that young children fail standard false-belief tasks because they attribute their own knowledge to the protagonist (what Leslie and colleagues called a "true-belief default"). Contrary to the traditional view, we have recently proposed that the children's bias is task induced. This alternative view was supported by studies showing that 3 year olds are able to pass a false-belief task that allows them to focus on the protagonist, without drawing their attention to the target object in the test phase. For a more accurate comparison of these two accounts, the present study tested the true-belief default with adults. Four experiments measuring eye movements and response inhibition revealed that (a) adults do not have an automatic tendency to respond to the false-belief question according to their own knowledge and (b) the true-belief response need not be inhibited in order to correctly predict the protagonist's actions. The positive results observed in the control conditions confirm the accuracy of the various measures used. I conclude that the results of this study undermine the true-belief default view and those models that posit mechanisms of response inhibition in false-belief reasoning. Alternatively, the present study with adults and recent studies with children suggest that participants' focus of attention in false-belief tasks may be key to their performance. Copyright © 2015 Cognitive Science Society, Inc.

  20. Positive Psychology and Adolescent Mental Health: False Promise or True Breakthrough?

    ERIC Educational Resources Information Center

    Kelley, Thomas M.

    2004-01-01

    The emerging field of positive psychology has pledged to improve the mental health of American adolescents. Yet, without a principle-based conceptual foundation to guide its study of optimal youth functioning, positive psychology will ultimately fail to keep its promise. This paper suggests that the principles of Mind, Thought and Consciousness…

  1. 22 CFR 506.2 - Review of positions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Review of positions. 506.2 Section 506.2 Foreign Relations BROADCASTING BOARD OF GOVERNORS PART-TIME CAREER EMPLOYMENT PROGRAM § 506.2 Review of positions... feasibility of converting them to part-time. Among the criteria which may be used when conducting this review...

  2. Mandated fuel economy standards as a strategy for improving motor vehicle fuel economy.

    DOT National Transportation Integrated Search

    1978-10-19

    The major domestic motor vehicle manufacturers have projected that their new car fleet average fuel economy will meet the federal mandated fuel economy standard for 1985, of 27.5 miles per gallon. Assuming that these projections hold true, in one dec...

  3. Evaluation and comparison of the ability of online available prediction programs to predict true linear B-cell epitopes.

    PubMed

    Costa, Juan G; Faccendini, Pablo L; Sferco, Silvano J; Lagier, Claudia M; Marcipar, Iván S

    2013-06-01

    This work deals with the use of predictors to identify useful B-cell linear epitopes to develop immunoassays. Experimental techniques to meet this goal are quite expensive and time consuming. Therefore, we tested 5 free, online prediction methods (AAPPred, ABCpred, BcePred, BepiPred and Antigenic) widely used for predicting linear epitopes, using the primary structure of the protein as the only input. We chose a set of 65 experimentally well documented epitopes obtained by the most reliable experimental techniques as our true positive set. To compare the quality of the predictor methods we used their positive predictive value (PPV), i.e. the proportion of the predicted epitopes that are true, experimentally confirmed epitopes, in relation to all the epitopes predicted. We conclude that AAPPred and ABCpred yield the best results as compared with the other programs and with a random prediction procedure. Our results also indicate that considering the consensual epitopes predicted by several programs does not improve the PPV.
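
    The PPV comparison can be sketched as below, using exact matching of predicted peptides against the confirmed set for simplicity (the study's matching criteria may be more permissive); the peptide strings and tool names are placeholders.

    ```python
    # PPV = true predicted epitopes / all predicted epitopes, per prediction tool.
    confirmed = {"SLYNTVATL", "GILGFVFTL", "NLVPMVATV"}          # experimentally confirmed set
    predicted_by_tool = {
        "toolA": {"SLYNTVATL", "GILGFVFTL", "QWERTYASD", "KKKKPEPTI"},
        "toolB": {"NLVPMVATV", "AAAAAAAAA"},
    }

    for tool, predicted in predicted_by_tool.items():
        true_hits = predicted & confirmed
        ppv = len(true_hits) / len(predicted) if predicted else 0.0
        print(f"{tool}: PPV = {ppv:.2f} ({len(true_hits)}/{len(predicted)})")
    ```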

  4. Accuracy and precision of four value-added blood glucose meters: the Abbott Optium, the DDI Prodigy, the HDI True Track, and the HypoGuard Assure Pro.

    PubMed

    Sheffield, Catherine A; Kane, Michael P; Bakst, Gary; Busch, Robert S; Abelseth, Jill M; Hamilton, Robert A

    2009-09-01

    This study compared the accuracy and precision of four value-added glucose meters. Finger stick glucose measurements in diabetes patients were performed using the Abbott Diabetes Care (Alameda, CA) Optium, Diagnostic Devices, Inc. (Miami, FL) DDI Prodigy, Home Diagnostics, Inc. (Fort Lauderdale, FL) HDI True Track Smart System, and Arkray, USA (Minneapolis, MN) HypoGuard Assure Pro. Finger glucose measurements were compared with laboratory reference results. Accuracy was assessed by a Clarke error grid analysis (EGA), a Parkes EGA, and within 5%, 10%, 15%, and 20% of the laboratory value criteria (χ² analysis). Meter precision was determined by calculating absolute mean differences in glucose values between duplicate samples (Kruskal-Wallis test). Finger sticks were obtained from 125 diabetes patients, of whom 90.4% were Caucasian, 51.2% were female, 83.2% had type 2 diabetes, and the average age was 59 years (SD 14 years). Mean venipuncture blood glucose was 151 mg/dL (SD ±65 mg/dL; range, 58-474 mg/dL). Clinical accuracy by Clarke EGA was demonstrated in 94% of Optium, 82% of Prodigy, 61% of True Track, and 77% of the Assure Pro samples (P < 0.05 for Optium and True Track compared to all others). By Parkes EGA, the True Track was significantly less accurate than the other meters. Within 5% accuracy was achieved in 34%, 24%, 29%, and 13%, respectively (P < 0.05 for Optium, Prodigy, and Assure Pro compared to True Track). Within 10% accuracy was significantly greater for the Optium, Prodigy, and Assure Pro compared to True Track. Significantly more Optium results demonstrated within 15% and 20% accuracy compared to the other meter systems. The HDI True Track was significantly less precise than the other meter systems. The Abbott Optium was significantly more accurate than the other meter systems, whereas the HDI True Track was significantly less accurate and less precise compared to the other meter systems.
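
    The within-X% accuracy criterion used above can be tallied as in the following sketch; the meter/laboratory reading pairs are illustrative, not the study data.

    ```python
    # A meter reading counts as accurate at level X if it falls within X% of
    # the laboratory reference value.
    def within_percent(meter, reference, pct):
        return abs(meter - reference) <= (pct / 100.0) * reference

    pairs = [(102, 100), (88, 95), (153, 140), (251, 240), (60, 75)]  # (meter, lab) mg/dL
    for pct in (5, 10, 15, 20):
        hits = sum(within_percent(m, r, pct) for m, r in pairs)
        print(f"within {pct:>2}%: {hits}/{len(pairs)} ({100 * hits / len(pairs):.0f}%)")
    ```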

  5. Some Simple Formulas for Posterior Convergence Rates

    PubMed Central

    2014-01-01

    We derive some simple relations that demonstrate how the posterior convergence rate is related to two driving factors: a “penalized divergence” of the prior, which measures the ability of the prior distribution to propose a nonnegligible set of working models to approximate the true model, and a “norm complexity” of the prior, which measures the complexity of the prior support, weighted by the prior probability masses. These formulas are explicit, involve no essential assumptions, and are easy to apply. We apply this approach to the case with model averaging and derive some useful oracle inequalities that can optimize the performance adaptively without knowing the true model. PMID:27379278

  6. Screening for postnatal depression in Chinese-speaking women using the Hong Kong translated version of the Edinburgh Postnatal Depression Scale.

    PubMed

    Chen, Helen; Bautista, Dianne; Ch'ng, Ying Chia; Li, Wenyun; Chan, Edwin; Rush, A John

    2013-06-01

    The Edinburgh Postnatal Depression Scale (EPDS) may not be a uniformly valid postnatal depression (PND) screen across populations. We evaluated the performance of a Chinese translation of the 10-item (HK-EPDS) and six-item (HK-EPDS-6) versions in post-partum women in Singapore. Chinese-speaking post-partum obstetric clinic patients were recruited for this study. They completed the HK-EPDS, from which we derived the six-item HK-EPDS-6. All women were clinically assessed for PND based on Diagnostic and Statistical Manual, Fourth Edition-Text Revision criteria. Receiver-operator curve (ROC) analyses and likelihood ratio computations informed scale cutoff choices. Clinical fitness was judged by thresholds for internal consistency [α ≥ 0.70] and for diagnostic performance by true-positive rate (>85%), false-positive rate (≤10%), positive likelihood ratio (>1), negative likelihood ratio (<0.2), area under the ROC curve (AUC, ≥90%) and effect size (≥0.80). Based on clinical interview, the prevalence of PND was 6.2% in 487 post-partum women. HK-EPDS internal consistency was 0.84. At a cutoff of 13 or more, the true-positive rate was 86.7%, false-positive rate 3.3%, positive likelihood ratio 26.4, negative likelihood ratio 0.14, AUC 94.4% and effect size 0.81. For the HK-EPDS-6, internal consistency was 0.76. At a cutoff of 8 or more, we found a true-positive rate of 86.7%, false-positive rate 6.6%, positive likelihood ratio 13.2, negative likelihood ratio 0.14, AUC 92.9% and effect size 0.98. The HK-EPDS (cutoff ≥13) and HK-EPDS-6 (cutoff ≥8) are fit for PND screening of general-population post-partum women. The brief six-item version appears clinically suitable for quick screening in Chinese-speaking women. Copyright © 2013 Wiley Publishing Asia Pty Ltd.
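
    For readers who want to check the reported likelihood ratios, the short sketch below derives LR+ and LR- from the stated true- and false-positive rates. This is a generic screening-test calculation, not code from the study.

    ```python
    def likelihood_ratios(tpr, fpr):
        """LR+ = sensitivity / (1 - specificity); LR- = (1 - sensitivity) / specificity."""
        return tpr / fpr, (1 - tpr) / (1 - fpr)

    # HK-EPDS at a cutoff of >= 13: reported LR+ ~ 26.4, LR- ~ 0.14
    print(likelihood_ratios(tpr=0.867, fpr=0.033))
    # HK-EPDS-6 at a cutoff of >= 8: reported LR+ ~ 13.2, LR- ~ 0.14
    print(likelihood_ratios(tpr=0.867, fpr=0.066))
    ```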

  7. Improvements of the offshore earthquake locations in the Earthquake Early Warning System

    NASA Astrophysics Data System (ADS)

    Chen, Ta-Yi; Hsu, Hsin-Chih

    2017-04-01

    Since 2014 the Earthworm Based Earthquake Alarm Reporting (eBEAR) system has been in operation and has been used to issue warnings to schools. In 2015 the system began providing warnings to the public in Taiwan via television and cell phones. Online performance of the eBEAR system indicates that the average reporting times afforded by the system are approximately 15 and 28 s for inland and offshore earthquakes, respectively. On average, the eBEAR system provides more warning time than the current EEW system (3.2 s and 5.5 s for inland and offshore earthquakes, respectively). However, offshore earthquakes are usually located poorly because only P-wave arrivals are used in the eBEAR system. Additionally, in the early stage of an earthquake only a few stations are available, and the resulting poor station coverage is one reason why offshore earthquakes are difficult to locate accurately. Geiger's inversion procedure for earthquake location requires an initial hypocenter and origin time. For the initial hypocenter, we defined a set of test locations in the offshore area instead of using the average of the locations of the triggered stations. We ran 20 instances of Geiger's method concurrently, each with a different pre-defined initial position. We assume that if a pre-defined initial position is close to the true earthquake location, the processing time of that instance during the iteration procedure should be less than that of the others. The results show that using pre-defined trial hypocenters in the inversion procedure improves the accuracy of offshore earthquake locations. This is especially relevant for EEW systems: in the initial stage, locating an earthquake with only 3 or 5 stations may lead to poor results because of poor station coverage. In this study, the pre-defined trial locations provide a feasible way to improve earthquake location estimates in an EEW system.
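
    A minimal illustration of the multi-start strategy described above: the toy Python sketch below runs Geiger-style Gauss-Newton inversions in a flat 2-D geometry (epicenter and origin time only, homogeneous velocity), starting from several pre-defined offshore trial hypocenters, and keeps the run that converges in the fewest iterations. The station layout, velocity, trial locations, and convergence criterion are invented for the example and are far simpler than the real eBEAR processing.

    ```python
    import numpy as np

    def geiger_locate(stations, t_obs, v, m0, max_iter=50, tol=1e-6):
        """Gauss-Newton (Geiger) inversion for (x, y, t0) with homogeneous velocity v.
        Returns the model and the iteration count, a crude convergence-speed measure."""
        m = np.array(m0, dtype=float)                      # [x (km), y (km), t0 (s)]
        for it in range(1, max_iter + 1):
            dx = m[0] - stations[:, 0]
            dy = m[1] - stations[:, 1]
            dist = np.maximum(np.hypot(dx, dy), 1e-6)
            t_pred = m[2] + dist / v
            r = t_obs - t_pred                             # travel-time residuals
            # Jacobian of predicted arrival times w.r.t. (x, y, t0)
            J = np.column_stack([dx / (v * dist), dy / (v * dist), np.ones_like(dist)])
            dm, *_ = np.linalg.lstsq(J, r, rcond=None)
            m += dm
            if np.linalg.norm(dm[:2]) < tol:
                break
        return m, it

    # Hypothetical coastal network and an offshore event outside the network
    rng = np.random.default_rng(0)
    stations = rng.uniform(0, 80, size=(6, 2))             # km
    true_m = np.array([150.0, 40.0, 0.0])                  # x, y (km), origin time (s)
    v = 6.0                                                # km/s
    t_obs = true_m[2] + np.hypot(*(true_m[:2] - stations).T) / v

    # Multi-start strategy: several pre-defined offshore trial hypocenters
    trials = [(60, 40, 0), (120, 40, 0), (160, 40, 0), (200, 40, 0)]
    results = [geiger_locate(stations, t_obs, v, m0) for m0 in trials]
    best_model, best_iters = min(results, key=lambda mr: mr[1])   # fastest-converging run
    print("best location:", best_model, "iterations:", best_iters)
    ```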

  8. SIMS of Organic Materials—Interface Location in Argon Gas Cluster Depth Profiles Using Negative Secondary Ions

    NASA Astrophysics Data System (ADS)

    Havelund, R.; Seah, M. P.; Tiddia, M.; Gilmore, I. S.

    2018-02-01

    A procedure has been established to define the interface position in depth profiles accurately when using secondary ion mass spectrometry and the negative secondary ions. The interface position varies strongly with the extent of the matrix effect and so depends on the secondary ion measured. Intensity profiles have been measured at both fluorenylmethyloxycarbonyl-L-pentafluorophenylalanine (FMOC) to Irganox 1010 and Irganox 1010 to FMOC interfaces for many secondary ions. These profiles show separations of the two interfaces that vary over some 10 nm depending on the secondary ion selected. The shapes of these profiles are strongly governed by matrix effects, slightly weakened by a long wavelength roughening. The matrix effects are separately measured using homogeneous, known mixtures of these two materials. Removal of the matrix and roughening effects gives consistent compositional profiles for all ions that are described by an integrated exponentially modified Gaussian (EMG) profile. Use of a simple integrated Gaussian may lead to significant errors. The average interface positions in the compositional profiles are determined to standard uncertainties of 0.19 and 0.14 nm, respectively, using the integrated EMG function. Alternatively, and more simply, it is shown that interface positions and profiles may be deduced from data for several secondary ions with measured matrix factors by simply extrapolating the result to Ξ = 0. Care must be taken in quoting interface resolutions since those measured for predominantly Gaussian interfaces with Ξ above or below zero, without correction, appear significantly better than the true resolution.

  9. A digital boxcar integrator for IMS spectra

    NASA Technical Reports Server (NTRS)

    Cohen, Martin J.; Stimac, Robert M.; Wernlund, Roger F.; Parker, Donald C.

    1995-01-01

    When trying to detect or quantify a signal at or near the limit of detectability, it is invariably embedded in the noise. This statement is true for nearly all detectors of any physical phenomena and the limit of detectability, hopefully, occurs at very low signal-to-noise levels. This is particularly true of IMS (Ion Mobility Spectrometers) spectra due to the low vapor pressure of several chemical compounds of great interest and the small currents associated with the ionic detection process. Gated Integrators and Boxcar Integrators or Averagers are designed to recover fast, repetitive analog signals. In a typical application, a time 'Gate' or 'Window' is generated, characterized by a set delay from a trigger or gate pulse and a certain width. A Gated Integrator amplifies and integrates the signal that is present during the time the gate is open, ignoring noise and interference that may be present at other times. Boxcar Integration refers to the practice of averaging the output of the Gated Integrator over many sweeps of the detector. Since any signal present during the gate will add linearly, while noise will add in a 'random walk' fashion as the square root of the number of sweeps, averaging N sweeps will improve the 'Signal-to-Noise Ratio' by a factor of the square root of N.
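
    To make the square-root-of-N argument concrete, the sketch below averages N simulated sweeps containing a weak, repetitive peak buried in Gaussian noise. The drift-time axis, peak shape, and noise level are hypothetical, not instrument data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_sweeps, n_samples = 400, 500
    t = np.linspace(0, 25, n_samples)                        # drift time in ms (hypothetical)
    peak = 0.2 * np.exp(-0.5 * ((t - 12.0) / 0.4) ** 2)      # weak, repetitive ion peak
    sweeps = peak + rng.normal(0.0, 1.0, size=(n_sweeps, n_samples))

    def snr(trace):
        """Amplitude at the known drift time over the off-peak noise level."""
        noise = trace[(t < 8) | (t > 16)].std()
        return trace[np.argmin(np.abs(t - 12.0))] / noise

    avg = sweeps.mean(axis=0)                                # boxcar average of N sweeps
    print(f"single-sweep SNR ~ {snr(sweeps[0]):.2f}")        # may even be negative: peak is buried
    print(f"averaged SNR     ~ {snr(avg):.2f}")
    print(f"sqrt(N) prediction: ~{np.sqrt(n_sweeps):.0f}x improvement")
    ```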

  10. Mood-congruent true and false memory: effects of depression.

    PubMed

    Howe, Mark L; Malone, Catherine

    2011-02-01

    The Deese/Roediger-McDermott paradigm was used to investigate the effect of depression on true and false recognition. In this experiment true and false recognition was examined across positive, neutral, negative, and depression-relevant lists for individuals with and without a diagnosis of major depressive disorder. Results showed that participants with major depressive disorder falsely recognised significantly more depression-relevant words than non-depressed controls. These findings also parallel recent research using recall instead of recognition and show that there are clear mood congruence effects for depression on false memory performance. © 2011 Psychology Press, an imprint of the Taylor & Francis Group, an Informa business

  11. A Maximum NEC Criterion for Compton Collimation to Accurately Identify True Coincidences in PET

    PubMed Central

    Chinn, Garry; Levin, Craig S.

    2013-01-01

    In this work, we propose a new method to increase the accuracy of identifying true coincidence events for positron emission tomography (PET). This approach requires 3-D detectors with the ability to position each photon interaction in multi-interaction photon events. When multiple interactions occur in the detector, the incident direction of the photon can be estimated using the Compton scatter kinematics (Compton Collimation). If the difference between the estimated incident direction of the photon relative to a second, coincident photon lies within a certain angular range around colinearity, the line of response between the two photons is identified as a true coincidence and used for image reconstruction. We present an algorithm for choosing the incident photon direction window threshold that maximizes the noise equivalent counts of the PET system. For simulated data, the direction window removed 56%–67% of random coincidences while retaining >94% of true coincidences for image reconstruction, and accurately extracted 70% of true coincidences from multiple-coincidence events. PMID:21317079
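
    A simplified sketch of the threshold-selection idea: given simulated angular deviations from colinearity for true and random coincidences, it scans candidate direction windows and keeps the one maximizing a noise-equivalent-count figure. The deviation distributions are invented, scattered coincidences are ignored, and the usual randoms-weighting factor is omitted, so NEC reduces here to T²/(T + R); all of these are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # Hypothetical angular deviations from colinearity (degrees) for coincidence pairs:
    # trues cluster near 0 degrees; randoms are spread broadly.
    true_dev = np.abs(rng.normal(0.0, 5.0, 20_000))
    rand_dev = rng.uniform(0.0, 90.0, 10_000)

    def nec(threshold):
        """Simplified noise-equivalent counts with scatter ignored: NEC = T^2 / (T + R)."""
        T = np.count_nonzero(true_dev <= threshold)
        R = np.count_nonzero(rand_dev <= threshold)
        return T * T / (T + R) if (T + R) > 0 else 0.0

    thresholds = np.arange(1.0, 90.0, 1.0)
    best = max(thresholds, key=nec)
    print(f"NEC-maximizing direction window: {best:.0f} deg")
    ```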

  12. On the Importance of Cycle Minimum in Sunspot Cycle Prediction

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.

    1996-01-01

    The characteristics of the minima between sunspot cycles are found to provide important information for predicting the amplitude and timing of the following cycle. For example, the time of the occurrence of sunspot minimum sets the length of the previous cycle, which is correlated by the amplitude-period effect to the amplitude of the next cycle, with cycles of shorter (longer) than average length usually being followed by cycles of larger (smaller) than average size (true for 16 of 21 sunspot cycles). Likewise, the size of the minimum at cycle onset is correlated with the size of the cycle's maximum amplitude, with cycles of larger (smaller) than average size minima usually being associated with larger (smaller) than average size maxima (true for 16 of 22 sunspot cycles). Also, it was found that the size of the previous cycle's minimum and maximum relates to the size of the following cycle's minimum and maximum with an even-odd cycle number dependency. The latter effect suggests that cycle 23 will have a minimum and maximum amplitude probably larger than average in size (in particular, minimum smoothed sunspot number Rm = 12.3 +/- 7.5 and maximum smoothed sunspot number RM = 198.8 +/- 36.5, at the 95-percent level of confidence), further suggesting (by the Waldmeier effect) that it will have a faster than average rise to maximum (fast-rising cycles have ascent durations of about 41 +/- 7 months). Thus, if, as expected, onset for cycle 23 will be December 1996 +/- 3 months, based on smoothed sunspot number, then the length of cycle 22 will be about 123 +/- 3 months, inferring that it is a short-period cycle and that cycle 23 maximum amplitude probably will be larger than average in size (from the amplitude-period effect), having an RM of about 133 +/- 39 (based on the usual +/- 30 percent spread that has been seen between observed and predicted values), with maximum amplitude occurrence likely sometime between July 1999 and October 2000.

  13. The dynamic interplay between perceived true self-knowledge and decision satisfaction.

    PubMed

    Schlegel, Rebecca J; Hicks, Joshua A; Davis, William E; Hirsch, Kelly A; Smith, Christina M

    2013-03-01

    The present research used multiple methods to examine the hypothesis that perceived true self-knowledge and decision satisfaction are inextricably linked together by a widely held "true-self-as-guide" lay theory of decision making. Consistent with this proposition, Study 1 found that participants rated using the true self as a guide as more important for achieving personal satisfaction than a variety of other potential decision-making strategies. After establishing the prevalence of this lay theory, the remaining studies then focused on examining the proposed consequent relationship between perceived true self-knowledge and decision satisfaction. Consistent with hypotheses, 2 cross-sectional correlational studies (Studies 2 and 3) found a positive relationship between perceived true self-knowledge and decision satisfaction for different types of major decisions. Study 4 used daily diary methods to demonstrate that fluctuations in perceived true self-knowledge reliably covary with fluctuations in decision satisfaction. Finally, 2 studies directly examined the causal direction of this relationship through experimental manipulation and revealed that the relationship is truly bidirectional. More specifically, Study 5 showed that manipulating perceived knowledge of the true self (but not other self-concepts) directly affects decision satisfaction. Study 6 showed that this effect also works in reverse by manipulating feelings of decision satisfaction, which directly affected perceived knowledge of the true self (but not other self-concepts). Taken together, these studies suggest that people believe the true self should be used as a guide when making major life decisions and that this belief has observable consequences for the self and decision making. PsycINFO Database Record (c) 2013 APA, all rights reserved

  14. Diagnostic System for Decomposition Studies of Energetic Materials

    DTIC Science & Technology

    2017-10-03

    transition states and reaction pathways are sought. The overall objective for these combined experimental studies and quantum mechanics investigations... [instrument specification excerpt] ...peak-to-peak; 1 min: 50,000:1 (8.6×10⁻⁶ AU noise) peak-to-peak; Interferometer: UltraScan linear air-bearing scanner with True-Alignment; Aperture...; True 24-bit dynamic range for all scan velocities, dual-channel data acquisition; Validation: internal validation unit, 6 positions, certified

  15. 41 CFR 50-204.22 - Exposure to airborne radioactive material.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Exposure to airborne... FEDERAL SUPPLY CONTRACTS Radiation Standards § 50-204.22 Exposure to airborne radioactive material. (a) No..., within a restricted area, to be exposed to airborne radioactive material in an average concentration in...

  16. Evaluating variable rate fungicide applications for control of Sclerotinia

    USDA-ARS?s Scientific Manuscript database

    Oklahoma peanut growers continue to try to increase yields and reduce input costs. Perhaps the largest input in a peanut crop is fungicide applications. This is especially true for areas in the state that have high disease pressure from Sclerotinia. On average, a single fungicide application cost...

  17. 40 CFR 89.206 - Trading.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Trading. 89.206 Section 89.206... EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION ENGINES Averaging, Banking, and Trading Provisions § 89.206 Trading. (a) Requirements for Tier 1 engines rated at or above 37 kW. (1) A nonroad...

  18. 21 CFR 640.56 - Quality control test for potency.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., Center for Biologics Evaluation and Research, Food and Drug Administration. Such testing shall not be... Evaluation and Research, Food and Drug Administration. (d) If the average potency level of antihemophilic... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Quality control test for potency. 640.56 Section...

  19. Automatic user customization for improving the performance of a self-paced brain interface system.

    PubMed

    Fatourechi, Mehrdad; Bashashati, Ali; Birch, Gary E; Ward, Rabab K

    2006-12-01

    Customizing the parameter values of brain interface (BI) systems by a human expert has the advantage of being fast and computationally efficient. However, as the number of users and EEG channels grows, this process becomes increasingly time consuming and exhausting. Manual customization also introduces inaccuracies in the estimation of the parameter values. In this paper, the performance of a self-paced BI system whose design parameter values were automatically user customized using a genetic algorithm (GA) is studied. The GA automatically estimates the shapes of movement-related potentials (MRPs), whose features are then extracted to drive the BI. Offline analysis of the data of eight subjects revealed that automatic user customization improved the true positive (TP) rate of the system by an average of 6.68% over that whose customization was carried out by a human expert, i.e., by visually inspecting the MRP templates. On average, the best improvement in the TP rate (an average of 9.82%) was achieved for four individuals with spinal cord injury. In this case, the visual estimation of the parameter values of the MRP templates was very difficult because of the highly noisy nature of the EEG signals. For four able-bodied subjects, for which the MRP templates were less noisy, the automatic user customization led to an average improvement of 3.58% in the TP rate. The results also show that the inter-subject variability of the TP rate is also reduced compared to the case when user customization is carried out by a human expert. These findings provide some primary evidence that automatic user customization leads to beneficial results in the design of a self-paced BI for individuals with spinal cord injury.

  20. Preliminary results of neural networks and zernike polynomials for classification of videokeratography maps.

    PubMed

    Carvalho, Luis Alberto

    2005-02-01

    Our main goal in this work was to develop an artificial neural network (NN) that could classify specific types of corneal shapes using Zernike coefficients as input. Other authors have implemented successful NN systems in the past and have demonstrated their efficiency using different parameters. Our claim is that, given the increasing popularity of Zernike polynomials among the eye care community, this may be an interesting choice to add complementing value and precision to existing methods. By using a simple and well-documented corneal surface representation scheme, which relies on corneal elevation information, one can generate simple NN input parameters that are independent of curvature definition and that are also efficient. We have used the Matlab Neural Network Toolbox (MathWorks, Natick, MA) to implement a three-layer feed-forward NN with 15 inputs and 5 outputs. A database from an EyeSys System 2000 (EyeSys Vision, Houston, TX) videokeratograph installed at the Escola Paulista de Medicina-Sao Paulo was used. This database contained an unknown number of corneal types. From this database, two specialists selected 80 corneas that could be clearly classified into five distinct categories: (1) normal, (2) with-the-rule astigmatism, (3) against-the-rule astigmatism, (4) keratoconus, and (5) post-laser-assisted in situ keratomileusis. The corneal height (SAG) information of the 80 data files was fit with the first 15 Vision Science and its Applications (VSIA) standard Zernike coefficients, which were individually used to feed the 15 neurons of the input layer. The five output neurons were associated with the five typical corneal shapes. A group of 40 cases was randomly selected from the larger group of 80 corneas and used as the training set. The NN responses were statistically analyzed in terms of sensitivity [true positive/(true positive + false negative)], specificity [true negative/(true negative + false positive)], and precision [(true positive + true negative)/total number of cases]. The mean values for these parameters were, respectively, 78.75%, 97.81%, and 94%. Although we have used a relatively small training and testing set, results presented here should be considered promising. They are certainly an indication of the potential of Zernike polynomials as reliable parameters, at least in the cases presented here, as input data for artificial intelligence automation of the diagnosis process of videokeratography examinations. This technique should facilitate the implementation and add value to the classification methods already available. We also discuss briefly certain special properties of Zernike polynomials that are what we think make them suitable as NN inputs for this type of application.
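
    The bracketed definitions above translate directly into code. The sketch below uses the abstract's own formulas (note that its "precision" is what is more commonly called overall accuracy); the counts are invented for a single hypothetical corneal category in a 40-case test set.

    ```python
    def nn_classification_summary(tp, fp, tn, fn):
        """Metrics as defined in the abstract ('precision' here is overall accuracy)."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        precision = (tp + tn) / (tp + fp + tn + fn)
        return sensitivity, specificity, precision

    # Hypothetical counts for one corneal category in a 40-case test set
    print(nn_classification_summary(tp=7, fp=1, tn=31, fn=1))   # ~ (0.875, 0.97, 0.95)
    ```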

  1. Emotionally negative pictures enhance gist memory.

    PubMed

    Bookbinder, S H; Brainerd, C J

    2017-02-01

    In prior work on how true and false memory are influenced by emotion, valence and arousal have often been conflated. Thus, it is difficult to say which specific effects are caused by valence and which are caused by arousal. In the present research, we used a picture-memory paradigm that allowed emotional valence to be manipulated with arousal held constant. Negatively valenced pictures elevated both true and false memory, relative to positive and neutral pictures. Conjoint recognition modeling revealed that negative valence (a) reduced erroneous suppression of true memories and (b) increased the familiarity of the semantic content of both true and false memories. Overall, negative valence impaired the verbatim side of episodic memory but enhanced the gist side, and these effects persisted even after a week-long delay. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. 34 CFR 300.177 - States' sovereign immunity and positive efforts to employ and advance qualified individuals with...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 2 2011-07-01 2010-07-01 true States' sovereign immunity and positive efforts to... immunity and positive efforts to employ and advance qualified individuals with disabilities. (a) States' sovereign immunity. (1) A State that accepts funds under this part waives its immunity under the 11th...

  3. 34 CFR 300.177 - States' sovereign immunity and positive efforts to employ and advance qualified individuals with...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 2 2014-07-01 2013-07-01 true States' sovereign immunity and positive efforts to... immunity and positive efforts to employ and advance qualified individuals with disabilities. (a) States' sovereign immunity. (1) A State that accepts funds under this part waives its immunity under the 11th...

  4. The Role of Balanced Training and Testing Data Sets for Binary Classifiers in Bioinformatics

    PubMed Central

    Wei, Qiong; Dunbrack, Roland L.

    2013-01-01

    Training and testing of conventional machine learning models on binary classification problems depend on the proportions of the two outcomes in the relevant data sets. This may be especially important in practical terms when real-world applications of the classifier are either highly imbalanced or occur in unknown proportions. Intuitively, it may seem sensible to train machine learning models on data similar to the target data in terms of proportions of the two binary outcomes. However, we show that this is not the case using the example of prediction of deleterious and neutral phenotypes of human missense mutations in human genome data, for which the proportion of the binary outcome is unknown. Our results indicate that using balanced training data (50% neutral and 50% deleterious) results in the highest balanced accuracy (the average of True Positive Rate and True Negative Rate), Matthews correlation coefficient, and area under ROC curves, no matter what the proportions of the two phenotypes are in the testing data. Besides balancing the data by undersampling the majority class, other techniques in machine learning include oversampling the minority class, interpolating minority-class data points and various penalties for misclassifying the minority class. However, these techniques are not commonly used in either the missense phenotype prediction problem or in the prediction of disordered residues in proteins, where the imbalance problem is substantial. The appropriate approach depends on the amount of available data and the specific problem at hand. PMID:23874456
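
    As a small sketch of the two ideas discussed above, the following Python computes balanced accuracy (the average of the true-positive and true-negative rates) and balances a binary training set by undersampling the majority class. It is a generic illustration under invented labels and placeholder features, not the authors' pipeline.

    ```python
    import numpy as np

    def balanced_accuracy(y_true, y_pred):
        """Average of the true-positive rate and true-negative rate."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tpr = np.mean(y_pred[y_true == 1] == 1)
        tnr = np.mean(y_pred[y_true == 0] == 0)
        return 0.5 * (tpr + tnr)

    def undersample_majority(X, y, rng):
        """Balance a binary training set by undersampling the majority class."""
        X, y = np.asarray(X), np.asarray(y)
        idx0, idx1 = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
        if len(idx0) > len(idx1):
            idx0 = rng.choice(idx0, size=len(idx1), replace=False)
        else:
            idx1 = rng.choice(idx1, size=len(idx0), replace=False)
        keep = np.concatenate([idx0, idx1])
        return X[keep], y[keep]

    rng = np.random.default_rng(0)
    y = np.array([0] * 90 + [1] * 10)              # imbalanced labels (90% vs 10%)
    X = np.arange(100).reshape(-1, 1)              # placeholder features
    Xb, yb = undersample_majority(X, y, rng)
    print(len(yb), yb.mean())                      # 20 samples, 50% positive
    print(balanced_accuracy(y, np.zeros_like(y)))  # always-negative classifier scores only 0.5
    ```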

  5. Detection of Multiple Innervation Zones from Multi-Channel Surface EMG Recordings with Low Signal-to-Noise Ratio Using Graph-Cut Segmentation.

    PubMed

    Marateb, Hamid Reza; Farahi, Morteza; Rojas, Monica; Mañanas, Miguel Angel; Farina, Dario

    2016-01-01

    Knowledge of the location of muscle Innervation Zones (IZs) is important in many applications, e.g. for minimizing the quantity of injected botulinum toxin for the treatment of spasticity or for deciding on the type of episiotomy during child delivery. Surface EMG (sEMG) can be noninvasively recorded to assess physiological and morphological characteristics of contracting muscles. However, it is not often possible to record signals of high quality. Moreover, muscles could have multiple IZs, which should all be identified. We designed a fully-automatic algorithm based on the enhanced image Graph-Cut segmentation and morphological image processing methods to identify up to five IZs in 60-ms intervals of very-low to moderate quality sEMG signal detected with multi-channel electrodes (20 bipolar channels with Inter Electrode Distance (IED) of 5 mm). An anisotropic multilayered cylinder model was used to simulate 750 sEMG signals with signal-to-noise ratio ranging from -5 to 15 dB (using Gaussian noise) and in each 60-ms signal frame, 1 to 5 IZs were included. The micro- and macro- averaged performance indices were then reported for the proposed IZ detection algorithm. In the micro-averaging procedure, the number of True Positives, False Positives and False Negatives in each frame were summed up to generate cumulative measures. In the macro-averaging, on the other hand, precision and recall were calculated for each frame and their averages are used to determine F1-score. Overall, the micro (macro)-averaged sensitivity, precision and F1-score of the algorithm for IZ channel identification were 82.7% (87.5%), 92.9% (94.0%) and 87.5% (90.6%), respectively. For the correctly identified IZ locations, the average bias error was of 0.02±0.10 IED ratio. Also, the average absolute conduction velocity estimation error was 0.41±0.40 m/s for such frames. The sensitivity analysis including increasing IED and reducing interpolation coefficient for time samples was performed. Meanwhile, the effect of adding power-line interference and using other image interpolation methods on the deterioration of the performance of the proposed algorithm was investigated. The average running time of the proposed algorithm on each 60-ms sEMG frame was 25.5±8.9 (s) on an Intel dual-core 1.83 GHz CPU with 2 GB of RAM. The proposed algorithm correctly and precisely identified multiple IZs in each signal epoch in a wide range of signal quality and is thus a promising new offline tool for electrophysiological studies.
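
    The micro- versus macro-averaging distinction described above can be illustrated with a short sketch: micro-averaging pools the TP/FP/FN counts across frames before computing precision, recall, and F1, while macro-averaging computes per-frame precision and recall and combines their averages. The per-frame counts below are invented.

    ```python
    import numpy as np

    def micro_macro_f1(frame_counts):
        """frame_counts: list of (TP, FP, FN) tuples, one per 60-ms frame."""
        counts = np.asarray(frame_counts, dtype=float)
        tp, fp, fn = counts.sum(axis=0)                    # micro: pool counts first
        micro_p, micro_r = tp / (tp + fp), tp / (tp + fn)
        micro_f1 = 2 * micro_p * micro_r / (micro_p + micro_r)

        p = counts[:, 0] / (counts[:, 0] + counts[:, 1])   # macro: per-frame precision
        r = counts[:, 0] / (counts[:, 0] + counts[:, 2])   # and recall, then average
        macro_f1 = 2 * p.mean() * r.mean() / (p.mean() + r.mean())
        return micro_f1, macro_f1

    # Hypothetical per-frame counts
    print(micro_macro_f1([(3, 0, 1), (2, 1, 0), (1, 0, 0), (4, 1, 1)]))
    ```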

  6. Detection of Multiple Innervation Zones from Multi-Channel Surface EMG Recordings with Low Signal-to-Noise Ratio Using Graph-Cut Segmentation

    PubMed Central

    Farahi, Morteza; Rojas, Monica; Mañanas, Miguel Angel; Farina, Dario

    2016-01-01

    Knowledge of the location of muscle Innervation Zones (IZs) is important in many applications, e.g. for minimizing the quantity of injected botulinum toxin for the treatment of spasticity or for deciding on the type of episiotomy during child delivery. Surface EMG (sEMG) can be noninvasively recorded to assess physiological and morphological characteristics of contracting muscles. However, it is not often possible to record signals of high quality. Moreover, muscles could have multiple IZs, which should all be identified. We designed a fully-automatic algorithm based on the enhanced image Graph-Cut segmentation and morphological image processing methods to identify up to five IZs in 60-ms intervals of very-low to moderate quality sEMG signal detected with multi-channel electrodes (20 bipolar channels with Inter Electrode Distance (IED) of 5 mm). An anisotropic multilayered cylinder model was used to simulate 750 sEMG signals with signal-to-noise ratio ranging from -5 to 15 dB (using Gaussian noise) and in each 60-ms signal frame, 1 to 5 IZs were included. The micro- and macro- averaged performance indices were then reported for the proposed IZ detection algorithm. In the micro-averaging procedure, the number of True Positives, False Positives and False Negatives in each frame were summed up to generate cumulative measures. In the macro-averaging, on the other hand, precision and recall were calculated for each frame and their averages are used to determine F1-score. Overall, the micro (macro)-averaged sensitivity, precision and F1-score of the algorithm for IZ channel identification were 82.7% (87.5%), 92.9% (94.0%) and 87.5% (90.6%), respectively. For the correctly identified IZ locations, the average bias error was of 0.02±0.10 IED ratio. Also, the average absolute conduction velocity estimation error was 0.41±0.40 m/s for such frames. The sensitivity analysis including increasing IED and reducing interpolation coefficient for time samples was performed. Meanwhile, the effect of adding power-line interference and using other image interpolation methods on the deterioration of the performance of the proposed algorithm was investigated. The average running time of the proposed algorithm on each 60-ms sEMG frame was 25.5±8.9 (s) on an Intel dual-core 1.83 GHz CPU with 2 GB of RAM. The proposed algorithm correctly and precisely identified multiple IZs in each signal epoch in a wide range of signal quality and is thus a promising new offline tool for electrophysiological studies. PMID:27978535

  7. Serial testing for latent tuberculosis using QuantiFERON-TB Gold In-Tube: A Markov model

    PubMed Central

    Moses, Mark W.; Zwerling, Alice; Cattamanchi, Adithya; Denkinger, Claudia M.; Banaei, Niaz; Kik, Sandra V.; Metcalfe, John; Pai, Madhukar; Dowdy, David

    2016-01-01

    Healthcare workers (HCWs) in low-incidence settings are often serially tested for latent TB infection (LTBI) with the QuantiFERON-TB Gold In-Tube (QFT) assay, which exhibits frequent conversions and reversions. The clinical impact of such variability on serial testing remains unknown. We used a microsimulation Markov model that accounts for major sources of variability to project diagnostic outcomes in a simulated North American HCW cohort. Serial testing using a single QFT with the recommended conversion cutoff (IFN-γ > 0.35 IU/mL) resulted in 24.6% (95% uncertainty range, UR: 23.8–25.5) of the entire population testing false-positive over ten years. Raising the cutoff to >1.0 IU/mL or confirming initial positive results with a (presumed independent) second test reduced this false-positive percentage to 2.3% (95%UR: 2.0–2.6%) or 4.1% (95%UR: 3.7–4.5%), but also reduced the proportion of true incident infections detected within the first year of infection from 76.5% (95%UR: 66.3–84.6%) to 54.8% (95%UR: 44.6–64.5%) or 61.5% (95%UR: 51.6–70.9%), respectively. Serial QFT testing of HCWs in North America may result in tremendous over-diagnosis and over-treatment of LTBI, with nearly thirty false-positives for every true infection diagnosed. Using higher cutoffs for conversion or confirmatory tests (for initial positives) can mitigate these effects, but will also diagnose fewer true infections. PMID:27469388

  8. Magnetic Resonance–Based Automatic Air Segmentation for Generation of Synthetic Computed Tomography Scans in the Head Region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Weili; Kim, Joshua P.; Kadbi, Mo

    2015-11-01

    Purpose: To incorporate a novel imaging sequence for robust air and tissue segmentation using ultrashort echo time (UTE) phase images and to implement an innovative synthetic CT (synCT) solution as a first step toward MR-only radiation therapy treatment planning for brain cancer. Methods and Materials: Ten brain cancer patients were scanned with a UTE/Dixon sequence and other clinical sequences on a 1.0 T open magnet with simulation capabilities. Bone-enhanced images were generated from a weighted combination of water/fat maps derived from Dixon images and inverted UTE images. Automated air segmentation was performed using unwrapped UTE phase maps. Segmentation accuracy was assessed by calculating segmentation errors (true-positive rate, false-positive rate, and Dice similarity indices) using CT simulation (CT-SIM) as ground truth. The synCTs were generated using a voxel-based, weighted summation method incorporating T2, fluid attenuated inversion recovery (FLAIR), UTE1, and bone-enhanced images. Mean absolute error (MAE) characterized Hounsfield unit (HU) differences between synCT and CT-SIM. A dosimetry study was conducted, and differences were quantified using γ-analysis and dose-volume histogram analysis. Results: On average, true-positive rate and false-positive rate for the CT and MR-derived air masks were 80.8% ± 5.5% and 25.7% ± 6.9%, respectively. Dice similarity index values were 0.78 ± 0.04 (range, 0.70-0.83). Full field of view MAE between synCT and CT-SIM was 147.5 ± 8.3 HU (range, 138.3-166.2 HU), with the largest errors occurring at bone–air interfaces (MAE 422.5 ± 33.4 HU for bone and 294.53 ± 90.56 HU for air). Gamma analysis revealed pass rates of 99.4% ± 0.04%, with acceptable treatment plan quality for the cohort. Conclusions: A hybrid MRI phase/magnitude UTE image processing technique was introduced that significantly improved bone and air contrast in MRI. Segmented air masks and bone-enhanced images were integrated into our synCT pipeline for brain, and results agreed well with clinical CTs, thereby supporting MR-only radiation therapy treatment planning in the brain.
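
    For reference, the segmentation metrics quoted above reduce to simple set operations on binary masks. The sketch below computes the Dice index and the true-/false-positive rates for a toy 1-D "air mask"; the arrays are invented, not the study's image data.

    ```python
    import numpy as np

    def dice_similarity(mask_a, mask_b):
        """Dice index between two binary masks: 2|A and B| / (|A| + |B|)."""
        a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def tpr_fpr(predicted, reference):
        """True- and false-positive rates of a predicted mask against a reference mask."""
        p, r = np.asarray(predicted, bool), np.asarray(reference, bool)
        tpr = np.logical_and(p, r).sum() / r.sum()
        fpr = np.logical_and(p, ~r).sum() / (~r).sum()
        return tpr, fpr

    # Toy 1-D "air masks": reference from CT, prediction from an MR-derived segmentation
    ct_air = np.array([0, 1, 1, 1, 0, 0, 1, 0], bool)
    mr_air = np.array([0, 1, 1, 0, 0, 1, 1, 0], bool)
    print(dice_similarity(mr_air, ct_air), tpr_fpr(mr_air, ct_air))   # 0.75, (0.75, 0.25)
    ```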

  9. Magnetic Resonance-Based Automatic Air Segmentation for Generation of Synthetic Computed Tomography Scans in the Head Region.

    PubMed

    Zheng, Weili; Kim, Joshua P; Kadbi, Mo; Movsas, Benjamin; Chetty, Indrin J; Glide-Hurst, Carri K

    2015-11-01

    To incorporate a novel imaging sequence for robust air and tissue segmentation using ultrashort echo time (UTE) phase images and to implement an innovative synthetic CT (synCT) solution as a first step toward MR-only radiation therapy treatment planning for brain cancer. Ten brain cancer patients were scanned with a UTE/Dixon sequence and other clinical sequences on a 1.0 T open magnet with simulation capabilities. Bone-enhanced images were generated from a weighted combination of water/fat maps derived from Dixon images and inverted UTE images. Automated air segmentation was performed using unwrapped UTE phase maps. Segmentation accuracy was assessed by calculating segmentation errors (true-positive rate, false-positive rate, and Dice similarity indices) using CT simulation (CT-SIM) as ground truth. The synCTs were generated using a voxel-based, weighted summation method incorporating T2, fluid attenuated inversion recovery (FLAIR), UTE1, and bone-enhanced images. Mean absolute error (MAE) characterized Hounsfield unit (HU) differences between synCT and CT-SIM. A dosimetry study was conducted, and differences were quantified using γ-analysis and dose-volume histogram analysis. On average, true-positive rate and false-positive rate for the CT and MR-derived air masks were 80.8% ± 5.5% and 25.7% ± 6.9%, respectively. Dice similarity index values were 0.78 ± 0.04 (range, 0.70-0.83). Full field of view MAE between synCT and CT-SIM was 147.5 ± 8.3 HU (range, 138.3-166.2 HU), with the largest errors occurring at bone-air interfaces (MAE 422.5 ± 33.4 HU for bone and 294.53 ± 90.56 HU for air). Gamma analysis revealed pass rates of 99.4% ± 0.04%, with acceptable treatment plan quality for the cohort. A hybrid MRI phase/magnitude UTE image processing technique was introduced that significantly improved bone and air contrast in MRI. Segmented air masks and bone-enhanced images were integrated into our synCT pipeline for brain, and results agreed well with clinical CTs, thereby supporting MR-only radiation therapy treatment planning in the brain. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Memory for media: investigation of false memories for negatively and positively charged public events.

    PubMed

    Porter, Stephen; Taylor, Kristian; Ten Brinke, Leanne

    2008-01-01

    Despite a large body of false memory research, little has addressed the potential influence of an event's emotional content on susceptibility to false recollections. The Paradoxical Negative Emotion (PNE) hypothesis predicts that negative emotion generally facilitates memory but also heightens susceptibility to false memories. Participants were asked whether they could recall 20 "widely publicised" public events (half fictitious) ranging in emotional valence, with or without visual cues. Participants recalled a greater number of true negative events (M=3.31/5) than true positive (M=2.61/5) events. Nearly everyone (95%) came to recall at least one false event (M=2.15 false events recalled). Further, more than twice as many participants recalled any false negative (90%) compared to false positive (41.7%) events. Negative events, in general, were associated with more detailed memories and false negative event memories were more detailed than false positive event memories. Higher dissociation scores were associated with false recollections of negative events, specifically.

  11. A knowledge-driven probabilistic framework for the prediction of protein-protein interaction networks.

    PubMed

    Browne, Fiona; Wang, Haiying; Zheng, Huiru; Azuaje, Francisco

    2010-03-01

    This study applied a knowledge-driven data integration framework for the inference of protein-protein interactions (PPI). Evidence from diverse genomic features is integrated using a knowledge-driven Bayesian network (KD-BN). Receiver operating characteristic (ROC) curves may not be the optimal assessment method to evaluate a classifier's performance in PPI prediction, as the majority of the area under the curve (AUC) may not represent biologically meaningful results. It may be of benefit to interpret the AUC of a partial ROC curve in which biologically interesting results are represented. Therefore, the assessment method referred to as the partial ROC has been newly applied in this study to assess the predictive performance of PPI predictions, along with calculation of the true-positive/false-positive rate and the true-positive/positive rate. By incorporating domain knowledge into the construction of the KD-BN, we demonstrate improvement in predictive performance compared with previous studies based upon the Naive Bayesian approach. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
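
    A minimal sketch of a partial-AUC computation of the kind advocated above: it builds an empirical ROC step curve, keeps only the portion with false-positive rate below a chosen maximum, and normalizes the area by that maximum. The interaction labels and scores are invented, and tie handling is omitted for brevity; this is not the authors' implementation.

    ```python
    import numpy as np

    def partial_auc(y_true, scores, fpr_max=0.1):
        """Area under the empirical ROC curve restricted to FPR <= fpr_max,
        normalized by fpr_max (so a perfect classifier scores 1).
        Minimal sketch: assumes distinct scores (no tie handling)."""
        y_true, scores = np.asarray(y_true), np.asarray(scores)
        y = y_true[np.argsort(-scores)]
        tpr = np.concatenate([[0.0], np.cumsum(y) / y.sum()])
        fpr = np.concatenate([[0.0], np.cumsum(1 - y) / (1 - y).sum()])
        area = 0.0
        for i in range(1, len(fpr)):
            lo, hi = fpr[i - 1], min(fpr[i], fpr_max)
            if hi > lo:
                area += (hi - lo) * tpr[i - 1]   # ROC is a step function here
            if fpr[i] >= fpr_max:
                break
        return area / fpr_max

    # Hypothetical interaction scores: 1 = true PPI, 0 = non-interacting pair
    y = np.array([1, 1, 0, 1, 0, 0, 0, 1, 0, 0])
    s = np.array([0.9, 0.8, 0.75, 0.7, 0.6, 0.5, 0.4, 0.35, 0.2, 0.1])
    print(partial_auc(y, s, fpr_max=0.5))   # ~0.67
    ```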

  12. Disaggregating measurement uncertainty from population variability and Bayesian treatment of uncensored results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strom, Daniel J.; Joyce, Kevin E.; Maclellan, Jay A.

    2012-04-17

    In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results are negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable, and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty to disaggregate population variability from measurement uncertainty, a PDF of measurands for the population is produced. Then, using Bayes's theorem, the same assumptions, and all the data from the population of individuals, a prior PDF is computed for each individual's measurand. These PDFs are non-negative, and their average is equal to the average of the measurement results for the population. The uncertainty in these Bayesian posterior PDFs is all Berkson with no remaining classical component. The methods are applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements on 128 people, 137Cs in vivo measurements on 5,337 people, and 239Pu urinalysis measurements on 3,270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are nonzero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero.

  13. What proportion of Salmonella Typhi cases are detected by blood culture? A systematic literature review.

    PubMed

    Mogasale, Vittal; Ramani, Enusa; Mogasale, Vijayalaxmi V; Park, JuYeon

    2016-05-17

    Blood culture is often used in definitive diagnosis of typhoid fever while, bone marrow culture has a greater sensitivity and considered reference standard. The sensitivity of blood culture measured against bone marrow culture results in measurement bias because both tests are not fully sensitive. Here we propose a combination of the two cultures as a reference to define true positive S. Typhi cases. Based on a systematic literature review, we identified ten papers that had performed blood and bone marrow culture for S. Typhi in same subjects. We estimated the weighted mean of proportion of cases detected by culture measured against true S. Typhi positive cases using a random effects model. Of 529 true positive S. Typhi cases, 61 % (95 % CI 52-70 %) and 96 % (95 % CI 93-99 %) were detected by blood and bone marrow cultures respectively. Blood culture sensitivity was 66 % (95 % CI 56-75 %) when compared with bone marrow culture results. The use of blood culture sensitivity as a proxy measure to estimate the proportion of typhoid fever cases detected by blood culture is likely to be an underestimate. As blood culture sensitivity is used as a correction factor in estimating typhoid disease burden, epidemiologists and policy makers should account for the underestimation.

  14. 46, XX true hermaphroditism associated with a terminal deletion of the short arm of the X chromosome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbaux, S.; Vilain, E.; McElreavey, K.

    1994-09-01

    Testes are determined by the activity of the SRY gene product encoded by the Y chromosome. Mutations in SRY can lead to XY sex reversal (XY females) and the presence of the SRY gene in some XX individuals can lead either to complete (XX males) or incomplete (XX true hermaphrodites) sex reversal. Approximately 10% of XX true hermaphrodites contain a portion of the Y chromosome, including SRY, in their genome. The etiology of the remaining cases is unestablished but may be caused by mutations in other as yet unidentified sex-determining genes downstream of SRY. Here we describe an SRY-negative true hermaphrodite with a 46,X,del(X)(p21.1-pter). The patient also presented with severe mental retardation, abnormal skin pigmentation and below average height. Histological examination of the gonad revealed bilateral ovotestis. We postulate that the Xp deletion has unmasked a recessive allele on the apparently normal X chromosome generating the intersex phenotype. This observation together with recent findings of certain XY females carrying duplications of Xp21.3 suggests that there may be a locus on Xp which acts as a switch in the testis/ovarian determination pathways.

  15. Development of automatic visceral fat volume calculation software for CT volume data.

    PubMed

    Nemoto, Mitsutaka; Yeernuer, Tusufuhan; Masutani, Yoshitaka; Nomura, Yukihiro; Hanaoka, Shouhei; Miki, Soichiro; Yoshikawa, Takeharu; Hayashi, Naoto; Ohtomo, Kuni

    2014-01-01

    To develop automatic visceral fat volume calculation software for computed tomography (CT) volume data and to evaluate its feasibility. A total of 24 sets of whole-body CT volume data and anthropometric measurements were obtained, with three sets for each of four BMI categories (under 20, 20 to 25, 25 to 30, and over 30) in both sexes. True visceral fat volumes were defined on the basis of manual segmentation of the whole-body CT volume data by an experienced radiologist. Software to automatically calculate visceral fat volumes was developed using a region segmentation technique based on morphological analysis with CT value threshold. Automatically calculated visceral fat volumes were evaluated in terms of the correlation coefficient with the true volumes and the error relative to the true volume. Automatic visceral fat volume calculation results of all 24 data sets were obtained successfully and the average calculation time was 252.7 seconds/case. The correlation coefficients between the true visceral fat volume and the automatically calculated visceral fat volume were over 0.999. The newly developed software is feasible for calculating visceral fat volumes in a reasonable time and was proved to have high accuracy.

  16. Averaging of elastic constants for polycrystals

    DOE PAGES

    Blaschke, Daniel N.

    2017-10-13

    Many materials of interest are polycrystals, i.e., aggregates of single crystals. Randomly distributed orientations of single crystals lead to macroscopically isotropic properties. Here we briefly review strategies of calculating effective isotropic second and third order elastic constants from the single crystal ones. Our main emphasis is on single crystals of cubic symmetry. Specifically, the averaging of third order elastic constants has not been particularly successful in the past, and discrepancies have often been attributed to texturing of polycrystals as well as to uncertainties in the measurement of elastic constants of both poly and single crystals. While this may well be true, we also point out here shortcomings in the theoretical averaging framework.
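
    As one concrete example of the averaging strategies reviewed here, the sketch below implements the standard Voigt-Reuss-Hill bounds for the second-order constants of a cubic crystal. The copper constants are approximate literature values used only for illustration, and the more difficult third-order averaging discussed in the paper is not attempted.

    ```python
    def voigt_reuss_hill_cubic(c11, c12, c44):
        """Effective isotropic bulk and shear moduli of an untextured polycrystal
        built from cubic single crystals (second-order constants only)."""
        k = (c11 + 2.0 * c12) / 3.0                     # bulk modulus (same in both bounds)
        g_voigt = (c11 - c12 + 3.0 * c44) / 5.0         # uniform-strain (upper) bound
        g_reuss = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12))  # uniform-stress (lower) bound
        g_hill = 0.5 * (g_voigt + g_reuss)              # Hill average
        return k, g_voigt, g_reuss, g_hill

    # Example: copper single-crystal constants in GPa (approximate literature values)
    print(voigt_reuss_hill_cubic(c11=168.4, c12=121.4, c44=75.4))
    ```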

  17. Position of planet X obtained from motion of near-parabolic comets

    NASA Astrophysics Data System (ADS)

    Medvedev, Yurii; Vavilov, Dmitrii

    2016-10-01

    The authors of Batygin and Brown (2016) proposed that a planet with 10 Earth masses on an orbit with a 700 AU semi-major axis and 0.6 eccentricity can explain the observed distribution of Kuiper Belt objects around Sedna. Fienga et al. (2016) then used the INPOP planetary ephemerides model as a sensor for testing for an additional body in the solar system. They defined the planet's position on the orbit using the most sensitive data set, the Cassini radio ranging data. Here we use near-parabolic comets to determine the planet's position on the orbit. Assuming that some comets approached the planet in the past, we searched for comets with a low Minimum Orbit Intersection Distance (MOID) with the planet's orbit. From the list of 768 near-parabolic comets, five "new" comets with hyperbolic orbits were chosen. We considered two cases of the planet's motion: direct and inverse. In the case of direct motion the true anomaly of the planet lies in the interval [176°, 184°] and, thus, the right ascension, declination, and geocentric distance of the planet are in the intervals [83°, 90°], [8°, 10°], and [1110, 1120] AU, respectively. In the case of inverse motion the true anomaly is in [212°, 223°] and the other values are in the intervals [48°, 58°], [-12°, -6°], and [790, 910] AU. For comparison with the direct motion, the true anomaly for the inverse motion, v, should be transformed to 360° - v, which gives the interval [137°, 148°]; this falls within the intervals of the true anomaly of possible planet positions given by Fienga et al. (2016). References: Batygin, K. & Brown, M. E., 2016, Evidence for a distant giant planet in the Solar system, Astronomical Journal, v. 151, 22. Fienga, A., Laskar, J., Manche, H., & Gastineau, M., 2016, Constraints on the location of a possible 9th planet derived from the Cassini data, Astronomy & Astrophysics, v. 587, L8.

  18. pyAmpli: an amplicon-based variant filter pipeline for targeted resequencing data.

    PubMed

    Beyens, Matthias; Boeckx, Nele; Van Camp, Guy; Op de Beeck, Ken; Vandeweyer, Geert

    2017-12-14

    Haloplex targeted resequencing is a popular method to analyze both germline and somatic variants in gene panels. However, involved wet-lab procedures may introduce false positives that need to be considered in subsequent data-analysis. No variant filtering rationale addressing amplicon enrichment related systematic errors, in the form of an all-in-one package, exists to our knowledge. We present pyAmpli, a platform independent parallelized Python package that implements an amplicon-based germline and somatic variant filtering strategy for Haloplex data. pyAmpli can filter variants for systematic errors by user pre-defined criteria. We show that pyAmpli significantly increases specificity, without reducing sensitivity, essential for reporting true positive clinical relevant mutations in gene panel data. pyAmpli is an easy-to-use software tool which increases the true positive variant call rate in targeted resequencing data. It specifically reduces errors related to PCR-based enrichment of targeted regions.

  19. Risk management and precaution: insights on the cautious use of evidence.

    PubMed Central

    Hrudey, Steve E; Leiss, William

    2003-01-01

    Risk management, done well, should be inherently precautionary. Adopting an appropriate degree of precaution with respect to feared health and environmental hazards is fundamental to risk management. The real problem is in deciding how precautionary to be in the face of inevitable uncertainties, demanding that we understand the equally inevitable false positives and false negatives from screening evidence. We consider a framework for detection and judgment of evidence of well-characterized hazards, using the concepts of sensitivity, specificity, positive predictive value, and negative predictive value that are well established for medical diagnosis. Our confidence in predicting the likelihood of a true danger inevitably will be poor for rare hazards because of the predominance of false positives; failing to detect a true danger is less likely because false negatives must be rarer than the danger itself. Because most controversial environmental hazards arise infrequently, this truth poses a dilemma for risk management. PMID:14527835
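
    The base-rate effect described above is easy to quantify: for a well-characterized hazard, the positive and negative predictive values follow from sensitivity, specificity, and prevalence. The sketch below uses illustrative numbers (a 95%-sensitive, 95%-specific screen and a 0.1% prevalence, both assumed) to show how false positives dominate for rare hazards while false negatives remain rarer than the hazard itself.

    ```python
    def predictive_values(sensitivity, specificity, prevalence):
        """Positive and negative predictive values for a screen applied to a
        population in which the hazard occurs with the given prevalence."""
        tp = sensitivity * prevalence
        fp = (1 - specificity) * (1 - prevalence)
        fn = (1 - sensitivity) * prevalence
        tn = specificity * (1 - prevalence)
        return tp / (tp + fp), tn / (tn + fn)

    # A good screen (95% sensitive, 95% specific) applied to a rare hazard (0.1% prevalence)
    ppv, npv = predictive_values(0.95, 0.95, 0.001)
    print(f"PPV = {ppv:.3f}, NPV = {npv:.5f}")   # PPV ~ 0.019: false positives dominate
    ```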

  20. High luminance monochrome vs. color displays: impact on performance and search

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.; Roehrig, Hans; Matsui, Takashi

    2011-03-01

    To determine if diagnostic accuracy and visual search efficiency with a high luminance medical-grade color display are equivalent to a high luminance medical-grade monochrome display. Six radiologists viewed DR chest images, half with a solitary pulmonary nodule and half without. Observers reported whether or not a nodule was present and their confidence in that decision. Total viewing time per image was recorded. On a subset of 15 cases eye-position was recorded. Confidence data were analyzed using MRMC ROC techniques. There was no statistically significant difference (F = 0.0136, p = 0.9078) between color (mean Az = 0.8981, se = 0.0065) and monochrome (mean Az = 0.8945, se = 0.0148) diagnostic performance. Total viewing time per image did not differ significantly (F = 0.392, p = 0.5315) as a function of color (mean = 27.36 sec, sd = 12.95) vs monochrome (mean = 28.04, sd = 14.36) display. There were no significant differences in decision dwell times (true and false, positive and negative) overall for color vs monochrome displays (F = 0.133, p = 0.7154). The true positive (TP) and false positive (FP) decisions were associated with the longest dwell times, the false negatives (FN) with slightly shorter dwell times, and the true negative decisions (TN) with the shortest (F = 50.552, p < 0.0001) and these trends were consistent for both color and monochrome displays. Current color medical-grade displays are suitable for primary diagnostic interpretation in clinical radiology.

  1. The Forced Hard Spring Equation

    ERIC Educational Resources Information Center

    Fay, Temple H.

    2006-01-01

    Through numerical investigations, various examples of the Duffing type forced spring equation with ε positive are studied. Since ε is positive, all solutions to the associated homogeneous equation are periodic and the same is true with the forcing applied. The damped equation exhibits steady state trajectories with the interesting…
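
    One representative form of the hard-spring equation being discussed, written here as an assumed example (the damping coefficient δ, forcing amplitude F, and frequency ω are illustrative symbols, not values taken from the article):

    ```latex
    % Representative hard-spring Duffing equation (assumed form for illustration):
    \[
      \ddot{x} + \delta\,\dot{x} + x + \varepsilon x^{3} = F\cos(\omega t),
      \qquad \varepsilon > 0 .
    \]
    % With \varepsilon > 0 the restoring force x + \varepsilon x^{3} stiffens with
    % amplitude (a "hard" spring), and the unforced, undamped equation
    % \ddot{x} + x + \varepsilon x^{3} = 0 has only periodic solutions.
    ```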

  2. Axillary lymph node metastases in patients with breast carcinomas: assessment with nonenhanced versus uspio-enhanced MR imaging.

    PubMed

    Memarsadeghi, Mazda; Riedl, Christopher C; Kaneider, Andreas; Galid, Arik; Rudas, Margaretha; Matzek, Wolfgang; Helbich, Thomas H

    2006-11-01

    To prospectively assess the accuracy of nonenhanced versus ultrasmall superparamagnetic iron oxide (USPIO)-enhanced magnetic resonance (MR) imaging for depiction of axillary lymph node metastases in patients with breast carcinoma, with histopathologic findings as reference standard. The study was approved by the university ethics committee; written informed consent was obtained. Twenty-two women (mean age, 60 years; range, 40-79 years) with breast carcinomas underwent nonenhanced and USPIO-enhanced (2.6 mg of iron per kilogram of body weight intravenously administered) transverse T1-weighted and transverse and sagittal T2-weighted and T2*-weighted MR imaging in adducted and elevated arm positions. Two experienced radiologists, blinded to the histopathologic findings, analyzed images of axillary lymph nodes with regard to size, morphologic features, and USPIO uptake. A third independent radiologist served as a tiebreaker if consensus between two readers could not be reached. Visual and quantitative analyses of MR images were performed. Sensitivity, specificity, and accuracy values were calculated. To assess the effect of USPIO after administration, signal-to-noise ratio (SNR) changes were statistically analyzed with repeated-measurements analysis of variance (mixed model) for MR sequences. At nonenhanced MR imaging, of 133 lymph nodes, six were rated as true-positive, 99 as true-negative, 23 as false-positive, and five as false-negative. At USPIO-enhanced MR imaging, 11 lymph nodes were rated as true-positive, 120 as true-negative, two as false-positive, and none as false-negative. In two metastatic lymph nodes in two patients with more than one metastatic lymph node, a consensus was not reached. USPIO-enhanced MR imaging revealed a node-by-node sensitivity, specificity, and accuracy of 100%, 98%, and 98%, respectively. At USPIO-enhanced MR imaging, no metastatic lymph nodes were missed on a patient-by-patient basis. Significant interactions indicating differences in the decrease of SNR values for metastatic and nonmetastatic lymph nodes were found for all sequences (P < .001 to P = .022). USPIO-enhanced MR imaging appears valuable for assessment of axillary lymph node metastases in patients with breast carcinomas and is superior to nonenhanced MR imaging.
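
    The reported accuracy figures for USPIO-enhanced imaging can be checked directly from the node counts given above; the short sketch below does the arithmetic (the two nodes without reader consensus are excluded, as in the abstract).

    ```python
    def diagnostic_summary(tp, tn, fp, fn):
        """Node-by-node sensitivity, specificity, and accuracy from raw counts."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        return sensitivity, specificity, accuracy

    # Counts reported above for USPIO-enhanced MR imaging
    print(diagnostic_summary(tp=11, tn=120, fp=2, fn=0))   # ~ (1.00, 0.98, 0.98)
    ```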

  3. Comparative diagnostic value of 18F-fluoride PET-CT versus MRI for skull-base bone invasion in nasopharyngeal carcinoma.

    PubMed

    Le, Yali; Chen, Yu; Zhou, Fan; Liu, Guangfu; Huang, Zhanwen; Chen, Yue

    2016-10-01

    This study compared the diagnostic value of 18F-fluoride PET-computed tomography (PET-CT) and MRI in skull-base bone erosion in nasopharyngeal carcinoma (NPC) patients. A total of 93 patients with biopsy-confirmed NPC were enrolled, including 68 men and 25 women between 23 and 74 years of age. All patients were evaluated by both 18F-fluoride PET-CT and MRI, and the interval between the two imaging examinations was less than 20 days. The patients received no treatment either before or between scans. The studies were interpreted by two nuclear medicine physicians or two radiologists with more than 10 years of professional experience who were blinded to both the diagnosis and the results of the other imaging studies. The reference standard was skull-base bone erosion at a 20-week follow-up imaging study. On the basis of the results of the follow-up imaging studies, 52 patients showed skull-base bone erosion. The numbers of true positives, false positives, true negatives, and false negatives with 18F-fluoride PET-CT were 49, 4, 37, and 3, respectively. The numbers of true positives, false positives, true negatives, and false negatives with MRI were 46, 5, 36, and 6, respectively. The sensitivity, specificity, and crude accuracy of 18F-fluoride PET-CT were 94.23, 90.24, and 92.47%, respectively; for MRI, these values were 88.46, 87.80, and 88.17%. Of the 52 patients, 43 showed positive findings both on 18F-fluoride PET-CT and on MRI. Within the patient cohort, 18F-fluoride PET-CT and MRI detected 178 and 135 bone lesions, respectively. Both 18F-fluoride PET-CT and MRI have high sensitivity, specificity, and crude accuracy for detecting skull-base bone invasion in patients with NPC. 18F-fluoride PET-CT detected more lesions than did MRI in the skull-base bone. This suggests that 18F-fluoride PET-CT has a certain advantage in evaluating the skull-base bone of NPC patients. Combining the two methods could improve the diagnostic accuracy of skull-base bone invasion for NPC.

  4. Costs and Cost Effectiveness of Three Approaches for Cervical Cancer Screening among HIV-Positive Women in Johannesburg, South Africa

    PubMed Central

    Lince-Deroche, Naomi; Phiri, Jane; Michelow, Pam; Smith, Jennifer S.; Firnhaber, Cindy

    2015-01-01

    Background South Africa has high rates of HIV and HPV and high incidence and mortality from cervical cancer. However, cervical cancer is largely preventable when early screening and treatment are available. We estimate the costs and cost-effectiveness of conventional cytology (Pap), visual inspection with acetic acid (VIA) and HPV DNA testing for detecting cases of CIN2+ among HIV-infected women currently taking antiretroviral treatment at a public HIV clinic in Johannesburg, South Africa. Methods Method effectiveness was derived from a validation study completed at the clinic. Costs were estimated from the provider perspective using micro-costing between June 2013-April 2014. Capital costs were annualized using a discount rate of 3%. Two different service volume scenarios were considered. Threshold analysis was used to explore the potential for reducing the cost of HPV DNA testing. Results VIA was least costly in both scenarios. In the higher volume scenario, the average cost per procedure was US$ 3.67 for VIA, US$ 8.17 for Pap and US$ 54.34 for HPV DNA. Colposcopic biopsies cost on average US$ 67.71 per procedure. VIA was least sensitive but most cost-effective at US$ 17.05 per true CIN2+ case detected. The cost per case detected for Pap testing was US$ 130.63 using a conventional definition for positive results and US$ 187.52 using a more conservative definition. HPV DNA testing was US$ 320.09 per case detected. Colposcopic biopsy costs largely drove the total and per case costs. A 71% reduction in HPV DNA screening costs would make it competitive with the conservative Pap definition. Conclusions Women need access to services which meet their needs and address the burden of cervical dysplasia and cancer in this region. Although most cost-effective, VIA may require more frequent screening due to low sensitivity, an important consideration for an HIV-positive population with increased risk for disease progression. PMID:26569487

  5. Costs and Cost Effectiveness of Three Approaches for Cervical Cancer Screening among HIV-Positive Women in Johannesburg, South Africa.

    PubMed

    Lince-Deroche, Naomi; Phiri, Jane; Michelow, Pam; Smith, Jennifer S; Firnhaber, Cindy

    2015-01-01

    South Africa has high rates of HIV and HPV and high incidence and mortality from cervical cancer. However, cervical cancer is largely preventable when early screening and treatment are available. We estimate the costs and cost-effectiveness of conventional cytology (Pap), visual inspection with acetic acid (VIA) and HPV DNA testing for detecting cases of CIN2+ among HIV-infected women currently taking antiretroviral treatment at a public HIV clinic in Johannesburg, South Africa. Method effectiveness was derived from a validation study completed at the clinic. Costs were estimated from the provider perspective using micro-costing between June 2013-April 2014. Capital costs were annualized using a discount rate of 3%. Two different service volume scenarios were considered. Threshold analysis was used to explore the potential for reducing the cost of HPV DNA testing. VIA was least costly in both scenarios. In the higher volume scenario, the average cost per procedure was US$ 3.67 for VIA, US$ 8.17 for Pap and US$ 54.34 for HPV DNA. Colposcopic biopsies cost on average US$ 67.71 per procedure. VIA was least sensitive but most cost-effective at US$ 17.05 per true CIN2+ case detected. The cost per case detected for Pap testing was US$ 130.63 using a conventional definition for positive results and US$ 187.52 using a more conservative definition. HPV DNA testing was US$ 320.09 per case detected. Colposcopic biopsy costs largely drove the total and per case costs. A 71% reduction in HPV DNA screening costs would make it competitive with the conservative Pap definition. Women need access to services which meet their needs and address the burden of cervical dysplasia and cancer in this region. Although most cost-effective, VIA may require more frequent screening due to low sensitivity, an important consideration for an HIV-positive population with increased risk for disease progression.
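
    The "cost per true case detected" figures above come from dividing total program cost by the number of true CIN2+ cases found. The sketch below only illustrates that arithmetic: the per-procedure costs are taken from the abstract, but the cohort size, prevalence, sensitivity, and referral rate are invented for illustration and are not the study's costing model, so the output will not reproduce the published US$ figures.

      # Hypothetical illustration of cost-per-true-case-detected arithmetic.
      def cost_per_case_detected(n_screened, cost_per_screen, sensitivity,
                                 prevalence, referral_rate, cost_colposcopy=67.71):
          screening_cost = n_screened * cost_per_screen
          colposcopy_cost = n_screened * referral_rate * cost_colposcopy
          true_cases_found = n_screened * prevalence * sensitivity
          return (screening_cost + colposcopy_cost) / true_cases_found

      # VIA at US$3.67 per screen under assumed (not published) cohort parameters.
      print(round(cost_per_case_detected(n_screened=1000, cost_per_screen=3.67,
                                         sensitivity=0.65, prevalence=0.10,
                                         referral_rate=0.20), 2))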

  6. Automatic detection and quantification of pulmonary arterio-venous malformations in hereditary hemorrhagic telangiectasia

    NASA Astrophysics Data System (ADS)

    Fetita, Catalin; Fortemps de Loneux, Thierry; Kouvahe, Amélé Florence; El Hajjam, Mostafa

    2017-03-01

    Hereditary hemorrhagic telangiectasia (HHT) is an autosomal dominant disorder characterized by the development of multiple arterio-venous malformations in the skin, mucous membranes, and/or visceral organs. A Pulmonary Arterio-Venous Malformation (PAVM) is an abnormal connection in which feeding arteries shunt directly into draining veins with no intervening capillary bed. This condition may lead to paradoxical embolism and hemorrhagic complications. PAVM patients should be systematically screened, as the spontaneous complication rate is high, reaching almost 50%. Contrast-enhanced chest CT is the reference screening and follow-up examination. When performed by experienced operators, percutaneous embolization of PAVMs is a safe, efficient and durable first-line therapy. The accuracy of PAVM detection and the quantification of its progression over time are key to the success of embolotherapy. In this paper, we propose an automatic method for PAVM detection and quantification relying on a model of vessel deformation, i.e. local caliber increase, based on mathematical morphology. The pulmonary field and vessels are first segmented using geodesic operators. The vessel caliber is estimated by means of a granulometric measure, and the local caliber increase is detected with a geodesic operator, the h-max domes. The detection sensitivity can be tuned through the choice of the h value, which models the irregularity of the vessel caliber along its axis; PAVM selection is then performed according to a clinical criterion of a feeding-artery diameter greater than 3 mm. The developed method was tested on a 20-patient dataset. A sensitivity study allowed the irregularity parameter to be chosen so as to maximize the true-positive ratio, which reached 85.4% on average. A specific false-positive reduction procedure targeting the vessel trunks of the arterio-venous tree near the mediastinum increases precision from 13% to 67%, with an average of 1.15 false positives per scan.
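
    A minimal sketch of the caliber-increase idea described above, assuming a binary pulmonary-vessel mask is already available: local caliber is approximated with a Euclidean distance transform (standing in for the paper's granulometric measure), local caliber increases are picked out with the h-maxima operator, and the 3 mm feeding-artery criterion becomes a threshold on the estimated local diameter. The voxel spacing and h value are assumptions.

      import numpy as np
      from scipy.ndimage import distance_transform_edt, label
      from skimage.morphology import h_maxima

      def pavm_candidates(vessel_mask, voxel_mm=0.7, h_mm=1.0, min_artery_mm=3.0):
          """Flag local caliber increases in a 3D binary vessel mask (numpy array)."""
          # Distance transform as a caliber proxy: value ~ local vessel radius in mm.
          radius_mm = distance_transform_edt(vessel_mask) * voxel_mm
          # Keep only maxima that stand out by at least h_mm over their surroundings.
          domes = h_maxima(radius_mm, h_mm).astype(bool)
          # Apply the clinical criterion: local diameter greater than 3 mm.
          candidates = domes & (2 * radius_mm > min_artery_mm)
          labeled, n_candidates = label(candidates)
          return labeled, n_candidates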

  7. Evaluation of the TrueBeam machine performance check (MPC) beam constancy checks for flattened and flattening filter-free (FFF) photon beams.

    PubMed

    Barnes, Michael P; Greer, Peter B

    2017-01-01

    Machine Performance Check (MPC) is an automated and integrated image-based tool for verification of beam and geometric performance of the TrueBeam linac. The aims of the study were to evaluate the MPC beam performance tests against current daily quality assurance (QA) methods, to compare MPC performance against more accurate monthly QA tests and to test the sensitivity of MPC to changes in beam performance. The MPC beam constancy checks test the beam output, uniformity, and beam center against the user defined baseline. MPC was run daily over a period of 5 months (n = 115) in parallel with the Daily QA3 device. Additionally, IC Profiler, in-house EPID tests, and ion chamber measurements were performed biweekly and results presented in a form directly comparable to MPC. The sensitivity of MPC was investigated using controlled adjustments of output, beam angle, and beam position steering. Over the period, MPC output agreed with ion chamber to within 0.6%. For an output adjustment of 1.2%, MPC was found to agree with ion chamber to within 0.17%. MPC beam center was found to agree with the in-house EPID method within 0.1 mm. A focal spot position adjustment of 0.4 mm (at isocenter) was measured with MPC beam center to within 0.01 mm. An average systematic offset of 0.5% was measured in the MPC uniformity and agreement of MPC uniformity with symmetry measurements was found to be within 0.9% for all beams. MPC uniformity detected a change in beam symmetry of 1.5% to within 0.3% and 0.9% of IC Profiler for flattened and FFF beams, respectively. © 2016 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  8. Performance Characteristics of a New LSO PET/CT Scanner With Extended Axial Field-of-View and PSF Reconstruction

    NASA Astrophysics Data System (ADS)

    Jakoby, Bjoern W.; Bercier, Yanic; Watson, Charles C.; Bendriem, Bernard; Townsend, David W.

    2009-06-01

    A new combined lutetium oxyorthosilicate (LSO) PET/CT scanner with an extended axial field-of-view (FOV) of 21.8 cm has been developed (Biograph TruePoint PET/CT with TrueV; Siemens Molecular Imaging) and introduced into clinical practice. The scanner includes the recently announced point spread function (PSF) reconstruction algorithm. The PET components incorporate four rings of 48 detector blocks, 5.4 cm × 5.4 cm in cross-section. Each block comprises a 13 × 13 matrix of 4 × 4 × 20 mm³ elements. Data are acquired with a 4.5 ns coincidence time window and an energy window of 425-650 keV. The physical performance of the new scanner has been evaluated according to the recently revised National Electrical Manufacturers Association (NEMA) NU 2-2007 standard and the results have been compared with a previous PET/CT design that incorporates three rings of block detectors with an axial coverage of 16.2 cm (Biograph TruePoint PET/CT; Siemens Molecular Imaging). In addition to the phantom measurements, patient Noise Equivalent Count Rates (NECRs) have been estimated for a range of patients with different body weights (42-154 kg). The average spatial resolution is the same for both scanners: 4.4 mm (FWHM) and 5.0 mm (FWHM) at 1 cm and 10 cm respectively from the center of the transverse FOV. The scatter fractions of the Biograph TruePoint and Biograph TruePoint TrueV are comparable at 32%. Compared to the three ring design, the system sensitivity and peak NECR with smoothed randoms correction (1R) increase by 82% and 73%, respectively. The increase in sensitivity from the extended axial coverage of the Biograph TruePoint PET/CT with TrueV should allow a decrease in either scan time or injected dose without compromising diagnostic image quality. The contrast improvement with the PSF reconstruction potentially offers enhanced detectability for small lesions.
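
    For reference, the noise-equivalent count rate quoted above is conventionally computed from the trues (T), scatter (S), and randoms (R) rates: with a smoothed, low-noise randoms estimate the "1R" form NECR = T^2 / (T + S + R) is used, while delayed-window randoms subtraction doubles the randoms term. A minimal sketch follows; the example rates are invented and are not measurements from this evaluation.

      def necr(trues, scatter, randoms, k=1.0):
          """Noise-equivalent count rate; k=1 for smoothed ('1R') randoms correction, k=2 for delayed-window subtraction."""
          return trues ** 2 / (trues + scatter + k * randoms)

      # Illustrative rates in kcps only, not Biograph TruePoint data.
      print(necr(trues=300.0, scatter=140.0, randoms=250.0, k=1.0))   # ~130 kcps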

  9. Determining Reactor Fuel Type from Continuous Antineutrino Monitoring

    NASA Astrophysics Data System (ADS)

    Jaffke, Patrick; Huber, Patrick

    2017-09-01

    We investigate the ability of an antineutrino detector to determine the fuel type of a reactor. A hypothetical 5-ton antineutrino detector is placed 25 m from the core and measures the spectral shape and rate of antineutrinos emitted by fission fragments in the core for a number of 90-d periods. Our results indicate that four major fuel types can be differentiated from the variation of fission fractions over the irradiation time with a true positive probability of detection at approximately 95%. In addition, we demonstrate that antineutrinos can identify the burnup at which weapons-grade mixed-oxide (MOX) fuel would be reduced to reactor-grade MOX, on average, providing assurance that plutonium-disposition goals are met. We also investigate removal scenarios where plutonium is purposefully diverted from a mixture of MOX and low-enriched uranium fuel. Finally, we discuss how our analysis is impacted by a spectral distortion around 6 MeV observed in the antineutrino spectrum measured from commercial power reactors.

  10. Modeling inter-signal arrival times for accurate detection of CAN bus signal injection attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Michael Roy; Bridges, Robert A; Combs, Frank L

    Modern vehicles rely on hundreds of on-board electronic control units (ECUs) communicating over in-vehicle networks. As external interfaces to the car control networks (such as the on-board diagnostic (OBD) port, auxiliary media ports, etc.) become common, and vehicle-to-vehicle / vehicle-to-infrastructure technology is in the near future, the attack surface for vehicles grows, exposing control networks to potentially life-critical attacks. This paper addresses the need for securing the CAN bus by detecting anomalous traffic patterns via unusual refresh rates of certain commands. While previous works have identified signal frequency as an important feature for CAN bus intrusion detection, this paper provides the first such algorithm with experiments on five attack scenarios. Our data-driven anomaly detection algorithm requires only five seconds of training time (on normal data) and achieves true positive / false discovery rates of 0.9998/0.00298, respectively (micro-averaged across the five experimental tests).
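
    The abstract does not spell out the detector, but the general idea of frequency-based CAN anomaly detection can be sketched as follows: learn the normal inter-arrival statistics of each arbitration ID from a few seconds of benign traffic, then flag messages whose inter-arrival time deviates by more than a few standard deviations. This is an illustrative stand-in rather than the authors' algorithm; the message representation and the threshold k are assumptions.

      import statistics
      from collections import defaultdict

      def train(messages):
          """messages: iterable of (timestamp_s, arbitration_id) from ~5 s of normal traffic."""
          gaps, last_seen = defaultdict(list), {}
          for t, mid in messages:
              if mid in last_seen:
                  gaps[mid].append(t - last_seen[mid])
              last_seen[mid] = t
          return {mid: (statistics.mean(g), statistics.pstdev(g))
                  for mid, g in gaps.items() if len(g) > 2}

      def is_anomalous(model, mid, gap, k=4.0):
          """Flag a message whose inter-arrival gap deviates more than k sigma from its learned mean."""
          if mid not in model:
              return False                     # unseen ID: defer to another rule
          mean, std = model[mid]
          return abs(gap - mean) > k * max(std, 1e-6)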

  11. 26 CFR 49.4262(b)-1 - Exclusion of certain travel.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... be the average mileage of the established route traveled by the carrier between given points under... 26 Internal Revenue 16 2010-04-01 2010-04-01 true Exclusion of certain travel. 49.4262(b)-1... Exclusion of certain travel. (a) In general. Under section 4262(b) taxable transportation does not include...

  12. Speech Perception Abilities of Adults with Dyslexia: Is There Any Evidence for a True Deficit?

    ERIC Educational Resources Information Center

    Hazan, Valerie; Messaoud-Galusi, Souhila; Rosen, Stuart; Nouwens, Suzan; Shakespeare, Bethanie

    2009-01-01

    Purpose: This study investigated whether adults with dyslexia show evidence of a consistent speech perception deficit by testing phoneme categorization and word perception in noise. Method: Seventeen adults with dyslexia and 20 average readers underwent a test battery including standardized reading, language and phonological awareness tests, and…

  13. 40 CFR 63.1275 - Glycol dehydration unit process vent standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 12 2012-07-01 2011-07-01 true Glycol dehydration unit process vent... Facilities § 63.1275 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart with an actual annual average natural gas flowrate equal to or...

  14. 40 CFR 63.1275 - Glycol dehydration unit process vent standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 11 2010-07-01 2010-07-01 true Glycol dehydration unit process vent... Facilities § 63.1275 Glycol dehydration unit process vent standards. (a) This section applies to each glycol dehydration unit subject to this subpart with an actual annual average natural gas flowrate equal to or...

  15. 21 CFR 173.322 - Chemicals used in delinting cottonseed.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Chemicals used in delinting cottonseed. 173.322... limitations as are provided: Substances Limitations alpha-Alkyl-omega-hydroxypoly-(oxyethylene) produced by...) having an average of 5 ethylene oxide units May be used at an application rate not to exceed 0.3 percent...

  16. 20 CFR 225.3 - PIA computation formulas.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... computed under one of two normal formulas determined by the employee's eligibility year. In addition, there.... The Average Monthly Earnings PIA formula is used to compute a PIA for one of two reasons: either the... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true PIA computation formulas. 225.3 Section 225.3...

  17. 20 CFR 225.3 - PIA computation formulas.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... computed under one of two normal formulas determined by the employee's eligibility year. In addition, there.... The Average Monthly Earnings PIA formula is used to compute a PIA for one of two reasons: either the... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true PIA computation formulas. 225.3 Section 225.3...

  18. 25 CFR 39.140 - How does a school qualify for a Small School Adjustment?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... lower grades and has a diploma-awarding high school component with an average instructional daily... 25 Indians 1 2012-04-01 2011-04-01 true How does a school qualify for a Small School Adjustment... INDIAN SCHOOL EQUALIZATION PROGRAM Indian School Equalization Formula Small School Adjustment § 39.140...

  19. Turning Fiction Into Non-fiction for Signal-to-Noise Ratio Estimation -- The Time-Multiplexed and Adaptive Split-Symbol Moments Estimator

    NASA Astrophysics Data System (ADS)

    Simon, M.; Dolinar, S.

    2005-08-01

    A means is proposed for realizing the generalized split-symbol moments estimator (SSME) of signal-to-noise ratio (SNR), i.e., one whose implementation on the average allows for a number of subdivisions (observables), 2L, per symbol beyond the conventional value of two, with other than an integer value of L. In theory, the generalized SSME was previously shown to yield optimum performance for a given true SNR, R, when L=R/sqrt(2) and thus, in general, the resulting estimator was referred to as the fictitious SSME. Here we present a time-multiplexed version of the SSME that allows it to achieve its optimum value of L as above (to the extent that it can be computed as the average of a sum of integers) at each value of SNR and as such turns fiction into non-fiction. Also proposed is an adaptive algorithm that allows the SSME to rapidly converge to its optimum value of L when in fact one has no a priori information about the true value of SNR.
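
    As a rough, convention-dependent sketch of the conventional two-subdivision SSME underlying the discussion above: each symbol is split into two half-symbol integrals, and the SNR is estimated from the second moments of their sum and difference. The factor of 2 below assumes the symbol SNR is defined as Es/N0; other conventions rescale the estimate. This is an illustrative reconstruction, not the generalized or time-multiplexed estimator described in the article.

      import numpy as np

      def ssme_snr(half1, half2):
          """Two-subdivision split-symbol SNR estimate from per-symbol half-sums (equal-length arrays)."""
          u = np.mean((half1 + half2) ** 2)    # signal adds coherently, noise adds in power
          v = np.mean((half1 - half2) ** 2)    # signal cancels, leaving noise only
          return (u - v) / (2.0 * v)           # ~ Es/N0 under the stated convention

      # Quick check with synthetic BPSK half-symbol samples at Es/N0 = 4 (about 6 dB).
      rng = np.random.default_rng(0)
      n, es_n0 = 100000, 4.0
      bits = rng.choice([-1.0, 1.0], size=n)
      sigma = np.sqrt(1.0 / (4.0 * es_n0))     # half-symbol noise std consistent with Es/N0 = 4
      h1 = 0.5 * bits + rng.normal(0.0, sigma, n)
      h2 = 0.5 * bits + rng.normal(0.0, sigma, n)
      print(ssme_snr(h1, h2))                  # ~4.0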

  20. Accuracies of the synthesized monochromatic CT numbers and effective atomic numbers obtained with a rapid kVp switching dual energy CT scanner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Larson, Sandra C.

    2011-04-15

    Purpose: This study was performed to investigate the accuracies of the synthesized monochromatic images and effective atomic number maps obtained with the new GE Discovery CT750 HD CT scanner. Methods: A Gammex-RMI model 467 tissue characterization phantom and the CT number linearity section of a Phantom Laboratory Catphan 600 phantom were scanned using the dual energy (DE) feature on the GE CT750 HD scanner. Synthesized monochromatic images at various energies between 40 and 120 keV and effective atomic number (Z_eff) maps were generated. Regions of interest were placed within these images/maps to measure the average monochromatic CT numbers and average Z_eff of the materials within these phantoms. The true Z_eff values were either supplied by the phantom manufacturer or computed using Mayneord's equation. The linear attenuation coefficients for the true CT numbers were computed using the NIST XCOM program with the input of manufacturer supplied elemental compositions and densities. The effects of small variations in the assumed true densities of the materials were also investigated. Finally, the effect of body size on the accuracies of the synthesized monochromatic CT numbers was investigated using a custom lumbar section phantom with and without an external fat-mimicking ring. Results: Other than the Z_eff of the simulated lung inserts in the tissue characterization phantom, which could not be measured by DECT, the Z_eff values of all of the other materials in the tissue characterization and Catphan phantoms were accurate to 15%. The accuracies of the synthesized monochromatic CT numbers of the materials in both phantoms varied with energy and material. For the 40-120 keV range, RMS errors between the measured and true CT numbers in the Catphan are 8-25 HU when the true CT numbers were computed using the nominal plastic densities. These RMS errors improve to 3-12 HU for assumed true densities within the nominal density ±0.02 g/cc range. The RMS errors between the measured and true CT numbers of the tissue mimicking materials in the tissue characterization phantom over the 40-120 keV range varied from about 6 HU-248 HU and did not improve as dramatically with small changes in assumed true density. Conclusions: Initial tests indicate that the Z_eff values computed with DECT on this scanner are reasonably accurate; however, the synthesized monochromatic CT numbers can be very inaccurate, especially for dense tissue mimicking materials at low energies. Furthermore, the synthesized monochromatic CT numbers of materials still depend on the amount of the surrounding tissues especially at low keV, demonstrating that the numbers are not truly monochromatic. Further research is needed to develop DE methods that produce more accurate synthesized monochromatic CT numbers.
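
    The "true" effective atomic numbers referenced above, when not supplied by the manufacturer, were computed with Mayneord's equation, Z_eff = (sum_i a_i * Z_i^2.94)^(1/2.94), where a_i is the fraction of electrons contributed by element i. A short worked example for water follows (the exponent 2.94 is the commonly used value; the phantom materials' compositions are not reproduced here).

      def z_eff_mayneord(electron_fractions, p=2.94):
          """electron_fractions: {atomic number Z: electron fraction a}; returns Mayneord's effective atomic number."""
          return sum(a * z ** p for z, a in electron_fractions.items()) ** (1.0 / p)

      # Water (H2O): 10 electrons per molecule -> H contributes 2/10, O contributes 8/10.
      print(round(z_eff_mayneord({1: 0.2, 8: 0.8}), 2))   # ~7.41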

  1. Intellectual factors in false memories of patients with schizophrenia.

    PubMed

    Zhu, Bi; Chen, Chuansheng; Loftus, Elizabeth F; Dong, Qi; Lin, Chongde; Li, Jun

    2018-07-01

    The current study explored the intellectual factors in false memories of 139 patients with schizophrenia, using a recognition task and an IQ test. The full-scale IQ score of the participants ranged from 57 to 144 (M = 100, SD = 14). The full IQ score had a negative correlation with false recognition in patients with schizophrenia, and positive correlations with high-confidence true recognition and discrimination rates. Further analyses with the subtests' scores revealed that false recognition was negatively correlated with scores of performance IQ (and one of its subtests: picture arrangement), whereas true recognition was positively correlated with scores of verbal IQ (and two of its subtests: information and digit span). High-IQ patients had less false recognition (overall or high-confidence false recognition), more high-confidence true recognition, and higher discrimination abilities than those with low IQ. These findings contribute to a better understanding of the cognitive mechanism in false memory of patients with schizophrenia, and are of practical relevance to the evaluation of memory reliability in patients with different intellectual levels. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. 18F-FDG SPECT/CT in the diagnosis of differentiated thyroid carcinoma with elevated thyroglobulin and negative iodine-131 scans.

    PubMed

    Ma, C; Wang, X; Shao, M; Zhao, L; Jiawei, X; Wu, Z; Wang, H

    2015-06-01

    The aim of the present study was to investigate the usefulness of 18F-FDG SPECT/CT in differentiated thyroid cancer (DTC) with elevated serum thyroglobulin (Tg) but negative iodine-131 scan. This was a retrospective review of patients with DTC recurrence who had 18F-FDG SPECT/CT or 18F-FDG PET/CT for elevated serum Tg but negative iodine-131 scans (March 2007-October 2012). After total thyroidectomy followed by radioiodine ablation, 86 consecutive patients with elevated Tg levels underwent 18F-FDG SPECT/CT or 18F-FDG PET/CT. Of these, 45 patients had 18F-FDG SPECT/CT; the other 41 patients had 18F-FDG PET/CT 3-4 weeks after thyroid hormone withdrawal. The results of 18F-FDG PET/CT and SPECT/CT were correlated with patient follow-up information, which included the results from subsequent imaging modalities such as neck ultrasound, MRI and CT, Tg levels, and histologic examination of surgical specimens. The diagnostic accuracy of the two imaging modalities was evaluated. On 18F-FDG SPECT/CT scans, 24 of 45 patients had positive findings: 22 were true positive and two were false positive, while six patients were true negative and 15 were false negative. The overall sensitivity, specificity, and accuracy of 18F-FDG SPECT/CT were 59.5%, 75% and 62.2%, respectively. Twenty-six of 41 patients had positive findings on 18F-FDG PET/CT scans: 23 were true positive and three were false positive, while nine patients were true negative and six were false negative. The overall sensitivity, specificity, and accuracy of 18F-FDG PET/CT were 79.3%, 81.8% and 78.1%, respectively. Clinical management changed for 13 (29%) of 45 patients on the basis of 18F-FDG SPECT/CT and for 14 (34%) of 41 patients on the basis of 18F-FDG PET/CT, including surgery, radiation therapy, or multikinase inhibitor therapy. Based on this retrospective analysis of 86 patients, 18F-FDG SPECT/CT has lower sensitivity than 18F-FDG PET/CT in the diagnosis of DTC recurrence with elevated Tg and a negative iodine-131 scan. The clinical application of FDG SPECT/CT is therefore limited, and it cannot replace PET/CT.

  3. Development and validation of a dual sensing scheme to improve accuracy of bradycardia and pause detection in an insertable cardiac monitor.

    PubMed

    Passman, Rod S; Rogers, John D; Sarkar, Shantanu; Reiland, Jerry; Reisfeld, Erin; Koehler, Jodi; Mittal, Suneet

    2017-07-01

    Undersensing of premature ventricular beats and low-amplitude R waves are primary causes for inappropriate bradycardia and pause detections in insertable cardiac monitors (ICMs). The purpose of this study was to develop and validate an enhanced algorithm to reduce inappropriately detected bradycardia and pause episodes. Independent data sets to develop and validate the enhanced algorithm were derived from a database of ICM-detected bradycardia and pause episodes in de-identified patients monitored for unexplained syncope. The original algorithm uses an auto-adjusting sensitivity threshold for R-wave sensing to detect tachycardia and avoid T-wave oversensing. In the enhanced algorithm, a second sensing threshold is used with a long blanking and fixed lower sensitivity threshold, looking for evidence of undersensed signals. Data reported includes percent change in appropriate and inappropriate bradycardia and pause detections as well as changes in episode detection sensitivity and positive predictive value with the enhanced algorithm. The validation data set, from 663 consecutive patients, consisted of 4904 (161 patients) bradycardia and 2582 (133 patients) pause episodes, of which 2976 (61%) and 996 (39%) were appropriately detected bradycardia and pause episodes. The enhanced algorithm reduced inappropriate bradycardia and pause episodes by 95% and 47%, respectively, with 1.7% and 0.6% reduction in appropriate episodes, respectively. The average episode positive predictive value improved by 62% (P < .001) for bradycardia detection and by 26% (P < .001) for pause detection, with an average relative sensitivity of 95% (P < .001) and 99% (P = .5), respectively. The enhanced dual sense algorithm for bradycardia and pause detection in ICMs substantially reduced inappropriate episode detection with a minimal reduction in true episode detection. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  4. BMPix and PEAK tools: New methods for automated laminae recognition and counting—Application to glacial varves from Antarctic marine sediment

    NASA Astrophysics Data System (ADS)

    Weber, M. E.; Reichelt, L.; Kuhn, G.; Pfeiffer, M.; Korff, B.; Thurow, J.; Ricken, W.

    2010-03-01

    We present tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray scale curves from images at pixel resolution. The PEAK tool uses the gray scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a smoothed curve. The zero-crossing algorithm counts every positive and negative halfway passage of the curve through a wide moving average, separating the record into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the curve into its frequency components before counting positive and negative passages. The algorithms are available at doi:10.1594/PANGAEA.729700. We applied the new methods successfully to tree rings, to well-dated and already manually counted marine varves from Saanich Inlet, and to marine laminae from the Antarctic continental margin. In combination with AMS14C dating, we found convincing evidence that laminations in Weddell Sea sites represent varves, deposited continuously over several millennia during the last glacial maximum. The new tools offer several advantages over previous methods. The counting procedures are based on a moving average generated from gray scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Also, the PEAK tool measures the thickness of each year or season. Since all information required is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
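
    A minimal sketch of the zero-crossing idea described above (not the PEAK tool itself): subtract a wide moving average from the gray-scale curve and count sign changes; each bright/dark pair of passages corresponds to one couplet, i.e. one varve year. The window width is an assumption.

      import numpy as np

      def count_couplets_zero_crossing(gray, window=51):
          """Count laminae couplets as sign changes of the gray-scale curve about a wide moving average."""
          kernel = np.ones(window) / window
          baseline = np.convolve(gray, kernel, mode="same")   # wide moving average
          sign = np.sign(np.asarray(gray, dtype=float) - baseline)
          sign = sign[sign != 0]                              # drop exact ties on the baseline
          crossings = np.count_nonzero(np.diff(sign))         # every passage through the baseline
          return crossings / 2.0                              # two passages (bright + dark) per couplet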

  5. Kilometric radiation power flux dependence on area of discrete aurora

    NASA Technical Reports Server (NTRS)

    Saflekos, N. A.; Burch, J. L.; Gurnett, D. A.; Anderson, R. R.; Sheehan, R. E.

    1989-01-01

    Kilometer wavelength radiation, measured from distant positions over the North Pole and over the Earth's equator, was compared to the area of discrete aurora imaged by several low-altitude spacecraft. Through correlative studies of auroral kilometric radiation (AKR) with about two thousand auroral images, a stereoscopic view of the average auroral acceleration region was obtained. A major result is that the total AKR power increases as the area of the discrete auroral oval increases. The implications are that the regions of parallel potentials or the auroral plasma cavities, in which AKR is generated, must possess the following attributes: (1) they are shallow in altitude and their radial position depends on wavelength, (2) they thread flux tubes of small cross section, (3) the generation mechanism in them reaches a saturation limit rapidly, and (4) their distribution over the discrete auroral oval is nearly uniform. The above statistical results are true for large samples collected over a long period of time (about six months). In the short term, AKR frequently exhibits temporal variations with scales as short as three minutes (the resolution of the averaged data used). These fluctuations are explainable by rapid quenchings as well as fast starts of the electron cyclotron maser mechanism. There were times when AKR was present at substantial power levels while optical emissions were below instrument thresholds. A recent theoretical result may account for this set of observations by predicting that suprathermal electrons, of energies as low as several hundred eV, can generate second harmonic AKR. The indirect observations of second harmonic AKR require that these electrons have mirror points high above the atmosphere so as to minimize auroral light emissions. The results provide evidence supporting the electron cyclotron maser mechanism.

  6. Measurement of fracture stress for 6000-series extruded aluminum alloy tube using multiaxial tube expansion testing method

    NASA Astrophysics Data System (ADS)

    Nagai, Keisuke; Kuwabara, Toshihiko; Ilinich, Andrey; Luckey, George

    2018-05-01

    A servo-controlled tension-internal pressure testing machine with an optical 3D digital image correlation system (DIC) is used to measure the multiaxial deformation behavior of an extruded aluminum alloy tube for a strain range from initial yield to fracture. The outer diameter of the test sample is 50.8 mm and wall thickness 2.8 mm. Nine linear stress paths are applied to the specimens: σɸ (axial true stress component) : σθ (circumferential true stress component) = 1:0, 4:1, 2:1, 4:3, 1:1, 3:4, 1:2, 1:4, and 0:1. The equivalent strain rate is held approximately constant at 5 × 10⁻⁴ s⁻¹. The forming limit curve (FLC) and forming limit stress curve (FLSC) are also measured. Moreover, the average true stress components inside a localized necking area are determined for each specimen from the thickness strain data for the localized necking area and the geometry of the fracture surface.

  7. Primary care visit use after positive fecal immunochemical test for colorectal cancer screening.

    PubMed

    Hillyer, Grace Clarke; Jensen, Christopher D; Zhao, Wei K; Neugut, Alfred I; Lebwohl, Benjamin; Tiro, Jasmin A; Kushi, Lawrence H; Corley, Douglas A

    2017-10-01

    For some patients, positive cancer screening test results can be a stressful experience that can affect future screening compliance and increase the use of health care services unrelated to medically indicated follow-up. Among 483,216 individuals aged 50 to 75 years who completed a fecal immunochemical test to screen for colorectal cancer at a large integrated health care setting between 2007 and 2011, the authors evaluated whether a positive test was associated with a net change in outpatient primary care visit use within the year after screening. Multivariable regression models were used to evaluate the relationship between test result group and net changes in primary care visits after fecal immunochemical testing. In the year after the fecal immunochemical test, use increased by 0.60 clinic visits for patients with true-positive results. The absolute change in visits was largest (3.00) among individuals with positive test results who were diagnosed with colorectal cancer, but significant small increases also were found for patients treated with polypectomy and who had no neoplasia (0.36) and those with a normal examination and no polypectomy performed (0.17). Groups of patients who demonstrated an increase in net visit use compared with the true-negative group included patients with true-positive results (odds ratio [OR], 1.60; 95% confidence interval [95% CI], 1.54-1.66), and positive groups with a colorectal cancer diagnosis (OR, 7.19; 95% CI, 6.12-8.44), polypectomy/no neoplasia (OR, 1.37; 95% CI, 1.27-1.48), and normal examination/no polypectomy (OR, 1.24; 95% CI, 1.18-1.30). Given the large size of outreach programs, these small changes can cumulatively generate thousands of excess visits and have a substantial impact on total health care use. Therefore, these changes should be included in colorectal cancer screening cost models and their causes investigated further. Cancer 2017;123:3744-3753. © 2017 American Cancer Society.

  8. Establishing a sample-to cut-off ratio for lab-diagnosis of hepatitis C virus in Indian context.

    PubMed

    Tiwari, Aseem K; Pandey, Prashant K; Negi, Avinash; Bagga, Ruchika; Shanker, Ajay; Baveja, Usha; Vimarsh, Raina; Bhargava, Richa; Dara, Ravi C; Rawat, Ganesh

    2015-01-01

    Lab-diagnosis of hepatitis C virus (HCV) is based on detecting specific antibodies by enzyme immuno-assay (EIA) or chemiluminescence immuno-assay (CIA). Center for Disease Control reported that signal-to-cut-off (s/co) ratios in anti-HCV antibody tests like EIA/CIA can be used to predict the probable result of supplemental test; above a certain s/co value it is most likely to be true-HCV positive result and below that certain s/co it is most likely to be false-positive result. A prospective study was undertaken in patients in tertiary care setting for establishing this "certain" s/co value. The study was carried out in consecutive patients requiring HCV testing for screening/diagnosis and medical management. These samples were tested for anti-HCV on CIA (VITROS(®) Anti-HCV assay, Ortho-Clinical Diagnostics, New Jersey) for calculating s/co value. The supplemental nucleic acid test used was polymerase chain reaction (PCR) (Abbott). PCR test results were used to define true negatives, false negatives, true positives, and false positives. Performance of different putative s/co ratios versus PCR was measured using sensitivity, specificity, positive predictive value and negative predictive value and most appropriate s/co was considered on basis of highest specificity at sensitivity of at least 95%. An s/co ratio of ≥6 worked out to be over 95% sensitive and almost 92% specific in 438 consecutive patient samples tested. The s/co ratio of six can be used for lab-diagnosis of HCV infection; those with s/co higher than six can be diagnosed to have HCV infection without any need for supplemental assays.
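
    The threshold-selection rule described above (the highest specificity among cut-offs that keep sensitivity at or above 95%) can be written down directly. The sketch below assumes arrays of s/co values and PCR-confirmed labels; it illustrates the selection logic and is not the study's code.

      import numpy as np

      def choose_sco_cutoff(sco, pcr_positive, min_sensitivity=0.95):
          """Pick the s/co cut-off with the highest specificity among those with sensitivity >= min_sensitivity."""
          sco = np.asarray(sco, dtype=float)
          pos = np.asarray(pcr_positive, dtype=bool)
          best = None
          for cutoff in np.unique(sco):
              called_positive = sco >= cutoff
              sensitivity = called_positive[pos].mean() if pos.any() else 0.0
              specificity = (~called_positive[~pos]).mean() if (~pos).any() else 0.0
              if sensitivity >= min_sensitivity and (best is None or specificity > best[1]):
                  best = (cutoff, specificity, sensitivity)
          return best   # (cut-off, specificity, sensitivity) or None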

  9. Accuracy of ultrasound for the prediction of placenta accreta.

    PubMed

    Bowman, Zachary S; Eller, Alexandra G; Kennedy, Anne M; Richards, Douglas S; Winter, Thomas C; Woodward, Paula J; Silver, Robert M

    2014-08-01

    Ultrasound has been reported to be greater than 90% sensitive for the diagnosis of accreta. Prior studies may be subject to bias because of single expert observers, suspicion for accreta, and knowledge of risk factors. We aimed to assess the accuracy of ultrasound for the prediction of accreta. Patients with accreta at a single academic center were matched to patients with placenta previa, but no accreta, by year of delivery. Ultrasound studies with views of the placenta were collected, deidentified, blinded to clinical history, and placed in random sequence. Six investigators prospectively interpreted each study for the presence of accreta and findings reported to be associated with its diagnosis. Sensitivity, specificity, positive predictive, negative predictive value, and accuracy were calculated. Characteristics of accurate findings were compared using univariate and multivariate analyses. Six investigators examined 229 ultrasound studies from 55 patients with accreta and 56 controls for 1374 independent observations. 1205/1374 (87.7% overall, 90% controls, 84.9% cases) studies were given a diagnosis. There were 371 (27.0%) true positives; 81 (5.9%) false positives; 533 (38.8%) true negatives, 220 (16.0%) false negatives, and 169 (12.3%) with uncertain diagnosis. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 53.5%, 88.0%, 82.1%, 64.8%, and 64.8%, respectively. In multivariate analysis, true positives were more likely to have placental lacunae (odds ratio [OR], 1.5; 95% confidence interval [CI], 1.4-1.6), loss of retroplacental clear space (OR, 2.4; 95% CI, 1.1-4.9), or abnormalities on color Doppler (OR, 2.1; 95% CI, 1.8-2.4). Ultrasound for the prediction of placenta accreta may not be as sensitive as previously described. Copyright © 2014 Mosby, Inc. All rights reserved.

  10. The observed North-South Asymmetry of IMF spiral

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahluwalia, H.S.; Xue, S.S.

    1995-06-01

    The authors appraise the finding, reported in the literature, that a small but finite north-south asymmetry (NSA) exists in the interplanetary magnetic field (IMF) spiral at Earth's orbit. The authors have analyzed the data available on the Omnitape for the 1963 to 1993 period. The coverage is very uneven, ranging from less than 40% to greater than 80%. The magnitude of NSA fluctuates considerably during the period of this analysis. This is true even if one considers the period 1967 to 1982 when the coverage is greater than 50%. The values of NSA derived from 27-day averages of the hourly data points range from greater than +50 deg to less than −40 deg. If one arranges the data according to the magnetic polarity epochs of the solar polar field, the epoch averages give a magnitude of NSA of less than approximately 2 deg. This is also true if one considers the average magnitude of NSA for the 1965 to 1993 period, when the coverage is greater than 25%. A genuine, persistent NSA of the IMF spiral is likely to affect the cosmic ray modulation, on either side of the current sheet, by introducing a corresponding change in the radial diffusion coefficient of energetic particle transport in the heliosphere. The annual mean values of the observed NSA of the IMF spiral are compared with the observed off-ecliptic contributions to cosmic ray modulation.

  11. Motion patterns in acupuncture needle manipulation.

    PubMed

    Seo, Yoonjeong; Lee, In-Seon; Jung, Won-Mo; Ryu, Ho-Sun; Lim, Jinwoong; Ryu, Yeon-Hee; Kang, Jung-Won; Chae, Younbyoung

    2014-10-01

    In clinical practice, acupuncture manipulation is highly individualised for each practitioner. Before we establish a standard for acupuncture manipulation, it is important to understand completely the manifestations of acupuncture manipulation in the actual clinic. To examine motion patterns during acupuncture manipulation, we generated a fitted model of practitioners' motion patterns and evaluated their consistencies in acupuncture manipulation. Using a motion sensor, we obtained real-time motion data from eight experienced practitioners while they conducted acupuncture manipulation using their own techniques. We calculated the average amplitude and duration of a sampled motion unit for each practitioner and, after normalisation, we generated a true regression curve of motion patterns for each practitioner using a generalised additive mixed modelling (GAMM). We observed significant differences in rotation amplitude and duration in motion samples among practitioners. GAMM showed marked variations in average regression curves of motion patterns among practitioners but there was strong consistency in motion parameters for individual practitioners. The fitted regression model showed that the true regression curve accounted for an average of 50.2% of variance in the motion pattern for each practitioner. Our findings suggest that there is great inter-individual variability between practitioners, but remarkable intra-individual consistency within each practitioner. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  12. Performance characteristics of broth-only cultures after revision total joint arthroplasty.

    PubMed

    Smith, Eric B; Cai, Jenny; Wynne, Rachael; Maltenfort, Mitchell; Good, Robert P

    2014-11-01

    Surgeons frequently obtain intraoperative cultures at the time of revision total joint arthroplasty. The use of broth or liquid medium before applying the sample to the agar medium may be associated with contamination and false-positive cultures; however, the degree to which this is the case is not known. We (1) calculated the performance characteristics of broth-only cultures (sensitivity, specificity, positive predictive value, and negative predictive value) and (2) characterized the organisms identified in broth to determine whether a specific organism showed increased proclivity for true-positive periprosthetic joint infection (PJI). A single-institution retrospective chart review was performed on 257 revision total joint arthroplasties from 2009 through 2010. One hundred ninety (74%) had cultures for review. All culture results, as well as treatment, if any, were documented and patients were followed for a minimum of 1 year for evidence of PJI. Cultures were measured as either positive from the broth only or broth negative. The true diagnosis of infection was determined by the Musculoskeletal Infection Society criteria during the preoperative workup or postoperatively at 1 year for purposes of calculating the performance characteristics of the broth-only culture. The sensitivity, specificity, positive predictive value, and negative predictive value were 19%, 88%, 13%, and 92%, respectively. The most common organism identified was coagulase-negative Staphylococcus (16 of 24 cases, 67%). Coagulase-negative Staphylococcus was present in all three true-positive cases; however, it was also found in 13 of the false-positive cases. The broth-only positive cultures showed poor sensitivity and positive predictive value but good specificity and negative predictive value. The good specificity indicates that it can help to rule in the presence of PJI; however, the poor sensitivity makes broth-only culture an unreliable screening test. We recommend that broth-only culture results be carefully scrutinized and decisions on the diagnosis and treatment of infection should be based specifically on the Musculoskeletal Infection Society criteria. Level IV, diagnostic study. See Instructions for Authors for a complete description of levels of evidence.

  13. Intra-operative Localization of Brachytherapy Implants Using Intensity-based Registration

    PubMed Central

    KarimAghaloo, Z.; Abolmaesumi, P.; Ahmidi, N.; Chen, T.K.; Gobbi, D. G.; Fichtinger, G.

    2010-01-01

    In prostate brachytherapy, a transrectal ultrasound (TRUS) will show the prostate boundary but not all the implanted seeds, while fluoroscopy will show all the seeds clearly but not the boundary. We propose an intensity-based registration between TRUS images and the implant reconstructed from fluoroscopy as a means of achieving accurate intra-operative dosimetry. The TRUS images are first filtered and compounded, and then registered to the fluoroscopy model via mutual information. A training phantom was implanted with 48 seeds and imaged. Various ultrasound filtering techniques were analyzed, and the best results were achieved with the Bayesian combination of adaptive thresholding, phase congruency, and compensation for the non-uniform ultrasound beam profile in the elevation and lateral directions. The average registration error between corresponding seeds relative to the ground truth was 0.78 mm. The effect of false positives and false negatives in ultrasound was investigated by masking true seeds in the fluoroscopy volume or adding false seeds. The registration error remained below 1.01 mm when the false positive rate was 31%, and 0.96 mm when the false negative rate was 31%. This fully automated method delivers excellent registration accuracy and robustness in phantom studies, and promises to demonstrate clinically adequate performance on human data as well. Keywords: Prostate brachytherapy, Ultrasound, Fluoroscopy, Registration. PMID:21152376
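
    A compact sketch of intensity-based rigid registration driven by mutual information, in the spirit of the TRUS-to-fluoroscopy alignment described above. It uses SimpleITK's Mattes mutual information metric; the file names, parameter values, and choice of a rigid (Euler) transform are assumptions rather than the authors' pipeline.

      import SimpleITK as sitk

      # Hypothetical inputs: a filtered/compounded TRUS volume and a volume rendered from the fluoroscopic seed model.
      fixed = sitk.ReadImage("trus_compounded.nii", sitk.sitkFloat32)
      moving = sitk.ReadImage("fluoro_seed_model.nii", sitk.sitkFloat32)

      reg = sitk.ImageRegistrationMethod()
      reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
      reg.SetMetricSamplingStrategy(reg.RANDOM)
      reg.SetMetricSamplingPercentage(0.2)
      reg.SetInterpolator(sitk.sitkLinear)
      reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                                   numberOfIterations=200)
      reg.SetInitialTransform(sitk.CenteredTransformInitializer(
          fixed, moving, sitk.Euler3DTransform(),
          sitk.CenteredTransformInitializerFilter.GEOMETRY))

      rigid = reg.Execute(fixed, moving)       # rigid transform that maximizes mutual information
      print(reg.GetMetricValue(), rigid)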

  14. A self-adaption compensation control for hysteresis nonlinearity in piezo-actuated stages based on Pi-sigma fuzzy neural network

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Zhou, Miaolei

    2018-04-01

    Piezo-actuated stages are widely applied in the high-precision positioning field nowadays. However, the inherent hysteresis nonlinearity in piezo-actuated stages greatly deteriorates the positioning accuracy of piezo-actuated stages. This paper first utilizes a nonlinear autoregressive moving average with exogenous inputs (NARMAX) model based on the Pi-sigma fuzzy neural network (PSFNN) to construct an online rate-dependent hysteresis model for describing the hysteresis nonlinearity in piezo-actuated stages. In order to improve the convergence rate of PSFNN and modeling precision, we adopt the gradient descent algorithm featuring three different learning factors to update the model parameters. The convergence of the NARMAX model based on the PSFNN is analyzed effectively. To ensure that the parameters can converge to the true values, the persistent excitation condition is considered. Then, a self-adaption compensation controller is designed for eliminating the hysteresis nonlinearity in piezo-actuated stages. A merit of the proposed controller is that it can directly eliminate the complex hysteresis nonlinearity in piezo-actuated stages without any inverse dynamic models. To demonstrate the effectiveness of the proposed model and control methods, a set of comparative experiments are performed on piezo-actuated stages. Experimental results show that the proposed modeling and control methods have excellent performance.
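
    As a much-simplified illustration of the modeling step described above, the sketch below fits a linear-in-parameters NARMAX-style one-step-ahead predictor by gradient descent. The Pi-sigma fuzzy network, the three separate learning factors, and the rate-dependent hysteresis structure of the paper are not reproduced; the regressor choice and learning rate here are assumptions.

      import numpy as np

      def fit_narx_predictor(u, y, na=2, nb=2, lr=1e-3, epochs=50):
          """Fit y[k] ~ theta . [y[k-1..k-na], u[k..k-nb+1]] by stochastic gradient descent on the one-step error."""
          u, y = np.asarray(u, dtype=float), np.asarray(y, dtype=float)
          start = max(na, nb)
          theta = np.zeros(na + nb)
          for _ in range(epochs):
              for k in range(start, len(y)):
                  phi = np.concatenate([y[k - na:k][::-1], u[k - nb + 1:k + 1][::-1]])
                  err = y[k] - theta @ phi
                  theta += lr * err * phi      # gradient step on the squared prediction error
          return theta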

  15. Decision-making in pigeon flocks: a democratic view of leadership.

    PubMed

    Jorge, Paulo E; Marques, Paulo A M

    2012-07-15

    When travelling in groups, animals frequently have to make decisions on the direction of travel. These decisions can be based on consensus, when all individuals take part in the decision (i.e. democratic decision; social information), or leadership, when one member or a minority of members make the decision (i.e. despotic decision; personal information). Here we investigated whether decision-making on the navigation of small flocks is based on democratic or despotic decisions. Using individual and flock releases as the experimental approach, we compared the homing performances of homing pigeons that fly singly and in groups of three. Our findings show that although small groups were either governed (i.e. when individuals in the flock had age differences) or not (i.e. when individuals in the flock had the same age) by leaders, with concern to decision-making they were all ruled by democratic decisions. Moreover, the individual homing performances were not associated with leadership. Because true leaders did not assume right away the front position in the flock, we suggest that as in human groups, starting from a central position is more effective as it allows leaders to not only transmit their own information but also to average the tendencies of the other group members. Together, the results highlight the importance of democratic decisions in group decision-making.

  16. The plaque- and gingivitis-inhibiting capacity of a commercially available essential oil product. A parallel, split-mouth, single blind, randomized, placebo-controlled clinical study.

    PubMed

    Preus, Hans Ragnar; Koldsland, Odd Carsten; Aass, Anne Merete; Sandvik, Leiv; Hansen, Bjørn Frode

    2013-11-01

    Studies have reported commercially available essential oils with convincing plaque and gingivitis preventing properties. However, no tests have compared these essential oils, i.e. Listerine(®), against their true vehicle controls. To compare the plaque and gingivitis inhibiting effect of a commercially-available essential oil (Listerine(®) Total Care) to a negative (22% hydro-alcohol solution) and a positive (0.2% chlorhexidine (CHX)) control in an experimental gingivitis model. In three groups of 15 healthy volunteers, experimental gingivitis was induced and monitored over 21 days, simultaneously treated with Listerine(®) Total Care (test), 22% hydro-alcohol solution (negative control) and 0.2% chlorhexidine solution (positive control), respectively. The upper right quadrant of each individual received mouthwash only, whereas the upper left quadrant was subject to both rinses and mechanical oral hygiene. Plaque, gingivitis and side-effects were assessed at day 7, 14 and 21. After 21 days, the chlorhexidine group showed significantly lower average plaque and gingivitis scores than the Listerine(®) and alcohol groups, whereas there was little difference between the two latter. Listerine(®) Total Care had no statistically significant effect on plaque formation as compared to its vehicle control.

  17. Correcting for possible tissue distortion between provocation and assessment in skin testing: the divergent beam UVB photo-test.

    PubMed

    O'Doherty, Jim; Henricson, Joakim; Falk, Magnus; Anderson, Chris D

    2013-11-01

    In tissue viability imaging (TiVi), an assessment method for skin erythema, correct orientation of skin position from provocation to assessment optimizes data interpretation. Image processing algorithms could compensate for the effects of skin translation, torsion and rotation realigning assessment images to the position of the skin at provocation. A reference image of a divergent, UVB phototest was acquired, as well as test images at varying levels of translation, rotation and torsion. Using 12 skin markers, an algorithm was applied to restore the distorted test images to the reference image. The algorithm corrected torsion and rotation up to approximately 35 degrees. The radius of the erythemal reaction and average value of the input image closely matched that of the reference image's 'true value'. The image 'de-warping' procedure improves the robustness of the response image evaluation in a clinical research setting and opens the possibility of the correction of possibly flawed images performed away from the laboratory setting by the subject/patient themselves. This opportunity may increase the use of photo-testing and, by extension, other late response skin testing where the necessity of a return assessment visit is a disincentive to performance of the test. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
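
    A minimal sketch of marker-based "de-warping", assuming the twelve skin-marker coordinates have already been located in both the reference (provocation) image and the assessment image. scikit-image's landmark-based affine fit stands in for the paper's algorithm, whose exact transform model is not given in the abstract.

      import numpy as np
      from skimage import transform

      def dewarp_to_reference(test_image, markers_test, markers_ref):
          """Warp the assessment image so that its skin markers land on the reference-image marker positions."""
          markers_test = np.asarray(markers_test, dtype=float)   # (12, 2) arrays of (x, y) coordinates
          markers_ref = np.asarray(markers_ref, dtype=float)
          # Estimate the mapping from reference coordinates to test-image coordinates.
          tform = transform.estimate_transform("affine", markers_ref, markers_test)
          # warp() samples the input at tform(output coords), so the result is aligned to the reference frame.
          return transform.warp(test_image, tform)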

  18. Using Semantic Association to Extend and Infer Literature-Oriented Relativity Between Terms.

    PubMed

    Cheng, Liang; Li, Jie; Hu, Yang; Jiang, Yue; Liu, Yongzhuang; Chu, Yanshuo; Wang, Zhenxing; Wang, Yadong

    2015-01-01

    Related terms often appear together in the literature. Methods have been presented for weighting the relativity of pairwise terms by the literature in which they co-occur and for inferring new relationships. Terms in the literature also appear in the directed acyclic graphs of ontologies, such as Gene Ontology and Disease Ontology. Semantic associations between terms may therefore help establish relativity between terms in the literature. However, current methods do not use these associations. In this paper, an adjusted R-scaled score (ARSS) based on information content (ARSSIC) method is introduced to infer new relationships between terms. First, set-inclusion relationships between ontology terms were exploited to extend the relationships between these terms and the literature. Next, the ARSS method was presented to measure relativity between terms across ontologies according to these extended relationships. Then, the ARSSIC method, which uses ratios of the information content shared by terms' ancestors, was designed to infer new relationships between terms across ontologies. The experimental results show that ARSS identified more pairs of statistically significant terms, based on the corresponding gene sets, than other methods. The high average area under the receiver operating characteristic curve (0.9293) shows that ARSSIC achieved a high true positive rate and a low false positive rate. Data are available at http://mlg.hit.edu.cn/ARSSIC/.
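
    The information-content ingredient used by ARSSIC can be illustrated generically: the information content of an ontology term is the negative log of its annotation probability, where a term's count includes annotations to all of its descendants. The sketch below shows only this standard IC computation on a toy, tree-shaped ontology; the ARSS weighting and the paper's ancestor-sharing ratio are not reproduced.

      import math

      def subtree_count(term, direct_counts, children):
          """Annotations to a term plus all of its descendants (assumes a tree, so no double counting)."""
          return direct_counts.get(term, 0) + sum(
              subtree_count(c, direct_counts, children) for c in children.get(term, []))

      def information_content(term, direct_counts, children, total_annotations):
          """IC(t) = -log p(t), with p(t) the fraction of annotations falling under term t."""
          count = max(subtree_count(term, direct_counts, children), 1)
          return -math.log(count / total_annotations)

      # Toy example: a root with two children annotated 30 and 10 times out of 100 annotations overall.
      children = {"root": ["A", "B"], "A": [], "B": []}
      counts = {"A": 30, "B": 10, "root": 60}
      print(information_content("A", counts, children, 100))   # -log(0.3) ~ 1.20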

  19. Willingness to pay for one-stop anesthesia in pediatric day surgery

    PubMed Central

    2011-01-01

    Background This study assesses the parents' Willingness To Pay (WTP) for One Stop Anesthesia (OSA). OSA is part of a free screening procedure that determines the timing of the anesthesiological assessment. In OSA-positive patients, the preoperative assessment is carried out on the same day as the surgery. The OSA allows patients who have to undergo surgery in a pediatric day surgery to avoid accessing the pre-admission clinic. Method This is a descriptive cohort study. A sample of 106 parents were interviewed directly by means of a questionnaire. The questionnaire builds a hypothetical scenario where the interviewee has a chance to buy the OSA health service with the WTP. The WTP values are distributed in classes and are contingent to the market built in the questionnaire. The Chi Square and Cramer's V tests evaluate the WTP dependence on the parents' place of origin and occupation. Results The approximate average of the WTP classes is €87.21 per family. The Chi Square test relative to the WTP classes and the places of origin is statistically significant (p < 0.05). The Cramer's V test is 0.347 and points to a positive association between the two demographics. The Cramer's V test of the WTP classes and the types of job is 0.339 and indicates a positive association. Conclusion Nearly 90% of pediatric patients who were screened for timing the preoperative assessment are true positives to OSA. This allows doing away with the pre-hospitalization, with definite advantages for the families. This screening is a health service that families would be hypothetically willing to pay. PMID:21586162

  20. Climate modeling for Yamal territory using supercomputer atmospheric circulation model ECHAM5-wiso

    NASA Astrophysics Data System (ADS)

    Denisova, N. Y.; Gribanov, K. G.; Werner, M.; Zakharov, V. I.

    2015-11-01

    The subject of this study is how monthly mean regional averages of modeled atmospheric parameters depend on how far in the past the initial and boundary conditions are set. We used the atmospheric general circulation model ECHAM5-wiso to simulate monthly mean regional averages of climate parameters for the Yamal region with different pre-modeling periods, varying the time interval from several months to 12 years. We present the dependence of the modeled monthly mean regional averages of surface temperature, 2 m air temperature and humidity for December 2000 on the duration of pre-modeling. Comparison of these results with reanalysis data showed that the best agreement with the true parameters is reached when the pre-modeling period is approximately 10 years.

  1. MBA for the School Leader

    ERIC Educational Resources Information Center

    Davis, Andrew P.

    2012-01-01

    After five years of teaching in two San Francisco independent schools, the author returned to Stanford University to pursue an MBA and a Master's in Education. While it is true that some of the topics he studied, such as the weighted average cost of capital, have little, if any, use in the independent school world, throughout his two years at…

  2. 26 CFR 1.41-3A - Base period research expense.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 1 2014-04-01 2013-04-01 true Base period research expense. 1.41-3A Section 1... Research Credit-for Taxable Years Beginning Before January 1, 1990 § 1.41-3A Base period research expense... average qualified research expenses during the base period), the taxpayer shall be treated as— (1) Having...

  3. 26 CFR 1.41-3A - Base period research expense.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 1 2011-04-01 2009-04-01 true Base period research expense. 1.41-3A Section 1... Research Credit-for Taxable Years Beginning Before January 1, 1990 § 1.41-3A Base period research expense... average qualified research expenses during the base period), the taxpayer shall be treated as— (1) Having...

  4. 26 CFR 1.41-3A - Base period research expense.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Base period research expense. 1.41-3A Section 1... Research Credit-for Taxable Years Beginning Before January 1, 1990 § 1.41-3A Base period research expense... average qualified research expenses during the base period), the taxpayer shall be treated as— (1) Having...

  5. 21 CFR 176.180 - Components of paper and paperboard in contact with dry food.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...,1,3,3-Tetramethylbutyl)phenyl]-omega hydroxypoly(oxyethylene) mixture of dihydrogen phosphate and...) content averaging 6-9 or 40 moles. α-[p-(1,1,3,3-Tetramethylbutyl)phenyl or p-nonylphenyl]-omega... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Components of paper and paperboard in contact with...

  6. 26 CFR 16A.126-1 - Certain cost-sharing payments-in general.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 14 2014-04-01 2013-04-01 true Certain cost-sharing payments-in general. 16A... CERTAIN CONSERVATION COST-SHARING PAYMENTS § 16A.126-1 Certain cost-sharing payments—in general. (a... average annual income derived from the affected property prior to receipt of the improvement or an amount...

  7. Accurate indel prediction using paired-end short reads

    PubMed Central

    2013-01-01

    Background One of the major open challenges in next generation sequencing (NGS) is the accurate identification of structural variants such as insertions and deletions (indels). Current methods for indel calling assign scores to different types of evidence or counter-evidence for the presence of an indel, such as the number of split read alignments spanning the boundaries of a deletion candidate or reads that map within a putative deletion. Candidates with a score above a manually defined threshold are then predicted to be true indels. As a consequence, structural variants detected in this manner contain many false positives. Results Here, we present a machine learning based method which is able to discover and distinguish true from false indel candidates in order to reduce the false positive rate. Our method identifies indel candidates using a discriminative classifier based on features of split read alignment profiles and trained on true and false indel candidates that were validated by Sanger sequencing. We demonstrate the usefulness of our method with paired-end Illumina reads from 80 genomes of the first phase of the 1001 Genomes Project ( http://www.1001genomes.org) in Arabidopsis thaliana. Conclusion In this work we show that indel classification is a necessary step to reduce the number of false positive candidates. We demonstrate that missing classification may lead to spurious biological interpretations. The software is available at: http://agkb.is.tuebingen.mpg.de/Forschung/SV-M/. PMID:23442375
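
    The classification stage described above can be sketched as follows: a discriminative classifier is trained on features of split-read alignment profiles to separate true from false indel candidates. The snippet uses scikit-learn's SVC as a stand-in classifier and entirely synthetic features; real labels would come from the Sanger-validated candidates.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      # Minimal sketch of the classification step; feature values are synthetic.
      rng = np.random.default_rng(0)

      # columns: [split reads spanning the breakpoint, reads mapping inside the
      #           putative deletion, mean mapping quality, candidate length]
      X_true = rng.normal([12, 1, 55, 40], [3, 1, 5, 20], size=(50, 4))
      X_false = rng.normal([3, 8, 35, 40], [2, 3, 8, 20], size=(50, 4))
      X = np.vstack([X_true, X_false])
      y = np.array([1] * 50 + [0] * 50)       # 1 = true indel, 0 = false candidate

      clf = SVC(kernel="rbf", gamma="scale")  # stand-in for the paper's classifier
      scores = cross_val_score(clf, X, y, cv=5)
      print("cross-validated accuracy:", scores.mean().round(3))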

  8. Using a Regression Method for Estimating Performance in a Rapid Serial Visual Presentation Target-Detection Task

    DTIC Science & Technology

    2017-12-01

    values designating each stimulus as a target ( true ) or nontarget (false). Both stim_time and stim_label should have length equal to the number of...position unless so designated by other authorized documents. Citation of manufacturer’s or trade names does not constitute an official endorsement or...depend strongly on the true values of hit rate and false-alarm rate. Based on its better estimation of hit rate and false-alarm rate, the regression

  9. Oyster parasites Bonamia ostreae and B. exitiosa co-occur in Galicia (NW Spain): spatial distribution and infection dynamics.

    PubMed

    Ramilo, Andrea; González, Mar; Carballal, María J; Darriba, Susana; Abollo, Elvira; Villalba, Antonio

    2014-07-24

    Bonamiosis constrains the flat oyster industry worldwide. The protistan species Bonamia ostreae had been considered solely responsible for this disease in Europe, but the report of B. exitiosa infecting Ostrea edulis 5 yr ago in Galicia (NW Spain), and subsequently in other European countries, raised the question of the relevance of each species in bonamiosis. The spatial distribution of B. exitiosa and B. ostreae in Galicia was addressed by sampling 7 natural O. edulis beds and 3 culture raft areas, up to 3 times in the period 2009 to 2010. B. ostreae infected flat oysters in every natural bed and every raft culture area. True B. exitiosa infections (histological diagnosis) were detected in every raft culture area but only in 2 natural beds, i.e. in 4 rías. PCR-positive results for B. exitiosa were recorded in 4 out of 5 beds where true infections were not found, thus the occurrence of B. exitiosa in those 4 beds cannot be ruled out. Additionally, 4 cohorts of hatchery-produced oyster spat were transferred to a raft to analyse Bonamia spp. infection dynamics through oyster on-growing. The highest percentages of oysters PCR-positive for both Bonamia spp. were recorded in the first months of on-growing; other peaks of PCR-positive diagnosis were successively lower. Differences in the percentage of PCR-positive cases and in the prevalence of true infection between B. exitiosa and B. ostreae through on-growing were not significant. Our results support that B. exitiosa is adapted to infect O. edulis in the Galician marine ecosystem.

  10. 41 CFR Appendix D to Part 60 - 741-Guidelines Regarding Positions Engaged in Carrying Out a Contract

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true 741-Guidelines Regarding... Management Other Provisions Relating to Public Contracts OFFICE OF FEDERAL CONTRACT COMPLIANCE PROGRAMS... performed and their relationship to contract performance. A position is included if its duties included work...

  11. 22 CFR 92.27 - Affiant's allegations in affidavit.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... allegations: (1) Material facts within the personal knowledge of the affiant should be alleged directly and positively. Facts are not to be inferred where the affiant has it in his power to state them positively and... rather than on facts within his personal knowledge, he should aver that such matters are true to the best...

  12. The Glass Ceiling: Progress and Persistent Challenges

    ERIC Educational Resources Information Center

    McLlwain, Wendy M.

    2012-01-01

    It has been written that since 2001, there has not been any significant progress and the glass ceiling is still intact. Women are still underrepresented in top positions (Anonymous, 2004). If this is true, the glass ceiling presents a major barrier between women and their desire to advance into executive or senior management positions. In addition…

  13. We Need to Talk about Well-Being

    ERIC Educational Resources Information Center

    Cigman, Ruth

    2012-01-01

    In this paper, I explore the enhancement agenda, which aims to enhance well-being nationwide and particularly among young people. Although it is said by its proponents to embody the ideas of Aristotle, I argue that its true theoretical underpinning is the polarised thinking of positive psychology. The sharp distinction between positive and…

  14. The Cause of Category-Based Distortions in Spatial Memory: A Distribution Analysis

    ERIC Educational Resources Information Center

    Sampaio, Cristina; Wang, Ranxiao Frances

    2017-01-01

    Recall of remembered locations reliably reflects a compromise between a target's true position and its region's prototypical position. The effect is quite robust, and a standard interpretation for these data is that the metric and categorical codings blend in a Bayesian combinatory fashion. However, there has been no direct experimental evidence…

  15. Put on Your Dancing Shoes! Choreographing Positive Partnerships with Parents of Gifted Children.

    ERIC Educational Resources Information Center

    Riley, Tracy L.

    1999-01-01

    Uses the analogy of dance to discuss development of positive relationships between teachers and parents of gifted children. Emphasis is on providing parents with necessary information, overcoming barriers to true partnerships, and rules of thumb for implementing the partnership on an ongoing basis, such as following the golden rule in…

  16. 41 CFR Appendix D to Part 60 - 741-Guidelines Regarding Positions Engaged in Carrying Out a Contract

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 41 Public Contracts and Property Management 1 2012-07-01 2009-07-01 true 741-Guidelines Regarding... Management Other Provisions Relating to Public Contracts OFFICE OF FEDERAL CONTRACT COMPLIANCE PROGRAMS... performed and their relationship to contract performance. A position is included if its duties included work...

  17. 41 CFR Appendix D to Part 60 - 741-Guidelines Regarding Positions Engaged in Carrying Out a Contract

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 41 Public Contracts and Property Management 1 2011-07-01 2009-07-01 true 741-Guidelines Regarding... Management Other Provisions Relating to Public Contracts OFFICE OF FEDERAL CONTRACT COMPLIANCE PROGRAMS... performed and their relationship to contract performance. A position is included if its duties included work...

  18. True Tension Traverse: What Research Shows about the Antidepressant Effects of Challenge Course Experiences.

    ERIC Educational Resources Information Center

    Bunting, Camille

    2000-01-01

    A study of five undergraduate physical education classes explored the influence on positive and negative affect of different types of physical activity. Results indicated that running, skiing, and challenge course activities, especially the 14-foot wall, increased positive affect more than other activities. Implications for physical education…

  19. Super-Arrhenius diffusion in an undercooled binary Lennard-Jones liquid results from a quantifiable correlation effect.

    PubMed

    de Souza, Vanessa K; Wales, David J

    2006-02-10

    On short time scales an underlying Arrhenius temperature dependence of the diffusion constant can be extracted from the fragile, super-Arrhenius diffusion of a binary Lennard-Jones mixture. This Arrhenius diffusion is related to the true super-Arrhenius behavior by a factor that depends on the average angle between steps in successive time windows. The correction factor accounts for the fact that on average, successive displacements are negatively correlated, and this effect can therefore be linked directly with the higher apparent activation energy for diffusion at low temperature.
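
    The correction factor discussed above depends on the average angle between particle displacements in successive time windows. One minimal way to estimate that quantity from a trajectory is sketched below with a synthetic random walk; for an ideal random walk the mean cosine is near zero, whereas in the undercooled liquid it is negative.

      import numpy as np

      # Sketch: mean cosine of the angle between displacements in successive
      # coarse-grained time windows, computed here for a synthetic 3-D random walk.
      rng = np.random.default_rng(1)
      positions = np.cumsum(rng.normal(size=(10_000, 3)), axis=0)

      window = 10                                 # coarse-graining time window
      coarse = positions[::window]                # positions at window boundaries
      steps = np.diff(coarse, axis=0)             # displacement per window

      unit = steps / np.linalg.norm(steps, axis=1, keepdims=True)
      cos_theta = np.sum(unit[:-1] * unit[1:], axis=1)   # cos(angle) of successive steps

      # Near 0 for this uncorrelated walk; negative for anti-correlated dynamics.
      print("mean cos(theta) between successive windows:", cos_theta.mean().round(4))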

  20. Geometrical correction factors for heat flux meters

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.; Papell, S. S.

    1974-01-01

    General formulas are derived for determining gage averaging errors of strip-type heat flux meters used in the measurement of one-dimensional heat flux distributions. The local averaging error e(x) is defined as the difference between the measured value of the heat flux and the local value which occurs at the center of the gage. In terms of e(x), a correction procedure is presented which allows a better estimate for the true value of the local heat flux. For many practical problems, it is possible to use relatively large gages to obtain acceptable heat flux measurements.
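
    A simple numerical illustration of the gage-averaging error e(x) defined above: a strip gage of width w reports the mean flux over its face, while the local value is the flux at the gage centre. The flux profile and gage width below are arbitrary choices, not values from the report.

      import numpy as np

      # e(x) = (gage reading) - (local flux at the gage centre), for an assumed
      # one-dimensional heat flux profile q(x) and a strip gage of width w.
      def q(x):
          return 100.0 * np.exp(-x**2 / 0.02)     # hypothetical flux profile

      def gage_reading(x_center, width, n=2001):
          xs = np.linspace(x_center - width / 2, x_center + width / 2, n)
          return np.trapz(q(xs), xs) / width      # average flux seen by the gage

      x0, w = 0.0, 0.1
      measured = gage_reading(x0, w)
      local = q(x0)
      error = measured - local                    # the local averaging error e(x)
      print(f"measured = {measured:.2f}, local = {local:.2f}, e(x) = {error:.2f}")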

  1. Effect of gage size on the measurement of local heat flux. [formulas for determining gage averaging errors

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.; Papell, S. S.

    1973-01-01

    General formulas are derived for determining gage averaging errors of strip-type heat flux meters used in the measurement of one-dimensional heat flux distributions. In addition, a correction procedure is presented which allows a better estimate for the true value of the local heat flux. As an example of the technique, the formulas are applied to the cases of heat transfer to air slot jets impinging on flat and concave surfaces. It is shown that for many practical problems, the use of very small heat flux gages is often unnecessary.

  2. Combining multiple positive training sets to generate confidence scores for protein-protein interactions.

    PubMed

    Yu, Jingkai; Finley, Russell L

    2009-01-01

    High-throughput experimental and computational methods are generating a wealth of protein-protein interaction data for a variety of organisms. However, data produced by current state-of-the-art methods include many false positives, which can hinder the analyses needed to derive biological insights. One way to address this problem is to assign confidence scores that reflect the reliability and biological significance of each interaction. Most previously described scoring methods use a set of likely true positives to train a model to score all interactions in a dataset. A single positive training set, however, may be biased and not representative of true interaction space. We demonstrate a method to score protein interactions by utilizing multiple independent sets of training positives to reduce the potential bias inherent in using a single training set. We used a set of benchmark yeast protein interactions to show that our approach outperforms other scoring methods. Our approach can also score interactions across data types, which makes it more widely applicable than many previously proposed methods. We applied the method to protein interaction data from both Drosophila melanogaster and Homo sapiens. Independent evaluations show that the resulting confidence scores accurately reflect the biological significance of the interactions.
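
    A rough sketch of the scoring idea described above: one model is trained per independent positive training set (with shared random negatives), and the per-model scores are averaged into a single confidence score. The features, set sizes, and choice of logistic regression are illustrative assumptions, not the authors' exact procedure.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      X_all = rng.normal(size=(1000, 5))          # synthetic features for candidate interactions

      # three independent positive training sets plus shared random negatives
      # (negatives may overlap the positives in this toy example)
      positive_sets = [rng.choice(1000, 80, replace=False) for _ in range(3)]
      negatives = rng.choice(1000, 200, replace=False)

      scores = np.zeros((len(positive_sets), 1000))
      for k, pos in enumerate(positive_sets):
          idx = np.concatenate([pos, negatives])
          y = np.array([1] * len(pos) + [0] * len(negatives))
          model = LogisticRegression(max_iter=1000).fit(X_all[idx], y)
          scores[k] = model.predict_proba(X_all)[:, 1]

      confidence = scores.mean(axis=0)            # averaged over the independent sets
      print("top-scoring candidate:", int(confidence.argmax()))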

  3. Tropical Cyclone Activity in the North Atlantic Basin During the Weather Satellite Era, 1960-2014

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2016-01-01

    This Technical Publication (TP) represents an extension of previous work concerning tropical cyclone activity in the North Atlantic basin during the weather satellite era, 1960-2014, in particular, that of an article published in The Journal of the Alabama Academy of Science. With the launch of the TIROS-1 polar-orbiting satellite in April 1960, a new era of global weather observation and monitoring began. Prior to this, the conditions of the North Atlantic basin were determined only from ship reports, island reports, and long-range aircraft reconnaissance. Consequently, storms that formed far from land, away from shipping lanes, and beyond the reach of aircraft possibly could be missed altogether, thereby leading to an underestimate of the true number of tropical cyclones forming in the basin. Additionally, new analysis techniques have come into use which have sometimes led to the inclusion of one or more storms at the end of a nominal hurricane season that otherwise would not have been included. Examined in this TP are the yearly (or seasonal) and 10-year moving average values of the (1) first storm day (FSD), last storm day (LSD), and length of season (LOS); (2) frequencies of tropical cyclones (by class); (3) average peak 1-minute sustained wind speed (PWS) and average lowest pressure (LP); (4) average genesis location in terms of north latitudinal and west longitudinal positions; (5) sum and average power dissipation index (PDI); (6) sum and average accumulated cyclone energy (ACE); (7) sum and average number of storm days (NSD); (8) sum of the number of hurricane days (NHD) and number of major hurricane days (NMHD); (9) net tropical cyclone activity index (NTCA); (10) largest individual storm (LIS) PWS, LP, PDI, ACE, NSD, NHD, NMHD; and (11) number of category 4 and 5 hurricanes (N4/5). Also examined are the December-May (D-M) and June-November (J-N) averages and 10-year moving average values of several climatic factors, including the (1) oceanic Niño index (ONI); (2) Atlantic multi-decadal oscillation (AMO) index; (3) Atlantic meridional mode (AMM) index; (4) global land-ocean temperature index (GLOTI); and (5) quasi-biennial oscillation (QBO) index. Lastly, the associational aspects (using both linear and nonparametric statistical tests) between selected tropical cyclone parameters and the climatic factors are examined based on their 10-year moving average trend values.

  4. Pressure ulcer knowledge among nurses in a Brazilian university hospital.

    PubMed

    Chianca, Tânia Couto Machado; Rezende, Jomara Figueiredo Pinto; Borges, Eline Lima; Nogueira, Vera Lucia; Caliri, Maria Helena Larcher

    2010-10-01

     To facilitate the implementation of evidence-based skin and pressure ulcer (PU) care practices and related staff education programs in a university hospital in Brazil, a cross-sectional study was conducted to evaluate nurses' knowledge about PU prevention, wound assessment, and staging. Of the 141 baccalaureate nurses (BSN) employed by the hospital at the time of the study, 106 consented to participate. Using a Portuguese version of Pieper's Pressure Ulcer Knowledge Test (PUKT), participants were asked to indicate whether 33 statements about PU prevention and eight about PU assessment and staging were true or false. For the 33 prevention statements, the average number answered correctly was 26.07 (SD 4.93) and for the eight assessment statements the average was 4.59 (SD 1.62). Nurses working on inpatient clinical nursing units had significantly better scores (P = 0.000). Years of nursing experience had a weak and negative correlation with correct PUKT scores (r = -0.21, P = 0.033) as did years of experience working in the university hospital (r = -.179, P <.071). Incorrect responses were most common for statements related to patient positioning, massage, PU assessment, and staging definitions. The results of this study confirm that nurses have an overall understanding of PU prevention and assessment principles but important knowledge deficits exist. Focused continuing education efforts are needed to facilitate the implementation of evidence-based care.

  5. Visualizing disease associations: graphic analysis of frequency distributions as a function of age using moving average plots (MAP) with application to Alzheimer's and Parkinson's disease.

    PubMed

    Payami, Haydeh; Kay, Denise M; Zabetian, Cyrus P; Schellenberg, Gerard D; Factor, Stewart A; McCulloch, Colin C

    2010-01-01

    Age-related variation in marker frequency can be a confounder in association studies, leading to both false-positive and false-negative findings and subsequently to inconsistent reproducibility. We have developed a simple method, based on a novel extension of moving average plots (MAP), which allows investigators to inspect the frequency data for hidden age-related variations. MAP uses the standard case-control association data and generates a birds-eye view of the frequency distributions across the age spectrum; a picture in which one can see if, how, and when the marker frequencies in cases differ from that in controls. The marker can be specified as an allele, genotype, haplotype, or environmental factor; and age can be age-at-onset, age when subject was last known to be unaffected, or duration of exposure. Signature patterns that emerge can help distinguish true disease associations from spurious associations due to age effects, age-varying associations from associations that are uniform across all ages, and associations with risk from associations with age-at-onset. Utility of MAP is illustrated by application to genetic and epidemiological association data for Alzheimer's and Parkinson's disease. MAP is intended as a descriptive method, to complement standard statistical techniques. Although originally developed for age patterns, MAP is equally useful for visualizing any quantitative trait.
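
    A bare-bones version of the MAP construction described above: the marker frequency in cases and in controls is computed as a moving average across age, and the two resulting series are what would be plotted. The data frame is simulated; the column names and window size are arbitrary.

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(3)
      n = 2000
      age = rng.integers(40, 90, size=n)
      case = rng.integers(0, 2, size=n)
      # simulate an age-varying marker frequency in cases only
      marker = rng.random(n) < np.where(case == 1, 0.2 + 0.004 * (age - 40), 0.25)

      df = pd.DataFrame({"age": age, "case": case, "marker": marker.astype(int)})

      def moving_average_frequency(sub, window=10):
          # moving average of the marker indicator over subjects sorted by age
          sub = sub.sort_values("age")
          return sub.set_index("age")["marker"].rolling(window, center=True).mean()

      freq_cases = moving_average_frequency(df[df.case == 1])
      freq_controls = moving_average_frequency(df[df.case == 0])
      print(freq_cases.dropna().head())           # these two series are what MAP plots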

  6. Fuzzy Logic Particle Tracking

    NASA Technical Reports Server (NTRS)

    2005-01-01

    A new all-electronic Particle Image Velocimetry technique that can efficiently map high speed gas flows has been developed in-house at the NASA Lewis Research Center. Particle Image Velocimetry is an optical technique for measuring the instantaneous two component velocity field across a planar region of a seeded flow field. A pulsed laser light sheet is used to illuminate the seed particles entrained in the flow field at two instances in time. One or more charged coupled device (CCD) cameras can be used to record the instantaneous positions of particles. Using the time between light sheet pulses and determining either the individual particle displacements or the average displacement of particles over a small subregion of the recorded image enables the calculation of the fluid velocity. Fuzzy logic minimizes the required operator intervention in identifying particles and computing velocity. Using two cameras that have the same view of the illumination plane yields two single exposure image frames. Two competing techniques that yield unambiguous velocity vector direction information have been widely used for reducing the single-exposure, multiple image frame data: (1) cross-correlation and (2) particle tracking. Correlation techniques yield averaged velocity estimates over subregions of the flow, whereas particle tracking techniques give individual particle velocity estimates. For the correlation technique, the correlation peak corresponding to the average displacement of particles across the subregion must be identified. Noise on the images and particle dropout result in misidentification of the true correlation peak. The subsequent velocity vector maps contain spurious vectors where the displacement peaks have been improperly identified. Typically these spurious vectors are replaced by a weighted average of the neighboring vectors, thereby decreasing the independence of the measurements. In this work, fuzzy logic techniques are used to determine the true correlation displacement peak even when it is not the maximum peak, hence maximizing the information recovery from the correlation operation, maintaining the number of independent measurements, and minimizing the number of spurious velocity vectors. Correlation peaks are correctly identified in both high and low seed density cases. The correlation velocity vector map can then be used as a guide for the particle-tracking operation. Again fuzzy logic techniques are used, this time to identify the correct particle image pairings between exposures to determine particle displacements, and thus the velocity. Combining these two techniques makes use of the higher spatial resolution available from the particle tracking. Particle tracking alone may not be possible in the high seed density images typically required for achieving good results from the correlation technique. This two-staged velocimetric technique can measure particle velocities with high spatial resolution over a broad range of seeding densities.
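
    The correlation step described above can be illustrated with a minimal example: the average particle displacement over an interrogation subregion is read off the peak of the cross-correlation between the two exposures. The synthetic frames below are noise-free, so the true peak is also the maximum peak; the fuzzy-logic machinery in the article addresses the harder, noisy cases where it is not.

      import numpy as np
      from scipy.signal import correlate2d

      rng = np.random.default_rng(4)
      frame1 = np.zeros((64, 64))
      rows, cols = rng.integers(5, 55, size=(2, 40))
      frame1[rows, cols] = 1.0                     # seed particles at time t0

      shift = (3, 5)                               # true displacement in pixels
      frame2 = np.roll(frame1, shift, axis=(0, 1)) # same particles at t0 + dt

      corr = correlate2d(frame2, frame1, mode="full")
      peak = np.unravel_index(np.argmax(corr), corr.shape)
      displacement = (peak[0] - (frame1.shape[0] - 1),
                      peak[1] - (frame1.shape[1] - 1))
      print("estimated displacement:", displacement)   # expected: (3, 5)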

  7. [Analysis of false-positive reaction for bacterial detection of blood samples with the automated microbial detection system BacT/ALERT 3D].

    PubMed

    Zhu, Li-Wei; Yang, Xue-Mei; Xu, Xiao-Qin; Xu, Jian; Lu, Huang-Jun; Yan, Li-Xing

    2008-10-01

    This study was aimed to analyze the results of false positive reaction in bacterial detection of blood samples with BacT/ALERT 3D system, to evaluate the specificity of this system, and to decrease the false positive reaction. Each reaction flasks in past five years were processed for bacteria isolation and identification. When the initial cultures were positive, the remaining samples and the corresponding units were recultured if still available. 11395 blood samples were detected. It is worthy of note that the incubator temperature should be stabilized, avoiding fluctuation; when the cultures were alarmed, the reaction flasks showed be kept some hours for further incubation so as to trace a sharply increasing signal to support the judgement of true bacterial growth. The results indicated that 122 samples (1.07%) wee positive at initial culture, out of them 107 samples (88.7%) were found bacterial, and 15 samples (12.3%) were found nothing. The detection curves of positive samples resulted from bacterial growth showed ascent. In conclusion, maintenance of temperature stability and avoidance of temperature fluctuation in incubator could decrease the occurrence of false-positive reaction in detection process. The reaction flasks with positive results at initial culture should be recultured, and whether existence of a sharply ascending logarilhimic growth phase in bacterial growth curve should be further detected, which are helpful to distinguish false-positive reactions from true positive, and thus increase the specificity of the BacT/ALERT system.

  8. Size Matters: FTIR Spectral Analysis of Apollo Regolith Samples Exhibits Grain Size Dependence.

    NASA Astrophysics Data System (ADS)

    Martin, Dayl; Joy, Katherine; Pernet-Fisher, John; Wogelius, Roy; Morlok, Andreas; Hiesinger, Harald

    2017-04-01

    The Mercury Thermal Infrared Spectrometer (MERTIS) on the upcoming BepiColombo mission is designed to analyse the surface of Mercury in thermal infrared wavelengths (7-14 μm) to investigate the physical properties of the surface materials [1]. Laboratory analyses of analogue materials are useful for investigating how various sample properties alter the resulting infrared spectrum. Laboratory FTIR analysis of Apollo fine (<1mm) soil samples 14259,672, 15401,147, and 67481,96 have provided an insight into how grain size, composition, maturity (i.e., exposure to space weathering processes), and proportion of glassy material affect their average infrared spectra. Each of these samples was analysed as a bulk sample and five size fractions: <25, 25-63, 63-125, 125-250, and <250 μm. Sample 14259,672 is a highly mature highlands regolith with a large proportion of agglutinates [2]. The high agglutinate content (>60%) causes a 'flattening' of the spectrum, with reduced reflectance in the Reststrahlen Band region (RB) as much as 30% in comparison to samples that are dominated by a high proportion of crystalline material. Apollo 15401,147 is an immature regolith with a high proportion of volcanic glass pyroclastic beads [2]. The high mafic mineral content results in a systematic shift in the Christiansen Feature (CF - the point of lowest reflectance) to longer wavelength: 8.6 μm. The glass beads dominate the spectrum, displaying a broad peak around the main Si-O stretch band (at 10.8 μm). As such, individual mineral components of this sample cannot be resolved from the average spectrum alone. Apollo 67481,96 is a sub-mature regolith composed dominantly of anorthite plagioclase [2]. The CF position of the average spectrum is shifted to shorter wavelengths (8.2 μm) due to the higher proportion of felsic minerals. Its average spectrum is dominated by anorthite reflectance bands at 8.7, 9.1, 9.8, and 10.8 μm. The average reflectance is greater than the other samples due to a lower proportion of glassy material. In each soil, the smallest fractions (0-25 and 25-63 μm) have CF positions 0.1-0.4 μm higher than the larger grain sizes. Also, the bulk-sample spectra mostly closely resemble the 0-25 μm sieved size fraction spectrum, indicating that this size fraction of each sample dominates the bulk spectrum regardless of other physical properties. This has implications for surface analyses of other Solar System bodies where some mineral phases or components could be concentrated in a particular size fraction. For example, the anorthite grains in 67481,96 are dominantly >25 μm in size and therefore may not contribute proportionally to the bulk average spectrum (compared to the <25 μm fraction). The resulting bulk spectrum of 67481,96 has a CF position 0.2 μm higher than all size fractions >25 microns and therefore does not represent a true average composition of the sample. Further investigation of how grain size and composition alters the average spectrum is required to fully understand infrared spectra of planetary surfaces. [1] - Hiesinger H., Helbert J., and MERTIS Co-I Team. (2010). The Mercury Radiometer and Thermal Infrared Spectrometer (MERTIS) for the BepiColombo Mission. Planetary and Space Science. 58, 144-165. [2] - NASA Lunar Sample Compendium. https://curator.jsc.nasa.gov/lunar/lsc/

  9. Averaging in SU(2) open quantum random walk

    NASA Astrophysics Data System (ADS)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  10. Probability of success for phase III after exploratory biomarker analysis in phase II.

    PubMed

    Götte, Heiko; Kirchner, Marietta; Sailer, Martin Oliver

    2017-05-01

    The probability of success or average power describes the potential of a future trial by weighting the power with a probability distribution of the treatment effect. The treatment effect estimate from a previous trial can be used to define such a distribution. During the development of targeted therapies, it is common practice to look for predictive biomarkers. The consequence is that the trial population for phase III is often selected on the basis of the most extreme result from phase II biomarker subgroup analyses. In such a case, there is a tendency to overestimate the treatment effect. We investigate whether the overestimation of the treatment effect estimate from phase II is transformed into a positive bias for the probability of success for phase III. We simulate a phase II/III development program for targeted therapies. This simulation allows us to investigate selection probabilities and to compare the estimated with the true probability of success. We consider the estimated probability of success with and without subgroup selection. Depending on the true treatment effects, there is a negative bias without selection because of the weighting by the phase II distribution. In comparison, selection increases the estimated probability of success. Thus, selection does not lead to a bias in probability of success if underestimation due to the phase II distribution and overestimation due to selection cancel each other out. We recommend performing similar simulations in practice to get the necessary information about the risks and chances associated with such subgroup selection designs. Copyright © 2017 John Wiley & Sons, Ltd.
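
    The probability-of-success (assurance) calculation described in the first sentence can be sketched as follows for a two-arm trial with a normally distributed endpoint: the power of the planned phase III z-test is averaged over a normal distribution for the true effect centred on the phase II estimate. All numbers below are illustrative.

      import numpy as np
      from scipy.stats import norm

      delta_hat, se_phase2 = 0.30, 0.12   # phase II effect estimate and its standard error
      n_per_arm, sigma, alpha = 200, 1.0, 0.025

      rng = np.random.default_rng(5)
      delta_draws = rng.normal(delta_hat, se_phase2, size=100_000)

      z_alpha = norm.ppf(1 - alpha)
      se_phase3 = sigma * np.sqrt(2 / n_per_arm)
      power = norm.cdf(delta_draws / se_phase3 - z_alpha)   # power given each true effect

      print("conditional power at delta_hat:",
            norm.cdf(delta_hat / se_phase3 - z_alpha).round(3))
      print("probability of success (assurance):", power.mean().round(3))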

  11. Retinal Microaneurysms Detection Using Gradient Vector Analysis and Class Imbalance Classification.

    PubMed

    Dai, Baisheng; Wu, Xiangqian; Bu, Wei

    2016-01-01

    Retinal microaneurysms (MAs) are the earliest clinically observable lesions of diabetic retinopathy. Reliable automated MAs detection is thus critical for early diagnosis of diabetic retinopathy. This paper proposes a novel method for the automated MAs detection in color fundus images based on gradient vector analysis and class imbalance classification, which is composed of two stages, i.e. candidate MAs extraction and classification. In the first stage, a candidate MAs extraction algorithm is devised by analyzing the gradient field of the image, in which a multi-scale log condition number map is computed based on the gradient vectors for vessel removal, and then the candidate MAs are localized according to the second order directional derivatives computed in different directions. Due to the complexity of fundus image, besides a small number of true MAs, there are also a large amount of non-MAs in the extracted candidates. Classifying the true MAs and the non-MAs is an extremely class imbalanced classification problem. Therefore, in the second stage, several types of features including geometry, contrast, intensity, edge, texture, region descriptors and other features are extracted from the candidate MAs and a class imbalance classifier, i.e., RUSBoost, is trained for the MAs classification. With the Retinopathy Online Challenge (ROC) criterion, the proposed method achieves an average sensitivity of 0.433 at 1/8, 1/4, 1/2, 1, 2, 4 and 8 false positives per image on the ROC database, which is comparable with the state-of-the-art approaches, and 0.321 on the DiaRetDB1 V2.1 database, which outperforms the state-of-the-art approaches.

  12. The Kepler Follow-up Observation Program. I. A Catalog of Companions to Kepler Stars from High-Resolution Imaging

    NASA Astrophysics Data System (ADS)

    Furlan, E.; Ciardi, D. R.; Everett, M. E.; Saylors, M.; Teske, J. K.; Horch, E. P.; Howell, S. B.; van Belle, G. T.; Hirsch, L. A.; Gautier, T. N., III; Adams, E. R.; Barrado, D.; Cartier, K. M. S.; Dressing, C. D.; Dupree, A. K.; Gilliland, R. L.; Lillo-Box, J.; Lucas, P. W.; Wang, J.

    2017-02-01

    We present results from high-resolution, optical to near-IR imaging of host stars of Kepler Objects of Interest (KOIs), identified in the original Kepler field. Part of the data were obtained under the Kepler imaging follow-up observation program over six years (2009-2015). Almost 90% of stars that are hosts to planet candidates or confirmed planets were observed. We combine measurements of companions to KOI host stars from different bands to create a comprehensive catalog of projected separations, position angles, and magnitude differences for all detected companion stars (some of which may not be bound). Our compilation includes 2297 companions around 1903 primary stars. From high-resolution imaging, we find that ~10% (~30%) of the observed stars have at least one companion detected within 1″ (4″). The true fraction of systems with close (≲4″) companions is larger than the observed one due to the limited sensitivities of the imaging data. We derive correction factors for planet radii caused by the dilution of the transit depth: assuming that planets orbit the primary stars or the brightest companion stars, the average correction factors are 1.06 and 3.09, respectively. The true effect of transit dilution lies in between these two cases and varies with each system. Applying these factors to planet radii decreases the number of KOI planets with radii smaller than 2 R⊕ by ~2%-23% and thus affects planet occurrence rates. This effect will also be important for the yield of small planets from future transit missions such as TESS.
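
    For the case in which the planet orbits the primary star, the dilution correction mentioned above follows from simple flux arithmetic: a companion fainter by dm magnitudes adds a flux 10^(-0.4 dm) relative to the primary, which makes the transit look shallower, so the true planet radius exceeds the apparent one by sqrt(1 + 10^(-0.4 dm)). The sketch below evaluates this factor; the case in which the planet orbits the fainter companion also involves the stellar radii and is not reproduced here.

      import numpy as np

      def radius_correction_primary(delta_mag):
          """Radius correction factor when the planet orbits the primary star."""
          flux_ratio = 10.0 ** (-0.4 * delta_mag)   # companion flux / primary flux
          return np.sqrt(1.0 + flux_ratio)

      for dm in [0.0, 1.0, 3.0, 5.0]:
          print(f"delta_mag = {dm:.1f}  ->  planet radius correction "
                f"x{radius_correction_primary(dm):.3f}")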

  13. Poster — Thur Eve — 55: An automated XML technique for isocentre verification on the Varian TrueBeam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asiev, Krum; Mullins, Joel; DeBlois, François

    2014-08-15

    Isocentre verification tests, such as the Winston-Lutz (WL) test, have gained popularity in recent years as techniques such as stereotactic radiosurgery/radiotherapy (SRS/SRT) treatments are more commonly performed on radiotherapy linacs. These highly conformal treatments require frequent monitoring of the geometrical accuracy of the isocentre to ensure proper radiation delivery. At our clinic, the WL test is performed by acquiring with the EPID a collection of 8 images of a WL phantom fixed on the couch for various couch/gantry angles. This set of images is later analyzed to determine the isocentre size. The current work addresses the acquisition process. A manual WL test acquisition performed by an experienced physicist takes on average 25 minutes and is prone to user manipulation errors. We have automated this acquisition on a Varian TrueBeam STx linac (Varian, Palo Alto, USA). The Varian developer mode allows the execution of custom-made XML script files to control all aspects of the linac operation. We have created an XML-WL script that cycles through each couch/gantry combination, taking an EPID image at each position. This automated acquisition is done in less than 4 minutes. The reproducibility of the method was verified by repeating the execution of the XML file 5 times. The analysis of the images showed variations of the isocentre size of less than 0.1 mm along the X, Y and Z axes, which compares favorably to a manual acquisition, for which we typically observe variations up to 0.5 mm.

  14. A Method to Estimate the Size and Characteristics of HIV-positive Populations Using an Individual-based Stochastic Simulation Model.

    PubMed

    Nakagawa, Fumiyo; van Sighem, Ard; Thiebaut, Rodolphe; Smith, Colette; Ratmann, Oliver; Cambiano, Valentina; Albert, Jan; Amato-Gauci, Andrew; Bezemer, Daniela; Campbell, Colin; Commenges, Daniel; Donoghoe, Martin; Ford, Deborah; Kouyos, Roger; Lodwick, Rebecca; Lundgren, Jens; Pantazis, Nikos; Pharris, Anastasia; Quinten, Chantal; Thorne, Claire; Touloumi, Giota; Delpech, Valerie; Phillips, Andrew

    2016-03-01

    It is important not only to collect epidemiologic data on HIV but also to fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance under different levels of data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900-45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160-17,350) were undiagnosed. There were an estimated 3,210 (1,730-5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population are thought to have viral load <500 copies/ml. In the pseudo-epidemic example, the greater the data availability for calibrating the model, the narrower the plausibility ranges of the HIV estimates and the closer they are to the true number. We demonstrate that our method can be applied to settings with less data; however, plausibility ranges for estimates will be wider to reflect the greater uncertainty of the data used to fit the model.
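
    The calibration approach mentioned above (approximate Bayesian computation) can be illustrated with a deliberately tiny rejection-sampling example; the "epidemic model", the observed summary statistic, and the prior below are hypothetical stand-ins for the individual-based simulation used in the study.

      import numpy as np

      # Toy ABC rejection sketch: estimate a constant annual infection count from a
      # single observed cumulative summary statistic. Everything here is made up.
      rng = np.random.default_rng(6)

      observed_diagnoses = 12_000          # hypothetical observed summary statistic
      years = 10

      prior_draws = rng.uniform(500, 3000, size=50_000)   # prior on annual infections
      simulated = rng.poisson(prior_draws * years)        # crude stand-in "simulation"

      tolerance = 500
      accepted = prior_draws[np.abs(simulated - observed_diagnoses) < tolerance]

      print("posterior mean annual infections:", accepted.mean().round(0))
      print("90% plausibility range:", np.percentile(accepted, [5, 95]).round(0))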

  15. TU-CD-304-01: FEATURED PRESENTATION and BEST IN PHYSICS (THERAPY): Trajectory Modulated Arc Therapy: Development of Novel Arc Delivery Techniques Integrating Dynamic Table Motion for Extended Volume Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, E; Hoppe, R; Million, L

    2015-06-15

    Purpose: Integration of coordinated robotic table motion with inversely-planned arc delivery has the potential to resolve table-top delivery limitations of large-field treatments such as Total Body Irradiation (TBI), Total Lymphoid Irradiation (TLI), and Cranial-Spinal Irradiation (CSI). We formulate the foundation for Trajectory Modulated Arc Therapy (TMAT), and using Varian Developer Mode capabilities, experimentally investigate its practical implementation for such techniques. Methods: A MATLAB algorithm was developed for inverse planning optimization of the table motion, MLC positions, and gantry motion under extended-SSD geometry. To maximize the effective field size, delivery trajectories for TMAT TBI were formed with the table rotated at 270° IEC and dropped vertically to 152.5 cm SSD. Preliminary testing of algorithm parameters was done through retrospective planning analysis. Robotic delivery was programmed using custom XML scripting on the TrueBeam Developer Mode platform. Final dose was calculated using the Eclipse AAA algorithm. Initial verification of delivery accuracy was measured using OSLDs on a solid water phantom of varying thickness. Results: A comparison of DVH curves demonstrated that dynamic couch motion irradiation was sufficiently approximated by static control points spaced in intervals of less than 2 cm. Optimized MLC motion decreased the average lung dose to 68.5% of the prescription dose. The programmed irradiation integrating coordinated table motion was deliverable on a TrueBeam STx linac in 6.7 min. With the couch translating under an open 10 cm x 20 cm field angled at 10°, OSLD measurements along the midline of a solid water phantom at depths of 3, 5, and 9 cm were within 3% of the TPS AAA algorithm with an average deviation of 1.2%. Conclusion: A treatment planning and delivery system for Trajectory Modulated Arc Therapy of extended volumes has been established and experimentally demonstrated for TBI. Extension to other treatment techniques such as TLI and CSI is readily achievable through the developed platform. Grant Funding by Varian Medical Systems.

  16. WE-G-204-07: Automated Characterization of Perceptual Quality of Clinical Chest Radiographs: Improvements in Lung, Spine, and Hardware Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, J; Zhang, L; Samei, E

    Purpose: To develop and validate more robust methods for automated lung, spine, and hardware detection in AP/PA chest images. This work is part of a continuing effort to automatically characterize the perceptual image quality of clinical radiographs. [Y. Lin et al. Med. Phys. 39, 7019–7031 (2012)] Methods: Our previous implementation of lung/spine identification was applicable to only one vendor. A more generalized routine was devised based on three primary components: lung boundary detection, fuzzy c-means (FCM) clustering, and a clinically-derived lung pixel probability map. Boundary detection was used to constrain the lung segmentations. FCM clustering produced grayscale- and neighborhood-based pixel classification probabilities which are weighted by the clinically-derived probability maps to generate a final lung segmentation. Lung centerlines were set along the left-right lung midpoints. Spine centerlines were estimated as a weighted average of body contour, lateral lung contour, and intensity-based centerline estimates. Centerline estimation was tested on 900 clinical AP/PA chest radiographs which included inpatient/outpatient, upright/bedside, men/women, and adult/pediatric images from multiple imaging systems. Our previous implementation further did not account for the presence of medical hardware (pacemakers, wires, implants, staples, stents, etc.) potentially biasing image quality analysis. A hardware detection algorithm was developed using a gradient-based thresholding method. The training and testing paradigm used a set of 48 images from which 1920 51×51 pixel² ROIs with and 1920 ROIs without hardware were manually selected. Results: Acceptable lung centerlines were generated in 98.7% of radiographs while spine centerlines were acceptable in 99.1% of radiographs. Following threshold optimization, the hardware detection software yielded average true positive and true negative rates of 92.7% and 96.9%, respectively. Conclusion: Updated segmentation and centerline estimation methods in addition to new gradient-based hardware detection software provide improved data integrity control and error-checking for automated clinical chest image quality characterization across multiple radiography systems.
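
    A hedged sketch of the gradient-based thresholding idea used for hardware detection: metallic hardware produces unusually strong intensity gradients, so an ROI is flagged when the fraction of high-gradient pixels exceeds a threshold. The ROI, threshold values, and decision rule below are placeholders, not the tuned parameters from this work.

      import numpy as np

      def hardware_score(roi, grad_thresh=50.0):
          """Fraction of pixels in the ROI whose gradient magnitude is 'strong'."""
          gy, gx = np.gradient(roi.astype(float))
          grad_mag = np.hypot(gx, gy)
          return float(np.mean(grad_mag > grad_thresh))

      rng = np.random.default_rng(7)
      soft_tissue = rng.normal(120, 10, size=(51, 51))   # smooth synthetic background ROI
      with_wire = soft_tissue.copy()
      with_wire[25, :] = 400.0                           # bright wire-like object

      for name, roi in [("soft tissue", soft_tissue), ("with wire", with_wire)]:
          flagged = hardware_score(roi) > 0.01           # placeholder decision threshold
          print(f"{name}: score = {hardware_score(roi):.3f}, flagged = {flagged}")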

  17. Influence of Observed Diurnal Cycles of Aerosol Optical Depth on Aerosol Direct Radiative Effect

    NASA Technical Reports Server (NTRS)

    Arola, A.; Eck, T. F.; Huttunen, J.; Lehtinen, K. E. J.; Lindfors, A. V.; Myhre, G.; Smirinov, A.; Tripathi, S. N.; Yu, H.

    2013-01-01

    The diurnal variability of aerosol optical depth (AOD) can be significant, depending on location and dominant aerosol type. However, these diurnal cycles have rarely been taken into account in measurement-based estimates of aerosol direct radiative forcing (ADRF) or aerosol direct radiative effect (ADRE). The objective of our study was to estimate the influence of diurnal aerosol variability on top-of-atmosphere ADRE estimates. By including all available AERONET sites, we assessed the influence on global ADRE estimates, while also focusing in more detail on selected sites with the strongest impact to examine the possible regional effects. We calculated ADRE with different assumptions about the daily AOD variability: taking the observed daily AOD cycle into account and assuming diurnally constant AOD. Moreover, we estimated the corresponding differences in ADREs if the single AOD value for the daily mean was taken from the Moderate Resolution Imaging Spectroradiometer (MODIS) Terra or Aqua overpass times, instead of accounting for the true observed daily variability. The mean impact of diurnal AOD variability on 24 h ADRE estimates, averaged over all AERONET sites, was rather small, and it was relatively small even for the cases when AOD was chosen to correspond to the Terra or Aqua overpass time. This was true on average over all AERONET sites, while clearly there can be a much stronger impact at individual sites. Examples of some selected sites demonstrated that the strongest observed AOD variability (the strongest morning-afternoon contrast) does not typically result in a significant impact on 24 h ADRE. In those cases, the morning and afternoon AOD patterns are opposite and thus the impact on 24 h ADRE, when integrated over all solar zenith angles, is reduced. The most significant effect on daily ADRE was induced by AOD cycles with either maximum or minimum AOD close to local noon. In these cases, the impact on 24 h ADRE was typically around 0.1-0.2 W/m² (both positive and negative) in absolute values, or 5-10% in relative terms.

  18. PubChem3D: Shape compatibility filtering using molecular shape quadrupoles

    PubMed Central

    2011-01-01

    Background PubChem provides a 3-D neighboring relationship, which involves finding the maximal shape overlap between two static compound 3-D conformations, a computationally intensive step. It is highly desirable to avoid this overlap computation, especially if it can be determined with certainty that a conformer pair cannot meet the criteria to be a 3-D neighbor. As such, PubChem employs a series of pre-filters, based on the concept of volume, to remove approximately 65% of all conformer neighbor pairs prior to shape overlap optimization. Given that molecular volume, a somewhat vague concept, is rather effective, it leads one to wonder: can the existing PubChem 3-D neighboring relationship, which consists of billions of shape similar conformer pairs from tens of millions of unique small molecules, be used to identify additional shape descriptor relationships? Or, put more specifically, can one place an upper bound on shape similarity using other "fuzzy" shape-like concepts like length, width, and height? Results Using a basis set of 4.18 billion 3-D neighbor pairs identified from single conformer per compound neighboring of 17.1 million molecules, shape descriptors were computed for all conformers. These steric shape descriptors included several forms of molecular volume and shape quadrupoles, which essentially embody the length, width, and height of a conformer. For a given 3-D neighbor conformer pair, the volume and each quadrupole component (Qx, Qy, and Qz) were binned and their frequency of occurrence was examined. Per molecular volume type, this effectively produced three different maps, one per quadrupole component (Qx, Qy, and Qz), of allowed values for the similarity metric, shape Tanimoto (ST) ≥ 0.8. The efficiency of these relationships (in terms of true positive, true negative, false positive and false negative) as a function of ST threshold was determined in a test run of 13.2 billion conformer pairs not previously considered by the 3-D neighbor set. At an ST ≥ 0.8, a filtering efficiency of 40.4% of true negatives was achieved with only 32 false negatives out of 24 million true positives, when applying the separate Qx, Qy, and Qz maps in a series (Qxyz). This efficiency increased linearly as a function of ST threshold in the range 0.8-0.99. The Qx filter was consistently the most efficient followed by Qy and then by Qz. Use of a monopole volume showed the best overall performance, followed by the self-overlap volume and then by the analytic volume. Application of the monopole-based Qxyz filter in a "real world" test of 3-D neighboring of 4,218 chemicals of biomedical interest against 26.1 million molecules in PubChem reduced the total CPU cost of neighboring by between 24-38% and, if used as the initial filter, removed from consideration 48.3% of all conformer pairs at almost negligible computational overhead. Conclusion Basic shape descriptors, such as those embodied by size, length, width, and height, can be highly effective in identifying shape incompatible compound conformer pairs. When performing a 3-D search using a shape similarity cut-off, computation can be avoided by identifying conformer pairs that cannot meet the result criteria. Applying this methodology as a filter for PubChem 3-D neighboring computation, an improvement of 31% was realized, increasing the average conformer pair throughput from 154,000 to 202,000 per second per CPU core. PMID:21774809
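
    The prefiltering idea described above can be reduced to a very small sketch: before any expensive shape-overlap optimization, a conformer pair is discarded when its shape quadrupole components (roughly length, width, and height) are too dissimilar to permit a shape Tanimoto of at least 0.8. The single ratio bound used below is made up for illustration; PubChem's actual filter uses empirically derived Qx/Qy/Qz maps.

      import numpy as np

      def quadrupole_compatible(q_a, q_b, max_ratio=1.6):
          """q_a, q_b: (Qx, Qy, Qz) steric quadrupole components of two conformers."""
          q_a, q_b = np.asarray(q_a, float), np.asarray(q_b, float)
          ratio = np.maximum(q_a, q_b) / np.minimum(q_a, q_b)
          return bool(np.all(ratio <= max_ratio))   # False -> skip overlap optimization

      pairs = [
          ((12.0, 6.0, 2.5), (11.0, 6.5, 2.8)),   # similar elongated shapes -> keep
          ((12.0, 6.0, 2.5), (4.0, 3.9, 3.8)),    # rod vs near-sphere -> filtered out
      ]
      for q_a, q_b in pairs:
          verdict = "compute overlap" if quadrupole_compatible(q_a, q_b) else "skip"
          print(q_a, q_b, "->", verdict)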

  19. A Simple Algorithm for Predicting Bacteremia Using Food Consumption and Shaking Chills: A Prospective Observational Study.

    PubMed

    Komatsu, Takayuki; Takahashi, Erika; Mishima, Kentaro; Toyoda, Takeo; Saitoh, Fumihiro; Yasuda, Akari; Matsuoka, Joe; Sugita, Manabu; Branch, Joel; Aoki, Makoto; Tierney, Lawrence; Inoue, Kenji

    2017-07-01

    Predicting the presence of true bacteremia based on clinical examination is unreliable. We aimed to construct a simple algorithm for predicting true bacteremia by using food consumption and shaking chills. This was a prospective multicenter observational study conducted at three hospital centers in a large Japanese city. In total, 1,943 hospitalized patients aged 14 to 96 years who underwent blood culture acquisitions between April 2013 and August 2014 were enrolled. Patients with anorexia-inducing conditions were excluded. We assessed the patients' oral food intake based on the meal immediately prior to the blood culture, defining "normal food consumption" as consumption of >80% of the meal and "poor food consumption" as consumption of <80%. We concurrently evaluated patients for a history of shaking chills. We calculated the statistical characteristics of food consumption and shaking chills for the presence of true bacteremia, and subsequently built the algorithm by using recursive partitioning analysis. Among the 1,943 patients, 223 had true bacteremia. Among patients with normal food consumption and without shaking chills, the incidence of true bacteremia was 2.4% (13/552). Among patients with poor food consumption and shaking chills, the incidence of true bacteremia was 47.7% (51/107). The presence of poor food consumption had a sensitivity of 93.7% (95% confidence interval [CI], 89.4%-97.9%) for true bacteremia, and the absence of poor food consumption (ie, normal food consumption) had a negative likelihood ratio (LR) of 0.18 (95% CI, 0.17-0.19) for excluding true bacteremia. Conversely, the presence of shaking chills had a specificity of 95.1% (95% CI, 90.7%-99.4%) and a positive LR of 4.78 (95% CI, 4.56-5.00) for true bacteremia. A 2-item screening checklist for food consumption and shaking chills had excellent statistical properties as a brief screening instrument for predicting true bacteremia. © 2017 Society of Hospital Medicine
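
    The screening statistics quoted above (sensitivity, specificity, likelihood ratios) all derive from a 2x2 table, as in the small helper below. The true-positive and false-negative counts are chosen to reproduce the reported 93.7% sensitivity among the 223 bacteremic patients; the false-positive and true-negative counts are placeholders, not the study data.

      def screening_stats(tp, fp, fn, tn):
          """Sensitivity, specificity, and likelihood ratios from a 2x2 table."""
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
          lr_neg = (1 - sens) / spec
          return sens, spec, lr_pos, lr_neg

      # tp and fn reproduce the reported 93.7% sensitivity (209/223);
      # fp and tn are hypothetical placeholders.
      sens, spec, lr_pos, lr_neg = screening_stats(tp=209, fp=700, fn=14, tn=1020)
      print(f"sensitivity = {sens:.3f}, specificity = {spec:.3f}")
      print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")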

  20. How Fuzzy-Trace Theory Predicts True and False Memories for Words, Sentences, and Narratives

    PubMed Central

    Reyna, Valerie F.; Corbin, Jonathan C.; Weldon, Rebecca B.; Brainerd, Charles J.

    2016-01-01

    Fuzzy-trace theory posits independent verbatim and gist memory processes, a distinction that has implications for such applied topics as eyewitness testimony. This distinction between precise, literal verbatim memory and meaning-based, intuitive gist accounts for memory paradoxes including dissociations between true and false memory, false memories outlasting true memories, and developmental increases in false memory. We provide an overview of fuzzy-trace theory, and, using mathematical modeling, also present results demonstrating verbatim and gist memory in true and false recognition of narrative sentences and inferences. Results supported fuzzy-trace theory's dual-process view of memory: verbatim memory was relied on to reject meaning-consistent, but unpresented, sentences (via recollection rejection). However, verbatim memory was often not retrieved, and gist memory supported acceptance of these sentences (via similarity judgment and phantom recollection). Thus, mathematical models of words can be extended to explain memory for complex stimuli, such as narratives, the kind of memory interrogated in law. PMID:27042402

  1. The power metric: a new statistically robust enrichment-type metric for virtual screening applications with early recovery capability.

    PubMed

    Lopes, Julio Cesar Dias; Dos Santos, Fábio Mendes; Martins-José, Andrelly; Augustyns, Koen; De Winter, Hans

    2017-01-01

    A new metric for the evaluation of model performance in the field of virtual screening and quantitative structure-activity relationship applications is described. This metric has been termed the power metric and is defined as the true positive rate divided by the sum of the true positive and false positive rates at a given cutoff threshold. The performance of this metric is compared with alternative metrics such as the enrichment factor, the relative enrichment factor, the receiver operating curve enrichment factor, the correct classification rate, Matthews correlation coefficient and Cohen's kappa coefficient. The performance of this new metric is found to be quite robust with respect to variations in the applied cutoff threshold and in the ratio of the number of active compounds to the total number of compounds, while at the same time being sensitive to variations in model quality. It possesses the correct characteristics for its application in early-recognition virtual screening problems.
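
    Written out explicitly, the metric defined above is PM = TPR / (TPR + FPR) at a chosen score cutoff, where TPR and FPR are the true- and false-positive rates. A short sketch of how it could be computed for a ranked virtual-screening list follows; the scores and activity labels are synthetic.

      import numpy as np

      def power_metric(scores, labels, cutoff):
          """PM = TPR / (TPR + FPR) for the compounds selected at a score cutoff."""
          scores, labels = np.asarray(scores), np.asarray(labels).astype(bool)
          selected = scores >= cutoff
          tpr = np.mean(selected[labels]) if labels.any() else 0.0
          fpr = np.mean(selected[~labels]) if (~labels).any() else 0.0
          return tpr / (tpr + fpr) if (tpr + fpr) > 0 else 0.0

      rng = np.random.default_rng(8)
      labels = rng.random(1000) < 0.05                      # ~5% actives
      scores = rng.normal(loc=np.where(labels, 1.0, 0.0))   # actives score higher on average

      print("power metric at cutoff 1.5:", round(power_metric(scores, labels, 1.5), 3))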

  2. [Clinicopathological characterization of true hermaphroditism complicated with seminoma and review of the literature].

    PubMed

    Hua, Xing; Liu, Shao-Jie; Lu, Lin; Li, Chao-Xia; Yu, Li-Na

    2012-08-01

    To study the clinicopathological characteristics and diagnosis of true hermaphroditism complicated with seminoma. We retrospectively analyzed the clinicopathological data of a case of true hermaphroditism complicated with seminoma and reviewed the related literature. The patient was a 42-year-old male, admitted for bilateral lower back pain and discomfort. CT showed a huge mass in the lower middle abdomen. Gross pathological examination revealed a mass of uterine tissue, 7 cm x 2 cm x 6 cm in size, with bilateral oviducts and ovarian tissue. There was a cryptorchidism (4.0 cm x 2.5 cm x 1.5 cm) on the left and a huge tumor (22 cm x 9 cm x 6 cm) on the right of the uterine tissue. The tumor was completely encapsulated, with some testicular tissue. Microscopically, the tumor tissue was arranged in nests or sheets divided and surrounded by fibrous tissue. The tumor cells were large, with abundant and transparent cytoplasm, deeply stained nuclei, coarse granular chromatin, visible mitoses, and infiltration of a small number of lymphocytes in the stroma. The karyotype was 46,XX. Immunohistochemistry showed that PLAP and CD117 were positive, while AFP, Vimentin, EMA, S100, CK-LMW, Desmin, CD34 and CD30 were negative, and Ki-67 was 20% positive. A small amount of residual normal testicular tissue was seen in the tumor tissue. True hermaphroditism complicated with seminoma is rare. Histopathological analysis combined with immunohistochemical detection is of great value for its diagnosis and differential diagnosis.

  3. Fusion of Computed Tomography and PROPELLER Diffusion-Weighted Magnetic Resonance Imaging for the Detection and Localization of Middle Ear Cholesteatoma.

    PubMed

    Locketz, Garrett D; Li, Peter M M C; Fischbein, Nancy J; Holdsworth, Samantha J; Blevins, Nikolas H

    2016-10-01

    A method to optimize imaging of cholesteatoma by combining the strengths of available modalities will improve diagnostic accuracy and help to target treatment. To assess whether fusing Periodically Rotated Overlapping Parallel Lines With Enhanced Reconstruction (PROPELLER) diffusion-weighted magnetic resonance imaging (DW-MRI) with corresponding temporal bone computed tomography (CT) images could increase cholesteatoma diagnostic and localization accuracy across 6 distinct anatomical regions of the temporal bone. Case series and preliminary technology evaluation of adults with preoperative temporal bone CT and PROPELLER DW-MRI scans who underwent surgery for clinically suggested cholesteatoma at a tertiary academic hospital. When cholesteatoma was encountered surgically, the precise location was recorded in a diagram of the middle ear and mastoid. For each patient, the 3 image data sets (CT, PROPELLER DW-MRI, and CT-MRI fusion) were reviewed in random order for the presence or absence of cholesteatoma by an investigator blinded to operative findings. If cholesteatoma was deemed present on review of each imaging modality, the location of the lesion was mapped presumptively. Image analysis was then compared with surgical findings. Twelve adults (5 women and 7 men; median [range] age, 45.5 [19-77] years) were included. The use of CT-MRI fusion had greater diagnostic sensitivity (0.88 vs 0.75), positive predictive value (0.88 vs 0.86), and negative predictive value (0.75 vs 0.60) than PROPELLER DW-MRI alone. Image fusion also showed increased overall localization accuracy when stratified across 6 distinct anatomical regions of the temporal bone (localization sensitivity and specificity, 0.76 and 0.98 for CT-MRI fusion vs 0.58 and 0.98 for PROPELLER DW-MRI). For PROPELLER DW-MRI, there were 15 true-positive, 45 true-negative, 1 false-positive, and 11 false-negative results; overall accuracy was 0.83. For CT-MRI fusion, there were 20 true-positive, 45 true-negative, 1 false-positive, and 6 false-negative results; overall accuracy was 0.90. The poor anatomical spatial resolution of DW-MRI makes precise localization of cholesteatoma within the middle ear and mastoid a diagnostic challenge. This study suggests that the bony anatomic detail obtained via CT coupled with the excellent sensitivity and specificity of PROPELLER DW-MRI for cholesteatoma can improve both preoperative identification and localization of disease over DW-MRI alone.
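    The localization figures quoted above follow directly from the per-region counts; the short Python sketch below (illustrative, not part of the study) recomputes sensitivity, specificity, and overall accuracy from those counts, agreeing with the reported values up to rounding.

      # Recompute per-region metrics from the counts reported above
      # (PROPELLER DW-MRI alone vs CT-MRI fusion).

      def summarize(tp, tn, fp, fn):
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          accuracy = (tp + tn) / (tp + tn + fp + fn)
          return sensitivity, specificity, accuracy

      for label, counts in {
          "PROPELLER DW-MRI": (15, 45, 1, 11),
          "CT-MRI fusion":    (20, 45, 1, 6),
      }.items():
          sens, spec, acc = summarize(*counts)
          print(f"{label}: sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")
      # PROPELLER DW-MRI: sensitivity=0.58 specificity=0.98 accuracy=0.83
      # CT-MRI fusion:    sensitivity=0.77 specificity=0.98 accuracy=0.90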

  4. Year-to-year variations in annual average indoor 222Rn concentrations.

    PubMed

    Martz, D E; Rood, A S; George, J L; Pearson, M D; Langner, G H

    1991-09-01

    Annual average indoor 222Rn concentrations in 40 residences in and around Grand Junction, CO, have been measured repeatedly since 1984 using commercial alpha-track monitors (ATM) deployed for successive 12-mo time periods. Data obtained provide a quantitative measure of the year-to-year variations in the annual average Rn concentrations in these structures over this 6-y period. A mean coefficient of variation of 25% was observed for the year-to-year variability of the measurements at 25 sampling stations for which complete data were available. Individual coefficients of variation at the various stations ranged from a low of 7.7% to a high of 51%. The observed mean coefficient of variation includes contributions due to the variability in detector response as well as the true year-to-year variation in the annual average Rn concentrations. Factoring out the contributions from the measured variability in the response of the detectors used, the actual year-to-year variability of the annual average Rn concentrations was approximately 22%.
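    One way to read the last two sentences is as a variance decomposition in which detector variability and true year-to-year variability add in quadrature. The sketch below only illustrates that reading; the 12% detector coefficient of variation is an assumed value (the abstract does not state it), chosen so the arithmetic reproduces the reported 25% to 22% reduction.

      import math

      # Hedged illustration of factoring detector variability out of the
      # observed year-to-year coefficient of variation (CV), assuming
      # independent variance components that add in quadrature.
      cv_observed = 0.25   # mean year-to-year CV reported across stations
      cv_detector = 0.12   # ASSUMED alpha-track detector response CV

      cv_true = math.sqrt(cv_observed**2 - cv_detector**2)
      print(f"estimated true year-to-year CV = {cv_true:.1%}")  # ~21.9%, close to the reported 22%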

  5. Automatic Brain Tumor Detection in T2-weighted Magnetic Resonance Images

    NASA Astrophysics Data System (ADS)

    Dvořák, P.; Kropatsch, W. G.; Bartušek, K.

    2013-10-01

    This work focuses on fully automatic detection of brain tumors. The first aim is to determine whether the image contains a brain with a tumor, and if it does, localize it. The goal of this work is not the exact segmentation of tumors, but the localization of their approximate position. The test database contains 203 T2-weighted images of which 131 are images of healthy brain and the remaining 72 images contain brain with pathological area. The estimation of whether the image shows an afflicted brain, and where the pathological area is, is done by multi-resolution symmetry analysis. The first goal was tested by a five-fold cross-validation technique with 100 repetitions to avoid dependency of the results on sample order. This part of the proposed method reaches a true positive rate of 87.52% and a true negative rate of 93.14% for afflicted brain detection. The evaluation of the second part of the algorithm was carried out by comparing the estimated location to the true tumor location. The detection of the tumor location reaches 95.83% correct anomaly detection and 87.5% correct tumor location.

  6. Examining the Relationships among Doctoral Completion Time, Gender, and Future Salary Prospects for Physical Scientists

    ERIC Educational Resources Information Center

    Potvin, Geoff; Tai, Robert H.

    2012-01-01

    Using data from a national survey of Ph.D.-holding chemists and physicists, time-to-doctoral degree is found to be a strong predictor of salary: each additional year in graduate school corresponds to a significantly lower average salary. This is true even while controlling for standard measures of scientific merit (grant funding and publication…

  7. Improving Access to Prekindergarten for Children of Immigrants: "Building Relationships." Fact Sheet No. 3

    ERIC Educational Resources Information Center

    Lei, Serena

    2014-01-01

    Pre-K has been shown to strongly boost children's learning trajectories. This is as true, or even truer, for children of immigrants and English language learners (ELLs) as for children overall. Children of immigrants, who make up about a quarter of children in the United States, have significantly lower rates of pre-K enrollment, on average, than…

  8. Improving Access to Prekindergarten for Children of Immigrants: "Enrollment Strategies." Fact Sheet No. 2

    ERIC Educational Resources Information Center

    Lei, Serena

    2014-01-01

    Pre-K has been shown to strongly boost children's learning trajectories. This is as true, or even truer, for children of immigrants and English language learners (ELLs) as for children overall. Children of immigrants, who make up about a quarter of children in the United States, have significantly lower rates of pre-K enrollment, on average, than…

  9. Improving Access to Prekindergarten for Children of Immigrants: "Outreach." Fact Sheet No. 1

    ERIC Educational Resources Information Center

    Lei, Serena

    2014-01-01

    Pre-K has been shown to strongly boost children's learning trajectories. This is as true, or even truer, for children of immigrants and English language learners (ELLs) as for children overall. Children of immigrants, who make up about a quarter of children in the United States, have significantly lower rates of pre-K enrollment, on average, than…

  10. 34 CFR 395.8 - Distribution and use of income from vending machines on Federal property.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... blind vendor in any amount exceeding the average net income of the total number of blind vendors in the... 34 Education 2 2011-07-01 2010-07-01 true Distribution and use of income from vending machines on... use of income from vending machines on Federal property. (a) Vending machine income from vending...

  11. 40 CFR 63.5985 - What are my alternatives for meeting the emission limits for tire production affected sources?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... emission limits in § 63.5984. (a) Purchase alternative. Use only cements and solvents that, as purchased... constituent option). (b) Monthly average alternative, without using an add-on control device. Use cements and... 40 Protection of Environment 12 2011-07-01 2009-07-01 true What are my alternatives for meeting...

  12. 40 CFR 63.5985 - What are my alternatives for meeting the emission limits for tire production affected sources?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... emission limits in § 63.5984. (a) Purchase alternative. Use only cements and solvents that, as purchased... constituent option). (b) Monthly average alternative, without using an add-on control device. Use cements and... 40 Protection of Environment 12 2010-07-01 2010-07-01 true What are my alternatives for meeting...

  13. The Equity Indexed Annuity: A Monte Carlo Forensic Investigation into a Controversial Financial Product

    ERIC Educational Resources Information Center

    Carver, Andrew B.

    2013-01-01

    Equity Indexed Annuities (EIAs) are controversial financial products because the payoffs to investors are based on formulas that are supposedly too complex for average investors to understand. This brief describes how Monte Carlo simulation can provide insight into the true risk and return of an EIA. This approach can be used as a project…

  14. Frequency-Wavenumber (F-K) Processing for Infrasound Distributed Arrays

    DTIC Science & Technology

    2012-10-01

    UNCLASSIFIED Approved for public release; distribution is unlimited (U) Frequency-Wavenumber (F-K) Processing for Infrasound Distributed...have conventionally been used to detect infrasound. Pipe arrays, used in conjunction with microbarometers, provide noise reduction by averaging wind...signals. This is especially true for infrasound and low-frequency acoustic sources of tactical interest in the 1 to 100 Hz range. The work described

  15. 40 CFR Figure 1 to Subpart Qqq of... - Data Summary Sheet for Determination of Average Opacity

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Data Summary Sheet for Determination of... SOURCE CATEGORIES National Emission Standards for Hazardous Air Pollutants for Primary Copper Smelting Pt. 63, Subpt. QQQ, Fig. 1 Figure 1 to Subpart QQQ of Part 63—Data Summary Sheet for Determination of...

  16. Environmental Data Collection Using Autonomous Wave Gliders

    DTIC Science & Technology

    2014-12-01

    Observing System IMU Inertial Measurement Unit LRI Liquid Robotics, Inc. MASFlux Marine-Air-Sea-Flux METOC meteorological and oceanographic...position, velocity, heading, pitch, roll, and six-axis acceleration rates (Figure 11). A separate temperature probe also provides sea surface...Position, Velocity, and Magnetic declination True North Revolution Technologies GS Gyro Stabilized Electronic Compass Heading, Pitch, and Roll

  17. How to get statistically significant effects in any ERP experiment (and why you shouldn't).

    PubMed

    Luck, Steven J; Gaspelin, Nicholas

    2017-01-01

    ERP experiments generate massive datasets, often containing thousands of values for each participant, even after averaging. The richness of these datasets can be very useful in testing sophisticated hypotheses, but this richness also creates many opportunities to obtain effects that are statistically significant but do not reflect true differences among groups or conditions (bogus effects). The purpose of this paper is to demonstrate how common and seemingly innocuous methods for quantifying and analyzing ERP effects can lead to very high rates of significant but bogus effects, with the likelihood of obtaining at least one such bogus effect exceeding 50% in many experiments. We focus on two specific problems: using the grand-averaged data to select the time windows and electrode sites for quantifying component amplitudes and latencies, and using one or more multifactor statistical analyses. Reanalyses of prior data and simulations of typical experimental designs are used to show how these problems can greatly increase the likelihood of significant but bogus results. Several strategies are described for avoiding these problems and for increasing the likelihood that significant effects actually reflect true differences among groups or conditions. © 2016 Society for Psychophysiological Research.
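    The multiple-comparisons intuition behind these significant-but-bogus rates can be illustrated with a toy Monte Carlo (not the authors' re-analyses or simulations): when a null experiment is tested at many time-window by electrode-site combinations, the chance of at least one significant effect climbs far above the nominal 5%.

      import numpy as np
      from scipy import stats

      # Toy illustration: with no true effect, testing many time-window x
      # electrode-site combinations per experiment yields at least one
      # "significant" comparison far more often than 5% of the time.
      rng = np.random.default_rng(0)
      n_experiments, n_subjects, n_comparisons = 2000, 20, 15

      false_alarms = 0
      for _ in range(n_experiments):
          # condition-difference scores with a true effect of zero
          diffs = rng.normal(0.0, 1.0, size=(n_subjects, n_comparisons))
          _, p = stats.ttest_1samp(diffs, 0.0, axis=0)
          if (p < 0.05).any():
              false_alarms += 1

      print(f"experiments with at least one bogus effect: {false_alarms / n_experiments:.0%}")
      # roughly 1 - 0.95**15, i.e. about 54%, for independent comparisons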

  18. How to Get Statistically Significant Effects in Any ERP Experiment (and Why You Shouldn’t)

    PubMed Central

    Luck, Steven J.; Gaspelin, Nicholas

    2016-01-01

    Event-related potential (ERP) experiments generate massive data sets, often containing thousands of values for each participant, even after averaging. The richness of these data sets can be very useful in testing sophisticated hypotheses, but this richness also creates many opportunities to obtain effects that are statistically significant but do not reflect true differences among groups or conditions (bogus effects). The purpose of this paper is to demonstrate how common and seemingly innocuous methods for quantifying and analyzing ERP effects can lead to very high rates of significant-but-bogus effects, with the likelihood of obtaining at least one such bogus effect exceeding 50% in many experiments. We focus on two specific problems: using the grand average data to select the time windows and electrode sites for quantifying component amplitudes and latencies, and using one or more multi-factor statistical analyses. Re-analyses of prior data and simulations of typical experimental designs are used to show how these problems can greatly increase the likelihood of significant-but-bogus results. Several strategies are described for avoiding these problems and for increasing the likelihood that significant effects actually reflect true differences among groups or conditions. PMID:28000253

  19. Estimating the duration of geologic intervals from a small number of age determinations: A challenge common to petrology and paleobiology

    NASA Astrophysics Data System (ADS)

    Glazner, Allen F.; Sadler, Peter M.

    2016-12-01

    The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ~80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is (n + 1)/(n - 1). Systematic undersampling of interval lengths can have a large effect on calculated magma fluxes in plutonic systems. The problem is analogous to determining the duration of an extinct species from its fossil occurrences. Confidence interval statistics developed for species origination and extinction times are applicable to the onset and cessation of magmatic events.
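    The quoted correction factor follows from the expected range of n dates drawn uniformly over an interval, E[range] = (n - 1)/(n + 1) times the true duration. The Monte Carlo sketch below (illustrative only) checks this for n = 10, where the sampled range captures ~80% of the interval, and shows that multiplying by (n + 1)/(n - 1) recovers the full duration on average.

      import numpy as np

      # Monte Carlo check of how the observed age range from n random dates
      # underestimates the true interval, and of the (n + 1)/(n - 1)
      # correction factor for a uniform random distribution.
      rng = np.random.default_rng(1)
      true_duration, n, trials = 1.0, 10, 100_000

      ages = rng.uniform(0.0, true_duration, size=(trials, n))
      observed = ages.max(axis=1) - ages.min(axis=1)

      print(f"mean observed / true: {observed.mean():.3f}")                      # ~0.818 = (n - 1)/(n + 1)
      print(f"after correction:     {observed.mean() * (n + 1) / (n - 1):.3f}")  # ~1.000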

  20. The Effect of Technological Devices on Cervical Lordosis.

    PubMed

    Öğrenci, Ahmet; Koban, Orkun; Yaman, Onur; Dalbayrak, Sedat; Yılmaz, Mesut

    2018-03-15

    There is a need for cervical flexion, and even cervical hyperflexion, when using technological devices, especially mobile phones. We investigated the effect of this use on the cervical lordosis angle. A group of 156 patients who presented with neck pain alone between 2013 and 2016 and had no additional problems was included. Patients were specifically questioned about mobile phone, tablet, and other device usage. The total usage value was defined as the years of use multiplied by the average daily use in hours (average hours per day x years: hy). Cervical lordosis angles were statistically compared with the total time of use. In the overall ROC analysis, the cut-off value was found to be 20.5 hy. At this cut-off, the overall accuracy was very good, at 72.4%, and the rates of correctly identifying patients at risk and not at risk were high. The ROC analysis was statistically significant. The use of computing devices, especially mobile telephones, and the resulting increase in flexion of the cervical spine indicate that cervical vertebral problems will increase even in younger people in the future. In addition to more attentive use, more ergonomic devices should be developed.

  1. Prion protein immunocytochemistry helps to establish the true incidence of prion diseases.

    PubMed

    Lantos, P L; McGill, I S; Janota, I; Doey, L J; Collinge, J; Bruce, M T; Whatley, S A; Anderton, B H; Clinton, J; Roberts, G W

    1992-11-23

    Creutzfeldt-Jakob disease (CJD) and Gerstmann-Sträussler-Scheinker disease (GSSD) are transmissible spongiform encephalopathies or prion diseases affecting man. It has been reported that prion diseases may occur without the histological hallmarks of spongiform encephalopathies: vacuolation of the cerebral grey matter, neuronal loss and astrocytosis. These cases without characteristic neuropathology may go undiagnosed and consequently the true incidence of transmissible dementias is likely to have been underestimated. Immunocytochemistry using antibodies to prion protein gives positive staining of these cases, albeit the pattern of immunostaining differs from that seen in typical forms. Accumulation of prion protein is a molecular hallmark of prion diseases, and thus a reproducible, speedy and cost-efficient immunocytochemical screening of unusual dementias may help to establish the true incidence of prion diseases.

  2. Defining the lateral and accessory views of the patella: an anatomic and radiographic study with implications for fracture treatment.

    PubMed

    Berkes, Marschall B; Little, Milton T M; Pardee, Nadine C; Lazaro, Lionel E; Helfet, David L; Lorich, Dean G

    2013-12-01

    The majority of orthopaedic surgeons rely on a lateral fluoroscopic image to assess reduction during patella fracture osteosynthesis. However, a comprehensive radiographic description of the lateral view of the patella has not been performed previously, and no accessory views to better visualize specific anatomic features have been developed. The purpose of this study was to provide a detailed anatomic description of all radiographic features of the true lateral of the patella, describe reproducible accessory views for assessing specific features of the patella, and demonstrate their utility in a fracture model. Twelve cadaver knee specimens free of patellofemoral pathology were used, and imaging was performed using standard C-arm fluoroscopy. For each specimen, a true lateral radiographic projection of the patella was obtained and distinct features were noted. Next, an arthrotomy was made and steel wire was contoured and fixed to various anatomic regions of the patella so as to obliterate the radiographic densities on the true lateral projection, thus confirming their anatomic correlation. Ideal views of the lateral and medial facets themselves were determined using radiographic markers and varying amounts of internal or external rotation of the specimen. Last, a transverse osteotomy was created in each patella and the ability of the true lateral and accessory views to detect malreduction was assessed. The true lateral projection of the patella was obtained with the limb in neutral alignment. Constant radiographic features of the lateral view of the patella include the articular tangent, a secondary articular density of variable length, and a dorsal cortical density. The articular tangent was produced by the central ridge between the medial and lateral facets in all specimens. The secondary articular density was created by a confluence of the edge of the lateral and edge of the medial facets in 5 patellas, a confluence of the edge of the lateral facet and the intersection of the odd and medial facets in 6 patellas, and the edge of the lateral facet alone in 1 patella. The edge of the lateral facet gave a constant contribution to the appearance of the secondary articular density in all cases. A distinct accessory view of the tangent of the lateral facet could be seen with an average of 17 degrees of patella external rotation (range, 12-35 degrees), and the tangent of the medial facet with an average of 26.5 degrees of internal rotation (range, 15-45 degrees). These accessory views were better able to visualize malreduction than the single lateral projection in a fracture model in all specimens. In summary, this study provides a comprehensive description of the true lateral radiographic view of the patella and of reproducible accessory views. These views can be used to evaluate minimally displaced patella fractures when computed tomography is not desired to assess the true amount of displacement, and to assess intraoperative reduction during patella fracture osteosynthesis.

  3. The dose delivery effect of the different Beam ON interval in FFF SBRT: TrueBEAM

    NASA Astrophysics Data System (ADS)

    Tawonwong, T.; Suriyapee, S.; Oonsiri, S.; Sanghangthum, T.; Oonsiri, P.

    2016-03-01

    The purpose of this study is to determine the dose delivery effect of different Beam ON intervals in Flattening Filter Free Stereotactic Body Radiation Therapy (FFF-SBRT). Three 10 MV FFF SBRT plans (2 half-rotating Rapid Arc, 9 to 10 Gy/fraction) were selected and irradiated with three different intervals (100%, 50% and 25%) using the RPM gating system. Plan verification was performed with the ArcCHECK for gamma analysis and with an ionization chamber for point dose measurement. The dose delivery time for each interval was recorded. For gamma analysis (2%/2 mm criteria), the average percent pass of all plans for the 100%, 50% and 25% intervals was 86.1±3.3%, 86.0±3.0% and 86.1±3.3%, respectively. For point dose measurement, the average ratios of each interval to the treatment planning dose were 1.012±0.015, 1.011±0.014 and 1.011±0.013 for the 100%, 50% and 25% intervals, respectively. The average dose delivery time increased from 74.3±5.0 seconds for the 100% interval to 154.3±12.6 and 347.9±20.3 seconds for the 50% and 25% intervals, respectively. Dose delivery quality was therefore equivalent across the different Beam ON intervals in FFF-SBRT delivered on the TrueBEAM. While the 100% interval represents a breath-hold treatment, free-breathing treatment using the RPM gating system can be delivered with confidence.

  4. Pooled Results From 5 Validation Studies of Dietary Self-Report Instruments Using Recovery Biomarkers for Energy and Protein Intake

    PubMed Central

    Freedman, Laurence S.; Commins, John M.; Moler, James E.; Arab, Lenore; Baer, David J.; Kipnis, Victor; Midthune, Douglas; Moshfegh, Alanna J.; Neuhouser, Marian L.; Prentice, Ross L.; Schatzkin, Arthur; Spiegelman, Donna; Subar, Amy F.; Tinker, Lesley F.; Willett, Walter

    2014-01-01

    We pooled data from 5 large validation studies of dietary self-report instruments that used recovery biomarkers as references to clarify the measurement properties of food frequency questionnaires (FFQs) and 24-hour recalls. The studies were conducted in widely differing US adult populations from 1999 to 2009. We report on total energy, protein, and protein density intakes. Results were similar across sexes, but there was heterogeneity across studies. Using a FFQ, the average correlation coefficients for reported versus true intakes for energy, protein, and protein density were 0.21, 0.29, and 0.41, respectively. Using a single 24-hour recall, the coefficients were 0.26, 0.40, and 0.36, respectively, for the same nutrients and rose to 0.31, 0.49, and 0.46 when three 24-hour recalls were averaged. The average rate of under-reporting of energy intake was 28% with a FFQ and 15% with a single 24-hour recall, but the percentages were lower for protein. Personal characteristics related to under-reporting were body mass index, educational level, and age. Calibration equations for true intake that included personal characteristics provided improved prediction. This project establishes that FFQs have stronger correlations with truth for protein density than for absolute protein intake, that the use of multiple 24-hour recalls substantially increases the correlations when compared with a single 24-hour recall, and that body mass index strongly predicts under-reporting of energy and protein intakes. PMID:24918187

  5. Emotional content enhances true but not false memory for categorized stimuli.

    PubMed

    Choi, Hae-Yoon; Kensinger, Elizabeth A; Rajaram, Suparna

    2013-04-01

    Past research has shown that emotion enhances true memory, but that emotion can either increase or decrease false memory. Two theoretical possibilities-the distinctiveness of emotional stimuli and the conceptual relatedness of emotional content-have been implicated as being responsible for influencing both true and false memory for emotional content. In the present study, we sought to identify the mechanisms that underlie these mixed findings by equating the thematic relatedness of the study materials across each type of valence used (negative, positive, or neutral). In three experiments, categorically bound stimuli (e.g., funeral, pets, and office items) were used for this purpose. When the encoding task required the processing of thematic relatedness, a significant true-memory enhancement for emotional content emerged in recognition memory, but no emotional boost to false memory (Experiment 1). This pattern persisted for true memory with a longer retention interval between study and test (24 h), and false recognition was reduced for emotional items (Experiment 2). Finally, better recognition memory for emotional items once again emerged when the encoding task (arousal ratings) required the processing of the emotional aspect of the study items, with no emotional boost to false recognition (Experiment 3). Together, these findings suggest that when emotional and neutral stimuli are equivalently high in thematic relatedness, emotion continues to improve true memory, but it does not override other types of grouping to increase false memory.

  6. False memory and level of processing effect: an event-related potential study.

    PubMed

    Beato, Maria Soledad; Boldini, Angela; Cadavid, Sara

    2012-09-12

    Event-related potentials (ERPs) were used to determine the effects of level of processing on true and false memory, using the Deese-Roediger-McDermott (DRM) paradigm. In the DRM paradigm, lists of words highly associated to a single nonpresented word (the 'critical lure') are studied and, in a subsequent memory test, critical lures are often falsely remembered. Lists with three critical lures per list were auditorily presented here to participants who studied them with either a shallow (saying whether the word contained the letter 'o') or a deep (creating a mental image of the word) processing task. Visual presentation modality was used on a final recognition test. True recognition of studied words was significantly higher after deep encoding, whereas false recognition of nonpresented critical lures was similar in both experimental groups. At the ERP level, true and false recognition showed similar patterns: no FN400 effect was found, whereas comparable left parietal and late right frontal old/new effects were found for true and false recognition in both experimental conditions. Items studied under shallow encoding conditions elicited more positive ERP than items studied under deep encoding conditions at a 1000-1500 ms interval. These ERP results suggest that true and false recognition share some common underlying processes. Differential effects of level of processing on true and false memory were found only at the behavioral level but not at the ERP level.

  7. Potential breeding distributions of U.S. birds predicted with both short-term variability and long-term average climate data.

    PubMed

    Bateman, Brooke L; Pidgeon, Anna M; Radeloff, Volker C; Flather, Curtis H; VanDerWal, Jeremy; Akçakaya, H Resit; Thogmartin, Wayne E; Albright, Thomas P; Vavrus, Stephen J; Heglund, Patricia J

    2016-12-01

    Climate conditions, such as temperature or precipitation, averaged over several decades strongly affect species distributions, as evidenced by experimental results and a plethora of models demonstrating statistical relations between species occurrences and long-term climate averages. However, long-term averages can conceal climate changes that have occurred in recent decades and may not capture actual species occurrence well because the distributions of species, especially at the edges of their range, are typically dynamic and may respond strongly to short-term climate variability. Our goal here was to test whether bird occurrence can be predicted by covariates based on either short-term climate variability or long-term climate averages. We parameterized species distribution models (SDMs) based on either short-term variability or long-term average climate covariates for 320 bird species in the conterminous USA and tested whether any life-history trait-based guilds were particularly sensitive to short-term conditions. Models including short-term climate variability performed well based on their cross-validated area under the curve (AUC) score (0.85), as did models based on long-term climate averages (0.84). Similarly, both models performed well compared to independent presence/absence data from the North American Breeding Bird Survey (independent AUC of 0.89 and 0.90, respectively). However, models based on short-term variability covariates more accurately classified true absences for most species (73% of true absences classified within the lowest quarter of environmental suitability vs. 68%). In addition, they have the advantage that they can reveal the dynamic relationship between species and their environment because they capture the spatial fluctuations of species potential breeding distributions. With this information, we can identify which species and guilds are sensitive to climate variability, identify sites of high conservation value where climate variability is low, and assess how species' potential distributions may have already shifted due to recent climate change. However, long-term climate averages require less data and processing time and may be more readily available for some areas of interest. Where data on short-term climate variability are not available, long-term climate information is a sufficient predictor of species distributions in many cases. However, short-term climate variability data may provide information not captured with long-term climate data for use in SDMs. © 2016 by the Ecological Society of America.
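    The absence-classification check quoted above (the share of true absences that fall in the lowest quarter of environmental suitability) is straightforward to compute from model predictions. The sketch below is generic, uses synthetic suitability scores rather than the study's SDM output, and takes "lowest quarter" to mean the bottom quarter of the suitability range (an assumption).

      import numpy as np

      # Generic sketch of the absence-classification check described above:
      # what fraction of truly absent sites receive a predicted suitability
      # in the lowest quarter of the suitability range? Synthetic data only.
      rng = np.random.default_rng(2)
      suitability = rng.random(1000)                   # predicted suitability in [0, 1]
      truly_absent = rng.random(1000) > suitability    # absences likelier where suitability is low

      in_lowest_quarter = suitability <= 0.25
      frac = (in_lowest_quarter & truly_absent).sum() / truly_absent.sum()
      print(f"true absences in lowest quarter of suitability: {frac:.0%}")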

  8. "That never happened": adults' discernment of children's true and false memory reports.

    PubMed

    Block, Stephanie D; Shestowsky, Donna; Segovia, Daisy A; Goodman, Gail S; Schaaf, Jennifer M; Alexander, Kristen Weede

    2012-10-01

    Adults' evaluations of children's reports can determine whether legal proceedings are undertaken and whether they ultimately lead to justice. The current study involved 92 undergraduates and 35 laypersons who viewed and evaluated videotaped interviews of 3- and 5-year-olds providing true or false memory reports. The children's reports fell into the following categories based on a 2 (event type: true vs. false) × 2 (child report: assent vs. denial) factorial design: accurate reports, false reports, accurate denials, and false denials. Results revealed that adults were generally better able to correctly judge accurate reports, accurate denials, and false reports compared with false denials: For false denials, adults were, on average, "confident" that the event had not occurred, even though the event had in fact been experienced. Participant age predicted performance. These findings underscore the greater difficulty adults have in evaluating young children's false denials compared with other types of reports. Implications for law-related situations in which adults are called upon to evaluate children's statements are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  9. The Role of Binarity in the Angular Momentum Evolution of M Dwarfs

    NASA Astrophysics Data System (ADS)

    Stauffer, John; Rebull, Luisa; K2 clusters team

    2018-01-01

    We have analysed K2 light curves for of order a thousand low-mass stars in each of the 8 Myr old Upper Sco association, the 125 Myr old Pleiades open cluster and the ~700 Myr old Praesepe cluster. A very large fraction of these stars show well-determined rotation periods with K2, and where the star is a binary, we are usually able to determine periods for both stars. In Upper Sco, where there are ~150 M dwarf binaries with K2 light curves, the binary stars have periods that are on average much shorter, and much closer to each other, than would be true if drawn at random from the Upper Sco M dwarf single stars. The same is true in the Pleiades, though the size of the differences from the single M dwarf population is smaller. By Praesepe age, the M dwarf binaries are still somewhat rapidly rotating, but their period differences are not significantly different from what would be expected if drawn by chance from the singles.

  10. Optic Disc Drusen in Children

    PubMed Central

    Chang, Melinda Y.; Pineles, Stacy L.

    2016-01-01

    Optic disc drusen occur in 0.4% of children and consist of acellular intracellular and extracellular deposits that often become calcified over time. They are typically buried early in life and generally become superficial, and therefore visible, later in childhood, at the average age of 12 years. Their main clinical significance lies in the ability of optic disc drusen, particularly when buried, to simulate true optic disc edema. Misdiagnosing drusen as true disc edema may lead to an invasive and unnecessary workup for elevated intracranial pressure. Ancillary testing, including ultrasonography, fluorescein angiography, fundus autofluorescence, and optical coherence tomography, may aid in the correct diagnosis of optic disc drusen. Complications of optic disc drusen in children include visual field defects, hemorrhages, choroidal neovascular membrane, non-arteritic anterior ischemic optic neuropathy, and retinal vascular occlusions. Treatment options for these complications include ocular hypotensive agents for visual field defects and intravitreal anti-vascular endothelial growth factor (anti-VEGF) agents for choroidal neovascular membranes. In most cases, however, children with optic disc drusen can be managed by observation with serial examinations and visual field testing, once true optic disc edema has been excluded. PMID:27033945

  11. “That Never Happened”: Adults’ Discernment of Children’s True and False Memory Reports

    PubMed Central

    Block, Stephanie D.; Shestowsky, Donna; Segovia, Daisy A.; Goodman, Gail S.; Schaaf, Jennifer M.; Alexander, Kristen Weede

    2014-01-01

    Adults’ evaluations of children’s reports can determine whether legal proceedings are undertaken and whether they ultimately lead to justice. The current study involved 92 undergraduates and 35 laypersons who viewed and evaluated videotaped interviews of 3- and 5-year-olds providing true or false memory reports. The children’s reports fell into the following categories based on a 2 (event type: true vs. false) × 2 (child report: assent vs. denial) factorial design: Accurate reports, false reports, accurate denials, and false denials. Results revealed that adults were generally better able to correctly judge accurate reports, accurate denials, and false reports compared to false denials: For false denials, adults were, on average, “confident” that the event had not occurred, even though the event had in fact been experienced. Participant age predicted performance. These findings underscore the greater difficulty adults have in evaluating young children’s false denials compared to other types of reports. Implications for law-related situations in which adults are called upon to evaluate children’s statements are discussed. PMID:23030818

  12. Detection and classification of gaseous sulfur compounds by solid electrolyte cyclic voltammetry of cermet sensor array.

    PubMed

    Kramer, Kirsten E; Rose-Pehrsson, Susan L; Hammond, Mark H; Tillett, Duane; Streckert, Holger H

    2007-02-12

    Electrochemical sensors composed of a ceramic-metallic (cermet) solid electrolyte are used for the detection of the gaseous sulfur compounds SO2, H2S, and CS2 in a study involving 11 toxic industrial chemical (TIC) compounds. The study examines a sensor array containing four cermet sensors varying in electrode-electrolyte composition, designed to offer selectivity for multiple compounds. The sensors are driven by cyclic voltammetry to produce a current-voltage profile for each analyte. Raw voltammograms are processed by background subtraction of clean air, and the four sensor signals are concatenated to form one vector of points. The high-resolution signal is compressed by wavelet transformation and a probabilistic neural network is used for classification. In this study, training data from one sensor array was used to formulate models which were validated with data from a second sensor array. Of the 11 gases studied, 3 that contained sulfur produced the strongest responses and were successfully analyzed when the remaining compounds were treated as interferents. Analytes were measured from 10 to 200% of their threshold limit value (TLV) according to the 8-h time-weighted average (TWA) exposure limits defined by the National Institute for Occupational Safety and Health (NIOSH). True positive classification rates of 93.3, 96.7, and 76.7% for SO2, H2S, and CS2, respectively, were achieved for prediction of one sensor unit when a second sensor was used for modeling. True positive rates of 83.3, 90.0, and 90.0% for SO2, H2S, and CS2, respectively, were achieved for the second sensor unit when the first sensor unit was used for modeling. Most of the misclassifications were at low concentration levels (such as 10-25% of TLV), in which case the compound was classified as clean air. Between the two sensors, the false positive rates were 2.2% or lower for the three sulfur compounds, 0.9% or lower for the interferents (eight remaining analytes), and 5.8% or lower for clean air. The cermet sensor arrays used in this analysis are rugged, low cost, reusable, and show promise for multiple compound detection at parts-per-million (ppm) levels.

  13. Characterization of difference of Gaussian filters in the detection of mammographic regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Catarious, David M. Jr.; Baydush, Alan H.; Floyd, Carey E. Jr.

    2006-11-15

    In this article, we present a characterization of the effect of difference of Gaussians (DoG) filters in the detection of mammographic regions. DoG filters have been used previously in mammographic mass computer-aided detection (CAD) systems. As DoG filters are constructed from the subtraction of two bivariate Gaussian distributions, they require the specification of three parameters: the size of the filter template and the standard deviations of the constituent Gaussians. The influence of these three parameters in the detection of mammographic masses has not been characterized. In this work, we aim to determine how the parameters affect (1) the physical descriptors of the detected regions, (2) the true and false positive rates, and (3) the classification performance of the individual descriptors. To this end, 30 DoG filters are created from the combination of three template sizes and four values for each of the Gaussians' standard deviations. The filters are used to detect regions in a study database of 181 craniocaudal-view mammograms extracted from the Digital Database for Screening Mammography. To describe the physical characteristics of the identified regions, morphological and textural features are extracted from each of the detected regions. Differences in the mean values of the features caused by altering the DoG parameters are examined through statistical and empirical comparisons. The parameters' effects on the true and false positive rate are determined by examining the mean malignant sensitivities and false positives per image (FPpI). Finally, the effect on the classification performance is described by examining the variation in FPpI at the point where 81% of the malignant masses in the study database are detected. Overall, the findings of the study indicate that increasing the standard deviations of the Gaussians used to construct a DoG filter results in a dramatic decrease in the number of regions identified at the expense of missing a small number of malignancies. The sharp reduction in the number of identified regions allowed the identification of textural differences between large and small mammographic regions. We find that the classification performances of the features that achieve the lowest average FPpI are influenced by all three of the parameters.

  14. The Feminist Movement and Equality in the Federal Workforce: Understanding the Position of Women in USAID’s Foreign Service

    DTIC Science & Technology

    2015-05-21

    as well as personal, fulfillment. However, numerous battles remain before American society reaches true equality between the sexes. The mere...clear that the true intention of feminist theory is to expand options available to both sexes while eliminating gender stratification within society...de Pizan's attack on sexist clerics in the 15th century is "the first time a woman takes up her pen to defend her sex." She was, however, by no

  15. Tumor Metabolism and Blood Flow as Assessed by PET Varies by Tumor Subtype in Locally Advanced Breast Cancer

    PubMed Central

    Specht, Jennifer M.; Kurland, Brenda F.; Montgomery, Susan K.; Dunnwald, Lisa K.; Doot, Robert K.; Gralow, Julie R.; Ellis, Georgina K.; Linden, Hannah M.; Livingston, Robert B.; Allison, Kimberly H.; Schubert, Erin K.; Mankoff, David A.

    2010-01-01

    Purpose Dynamic PET imaging can identify patterns of breast cancer metabolism and perfusion in patients receiving neoadjuvant chemotherapy (NC) that are predictive of response. This analysis examines tumor metabolism and perfusion by tumor subtype. Experimental Design Tumor subtype was defined by immunohistochemistry (IHC) in 71 patients with LABC undergoing NC. Subtype was defined as luminal (ER/PR positive), triple-negative (TN; ER/PR negative, HER2 negative) and HER2 (ER/PR negative, HER2 over-expressing). Metabolic rate (MRFDG) and blood flow (BF) were calculated from PET imaging prior to NC. Pathologic complete response (pCR) to NC was classified as pCR versus other. Results Twenty-five (35%) of 71 patients had TN tumors, 6 (8%) were HER2 and 40 (56%) were luminal. MRFDG for TN tumors was on average 67% greater than for luminal tumors (95% CI 9% – 156%), and average MRFDG/BF ratio was 53% greater in TN compared to luminal tumors (95% CI 9% – 114%) (p < 0.05 for both). Average blood flow levels did not differ by subtype (p = 0.73). Most luminal tumors showed relatively low MRFDG and BF (and did not achieve pCR); high MRFDG was generally matched with high BF in luminal tumors, and predicted pCR. This was not true in TN tumors. Conclusions The relationship between breast tumor metabolism and perfusion differed by subtype. The high MRFDG/BF ratio that predicts poor response to NC was more common in TN tumors. Metabolism and perfusion measures may identify subsets of tumors susceptible and resistant to NC and may help direct targeted therapy. PMID:20460489

  16. GNSS Ephemeris with Graceful Degradation and Measurement Fusion

    NASA Technical Reports Server (NTRS)

    Garrison, James Levi (Inventor); Walker, Michael Allen (Inventor)

    2015-01-01

    A method for providing an extended propagation ephemeris model for a satellite in Earth orbit, the method includes obtaining a satellite's orbital position over a first period of time, applying a least square estimation filter to determine coefficients defining osculating Keplerian orbital elements and harmonic perturbation parameters associated with a coordinate system defining an extended propagation ephemeris model that can be used to estimate the satellite's position during the first period, wherein the osculating Keplerian orbital elements include semi-major axis of the satellite (a), eccentricity of the satellite (e), inclination of the satellite (i), right ascension of ascending node of the satellite (Ω), true anomaly (θ*), and argument of periapsis (ω), applying the least square estimation filter to determine a dominant frequency of the true anomaly, and applying a Fourier transform to determine dominant frequencies of the harmonic perturbation parameters.

  17. Evaluation of the Kodak Surecell Chlamydia test for the laboratory diagnosis of adult inclusion conjunctivitis.

    PubMed

    Tantisira, J G; Kowalski, R P; Gordon, Y J

    1995-07-01

    The Kodak Surecell Chlamydia test, a rapid enzyme immunoassay, has been reported to be highly sensitive (93%) and specific (96%) for detecting chlamydial lipopolysaccharide antigen in conjunctival specimens from infants, but has not been evaluated previously in adult conjunctival specimens. This study was designed to determine the efficacy of the Kodak Surecell Chlamydia test for the laboratory diagnosis of adult inclusion conjunctivitis. Twenty Chlamydia culture-positive conjunctival specimens from adults (true-positives) and 20 true-negative specimens were tested with the Kodak Surecell Chlamydia test. The Kodak Surecell Chlamydia test was 40% (8/20) sensitive, 100% (20/20) specific, and 70% (28/40) efficient. This study indicates that the Kodak Surecell Chlamydia test, though highly specific, is less sensitive in its ability to diagnose chlamydial conjunctivitis in adults than has been reported previously in infants.

  18. The preembryo as potential: a reply to John A. Robertson.

    PubMed

    McCormick, Richard A

    1991-12-01

    ... In conclusion, let me agree with Robertson that reasonable persons may indeed disagree on concrete conclusions touching preembryo freezing, discard, research, and diagnosis. But it is one of the challenges to reasonable people to give reasons for their conclusions. When Robertson notes that preembryo research "has been found acceptable by most bodies that have examined the subject," he leaves unstated the fact that many of these bodies have not given reasons for their conclusions. This is especially true of the Warnock Committee. It is definitely not true of John Robertson. He has attempted to give analytic support for his rather permissive positions. I find this support too fragile for its assigned task, though I hasten to say that this does not mean that only a totally prohibitive position is defensible or is mine. Prima facie still means prima facie.

  19. Determining decision thresholds and evaluating indicators when conservation status is measured as a continuum.

    PubMed

    Connors, B M; Cooper, A B

    2014-12-01

    Categorization of the status of populations, species, and ecosystems underpins most conservation activities. Status is often based on how a system's current indicator value (e.g., change in abundance) relates to some threshold of conservation concern. Receiver operating characteristic (ROC) curves can be used to quantify the statistical reliability of indicators of conservation status and evaluate trade-offs between correct (true positive) and incorrect (false positive) classifications across a range of decision thresholds. However, ROC curves assume a discrete, binary relationship between an indicator and the conservation status it is meant to track, which is a simplification of the more realistic continuum of conservation status, and may limit the applicability of ROC curves in conservation science. We describe a modified ROC curve that treats conservation status as a continuum rather than a discrete state. We explored the influence of this continuum and typical sources of variation in abundance that can lead to classification errors (i.e., random variation and measurement error) on the true and false positive rates corresponding to varying decision thresholds and the reliability of change in abundance as an indicator of conservation status, respectively. We applied our modified ROC approach to an indicator of endangerment in Pacific salmon (Oncorhynchus nerka) (i.e., percent decline in geometric mean abundance) and an indicator of marine ecosystem structure and function (i.e., detritivore biomass). Failure to treat conservation status as a continuum when choosing thresholds for indicators resulted in the misidentification of trade-offs between true and false positive rates and the overestimation of an indicator's reliability. We argue for treating conservation status as a continuum when ROC curves are used to evaluate decision thresholds in indicators for the assessment of conservation status. © 2014 Society for Conservation Biology.
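    As a generic illustration of the threshold trade-off described above (and not the authors' modified ROC, which treats conservation status as a continuum), the sketch below sweeps decision thresholds on a noisy percent-decline indicator and tabulates true and false positive rates against a fixed threshold of conservation concern; all numbers are synthetic.

      import numpy as np

      # Generic ROC-style threshold sweep: the indicator is an observed percent
      # decline in abundance measured with error, and the binary condition is
      # whether the true decline exceeds a threshold of concern. Synthetic data.
      rng = np.random.default_rng(3)
      true_decline = rng.uniform(0, 100, 5000)             # true percent decline
      observed = true_decline + rng.normal(0, 15, 5000)    # indicator with random variation
      of_concern = true_decline >= 30                      # e.g., a 30% decline criterion

      for threshold in (10, 20, 30, 40, 50):               # candidate decision thresholds
          flagged = observed >= threshold
          tpr = (flagged & of_concern).sum() / of_concern.sum()
          fpr = (flagged & ~of_concern).sum() / (~of_concern).sum()
          print(f"threshold {threshold:>2}%: TPR={tpr:.2f}  FPR={fpr:.2f}")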

  20. Economic evaluation of laboratory testing strategies for hospital-associated Clostridium difficile infection.

    PubMed

    Schroeder, Lee F; Robilotti, Elizabeth; Peterson, Lance R; Banaei, Niaz; Dowdy, David W

    2014-02-01

    Clostridium difficile infection (CDI) is the most common cause of infectious diarrhea in health care settings, and for patients presumed to have CDI, their isolation while awaiting laboratory results is costly. Newer rapid tests for CDI may reduce this burden, but the economic consequences of different testing algorithms remain unexplored. We used decision analysis from the hospital perspective to compare multiple CDI testing algorithms for adult inpatients with suspected CDI, assuming patient management according to laboratory results. CDI testing strategies included combinations of on-demand PCR (odPCR), batch PCR, lateral-flow diagnostics, plate-reader enzyme immunoassay, and direct tissue culture cytotoxicity. In the reference scenario, algorithms incorporating rapid testing were cost-effective relative to nonrapid algorithms. For every 10,000 symptomatic adults, relative to a strategy of treating nobody, lateral-flow glutamate dehydrogenase (GDH)/odPCR generated 831 true-positive results and cost $1,600 per additional true-positive case treated. Stand-alone odPCR was more effective and more expensive, identifying 174 additional true-positive cases at $6,900 per additional case treated. All other testing strategies were dominated by (i.e., more costly and less effective than) stand-alone odPCR or odPCR preceded by lateral-flow screening. A cost-benefit analysis (including estimated costs of missed cases) favored stand-alone odPCR in most settings but favored odPCR preceded by lateral-flow testing if a missed CDI case resulted in less than $5,000 of extended hospital stay costs and <2 transmissions, if lateral-flow GDH diagnostic sensitivity was >93%, or if the symptomatic carrier proportion among the toxigenic culture-positive cases was >80%. These results can aid guideline developers and laboratory directors who are considering rapid testing algorithms for diagnosing CDI.
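    The dollars-per-additional-true-positive figures above are incremental cost-effectiveness ratios (ICERs), i.e. the cost difference between two strategies divided by the difference in true-positive cases treated. The sketch below shows only the arithmetic; the strategy totals are hypothetical, constructed to reproduce the per-case figures quoted in the abstract.

      # Incremental cost-effectiveness ratio (ICER):
      # ICER = (cost_B - cost_A) / (true_positives_B - true_positives_A).

      def icer(cost_a, tp_a, cost_b, tp_b):
          return (cost_b - cost_a) / (tp_b - tp_a)

      # Hypothetical totals per 10,000 symptomatic adults, constructed so the
      # resulting ICERs match the $1,600 and $6,900 per-case figures above.
      treat_nobody = (0, 0)                                   # (total cost, TP cases treated)
      gdh_odpcr    = (831 * 1_600, 831)
      odpcr_alone  = (831 * 1_600 + 174 * 6_900, 831 + 174)

      print(icer(*treat_nobody, *gdh_odpcr))   # 1600.0
      print(icer(*gdh_odpcr, *odpcr_alone))    # 6900.0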

  1. Four Years' Experience in the Diagnosis of Very Long-Chain Acyl-CoA Dehydrogenase Deficiency in Infants Detected in Three Spanish Newborn Screening Centers.

    PubMed

    Merinero, B; Alcaide, P; Martín-Hernández, E; Morais, A; García-Silva, M T; Quijada-Fraile, P; Pedrón-Giner, C; Dulin, E; Yahyaoui, R; Egea, J M; Belanger-Quintana, A; Blasco-Alonso, J; Fernandez Ruano, M L; Besga, B; Ferrer-López, I; Leal, F; Ugarte, M; Ruiz-Sala, P; Pérez, B; Pérez-Cerdá, C

    2018-01-01

    Identification of very long-chain acyl-CoA dehydrogenase deficiency is possible in the expanded newborn screening (NBS) due to the increase in tetradecenoylcarnitine (C14:1) and in the C14:1/C2, C14:1/C16, C14:1/C12:1 ratios detected in dried blood spots. Nevertheless, different confirmatory tests must be performed to confirm the final diagnosis. We have reviewed the NBS results and the results of the confirmatory tests (plasma acylcarnitine profiles, molecular findings, and lymphocyte VLCAD activity) for 36 cases detected in three Spanish NBS centers during 4 years, correlating these with the clinical outcome and treatment. Our aim was to distinguish unambiguously true cases from disease carriers in order to obtain useful diagnostic information for clinicians that can be applied in the follow-up of neonates identified by NBS. Increases in C14:1 and in the different ratios, the presence of two pathogenic mutations, and deficient enzyme activity in lymphocytes (<12% of the intra-assay control) identified 12 true-positive cases. These cases were given nutritional therapy and all of them are asymptomatic, except one. Seventeen individuals were considered disease carriers based on the mild increase in plasma C14:1, in conjunction with the presence of only one mutation and/or intermediate residual activity (18-57%). In addition, seven cases were classified as false positives, with normal biochemical parameters and no mutations in the exonic region of ACADVL. All these carriers and the false positive cases remained asymptomatic. The combined evaluation of the acylcarnitine profiles, genetic results, and residual enzyme activities has proven useful to definitively classify individuals with suspected VLCAD deficiency into true-positive cases and carriers, and to decide which cases need treatment.
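    The confirmatory criteria described above map onto a simple decision rule; the sketch below is only a simplified reading of that logic (the thresholds are those quoted in the abstract, the exact branch boundaries are an interpretation), not a clinical tool.

      # Simplified encoding of the confirmatory-test logic described above.
      # Not a clinical decision tool; thresholds are those quoted in the abstract,
      # and the exact branch boundaries are an interpretation.

      def classify_vlcad(c14_1_elevated, n_pathogenic_mutations, residual_activity_pct):
          if c14_1_elevated and n_pathogenic_mutations == 2 and residual_activity_pct < 12:
              return "true positive (treat and follow up)"
          if n_pathogenic_mutations <= 1 and 18 <= residual_activity_pct <= 57:
              return "disease carrier"
          if n_pathogenic_mutations == 0 and residual_activity_pct > 57:
              return "false positive"
          return "indeterminate - further testing needed"

      print(classify_vlcad(True, 2, 5))    # true positive (treat and follow up)
      print(classify_vlcad(True, 1, 35))   # disease carrier
      print(classify_vlcad(False, 0, 90))  # false positive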

  2. Economic Evaluation of Laboratory Testing Strategies for Hospital-Associated Clostridium difficile Infection

    PubMed Central

    Robilotti, Elizabeth; Peterson, Lance R.; Banaei, Niaz; Dowdy, David W.

    2014-01-01

    Clostridium difficile infection (CDI) is the most common cause of infectious diarrhea in health care settings, and for patients presumed to have CDI, their isolation while awaiting laboratory results is costly. Newer rapid tests for CDI may reduce this burden, but the economic consequences of different testing algorithms remain unexplored. We used decision analysis from the hospital perspective to compare multiple CDI testing algorithms for adult inpatients with suspected CDI, assuming patient management according to laboratory results. CDI testing strategies included combinations of on-demand PCR (odPCR), batch PCR, lateral-flow diagnostics, plate-reader enzyme immunoassay, and direct tissue culture cytotoxicity. In the reference scenario, algorithms incorporating rapid testing were cost-effective relative to nonrapid algorithms. For every 10,000 symptomatic adults, relative to a strategy of treating nobody, lateral-flow glutamate dehydrogenase (GDH)/odPCR generated 831 true-positive results and cost $1,600 per additional true-positive case treated. Stand-alone odPCR was more effective and more expensive, identifying 174 additional true-positive cases at $6,900 per additional case treated. All other testing strategies were dominated by (i.e., more costly and less effective than) stand-alone odPCR or odPCR preceded by lateral-flow screening. A cost-benefit analysis (including estimated costs of missed cases) favored stand-alone odPCR in most settings but favored odPCR preceded by lateral-flow testing if a missed CDI case resulted in less than $5,000 of extended hospital stay costs and <2 transmissions, if lateral-flow GDH diagnostic sensitivity was >93%, or if the symptomatic carrier proportion among the toxigenic culture-positive cases was >80%. These results can aid guideline developers and laboratory directors who are considering rapid testing algorithms for diagnosing CDI. PMID:24478478

  3. Prolactin as a Marker of Successful Catheterization during IPSS in Patients with ACTH-Dependent Cushing's Syndrome

    PubMed Central

    Sharma, S. T.; Raff, H.

    2011-01-01

    Context: Anomalous venous drainage can lead to false-negative inferior petrosal sinus sampling (IPSS) results. Baseline inferior petrosal sinus to peripheral (IPS/P) prolactin ratio higher than 1.8 ipsilateral to the highest ACTH ratio has been proposed to verify successful catheterization. Prolactin-normalized ACTH IPS/P ratios may differentiate Cushing's disease (CD) from ectopic ACTH syndrome (EAS). Objective: Our objective was to examine the utility of prolactin measurement during IPSS. Design, Setting, and Participants: We conducted a retrospective analysis of prolactin levels in basal and CRH-stimulated IPSS samples in ACTH-dependent Cushing's syndrome (2007–2010). Results: Twenty-five of 29 patients had a pathologically proven diagnosis (17 CD and eight EAS). IPSS results were partitioned into true positive for CD (n = 16), true negative (n = 7), false negative (n = 1), and false positive (n = 1). Prolactin IPS/P ratio suggested successful IPSS in eight of 11 with abnormal venograms. Baseline prolactin IPS/P ratio was helpful in two patients with abnormal venograms and false-negative (catheterization unsuccessful) or true-negative (catheterization successful) IPSS results; the normalized ratio correctly diagnosed their disease. Normalized ACTH IPS/P ratio was at least 1.3 in all with CD, but prolactin IPS/P ratios were misleadingly low in two. One patient with cyclic EAS had a false-positive IPSS when eucortisolemic (baseline prolactin IPS/P = 1.7; normalized ratio = 5.6). All other EAS patients had normalized ratios no higher than 0.7. Conclusion: Prolactin measurement and evaluation of the venogram can improve diagnostic accuracy when IPSS results suggest EAS but is not necessary with positive IPSS results. Confirmation of hypercortisolemia remains a prerequisite for IPSS. A normalized ratio of 0.7–1.3 was not diagnostic. PMID:22031511
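    The cutoffs discussed above (baseline prolactin IPS/P ratio above 1.8 to verify catheterization; prolactin-normalized ACTH IPS/P ratio of at least 1.3 seen in Cushing's disease, no higher than 0.7 in ectopic ACTH syndrome, and 0.7-1.3 not diagnostic) reduce to a short calculation. The sketch below is illustrative only, with hypothetical hormone values, and is not a diagnostic algorithm.

      # Illustrative calculation of the IPSS ratios discussed above.
      # Thresholds are those quoted in the abstract; hormone values are hypothetical.

      def ipss_ratios(acth_ips, acth_periph, prl_ips, prl_periph):
          acth_ratio = acth_ips / acth_periph       # ACTH IPS/P ratio
          prl_ratio = prl_ips / prl_periph          # prolactin IPS/P ratio
          normalized = acth_ratio / prl_ratio       # prolactin-normalized ACTH IPS/P ratio
          catheterization_ok = prl_ratio > 1.8      # proposed verification criterion
          if normalized >= 1.3:
              interpretation = "consistent with Cushing's disease"
          elif normalized <= 0.7:
              interpretation = "consistent with ectopic ACTH syndrome"
          else:
              interpretation = "not diagnostic (0.7-1.3)"
          return catheterization_ok, round(normalized, 2), interpretation

      print(ipss_ratios(acth_ips=400, acth_periph=50, prl_ips=90, prl_periph=20))
      # (True, 1.78, "consistent with Cushing's disease")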

  4. Validation of SmartRank: A likelihood ratio software for searching national DNA databases with complex DNA profiles.

    PubMed

    Benschop, Corina C G; van de Merwe, Linda; de Jong, Jeroen; Vanvooren, Vanessa; Kempenaers, Morgane; Kees van der Beek, C P; Barni, Filippo; Reyes, Eusebio López; Moulin, Léa; Pene, Laurent; Haned, Hinda; Sijen, Titia

    2017-07-01

    Searching a national DNA database with complex and incomplete profiles usually yields very large numbers of possible matches, presenting many candidate suspects to be further investigated by the forensic scientist and/or police. Current practice in most forensic laboratories consists of ordering these 'hits' based on the number of alleles matching the searched profile. Thus, candidate profiles that share the same number of matching alleles are not differentiated, and because of the lack of other ranking criteria for the candidate list it may be difficult to discern a true match from the false positives, or to notice that all candidates are in fact false positives. SmartRank was developed to put forward only relevant candidates and rank them accordingly. The SmartRank software computes a likelihood ratio (LR) for the searched profile and each profile in the DNA database and ranks database entries above a defined LR threshold according to the calculated LR. In this study, we examined, for mixed DNA profiles of variable complexity, whether the true donors are retrieved, how many false positives fall above an LR threshold, and at what position the true donors are ranked. Using 343 mixed DNA profiles, over 750 SmartRank searches were performed. In addition, the performances of SmartRank and CODIS in DNA database searches were compared, and SmartRank was found to be complementary to CODIS. We also describe the applicable domain of SmartRank and provide guidelines. The SmartRank software is open-source and freely available. Using the best-practice guidelines, SmartRank enables investigative leads to be obtained in criminal cases lacking a suspect. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Detection of the index tumour and tumour volume in prostate cancer using T2-weighted and diffusion-weighted magnetic resonance imaging (MRI) alone.

    PubMed

    Rud, Erik; Klotz, Dagmar; Rennesund, Kristin; Baco, Eduard; Berge, Viktor; Lien, Diep; Svindland, Aud; Lundeby, Eskild; Berg, Rolf E; Eri, Lars M; Eggesbø, Heidi B

    2014-12-01

    To examine the performance of T2-weighted (T2W) and diffusion-weighted (DW) magnetic resonance imaging (MRI) for detecting the index tumour in patients with prostate cancer and to examine the agreement between MRI and histology when assessing tumour volume (TV) and overall tumour burden. The study included 199 consecutive patients with biopsy confirmed prostate cancer randomised to MRI before radical prostatectomy from December 2009 to July 2012. MRI-detected tumours (MRTs) were ranked from 1 to 3 according to decreasing volume and were compared with histologically detected tumours (HTs) ranked from 1 to 3, with HT 1 = index tumour. Whole-mount section histology was used as a reference standard. The TVs of true-positive MRTs (MRTVs 1-3) were compared with the TVs found by histology (HTVs 1-3). All tumours were registered on a 30-sector map and by classifying each sector as positive/negative, the rate of true-positive and -negative sectors was calculated. The detection rate for the HT 1 (index tumour) was 92%; HT 2, 45%; and HT 3, 37%. The MRTV 1-3 vs the HTV 1-3 were 2.8 mL vs 4.0 mL (index tumour, P < 0.001), 1.0 mL vs 0.9 mL (tumour 2, P = 0.413), and 0.6 mL vs 0.5 mL (tumour 3, P = 0.492). The rate of true-positive and -negative sectors was 50% and 88%, κ = 0.39. A combination of T2W and DW MRI detects the index tumour in 92% of cases, although MRI underestimates both TV and tumour burden compared with histology. © 2014 The Authors. BJU International © 2014 BJU International.

  6. Blood Cultures Drawn From Arterial Catheters Are Reliable for the Detection of Bloodstream Infection in Critically Ill Children.

    PubMed

    Berger, Itay; Gil Margolis, Merav; Nahum, Elhanan; Dagan, Ovdi; Levy, Itzhak; Kaplan, Eytan; Shostak, Eran; Shmuelov, Esther; Schiller, Ofer; Kadmon, Gili

    2018-05-01

    Arterial catheters may serve as an additional source for blood cultures in children when peripheral venipuncture is challenging. The aim of the study was to evaluate the accuracy of cultures obtained through indwelling arterial catheters for the diagnosis of bloodstream infections in critically ill pediatric patients. Observational and comparative. General and cardiac ICUs of a tertiary, university-affiliated pediatric medical center. The study group consisted of 138 patients admitted to the general or cardiac PICU in 2014-2015 who met the following criteria: presence of an indwelling arterial catheter and indication for blood culture. Blood was drawn by peripheral venipuncture and through the arterial catheter for each patient and sent for culture (total 276 culture pairs). Two specialists blinded to the blood source evaluated each positive culture to determine if the result represented true bloodstream infection or contamination. The sensitivity, specificity, and positive and negative predictive values of the arterial catheter and peripheral cultures for the diagnosis of bloodstream infection were calculated. Of the 56 positive cultures, 41 (15% of total samples) were considered diagnostic of true bloodstream infection. In the other 15 (5%), the results were attributed to contamination. The rate of false-positive results was higher for arterial catheter than for peripheral venipuncture cultures (4% vs 1.5%) but did not lead to prolonged unnecessary antibiotic treatment. On statistical analysis, arterial catheter blood cultures had high sensitivity (85%) and specificity (95%) for the diagnosis of true bloodstream infection, with comparable performance to peripheral blood cultures. Cultures of arterial catheter-drawn blood are reliable for the detection of bloodstream infection in PICUs.

  7. True HIV seroprevalence in Indian blood donors.

    PubMed

    Choudhury, N; Ayagiri, A; Ray, V L

    2000-03-01

    The National AIDS Control Organization (NACO), the apex body for controlling AIDS in India, projected that HIV seroprevalence would increase from 7/1000 in 1995 to 21.2/1000 in 1997. A high incidence (8.2%) of HIV was observed in blood donors. This study was carried out to determine the true HIV positivity in Indian blood donors. Blood donors from our centre were followed for more than 5 years to determine the true HIV seroprevalence, and our result was compared with similar studies from India. Voluntary and relative blood donors who visited the SGPGIMS, Lucknow, from 1993 to June 1998 were included. They were screened for HIV 1/2 by ELISA kits (WHO approved). First-time HIV-positive samples were preserved frozen for further study (stage-I). The assays were repeated in duplicate and the samples were retested with other kits. If found positive, the sample was labelled as ELISA positive (stage-II). ELISA-positive samples were confirmed by Western Blot (WB) at stage-III. A total of 65 288 donors were included and 834 (12.8/1000) were reactive at stage-I. However, only 1.1/1000 donors were found to be ELISA positive at stage-II, and 0.28/1000 donors were positive by WB at stage-III. The 'seropositivity' rate from the NACO was significantly (P < 0.001) higher than that found in our study. There were five similar Indian studies, and the seropositivity rate varied from 0.72/1000 (using ELISA and WB) to 5.5/1000 (using ELISA alone). The 'seropositivity' rate from the NACO was significantly (P < 0.001) higher than in all these studies. HIV seroprevalence in the present study is lower (P < 0.001) than other Indian figures. The present and other studies confirmed that the projected HIV seroprevalence (82/1000) in Indian blood donors was high. The NACO result was based on one-time ELISA screening reports from zonal blood testing centres, which also receive samples from paid donors donating in commercial blood banks. The HIV prevalence of blood donors (and the national prevalence) needs to be reassessed.

  8. Poor effectiveness of antenatal detection of fetal growth restriction and consequences for obstetric management and neonatal outcomes: a French national study.

    PubMed

    Monier, I; Blondel, B; Ego, A; Kaminiski, M; Goffinet, F; Zeitlin, J

    2015-03-01

    To assess the proportion of small for gestational age (SGA) and normal birthweight infants suspected of fetal growth restriction (FGR) during pregnancy, and to investigate obstetric and neonatal outcomes by suspicion of FGR and SGA status at birth. Population-based study. All French maternity units in 2010. Representative sample of singleton births (n = 14,100). We compared SGA infants with a birthweight of less than the 10th percentile suspected of FGR, defined as mention of FGR in medical charts (true positives), non-SGA infants suspected of FGR (false positives), SGA infants without suspicion of FGR (false negatives) and non-SGA infants without suspicion of FGR (true negatives). Multivariable analyses were adjusted for maternal and neonatal characteristics hypothesised to affect closer surveillance for FGR and our outcomes. Obstetric management (caesarean, provider-initiated preterm and early term delivery) and neonatal outcomes (late fetal death, preterm birth, Apgar score, resuscitation at birth). 21.7% of SGA infants (n = 265) and 2.1% of non-SGA infants (n = 271) were suspected of FGR during pregnancy. Compared with true negatives, provider-initiated preterm deliveries were higher for true and false positives (adjusted risk ratio [aRR], 6.1 [95% CI, 3.8-9.8] and 4.6 [95% CI, 3.2-6.7]), but not for false negatives (aRR, 1.1 [95% CI, 0.6-1.9]). Neonatal outcomes were not better for SGA infants if FGR was suspected. Antenatal suspicion of FGR among SGA infants was low and one-half of infants suspected of FGR were not SGA. The increased risk of provider-initiated delivery observed in non-SGA infants suspected of FGR raises concerns about the iatrogenic consequences of screening. © 2014 The Authors BJOG An International Journal of Obstetrics and Gynaecology published by John Wiley & Sons Ltd on behalf of Royal College of Obstetricians and Gynaecologists.

  9. Automatic detection of patient identification and positioning errors in radiation therapy treatment using 3-dimensional setup images.

    PubMed

    Jani, Shyam S; Low, Daniel A; Lamb, James M

    2015-01-01

    To develop an automated system that detects patient identification and positioning errors between 3-dimensional CT setup images and kilovoltage CT planning images. Planning kilovoltage CT images were collected for head and neck (H&N), pelvis, and spine treatments with corresponding 3-dimensional cone beam CT and megavoltage CT setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. For positioning errors, setup and planning images were misaligned by 1 to 5 cm in the 6 anatomical directions for H&N and pelvis patients. Spinal misalignments were simulated by misaligning to adjacent vertebral bodies. Image pairs were assessed using commonly used image similarity metrics as well as custom-designed metrics. Linear discriminant analysis classification models were trained and tested on the imaging datasets, and misclassification error (MCE), sensitivity, and specificity parameters were estimated using 10-fold cross-validation. For patient identification, our workflow produced MCE estimates of 0.66%, 1.67%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivity and specificity ranged from 97.5% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 95.4% and 97.7%. MCEs for 1-cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. Two-centimeter MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. MCEs for vertebral body misalignments were 4.8% and 3.6% for TomoTherapy and TrueBeam images, respectively. Patient identification and gross misalignment errors can be robustly and automatically detected using 3-dimensional setup images of different energies across 3 commonly treated anatomical sites. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
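
    The classification-and-validation step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the feature matrix of image-similarity metrics and the error labels are synthetic placeholders, and scikit-learn's LDA with 10-fold cross-validation stands in for the models and validation scheme named in the abstract.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import StratifiedKFold, cross_val_predict
      from sklearn.metrics import confusion_matrix

      # Placeholder data: rows are planning/setup image pairs, columns are
      # image-similarity metrics; y = 1 marks a simulated identification or
      # positioning error, y = 0 a correctly matched and aligned pair.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 4))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

      # Linear discriminant analysis evaluated with 10-fold cross-validation.
      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      y_pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=cv)

      # Misclassification error (MCE), sensitivity, and specificity, as reported above.
      tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
      mce = (fp + fn) / len(y)
      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      print(f"MCE={mce:.2%}  sensitivity={sensitivity:.1%}  specificity={specificity:.1%}")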

  10. Neutrosophic segmentation of breast lesions for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Lee, Juhun; Nishikawa, Robert M.; Reiser, Ingrid; Boone, John M.

    2017-03-01

    We proposed a neutrosophic approach for segmenting breast lesions in breast computed tomography (bCT) images. The neutrosophic set (NS) considers the nature and properties of neutrality (or indeterminacy), which is neither true nor false. We considered the image noise as an indeterminate component, while treating the breast lesion and other breast areas as true and false components. We first transformed the image into the NS domain. Each voxel in the image can be described by its membership in the True, Indeterminate, and False sets. The α-mean, β-enhancement, and γ-plateau operations iteratively smooth and contrast-enhance the image to reduce the noise level of the true set. Once the true image no longer changed, we applied an existing algorithm for bCT images, RGI segmentation, to the resulting image to segment the breast lesions. We compared the segmentation performance of the proposed method (named NS-RGI) to that of the regular RGI segmentation. We used a total of 122 breast lesions (44 benign, 78 malignant) from 123 non-contrasted bCT cases. We measured the segmentation performances of the NS-RGI and the RGI using the DICE coefficient. The average DICE value of the NS-RGI was 0.82 (STD: 0.09), while that of the RGI was 0.8 (STD: 0.12). The difference between the two DICE values was statistically significant (paired t test, p-value = 0.0007). We conducted a subsequent feature analysis on the resulting segmentations. The classifier performance for the NS-RGI (AUC = 0.8) improved over that of the RGI (AUC = 0.69, p-value = 0.006).
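
    The DICE coefficient used above to score the NS-RGI and RGI segmentations is straightforward to compute; a minimal sketch on toy binary masks (the actual lesion masks are not reproduced here) is:

      import numpy as np

      def dice(seg: np.ndarray, ref: np.ndarray) -> float:
          """DICE = 2*|A intersect B| / (|A| + |B|); defined as 1.0 for two empty masks."""
          seg = seg.astype(bool)
          ref = ref.astype(bool)
          denom = seg.sum() + ref.sum()
          return 1.0 if denom == 0 else 2.0 * np.logical_and(seg, ref).sum() / denom

      # Toy example: two overlapping spheres in a 64x64x64 volume stand in for a
      # computed segmentation and its reference outline.
      z, y, x = np.ogrid[:64, :64, :64]
      mask_a = (x - 30) ** 2 + (y - 30) ** 2 + (z - 30) ** 2 < 10 ** 2
      mask_b = (x - 33) ** 2 + (y - 30) ** 2 + (z - 30) ** 2 < 10 ** 2
      print(f"DICE = {dice(mask_a, mask_b):.3f}")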

  11. Hirsch index and truth survival in clinical research.

    PubMed

    Poynard, Thierry; Thabut, Dominique; Munteanu, Mona; Ratziu, Vlad; Benhamou, Yves; Deckmyn, Olivier

    2010-08-06

    Factors associated with the survival of truth of clinical conclusions in the medical literature are unknown. We hypothesized that publications with a first author having a higher Hirsch index value (h-I), which quantifies and predicts an individual's scientific research output, should have a longer half-life. 474 original articles concerning cirrhosis or hepatitis published from 1945 to 1999 were selected. The survivals of the main conclusions were updated in 2009. The truth survival was assessed by time-dependent methods (Kaplan-Meier method and Cox regression). A conclusion was considered to be true, obsolete or false when three or more observers out of the six stated it to be so. 284 out of 474 conclusions (60%) were still considered true, 90 (19%) were considered obsolete and 100 (21%) false. The median h-I was 24 (range 1-85). Authors with true conclusions had significantly higher h-I (median=28) than those with obsolete (h-I=19; P=0.002) or false conclusions (h-I=19; P=0.01). The factors associated (P<0.0001) with h-I were: scientific life (h-I=33 for >30 years vs. 16 for <30 years), methodological quality score (h-I=36 for high vs. 20 for low scores), and positive predictive value combining power, ratio of true to not-true relationships and bias (h-I=33 for high vs. 20 for low values). In multivariate analysis, the risk ratio of h-I was 1.003 (95% CI, 0.994-1.011), and was not significant (P=0.56). In a subgroup restricted to 111 articles with a negative conclusion, we observed a significant independent prognostic value of h-I (risk ratio=1.033; 95% CI, 1.008-1.059; P=0.009). Using an extrapolation of h-I to the time of article publication, there was a significant and independent prognostic value of baseline h-I (risk ratio=0.027; P=0.0001). The present study failed to clearly demonstrate that the h-index of authors was a prognostic factor for truth survival. However, the h-index was associated with true conclusions, methodological quality of trials and positive predictive values.

  12. Single‐fraction spine SBRT end‐to‐end testing on TomoTherapy, Vero, TrueBeam, and CyberKnife treatment platforms using a novel anthropomorphic phantom

    PubMed Central

    Kaufman, Isaac; Powell, Rachel; Pandya, Shalini; Somnay, Archana; Bossenberger, Todd; Ramirez, Ezequiel; Reynolds, Robert; Solberg, Timothy; Burmeister, Jay

    2015-01-01

    Spine SBRT involves the delivery of very high doses of radiation to targets adjacent to the spinal cord and is most commonly delivered in a single fraction. Highly conformal planning and accurate delivery of such plans are imperative for successful treatment without catastrophic adverse effects. End-to-end testing is an important practice for evaluating the entire treatment process from simulation through treatment delivery. We performed end-to-end testing for a set of representative spine targets planned and delivered using four different treatment planning systems (TPSs) and delivery systems to evaluate the various capabilities of each. An anthropomorphic E2E SBRT phantom was simulated and treated on each system to evaluate agreement between measured and calculated doses. The phantom accepts ion chambers in the thoracic region and radiochromic film in the lumbar region. Four representative targets were developed within each region (thoracic and lumbar) to represent different presentations of spinal metastases and were planned according to RTOG 0631 constraints. Plans were created using the TomoTherapy TPS for delivery using the Hi·Art system, the iPlan TPS for delivery using the Vero system, the Eclipse TPS for delivery using the TrueBeam system in both flattened and flattening filter free (FFF) modes, and the MultiPlan TPS for delivery using the CyberKnife system. Delivered doses were measured using a 0.007 cm3 ion chamber in the thoracic region and EBT3 GAFCHROMIC film in the lumbar region. Films were scanned and analyzed using an Epson Expression 10000XL flatbed scanner in conjunction with FilmQAPro2013. All treatment platforms met all dose constraints required by RTOG 0631. Ion chamber measurements in the thoracic targets yielded an overall average difference of 1.5%. Specifically, measurements agreed with the TPS to within 2.2%, 3.2%, 1.4%, 3.1%, and 3.0% for all three measurable cases on TomoTherapy, Vero, TrueBeam (FFF), TrueBeam (flattened), and CyberKnife, respectively. Film measurements for the lumbar targets resulted in average global gamma index passing rates of 100% at 3%/3 mm, 96.9% at 2%/2 mm, and 61.8% at 1%/1 mm, with a 10% minimum threshold for all plans on all platforms. Local gamma analysis was also performed with similar results. While gamma passing rates were consistently accurate across all platforms through 2%/2 mm, treatment beam-on delivery times varied greatly between platforms, with TrueBeam FFF being the shortest, averaging 4.4 min, TrueBeam using flattened beam at 9.5 min, TomoTherapy at 30.5 min, Vero at 19 min, and CyberKnife at 46.0 min. In spite of the complexity of the representative targets and their proximity to the spinal cord, all treatment platforms were able to create plans meeting all RTOG 0631 dose constraints and produced exceptional agreement between calculated and measured doses. However, there were differences in the plan characteristics and significant differences in the beam-on delivery time between platforms. Thus, clinical judgment is required for each particular case to determine the most appropriate treatment planning/delivery platform. PACS number: 87.53.Ly PMID:25679169

  13. Porous membrane utilization in plant nutrient delivery

    NASA Technical Reports Server (NTRS)

    Dreschel, T. W.; Hinkle, C. R.; Prince, R. P.; Knott, W. M., III

    1987-01-01

    A spacecraft hydroponic plant growth unit of tubular configuration, employing a microporous membrane as a capillary interface between plant roots and a nutrient solution, is presented. All three of the experimental trials undertaken successfully grew wheat from seed to harvest. Attention is given to the mass/seed, number of seeds/head, ratio of seed dry mass to total plant dry mass, production of tillers, and mass of seed/plant. Dry matter production is found to be reduced with increasing suction pressure; this is true for both average seed and average total dry matter/plant. This may be due to a reduction in water and nutrient availability through the microporous membrane.

  14. Design of Interrogation Protocols for Radiation Dose Measurements Using Optically-Stimulated Luminescent Dosimeters.

    PubMed

    Abraham, Sara A; Kearfott, Kimberlee J; Jawad, Ali H; Boria, Andrew J; Buth, Tobias J; Dawson, Alexander S; Eng, Sheldon C; Frank, Samuel J; Green, Crystal A; Jacobs, Mitchell L; Liu, Kevin; Miklos, Joseph A; Nguyen, Hien; Rafique, Muhammad; Rucinski, Blake D; Smith, Travis; Tan, Yanliang

    2017-03-01

    Optically-stimulated luminescent dosimeters are capable of being interrogated multiple times post-irradiation. Each interrogation removes a fraction of the signal stored within the optically-stimulated luminescent dosimeter. This signal loss must be corrected to avoid systematic errors when estimating the average signal from a series of optically-stimulated luminescent dosimeter interrogations, and a minimum number of consecutive readings is required to determine an average signal that lies within a desired accuracy of the true signal at a desired statistical confidence. This paper establishes a technical basis for determining the required number of readings for a particular application of these dosimeters when using certain OSL dosimetry systems.
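
    As an illustration of the correction problem described above (the paper's exact protocol is not reproduced here), suppose each readout depletes a constant, known fraction of the remaining stored signal; then each reading can be corrected back to the pre-readout signal and readings accumulated until the running mean reaches a target precision. The depletion fraction and noise level below are hypothetical.

      import numpy as np

      DEPLETION = 0.0005      # assumed fractional signal loss per readout (hypothetical)
      TARGET_REL_SEM = 0.01   # stop once the standard error of the mean is within 1%

      def average_osl_signal(readings):
          """Return the depletion-corrected mean signal and the number of readings used."""
          corrected = []
          for i, counts in enumerate(readings):
              # the i-th readout (i = 0 for the first) sees signal * (1 - DEPLETION)**i
              corrected.append(counts / (1.0 - DEPLETION) ** i)
              if len(corrected) >= 2:
                  sem = np.std(corrected, ddof=1) / np.sqrt(len(corrected))
                  if sem / np.mean(corrected) <= TARGET_REL_SEM:
                      break
          return np.mean(corrected), len(corrected)

      # Simulated readings: true signal of 1e5 counts, 0.5% readout noise, 50 readouts.
      rng = np.random.default_rng(1)
      raw = [1e5 * (1 - DEPLETION) ** i * rng.normal(1.0, 0.005) for i in range(50)]
      mean, n = average_osl_signal(raw)
      print(f"estimated signal {mean:.0f} counts from {n} readings")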

  15. Plausibility assessment of a 2-state self-paced mental task-based BCI using the no-control performance analysis.

    PubMed

    Faradji, Farhad; Ward, Rabab K; Birch, Gary E

    2009-06-15

    The feasibility of having a self-paced brain-computer interface (BCI) based on mental tasks is investigated. The EEG signals of four subjects performing five mental tasks each are used in the design of a 2-state self-paced BCI. The output of the BCI should only be activated when the subject performs a specific mental task and should remain inactive otherwise. For each subject and each task, the feature coefficient and the classifier that yield the best performance are selected, using the autoregressive coefficients as the features. The classifier with a zero false positive rate and the highest true positive rate is selected as the best classifier. The classifiers tested include: linear discriminant analysis, quadratic discriminant analysis, Mahalanobis discriminant analysis, support vector machine, and radial basis function neural network. The results show that: (1) some classifiers obtained the desired zero false positive rate; (2) the linear discriminant analysis classifier does not yield acceptable performance; (3) the quadratic discriminant analysis classifier outperforms the Mahalanobis discriminant analysis classifier and performs almost as well as the radial basis function neural network; and (4) the support vector machine classifier has the highest true positive rates but unfortunately has nonzero false positive rates in most cases.
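
    The selection rule described above reduces to keeping only the classifiers with a zero false-positive rate and, among those, choosing the one with the highest true-positive rate. A minimal sketch with hypothetical per-classifier rates:

      # Hypothetical (classifier -> rates) results for one subject/task pair.
      candidates = {
          "LDA": {"tpr": 0.31, "fpr": 0.04},
          "QDA": {"tpr": 0.52, "fpr": 0.00},
          "Mahalanobis": {"tpr": 0.47, "fpr": 0.00},
          "SVM": {"tpr": 0.61, "fpr": 0.02},
          "RBF-NN": {"tpr": 0.54, "fpr": 0.00},
      }

      # Keep only zero-false-positive classifiers, then take the highest true-positive rate.
      zero_fp = {name: m for name, m in candidates.items() if m["fpr"] == 0.0}
      best = max(zero_fp, key=lambda name: zero_fp[name]["tpr"]) if zero_fp else None
      print("selected classifier:", best)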

  16. Verification of ARMA identification for modelling temporal correlation of GPS observations using the toolbox ARMASA

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoguang; Mayer, Michael; Heck, Bernhard

    2010-05-01

    One essential deficiency of the stochastic model used in many GNSS (Global Navigation Satellite Systems) software products is the neglect of the temporal correlation of GNSS observations. By analysing appropriately detrended time series of observation residuals resulting from GPS (Global Positioning System) data processing, the temporal correlation behaviour of GPS observations can be sufficiently described by means of so-called autoregressive moving average (ARMA) processes. Using the toolbox ARMASA, which is available free of charge from MATLAB® Central (an open exchange platform for the MATLAB® and SIMULINK® user community), a well-fitting time series model can be identified automatically in three steps. Firstly, AR, MA, and ARMA models are computed up to some user-specified maximum order. Subsequently, for each model type, the best-fitting model is selected using the combined information criterion (for AR processes) or the generalised information criterion (for MA and ARMA processes). The final model identification among the best-fitting AR, MA, and ARMA models is performed based on the minimum prediction error, which characterises the discrepancies between the given data and the fitted model. The ARMA coefficients are computed using Burg's maximum entropy algorithm (for AR processes), Durbin's first (for MA processes) and second (for ARMA processes) methods, respectively. This paper verifies the performance of the automated ARMA identification using the toolbox ARMASA. For this purpose, a representative database is generated by means of ARMA simulation, varying sample size, correlation level, and model complexity. The model error, defined as a transform of the prediction error, is used as a measure of the deviation between the true and the estimated model. The results of the study show that the recognition rates of underlying true processes increase with increasing sample sizes and decrease with rising model complexity. For large sample sizes, the true underlying processes can be correctly recognised for nearly 80% of the analysed data sets. Additionally, the model errors of first-order AR and MA processes converge clearly more rapidly to the corresponding asymptotic values than those of high-order ARMA processes.
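
    ARMASA itself is a MATLAB toolbox; as a rough Python analogue of the three-step workflow (an illustrative stand-in, not the toolbox: the Akaike information criterion replaces the combined/generalised criteria and the prediction-error comparison), candidate AR, MA, and ARMA models can be fitted up to a maximum order and the minimum-criterion model selected:

      import numpy as np
      from statsmodels.tsa.arima_process import arma_generate_sample
      from statsmodels.tsa.arima.model import ARIMA

      # Simulate a detrended residual series from a known ARMA(1,1) process.
      rng = np.random.default_rng(0)
      y = arma_generate_sample(ar=[1, -0.6], ma=[1, 0.3], nsample=1000,
                               distrvs=rng.standard_normal)

      # Step 1: candidate AR(p), MA(q), and ARMA(p, q) orders up to a maximum order.
      MAX_ORDER = 2
      candidates = ([(p, 0) for p in range(1, MAX_ORDER + 1)] +
                    [(0, q) for q in range(1, MAX_ORDER + 1)] +
                    [(p, q) for p in range(1, MAX_ORDER + 1)
                            for q in range(1, MAX_ORDER + 1)])

      # Steps 2-3: fit each candidate and keep the model with the lowest criterion value.
      fits = {}
      for p, q in candidates:
          try:
              fits[(p, q)] = ARIMA(y, order=(p, 0, q)).fit()
          except Exception:
              continue  # skip candidates that fail to converge

      best_order = min(fits, key=lambda order: fits[order].aic)
      print("selected (p, q):", best_order)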

  17. Sensitivity and Specificity of Histoplasma Antigen Detection by Enzyme Immunoassay.

    PubMed

    Cunningham, Lauren; Cook, Audrey; Hanzlicek, Andrew; Harkin, Kenneth; Wheat, Joseph; Goad, Carla; Kirsch, Emily

    2015-01-01

    The objective of this study was to evaluate the sensitivity and specificity of an antigen enzyme immunoassay (EIA) on urine samples for the diagnosis of histoplasmosis in dogs. This retrospective medical records review included canine cases with urine samples submitted for Histoplasma EIA antigen assay between 2007 and 2011 from three veterinary institutions. Cases for which urine samples were submitted for Histoplasma antigen testing were reviewed and compared to the gold standard of finding Histoplasma organisms or an alternative diagnosis on cytology or histopathology. Sensitivity, specificity, negative predictive value, positive predictive value, and the kappa coefficient and associated confidence interval were calculated for the EIA-based Histoplasma antigen assay. Sixty cases met the inclusion criteria. Seventeen cases were considered true positives based on identification of the organism, and 41 cases were considered true negatives with an alternative definitive diagnosis. Two cases were considered false negatives, and there were no false positives. Sensitivity was 89.47% and the negative predictive value was 95.35%. Specificity and the positive predictive value were both 100%. The kappa coefficient was 0.9207 (95% confidence interval, 0.8131-1). The Histoplasma antigen EIA test demonstrated high specificity and sensitivity for the diagnosis of histoplasmosis in dogs.
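
    The reported figures follow directly from the 2x2 counts given above (17 true positives, 41 true negatives, 2 false negatives, 0 false positives); a short check:

      # Diagnostic accuracy of the urine Histoplasma antigen EIA from the abstract's counts.
      tp, tn, fn, fp = 17, 41, 2, 0
      n = tp + tn + fn + fp

      sensitivity = tp / (tp + fn)   # 17/19 = 89.47%
      specificity = tn / (tn + fp)   # 41/41 = 100%
      ppv = tp / (tp + fp)           # 17/17 = 100%
      npv = tn / (tn + fn)           # 41/43 = 95.35%

      # Cohen's kappa for agreement between the EIA and the cytology/histopathology standard.
      p_observed = (tp + tn) / n
      p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
      kappa = (p_observed - p_expected) / (1 - p_expected)   # 0.9207

      print(f"sens={sensitivity:.2%}  spec={specificity:.2%}  "
            f"PPV={ppv:.2%}  NPV={npv:.2%}  kappa={kappa:.4f}")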

  18. Confirmed severe maternal morbidity is associated with high rate of preterm delivery.

    PubMed

    Kilpatrick, Sarah J; Abreo, Anisha; Gould, Jeffrey; Greene, Naomi; Main, Elliot K

    2016-08-01

    Because severe maternal morbidity (SMM) is increasing in the United States, affecting up to 50,000 women per year, there was a recent call to review all mothers with SMM to better understand their morbidity and improve outcomes. Administrative screening methods for SMM have recently been shown to have low positive predictive value for true SMM after chart review. To ultimately reduce maternal morbidity and mortality, we must better understand the risk factors and preventability issues associated with true SMM so that interventions can be designed to improve care. Our objective was to determine risk factors associated with true SMM identified from California delivery admissions, including the relationship between SMM and preterm delivery. In this retrospective cohort study, SMM cases were screened for using International Classification of Diseases, Ninth Revision codes for severe illness and procedures, prolonged postpartum length of stay, intensive care unit admission, and transfusion from all deliveries in 16 hospitals from July 2012 through June 2013. Charts of screen-positive cases were reviewed and true SMM was diagnosed based on expert panel agreement. Underlying disease diagnosis was determined. Women with true-positive SMM were compared to SMM-negative women for the following variables: maternal age, ethnicity, gestational age at delivery, prior cesarean delivery, and multiple gestation. In all, 491 women had true SMM and 66,977 women did not, for a 0.7% rate of true SMM. Compared to SMM-negative women, SMM cases were significantly more likely to be age >35 years (33.6 vs 23.8%; P < .0001), be African American (14.1 vs 7.9%; P < .0001), have had a multiple gestation (9.7 vs 2.1%; P < .0001), and, for the multiparous women, have had a prior cesarean delivery (58 vs 30.2%; P < .0001). Preterm delivery was significantly more common in SMM women compared to SMM-negative women (41 vs 8%; P < .0001), including delivery <32 weeks (18 vs 2%; P < .0001). The most common underlying disease was obstetric hemorrhage (42%) followed by hypertensive disorders (20%) and placental hemorrhage (14%). Only 1.6% of women with SMM had cardiovascular disease as the underlying disease category. An extremely high proportion of women with severe morbidity (42.5%) delivered preterm, with 17.8% delivering at <32 weeks, which underscores the importance of access to appropriate-level care for mothers with SMM and their newborns. Further, the extremely high rate of preterm delivery (75%) in women with placental hemorrhage in combination with their 63% prior cesarean delivery rate highlights another risk of prior cesarean delivery: subsequent preterm delivery. These data provide a reminder that a cesarean delivery could be a contributing factor not only to hemorrhage-related SMM, but also to increased subsequent preterm delivery, providing more reason to continue national efforts to safely reduce initial cesarean deliveries. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Detection of occult, undisplaced hip fractures with a dual-energy CT algorithm targeted to detection of bone marrow edema.

    PubMed

    Reddy, T; McLaughlin, P D; Mallinson, P I; Reagan, A C; Munk, P L; Nicolaou, S; Ouellette, H A

    2015-02-01

    The purpose of this study is to describe our initial clinical experience with dual-energy computed tomography (DECT) virtual non-calcium (VNC) images for the detection of bone marrow (BM) edema in patients with suspected hip fracture following trauma. Twenty-five patients who presented to the emergency department at a level 1 trauma center between January 1, 2011 and January 1, 2013 with clinical suspicion of hip fracture and normal radiographs were included. All CT scans were performed on a dual-source, dual-energy CT system. VNC images were generated using prototype software and were compared to regular bone reconstructions by two musculoskeletal radiologists in consensus. Radiological and/or clinical diagnosis of fracture at 30-day follow-up was used as the reference standard. Twenty-one patients were found to have DECT-VNC signs of bone marrow edema. Eighteen of these 21 patients were true positive and three were false positive. A concordant fracture was clearly seen on bone reconstruction images in 15 of the 18 true positive cases. In three cases, DECT-VNC was positive for bone marrow edema where bone reconstruction CT images were negative. Four patients demonstrated no DECT-VNC signs of bone marrow edema: two cases were true negative and two were false negative. When compared with the gold standard of hip fracture determined at retrospective follow-up, the sensitivity of DECT-VNC images of the hip was 90 %, specificity was 40 %, positive predictive value was 86 %, and negative predictive value was 50 %. Our initial experience would suggest that DECT-VNC is highly sensitive but poorly specific in the diagnosis of hip fractures in patients with normal radiographs. The value of DECT-VNC primarily lies in its ability to help detect fractures which may be subtle or undetectable on bone reconstruction CT images.

  20. Assessment of disease extent on contrast-enhanced MRI in breast cancer detected at digital breast tomosynthesis versus digital mammography alone.

    PubMed

    Chudgar, A V; Conant, E F; Weinstein, S P; Keller, B M; Synnestvedt, M; Yamartino, P; McDonald, E S

    2017-07-01

    To compare the utility of breast magnetic resonance imaging (MRI) in determining the extent of disease in patients with newly diagnosed breast cancer detected on combination digital breast tomosynthesis (DBT) versus digital screening mammography (DM). Review of 24,563 DBT-screened patients and 10,751 DM-screened patients was performed. Two hundred and thirty-five DBT patients underwent subsequent MRI examinations; 82 to determine extent of disease after newly diagnosed breast cancer. Eighty-three DM patients underwent subsequent MRI examinations; 23 to determine extent of disease. MRI examinations performed to assess disease extent were considered true positives if additional disease was discovered in the contralateral breast or >2 cm away from the index malignancy. Differences in cancer subtypes and MRI outcomes between the DM and DBT cohorts were compared using chi-squared tests and post-hoc Bonferroni-adjusted tests for equal proportions. No differences in cancer subtype findings were observed between the two cohorts; however, MRI outcomes were found to differ between the DBT and DM cohorts (p=0.024). Specifically, the DBT cohort had significantly (p=0.013) fewer true-positive findings (7/82, 8.5%) than did the DM cohort (7/23; 30%), whereas the false-positive rate was similar between the cohorts (not statistically significant). When stratifying by breast density, this difference in true-positive rates was primarily observed when evaluating women with non-dense breasts (p=0.001). In both the DM- and DBT-screened populations with new cancer diagnoses, MRI is able to detect additional cancer; however, in those patients who have DBT screen-detected cancers the positive impact of preoperative MRI is diminished, particularly in those women with non-dense breasts. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  1. Estimating the alcohol-breast cancer association: a comparison of diet diaries, FFQs and combined measurements.

    PubMed

    Keogh, Ruth H; Park, Jin Young; White, Ian R; Lentjes, Marleen A H; McTaggart, Alison; Bhaniani, Amit; Cairns, Benjamin J; Key, Timothy J; Greenwood, Darren C; Burley, Victoria J; Cade, Janet E; Dahm, Christina C; Pot, Gerda K; Stephen, Alison M; Masset, Gabriel; Brunner, Eric J; Khaw, Kay-Tee

    2012-07-01

    The alcohol-breast cancer association has been established using alcohol intake measurements from Food Frequency Questionnaires (FFQ). For some nutrients, diet diary measurements are more highly correlated with true intake than FFQ measurements, but it is unknown whether this is true for alcohol. A case-control study (656 breast cancer cases, 1905 matched controls) was sampled from four cohorts in the UK Dietary Cohort Consortium. Alcohol intake was measured prospectively using FFQs and 4- or 7-day diet diaries. Both relied on fixed portion sizes allocated to given beverage types, but those used to obtain FFQ measurements were lower. FFQ measurements were therefore on average lower, and to enable a fair comparison the FFQ was "calibrated" using diet diary portion sizes. Diet diaries gave more zero measurements, demonstrating the challenge of distinguishing never- from episodic-consumers using short-term instruments. To use all information, two combined measurements were calculated. The first is an average of the two measurements with special treatment of zeros. The second is the expected true intake given both measurements, calculated using a measurement error model. After confounder adjustment the odds ratio (OR) per 10 g/day of alcohol intake was 1.05 (95 % CI 0.98, 1.13) using diet diaries, and 1.13 (1.02, 1.24) using FFQs. The calibrated FFQ measurement and combined measurements 1 and 2 gave ORs of 1.10 (1.03, 1.18), 1.09 (1.01, 1.18), and 1.09 (0.99, 1.20), respectively. The association was modified by HRT use, being stronger among users versus non-users. In summary, using an alcohol measurement from a diet diary at one time point gave attenuated associations compared with the FFQ.

  2. A Monte-Carlo simulation analysis for evaluating the severity distribution functions (SDFs) calibration methodology and determining the minimum sample-size requirements.

    PubMed

    Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique

    2017-01-01

    Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task and demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities. As such, SDF models have recently been introduced for freeway and ramps in HSM addendum. However, since these functions or models are fitted and validated using data from a few selected number of states, they are required to be calibrated to the local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor. However, the proposed methodology to calibrate SDFs was never validated through research. Furthermore, there are no concrete guidelines to select a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. It was indicated that as the value of the true calibration factor deviates further away from '1', more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average of the coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of crash severities that are used for the calibration process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Adapting detection sensitivity based on evidence of irregular sinus arrhythmia to improve atrial fibrillation detection in insertable cardiac monitors.

    PubMed

    Pürerfellner, Helmut; Sanders, Prashanthan; Sarkar, Shantanu; Reisfeld, Erin; Reiland, Jerry; Koehler, Jodi; Pokushalov, Evgeny; Urban, Luboš; Dekker, Lukas R C

    2017-10-03

    Intermittent change in P-wave discernibility during periods of ectopy and sinus arrhythmia is a cause of inappropriate atrial fibrillation (AF) detection in insertable cardiac monitors (ICMs). To address this, we developed and validated an enhanced AF detection algorithm. Atrial fibrillation detection in the Reveal LINQ ICM uses patterns of incoherence in RR intervals and absence of P-wave evidence over a 2-min period. The enhanced algorithm includes P-wave evidence during RR irregularity as evidence of sinus arrhythmia or ectopy to adaptively optimize sensitivity for AF detection. The algorithm was developed and validated using Holter data from the XPECT and LINQ Usability studies, which collected surface electrocardiogram (ECG) and continuous ICM ECG over a 24-48 h period. The algorithm detections were compared with Holter annotations, performed by multiple reviewers, to compute episode and duration detection performance. The validation dataset comprised 3187 h of valid Holter and LINQ recordings from 138 patients, with true AF in 37 patients, yielding 108 true AF episodes ≥2 min and 449 h of AF. The enhanced algorithm reduced inappropriately detected episodes by 49% and duration by 66% with <1% loss in true episodes or duration. The algorithm correctly identified 98.9% of total AF duration and 99.8% of total sinus or non-AF rhythm duration. The algorithm detected 97.2% (99.7% per-patient average) of all AF episodes ≥2 min, and 84.9% (95.3% per-patient average) of detected episodes involved AF. An enhancement that adapts sensitivity for AF detection reduced inappropriately detected episodes and duration with minimal reduction in sensitivity. © The Author 2017. Published by Oxford University Press on behalf of the European Society of Cardiology

  4. Natural vertical transmission of dengue viruses by Aedes aegypti in Bolivia

    PubMed Central

    Le Goff, G.; Revollo, J.; Guerra, M.; Cruz, M.; Barja Simon, Z.; Roca, Y.; Vargas Florès, J.; Hervé, J.P.

    2011-01-01

    The natural transmission of dengue virus from an infected female mosquito to its progeny, namely vertical transmission, was investigated in wild-caught Aedes aegypti during a major outbreak in the town of Santa Cruz de la Sierra, Bolivia. Mosquitoes were collected at the preimaginal stages (eggs, larvae and pupae), then reared to the adult stage for viral detection using molecular methods. Dengue virus serotypes 1 and 3 were found to be co-circulating, with significantly higher prevalence in male than in female mosquitoes. Of the 97 pools of Ae. aegypti (n = 635 male and 748 female specimens) screened, 14 pools, collected in February-May 2007, were found positive for dengue virus infection: five DEN-1 and nine DEN-3. The average true infection rate (TIR) and minimum infection rate (MIR) were 1.08% and 1.01%, respectively. These observations suggest that vertical transmission of dengue virus may be detected in vectors at the peak of an outbreak as well as several months before an epidemic occurs in the human population. PMID:21894270
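
    The minimum infection rate quoted above follows directly from the reported counts (14 positive pools out of 635 + 748 screened mosquitoes); the true infection rate is a maximum-likelihood estimate that additionally needs the individual pool sizes, which are not listed in the abstract.

      # Minimum infection rate (MIR): positive pools per 100 mosquitoes tested.
      positive_pools = 14
      mosquitoes_tested = 635 + 748        # male + female Ae. aegypti screened in 97 pools

      mir = 100.0 * positive_pools / mosquitoes_tested
      print(f"MIR = {mir:.2f}%")           # ~1.01%, matching the reported value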

  5. Flux density calibration in diffuse optical tomographic systems.

    PubMed

    Biswas, Samir Kumar; Rajan, Kanhirodan; Vasu, Ram M

    2013-02-01

    The solution of the forward equation that models the transport of light through a highly scattering tissue material in diffuse optical tomography (DOT) using the finite element method gives the flux density (Φ) at the nodal points of the mesh. The experimentally measured flux (U_measured) on the boundary over a finite surface area in a DOT system has to be corrected to account for the system transfer functions (R) of the various building blocks of the measurement system. We present two methods to compensate for the perturbations caused by R and estimate the true flux density (Φ) from the calibrated measurement U_measured^cal. In the first approach, the measurement data from a homogeneous phantom (U_measured^homo) are used to calibrate the measurement system. The second scheme estimates the homogeneous phantom measurement using only the measurement from a heterogeneous phantom, thereby eliminating the need for a homogeneous phantom. This is done by statistically averaging the data (U_measured^hetero) and redistributing them to the corresponding detector positions. Experiments carried out on tissue-mimicking phantoms with single and multiple inhomogeneities, a human hand, and a pork tissue phantom demonstrate the robustness of the approach.

  6. Using high-resolution variant frequencies to empower clinical genome interpretation.

    PubMed

    Whiffin, Nicola; Minikel, Eric; Walsh, Roddy; O'Donnell-Luria, Anne H; Karczewski, Konrad; Ing, Alexander Y; Barton, Paul J R; Funke, Birgit; Cook, Stuart A; MacArthur, Daniel; Ware, James S

    2017-10-01

    Purpose: Whole-exome and whole-genome sequencing have transformed the discovery of genetic variants that cause human Mendelian disease, but discriminating pathogenic from benign variants remains a daunting challenge. Rarity is recognized as a necessary, although not sufficient, criterion for pathogenicity, but frequency cutoffs used in Mendelian analysis are often arbitrary and overly lenient. Recent very large reference datasets, such as the Exome Aggregation Consortium (ExAC), provide an unprecedented opportunity to obtain robust frequency estimates even for very rare variants. Methods: We present a statistical framework for the frequency-based filtering of candidate disease-causing variants, accounting for disease prevalence, genetic and allelic heterogeneity, inheritance mode, penetrance, and sampling variance in reference datasets. Results: Using the example of cardiomyopathy, we show that our approach reduces by two-thirds the number of candidate variants under consideration in the average exome, without removing true pathogenic variants (false-positive rate <0.001). Conclusion: We outline a statistically robust framework for assessing whether a variant is "too common" to be causative for a Mendelian disorder of interest. We present precomputed allele frequency cutoffs for all variants in the ExAC dataset.
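
    A heavily simplified sketch of a frequency-based filter of this kind is shown below. The threshold formula (a dominant-inheritance bound built from prevalence, genetic and allelic heterogeneity, and penetrance) and the confidence-bound handling of sampling variance are illustrative assumptions, not the paper's published framework, and all numeric inputs are hypothetical.

      from scipy.stats import beta

      def max_credible_af(prevalence, gene_contribution, allelic_contribution, penetrance):
          """Assumed dominant-model ceiling on the population allele frequency of any
          single causal variant: carriers <= prevalence * gene_contribution / penetrance,
          one variant explains at most allelic_contribution of them, and each carrier
          carries the variant on one of two chromosomes."""
          return prevalence * gene_contribution * allelic_contribution / (2.0 * penetrance)

      def af_lower_bound(allele_count, allele_number, conf=0.95):
          """Clopper-Pearson lower confidence bound on the true allele frequency, so a
          variant is only filtered when it is confidently 'too common' in the reference set."""
          if allele_count == 0:
              return 0.0
          return beta.ppf(1 - conf, allele_count, allele_number - allele_count + 1)

      # Hypothetical inputs: disease prevalence 1/500, 10% of cases from this gene,
      # no single variant causing more than 2% of the gene's cases, 50% penetrance.
      threshold = max_credible_af(1 / 500, 0.1, 0.02, 0.5)
      observed_lb = af_lower_bound(allele_count=12, allele_number=120_000)
      print(f"threshold={threshold:.1e}  reference AF lower bound={observed_lb:.1e}")
      print("filter variant" if observed_lb > threshold else "retain variant")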

  7. Developmental phenotypic plasticity helps bridge stochastic weather events associated with climate change.

    PubMed

    Burggren, Warren

    2018-05-10

    The slow, inexorable rise in annual average global temperatures and acidification of the oceans are often advanced as consequences of global change. However, many environmental changes, especially those involving weather (as opposed to climate), are often stochastic, variable and extreme, particularly in temperate terrestrial or freshwater habitats. Moreover, few studies of animal and plant phenotypic plasticity employ realistic (i.e. short-term, stochastic) environmental change in their protocols. Here, I posit that the frequently abrupt environmental changes (days, weeks, months) accompanying much longer-term general climate change (e.g. global warming over decades or centuries) require consideration of the true nature of environmental change (as opposed to statistical means) coupled with an expansion of focus to consider developmental phenotypic plasticity. Such plasticity can be in multiple forms - obligatory/facultative, beneficial/deleterious - depending upon the degree and rate of environmental variability at specific points in organismal development. Essentially, adult phenotypic plasticity, as important as it is, will be irrelevant if developing offspring lack sufficient plasticity to create modified phenotypes necessary for survival. © 2018. Published by The Company of Biologists Ltd.

  8. Accidents caused by hazardous trees on California forest recreation sites

    Treesearch

    Lee A. Paine

    1966-01-01

    From 1959 to early 1966, tree failures caused an average of more than two injuries or deaths per year on forest recreation sites in California. Annual property damage is estimated at $25,100. Conifers accounted for three of every four accidents reported; pines and true firs were involved in 6 of every 10 incidents involving property damage, and in 9 of every 10...

  9. I Know This to Be True...: Perceptions of Teachers in One Rural Elementary School Regarding Writing Scores

    ERIC Educational Resources Information Center

    Brashears, Kathy

    2006-01-01

    This study is set in an elementary school located in a rural, Appalachian area and considers the reasons that teachers attribute to student success on state writing assessments as well as to what reasons they attribute their students' lack of success in moving beyond an average ranking. In considering these reasons, patterns emerge in the data…

  10. Comparison of T2, T1rho, and diffusion metrics in assessment of liver fibrosis in rats.

    PubMed

    Zhang, Hui; Yang, Qihua; Yu, Taihui; Chen, Xiaodong; Huang, Jingwen; Tan, Cui; Liang, Biling; Guo, Hua

    2017-03-01

    To evaluate the value of T2, T1rho, and diffusion metrics in assessment of liver fibrosis in rats. Liver fibrosis in a rat model (n = 72) was induced by injection of carbon tetrachloride (CCl4) at 3T. T2, T1rho, and diffusion parameters (apparent diffusion coefficient (ADC), Dtrue) via spin echo (SE) diffusion-weighted imaging (DWI) and stimulated echo acquisition mode (STEAM) DWI with three diffusion times (DT: 80, 106, 186 msec) were obtained in surviving rats with hepatic fibrosis (n = 52) and controls (n = 8). Liver fibrosis stage (F0-F6) was identified based on pathological results using the traditional liver fibrosis staging method for rodents. Nonparametric statistical methods and receiver operating characteristic (ROC) curve analysis were employed to determine the diagnostic accuracy. Mean T2, T1rho, ADC, and Dtrue with DT = 186 msec correlated with the severity of fibrosis with r = 0.73, 0.83, -0.83, and -0.85 (all P < 0.001), respectively. The average areas under the ROC curve at different stages for T1rho and diffusion parameters (DT = 186 msec) were larger than those of T2 and SE DWI (0.92, 0.92, and 0.92 vs. 0.86, 0.82, and 0.83). The corresponding average sensitivity and specificity for T1rho and diffusion parameters with a long DT were larger (89.35 and 88.90, 88.36 and 89.97, 90.16 and 87.13) than T2 and SE DWI (90.28 and 79.93, 85.30 and 77.64, 78.21 and 82.41). The performances of T1rho and Dtrue (DT = 186 msec) were comparable (average AUC: 0.92 and 0.92). Among the evaluated sequences, T1rho and STEAM DWI with a long DT may serve as superior imaging biomarkers for assessing liver fibrosis and monitoring disease severity. J. Magn. Reson. Imaging 2017;45:741-750. © 2016 International Society for Magnetic Resonance in Medicine.

  11. Does head posture have a significant effect on the hyoid bone position and sternocleidomastoid electromyographic activity in young adults?

    PubMed

    Valenzuela, Saúl; Miralles, Rodolfo; Ravera, María José; Zúñiga, Claudia; Santander, Hugo; Ferrer, Marcelo; Nakouzi, Jorge

    2005-07-01

    The aim of this study was to evaluate the associations between head posture (head extension, normal head posture, and head flexion) and anteroposterior head position, hyoid bone position, and the sternocleidomastoid integrated electromyographic (IEMG) activity in a sample of young adults. The study included 50 individuals with natural dentition and bilateral molar support. A lateral craniocervical radiograph was taken for each subject and a cephalometric analysis was performed. Head posture was measured by means of the craniovertebral angle formed by the MacGregor plane and the odontoid plane. According to the value of this angle, the sample was divided into the following three groups: head extension (less than 95 degrees); normal head posture (between 95 degrees and 106 degrees); and head flexion (more than 106 degrees). The following cephalometric measurements were taken to compare the three groups: anteroposterior head position (true vertical plane/pterygoid distance), anteroposterior hyoid bone position (true vertical plane-Ha distance), vertical hyoid bone position (H-H' distance in the hyoid triangle), and CO-C2 distance. In the three groups, IEMG recordings at rest and during swallowing of saliva and maximal voluntary clenching were performed by placing bipolar surface electrodes on the right and left sternocleidomastoid muscles. In addition, the condition with/without craniomandibular dysfunction (CMD) in each group was also assessed. Head posture showed no significant association with anteroposterior head position, anteroposterior hyoid bone position, vertical hyoid bone position, or sternocleidomastoid IEMG activity. There was no association to head posture with/without the condition of CMD. Clinical relevance of the results is discussed.

  12. Occupational exposure assessment of magnetic fields generated by induction heating equipment-the role of spatial averaging.

    PubMed

    Kos, Bor; Valič, Blaž; Kotnik, Tadej; Gajšek, Peter

    2012-10-07

    Induction heating equipment is a source of strong and nonhomogeneous magnetic fields, which can exceed occupational reference levels. We investigated a case of an induction tempering tunnel furnace. Measurements of the emitted magnetic flux density (B) were performed during its operation and used to validate a numerical model of the furnace. This model was used to compute the values of B and the induced in situ electric field (E) for 15 different body positions relative to the source. For each body position, the computed B values were used to determine their maximum and average values, using six spatial averaging schemes (9-285 averaging points) and two averaging algorithms (arithmetic mean and quadratic mean). Maximum and average B values were compared to the ICNIRP reference level, and E values to the ICNIRP basic restriction. Our results show that in nonhomogeneous fields, the maximum B is an overly conservative predictor of overexposure, as it yields many false positives. The average B yielded fewer false positives, but as the number of averaging points increased, false negatives emerged. The most reliable averaging schemes were obtained for averaging over the torso with quadratic averaging, with no false negatives even for the maximum number of averaging points investigated.
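
    The contrast between the maximum and the spatially averaged exposure metrics discussed above can be illustrated with a small numeric sketch; the sample values and reference level below are hypothetical placeholders, not the measured furnace data.

      import numpy as np

      # Hypothetical B-field samples (microtesla) at points distributed over the body,
      # strongly nonhomogeneous as near an induction furnace, and a placeholder
      # occupational reference level.
      b_samples_uT = np.array([250.0, 80.0, 45.0, 30.0, 20.0, 14.0, 10.0, 8.0, 6.0])
      reference_level_uT = 100.0

      b_max = b_samples_uT.max()
      b_arithmetic = b_samples_uT.mean()
      b_quadratic = np.sqrt(np.mean(b_samples_uT ** 2))   # root-mean-square average

      for name, value in [("maximum", b_max),
                          ("arithmetic mean", b_arithmetic),
                          ("quadratic mean", b_quadratic)]:
          status = "exceeds" if value > reference_level_uT else "within"
          print(f"{name:15s} {value:6.1f} uT -> {status} reference level")

    With these placeholder numbers the spatial maximum exceeds the reference level while both averages stay below it, mirroring the false positives that a maximum-based check produces in nonhomogeneous fields.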

  13. A validation of 11 body-condition indices in a giant snake species that exhibits positive allometry.

    PubMed

    Falk, Bryan G; Snow, Ray W; Reed, Robert N

    2017-01-01

    Body condition is a gauge of the energy stores of an animal, and though it has important implications for fitness, survival, competition, and disease, it is difficult to measure directly. Instead, body condition is frequently estimated as a body condition index (BCI) using length and mass measurements. A desirable BCI should accurately reflect true body condition and be unbiased with respect to size (i.e., mean BCI estimates should not change across different length or mass ranges), and choosing the most-appropriate BCI is not straightforward. We evaluated 11 different BCIs in 248 Burmese pythons (Python bivittatus), organisms that, like other snakes, exhibit simple body plans well characterized by length and mass. We found that the length-mass relationship in Burmese pythons is positively allometric, where mass increases rapidly with respect to length, and this allowed us to explore the effects of allometry on BCI verification. We employed three alternative measures of 'true' body condition: percent fat, scaled fat, and residual fat. The latter two measures mostly accommodated allometry in true body condition, but percent fat did not. Our inferences of the best-performing BCIs depended heavily on our measure of true body condition, with most BCIs falling into one of two groups. The first group contained most BCIs based on ratios, and these were associated with percent fat and body length (i.e., were biased). The second group contained the scaled mass index and most of the BCIs based on linear regressions, and these were associated with both scaled and residual fat but not body length (i.e., were unbiased). Our results show that potential differences in measures of true body condition should be explored in BCI verification studies, particularly in organisms undergoing allometric growth. Furthermore, the caveats of each BCI and similarities to other BCIs are important to consider when determining which BCI is appropriate for any particular taxon.

  14. Analysis of the sex ratio of reported gonorrhoea incidence in Shenzhen, China

    PubMed Central

    Xiong, Mingzhou; Lan, Lina; Feng, Tiejian; Zhao, Guanglu; Wang, Feng; Hong, Fuchang; Wu, Xiaobing; Zhang, Chunlai; Wen, Lizhang; Liu, Aizhong; Best, John McCulloch; Tang, Weiming

    2016-01-01

    Objective: To assess the clinical process of gonorrhoea diagnosis and reporting in China, and to determine the difference in sex ratio between the reported incidence based on reporting data and the true diagnosis rate based on reference tests for gonorrhoea. Setting: A total of 26 dermatology and sexually transmitted disease (STD) departments, 34 obstetrics-gynaecology clinics and 28 urology outpatient clinics selected from 34 hospitals in Shenzhen were regarded as our study sites. Participants: A total of 2754 participants were recruited in this study, and 2534 participants completed the questionnaire survey and provided genital tract secretion specimens. There were 1106 male and 1428 female participants. Eligible participants were patients who presented for outpatient STD care at the selected clinics for the first time in October 2012, were at least 18 years old, and were able to give informed consent. Outcome measures: Untested rate, true-positive rate, false-negative rate and unreported rate of gonorrhoea, as well as the reported gonorrhoea incidence sex ratio and true diagnosis sex ratio, were calculated and used to describe the results. Results: 2534 participants were enrolled in the study. The untested rate of gonorrhoea among females was significantly higher than that among males (female 88.1%, male 68.3%, p=0.001). The male-to-female sex ratios of untested rate, true-positive rate, false-negative rate and unreported rate were 1:1.3, 1.2:1, 1:1.6 and 1:1.4, respectively. The reported incidence sex ratio of newly diagnosed gonorrhoea was 19.8:1 (male vs female: 87/1106 vs 5/1420), while the true diagnosis sex ratio was 2.5:1 (male vs female: 161/1106 vs 84/1420). These data indicate that the sex ratio of reported gonorrhoea incidence has been overestimated by a factor of 7.9 (19.8/2.5). Conclusions: We found the current reported gonorrhoea incidence and sex ratios to be inaccurate due to underestimations of gonorrhoea incidence, especially among women. PMID:26975933

  15. Beyond Point Charges: Dynamic Polarization from Neural Net Predicted Multipole Moments.

    PubMed

    Darley, Michael G; Handley, Chris M; Popelier, Paul L A

    2008-09-09

    Intramolecular polarization is the change to the electron density of a given atom upon variation in the positions of the neighboring atoms. We express the electron density in terms of multipole moments. Using glycine and N-methylacetamide (NMA) as pilot systems, we show that neural networks can capture the change in electron density due to polarization. After training, modestly sized neural networks successfully predict the atomic multipole moments from the nuclear positions of all atoms in the molecule. Accurate electrostatic energies between two atoms can then be obtained via a multipole expansion, inclusive of polarization effects. As a result, polarization is successfully modeled at short range and without an explicit polarizability tensor. This approach puts charge transfer and multipolar polarization on a common footing. The polarization procedure is formulated within the context of quantum chemical topology (QCT). Nonbonded atom-atom interactions in glycine cover an energy range of 948 kJ mol(-1), with an average energy difference between true and predicted energy of 0.2 kJ mol(-1), the largest difference being just under 1 kJ mol(-1). Very similar energy differences are found for NMA, which spans a range of 281 kJ mol(-1). The current proof-of-concept enables the construction of a new protein force field that incorporates electron density fragments that dynamically respond to their fluctuating environment.
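
    A minimal sketch of the general workflow, under strong simplifying assumptions: a small feed-forward network (scikit-learn's MLPRegressor, assumed available) is trained on purely synthetic data to map nuclear coordinates to per-atom charges (the lowest-order multipole moment), and the predicted charges are then used in the leading charge-charge term of the electrostatic energy. This is not the authors' QCT multipole machinery; it only illustrates the regression-then-electrostatics idea.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Toy data set: "nuclear positions" of a 10-atom molecule (flattened xyz)
# mapped to per-atom monopole moments (charges). Purely synthetic stand-in
# for the QCT multipole moments used in the paper.
n_samples, n_atoms = 2000, 10
X = rng.normal(size=(n_samples, 3 * n_atoms))
W = rng.normal(size=(3 * n_atoms, n_atoms))
q = np.tanh(X @ W) * 0.3            # synthetic atomic charges (arbitrary units)

X_tr, X_te, q_tr, q_te = train_test_split(X, q, test_size=0.2, random_state=0)

# A modestly sized feed-forward network (size chosen arbitrarily here).
net = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=2000, random_state=0)
net.fit(X_tr, q_tr)
q_pred = net.predict(X_te)

# Lowest-order (charge-charge) electrostatic energy between atoms 0 and 1,
# in arbitrary units, using true vs predicted charges.
r01 = 3.0   # fixed interatomic distance (arbitrary)
e_true = q_te[:, 0] * q_te[:, 1] / r01
e_pred = q_pred[:, 0] * q_pred[:, 1] / r01
print("mean absolute energy error:", np.abs(e_true - e_pred).mean())
```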

  16. Generation of fluoroscopic 3D images with a respiratory motion model based on an external surrogate signal

    NASA Astrophysics Data System (ADS)

    Hurwitz, Martina; Williams, Christopher L.; Mishra, Pankaj; Rottmann, Joerg; Dhou, Salam; Wagar, Matthew; Mannarino, Edward G.; Mak, Raymond H.; Lewis, John H.

    2015-01-01

    Respiratory motion during radiotherapy can cause uncertainties in definition of the target volume and in estimation of the dose delivered to the target and healthy tissue. In this paper, we generate volumetric images of the internal patient anatomy during treatment using only the motion of a surrogate signal. Pre-treatment four-dimensional CT imaging is used to create a patient-specific model correlating internal respiratory motion with the trajectory of an external surrogate placed on the chest. The performance of this model is assessed with digital and physical phantoms reproducing measured irregular patient breathing patterns. Ten patient breathing patterns are incorporated in a digital phantom. For each patient breathing pattern, the model is used to generate images over the course of thirty seconds. The tumor position predicted by the model is compared to ground truth information from the digital phantom. Over the ten patient breathing patterns, the average absolute error in the tumor centroid position predicted by the motion model is 1.4 mm. The corresponding error for one patient breathing pattern implemented in an anthropomorphic physical phantom was 0.6 mm. The global voxel intensity error was used to compare the full image to the ground truth and demonstrates good agreement between predicted and true images. The model also generates accurate predictions for breathing patterns with irregular phases or amplitudes.
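
    A toy version of the surrogate-based motion model, with invented numbers: a linear correspondence between external surrogate amplitude and tumor superior-inferior position is fitted from a handful of 4DCT phases and then applied to 30 s of simulated irregular breathing, and the mean absolute centroid error is reported. The model in the paper generates full volumetric images; this sketch covers only the surrogate-to-centroid step.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pre-treatment 4DCT: surrogate amplitude and tumor SI position for 10 phases.
phase = np.linspace(0, 2 * np.pi, 10, endpoint=False)
surrogate_4dct = np.cos(phase)                 # external marker (a.u.)
tumor_4dct = 8.0 * np.cos(phase) + 2.0         # tumor SI position (mm)

# Patient-specific model: linear fit of tumor position on surrogate amplitude.
slope, intercept = np.polyfit(surrogate_4dct, tumor_4dct, 1)

# "Treatment": 30 s of irregular breathing sampled at 10 Hz.
t = np.arange(0, 30, 0.1)
amp = 1.0 + 0.2 * np.sin(0.05 * 2 * np.pi * t)          # drifting amplitude
surrogate_tx = amp * np.cos(2 * np.pi * t / 4.0)
tumor_truth = 8.0 * amp * np.cos(2 * np.pi * t / 4.0) + 2.0 + rng.normal(0, 0.5, t.size)

tumor_pred = intercept + slope * surrogate_tx
print("mean absolute centroid error: %.2f mm" % np.abs(tumor_pred - tumor_truth).mean())
```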

  17. A hybrid NIRS-EEG system for self-paced brain computer interface with online motor imagery.

    PubMed

    Koo, Bonkon; Lee, Hwan-Gon; Nam, Yunjun; Kang, Hyohyeong; Koh, Chin Su; Shin, Hyung-Cheul; Choi, Seungjin

    2015-04-15

    For a self-paced motor imagery based brain-computer interface (BCI), the system should be able to recognize the occurrence of a motor imagery, as well as the type of the motor imagery. However, because of the difficulty of detecting the occurrence of a motor imagery, general motor imagery based BCI studies have been focusing on the cued motor imagery paradigm. In this paper, we present a novel hybrid BCI system that uses near infrared spectroscopy (NIRS) and electroencephalography (EEG) systems together to achieve online self-paced motor imagery based BCI. We designed a unique sensor frame that records NIRS and EEG simultaneously for the realization of our system. Based on this hybrid system, we proposed a novel analysis method that detects the occurrence of a motor imagery with the NIRS system, and classifies its type with the EEG system. An online experiment demonstrated that our hybrid system had a true positive rate of about 88% and a false positive rate of 7%, with an average response time of 10.36 s. As far as we know, there has been no report exploring a hemodynamic brain switch for self-paced motor imagery based BCI with a hybrid EEG and NIRS system. From our experimental results, our hybrid system showed enough reliability for use in a practical self-paced motor imagery based BCI. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Recent human evolution has shaped geographical differences in susceptibility to disease

    PubMed Central

    2011-01-01

    Background Searching for associations between genetic variants and complex diseases has been a very active area of research for over two decades. More than 51,000 potential associations have been studied and published, a figure that keeps increasing, especially with the recent explosion of array-based Genome-Wide Association Studies. Even if the number of true associations described so far is high, many of the putative risk variants detected have failed to be consistently replicated and are widely considered false positives. Here, we focus on the world-wide patterns of replicability of published association studies. Results We report three main findings. First, contrary to previous results, genes associated with complex diseases present lower degrees of genetic differentiation among human populations than average genome-wide levels. Second, also contrary to previous results, the differences in replicability of disease-associated loci between Europeans and East Asians are highly correlated with genetic differentiation between these populations. Finally, highly replicated genes present increased levels of high-frequency derived alleles in European and Asian populations when compared to African populations. Conclusions Our findings highlight the heterogeneous nature of the genetic etiology of complex disease, confirm the importance of the recent evolutionary history of our species in current patterns of disease susceptibility and could cast doubt on the status as false positives of some associations that have failed to replicate across populations. PMID:21261943

  19. A true branchial fistula in the context of branchiootic syndrome: challenges of diagnosis and management.

    PubMed

    Jovic, Thomas H; Saldanha, Francesca; Kuo, Rachel; Ahmad, Tariq

    2014-09-01

    A branchial fistula communicating both internally and externally (a 'true' branchial fistula) is rare, and may arise in the context of autosomal dominant conditions such as branchiootic syndrome and branchiootorenal syndrome. We discuss the case of a true branchial fistula, which recurred after initial surgical excision, in a patient with branchiootic syndrome. The residual tract was dissected in a second operation through stepladder neck incisions and removed in toto via an intraoral approach. No renal abnormalities were detected on investigation with ultrasound. Incomplete excision of a branchial sinus is likely to cause recurrence; however, intraoperative visualisation of the tract can sometimes prove challenging. A combined intraoral and external approach aids delineation and tract definition when there is a true branchial fistula and can therefore facilitate a complete excision. Suspicion of an hereditary aetiology should be raised in patients with bilateral or preauricular features, or a positive family history, which may then prompt additional renal and genetic investigation. Copyright © 2014 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  20. A template-finding algorithm and a comprehensive benchmark for homology modeling of proteins

    PubMed Central

    Vallat, Brinda Kizhakke; Pillardy, Jaroslaw; Elber, Ron

    2010-01-01

    The first step in homology modeling is to identify a template protein for the target sequence. The template structure is used in later phases of the calculation to construct an atomically detailed model for the target. We have built from the Protein Data Bank a large-scale learning set that includes tens of millions of pair matches that can be either a true template or a false one. Discriminatory learning (learning from positive and negative examples) is employed to train a decision tree. Each branch of the tree is a mathematical programming model. The decision tree is tested on an independent set from PDB entries and on the sequences of CASP7. It provides significant enrichment of true templates (between 50 and 100 percent) when compared to PSI-BLAST. The model is further verified by building atomically detailed structures for each of the tentative true templates with MODELLER. The probability that a true match does not yield an acceptable structural model (within 6Å RMSD from the native structure) decays linearly as a function of the TM structural-alignment score. PMID:18300226

  1. Costs of examinations performed in a hospital laboratory in Chile.

    PubMed

    Andrade, Germán Lobos; Palma, Carolina Salas

    2018-01-01

    To determine the total average costs related to laboratory examinations performed in a hospital laboratory in Chile. Retrospective study with data from July 2014 to June 2015. 92 examinations classified into ten groups were selected according to the analysis methodology. The costs were estimated as the sum of direct and indirect laboratory costs and indirect institutional factors. The average values obtained for the costs according to examination group (in USD) were: 1.79 (clinical chemistry), 10.21 (immunoassay techniques), 13.27 (coagulation), 26.06 (high-performance liquid chromatography), 21.2 (immunological), 3.85 (gases and electrolytes), 156.48 (cytogenetic), 1.38 (urine), 4.02 (automated hematological), 4.93 (manual hematological). The value, or service fee, returned to public institutions that perform laboratory services does not adequately reflect the true total average production costs of examinations.

  2. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
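
    A numerical illustration of the equivalent-ensemble idea, not the authors' formal variance tests: a long synthetic record is segmented into equal sample records, ensemble averages are formed across the segments, and their time invariance and their agreement with a single-record time average are inspected.

```python
import numpy as np

rng = np.random.default_rng(3)

# One long record of a (stationary) random process, e.g. broadband noise.
n_records, record_len = 64, 1024
x = rng.normal(size=n_records * record_len)

# Equivalent ensemble: segment the time history into equal sample records.
ensemble = x.reshape(n_records, record_len)

# Ensemble averages at each instant within the record.
ens_mean = ensemble.mean(axis=0)
ens_msq = (ensemble ** 2).mean(axis=0)

# Weak stationarity check: ensemble averages should not drift with time.
# (The authors use formal variance tests; here we simply compare halves.)
half = record_len // 2
print("mean, 1st vs 2nd half   :", ens_mean[:half].mean(), ens_mean[half:].mean())
print("mean square, halves     :", ens_msq[:half].mean(), ens_msq[half:].mean())

# Heuristic ergodicity check: time average of one record vs ensemble average.
print("time average (record 0) :", ensemble[0].mean(), " ensemble average:", ens_mean.mean())
```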

  3. Semantic processes leading to true and false memory formation in schizophrenia.

    PubMed

    Paz-Alonso, Pedro M; Ghetti, Simona; Ramsay, Ian; Solomon, Marjorie; Yoon, Jong; Carter, Cameron S; Ragland, J Daniel

    2013-07-01

    Encoding semantic relationships between items on word lists (semantic processing) enhances true memories, but also increases memory distortions. Episodic memory impairments in schizophrenia (SZ) are strongly driven by failures to process semantic relations, but the exact nature of these relational semantic processing deficits is not well understood. Here, we used a false memory paradigm to investigate the impact of implicit and explicit semantic processing manipulations on episodic memory in SZ. Thirty SZ and 30 demographically matched healthy controls (HC) studied Deese/Roediger-McDermott (DRM) lists of semantically associated words. Half of the lists had strong implicit semantic associations and the remainder had low strength associations. Similarly, half of the lists were presented under "standard" instructions and the other half under explicit "relational processing" instructions. After study, participants performed recall and old/new recognition tests composed of targets, critical lures, and unrelated lures. HC exhibited higher true memories and better discriminability between true and false memory compared to SZ. High, versus low, associative strength increased false memory rates in both groups. However, explicit "relational processing" instructions positively improved true memory rates only in HC. Finally, true and false memory rates were associated with severity of disorganized and negative symptoms in SZ. These results suggest that reduced processing of semantic relationships during encoding in SZ may stem from an inability to implement explicit relational processing strategies rather than a fundamental deficit in the implicit activation and retrieval of word meanings from patients' semantic lexicon. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Lay denial of knowledge for justified true beliefs.

    PubMed

    Nagel, Jennifer; Juan, Valerie San; Mar, Raymond A

    2013-12-01

    Intuitively, there is a difference between knowledge and mere belief. Contemporary philosophical work on the nature of this difference has focused on scenarios known as "Gettier cases." Designed as counterexamples to the classical theory that knowledge is justified true belief, these cases feature agents who arrive at true beliefs in ways which seem reasonable or justified, while nevertheless seeming to lack knowledge. Prior empirical investigation of these cases has raised questions about whether lay people generally share philosophers' intuitions about these cases, or whether lay intuitions vary depending on individual factors (e.g. ethnicity) or factors related to specific types of Gettier cases (e.g. cases that include apparent evidence). We report an experiment on lay attributions of knowledge and justification for a wide range of Gettier Cases and for a related class of controversial cases known as Skeptical Pressure cases, which are also thought by philosophers to elicit intuitive denials of knowledge. Although participants rated true beliefs in Gettier and Skeptical Pressure cases as being justified, they were significantly less likely to attribute knowledge for these cases than for matched True Belief cases. This pattern of response was consistent across different variations of Gettier cases and did not vary by ethnicity or gender, although attributions of justification were found to be positively related to measures of empathy. These findings therefore suggest that across demographic groups, laypeople share similar epistemic concepts with philosophers, recognizing a difference between knowledge and justified true belief. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. MultiMiTar: a novel multi objective optimization based miRNA-target prediction method.

    PubMed

    Mitra, Ramkrishna; Bandyopadhyay, Sanghamitra

    2011-01-01

    Machine learning based miRNA-target prediction algorithms often fail to obtain a balanced prediction accuracy in terms of both sensitivity and specificity due to the lack of gold-standard negative examples, miRNA-targeting site context-specific relevant features and an efficient feature selection process. Moreover, all the sequence, structure and machine learning based algorithms are unable to distribute the true positive predictions preferentially at the top of the ranked list; hence the algorithms become unreliable for biologists. In addition, these algorithms fail to obtain a considerable combination of precision and recall for the target transcripts that are translationally repressed at protein level. In this article, we introduce an efficient miRNA-target prediction system MultiMiTar, a Support Vector Machine (SVM) based classifier integrated with a multiobjective metaheuristic based feature selection technique. The robust performance of the proposed method is mainly the result of using high quality negative examples and selection of biologically relevant miRNA-targeting site context specific features. The features are selected by using a novel feature selection technique, AMOSA-SVM, that integrates the multi objective optimization technique Archived Multi-Objective Simulated Annealing (AMOSA) and SVM. MultiMiTar is found to achieve a much higher Matthews correlation coefficient (MCC) of 0.583 and average class-wise accuracy (ACA) of 0.8 compared to the other target prediction methods for a completely independent test data set. The obtained MCC and ACA values of these algorithms range from -0.269 to 0.155 and 0.321 to 0.582, respectively. Moreover, it shows a more balanced result in terms of precision and sensitivity (recall) for the translationally repressed data set as compared to all the other existing methods. An important aspect is that the true positive predictions are distributed preferentially at the top of the ranked list, which makes MultiMiTar reliable for biologists. MultiMiTar is now available as an online tool at www.isical.ac.in/~bioinfo_miu/multimitar.htm. MultiMiTar software can be downloaded from www.isical.ac.in/~bioinfo_miu/multimitar-download.htm.

  6. SU-E-J-69: Evaluation of the Lens Dose On the Cone Beam IGRT Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palomo-Llinares, R; Gimeno-Olmos, J; Carmona Meseguer, V

    Purpose: With the establishment of IGRT as a standard technique, the extra dose that is given to the patients should be taken into account. Furthermore, there has been a recent decrease of the dose threshold for the lens, which was reduced to 0.5 Gy (ICRP ref 4825-3093-1464 on 21st April, 2011). The purpose of this work was to evaluate the extra dose that the lens receives due to the Cone-Beam CT (CBCT) location systems in Head-and-Neck treatments. Methods: The On-Board Imaging (OBI) v 1.5 systems of two Varian accelerators, one Clinac iX and one True Beam, were used to obtain the dose that this OBI version gives to the lens in Head-and-Neck location treatments. All CBCT scans were acquired with the Standard Dose Head protocol (100 kVp, 80 mA, 8 ms and 200 degrees of rotation). The measurements were taken with thermoluminescent (TLD) EXTRAD (Harshaw) dosimeters placed in an anthropomorphic phantom over the eye and under 3 mm of bolus material to mimic the lens position. The center of the head was placed at the isocenter. To reduce TLD energy dependence, they were calibrated at the beam quality used. Results: The average lens dose from the OBI v 1.5 systems of the Clinac iX and the True Beam is 0.071 and 0.076 cGy/CBCT, respectively. Conclusions: The extra absorbed dose that the eye lenses receive from one CBCT acquisition with the studied protocol is far below the new ICRP-recommended threshold for the lens. However, the additive effect of several CBCT acquisitions over the whole treatment should be taken into account.

  7. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
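
    A small Monte Carlo sketch of what coverage and directional bias of an interval estimate mean, using the ordinary Fisher-z confidence interval for a Pearson correlation on small samples; this stands in for, and is far simpler than, the RML/AGLS/Bayesian CFA interval estimators compared in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def fisher_z_ci(r, n, z_crit=1.959964):
    """Ordinary Fisher-z confidence interval for a Pearson correlation."""
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    return np.tanh(z - z_crit * se), np.tanh(z + z_crit * se)

true_rho, n, n_sim = 0.5, 30, 20000
cov = np.array([[1.0, true_rho], [true_rho, 1.0]])

hits = above = below = 0
for _ in range(n_sim):
    x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    r = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
    lo, hi = fisher_z_ci(r, n)
    if lo <= true_rho <= hi:
        hits += 1
    elif lo > true_rho:          # whole interval above the true value
        above += 1
    else:                        # whole interval below the true value
        below += 1

print("coverage:", hits / n_sim, " misses above:", above, " misses below:", below)
```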

  8. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  9. How (un) ethical are you?

    PubMed

    Banaji, Mahzarin R; Bazerman, Max H; Chugh, Dolly

    2003-12-01

    Answer true or false: "I am an ethical manager." If you answered "true," here's an Uncomfortable fact: You're probably wrong. Most of us believe we can objectively size up a job candidate or a venture deal and reach a fair and rational conclusion that's in our, and our organization's, best interests. But more than two decades of psychological research indicates that most of us harbor unconscious biases that are often at odds with our consciously held beliefs. The flawed judgments arising from these biases are ethically problematic and undermine managers' fundamental work--to recruit and retain superior talent, boost individual and team performance, and collaborate effectively with partners. This article explores four related sources of unintentional unethical decision making. If you're surprised that a female colleague has poor people skills, you are displaying implicit bias--judging according to unconscious stereotypes rather than merit. Companies that give bonuses to employees who recommend their friends for open positions are encouraging ingroup bias--favoring people in their own circles. If you think you're better than the average worker in your company (and who doesn't?), you may be displaying the common tendency to overclaim credit. And although many conflicts of interest are overt, many more are subtle. Who knows, for instance, whether the promise of quick and certain payment figures into an attorney's recommendation to settle a winnable case rather than go to trial? How can you counter these biases if they're unconscious? Traditional ethics training is not enough. But by gathering better data, ridding the work environment of stereotypical cues, and broadening your mind-set when you make decisions, you can go a long way toward bringing your unconscious biases to light and submitting them to your conscious will.

  10. Effects of exposure estimation errors on estimated exposure-response relations for PM2.5.

    PubMed

    Cox, Louis Anthony Tony

    2018-07-01

    Associations between fine particulate matter (PM2.5) exposure concentrations and a wide variety of undesirable outcomes, from autism and auto theft to elderly mortality, suicide, and violent crime, have been widely reported. Influential articles have argued that reducing National Ambient Air Quality Standards for PM2.5 is desirable to reduce these outcomes. Yet, other studies have found that reducing black smoke and other particulate matter by as much as 70% and dozens of micrograms per cubic meter has not detectably affected all-cause mortality rates even after decades, despite strong, statistically significant positive exposure concentration-response (C-R) associations between them. This paper examines whether this disconnect between association and causation might be explained in part by ignored estimation errors in estimated exposure concentrations. We use EPA air quality monitor data from the Los Angeles area of California to examine the shapes of estimated C-R functions for PM2.5 when the true C-R functions are assumed to be step functions with well-defined response thresholds. The estimated C-R functions mistakenly show risk as smoothly increasing with concentrations even well below the response thresholds, thus incorrectly predicting substantial risk reductions from reductions in concentrations that do not affect health risks. We conclude that ignored estimation errors obscure the shapes of true C-R functions, including possible thresholds, possibly leading to unrealistic predictions of the changes in risk caused by changing exposures. Instead of estimating improvements in public health per unit reduction (e.g., per 10 µg/m³ decrease) in average PM2.5 concentrations, it may be essential to consider how interventions change the distributions of exposure concentrations. Copyright © 2018 Elsevier Inc. All rights reserved.
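
    A compact simulation of the paper's central point, with invented parameters rather than the EPA monitor data: true risk is a step function of true concentration, but concentrations are observed with error, and the concentration-response curve estimated against the measured values appears to rise smoothly well below the true threshold.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 200_000
true_c = rng.uniform(0, 30, n)                    # true PM2.5, µg/m³
threshold, p_low, p_high = 15.0, 0.01, 0.02       # step-function risk
p = np.where(true_c > threshold, p_high, p_low)
y = rng.random(n) < p                             # binary health outcome

measured_c = true_c + rng.normal(0, 5, n)         # exposure estimation error

# Estimated C-R: mean response in bins of *measured* concentration.
bins = np.arange(0, 31, 2)
idx = np.digitize(measured_c, bins)
for k in range(1, len(bins)):
    sel = idx == k
    if sel.any():
        print(f"{bins[k-1]:4.0f}-{bins[k]:3.0f} µg/m³  apparent risk ≈ {y[sel].mean():.4f}")
# Apparent risk increases smoothly well below the 15 µg/m³ threshold.
```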

  11. Multiswarm comprehensive learning particle swarm optimization for solving multiobjective optimization problems.

    PubMed

    Yu, Xiang; Zhang, Xueqing

    2017-01-01

    Comprehensive learning particle swarm optimization (CLPSO) is a powerful state-of-the-art single-objective metaheuristic. Extending from CLPSO, this paper proposes multiswarm CLPSO (MSCLPSO) for multiobjective optimization. MSCLPSO involves multiple swarms, with each swarm associated with a separate original objective. Each particle's personal best position is determined just according to the corresponding single objective. Elitists are stored externally. MSCLPSO differs from existing multiobjective particle swarm optimizers in three aspects. First, each swarm focuses on optimizing the associated objective using CLPSO, without learning from the elitists or any other swarm. Second, mutation is applied to the elitists and the mutation strategy appropriately exploits the personal best positions and elitists. Third, a modified differential evolution (DE) strategy is applied to some extreme and least crowded elitists. The DE strategy updates an elitist based on the differences of the elitists. The personal best positions carry useful information about the Pareto set, and the mutation and DE strategies help MSCLPSO discover the true Pareto front. Experiments conducted on various benchmark problems demonstrate that MSCLPSO can find nondominated solutions distributed reasonably over the true Pareto front in a single run.

  12. Two particle tracking and detection in a single Gaussian beam optical trap.

    PubMed

    Praveen, P; Yogesha; Iyengar, Shruthi S; Bhattacharya, Sarbari; Ananthamurthy, Sharath

    2016-01-20

    We have studied in detail the situation wherein two microbeads are trapped axially in a single-beam Gaussian intensity profile optical trap. We find that the corner frequency extracted from a power spectral density analysis of intensity fluctuations recorded on a quadrant photodetector (QPD) is dependent on the detection scheme. Using forward- and backscattering detection schemes with single and two laser wavelengths along with computer simulations, we conclude that fluctuations detected in backscattering bear true position information of the bead encountered first in the beam propagation direction. Forward scattering, on the other hand, carries position information of both beads with substantial contribution from the bead encountered first along the beam propagation direction. Mie scattering analysis further reveals that the interference term from the scattering of the two beads contributes significantly to the signal, precluding the ability to resolve the positions of the individual beads in forward scattering. In QPD-based detection schemes, detection through backscattering, thereby, is imperative to track the true displacements of axially trapped microbeads for possible studies on light-mediated interbead interactions.
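
    A hedged sketch of the standard corner-frequency analysis mentioned above, for a single simulated bead rather than the two-bead experiment: an overdamped trapped-bead trajectory (a discretized Ornstein-Uhlenbeck process) is generated, its power spectral density is estimated with Welch's method, and a Lorentzian fit recovers the corner frequency. Detector geometry and the interference effects discussed in the paper are not modeled.

```python
import numpy as np
from scipy.signal import welch, lfilter
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)

# Overdamped bead in a harmonic trap, simulated as an AR(1) (discretized
# Ornstein-Uhlenbeck) position signal with a ~500 Hz corner frequency.
fs, n, fc_sim = 10_000.0, 2 ** 18, 500.0
a = np.exp(-2 * np.pi * fc_sim / fs)
noise = rng.normal(0.0, np.sqrt(1 - a ** 2), n)
x = lfilter([1.0], [1.0, -a], noise)           # x[i] = a*x[i-1] + noise[i]

# Power spectral density of the position fluctuations (Welch's method).
f, pxx = welch(x, fs=fs, nperseg=4096)

def lorentzian(f, d, fc):
    return d / (fc ** 2 + f ** 2)

mask = (f > 10) & (f < 1500)                    # stay in the Lorentzian regime
popt, _ = curve_fit(lorentzian, f[mask], pxx[mask], p0=[1.0, 100.0])
print("fitted corner frequency: %.0f Hz (simulated with ~%.0f Hz)" % (popt[1], fc_sim))
```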

  13. An ontology-driven, case-based clinical decision support model for removable partial denture design

    NASA Astrophysics Data System (ADS)

    Chen, Qingxiao; Wu, Ji; Li, Shusen; Lyu, Peijun; Wang, Yong; Li, Miao

    2016-06-01

    We present the initial work toward developing a clinical decision support model for specific design of removable partial dentures (RPDs) in dentistry. We developed an ontological paradigm to represent knowledge of a patient’s oral conditions and denture component parts. During the case-based reasoning process, a cosine similarity algorithm was applied to calculate similarity values between input patients and standard ontology cases. A group of designs from the most similar cases were output as the final results. To evaluate this model, the output designs of RPDs for 104 randomly selected patients were compared with those selected by professionals. A receiver operating characteristic curve was created by plotting true-positive rates against false-positive rates at various threshold settings, and the area under the curve (AUC-ROC) was computed. The precision at position 5 of the retrieved cases was 0.67 and at the top of the curve it was 0.96, both of which are very high. The mean average precision (MAP) was 0.61 and the normalized discounted cumulative gain (NDCG) was 0.74, both of which confirmed the efficient performance of our model. All the metrics demonstrated the efficiency of our model. This methodology merits further research development to match clinical applications for designing RPDs. This paper is organized as follows. After the introduction and description of the basis for the paper, the evaluation and results are presented in Section 2. Section 3 provides a discussion of the methodology and results. Section 4 describes the details of the ontology, similarity algorithm, and application.
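
    A minimal sketch of the case-retrieval step only: each patient's oral condition is encoded as a feature vector (the features below are hypothetical), cosine similarity is computed against a small library of standard cases, and the designs attached to the top-ranked cases are returned. The ontology and the evaluation metrics from the paper are not reproduced.

```python
import numpy as np

def cosine_similarity(query, cases):
    """Cosine similarity between a query vector and each row of a case matrix."""
    return (cases @ query) / (np.linalg.norm(cases, axis=1) * np.linalg.norm(query) + 1e-12)

# Hypothetical binary features describing oral conditions
# (e.g. which teeth are missing, abutment health flags, ...).
case_library = np.array([
    [1, 0, 1, 0, 1, 1],
    [0, 1, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 0],
    [0, 0, 1, 1, 1, 1],
], dtype=float)
case_designs = ["design A", "design B", "design C", "design D"]

query_patient = np.array([1, 0, 1, 1, 1, 0], dtype=float)

scores = cosine_similarity(query_patient, case_library)
top = np.argsort(scores)[::-1][:2]           # two most similar standard cases
for i in top:
    print(f"{case_designs[i]}  similarity={scores[i]:.3f}")
```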

  14. Observer performance in semi-automated microbleed detection

    NASA Astrophysics Data System (ADS)

    Kuijf, Hugo J.; Brundel, Manon; de Bresser, Jeroen; Viergever, Max A.; Biessels, Geert Jan; Geerlings, Mirjam I.; Vincken, Koen L.

    2013-03-01

    Cerebral microbleeds are small bleedings in the human brain, detectable with MRI. Microbleeds are associated with vascular disease and dementia. The number of studies involving microbleed detection is increasing rapidly. Visual rating is the current standard for detection, but is a time-consuming process, especially for high-resolution 7.0 T MR images, has limited reproducibility and is highly observer dependent. Recently, multiple techniques have been published for the semi-automated detection of microbleeds, attempting to overcome these problems. In the present study, a 7.0 T dual-echo gradient echo MR image was acquired in 18 participants with microbleeds from the SMART study. Two experienced observers identified 54 microbleeds in these participants, using a validated visual rating scale. The radial symmetry transform (RST) can be used for semi-automated detection of microbleeds in 7.0 T MR images. In the present study, the results of the RST were assessed by two observers and 47 microbleeds were identified: 35 true positives and 12 extra positives (microbleeds that were missed during visual rating). Hence, after scoring, a total of 66 microbleeds could be identified in the 18 participants. The use of the RST increased the average sensitivity of observers from 59% to 69%. More importantly, inter-observer agreement (ICC and Dice's coefficient) increased from 0.85 and 0.64 to 0.98 and 0.96, respectively. Furthermore, the required rating time was reduced from 30 to 2 minutes per participant. By fine-tuning the RST, sensitivities up to 90% can be achieved, at the cost of extra false positives.

  15. An ontology-driven, case-based clinical decision support model for removable partial denture design.

    PubMed

    Chen, Qingxiao; Wu, Ji; Li, Shusen; Lyu, Peijun; Wang, Yong; Li, Miao

    2016-06-14

    We present the initial work toward developing a clinical decision support model for specific design of removable partial dentures (RPDs) in dentistry. We developed an ontological paradigm to represent knowledge of a patient's oral conditions and denture component parts. During the case-based reasoning process, a cosine similarity algorithm was applied to calculate similarity values between input patients and standard ontology cases. A group of designs from the most similar cases were output as the final results. To evaluate this model, the output designs of RPDs for 104 randomly selected patients were compared with those selected by professionals. A receiver operating characteristic curve was created by plotting true-positive rates against false-positive rates at various threshold settings, and the area under the curve (AUC-ROC) was computed. The precision at position 5 of the retrieved cases was 0.67 and at the top of the curve it was 0.96, both of which are very high. The mean average precision (MAP) was 0.61 and the normalized discounted cumulative gain (NDCG) was 0.74, both of which confirmed the efficient performance of our model. All the metrics demonstrated the efficiency of our model. This methodology merits further research development to match clinical applications for designing RPDs. This paper is organized as follows. After the introduction and description of the basis for the paper, the evaluation and results are presented in Section 2. Section 3 provides a discussion of the methodology and results. Section 4 describes the details of the ontology, similarity algorithm, and application.

  16. Digital and social media opportunities for dietary behaviour change.

    PubMed

    McGloin, Aileen F; Eslami, Sara

    2015-05-01

    The way that people communicate, consume media and seek and receive information is changing. Forty per cent of the world's population now has an internet connection, the average global social media penetration is 39% and 1·5 billion people have internet access via mobile phone. This large-scale move in population use of digital, social and mobile media presents an unprecedented opportunity to connect with individuals on issues concerning health. The present paper aims to investigate these opportunities in relation to dietary behaviour change. Several aspects of the digital environment could support behaviour change efforts, including reach, engagement, research, segmentation, accessibility and potential to build credibility, trust, collaboration and advocacy. There are opportunities to influence behaviour online using similar techniques to traditional health promotion programmes; to positively affect health-related knowledge, skills and self-efficacy. The abundance of data on citizens' digital behaviours, whether through search behaviour, global positioning system tracking, or via demographics and interests captured through social media profiles, offer exciting opportunities for effectively targeting relevant health messages. The digital environment presents great possibilities but also great challenges. Digital communication is uncontrolled, multi-way and co-created and concerns remain in relation to inequalities, privacy, misinformation and lack of evaluation. Although web-based, social-media-based and mobile-based studies tend to show positive results for dietary behaviour change, methodologies have yet to be developed that go beyond basic evaluation criteria and move towards true measures of behaviour change. Novel approaches are necessary both in the digital promotion of behaviour change and in its measurement.

  17. Comparison of one-tier and two-tier newborn screening metrics for congenital adrenal hyperplasia.

    PubMed

    Sarafoglou, Kyriakie; Banks, Kathryn; Gaviglio, Amy; Hietala, Amy; McCann, Mark; Thomas, William

    2012-11-01

    Newborn screening (NBS) for the classic forms of congenital adrenal hyperplasia (CAH) is mandated in all states in the United States. Compared with other NBS disorders, the false-positive rate (FPR) of CAH screening remains high and has not been significantly improved by adjusting 17α-hydroxyprogesterone cutoff values for birth weight and/or gestational age. Minnesota was the first state to initiate second-tier steroid profiling for CAH and is only 1 of 4 states currently performing it. False-negative rates (FNRs) for CAH are not well known. This is a population-based study of all Minnesota infants (769,834) born 1999-2009, grouped by screening protocol (one-tier with repeat screen, January 1999 to May 2004; two-tier with second-tier steroid profiling, June 2004 to December 2009). FPR, FNR, and positive predictive value (PPV) were calculated per infant, rather than per sample, and compared between protocols. Overall, 15 false-negatives (4 salt-wasting, 11 simple-virilizing) and 45 true-positives were identified from 1999 to 2009. With two-tier screening, FNR was 32%, FPR increased to 0.065%, and PPV decreased to 8%, but these changes were not statistically significant. Second-tier steroid profiling obviated repeat screens of borderline results (on average 355 per year). In comparing the 2 screening protocols, the FPR of CAH NBS remains high, the PPV remains low, and false-negatives occur more frequently than has been reported. Physicians should be cautioned that a negative NBS does not necessarily rule out classic CAH; therefore, any patient for whom there is clinical concern for CAH should receive immediate diagnostic testing.
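
    For readers less familiar with the per-infant metrics quoted above, the sketch below applies the standard confusion-matrix definitions; the counts are illustrative placeholders, since the abstract does not report every cell of the underlying table.

```python
def screening_metrics(tp, fp, fn, tn):
    """Per-infant screening metrics as used in newborn-screening evaluations."""
    return {
        "FPR": fp / (fp + tn),          # false-positive rate
        "FNR": fn / (fn + tp),          # false-negative rate (missed cases)
        "PPV": tp / (tp + fp),          # positive predictive value
        "sensitivity": tp / (tp + fn),
    }

# Illustrative (hypothetical) counts for a screened birth cohort.
print(screening_metrics(tp=30, fp=350, fn=14, tn=400_000))
```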

  18. The Average Hazard Ratio - A Good Effect Measure for Time-to-event Endpoints when the Proportional Hazard Assumption is Violated?

    PubMed

    Rauch, Geraldine; Brannath, Werner; Brückner, Matthias; Kieser, Meinhard

    2018-05-01

    In many clinical trial applications, the endpoint of interest corresponds to a time-to-event endpoint. In this case, group differences are usually expressed by the hazard ratio. Group differences are commonly assessed by the logrank test, which is optimal under the proportional hazard assumption. However, there are many situations in which this assumption is violated. Especially in applications where a full population and several subgroups or a composite time-to-first-event endpoint and several components are considered, the proportional hazard assumption usually does not simultaneously hold true for all test problems under investigation. As an alternative effect measure, Kalbfleisch and Prentice proposed the so-called 'average hazard ratio'. The average hazard ratio is based on a flexible weighting function to modify the influence of time and has a meaningful interpretation even in the case of non-proportional hazards. Despite this favorable property, it is hardly ever used in practice, whereas the standard hazard ratio is commonly reported in clinical trials regardless of whether the proportional hazard assumption holds true or not. There exist two main approaches to construct corresponding estimators and tests for the average hazard ratio, where the first relies on weighted Cox regression and the second on a simple plug-in estimator. The aim of this work is to give a systematic comparison of these two approaches and the standard logrank test for different time-to-event settings with proportional and non-proportional hazards and to illustrate the pros and cons in application. We conduct a systematic comparative study based on Monte-Carlo simulations and on a real clinical trial example. Our results suggest that the properties of the average hazard ratio depend on the underlying weighting function. The two approaches to construct estimators and related tests show very similar performance for adequately chosen weights. In general, the average hazard ratio defines a more valid effect measure than the standard hazard ratio under non-proportional hazards and the corresponding tests provide a power advantage over the common logrank test. As non-proportional hazards are often met in clinical practice and the average hazard ratio tests often outperform the common logrank test, this approach should be used more routinely in applications. Schattauer GmbH.

  19. Quality assurance in mammography: artifact analysis.

    PubMed

    Hogge, J P; Palmer, C H; Muller, C C; Little, S T; Smith, D C; Fatouros, P P; de Paredes, E S

    1999-01-01

    Evaluation of mammograms for artifacts is essential for mammographic quality assurance. A variety of mammographic artifacts (i.e., variations in mammographic density not caused by true attenuation differences) can occur and can create pseudolesions or mask true abnormalities. Many artifacts are readily identified, whereas others present a true diagnostic challenge. Factors that create artifacts may be related to the processor (eg, static, dirt or excessive developer buildup on the rollers, excessive roller pressure, damp film, scrapes and scratches, incomplete fixing, power failure, contaminated developer), the technologist (eg, improper film handling and loading, improper use of the mammography unit and related equipment, positioning and darkroom errors), the mammography unit (eg, failure of the collimation mirror to rotate, grid inhomogeneity, failure of the reciprocating grid to move, material in the tube housing, compression failure, improper alignment of the compression paddle with the Bucky tray, defective compression paddle), or the patient (e.g., motion, superimposed objects or substances [jewelry, body parts, clothing, hair, implanted medical devices, foreign bodies, substances on the skin]). Familiarity with the broad range of artifacts and the measures required to eliminate them is vital. Careful attention to darkroom cleanliness, care in film handling, regularly scheduled processor maintenance and chemical replenishment, daily quality assurance activities, and careful attention to detail during patient positioning and mammography can reduce or eliminate most mammographic artifacts.

  20. A Rasch scaling validation of a 'core' near-death experience.

    PubMed

    Lange, Rense; Greyson, Bruce; Houran, James

    2004-05-01

    For those with true near-death experiences (NDEs), Greyson's (1983, 1990) NDE Scale satisfactorily fits the Rasch rating scale model, thus yielding a unidimensional measure with interval-level scaling properties. With increasing intensity, NDEs reflect peace, joy and harmony, followed by insight and mystical or religious experiences, while the most intense NDEs involve an awareness of things occurring in a different place or time. The semantics of this variable are invariant across True-NDErs' gender, current age, age at time of NDE, and latency and intensity of the NDE, thus identifying NDEs as 'core' experiences whose meaning is unaffected by external variables, regardless of variations in NDEs' intensity. Significant qualitative and quantitative differences were observed between True-NDErs and other respondent groups, mostly revolving around the differential emphasis on paranormal/mystical/religious experiences vs. standard reactions to threat. The findings further suggest that False-Positive respondents reinterpret other profound psychological states as NDEs. Accordingly, the Rasch validation of the typology proposed by Greyson (1983) also provides new insights into previous research, including the possibility of embellishment over time (as indicated by the finding of positive, as well as negative, latency effects) and the potential roles of religious affiliation and religiosity (as indicated by the qualitative differences surrounding paranormal/mystical/religious issues).

  1. Moving-Bank Multiple Model Adaptive Estimation and Control Applied to a Large Flexible Space Structure

    DTIC Science & Technology

    1990-12-01

    was determined from the difference between the 24-state matrix product, H_t P_t(t') H_t^T, and the six-state matrix product, H_f P_f(t') H_f^T. For this...The true position for node 7, which represents the rigid body position of the structure, is not damped and can be interpreted as a rigid body...application, considering the same issues as explored in this research. Continue with a physical interpretation of the structure positions for determining the

  2. Recall Latencies, Confidence, and Output Positions of True and False Memories: Implications for Recall and Metamemory Theories

    ERIC Educational Resources Information Center

    Jou, Jerwen

    2008-01-01

    Recall latency, recall accuracy rate, and recall confidence were examined in free recall as a function of recall output serial position using a modified Deese-Roediger-McDermott paradigm to test a strength-based theory against the dual-retrieval process theory of recall output sequence. The strength theory predicts the item output sequence to be…

  3. Fifteen Years Later: Has Positive Programming Become the Expected Technology for Addressing Problem Behavior? A Commentary on Horner et al. (1990)

    ERIC Educational Resources Information Center

    Snell, Martha E.

    2005-01-01

    The author found it very satisfying to reread "Toward a technology of 'nonaversive' behavioral support," written in 1990 by Rob Horner and seven of his colleagues. Their predictions of the critical themes for advancing positive behavior support (PBS) ring true. Fifteen years have passed since the publication of this article, and much has happened…

  4. The slow decay and quick revival of self-deception

    PubMed Central

    Chance, Zoë; Gino, Francesca; Norton, Michael I.; Ariely, Dan

    2015-01-01

    People demonstrate an impressive ability to self-deceive, distorting misbehavior to reflect positively on themselves—for example, by cheating on a test and believing that their inflated performance reflects their true ability. But what happens to self-deception when self-deceivers must face reality, such as when taking another test on which they cannot cheat? We find that self-deception diminishes over time only when self-deceivers are repeatedly confronted with evidence of their true ability (Study 1); this learning, however, fails to make them less susceptible to future self-deception (Study 2). PMID:26347666

  5. Sensitivity and specificity of HAT Sero-K-SeT, a rapid diagnostic test for serodiagnosis of sleeping sickness caused by Trypanosoma brucei gambiense: a case-control study.

    PubMed

    Büscher, Philippe; Mertens, Pascal; Leclipteux, Thierry; Gilleman, Quentin; Jacquet, Diane; Mumba-Ngoyi, Dieudonné; Pyana, Patient Pati; Boelaert, Marleen; Lejon, Veerle

    2014-06-01

    Human African trypanosomiasis (HAT) is a life-threatening infection affecting rural populations in sub-Saharan Africa. Large-scale population screening by antibody detection with the Card Agglutination Test for Trypanosomiasis (CATT)/Trypanosoma brucei (T b) gambiense helped reduce the number of reported cases of gambiense HAT to fewer than 10 000 in 2011. Because low case numbers lead to decreased cost-effectiveness of such active screening, we aimed to assess the diagnostic accuracy of a rapid serodiagnostic test (HAT Sero-K-SeT) applicable in primary health-care centres. In our case-control study, we assessed participants older than 11 years who presented for HAT Sero-K-SeT and CATT/T b gambiense at primary care centres or to mobile teams (and existing patients with confirmed disease status at these centres) in Bandundu Province, DR Congo. We defined cases as patients with trypanosomes that had been identified in lymph node aspirate, blood, or cerebrospinal fluid. During screening, we recruited controls without previous history of HAT or detectable trypanosomes in blood or lymph who resided in the same area as the cases. We assessed the diagnostic accuracy of three antibody detection tests for gambiense HAT: HAT Sero-K-SeT and CATT/T b gambiense (done with venous blood at the primary care centres) and immune trypanolysis (done with plasma at the Institute of Tropical Medicine, Antwerp, Belgium). Between June 6, 2012, and Feb 25, 2013, we included 134 cases and 356 controls. HAT Sero-K-SeT had a sensitivity of 0·985 (132 true positives, 95% CI 0·947-0·996) and a specificity of 0·986 (351 true negatives, 95% CI 0·968-0·994), which did not differ significantly from CATT/T b gambiense (sensitivity 0·955, 95% CI 0·906-0·979 [128 true positives], and specificity 0·972, 95% CI 0·949-0·985 [346 true negatives]) or immune trypanolysis (sensitivity 0·985, 95% CI 0·947-0·996 [132 true positives], and specificity 0·980, 95% CI 0·960-0·990 [349 true negatives]). The diagnostic accuracy of HAT Sero-K-SeT is adequate for T b gambiense antibody detection in local health centres and could be used for active screening whenever a cold chain and electricity supply are unavailable and CATT/T b gambiense cannot be done. Copyright © 2014 Büscher et al. Open Access article distributed under the terms of CC BY-NC-ND. All rights reserved.
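
    The reported accuracy figures follow from simple proportions with score-based confidence intervals; the sketch below recomputes them from the published counts using Wilson intervals (statsmodels assumed available), which appear to reproduce the intervals quoted in the abstract.

```python
from statsmodels.stats.proportion import proportion_confint

# Counts reported in the abstract for HAT Sero-K-SeT.
tp, n_cases = 132, 134        # true positives among confirmed cases
tn, n_controls = 351, 356     # true negatives among endemic controls

sens = tp / n_cases
spec = tn / n_controls
sens_ci = proportion_confint(tp, n_cases, alpha=0.05, method="wilson")
spec_ci = proportion_confint(tn, n_controls, alpha=0.05, method="wilson")

print(f"sensitivity {sens:.3f}  95% CI {sens_ci[0]:.3f}-{sens_ci[1]:.3f}")
print(f"specificity {spec:.3f}  95% CI {spec_ci[0]:.3f}-{spec_ci[1]:.3f}")
# Wilson intervals give approximately 0.947-0.996 and 0.968-0.994, as reported.
```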

  6. Prediction of Cortical Defect Using C-Reactive Protein and Urine Sodium to Potassium Ratio in Infants with Febrile Urinary Tract Infection

    PubMed Central

    Jung, Su Jin

    2016-01-01

    Purpose We investigated whether C-reactive protein (CRP) levels, urine protein-creatinine ratio (uProt/Cr), and urine electrolytes can be useful for discriminating acute pyelonephritis (APN) from other febrile illnesses or the presence of a cortical defect on 99mTc dimercaptosuccinic acid (DMSA) scanning (true APN) from its absence in infants with febrile urinary tract infection (UTI). Materials and Methods We examined 150 infants experiencing their first febrile UTI and 100 controls with other febrile illnesses consecutively admitted to our hospital from January 2010 to December 2012. Blood (CRP, electrolytes, Cr) and urine tests [uProt/Cr, electrolytes, and sodium-potassium ratio (uNa/K)] were performed upon admission. All infants with UTI underwent DMSA scans during admission. All data were compared between infants with UTI and controls and between infants with or without a cortical defect on DMSA scans. Using multiple logistic regression analysis, the ability of the parameters to predict true APN was analyzed. Results CRP levels and uProt/Cr were significantly higher in infants with true APN than in controls. uNa levels and uNa/K were significantly lower in infants with true APN than in controls. CRP levels and uNa/K were relevant factors for predicting true APN. The method using CRP levels, uProt/Cr, uNa levels, and uNa/K had a sensitivity of 94%, specificity of 65%, positive predictive value of 60%, and negative predictive value of 95% for predicting true APN. Conclusion We conclude that these parameters are useful for discriminating APN from other febrile illnesses or discriminating true APN in infants with febrile UTI. PMID:26632389

  7. Pressure to cooperate: is positive reward interdependence really needed in cooperative learning?

    PubMed

    Buchs, Céline; Gilles, Ingrid; Dutrévis, Marion; Butera, Fabrizio

    2011-03-01

    BACKGROUND. Despite extensive research on cooperative learning, the debate regarding whether or not its effectiveness depends on positive reward interdependence has not yet been settled by clear evidence. AIMS. We tested the hypothesis that positive reward interdependence, as compared to reward independence, enhances cooperative learning only if learners work on a 'routine task'; if the learners work on a 'true group task', positive reward interdependence induces the same level of learning as reward independence. SAMPLE. The study involved 62 psychology students during regular workshops. METHOD. Students worked on two psychology texts in cooperative dyads for three sessions. The type of task was manipulated through resource interdependence: students worked on either identical (routine task) or complementary (true group task) information. Students expected to be assessed with a Multiple Choice Test (MCT) on the two texts. The MCT assessment type was introduced according to two reward interdependence conditions, either individual (reward independence) or common (positive reward interdependence). A follow-up individual test took place 4 weeks after the third session of dyadic work to examine individual learning. RESULTS. The predicted interaction between the two types of interdependence was significant, indicating that students learned more with positive reward interdependence than with reward independence when they worked on identical information (routine task), whereas students who worked on complementary information (group task) learned the same with or without reward interdependence. CONCLUSIONS. This experiment sheds light on the conditions under which positive reward interdependence enhances cooperative learning, and suggests that creating a real group task makes it possible to avoid the need for positive reward interdependence. © 2010 The British Psychological Society.

  8. Experiences from the testing of a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.

  9. Experience gained in testing a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift, and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.

  10. Global Surface Temperature Change and Uncertainties Since 1861

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    The objective of this talk is to analyze the warming trend of the global and hemispheric surface temperatures and its uncertainties. Using a statistical optimal averaging scheme, land surface air temperature and sea surface temperature observations are used to compute the spatial-average annual mean surface air temperature. The optimal averaging method is derived from the minimization of the mean square error between the true and estimated averages and uses the empirical orthogonal functions. The method can accurately estimate the errors of the spatial average due to observational gaps and random measurement errors. In addition, three independent uncertainty factors are quantified: urbanization, changes in the in situ observational practices, and sea surface temperature data corrections. Based on these uncertainties, the best linear fit to annual global surface temperature gives an increase of 0.61 +/- 0.16 °C between 1861 and 2000. This lecture will also touch on the impact of global change on nature and the environment, as well as the latest assessment methods for the attribution of global change.
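
    The core of any such optimal average is a set of weights that minimizes the mean square error of the estimated spatial mean. The sketch below shows only that minimum-variance weighting idea under the simplifying assumption of a known (here synthetic) error covariance; the EOF-based machinery of the full scheme is not reproduced.

    ```python
    import numpy as np

    # Minimum-variance weighted average under a known error covariance C:
    #   minimize w^T C w  subject to  sum(w) = 1   =>   w = C^{-1} 1 / (1^T C^{-1} 1)
    # C and the station anomalies are synthetic stand-ins, not observational data.
    rng = np.random.default_rng(0)
    n = 5                                    # hypothetical number of stations
    A = rng.normal(size=(n, n))
    C = A @ A.T + n * np.eye(n)              # synthetic positive-definite covariance
    ones = np.ones(n)
    w = np.linalg.solve(C, ones)
    w /= ones @ w                            # weights sum to 1

    anomalies = rng.normal(loc=0.3, scale=0.2, size=n)   # hypothetical anomalies
    estimate = w @ anomalies                 # optimally weighted spatial average
    error_variance = w @ C @ w               # mean square error of that estimate
    print(w, estimate, error_variance)
    ```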

  11. A mathematical framework to quantify the masking effect associated with the confidence intervals of measures of disproportionality

    PubMed Central

    Maignen, François; Hauben, Manfred; Dogné, Jean-Michel

    2017-01-01

    Background: The lower bound of the 95% confidence interval of measures of disproportionality (Lower95CI) is widely used in signal detection. Masking is a statistical issue by which true signals of disproportionate reporting are hidden by the presence of other medicines. The primary objective of our study is to develop and validate a mathematical framework for assessing the masking effect of Lower95CI. Methods: We have developed our new algorithm based on the masking ratio (MR) developed for the measures of disproportionality. An MR for the Lower95CI (MRCI) is proposed. A simulation study to validate this algorithm was also conducted. Results: We have established the existence of a very close mathematical relation between MR and MRCI. For a given drug–event pair, the same product will be responsible for the highest masking effect with the measure of disproportionality and its Lower95CI. The extent of masking is likely to be very similar across the two methods. A substantial proportion of identical drug–event associations affected by a strong masking effect is revealed by the unmasking exercise, whether the proportional reporting ratio (PRR) or its confidence interval is used. Conclusion: The detection of the masking effect of Lower95CI can be automated. The real benefits of this unmasking in terms of new true-positive signals (rate of true-positive/false-positive) or time gained in revealing signals using this method have not been fully assessed. These benefits should be demonstrated in the context of prospective studies. PMID:28845231
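
    For context, the Lower95CI of a disproportionality measure such as the PRR is computed on the log scale from a 2x2 contingency table. A minimal sketch with hypothetical counts; the MR/MRCI masking algorithm itself is not reproduced.

    ```python
    import math

    # PRR and the lower bound of its 95% CI (Lower95CI) from a 2x2 table.
    # a: target drug & event, b: target drug & other events,
    # c: other drugs & event, d: other drugs & other events.
    # Counts are hypothetical; this is not the paper's MR/MRCI algorithm.
    def prr_lower95ci(a, b, c, d):
        prr = (a / (a + b)) / (c / (c + d))
        se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
        return prr, prr * math.exp(-1.96 * se_log)

    prr, lower95 = prr_lower95ci(a=30, b=970, c=200, d=98800)
    print(f"PRR={prr:.2f}, Lower95CI={lower95:.2f}")
    ```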

  12. Multicenter accuracy and interobserver agreement of spot sign identification in acute intracerebral hemorrhage.

    PubMed

    Huynh, Thien J; Flaherty, Matthew L; Gladstone, David J; Broderick, Joseph P; Demchuk, Andrew M; Dowlatshahi, Dar; Meretoja, Atte; Davis, Stephen M; Mitchell, Peter J; Tomlinson, George A; Chenkin, Jordan; Chia, Tze L; Symons, Sean P; Aviv, Richard I

    2014-01-01

    Rapid, accurate, and reliable identification of the computed tomography angiography spot sign is required to identify patients with intracerebral hemorrhage for trials of acute hemostatic therapy. We sought to assess the accuracy and interobserver agreement for spot sign identification. A total of 131 neurology, emergency medicine, and neuroradiology staff and fellows underwent imaging certification for spot sign identification before enrolling patients in 3 trials targeting spot-positive intracerebral hemorrhage for hemostatic intervention (STOP-IT, SPOTLIGHT, STOP-AUST). Ten intracerebral hemorrhage cases (spot-positive/negative ratio, 1:1) were presented for evaluation of spot sign presence, number, and mimics. True spot positivity was determined by consensus of 2 experienced neuroradiologists. Diagnostic performance, agreement, and differences by training level were analyzed. Mean accuracy, sensitivity, and specificity for spot sign identification were 87%, 78%, and 96%, respectively. Overall sensitivity was lower than specificity (P<0.001) because of true spot signs incorrectly perceived as spot mimics. Interobserver agreement for spot sign presence was moderate (κ=0.60). When true spots were correctly identified, 81% correctly identified the presence of single or multiple spots. Median time needed to evaluate the presence of a spot sign was 1.9 minutes (interquartile range, 1.2-3.1 minutes). Diagnostic performance, interobserver agreement, and time needed for spot sign evaluation were similar among staff physicians and fellows. Accuracy for spot identification is high, with opportunity for improvement in spot interpretation sensitivity and interobserver agreement, particularly through greater reliance on computed tomography angiography source data and awareness of the limitations of multiplanar images. Further prospective study is needed.
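
    Interobserver agreement of the kind reported (κ=0.60) is conventionally summarized with Cohen's kappa for paired ratings. A minimal sketch with hypothetical binary spot calls; the study's exact agreement statistic for many raters may differ.

    ```python
    # Cohen's kappa for two raters on a binary call (spot present / absent).
    # Ratings are hypothetical; the study pooled many raters.
    def cohens_kappa(r1, r2):
        n = len(r1)
        categories = set(r1) | set(r2)
        p_observed = sum(a == b for a, b in zip(r1, r2)) / n
        p_expected = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
        return (p_observed - p_expected) / (1 - p_expected)

    rater1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
    rater2 = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
    print(cohens_kappa(rater1, rater2))   # 0.60 for these example ratings
    ```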

  13. A stochastic model to determine the economic value of changing diagnostic test characteristics for identification of cattle for treatment of bovine respiratory disease.

    PubMed

    Theurer, M E; White, B J; Larson, R L; Schroeder, T C

    2015-03-01

    Bovine respiratory disease is an economically important syndrome in the beef industry, and diagnostic accuracy is important for optimal disease management. The objective of this study was to determine whether improving diagnostic sensitivity or specificity was of greater economic value at varied levels of respiratory disease prevalence by using Monte Carlo simulation. Existing literature was used to populate model distributions of published sensitivity, specificity, and performance (ADG, carcass weight, yield grade, quality grade, and mortality risk) differences among calves based on clinical respiratory disease status. Data from multiple cattle feeding operations were used to generate true ranges of respiratory disease prevalence and associated mortality. Input variables were combined into a single model that calculated estimated net returns for animals by diagnostic category (true positive, false positive, false negative, and true negative) based on the prevalence, sensitivity, and specificity for each iteration. Net returns for each diagnostic category were multiplied by the proportion of animals in each diagnostic category to determine group profitability. Apparent prevalence was categorized into low (<15%) and high (≥15%) groups. For both apparent prevalence categories, increasing specificity created more rapid, positive change in net returns than increasing sensitivity. Improvement of diagnostic specificity, perhaps through a confirmatory test interpreted in series or pen-level diagnostics, can increase diagnostic value more than improving sensitivity. Mortality risk was the primary driver for net returns. The results from this study are important for determining future research priorities to analyze diagnostic techniques for bovine respiratory disease and provide a novel way for modeling diagnostic tests.
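
    The core of such a model is the expected net return per animal: the return of each diagnostic category weighted by its probability given prevalence, sensitivity (Se) and specificity (Sp). A minimal sketch with hypothetical dollar returns; the published Monte Carlo distributions over Se, Sp, prevalence and performance outcomes are not reproduced.

    ```python
    # Expected net return per animal, weighting each diagnostic category
    # (TP, FP, FN, TN) by its probability given prevalence, Se and Sp.
    # Dollar values are hypothetical placeholders.
    def expected_net_return(prevalence, se, sp, ret_tp, ret_fp, ret_fn, ret_tn):
        p_tp = prevalence * se
        p_fn = prevalence * (1 - se)
        p_fp = (1 - prevalence) * (1 - sp)
        p_tn = (1 - prevalence) * sp
        return p_tp * ret_tp + p_fp * ret_fp + p_fn * ret_fn + p_tn * ret_tn

    returns = dict(ret_tp=80, ret_fp=-40, ret_fn=-120, ret_tn=100)
    base = expected_net_return(0.10, se=0.60, sp=0.70, **returns)
    more_se = expected_net_return(0.10, se=0.70, sp=0.70, **returns)
    more_sp = expected_net_return(0.10, se=0.60, sp=0.80, **returns)
    # At low prevalence the specificity gain dominates, as in the abstract.
    print(base, more_se - base, more_sp - base)
    ```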

  14. Epidemiology of angina pectoris: role of natural language processing of the medical record

    PubMed Central

    Pakhomov, Serguei; Hemingway, Harry; Weston, Susan A.; Jacobsen, Steven J.; Rodeheffer, Richard; Roger, Véronique L.

    2007-01-01

    Background The diagnosis of angina is challenging as it relies on symptom descriptions. Natural language processing (NLP) of the electronic medical record (EMR) can provide access to such information contained in free text that may not be fully captured by conventional diagnostic coding. Objective To test the hypothesis that NLP of the EMR improves angina pectoris (AP) ascertainment over diagnostic codes. Methods Billing records of in- and out-patients were searched for ICD-9 codes for AP, chronic ischemic heart disease and chest pain. EMR clinical reports were searched electronically for 50 specific non-negated natural language synonyms to these ICD-9 codes. The two methods were compared to a standardized assessment of angina by Rose questionnaire for three diagnostic levels: unspecified chest pain, exertional chest pain, and Rose angina. Results Compared to the Rose questionnaire, the true positive rate of EMR-NLP for unspecified chest pain was 62% (95%CI:55–67) vs. 51% (95%CI:44–58) for diagnostic codes (p<0.001). For exertional chest pain, the EMR-NLP true positive rate was 71% (95%CI:61–80) vs. 62% (95%CI:52–73) for diagnostic codes (p=0.10). Both approaches had 88% (95%CI:65–100) true positive rate for Rose angina. The EMR-NLP method consistently identified more patients with exertional chest pain over 28-month follow-up. Conclusion EMR-NLP method improves the detection of unspecified and exertional chest pain cases compared to diagnostic codes. These findings have implications for epidemiological and clinical studies of angina pectoris. PMID:17383310

  15. The influence of radiographic viewing perspective and demographics on the Critical Shoulder Angle

    PubMed Central

    Suter, Thomas; Popp, Ariane Gerber; Zhang, Yue; Zhang, Chong; Tashjian, Robert Z.; Henninger, Heath B.

    2014-01-01

    Background Accurate assessment of the critical shoulder angle (CSA) is important in clinical evaluation of degenerative rotator cuff tears. This study analyzed the influence of radiographic viewing perspective on the CSA, developed a classification system to identify malpositioned radiographs, and assessed the relationship between the CSA and demographic factors. Methods Glenoid height, width and retroversion were measured on 3D CT reconstructions of 68 cadaver scapulae. A digitally reconstructed radiograph was aligned perpendicular to the scapular plane, and retroversion was corrected to obtain a true antero-posterior (AP) view. In 10 scapulae, incremental anteversion/retroversion and flexion/extension views were generated. The CSA was measured and a clinically applicable classification system was developed to detect views with >2° change in CSA versus true AP. Results The average CSA was 33±4°. Intra- and inter-observer reliability was high (ICC≥0.81) but decreased with increasing viewing angle. Views beyond 5° anteversion, 8° retroversion, 15° flexion and 26° extension resulted in >2° deviation of the CSA compared to true AP. The classification system was capable of detecting aberrant viewing perspectives with sensitivity of 95% and specificity of 53%. Correlations between glenoid size and CSA were small (R≤0.3), and CSA did not vary by gender (p=0.426) or side (p=0.821). Conclusions The CSA was most susceptible to malposition in ante/retroversion. Deviations as little as 5° in anteversion resulted in a CSA >2° from true AP. A new classification system refines the ability to collect true AP radiographs of the scapula. The CSA was unaffected by demographic factors. PMID:25591458

  16. The Use of Satellite Imagery for Domestic Law Enforcement

    DTIC Science & Technology

    2013-12-01

    AND SUBTITLE: THE USE OF SATELLITE IMAGERY FOR DOMESTIC LAW ENFORCEMENT. AUTHOR(S): Raymond M. Schillinger. ...Postgraduate School is a true honor. The opportunity to attend the Center for Homeland Security and Defense is one of those pinnacles that will be hard to...information is estimated to average 1 hour per response, including the time for reviewing instruction, searching existing data sources, gathering and

  17. 21 CFR 173.315 - Chemicals used in washing or to assist in the peeling of fruits and vegetables.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Chemicals used in washing or to assist in the... alkyl alcohols consisting of: α-alkyl (C12-C18)-omega-hydroxy-poly (oxyethylene) (7.5-8.5 moles)/poly (oxypropylene) block copolymer having an average molecular weight of 810; α-alkyl (C12-C18)-omega-hydroxy-poly...

  18. Constant Group Velocity Ultrasonic Guided Wave Inspection for Corrosion and Erosion Monitoring in Pipes

    NASA Astrophysics Data System (ADS)

    Instanes, Geir; Pedersen, Audun; Toppe, Mads; Nagy, Peter B.

    2009-03-01

    This paper describes a novel ultrasonic guided wave inspection technique for the monitoring of internal corrosion and erosion in pipes, which exploits the fundamental flexural mode to measure the average wall thickness over the inspection path. The inspection frequency is chosen so that the group velocity of the fundamental flexural mode is essentially constant throughout the wall thickness range of interest, while the phase velocity is highly dispersive and changes in a systematic way with varying wall thickness in the pipe. Although this approach is somewhat less accurate than the often-used transverse resonance methods, it smoothly integrates the wall thickness over the whole propagation length; it is therefore very robust and can tolerate large and uneven thickness variations from point to point. The constant group velocity (CGV) method is capable of monitoring the true average of the wall thickness over the inspection length with an accuracy of 1% even in the presence of one order of magnitude larger local variations. This method also eliminates spurious variations caused by changing temperature, which can cause fairly large velocity variations but does not significantly influence the dispersion as measured by the true phase angle in the vicinity of the CGV point. The CGV guided wave CEM method was validated in both laboratory and field tests.

  19. The Effect of Technological Devices on Cervical Lordosis

    PubMed Central

    Öğrenci, Ahmet; Koban, Orkun; Yaman, Onur; Dalbayrak, Sedat; Yılmaz, Mesut

    2018-01-01

    PURPOSE: Use of technological devices, especially mobile phones, requires cervical flexion and even hyperflexion. We investigated the effect of this use on the cervical lordosis angle. MATERIAL AND METHODS: A group of 156 patients who presented with neck pain only between 2013 and 2016 and had no additional problems were included. Patients were specifically questioned about mobile phone, tablet, and other device usage. The total usage value was defined as the years of usage multiplied by the average daily usage in hours (average hours per day x years: hy). Cervical lordosis angles were statistically compared with the total time of use. RESULTS: In the overall ROC analysis, the cut-off value was found to be 20.5 hy. At this cut-off, the overall accuracy was good at 72.4%, and the rates of correctly identified at-risk and not-at-risk patients were high. The ROC analysis was statistically significant. CONCLUSION: The use of computing devices, especially mobile telephones, and the associated increase in flexion of the cervical spine indicate that cervical vertebral problems will increase even in younger people in the future. In addition to paying attention to posture during use, ergonomic devices must also be developed. PMID:29610602
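
    A cut-off such as the reported 20.5 hy is typically chosen from the ROC curve by maximizing the Youden index (sensitivity + specificity - 1). A minimal sketch with hypothetical usage data, since the study's exact cut-off criterion is not restated here.

    ```python
    import numpy as np

    # Choose a cut-off on a continuous exposure (total usage in hour-years)
    # by maximizing the Youden index J = sensitivity + specificity - 1.
    # The usage values and outcomes below are hypothetical.
    def youden_cutoff(values, labels):
        best_j, best_cut = -1.0, None
        for cut in np.unique(values):
            pred = values >= cut
            tp = np.sum(pred & (labels == 1))
            fn = np.sum(~pred & (labels == 1))
            tn = np.sum(~pred & (labels == 0))
            fp = np.sum(pred & (labels == 0))
            j = tp / (tp + fn) + tn / (tn + fp) - 1
            if j > best_j:
                best_j, best_cut = j, cut
        return best_cut, best_j

    usage = np.array([5, 8, 12, 18, 21, 25, 30, 35, 40, 50], dtype=float)  # hour-years
    reduced_lordosis = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])            # hypothetical outcome
    print(youden_cutoff(usage, reduced_lordosis))
    ```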

  20. Model-based segmentation of abdominal aortic aneurysms in CTA images

    NASA Astrophysics Data System (ADS)

    de Bruijne, Marleen; van Ginneken, Bram; Niessen, Wiro J.; Loog, Marco; Viergever, Max A.

    2003-05-01

    Segmentation of thrombus in abdominal aortic aneurysms is complicated by regions of low boundary contrast and by the presence of many neighboring structures in close proximity to the aneurysm wall. We present an automated method that is similar to the well-known Active Shape Models (ASM), combining a three-dimensional shape model with a one-dimensional boundary appearance model. Our contribution is twofold: we developed a non-parametric appearance modeling scheme that effectively deals with a highly varying background, and we propose a way of generalizing models of curvilinear structures from small training sets. In contrast with the conventional ASM approach, the new appearance model trains on both true and false examples of boundary profiles. The probability that a given image profile belongs to the boundary is obtained using k nearest neighbor (kNN) probability density estimation. The performance of this scheme is compared to that of original ASMs, which minimize the Mahalanobis distance to the average true profile in the training set. The generalizability of the shape model is improved by modeling the object's axis deformation independently of its cross-sectional deformation. A leave-one-out experiment was performed on 23 datasets. Segmentation using the kNN appearance model significantly outperformed the original ASM scheme; average volume errors were 5.9% and 46%, respectively.
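
    The kNN appearance model assigns each candidate profile a boundary probability equal to the fraction of true-boundary examples among its k nearest training profiles, in contrast to the single-class Mahalanobis criterion of the original ASM. A minimal sketch with synthetic intensity profiles, not the CTA training data:

    ```python
    import numpy as np

    # kNN boundary probability: the fraction of true-boundary examples among the
    # k nearest training profiles.  Profiles are short synthetic intensity vectors.
    rng = np.random.default_rng(1)
    true_profiles = rng.normal(loc=1.0, scale=0.3, size=(200, 7))    # boundary examples
    false_profiles = rng.normal(loc=0.0, scale=0.6, size=(200, 7))   # non-boundary examples
    train = np.vstack([true_profiles, false_profiles])
    labels = np.array([1] * 200 + [0] * 200)

    def knn_boundary_probability(profile, train, labels, k=15):
        dist = np.linalg.norm(train - profile, axis=1)
        nearest = np.argsort(dist)[:k]
        return labels[nearest].mean()

    query = rng.normal(loc=0.9, scale=0.3, size=7)   # candidate boundary profile
    print(knn_boundary_probability(query, train, labels))
    ```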

  1. The frequency-difference and frequency-sum acoustic-field autoproducts.

    PubMed

    Worthmann, Brian M; Dowling, David R

    2017-06-01

    The frequency-difference and frequency-sum autoproducts are quadratic products of solutions of the Helmholtz equation at two different frequencies (ω+ and ω−), and may be constructed from the Fourier transform of any time-domain acoustic field. Interestingly, the autoproducts may carry wave-field information at the difference (ω+ − ω−) and sum (ω+ + ω−) frequencies even though these frequencies may not be present in the original acoustic field. This paper provides analytical and simulation results that justify and illustrate this possibility, and indicate its limitations. The analysis is based on the inhomogeneous Helmholtz equation and its solutions while the simulations are for a point source in a homogeneous half-space bounded by a perfectly reflecting surface. The analysis suggests that the autoproducts have a spatial phase structure similar to that of a true acoustic field at the difference and sum frequencies if the in-band acoustic field is a plane or spherical wave. For multi-ray-path environments, this phase structure similarity persists in portions of the autoproduct fields that are not suppressed by bandwidth averaging. Discrepancies between the bandwidth-averaged autoproducts and true out-of-band acoustic fields (with potentially modified boundary conditions) scale inversely with the product of the bandwidth and ray-path arrival time differences.

  2. Position-Related Differences in Selected Morphological Body Characteristics of Top-Level Female Handball Players.

    PubMed

    Bon, Marta; Pori, Primoz; Sibila, Marko

    2015-09-01

    The study aimed to establish the main morphological characteristics of Slovenian junior and senior female national handball team players. Morphological characteristics of various player subgroups (goalkeepers, wings, back players and pivots) were also determined so as to establish whether they had distinct profiles. The subjects were 87 handball players who were members of the Slovenian junior and senior female national teams in the period from 2003 to 2009. A standardised anthropometric protocol was used to assess the subjects' morphological characteristics. The measurements included 23 different anthropometric measures. First, basic statistical characteristics of anthropometric measures were obtained for all subjects together and then for each group separately. Somatotypes were determined using Heath-Carter's method. Endomorphic, mesomorphic and ectomorphic components were calculated by computer on the basis of formulas. In order to determine differences in the body composition and anthropometric data of the subjects playing in different positions, a one-way analysis of variance was employed. The results show that, on average, the wings differed the most from the other player groups in terms of their morphological body characteristics: they were significantly smaller and had a statistically significantly lower body mass than the other groups. In terms of transversal measures of the skeleton and the circumferences, the wings differed significantly mainly from the pivots and goalkeepers and less from the backs. The goalkeepers were the tallest, with high values of body mass and low values of transversal measures compared to the pivots. Their skin folds were on average the most pronounced among all the groups and their share of subcutaneous fat in total body mass was the highest. Consequently, the endomorphic component of their somatotype was pronounced. Players in the pivot position were significantly taller than the wings but not significantly different from the goalkeepers and backs. They had high values of body mass, significantly higher than that of the wings but not significantly different from the body mass values of the backs and goalkeepers. The average values of their circumferences were the highest among all the player groups, and the same is true for transversal measures of the skeleton. It is very interesting that, compared to the players in other playing positions, they had low values of subcutaneous fat. Their somatotype values revealed an endo-mesomorphic somatotype, with a pronounced mesomorphic component. Back players were tall and had the lowest share of subcutaneous fat of all the player groups. Significant differences were established mainly in terms of the structure of the lower extremities. The values of the somatotype characteristics were very balanced between all three components. The results of our study confirm that groups of handball players occupying different positions differed amongst themselves in terms of many measurements. This is a result of the specific requirements of handball play which are to be met by particular players. The tallest players should thus be oriented to back player positions. As regards pivots, the coaches must, besides body height, consider robustness. For goalkeepers, body height is very important; however, the robustness criteria are slightly lower. For wings, body height is not a decisive factor and smaller players can also occupy this position. All of the above (also taking other criteria into account) facilitate coaches' decisions when orienting players into their playing positions.

  3. Drafting: Current Trends and Future Practices

    ERIC Educational Resources Information Center

    Jensen, C.

    1976-01-01

    Various research findings are reported on drafting trends which the author feels should be incorporated into teaching drafting: (1) true position and geometric tolerancing, (2) decimal and metric dimensioning, (3) functional drafting, (4) automated drafting, and (5) drawing reproductions. (BP)

  4. Peripheries of epicycles in the Grahalāghava

    NASA Astrophysics Data System (ADS)

    Rao, S. Balachandra; Vanaja, V.; Shailaja, M.

    2017-12-01

    For finding the true positions of the Sun, the Moon and the five planets the Indian classical astronomical texts use the concept of the manda epicycle which accounts for the equation of the centre. In addition, in the case of the five planets (Mercury, Venus, Mars, Jupiter and Saturn) another equation called śīghraphala and the corresponding śīghra epicycle are adopted. This correction corresponds to the transformation of the true heliocentric longitude to the true geocentric longitude in modern astronomy. In some of the popularly used handbooks (karaṇa) instead of giving the mathematical expressions for the above said equations, their discrete numerical values, at intervals of 15 degrees, are given. In the present paper using the data of discrete numerical values we build up continuous functions of periodic terms for the manda and śīghra equations. Further, we obtain the critical points and the maximum values for these two equations.
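
    Recovering a continuous periodic function from equation values tabulated every 15 degrees amounts to fitting a truncated trigonometric series and then locating its extrema. A minimal sketch under the assumption of a simple least-squares fit; the tabulated values below are hypothetical, not taken from the Grahalāghava.

    ```python
    import numpy as np

    # Least-squares fit of  f(x) ~ a0 + sum_k (a_k cos kx + b_k sin kx)
    # to values tabulated every 15 degrees, then location of the maximum.
    # The "tabulated" values are a hypothetical stand-in for a karana table.
    deg = np.arange(0, 360, 15)
    x = np.radians(deg)
    tabulated = 2.1 * np.sin(x) + 0.15 * np.sin(2 * x)

    n_harmonics = 3
    design = np.column_stack(
        [np.ones_like(x)]
        + [f(k * x) for k in range(1, n_harmonics + 1) for f in (np.cos, np.sin)]
    )
    coeffs, *_ = np.linalg.lstsq(design, tabulated, rcond=None)

    # Evaluate the continuous function on a fine grid and find its critical point.
    fine = np.radians(np.linspace(0, 360, 3601))
    basis = np.column_stack(
        [np.ones_like(fine)]
        + [f(k * fine) for k in range(1, n_harmonics + 1) for f in (np.cos, np.sin)]
    )
    values = basis @ coeffs
    print(np.degrees(fine[np.argmax(values)]), values.max())
    ```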

  5. On Using Taylor's Hypothesis for Three-Dimensional Mixing Layers

    NASA Technical Reports Server (NTRS)

    LeBoeuf, Richard L.; Mehta, Rabindra D.

    1995-01-01

    In the present study, errors in using Taylor's hypothesis to transform measurements obtained in a temporal (or phase) frame onto a spatial one were evaluated. For the first time, phase-averaged ('real') spanwise and streamwise vorticity data measured on a three-dimensional grid were compared directly to those obtained using Taylor's hypothesis. The results show that even the qualitative features of the spanwise and streamwise vorticity distributions given by the two techniques can be very different. This is particularly true in the region of the spanwise roller pairing. The phase-averaged spanwise and streamwise peak vorticity levels given by Taylor's hypothesis are typically lower (by up to 40%) compared to the real measurements.

  6. Risk factors for bovine tuberculosis in low incidence regions related to the movements of cattle.

    PubMed

    Gates, M Carolyn; Volkova, Victoriya V; Woolhouse, Mark E J

    2013-11-09

    Bovine tuberculosis (bTB) remains difficult to eradicate from low incidence regions partly due to the imperfect sensitivity and specificity of routine intradermal tuberculin testing. Herds with unconfirmed reactors that are incorrectly classified as bTB-negative may be at risk of spreading disease, while those that are incorrectly classified as bTB-positive may be subject to costly disease eradication measures. This analysis used data from Scotland in the period leading to Officially Tuberculosis Free recognition (1) to investigate the risks associated with the movements of cattle from herds with different bTB risk classifications and (2) to identify herd demographic characteristics that may aid in the interpretation of tuberculin testing results. From 2002 to 2009, for every herd with confirmed bTB positive cattle identified through routine herd testing, there was an average of 2.8 herds with at least one unconfirmed positive reactor and 18.9 herds with unconfirmed inconclusive reactors. Approximately 75% of confirmed bTB positive herds were detected through cattle with no known movements outside Scotland. At the animal level, cattle that were purchased from Scottish herds with unconfirmed positive reactors and a recent history importing cattle from endemic bTB regions were significantly more likely to react positively on routine intradermal tuberculin tests, while cattle purchased from Scottish herds with unconfirmed inconclusive reactors were significantly more likely to react inconclusively. Case-case comparisons revealed few demographic differences between herds with confirmed positive, unconfirmed positive, and unconfirmed inconclusive reactors, which highlights the difficulty in determining the true disease status of herds with unconfirmed tuberculin reactors. Overall, the risk of identifying reactors through routine surveillance decreased significantly over time, which may be partly attributable to changes in movement testing regulations and the volume of cattle imported from endemic regions. Although the most likely source of bTB infections in Scotland was cattle previously imported from endemic regions, we found indirect evidence of transmission within Scottish cattle farms and cannot rule out the possibility of low level transmission between farms. Further investigation is needed to determine whether targeting herds with unconfirmed reactors and a history of importing cattle from high risk regions would benefit control efforts.

  7. On the challenges of drawing conclusions from p-values just below 0.05

    PubMed Central

    2015-01-01

    In recent years, researchers have attempted to provide an indication of the prevalence of inflated Type 1 error rates by analyzing the distribution of p-values in the published literature. De Winter & Dodou (2015) analyzed the distribution (and its change over time) of a large number of p-values automatically extracted from abstracts in the scientific literature. They concluded there is a ‘surge of p-values between 0.041–0.049 in recent decades’ which ‘suggests (but does not prove) questionable research practices have increased over the past 25 years.’ I show that the changes in the ratio of fractions of p-values between 0.041–0.049 over the years are better explained by assuming the average power has decreased over time. Furthermore, I propose that their observation that p-values just below 0.05 increase more strongly than p-values above 0.05 can be explained by an increase in publication bias (or the file drawer effect) over the years (cf. Fanelli, 2012; Pautasso, 2010), which has led to a relative decrease of ‘marginally significant’ p-values in abstracts in the literature (instead of an increase in p-values just below 0.05). I explain why researchers analyzing large numbers of p-values need to relate their assumptions to a model of p-value distributions that takes into account the average power of the performed studies, the ratio of true positives to false positives in the literature, the effects of publication bias, and the Type 1 error rate (and possible mechanisms through which it has been inflated). Finally, I discuss why publication bias and underpowered studies might be a bigger problem for science than inflated Type 1 error rates, and explain the challenges when attempting to draw conclusions about inflated Type 1 error rates from a large heterogeneous set of p-values. PMID:26246976
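
    The argument turns on how the shape of the p-value distribution depends on average power. A minimal illustrative simulation (not a reproduction of the paper's text-mining analysis) of the fraction of p-values landing just below versus just above 0.05 at different effect sizes:

    ```python
    import numpy as np
    from scipy import stats

    # Fraction of two-sample t-test p-values falling just below (0.041-0.049)
    # vs. just above (0.051-0.059) the .05 threshold, as a function of effect
    # size (i.e. power).  Purely illustrative; parameters are arbitrary.
    def p_value_fractions(effect_size, n=50, n_sim=20000, seed=0):
        rng = np.random.default_rng(seed)
        a = rng.normal(0.0, 1.0, size=(n_sim, n))
        b = rng.normal(effect_size, 1.0, size=(n_sim, n))
        _, p = stats.ttest_ind(a, b, axis=1)
        below = np.mean((p > 0.041) & (p < 0.049))
        above = np.mean((p > 0.051) & (p < 0.059))
        return below, above

    for d in (0.2, 0.5, 0.8):    # low, medium and high power at n = 50 per group
        print(d, p_value_fractions(d))
    ```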

  8. Performance of internal covariance estimators for cosmic shear correlation functions

    DOE PAGES

    Friedrich, O.; Seitz, S.; Eifler, T. F.; ...

    2015-12-31

    Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.
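
    For reference, a delete-one jackknife covariance is built by recomputing the data vector with each sub-region removed in turn. A minimal sketch on synthetic data vectors; the log-normal convergence and shear simulations of the paper are not reproduced.

    ```python
    import numpy as np

    # Delete-one jackknife covariance of a data vector from N sub-samples.
    # The (N - 1)/N prefactor accounts for the strong correlation between
    # jackknife realisations.  Synthetic numbers stand in for the cosmic-shear
    # two-point measurements.
    rng = np.random.default_rng(2)
    n_sub, n_bins = 100, 10
    sub_measurements = rng.normal(size=(n_sub, n_bins))    # statistic per sub-region

    totals = sub_measurements.sum(axis=0)
    jackknife = (totals - sub_measurements) / (n_sub - 1)  # delete-one data vectors
    mean_jk = jackknife.mean(axis=0)
    cov = (n_sub - 1) / n_sub * (jackknife - mean_jk).T @ (jackknife - mean_jk)
    print(cov.shape, np.diag(cov)[:3])
    ```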

  9. Human Fear Chemosignaling: Evidence from a Meta-Analysis.

    PubMed

    de Groot, Jasper H B; Smeets, Monique A M

    2017-10-01

    Alarm pheromones are widely used in the animal kingdom. Notably, there are 26 published studies (N = 1652) highlighting a human capacity to communicate fear, stress, and anxiety via body odor from one person (66% males) to another (69% females). The question is whether the findings of this literature reflect a true effect, and what the average effect size is. These questions were answered by combining traditional meta-analysis with novel meta-analytical tools, p-curve analysis and p-uniform, techniques that could indicate whether findings are likely to reflect a true effect based on the distribution of p-values. A traditional random-effects meta-analysis yielded a small-to-moderate effect size (Hedges' g: 0.36, 95% CI: 0.31-0.41), p-curve analysis showed evidence diagnostic of a true effect (ps < 0.0001), and there was no evidence for publication bias. This meta-analysis did not assess the internal validity of the current studies; yet, the combined results illustrate the statistical robustness of a field in human olfaction dealing with the human capacity to communicate certain emotions (fear, stress, anxiety) via body odor. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
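
    A pooled random-effects estimate of the kind reported (Hedges' g = 0.36) is commonly obtained with DerSimonian-Laird weights. A minimal sketch with hypothetical study effects and variances; the p-curve and p-uniform analyses are not reproduced.

    ```python
    import math

    # DerSimonian-Laird random-effects pooling of standardized effects (Hedges' g).
    # The study effects and variances below are hypothetical, not the 26 studies.
    def random_effects(effects, variances):
        w = [1 / v for v in variances]
        fixed = sum(wi * gi for wi, gi in zip(w, effects)) / sum(w)
        q = sum(wi * (gi - fixed) ** 2 for wi, gi in zip(w, effects))
        df = len(effects) - 1
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)                    # between-study variance
        w_star = [1 / (v + tau2) for v in variances]
        pooled = sum(wi * gi for wi, gi in zip(w_star, effects)) / sum(w_star)
        se = math.sqrt(1 / sum(w_star))
        return pooled, pooled - 1.96 * se, pooled + 1.96 * se

    g = [0.45, 0.20, 0.38, 0.55, 0.10]        # hypothetical per-study Hedges' g
    v = [0.02, 0.03, 0.015, 0.05, 0.04]       # hypothetical sampling variances
    print(random_effects(g, v))
    ```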

  10. Diagnostic validity of periapical radiography and CBCT for assessing periapical lesions that persist after endodontic surgery.

    PubMed

    Kruse, Casper; Spin-Neto, Rubens; Reibel, Jesper; Wenzel, Ann; Kirkevang, Lise-Lotte

    2017-10-01

    Traditionally, healing after surgical endodontic retreatment (SER), i.e. apicectomy with or without a retrograde filling, is assessed in periapical radiographs (PR). Recently, the use of cone beam CT (CBCT) has increased within endodontics. Generally, CBCT detects more periapical lesions than PR, but basic research on the true nature of these lesions is missing. The objective was to assess the diagnostic validity of PR and CBCT for determining inflammation in SER cases that were re-operated (SER-R) due to unsuccessful healing, using histology of the periapical lesion as the reference for inflammation. Records from 149 patients who received SER in 2004-10 were screened. In total 108 patients (119 teeth) were recalled for clinical follow-up examination, PR and CBCT, of which 74 patients (83 teeth) participated. Three observers assessed PR and CBCT as "successful healing" or "unsuccessful healing" using Rud and Molven's criteria. SER-R was offered to all non-healed teeth with expected favourable prognosis for subsequent functional retention. During SER-R, biopsy was performed and histopathology verified whether or not inflammation was present. All re-operated cases were assessed non-healed in CBCT while 11 of these were assessed successfully healed in PR. Nineteen biopsies were examined. Histopathologic diagnosis revealed 42% (teeth = 8) without periapical inflammation, 16% (teeth = 3) with mild inflammation and 42% (teeth = 8) with moderate to intense inflammation. A correct diagnosis was obtained in 58% with CBCT (true positives) and 63% with PR (true positives + true negatives). Of the re-operated teeth, 42% had no periapical inflammatory lesion, and hence no benefit from SER-R. Not all lesions observed in CBCT represented periapical inflammatory lesions.

  11. Sensitivity and specificity of the amer dizziness diagnostic scale (adds) for patients with vestibular disorders.

    PubMed

    Al Saif, Amer; Alsenany, Samira

    2015-01-01

    [Purpose] To investigate the sensitivity and specificity of a newly developed diagnostic tool, the Amer Dizziness Diagnostic Scale (ADDS), to evaluate and differentially diagnose vestibular disorders, and to identify the strengths and weaknesses of the scale and its usefulness in clinical practice. [Subjects and Methods] Two hundred subjects of both genders (72 males, 128 females) aged 18 to 60 years (49.5±7.8) who had a history of vertigo and/or dizziness symptoms for the previous two weeks or less were recruited for the study. All subjects were referred by otolaryngologists, neurologists or family physicians in and around Jeddah, Kingdom of Saudi Arabia. On the first clinic visit, all the patients were evaluated once using the ADDS, following which they underwent routine testing of clinical signs and symptoms, audiometry, and a neurological examination, coupled with tests of vestibulo-ocular reflex function, which often serves as the "gold standard" for determining the probability of a vestibular deficit. [Results] The results show that the ADDS strongly correlated with "true-positive" and "true-negative" responses for determining the probability of a vestibular disorder (r=0.95). A stepwise linear regression was conducted and the results indicate that the ADDS was a significant predictor of "true-positive" and "true-negative" responses in vestibular disorders (R(2)=0.90). Approximately 90% of the variability in the vestibular gold standard test was explained by its relationship to the ADDS. Moreover, the ADDS was found to have a sensitivity of 96% and a specificity of 96%. [Conclusion] This study showed that the Amer Dizziness Diagnostic Scale has high sensitivity and specificity and that it can be used as a method of differential diagnosis for patients with vestibular disorders.

  12. A Novel 'Cheese Wire' Technique for Stent Positioning Following Difficult Iliac Artery Subintimal Dissection and Aortic Re-Entry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkinson, A. F., E-mail: anthony.watkinson@rdeft.nhs.u

    2009-07-15

    Subintimal wire dissection is a well-established method for traversing difficult vascular occlusions. This technique relies on re-entry of the true lumen distal to the occlusion, which may be difficult in diseased vessels with significant calcification. This case report describes a novel 'cheese wire' technique to allow stent positioning without the use of proprietary re-entry devices.

  13. Transformation-aware Exploit Generation using a HI-CFG

    DTIC Science & Technology

    2013-05-16

    testing has many limitations of its own: it can require significant target-specific setup to perform well; it is unlikely to trigger vulnerabilities...check fails represents a potential vulnerability, but a conservative analysis can produce false positives, so we can use exploit generation to find...warnings that correspond to true positives. We can also find potentially vulnerable instructions in the course of a manual binary-level security audit

  14. Use of the polymerase chain reaction to directly detect malaria parasites in blood samples from the Venezuelan Amazon.

    PubMed

    Laserson, K F; Petralanda, I; Hamlin, D M; Almera, R; Fuentes, M; Carrasquel, A; Barker, R H

    1994-02-01

    We have examined the reproducibility, sensitivity, and specificity of detecting Plasmodium falciparum using the polymerase chain reaction (PCR) and the species-specific probe pPF14 under field conditions in the Venezuelan Amazon. Up to eight samples were field collected from each of 48 consenting Amerindians presenting with symptoms of malaria. Sample processing and analysis were performed at the Centro Amazonico para la Investigacion y Control de Enfermedades Tropicales Simon Bolivar. A total of 229 samples from 48 patients were analyzed by PCR methods using four different P. falciparum-specific probes and one P. vivax-specific probe, and by conventional microscopy. Samples in which results from PCR and microscopy differed were reanalyzed at a higher sensitivity by microscopy. Results suggest that microscopy-negative, PCR-positive samples are true positives, and that microscopy-positive and PCR-negative samples are true negatives. The sensitivity of the DNA probe/PCR method was 78% and its specificity was 97%. The positive predictive value of the PCR method was 88%, and the negative predictive value was 95%. Through the analysis of multiple blood samples from each individual, the DNA probe/PCR methodology was found to have an inherent reproducibility that was highly statistically significant.
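
    Predictive values depend on prevalence through Bayes' rule. A minimal sketch showing how the reported sensitivity (78%) and specificity (97%) translate into PPV and NPV at different prevalences; the prevalence values are assumptions, not the study's figure, although around 20-25% they land near the published 88%/95%.

    ```python
    # PPV and NPV from sensitivity, specificity and prevalence (Bayes' rule).
    # Prevalence values are hypothetical; the study's own prevalence is not restated.
    def predictive_values(sensitivity, specificity, prevalence):
        ppv = (sensitivity * prevalence) / (
            sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
        npv = (specificity * (1 - prevalence)) / (
            specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
        return ppv, npv

    for prev in (0.05, 0.22, 0.50):
        print(prev, predictive_values(0.78, 0.97, prev))
    ```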

  15. Grouped fuzzy SVM with EM-based partition of sample space for clustered microcalcification detection.

    PubMed

    Wang, Huiya; Feng, Jun; Wang, Hongyu

    2017-07-20

    Detection of clustered microcalcification (MC) from mammograms plays an essential role in computer-aided diagnosis for early stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups based on the EM algorithm. Then a series of fuzzy SVMs are integrated for classification with each group of samples from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions were selected from 239 mammograms, and the measured Accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and EVL = TPR*(1-FPR) were 0.82, 0.78, 0.14 and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results from synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
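
    A minimal sketch of the grouped structure only: EM (via scikit-learn's GaussianMixture) partitions the training data and one SVM is trained per group. The fuzzy weighting of the published G-FSVM is merely approximated here by using the EM responsibilities as per-sample weights, and the data are synthetic rather than DDSM regions of interest.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import SVC

    # Grouped-SVM sketch: EM partitions the feature space; one SVM per group.
    # EM responsibilities stand in for the fuzzy memberships of the real G-FSVM.
    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(0, 1, (150, 6)), rng.normal(2, 1, (150, 6))])
    y = np.array([0] * 150 + [1] * 150)        # 0 = normal tissue, 1 = MC lesion

    n_groups = 2
    gmm = GaussianMixture(n_components=n_groups, random_state=0).fit(X)
    resp = gmm.predict_proba(X)                # soft memberships from EM

    models = []
    for g in range(n_groups):
        clf = SVC(kernel="rbf", gamma="scale")
        clf.fit(X, y, sample_weight=resp[:, g] + 1e-6)   # weight by membership
        models.append(clf)

    def predict(x):
        x = x.reshape(1, -1)
        g = int(np.argmax(gmm.predict_proba(x)))         # route to the closest group
        return models[g].predict(x)[0]

    print(predict(rng.normal(2, 1, 6)))
    ```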

  16. The Astrometric Recognition of the Solar Clementine Gnomon (1702)

    NASA Astrophysics Data System (ADS)

    Sigismondi, Costantino

    The Clementine gnomon was built in 1702 to measure the variation of the Earth's obliquity. For this reason the pinhole was located in walls dating from Diocletian's time (A.D. 305) in order to remain stable across the centuries, but its original form and position have since been modified. We used an astrometric method to recover the original position of the pinhole: reshaping the pinhole to a circle of 1.5 cm diameter, the positions of the Northern and Southern limbs were compared with the ephemerides. A systematic shift of 4.5 mm southward of the whole solar image shows that the original pinhole was 4.5 mm north of the actual position, as the images in Bianchini's book (1703) suggest. The oval shape of the actual pinhole is also wrong. Using a circle, the larger solar spots are clearly visible. Some reference stars from the catalogue of Philippe de la Hire (1702), originally used for measuring the ecliptic latitude of the Sun, are written next to the meridian line, but after the last restoration (2000) four of them are wrongly located. Finally, the deviation of the meridian line's azimuth from true North confirms the value recovered in 1750. This, together with the local deviations from a true line, will remain as a systematic error, as for all such historical instruments.

  17. Is it possible to predict office hysteroscopy failure?

    PubMed

    Cobellis, Luigi; Castaldi, Maria Antonietta; Giordano, Valentino; De Franciscis, Pasquale; Signoriello, Giuseppe; Colacurci, Nicola

    2014-10-01

    The purpose of this study was to develop a clinical tool, the HFI (Hysteroscopy Failure Index), which gives criteria to predict hysteroscopic examination failure. This was a retrospective diagnostic test study, aimed at validating the HFI, set at the Department of Gynaecology, Obstetric and Reproductive Science of the Second University of Naples, Italy. The HFI was applied to our database of 995 consecutive women who underwent office hysteroscopy to assess abnormal uterine bleeding (AUB), infertility, cervical polyps, and abnormal sonographic patterns (postmenopausal endometrial thickness of more than 5 mm, endometrial hyperechogenic spots, irregular endometrial line, suspected uterine septa). Demographic characteristics, previous surgery, recurrent infections, sonographic data, estro-progestin use, IUD use and menopausal status were collected. Receiver operating characteristic (ROC) curve analysis was used to assess the ability of the model to identify failed hysteroscopies, i.e., the number correctly identified (true positives) divided by the total number of failed hysteroscopies (true positives + false negatives). Positive and negative likelihood ratios with 95% CI were calculated. The HFI score was able to predict office hysteroscopy failure in 76% of cases. Moreover, the positive likelihood ratio was 11.37 (95% CI: 8.49-15.21), and the negative likelihood ratio was 0.33 (95% CI: 0.27-0.41). The Hysteroscopy Failure Index was able to retrospectively predict office hysteroscopy failure. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
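
    Positive and negative likelihood ratios follow directly from sensitivity and specificity: LR+ = Se/(1-Sp) and LR- = (1-Se)/Sp. A minimal sketch with hypothetical Se/Sp values chosen only to land near the reported 11.37 and 0.33; the study's own sensitivity and specificity are not restated here.

    ```python
    # Positive and negative likelihood ratios from sensitivity and specificity:
    #   LR+ = Se / (1 - Sp),   LR- = (1 - Se) / Sp
    # The Se/Sp values are hypothetical, chosen to land near the reported ratios.
    def likelihood_ratios(sensitivity, specificity):
        lr_pos = sensitivity / (1 - specificity)
        lr_neg = (1 - sensitivity) / specificity
        return lr_pos, lr_neg

    print(likelihood_ratios(sensitivity=0.68, specificity=0.94))
    ```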

  18. Remembering others: using life scripts to access positive but not negative information.

    PubMed

    White, Hedy; Coppola, Harmony A; Multunas, Nichole K

    2008-01-01

    The current research extended to memories of others the life script theory of abstract, idealized mental representations of transitional experiences. Recent and earlier high school graduates rated positive and negative characteristics of popular, average, and unpopular girls from their schools. "Average" girls were rated as higher than average on possessing positive characteristics. Recent but not earlier graduates distinguished between popularity conditions on negative characteristics (negative information is not included in life scripts). For positive characteristics, earlier graduates remembered unpopular girls less favorably (perhaps using stereotypical scripts) than recent graduates remembered them (having greater access to episodic memories of individual girls). A smaller graduation time difference in the same direction resulted for average and popular girls.

  19. Electron Gun and Collector Design for 94 GHz Gyro-amplifiers.

    NASA Astrophysics Data System (ADS)

    Nguyen, K.; Danly, B.; Levush, B.; Blank, M.; True, D.; Felch, K.; Borchard, P.

    1997-05-01

    The electrical design of the magnetron injection gun and collector for high average power TE_01 gyro-amplifiers has recently been completed using the EGUN (W.B. Herrmannsfeldt, AIP Conf. Proc. 177, pp. 45-58, 1988) and DEMEOS (R. True, AIP Conf. Proc. 297, pp. 493-499, 1993) codes. The gun employs an optimized double-anode geometry and a radical cathode cone angle of 50° to achieve superior beam optics that are relatively insensitive to electrode misalignments and field errors. A perpendicular velocity spread of 1.6% at a perpendicular-to-axial velocity ratio of 1.52 is obtained for a 6 A, 65 kV beam. The 1.28" diameter collector, which also serves as the output waveguide, has an average power density of < 350 W/cm^2 for a 59 kW average power beam. Details will be presented at the conference.

  20. Empathizing-systemizing cognitive styles: Effects of sex and academic degree

    PubMed Central

    Kaganovskiy, Leon; Baron-Cohen, Simon

    2018-01-01

    This study tests if the drives to empathize (E) and systemize (S), measured by the Systemizing Quotient-Revised (SQ-R) and Empathy Quotient (EQ), show effects of sex and academic degree. The responses of 419 students from the Humanities and the Physical Sciences were analyzed in terms of the E-S theory predictions. Results confirm that there is an interaction between sex, degree and the drive to empathize relative to systemize. Female students in the Humanities on average had a stronger drive to empathize than to systemize in comparison to males in the Humanities. Male students in the Sciences on average had a stronger drive to systemize than to empathize in comparison to females in the Sciences. Finally, students in the Sciences on average had a stronger drive to systemize than to empathize, irrespective of their sex. The reverse is true for students in the Humanities. These results strongly replicate earlier findings. PMID:29579056
