Relating Cirrus Cloud Properties to Observed Fluxes: A Critical Assessment.
NASA Astrophysics Data System (ADS)
Vogelmann, A. M.; Ackerman, T. P.
1995-12-01
The accuracy needed in cirrus cloud scattering and microphysical properties is quantified such that the radiative effect on climate can be determined. Our ability to compute and observe these properties to within needed accuracies is assessed, with the greatest attention given to those properties that most affect the fluxes. Model calculations indicate that computing net longwave fluxes at the surface to within ±5% requires that cloud temperature be known to within as little as ±3 K in cold climates for extinction optical depths greater than two. Such accuracy could be more difficult to obtain than that needed in the values of scattering parameters. For a baseline case (defined in text), computing net shortwave fluxes at the surface to within ±5% requires accuracies in cloud ice water content that, when the optical depth is greater than 1.25, are beyond the accuracies of current measurements. Similarly, surface shortwave flux computations require accuracies in the asymmetry parameter that are beyond our current abilities when the optical depth is greater than four. Unless simplifications are discovered, the scattering properties needed to compute cirrus cloud fluxes cannot be obtained explicitly with existing scattering algorithms because the range of crystal sizes is too great and crystal shapes are too varied to be treated computationally. Thus, bulk cirrus scattering properties might be better obtained by inverting cirrus cloud fluxes and radiances. Finally, typical aircraft broadband flux measurements are not sufficiently accurate to provide a convincing validation of calculations. In light of these findings we recommend a reexamination of the methodology used in field programs such as FIRE and suggest a complementary approach.
Estimation of suspended-sediment rating curves and mean suspended-sediment loads
Crawford, Charles G.
1991-01-01
A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear model was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
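The bias-corrected, transformed-linear approach the abstract compares can be sketched in plain Python: fit a log-log rating curve by ordinary least squares, correct the back-transformation bias, and compute the mean load as a probability-weighted sum over flow-duration classes. All variable names are hypothetical, and Duan's smearing estimator stands in here for whichever bias correction the study actually used.

```python
import math

def fit_rating_curve(q, s):
    """Fit log10(s) = a + b*log10(q) by ordinary least squares.
    Returns (a, b, smearing), where `smearing` is Duan's nonparametric
    bias-correction factor for the log-to-linear back-transformation."""
    x = [math.log10(v) for v in q]
    y = [math.log10(v) for v in s]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    smearing = sum(10 ** r for r in resid) / n  # Duan (1983) smearing estimator
    return a, b, smearing

def mean_load(a, b, smearing, flows, probs):
    """Flow-duration, rating-curve estimate of mean load: the
    probability-weighted sum of bias-corrected predicted loads."""
    return sum(p * smearing * 10 ** (a + b * math.log10(qf))
               for qf, p in zip(flows, probs))
```

With noise-free power-law data the residuals vanish, so the smearing factor is exactly 1 and the fitted exponent is recovered; with real data the factor exceeds 1 and inflates the back-transformed loads accordingly.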
Sex discrimination potential of buccolingual and mesiodistal tooth dimensions.
Acharya, Ashith B; Mainali, Sneedha
2008-07-01
Tooth crown dimensions are reasonably accurate predictors of sex and are useful adjuncts in sex assessment. This study explores the utility of buccolingual (BL) and mesiodistal (MD) measurements in sex differentiation when used independently. BL and MD measurements of 28 teeth (third molars excluded) were obtained from a group of 53 Nepalese subjects (22 women and 31 men) aged 19-28 years. Stepwise discriminant analyses were undertaken separately for both types of tooth crown variables and their accuracy in sex classification compared with one another. MD dimensions had recognizably greater accuracy (77.4-83%) in sex identification than BL measurements (62.3-64.2%)--results that are consistent with previous reports. However, the accuracy of MD variables is not high enough to warrant their exclusive use in odontometric sex assessment--higher accuracy levels have been obtained when both types of dimensions were used concurrently, implying that BL variables contribute to sex assessment to some extent. Hence, it is inferred that optimal results in dental sex assessment are obtained when both MD and BL variables are used together.
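The study's stepwise discriminant analysis is multivariate, but the core idea of sexing from a single tooth dimension reduces to a one-variable linear discriminant with equal priors: classify each measurement to the nearer group mean. The sketch below is that simplification only, with hypothetical measurements; it is not the authors' procedure.

```python
def midpoint_rule_accuracy(women, men):
    """Resubstitution accuracy of a one-variable midpoint discriminant.
    Values below the midpoint of the two group means are called female,
    values at or above it male (assumes men have the larger mean, as is
    typical for mesiodistal crown dimensions)."""
    mw = sum(women) / len(women)
    mm = sum(men) / len(men)
    cut = (mw + mm) / 2.0
    correct = sum(v < cut for v in women) + sum(v >= cut for v in men)
    return correct / (len(women) + len(men))
```

Overlapping distributions drive the accuracy toward chance (0.5), which is why the abstract's 62-83% figures, and the gain from combining BL and MD variables, matter.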
Alternative magnetic flux leakage modalities for pipeline inspection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katragadda, G.; Lord, W.; Sun, Y.S.
1996-05-01
Increasing quality consciousness is placing higher demands on the accuracy and reliability of inspection systems used in defect detection and characterization. Nondestructive testing techniques often rely on using multi-transducer approaches to obtain greater defect sensitivity. This paper investigates the possibility of taking advantage of alternative modalities associated with the standard magnetic flux leakage tool to obtain additional defect information, while still using a single excitation source.
Adams, Allison; Santi, Angelo
2011-03-01
Following training to match 2- and 8-sec durations of feederlight to red and green comparisons with a 0-sec baseline delay, pigeons were allowed to choose to take a memory test or to escape the memory test. The effects of sample omission, increases in retention interval, and variation in trial spacing on selection of the escape option and accuracy were studied. During initial testing, escaping the test did not increase as the task became more difficult, and there was no difference in accuracy between chosen and forced memory tests. However, with extended training, accuracy for chosen tests was significantly greater than for forced tests. In addition, two pigeons exhibited higher accuracy on chosen tests than on forced tests at the short retention interval and greater escape rates at the long retention interval. These results have not been obtained in previous studies with pigeons when the choice to take the test or to escape the test is given before test stimuli are presented. It appears that task-specific methodological factors may determine whether a particular species will exhibit the two behavioral effects that were initially proposed as potentially indicative of metacognition.
Lidar-revised geologic map of the Poverty Bay 7.5' quadrangle, King and Pierce Counties, Washington
Tabor, Rowland W.; Booth, Derek B.; Troost, Kathy Goetz
2014-01-01
In 2003, the Puget Sound Lidar Consortium obtained a lidar-derived digital elevation model (DEM) for the Puget Sound region including all of the Poverty Bay 7.5' quadrangle. For a brief description of lidar (LIght Detection And Ranging) and this data acquisition program, see Haugerud and others (2003). This new DEM has a horizontal resolution and accuracy of 6 ft (2 m) and vertical accuracy of approximately 1 ft (0.3 m). The greater resolution and accuracy of the lidar DEM have facilitated a new interpretation of the geology, especially the distribution and relative age of some surficial deposits.
Accuracy of Nonverbal Communication as Determinant of Interpersonal Expectancy Effects
ERIC Educational Resources Information Center
Zuckerman, Miron; And Others
1978-01-01
The person perception paradigm was used to address the effects of experimenters' ability to encode nonverbal cues and subjects' ability to decode nonverbal cues on magnitude of expectancy effects. Greater expectancy effects were obtained when experimenters were better encoders and subjects were better decoders of nonverbal cues. (Author)
High Accuracy Temperature Measurements Using RTDs with Current Loop Conditioning
NASA Technical Reports Server (NTRS)
Hill, Gerald M.
1997-01-01
To measure temperatures with a greater degree of accuracy than is possible with thermocouples, RTDs (Resistive Temperature Detectors) are typically used. Calibration standards use specialized high precision RTD probes with accuracies approaching 0.001 F. These are extremely delicate devices, and far too costly to be used in test facility instrumentation. Less costly sensors which are designed for aeronautical wind tunnel testing are available and can be readily adapted to probes, rakes, and test rigs. With proper signal conditioning of the sensor, temperature accuracies of 0.1 F are obtainable. For reasons that will be explored in this paper, the Anderson current loop is the preferred method used for signal conditioning. This scheme has been used in NASA Lewis Research Center's 9 x 15 Low Speed Wind Tunnel, and is detailed here.
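Whatever the conditioning scheme, the final step is converting the measured RTD resistance to temperature. A common way to do this for platinum RTDs above 0 degC is the Callendar-Van Dusen relation with the standard IEC 60751 coefficients, sketched below; the paper itself does not state which conversion it uses, so treat this as a generic illustration.

```python
import math

# Standard IEC 60751 Callendar-Van Dusen coefficients for industrial
# platinum RTDs, valid for t >= 0 degC (an assumption, not from the paper).
A = 3.9083e-3   # 1/degC
B = -5.775e-7   # 1/degC^2

def pt_resistance(t_c, r0=100.0):
    """Resistance (ohms) of a platinum RTD at t_c degC; r0 is the
    0-degC resistance (100 ohms for a Pt100 element)."""
    return r0 * (1.0 + A * t_c + B * t_c * t_c)

def pt_temperature(r_ohm, r0=100.0):
    """Recover temperature by solving B*t^2 + A*t + (1 - r/r0) = 0
    for the physical root."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - r_ohm / r0))) / (2.0 * B)
```

For a Pt100 this reproduces the familiar 138.5-ohm reading at 100 degC, and the quadratic inversion round-trips exactly, so conversion error is negligible next to the sensor and conditioning errors discussed above.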
Older and Younger Adults’ Accuracy in Discerning Health and Competence in Older and Younger Faces
Zebrowitz, Leslie A.; Franklin, Robert G.; Boshyan, Jasmine; Luevano, Victor; Agrigoroaei, Stefan; Milosavljevic, Bosiljka; Lachman, Margie E.
2015-01-01
We examined older and younger adults’ accuracy judging the health and competence of faces. Accuracy differed significantly from chance and varied with face age but not rater age. Health ratings were more accurate for older than younger faces, with the reverse for competence ratings. Accuracy was greater for low attractive younger faces, but not for low attractive older faces. Greater accuracy judging older faces’ health was paralleled by greater validity of attractiveness and looking older as predictors of their health. Greater accuracy judging younger faces’ competence was paralleled by greater validity of attractiveness and a positive expression as predictors of their competence. Although the ability to recognize variations in health and cognitive ability is preserved in older adulthood, the effects of face age on accuracy and the different effects of attractiveness across face age may alter social interactions across the life span. PMID:25244467
Dhyani, Manish; Vij, Abhinav; Bhan, Atul K.; Halpern, Elkan F.; Méndez-Navarro, Jorge; Corey, Kathleen E.; Chung, Raymond T.
2015-01-01
Purpose To evaluate the accuracy of shear-wave elastography (SWE) for staging liver fibrosis in patients with diffuse liver disease (including patients with hepatitis C virus [HCV]) and to determine the relative accuracy of SWE measurements obtained from different hepatic acquisition sites for staging liver fibrosis. Materials and Methods The institutional review board approved this single-institution prospective study, which was performed between January 2010 and March 2013 in 136 consecutive patients who underwent SWE before their scheduled liver biopsy (age range, 18–76 years; mean age, 49 years; 70 men, 66 women). Informed consent was obtained from all patients. SWE measurements were obtained at four sites in the liver. Biopsy specimens were reviewed in a blinded manner by a pathologist using METAVIR criteria. SWE measurements and biopsy results were compared by using the Spearman correlation and receiver operating characteristic (ROC) curve analysis. Results SWE values obtained at the upper right lobe showed the highest correlation with estimation of fibrosis (r = 0.41, P < .001). Inflammation and steatosis did not show any correlation with SWE values except for values from the left lobe, which showed correlation with steatosis (r = 0.24, P = .004). The area under the ROC curve (AUC) in the differentiation of stage F2 fibrosis or greater, stage F3 fibrosis or greater, and stage F4 fibrosis was 0.77 (95% confidence interval [CI]: 0.68, 0.86), 0.82 (95% CI: 0.75, 0.91), and 0.82 (95% CI: 0.70, 0.95), respectively, for all subjects who underwent liver biopsy. The corresponding AUCs for the subset of patients with HCV were 0.80 (95% CI: 0.67, 0.92), 0.82 (95% CI: 0.70, 0.95), and 0.89 (95% CI: 0.73, 1.00). The adjusted AUCs for differentiating stage F2 or greater fibrosis in patients with chronic liver disease and those with HCV were 0.84 and 0.87, respectively. 
Conclusion SWE estimates of liver stiffness obtained from the right upper lobe showed the best correlation with liver fibrosis severity and can potentially be used as a noninvasive test to differentiate intermediate degrees of liver fibrosis in patients with liver disease. © RSNA, 2014 Online supplemental material is available for this article. PMID:25393946
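The AUCs reported above come from ROC analysis of a continuous stiffness value against a dichotomized biopsy stage (e.g. >= F2 versus < F2). The empirical AUC is just the Mann-Whitney probability that a random positive outscores a random negative, which is easy to compute directly; the sketch below uses hypothetical stiffness values, not the study's data.

```python
def roc_auc(scores_pos, scores_neg):
    """Empirical AUC via the Mann-Whitney U statistic: the probability
    that a randomly chosen positive case (e.g. stage >= F2) has a higher
    stiffness value than a randomly chosen negative, with ties counted
    as half a win."""
    n_pairs = len(scores_pos) * len(scores_neg)
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / n_pairs
```

Perfect separation gives 1.0 and fully overlapping groups give 0.5, which frames the study's 0.77-0.89 values as good but imperfect discrimination.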
Processing of emotional reactivity and emotional memory over sleep
Baran, Bengi; Pace-Schott, Edward F.; Ericson, Callie; Spencer, Rebecca M. C.
2012-01-01
Sleep enhances memories, particularly emotional memories. As such, it has been suggested that sleep deprivation may reduce post-traumatic stress disorder. This presumes that emotional memory consolidation is paralleled by a reduction in emotional reactivity, an association that has not yet been examined. In the present experiment, we utilized an incidental memory task in humans and obtained valence and arousal ratings during two sessions separated either by 12 hours of daytime wake or 12 hours including overnight sleep. Recognition accuracy was greater following sleep relative to wake for both negative and neutral pictures. While emotional reactivity to negative pictures was greatly reduced over wake, the negative emotional response was relatively preserved over sleep. Moreover, protection of emotional reactivity was associated with greater time in REM sleep. Recognition accuracy, however, was not associated with REM. Thus, we provide the first evidence that sleep enhances emotional memory while preserving emotional reactivity. PMID:22262901
Assessment of progressively delayed prompts on guided skill learning in rats.
Reid, Alliston K; Futch, Sara E; Ball, Katherine M; Knight, Aubrey G; Tucker, Martha
2017-03-01
We examined the controlling factors that allow a prompted skill to become autonomous in a discrete-trials implementation of Touchette's (1971) progressively delayed prompting procedure, but our subjects were rats rather than children with disabilities. Our prompted skill was a left-right lever-press sequence guided by two panel lights. We manipulated (a) the effectiveness of the guiding lights prompt and (b) the presence or absence of a progressively delayed prompt in four groups of rats. The less effective prompt yielded greater autonomy than the more effective prompt. The ability of the progressively delayed prompt procedure to produce behavioral autonomy depended upon characteristics of the obtained delay (trial duration) rather than on the pending prompt. Sequence accuracy was reliably higher in unprompted trials than in prompted trials, and this difference was maintained in the 2 groups that received no prompts but yielded equivalent trial durations. Overall sequence accuracy decreased systematically as trial duration increased. Shorter trials and their greater accuracy were correlated with higher overall reinforcement rates for faster responding. Waiting for delayed prompts (even if no actual prompt was provided) was associated with lower overall reinforcement rate by decreasing accuracy and by lengthening trials. These findings extend results from previous studies regarding the controlling factors in delayed prompting procedures applied to children with disabilities.
Basic technique for solid lesions: Cytology, core, or both?
Hébert-Magee, Shantel
2014-01-01
This chapter highlights key fundamentals relevant to post-procurement tissue handling of materials obtained by aspiration and/or biopsy and details the subtle techniques that can significantly impact patient management and practice patterns. A basic knowledge of tissue handling and processing is imperative for endosonographers who attempt to achieve a greater than 95% diagnostic accuracy with their tissue-acquisition procedures. PMID:24949408
Theoferometer for High Accuracy Optical Alignment and Metrology
NASA Technical Reports Server (NTRS)
Toland, Ronald; Leviton, Doug; Koterba, Seth
2004-01-01
The accurate measurement of the orientation of optical parts and systems is a pressing problem for upcoming space missions, such as stellar interferometers, requiring the knowledge and maintenance of positions to the sub-arcsecond level. Theodolites, the devices commonly used to make these measurements, cannot provide the needed level of accuracy. This paper describes the design, construction, and testing of an interferometer system to fill the widening gap between future requirements and current capabilities. A Twyman-Green interferometer mounted on a 2 degree of freedom rotation stage is able to obtain sub-arcsecond, gravity-referenced tilt measurements of a sample alignment cube. Dubbed a 'theoferometer,' this device offers greater ease-of-use, accuracy, and repeatability than conventional methods, making it a suitable 21st-century replacement for the theodolite.
Dang, Mia; Ramsaran, Kalinda D; Street, Melissa E; Syed, S Noreen; Barclay-Goddard, Ruth; Stratford, Paul W; Miller, Patricia A
2011-01-01
To estimate the predictive accuracy and clinical usefulness of the Chedoke-McMaster Stroke Assessment (CMSA) predictive equations. A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from -0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted.
Impacts of land use/cover classification accuracy on regional climate simulations
NASA Astrophysics Data System (ADS)
Ge, Jianjun; Qi, Jiaguo; Lofgren, Brent M.; Moore, Nathan; Torbick, Nathan; Olson, Jennifer M.
2007-03-01
Land use/cover change has been recognized as a key component in global change. Various land cover data sets, including historically reconstructed, recently observed, and future projected, have been used in numerous climate modeling studies at regional to global scales. However, little attention has been paid to the effect of land cover classification accuracy on climate simulations, though accuracy assessment has become a routine procedure in land cover production community. In this study, we analyzed the behavior of simulated precipitation in the Regional Atmospheric Modeling System (RAMS) over a range of simulated classification accuracies over a 3 month period. This study found that land cover accuracy under 80% had a strong effect on precipitation especially when the land surface had a greater control of the atmosphere. This effect became stronger as the accuracy decreased. As shown in three follow-on experiments, the effect was further influenced by model parameterizations such as convection schemes and interior nudging, which can mitigate the strength of surface boundary forcings. In reality, land cover accuracy rarely obtains the commonly recommended 85% target. Its effect on climate simulations should therefore be considered, especially when historically reconstructed and future projected land covers are employed.
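The classification accuracy discussed here (including the commonly recommended 85% target) is the standard overall accuracy derived from an error (confusion) matrix: the fraction of validation samples whose mapped class matches the reference class. A minimal sketch, with a hypothetical two-class matrix:

```python
def overall_accuracy(confusion):
    """Overall accuracy from a square confusion matrix
    (rows = reference classes, columns = mapped classes):
    sum of the diagonal divided by the total sample count."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total
```

A matrix such as [[8, 2], [1, 9]] yields exactly the 0.85 threshold; the study's point is that maps falling below roughly this level begin to distort simulated precipitation.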
Laser-based Relative Navigation Using GPS Measurements for Spacecraft Formation Flying
NASA Astrophysics Data System (ADS)
Lee, Kwangwon; Oh, Hyungjik; Park, Han-Earl; Park, Sang-Young; Park, Chandeok
2015-12-01
This study presents a precise relative navigation algorithm using both laser and Global Positioning System (GPS) measurements in real time. The measurement model of the navigation algorithm between two spacecraft is comprised of relative distances measured by laser instruments and single differences of GPS pseudo-range measurements in spherical coordinates. Based on the measurement model, the Extended Kalman Filter (EKF) is applied to smooth the pseudo-range measurements and to obtain the relative navigation solution. While the navigation algorithm using only laser measurements might become inaccurate because of the limited accuracy of spacecraft attitude estimation when the distance between spacecraft is rather large, the proposed approach is able to provide an accurate solution even in such cases by employing the smoothed GPS pseudo-range measurements. Numerical simulations demonstrate that the errors of the proposed algorithm are reduced by more than about 12% compared to those of an algorithm using only laser measurements, as the accuracy of angular measurements is greater than 0.001° at relative distances greater than 30 km.
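The full EKF here filters a multi-dimensional state with a spherical-coordinate measurement model, but the role it plays for the GPS pseudo-ranges, smoothing a noisy range series toward the underlying value, can be illustrated with a scalar random-walk Kalman filter. This is a didactic stand-in, not the paper's algorithm; the noise variances are assumed.

```python
def kalman_smooth(measurements, q=1e-4, r=1.0):
    """Scalar random-walk Kalman filter over a noisy range series.
    q: assumed process-noise variance, r: assumed measurement-noise
    variance. Returns the filtered estimate at every step."""
    x, p = measurements[0], r      # initialize from the first sample
    out = [x]
    for z in measurements[1:]:
        p += q                     # predict: uncertainty grows by q
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # update with the innovation
        p *= (1.0 - k)             # posterior variance shrinks
        out.append(x)
    return out
```

With small q the filter averages aggressively, which is the behavior exploited above: smoothed pseudo-ranges stabilize the relative solution when raw laser-plus-attitude geometry alone is too noisy at long baselines.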
Performance of Airborne Precision Spacing Under Realistic Wind Conditions
NASA Technical Reports Server (NTRS)
Wieland, Frederick; Santos, Michel; Krueger, William; Houston, Vincent E.
2011-01-01
With the expected worldwide increase of air traffic during the coming decade, both the Federal Aviation Administration's (FAA's) Next Generation Air Transportation System (NextGen), as well as Eurocontrol's Single European Sky ATM Research (SESAR) program have, as part of their plans, air traffic management solutions that can increase performance without requiring time-consuming and expensive infrastructure changes. One such solution involves the ability of both controllers and flight crews to deliver aircraft to the runway with greater accuracy than is possible today. Previous research has shown that time-based spacing techniques, wherein the controller assigns a time spacing to each pair of arriving aircraft, is one way to achieve this goal by providing greater runway delivery accuracy that produces a concomitant increase in system-wide performance. The research described herein focuses on a specific application of time-based spacing, called Airborne Precision Spacing (APS), which has evolved over the past ten years. This research furthers APS understanding by studying its performance with realistic wind conditions obtained from atmospheric sounding data and with realistic wind forecasts obtained from the Rapid Update Cycle (RUC) short-range weather forecast. In addition, this study investigates APS performance with limited surveillance range, as provided by the Automatic Dependent Surveillance-Broadcast (ADS-B) system, and with an algorithm designed to improve APS performance when an ADS-B signal is unavailable. The results presented herein quantify the runway threshold delivery accuracy of APS under these conditions, and also quantify resulting workload metrics such as the number of speed changes required to maintain spacing.
Rapid insights from remote sensing in the geosciences
NASA Astrophysics Data System (ADS)
Plaza, Antonio
2015-03-01
The growing availability of capacity computing for atomistic materials modeling has encouraged the use of high-accuracy computationally intensive interatomic potentials, such as SNAP. These potentials also happen to scale well on petascale computing platforms. SNAP has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected on to a basis of hyperspherical harmonics in four dimensions. The computational cost per atom is much greater than that of simpler potentials such as Lennard-Jones or EAM, while the communication cost remains modest. We discuss a variety of strategies for implementing SNAP in the LAMMPS molecular dynamics package. We present scaling results obtained running SNAP on three different classes of machine: a conventional Intel Xeon CPU cluster; the Titan GPU-based system; and the combined Sequoia and Vulcan BlueGene/Q. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Dept. of Energy's National Nuclear Security Admin. under Contract DE-AC04-94AL85000.
Accuracy of Carotid Duplex Criteria in Diagnosis of Significant Carotid Stenosis in Asian Patients.
Dharmasaroja, Pornpatr A; Uransilp, Nattaphol; Watcharakorn, Arvemas; Piyabhan, Pritsana
2018-03-01
Extracranial carotid stenosis can be diagnosed by velocity criteria of carotid duplex. Whether these criteria can be accurately applied to define the severity of internal carotid artery (ICA) stenosis in Asian patients remains to be verified. The purpose of this study was to evaluate the accuracy of 2 carotid duplex velocity criteria in defining significant carotid stenosis. Carotid duplex studies and magnetic resonance angiography were reviewed. Criteria 1, recommended by the Society of Radiologists in Ultrasound, defines moderate stenosis (50%-69%) as peak systolic velocity (PSV) 125-230 cm/s and diastolic velocity (DV) 40-100 cm/s, and severe stenosis (>70%) as PSV greater than 230 cm/s and DV greater than 100 cm/s. Criteria 2 uses PSV greater than 140 cm/s with DV less than 110 cm/s to define moderate stenosis (50%-75%), and PSV greater than 140 cm/s with DV greater than 110 cm/s for severe stenosis (76%-95%). A total of 854 ICA segments were reviewed. There was moderate stenosis in 72 ICAs, severe stenosis in 50 ICAs, and occlusion in 78 ICAs. In detecting moderate stenosis, criteria 2 had slightly lower sensitivity but higher specificity and accuracy than criteria 1 (criteria 1: sensitivity 95%, specificity 83%, accuracy 84%; criteria 2: sensitivity 92%, specificity 92%, accuracy 92%). In detecting severe ICA stenosis, however, no significant difference in sensitivity, specificity, or accuracy was found (criteria 1: sensitivity 82%, specificity 99.57%, accuracy 98%; criteria 2: sensitivity 86%, specificity 99.68%, accuracy 99%). In the subgroup of moderate stenosis, the criteria using ICA PSV greater than 140 cm/s had higher specificity and accuracy than the criteria using ICA PSV 125-230 cm/s. However, there was no significant difference in detection of severe stenosis or occlusion of the ICA. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.
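The second set of velocity thresholds is simple enough to express directly as a decision rule. The sketch below encodes only what the abstract states; how the boundary case DV = 110 cm/s should be classified is not specified, so it is grouped with moderate stenosis here as an assumption.

```python
def classify_criteria2(psv, dv):
    """Apply the abstract's criteria 2 to ICA velocities (cm/s):
    PSV > 140 with DV > 110 -> severe stenosis (76%-95%);
    PSV > 140 with DV < 110 -> moderate stenosis (50%-75%);
    otherwise               -> below the 50% threshold.
    DV exactly 110 is unspecified in the abstract and is treated
    as moderate here."""
    if psv > 140:
        return "severe" if dv > 110 else "moderate"
    return "<50%"
```

For example, a PSV of 150 cm/s with a DV of 50 cm/s falls in the moderate band, while a PSV of 250 cm/s with a DV of 120 cm/s is severe.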
Genomic Prediction of Gene Bank Wheat Landraces.
Crossa, José; Jarquín, Diego; Franco, Jorge; Pérez-Rodríguez, Paulino; Burgueño, Juan; Saint-Pierre, Carolina; Vikram, Prashant; Sansaloni, Carolina; Petroli, Cesar; Akdemir, Deniz; Sneller, Clay; Reynolds, Matthew; Tattaris, Maria; Payne, Thomas; Guzman, Carlos; Peña, Roberto J; Wenzl, Peter; Singh, Sukhwinder
2016-07-07
This study examines genomic prediction within 8416 Mexican landrace accessions and 2403 Iranian landrace accessions stored in gene banks. The Mexican and Iranian collections were evaluated in separate field trials, including an optimum environment for several traits, and in two separate environments (drought, D and heat, H) for the highly heritable traits, days to heading (DTH), and days to maturity (DTM). Analyses accounting and not accounting for population structure were performed. Genomic prediction models include genotype × environment interaction (G × E). Two alternative prediction strategies were studied: (1) random cross-validation of the data in 20% training (TRN) and 80% testing (TST) (TRN20-TST80) sets, and (2) two types of core sets, "diversity" and "prediction", including 10% and 20%, respectively, of the total collections. Accounting for population structure decreased prediction accuracy by 15-20% as compared to prediction accuracy obtained when not accounting for population structure. Accounting for population structure gave prediction accuracies for traits evaluated in one environment for TRN20-TST80 that ranged from 0.407 to 0.677 for Mexican landraces, and from 0.166 to 0.662 for Iranian landraces. Prediction accuracy of the 20% diversity core set was similar to accuracies obtained for TRN20-TST80, ranging from 0.412 to 0.654 for Mexican landraces, and from 0.182 to 0.647 for Iranian landraces. The predictive core set gave similar prediction accuracy as the diversity core set for Mexican collections, but slightly lower for Iranian collections. Prediction accuracy when incorporating G × E for DTH and DTM for Mexican landraces for TRN20-TST80 was around 0.60, which is greater than without the G × E term. For Iranian landraces, accuracies were 0.55 for the G × E model with TRN20-TST80. Results show promising prediction accuracies for potential use in germplasm enhancement and rapid introgression of exotic germplasm into elite materials. 
Copyright © 2016 Crossa et al.
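The study's first validation strategy, a random 20% training / 80% testing split with prediction accuracy measured on the test set, can be sketched in a few lines. Plain Python stands in for the genomic prediction machinery; accuracy is taken here as the Pearson correlation between observed and predicted phenotypes, a common convention that the abstract does not spell out.

```python
import random

def trn20_tst80_split(ids, seed=0):
    """Random cross-validation split: 20% of accession ids for
    training (TRN), the remaining 80% for testing (TST)."""
    rng = random.Random(seed)
    shuffled = ids[:]
    rng.shuffle(shuffled)
    n_trn = max(1, round(0.20 * len(shuffled)))
    return shuffled[:n_trn], shuffled[n_trn:]

def pearson(obs, pred):
    """Prediction accuracy as the Pearson correlation between observed
    and predicted phenotypes in the TST set."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    vo = sum((o - mo) ** 2 for o in obs) ** 0.5
    vp = sum((p - mp) ** 2 for p in pred) ** 0.5
    return cov / (vo * vp)
```

Repeating the split over many seeds and averaging the correlations is the usual way such 0.4-0.7 accuracy figures are obtained.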
Genomic Prediction of Gene Bank Wheat Landraces
Crossa, José; Jarquín, Diego; Franco, Jorge; Pérez-Rodríguez, Paulino; Burgueño, Juan; Saint-Pierre, Carolina; Vikram, Prashant; Sansaloni, Carolina; Petroli, Cesar; Akdemir, Deniz; Sneller, Clay; Reynolds, Matthew; Tattaris, Maria; Payne, Thomas; Guzman, Carlos; Peña, Roberto J.; Wenzl, Peter; Singh, Sukhwinder
2016-01-01
This study examines genomic prediction within 8416 Mexican landrace accessions and 2403 Iranian landrace accessions stored in gene banks. The Mexican and Iranian collections were evaluated in separate field trials, including an optimum environment for several traits, and in two separate environments (drought, D and heat, H) for the highly heritable traits, days to heading (DTH), and days to maturity (DTM). Analyses accounting and not accounting for population structure were performed. Genomic prediction models include genotype × environment interaction (G × E). Two alternative prediction strategies were studied: (1) random cross-validation of the data in 20% training (TRN) and 80% testing (TST) (TRN20-TST80) sets, and (2) two types of core sets, “diversity” and “prediction”, including 10% and 20%, respectively, of the total collections. Accounting for population structure decreased prediction accuracy by 15–20% as compared to prediction accuracy obtained when not accounting for population structure. Accounting for population structure gave prediction accuracies for traits evaluated in one environment for TRN20-TST80 that ranged from 0.407 to 0.677 for Mexican landraces, and from 0.166 to 0.662 for Iranian landraces. Prediction accuracy of the 20% diversity core set was similar to accuracies obtained for TRN20-TST80, ranging from 0.412 to 0.654 for Mexican landraces, and from 0.182 to 0.647 for Iranian landraces. The predictive core set gave similar prediction accuracy as the diversity core set for Mexican collections, but slightly lower for Iranian collections. Prediction accuracy when incorporating G × E for DTH and DTM for Mexican landraces for TRN20-TST80 was around 0.60, which is greater than without the G × E term. For Iranian landraces, accuracies were 0.55 for the G × E model with TRN20-TST80. Results show promising prediction accuracies for potential use in germplasm enhancement and rapid introgression of exotic germplasm into elite materials. PMID:27172218
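The TRN20-TST80 strategy described above is a plain random partition of the collection, with prediction accuracy conventionally reported as the correlation between predicted and observed values. A minimal sketch in Python (the function names and the correlation-as-accuracy convention are illustrative assumptions, not code from the study):

```python
import random

def trn_tst_split(ids, trn_frac=0.20, seed=1):
    """Randomly partition accession ids into training (TRN) and testing (TST) sets."""
    ids = list(ids)
    rng = random.Random(seed)
    rng.shuffle(ids)
    n_trn = int(round(trn_frac * len(ids)))
    return ids[:n_trn], ids[n_trn:]

def pearson(x, y):
    """Prediction accuracy as the Pearson correlation of predicted vs. observed values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# 20% of the 8416 Mexican accessions go to training, the rest to testing
trn, tst = trn_tst_split(range(8416))
print(len(trn), len(tst))  # -> 1683 6733
```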
Dang, Mia; Ramsaran, Kalinda D.; Street, Melissa E.; Syed, S. Noreen; Barclay-Goddard, Ruth; Miller, Patricia A.
2011-01-01
Purpose: To estimate the predictive accuracy and clinical usefulness of the Chedoke–McMaster Stroke Assessment (CMSA) predictive equations. Method: A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Results: Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from −0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. Conclusions: This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted. PMID:22654239
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wehrschuetz, M., E-mail: martin.wehrschuetz@klinikum-graz.at; Aschauer, M.; Portugaller, H.
The purpose of this study was to assess interobserver variability and accuracy in the evaluation of renal artery stenosis (RAS) with gadolinium-enhanced MR angiography (MRA) and digital subtraction angiography (DSA) in patients with hypertension. The authors found that source images are more accurate than maximum intensity projection (MIP) for depicting renal artery stenosis. Two independent radiologists reviewed MRA and DSA from 38 patients with hypertension. Studies were postprocessed to display images in MIP and source images. DSA was the standard for comparison in each patient. For each main renal artery, percentage stenosis was estimated for any stenosis detected by the two radiologists. To calculate sensitivity, specificity and accuracy, MRA studies and stenoses were categorized as normal, mild (1-39%), moderate (40-69%), severe (≥70%), or occluded. DSA stenosis estimates of 70% or greater were considered hemodynamically significant. Analysis of variance demonstrated that MIP estimates of stenosis were greater than source image estimates for both readers. Differences in estimates for MIP versus DSA reached significance in one reader. The interobserver variance for MIP, source images and DSA was excellent (0.80 < κ ≤ 0.90). The specificity of source images was high (97%) but less for MIP (87%); average accuracy was 92% for MIP and 98% for source images. In this study, source images were significantly more accurate than MIP images in one reader, with a similar trend observed in the second reader. The interobserver variability was excellent. When renal artery stenosis is a consideration, high accuracy can only be obtained when source images are examined.
Phase noise in pulsed Doppler lidar and limitations on achievable single-shot velocity accuracy
NASA Technical Reports Server (NTRS)
Mcnicholl, P.; Alejandro, S.
1992-01-01
The smaller sampling volumes afforded by Doppler lidars compared to radars allow for spatial resolutions at and below some shear and turbulence wind-structure scale sizes. This has brought new emphasis on achieving the optimum product of wind velocity and range resolutions. Several recent studies have considered the effects of amplitude noise, reduction algorithms, and possible hardware-related signal artifacts on obtainable velocity accuracy. We discuss here the limitation on this accuracy resulting from the incoherent nature and finite temporal extent of backscatter from aerosols. For a lidar return from a hard (or slab) target, the phase of the intermediate frequency (IF) signal is random and the total return energy fluctuates from shot to shot due to speckle; however, the offset from the transmitted frequency is determinable with an accuracy subject only to instrumental effects and the signal-to-noise ratio (SNR), the noise being determined by the local oscillator (LO) power in the shot-noise-limited regime. This is not the case for a return from a medium extending over a range on the order of or greater than the spatial extent of the transmitted pulse, such as from atmospheric aerosols. In this case, the phase of the IF signal will exhibit a temporal random-walk-like behavior. It will be uncorrelated over times greater than the pulse duration as the transmitted pulse samples non-overlapping volumes of scattering centers. Frequency analysis of the IF signal in a window similar to the transmitted pulse envelope will therefore show shot-to-shot frequency deviations on the order of the inverse pulse duration, reflecting the random phase rate variations. Like speckle, these deviations arise from the incoherent nature of the scattering process and diminish if the IF signal is averaged over times greater than a single range resolution cell (here the pulse duration).
Apart from limiting the high SNR performance of a Doppler lidar, this shot-to-shot variance in velocity estimates has a practical impact on lidar design parameters. In high SNR operation, for example, a lidar's efficiency in obtaining mean wind measurements is determined by its repetition rate and not pulse energy or average power. In addition, this variance puts a practical limit on the shot-to-shot hard target performance required of a lidar.
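The scale of this effect follows from two facts stated above: the frequency deviations are on the order of the inverse pulse duration, and the Doppler shift is f_D = 2v/λ. An order-of-magnitude sketch (the wavelength and pulse duration are assumed example values, not parameters from this work):

```python
# Single-shot velocity spread implied by shot-to-shot frequency deviations
# ~ 1/(pulse duration), using the Doppler relation f_D = 2 v / wavelength.
wavelength = 2.0e-6   # m, assumed coherent-lidar wavelength
tau_pulse = 0.5e-6    # s, assumed transmitted pulse duration

delta_f = 1.0 / tau_pulse             # Hz, frequency deviation ~ inverse pulse duration
delta_v = wavelength * delta_f / 2.0  # m/s, corresponding velocity spread

print(delta_f, delta_v)  # -> 2000000.0 2.0
```

So for these assumed values, a single shot carries a velocity uncertainty of order 2 m/s, which only averages down over many pulses.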
Kim, Michele M; Zhu, Timothy C
2013-02-02
During HPPH-mediated pleural photodynamic therapy (PDT), it is critical to determine the anatomic geometry of the pleural surface quickly, as there may be movement during treatment resulting in changes to the cavity. We have developed a laser scanning device for this purpose, which has the potential to obtain the surface geometry in real time. A red diode laser with a holographic template to create a pattern and a camera with auto-focusing abilities are used to scan the cavity. In conjunction with a calibration with a known surface, we can use methods of triangulation to reconstruct the surface. Using a chest phantom, we are able to obtain a 360 degree scan of the interior in under 1 minute. The chest phantom scan was compared to an existing CT scan to determine its accuracy. The laser-camera separation can be determined through the calibration with 2 mm accuracy. The device is best suited for environments that are on the scale of a chest cavity (between 10 cm and 40 cm). This technique has the potential to produce cavity geometry in real time during treatment. This would enable PDT treatment dosage to be determined with greater accuracy. Work is ongoing to build a miniaturized device that moves the light source and camera via a fiber-optic bundle commonly used for endoscopy, with increased accuracy.
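The triangulation step rests on the textbook laser-camera relation, in which depth is recovered from the known laser-camera separation (baseline), the camera focal length, and the observed image offset of the projected pattern. A generic sketch, not the authors' calibration procedure:

```python
def triangulate_depth(baseline_m, focal_px, disparity_px):
    """Classic laser-camera triangulation: depth = baseline * focal / disparity."""
    return baseline_m * focal_px / disparity_px

# Hypothetical numbers: 5 cm laser-camera separation, 800 px focal length,
# 200 px image offset of the projected laser spot.
print(triangulate_depth(0.05, 800.0, 200.0))  # -> 0.2  (20 cm, within the 10-40 cm range)
```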
Strong Tracking Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking.
Liu, Hua; Wu, Wen
2017-03-31
Conventional spherical simplex-radial cubature Kalman filter (SSRCKF) for maneuvering target tracking may decline in accuracy and even diverge when a target makes abrupt state changes. To overcome this problem, a novel algorithm named strong tracking spherical simplex-radial cubature Kalman filter (STSSRCKF) is proposed in this paper. The proposed algorithm uses the spherical simplex-radial (SSR) rule to obtain higher accuracy than the cubature Kalman filter (CKF) algorithm. Meanwhile, by introducing the strong tracking filter (STF) into SSRCKF and modifying the predicted states' error covariance with a time-varying fading factor, the gain matrix is adjusted online so that the robustness of the filter and the capability of dealing with uncertainty factors are improved. In this way, the proposed algorithm has the advantages of both STF's strong robustness and SSRCKF's high accuracy. Finally, a maneuvering target tracking problem with abrupt state changes is used to test the performance of the proposed filter. Simulation results show that the STSSRCKF algorithm achieves better estimation accuracy and greater robustness for maneuvering target tracking.
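The strong-tracking idea can be illustrated on a plain linear Kalman filter: a fading factor lam ≥ 1 inflates the predicted error covariance, which keeps the gain large enough to follow abrupt state changes. This is a 1-D sketch of that mechanism only, not the SSR cubature filter of the paper:

```python
def kf_step(x, P, z, F=1.0, H=1.0, Q=0.01, R=1.0, lam=1.0):
    """One scalar Kalman step with a strong-tracking fading factor lam >= 1."""
    x_pred = F * x
    P_pred = lam * (F * P * F) + Q   # fading factor inflates the prior covariance
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new, K

# With lam > 1 the gain K is larger, so a sudden measurement jump is tracked faster.
_, _, K1 = kf_step(0.0, 1.0, 5.0, lam=1.0)
_, _, K5 = kf_step(0.0, 1.0, 5.0, lam=5.0)
print(K1 < K5)  # -> True
```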
Energy Expenditure in Critically Ill Elderly Patients: Indirect Calorimetry vs Predictive Equations.
Segadilha, Nara L A L; Rocha, Eduardo E M; Tanaka, Lilian M S; Gomes, Karla L P; Espinoza, Rodolfo E A; Peres, Wilza A F
2017-07-01
Predictive equations (PEs) are used for estimating resting energy expenditure (REE) when the measurements obtained from indirect calorimetry (IC) are not available. This study evaluated the degree of agreement and the accuracy between the REE measured by IC (REE-IC) and REE estimated by PE (REE-PE) in mechanically ventilated elderly patients admitted to the intensive care unit (ICU). REE-IC of 97 critically ill elderly patients was compared with REE-PE by 6 PEs: Harris and Benedict (HB) multiplied by the correction factor of 1.2; European Society for Clinical Nutrition and Metabolism (ESPEN) using the minimum (ESPENmi), average (ESPENme), and maximum (ESPENma) values; Mifflin-St Jeor; Ireton-Jones (IJ); Fredrix; and Lührmann. Degree of agreement between REE-PE and REE-IC was analyzed by the interclass correlation coefficient and the Bland-Altman test. The accuracy was calculated by the percentage of male and/or female patients whose REE-PE values differ by up to ±10% in relation to REE-IC. For both sexes, there was no difference for average REE-IC in kcal/kg when the values obtained with REE-PE by corrected HB and ESPENme were compared. A high level of agreement was demonstrated by corrected HB for both sexes, with greater accuracy for women. The best accuracy in the male group was obtained with the IJ equation but with a low level of agreement. The effectiveness of PEs is limited for estimating REE of critically ill elderly patients. Nonetheless, HB multiplied by a correction factor of 1.2 can be used until a specific PE for this group of patients is developed.
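The corrected Harris-Benedict estimate is the classic 1919 equation multiplied by the 1.2 factor named in the abstract. A sketch using the widely cited original coefficients (verify against the study's source before clinical use; the patient values are made up):

```python
def harris_benedict_corrected(sex, weight_kg, height_cm, age_yr, factor=1.2):
    """Classic Harris-Benedict REE (kcal/day) times a correction factor.
    Coefficients are the commonly quoted 1919 originals."""
    if sex == "male":
        ree = 66.4730 + 13.7516 * weight_kg + 5.0033 * height_cm - 6.7550 * age_yr
    else:
        ree = 655.0955 + 9.5634 * weight_kg + 1.8496 * height_cm - 4.6756 * age_yr
    return factor * ree

# Hypothetical 75-year-old male ICU patient, 70 kg, 170 cm
print(round(harris_benedict_corrected("male", 70, 170, 75)))  # -> 1648
```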
Digital Receiver for Microwave Radiometry
NASA Technical Reports Server (NTRS)
Ellingson, Steven W.; Hampson, Grant A.; Johnson, Joel T.
2005-01-01
A receiver proposed for use in L-band microwave radiometry (for measuring soil moisture and sea salinity) would utilize digital signal processing to suppress interfering signals. Heretofore, radio frequency interference has made it necessary to limit such radiometry to a frequency band about 20 MHz wide, centered at 1,413 MHz. The suppression of interference in the proposed receiver would make it possible to expand the frequency band to a width of 100 MHz, thereby making it possible to obtain greater sensitivity and accuracy in measuring moisture and salinity.
MC-PDFT can calculate singlet-triplet splittings of organic diradicals
NASA Astrophysics Data System (ADS)
Stoneburner, Samuel J.; Truhlar, Donald G.; Gagliardi, Laura
2018-02-01
The singlet-triplet splittings of a set of diradical organic molecules are calculated using multiconfiguration pair-density functional theory (MC-PDFT), and the results are compared with those obtained by Kohn-Sham density functional theory (KS-DFT) and complete active space second-order perturbation theory (CASPT2) calculations. We found that MC-PDFT, even with small and systematically defined active spaces, is competitive in accuracy with CASPT2, and it yields results with greater accuracy and precision than Kohn-Sham DFT with the parent functional. MC-PDFT also avoids the challenges associated with spin contamination in KS-DFT. It is also shown that MC-PDFT is much less computationally expensive than CASPT2 when applied to larger active spaces, and this illustrates the promise of this method for larger diradical organic systems.
Han, Xiao-Jing; Duan, Si-Bo; Li, Zhao-Liang
2017-02-20
An analysis of the atmospheric impact on ground brightness temperature (Tg) is performed for numerous land surface types at commonly-used frequencies (i.e., 1.4 GHz, 6.93 GHz, 10.65 GHz, 18.7 GHz, 23.8 GHz, 36.5 GHz and 89.0 GHz). The results indicate that the atmosphere has a negligible impact on Tg at 1.4 GHz for land surfaces with emissivities greater than 0.7, at 6.93 GHz for land surfaces with emissivities greater than 0.8, and at 10.65 GHz for land surfaces with emissivities greater than 0.9 if a root mean square error (RMSE) less than 1 K is desired. To remove the atmospheric effect on Tg, a generalized atmospheric correction method is proposed by parameterizing the atmospheric transmittance τ and upwelling atmospheric brightness temperature Tba↑. Better accuracies with Tg RMSEs less than 1 K are achieved at 1.4 GHz, 6.93 GHz, 10.65 GHz, 18.7 GHz and 36.5 GHz, and worse accuracies with RMSEs of 1.34 K and 4.35 K are obtained at 23.8 GHz and 89.0 GHz, respectively. Additionally, a simplified atmospheric correction method is developed when lacking sufficient input data to perform the generalized atmospheric correction method, and an emissivity-based atmospheric correction method is presented when the emissivity is known. Consequently, an appropriate atmospheric correction method can be selected based on the available data, frequency and required accuracy. Furthermore, this study provides a method to estimate τ and Tba↑ of different frequencies using the atmospheric parameters (total water vapor content in observation direction Lwv, total cloud liquid water content Lclw and mean temperature of cloud Tclw), which is important for simultaneously determining the land surface parameters using multi-frequency passive microwave satellite data.
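The correction implied by the parameterization above can be written compactly: if the at-sensor brightness temperature is modeled as Tb_toa = τ·Tg + Tba↑, the ground brightness temperature follows by inverting that relation. A hedged sketch with illustrative numbers only (the paper's parameterizations of τ and Tba↑ themselves are not reproduced here):

```python
def correct_tg(tb_toa, tau, tba_up):
    """Recover ground brightness temperature Tg (K) from the at-sensor value,
    given atmospheric transmittance tau and upwelling brightness temperature."""
    return (tb_toa - tba_up) / tau

# Illustrative values: 260 K observed, transmittance 0.95, 12 K upwelling term
print(correct_tg(260.0, 0.95, 12.0))  # K
```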
A new method for recognizing hand configurations of Brazilian gesture language.
Costa Filho, C F F; Dos Santos, B L; de Souza, R S; Dos Santos, J R; Costa, M G F
2016-08-01
This paper describes a new method for recognizing hand configurations of the Brazilian Gesture Language - LIBRAS - using depth maps obtained with a Kinect® camera. The proposed method comprised three phases: hand segmentation, feature extraction, and classification. The segmentation phase is independent from the background and depends only on pixel depth information. Using geometric operations and numerical normalization, the feature extraction process was made independent from rotation and translation. The features are extracted employing two techniques: (2D)²LDA and (2D)²PCA. The classification is made with a novelty classifier. A robust database was constructed for classifier evaluation, with 12,200 images of LIBRAS and 200 gestures of each hand configuration. The best accuracy obtained was 95.41%, which was greater than previous values obtained in the literature.
Evaluation of the Technicon Axon analyser.
Martínez, C; Márquez, M; Cortés, M; Mercé, J; Rodriguez, J; González, F
1990-01-01
An evaluation of the Technicon Axon analyser was carried out following the guidelines of the 'Sociedad Española de Química Clínica' and the European Committee for Clinical Laboratory Standards. A photometric study revealed acceptable results at both 340 nm and 404 nm. Inaccuracy and imprecision were lower at 404 nm than at 340 nm, although poor dispersion was found at both wavelengths, even at low absorbances. Drift was negligible, the imprecision of the sample pipette delivery system was greater for small sample volumes, the reagent pipette delivery system imprecision was acceptable and the sample diluting system study showed good precision and accuracy. Twelve analytes were studied for evaluation of the analyser under routine working conditions. Satisfactory results were obtained for within-run imprecision, while coefficients of variation for between-run imprecision were much greater than expected. Neither specimen-related nor specimen-independent contamination was found in the carry-over study. For all analytes assayed, when comparing patient sample results with those obtained in a Hitachi 737 analyser, acceptable relative inaccuracy was observed.
Umari, A.M.; Gorelick, S.M.
1986-01-01
It is possible to obtain analytic solutions to the groundwater flow and solute transport equations if space variables are discretized but time is left continuous. From these solutions, hydraulic head and concentration fields for any future time can be obtained without 'marching' through intermediate time steps. This analytical approach involves matrix exponentiation and is referred to as the Matrix Exponential Time Advancement (META) method. Two algorithms are presented for the META method, one for symmetric and the other for non-symmetric exponent matrices. A numerical accuracy indicator, referred to as the matrix condition number, was defined and used to determine the maximum number of significant figures that may be lost in the META method computations. The relative computational and storage requirements of the META method with respect to the time-marching method increase with the number of nodes in the discretized problem. The potentially greater accuracy of the META method, and the associated greater reliability through use of the matrix condition number, have to be weighed against the increased relative computational and storage requirements of this approach as the number of nodes becomes large. For a particular number of nodes, the META method may be computationally more efficient than the time-marching method, depending on the size of time steps used in the latter. A numerical example illustrates application of the META method to a sample groundwater-flow problem. (Author's abstract)
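The core of the META idea is that for a semi-discretized linear system du/dt = A u, the state at any future time is u(t) = exp(At) u0, evaluated directly rather than by stepping. A toy sketch on a 2×2 diagonal system, with the matrix exponential computed by a plain truncated Taylor series (adequate only for small ||At||; this is an illustration, not the paper's symmetric/non-symmetric algorithms):

```python
def mat_mul(A, B):
    """Dense matrix product for small square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def expm(A, terms=30):
    """Truncated Taylor-series matrix exponential: I + A + A^2/2! + ..."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity (k = 0 term)
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = mat_mul(term, A)                      # now A^k * (k-1)!^-1
        term = [[v / k for v in row] for row in term]  # divide by k -> A^k / k!
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# Pure decay du/dt = -u, so u(t) = exp(-t) * u0, evaluated directly at t = 2
# with no intermediate time steps.
t, u0 = 2.0, [1.0, 3.0]
At = [[-t, 0.0], [0.0, -t]]
E = expm(At)
u_t = [E[0][0] * u0[0] + E[0][1] * u0[1], E[1][0] * u0[0] + E[1][1] * u0[1]]
print(u_t)  # components approach exp(-2) and 3*exp(-2)
```

In production one would use a scaling-and-squaring routine (e.g. SciPy's `expm`) rather than a raw Taylor series, for exactly the conditioning reasons the abstract raises.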
Short-arc orbit determination using coherent X-band ranging data
NASA Technical Reports Server (NTRS)
Thurman, S. W.; Mcelrath, T. P.; Pollmeier, V. M.
1992-01-01
The use of X-band frequencies in ground-spacecraft and spacecraft-ground telecommunication links for current and future robotic interplanetary missions makes it possible to perform ranging measurements of greater accuracy than previously obtained. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. The application of high-accuracy S/X-band and X-band ranging to orbit determination with relatively short data arcs is investigated in planetary approach and encounter scenarios. Actual trajectory solutions for the Ulysses spacecraft constructed from S/X-band ranging and Doppler data are presented; error covariance calculations are used to predict the performance of X-band ranging and Doppler data. The Ulysses trajectory solutions indicate that the aim point for the spacecraft's February 1992 Jupiter encounter was predicted to a geocentric accuracy of 0.20 to 0.23 microrad. Explicit modeling of range bias parameters for each station pass is shown to largely remove systematic ground system calibration errors and transmission media effects from the Ulysses range measurements, which would otherwise corrupt the angle-finding capabilities of the data. The Ulysses solutions were found to be reasonably consistent with the theoretical results, which suggest that angular accuracies of 0.08 to 0.1 microrad are achievable with X-band ranging.
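The geometric argument behind multi-station angle sensing is simple: ranges measured from two widely separated stations differ by roughly (baseline) × (angle), so the achievable angular error is the differential range error divided by the projected baseline. A back-of-the-envelope sketch with assumed values (not the Ulysses covariance analysis):

```python
# Angular sensitivity of differential ranging between two ground stations.
range_err_m = 1.0    # assumed differential range accuracy between stations (m)
baseline_m = 8.0e6   # assumed projected inter-station baseline (~8000 km)

theta_err = range_err_m / baseline_m  # radians
print(theta_err)  # -> 1.25e-07  (0.125 microradians, the same order as the paper's results)
```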
Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen
2017-01-16
Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous and high-precision method, which has been widely used to improve the hitting accuracy and quick reaction capability of near-Earth flight vehicles. The installation errors between SINS and star sensors have been one of the main factors that restrict the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on the star vector observations is derived considering the star sensor installation error. Then, the star sensor installation error is accurately estimated based on Kalman filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained via the linearized observation equation, and the observable conditions are presented and validated. The number of star vectors should be greater than or equal to 2, and the number of posture adjustments should also be greater than or equal to 2. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of SINS are less than 7 arc-seconds. This analysis method and conclusion are useful in the ballistic trajectory design of near-Earth flight vehicles.
Evaluation of the Sparton tight-tolerance AXBT
NASA Technical Reports Server (NTRS)
Boyd, Janice D.; Linzell, Robert S.
1993-01-01
Forty-six near-simultaneous pairs of conductivity-temperature-depth (CTD) and Sparton 'tight tolerance' air expendable bathythermograph (AXBT) temperature profiles were obtained in summer 1991 from a location in the Sargasso Sea. The data were analyzed to assess the temperature and depth accuracies of the Sparton AXBTs. The tight-tolerance criterion was not achieved using the manufacturer's equations but may have been achieved using customized equations computed from the CTD data. The temperature data from the customized equations had a one standard deviation error of 0.13 C. A customized elapsed fall time-to-depth conversion equation was found to be z = 1.620t - 2.2384 × 10^-4 t^2 + 1.291 × 10^-7 t^3, with z the depth in meters and t the elapsed fall time after probe release in seconds. The standard deviation of the depth error was about 5 m; a rule of thumb for estimating maximum bounds on the depth error below 100 m could be expressed as ±2% of depth or ±10 m, whichever is greater. This equation gave greater depth accuracy than either the manufacturer's supplied equation or the navy standard equation.
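The customized conversion equation and the depth-error rule of thumb from the abstract are easy to evaluate directly; for example, 100 s of fall time corresponds to roughly 160 m of depth:

```python
def axbt_depth(t):
    """Customized elapsed-fall-time-to-depth conversion from the abstract
    (z in meters, t in seconds after probe release)."""
    return 1.620 * t - 2.2384e-4 * t**2 + 1.291e-7 * t**3

def max_depth_error(z):
    """Rule of thumb below 100 m: +/-2% of depth or +/-10 m, whichever is greater."""
    return max(0.02 * z, 10.0)

z = axbt_depth(100.0)
print(round(z, 4), max_depth_error(z))  # -> 159.8907 10.0
```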
Probability or Reasoning: Current Thinking and Realistic Strategies for Improved Medical Decisions.
Nantha, Yogarabindranath Swarna
2017-11-01
A prescriptive model approach in decision making could help achieve better diagnostic accuracy in clinical practice through methods that are less reliant on probabilistic assessments. Various prescriptive measures aimed at regulating factors that influence heuristics and clinical reasoning could support clinical decision-making process. Clinicians could avoid time-consuming decision-making methods that require probabilistic calculations. Intuitively, they could rely on heuristics to obtain an accurate diagnosis in a given clinical setting. An extensive literature review of cognitive psychology and medical decision-making theory was performed to illustrate how heuristics could be effectively utilized in daily practice. Since physicians often rely on heuristics in realistic situations, probabilistic estimation might not be a useful tool in everyday clinical practice. Improvements in the descriptive model of decision making (heuristics) may allow for greater diagnostic accuracy.
Ruiz-Felter, Roxanna; Cooperson, Solaman J; Bedore, Lisa M; Peña, Elizabeth D
2016-07-01
Although some investigations of phonological development have found that segmental accuracy is comparable in monolingual children and their bilingual peers, there is evidence that language use affects segmental accuracy in both languages. The aim was to investigate the influence of age of first exposure to English and the amount of current input-output on phonological accuracy in English and Spanish in early bilingual Spanish-English kindergarteners, and to examine whether parent and teacher ratings of the children's intelligibility are correlated with phonological accuracy and the amount of experience with each language. Data for 91 kindergarteners (mean age = 5;6 years) were selected from a larger dataset focusing on Spanish-English bilingual language development. All children were from Central Texas, spoke a Mexican Spanish dialect and were learning American English. Children completed a single-word phonological assessment with separate forms for English and Spanish. The assessment was analyzed for segmental accuracy: percentage of consonants and vowels correct and percentage of early-, middle- and late-developing (EML) sounds correct were calculated. Children were more accurate on vowel production than consonant production and showed a decrease in accuracy from early to middle to late sounds. The amount of current input-output explained more of the variance in phonological accuracy than age of first English exposure. Although greater current input-output of a language was associated with greater accuracy in that language, English-dominant children were only significantly more accurate in English than Spanish on late sounds, whereas Spanish-dominant children were only significantly more accurate in Spanish than English on early sounds. Higher parent and teacher ratings of intelligibility in Spanish were correlated with greater consonant accuracy in Spanish, but the same did not hold for English.
Higher intelligibility ratings in English were correlated with greater current English input-output, and the same held for Spanish. Current input-output appears to be a better predictor of phonological accuracy than age of first English exposure for early bilinguals, consistent with findings on the effect of language experience on performance in other language domains in bilingual children. Although greater current input-output in a language predicts higher accuracy in that language, this interacts with sound complexity. The results highlight the utility of the EML classification in assessing bilingual children's phonology. The relationships of intelligibility ratings with current input-output and sound accuracy can shed light on the process of referral of bilingual children for speech and language services. © 2016 Royal College of Speech and Language Therapists.
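The segmental-accuracy measures named above (percentage of consonants, vowels, or EML sounds correct) are simple proportions; an illustrative computation with made-up counts:

```python
def percent_correct(correct, total):
    """Segmental accuracy as a percentage, e.g. percentage of consonants correct (PCC)."""
    return 100.0 * correct / total

# Hypothetical: a child produces 38 of 45 target consonants correctly
pcc = percent_correct(38, 45)
print(round(pcc, 1))  # -> 84.4
```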
Inexpensive Strobe-like Photographs
NASA Astrophysics Data System (ADS)
Medeiros, Emil L.; Tavares, Odilon A. P.; Duarte, Sérgio B.
2009-11-01
This paper reports on a technique the authors have developed to produce and analyze, at very low cost, good-quality strobe-like photographs like the ones shown in Figs. 1(a) and 1(b). While the concept is similar to the one described by Graney and DiNoto, the strategy described here benefits from recent advances in the fields of digital photography and related software to significantly reduce the costs, simplify the production process, and enhance the final quality of photographs of this type, as well as to obtain greater accuracy in measurements made with them.
Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J
2015-01-01
Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy. PMID:25628867
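The precision result has a clean conjugate-normal illustration: posterior precision is the sum of prior precision and data precision, so a tighter (informative) prior always narrows the posterior, while accuracy depends on where the prior is centered. A sketch with made-up numbers, not the Thai forest-plot analysis:

```python
def posterior(mu0, sd0, xbar, sd_obs, n):
    """Normal-normal conjugate update: returns posterior mean and sd for the mean,
    given prior N(mu0, sd0^2) and n observations with mean xbar and known sd_obs."""
    prec = 1.0 / sd0**2 + n / sd_obs**2          # posterior precision = prior + data
    mean = (mu0 / sd0**2 + n * xbar / sd_obs**2) / prec
    return mean, prec**-0.5

m_vague, s_vague = posterior(0.0, 100.0, 0.10, 0.5, 25)   # near-vague prior
m_info, s_info = posterior(0.08, 0.05, 0.10, 0.5, 25)     # data-derived informative prior
print(s_info < s_vague)  # -> True: the informative prior yields a tighter posterior
```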
Accuracy of digital images in the detection of marginal microleakage: an in vitro study.
Alvarenga, Fábio Augusto; Andrade, Marcelo Ferrarezi; Pinelli, Camila; Rastelli, Alessanda Nara; Victorino, Keli Regina; Loffredo, Leonor de
2012-08-01
To evaluate the accuracy of Image Tool Software 3.0 (ITS 3.0) to detect marginal microleakage using the stereomicroscope as the validation criterion and ITS 3.0 as the tool under study. Class V cavities were prepared at the cementoenamel junction of 61 bovine incisors, and 53 halves of them were used. Using the stereomicroscope, microleakage was classified dichotomously: presence or absence. Next, ITS 3.0 was used to obtain measurements of the microleakage, with 0.75 taken as the cut-off point: values equal to or greater than 0.75 indicated its presence, while values between 0.00 and 0.75 indicated its absence. Sensitivity and specificity were calculated as point estimates with 95% confidence intervals (95% CI). The accuracy of ITS 3.0 was verified with a sensitivity of 0.95 (95% CI: 0.89 to 1.00) and a specificity of 0.92 (95% CI: 0.84 to 0.99). Digital diagnosis of marginal microleakage using ITS 3.0 was sensitive and specific.
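The cut-off classification and the point-estimate/CI computation can be sketched as follows (a hedged illustration with made-up counts; the function names and the Wald-interval choice are assumptions, not details from the paper):

```python
import math

def classify(measurement, cutoff=0.75):
    """Dichotomize a microleakage measurement: >= cutoff means 'presence'."""
    return measurement >= cutoff

def proportion_ci(p, n, z=1.96):
    """Wald 95% confidence interval for a proportion, clipped to [0, 1]."""
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity with 95% CIs from a 2x2 validation table."""
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    return (sens, proportion_ci(sens, tp + fn)), (spec, proportion_ci(spec, tn + fp))

# Illustrative counts only (the paper reports estimates, not the raw table)
(sens, sens_ci), (spec, spec_ci) = sens_spec(tp=19, fn=1, tn=22, fp=2)
```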
The wisdom of crowds for visual search
Juni, Mordechai Z.; Eckstein, Miguel P.
2017-01-01
Decision-making accuracy typically increases through collective integration of people’s judgments into group decisions, a phenomenon known as the wisdom of crowds. For simple perceptual laboratory tasks, classic signal detection theory specifies the upper limit for collective integration benefits obtained by weighted averaging of people’s confidences, and simple majority voting can often approximate that limit. Life-critical perceptual decisions often involve searching large image data (e.g., medical, security, and aerial imagery), but the expected benefits and merits of using different pooling algorithms are unknown for such tasks. Here, we show that expected pooling benefits are significantly greater for visual search than for single-location perceptual tasks, and greater than the prediction given by classic signal detection theory. In addition, we show that simple majority voting obtains inferior accuracy benefits for visual search relative to averaging and weighted averaging of observers’ confidences. Analysis of gaze behavior across observers suggests that the greater collective integration benefits for visual search arise from an interaction between the foveated properties of the human visual system (high foveal acuity and low peripheral acuity) and observers’ nonexhaustive search patterns, and can be predicted by an extended signal detection theory framework with trial-to-trial sampling from a varying mixture of high and low target detectabilities across observers (SDT-MIX). These findings advance our theoretical understanding of how to predict and enhance the wisdom of crowds for real world search tasks and could apply more generally to any decision-making task for which the minority of group members with high expertise varies from decision to decision. PMID:28490500
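The contrast between majority voting and confidence averaging can be seen in a toy example (a sketch under an assumed signed-confidence convention, not the paper's SDT-MIX model): when one observer happens to fixate the target and is highly confident while the others miss it, averaging recovers the correct decision that a majority vote loses.

```python
import numpy as np

def pool_decisions(confidences):
    """Pool signed confidence ratings (> 0 means 'target present') from a
    group of observers; returns (majority-vote, averaged-confidence) decisions."""
    votes = confidences > 0
    majority = votes.sum() > votes.size / 2
    averaged = confidences.mean() > 0
    return bool(majority), bool(averaged)

# Target present: one observer fixated it (confident, +2.0); two searched
# elsewhere and lean weakly toward 'absent'
group = np.array([2.0, -0.3, -0.2])
majority, averaged = pool_decisions(group)
```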
Reasoning strategies with rational numbers revealed by eye tracking.
Plummer, Patrick; DeWolf, Melissa; Bassok, Miriam; Gordon, Peter C; Holyoak, Keith J
2017-07-01
Recent research has begun to investigate the impact of different formats for rational numbers on the processes by which people make relational judgments about quantitative relations. DeWolf, Bassok, and Holyoak (Journal of Experimental Psychology: General, 144(1), 127-150, 2015) found that accuracy on a relation identification task was highest when fractions were presented with countable sets, whereas accuracy was relatively low for all conditions where decimals were presented. However, it is unclear what processing strategies underlie these disparities in accuracy. We report an experiment that used eye-tracking methods to externalize the strategies that are evoked by different types of rational numbers for different types of quantities (discrete vs. continuous). Results showed that eye-movement behavior during the task was jointly determined by image and number format. Discrete images elicited a counting strategy for both fractions and decimals, but this strategy led to higher accuracy only for fractions. Continuous images encouraged magnitude estimation and comparison, but to a greater degree for decimals than fractions. This strategy led to decreased accuracy for both number formats. By analyzing participants' eye movements when they viewed a relational context and made decisions, we were able to obtain an externalized representation of the strategic choices evoked by different ontological types of entities and different types of rational numbers. Our findings using eye-tracking measures enable us to go beyond previous studies based on accuracy data alone, demonstrating that quantitative properties of images and the different formats for rational numbers jointly influence strategies that generate eye-movement behavior.
Evaluation of targeting errors in ultrasound-assisted radiotherapy
Wang, Michael; Rohling, Robert; Duzenli, Cheryl; Clark, Brenda; Archip, Neculai
2014-01-01
A method for validating the start-to-end accuracy of a 3D ultrasound-based patient positioning system for radiotherapy is described. A radiosensitive polymer gel is used to record the actual dose delivered to a rigid phantom after being positioned using 3D ultrasound guidance. Comparison of the delivered dose with the treatment plan allows accuracy of the entire radiotherapy treatment process, from simulation to 3D ultrasound guidance, and finally delivery of radiation, to be evaluated. The 3D ultrasound patient positioning system has a number of features for achieving high accuracy and reducing operator dependence. These include using tracked 3D ultrasound scans of the target anatomy acquired using a dedicated 3D ultrasound probe during both the simulation and treatment sessions, automatic 3D ultrasound-to-ultrasound registration, and use of infra-red LED (IRED) markers of the optical position sensing system for registering simulation CT to ultrasound data. The mean target localization accuracy of this system was 2.5 mm for four target locations inside the phantom, compared to 1.6 mm obtained using the conventional patient positioning method of laser alignment. Since the phantom is rigid, this represents the best possible set-up accuracy of the system. Thus, these results suggest that 3D ultrasound-based target localization is practically feasible and potentially capable of increasing the accuracy of patient positioning for radiotherapy in sites where day-to-day organ shifts are greater than 1 mm in magnitude. PMID:18723271
Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2014-06-30
In this paper, a novel adaptive cooperative protocol with multiple relays using detect-and-forward (DF) over atmospheric turbulence channels with pointing errors is proposed. The adaptive DF cooperative protocol analyzed here is based on the selection of the optical path, source-destination or different source-relay links, with the greater value of fading gain or irradiance, maintaining a high diversity order. Closed-form asymptotic bit-error-rate (BER) expressions are obtained for a cooperative free-space optical (FSO) communication system with Nr relays, when the irradiance of the transmitted optical beam is susceptible to either a wide range of turbulence conditions, following a gamma-gamma distribution of parameters α and β, or pointing errors, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. The obtained results corroborate a greater robustness for different link distances and pointing errors compared with similar cooperative schemes or equivalent multiple-input multiple-output (MIMO) systems. Simulations further confirm the accuracy and usefulness of the derived results.
Saatchi, Mahdi; McClure, Mathew C; McKay, Stephanie D; Rolf, Megan M; Kim, JaeWoo; Decker, Jared E; Taxis, Tasia M; Chapple, Richard H; Ramey, Holly R; Northcutt, Sally L; Bauck, Stewart; Woodward, Brent; Dekkers, Jack C M; Fernando, Rohan L; Schnabel, Robert D; Garrick, Dorian J; Taylor, Jeremy F
2011-11-28
Genomic selection is a recently developed technology that is beginning to revolutionize animal breeding. The objective of this study was to estimate marker effects to derive prediction equations for direct genomic values for 16 routinely recorded traits of American Angus beef cattle and quantify corresponding accuracies of prediction. Deregressed estimated breeding values were used as observations in a weighted analysis to derive direct genomic values for 3570 sires genotyped using the Illumina BovineSNP50 BeadChip. These bulls were clustered into five groups using K-means clustering on pedigree estimates of additive genetic relationships between animals, with the aim of increasing within-group and decreasing between-group relationships. All five combinations of four groups were used for model training, with cross-validation performed in the group not used in training. Bivariate animal models were used for each trait to estimate the genetic correlation between deregressed estimated breeding values and direct genomic values. Accuracies of direct genomic values ranged from 0.22 to 0.69 for the studied traits, with an average of 0.44. Predictions were more accurate when animals within the validation group were more closely related to animals in the training set. When training and validation sets were formed by random allocation, the accuracies of direct genomic values ranged from 0.38 to 0.85, with an average of 0.65, reflecting the greater relationship between animals in training and validation. The accuracies of direct genomic values obtained from training on older animals and validating in younger animals were intermediate to the accuracies obtained from K-means clustering and random clustering for most traits. The genetic correlation between deregressed estimated breeding values and direct genomic values ranged from 0.15 to 0.80 for the traits studied. 
These results suggest that genomic estimates of genetic merit can be produced in beef cattle at a young age but the recurrent inclusion of genotyped sires in retraining analyses will be necessary to routinely produce for the industry the direct genomic values with the highest accuracy.
Use of noncrystallographic symmetry for automated model building at medium to low resolution.
Wiegels, Tim; Lamzin, Victor S
2012-04-01
A novel method is presented for the automatic detection of noncrystallographic symmetry (NCS) in macromolecular crystal structure determination which does not require the derivation of molecular masks or the segmentation of density. It was found that throughout structure determination the NCS-related parts may be differently pronounced in the electron density. This often results in the modelling of molecular fragments of variable length and accuracy, especially during automated model-building procedures. These fragments were used to identify NCS relations in order to aid automated model building and refinement. In a number of test cases higher completeness and greater accuracy of the obtained structures were achieved, specifically at a crystallographic resolution of 2.3 Å or poorer. In the best case, the method allowed the building of up to 15% more residues automatically and a tripling of the average length of the built fragments.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
We continue our investigation of overcoming the Gibbs phenomenon, i.e., obtaining exponential accuracy at all points (including at the discontinuities themselves) from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. We show that if we are given the first N Gegenbauer expansion coefficients, based on the Gegenbauer polynomials C_k^μ(x) with the weight function (1 − x²)^(μ−1/2) for any constant μ ≥ 0, of an L₁ function f(x), we can construct an exponentially convergent approximation to the point values of f(x) in any subinterval in which the function is analytic. The proof covers the cases of Chebyshev or Legendre partial sums, which are most common in applications.
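As a small illustration of the machinery involved, the Gegenbauer polynomials C_k^μ(x) can be evaluated with the standard three-term recurrence (a hedged sketch; the full reconstruction also requires the weighted Gegenbauer coefficients of the spectral partial sum, which is beyond this snippet):

```python
import numpy as np

def gegenbauer(n, mu, x):
    """Evaluate C_n^mu(x) via the three-term recurrence
    n*C_n = 2*x*(n + mu - 1)*C_{n-1} - (n + 2*mu - 2)*C_{n-2},
    with C_0 = 1 and C_1 = 2*mu*x."""
    x = np.asarray(x, dtype=float)
    c_prev, c = np.ones_like(x), 2 * mu * x
    if n == 0:
        return c_prev
    for k in range(2, n + 1):
        c_prev, c = c, (2 * x * (k + mu - 1) * c - (k + 2 * mu - 2) * c_prev) / k
    return c
```

For μ = 1 these reduce to the Chebyshev polynomials of the second kind, e.g. C_2^1(x) = 4x² − 1.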
NASA Astrophysics Data System (ADS)
de Siqueira, A. F.; Cabrera, F. C.; Pagamisse, A.; Job, A. E.
2014-12-01
This study consolidates multi-level starlet segmentation (MLSS) and multi-level starlet optimal segmentation (MLSOS) techniques for photomicrograph segmentation, based on starlet wavelet detail levels to separate areas of interest in an input image. Several segmentation levels can be obtained using MLSS; after that, Matthews correlation coefficient is used to choose an optimal segmentation level, giving rise to MLSOS. In this paper, MLSOS is employed to estimate the concentration of gold nanoparticles with diameter around 47 nm, reduced on natural rubber membranes. These samples were used for the construction of SERS/SERRS substrates and in the study of the influence of natural rubber membranes with incorporated gold nanoparticles on the physiology of Leishmania braziliensis. Precision, recall, and accuracy are used to evaluate the segmentation performance, and MLSOS presents an accuracy greater than 88 % for this application.
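The MLSOS selection step — scoring each candidate segmentation level with the Matthews correlation coefficient and keeping the best — can be sketched as follows (illustrative binary masks; the starlet decomposition itself is omitted):

```python
import numpy as np

def mcc(pred, truth):
    """Matthews correlation coefficient between two boolean masks."""
    tp = int(np.sum(pred & truth))
    tn = int(np.sum(~pred & ~truth))
    fp = int(np.sum(pred & ~truth))
    fn = int(np.sum(~pred & truth))
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def optimal_level(levels, truth):
    """MLSOS-style selection: index of the detail level with the highest MCC."""
    scores = [mcc(level, truth) for level in levels]
    return int(np.argmax(scores)), scores

# Toy ground truth and three candidate detail-level segmentations
truth = np.array([1, 1, 1, 0, 0, 0], dtype=bool)
levels = [
    np.array([1, 0, 0, 0, 0, 0], dtype=bool),  # under-segmented
    np.array([1, 1, 1, 0, 0, 0], dtype=bool),  # matches ground truth
    np.array([1, 1, 1, 1, 1, 0], dtype=bool),  # over-segmented
]
best, scores = optimal_level(levels, truth)
```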
NASA Astrophysics Data System (ADS)
Lee, Joohwi; Kim, Sun Hyung; Styner, Martin
2016-03-01
The delineation of rodent brain structures is challenging due to low-contrast multiple cortical and subcortical organs that are closely interfacing to each other. Atlas-based segmentation has been widely employed due to its ability to delineate multiple organs at the same time via image registration. The use of multiple atlases and subsequent label fusion techniques has further improved the robustness and accuracy of atlas-based segmentation. However, the accuracy of atlas-based segmentation is still prone to registration errors; for example, the segmentation of in vivo MR images can be less accurate and robust against image artifacts than the segmentation of post mortem images. In order to improve the accuracy and robustness of atlas-based segmentation, we propose a multi-object, model-based, multi-atlas segmentation method. We first establish spatial correspondences across atlases using a set of dense pseudo-landmark particles. We build a multi-object point distribution model using those particles in order to capture inter- and intra-subject variation among brain structures. The segmentation is obtained by fitting the model into a subject image, followed by a label fusion process. Our results show that the proposed method achieves greater accuracy than comparable segmentation methods, including a widely used ANTs registration tool.
Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria
2014-10-09
Magnetic and inertial measurement units are an emerging technology for obtaining the 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of stochastic (Extended Kalman Filter) and complementary (non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading and when the movement exhibited stationary phases and evenly distributed 3D rotations, occurred in a small volume, and lasted longer than approximately 20 s. These results were independent of the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided.
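The drift-limiting idea behind such sensor fusion can be shown with a minimal 1-DOF complementary filter (a sketch, not the Extended Kalman Filter or non-linear observer evaluated in the study): the integrated gyroscope rate is trusted at short time scales while the drift-free but noisy accelerometer inclination corrects the slow drift.

```python
import numpy as np

def integrate_gyro(gyro_rate, dt, angle0=0.0):
    """Naive attitude from gyroscope integration alone (drifts with any bias)."""
    return angle0 + np.cumsum(gyro_rate) * dt

def complementary_filter(gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend short-term gyro integration with the accelerometer inclination:
    angle <- alpha*(angle + w*dt) + (1 - alpha)*a."""
    angle, out = accel_angle[0], []
    for w, a in zip(gyro_rate, accel_angle):
        angle = alpha * (angle + w * dt) + (1 - alpha) * a
        out.append(angle)
    return np.array(out)

# Stationary sensor, 60 s at 100 Hz, gyroscope with a 0.5 deg/s bias
dt, n = 0.01, 6000
gyro = np.full(n, 0.5)   # biased rate measurements (deg/s)
accel = np.zeros(n)      # true inclination is 0 deg
drifted = integrate_gyro(gyro, dt)
fused = complementary_filter(gyro, accel, dt)
```

Pure integration drifts to 30 degrees over the minute, while the fused estimate stays bounded near zero.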
Developmental Changes in Cross-Situational Word Learning: The Inverse Effect of Initial Accuracy
ERIC Educational Resources Information Center
Fitneva, Stanka A.; Christiansen, Morten H.
2017-01-01
Intuitively, the accuracy of initial word-referent mappings should be positively correlated with the outcome of learning. Yet recent evidence suggests an inverse effect of initial accuracy in adults, whereby greater accuracy of initial mappings is associated with poorer outcomes in a cross-situational learning task. Here, we examine the impact of…
Surgical Management of Perineural Spread of Head and Neck Cancers.
Solares, C Arturo; Mason, Eric; Panizza, Benedict J
2016-04-01
The surgical management of perineural spread of head and neck cancers has become an integral part of the contemporary treatment of this pathology. We now understand that tumour spreads within the epineurium in a continuous fashion. We can also rely on the accuracy of magnetic resonance neurography in detecting and defining the extent of disease. With modern skull base techniques and a greater understanding of the anatomy in this region, specific operations can be designed to help eradicate disease. We review the current approaches and techniques that enable us to better obtain tumour-free margins and hence improve survival.
Calibration of the Multi-Spectral Solar Telescope Array multilayer mirrors and XUV filters
NASA Technical Reports Server (NTRS)
Allen, Maxwell J.; Willis, Thomas D.; Kankelborg, Charles C.; O'Neal, Ray H.; Martinez-Galarce, Dennis S.; Deforest, Craig E.; Jackson, Lisa; Lindblom, Joakim; Walker, Arthur B. C., Jr.; Barbee, Troy W., Jr.
1993-01-01
The Multi-Spectral Solar Telescope Array (MSSTA), a rocket-borne solar observatory, was successfully flown in May 1991, obtaining solar images in eight XUV and FUV bands with 12 compact multilayer telescopes. Extensive measurements have recently been carried out on the multilayer telescopes and thin film filters at the Stanford Synchrotron Radiation Laboratory. These measurements are the first high spectral resolution calibrations of the MSSTA instruments. Previous measurements and/or calculations of telescope throughputs have been confirmed with greater accuracy. Results are presented on Mo/Si multilayer bandpass changes with time and experimental potassium bromide and tellurium filters.
Miranda, Geraldo Elias; Wilkinson, Caroline; Roughley, Mark; Beaini, Thiago Leite; Melani, Rodolfo Francisco Haltenhoff
2018-01-01
Facial reconstruction is a technique that aims to reproduce individual facial characteristics based on interpretation of the skull, with the objective of recognition leading to identification. The aim of this paper was to evaluate the accuracy and recognition level of three-dimensional (3D) computerized forensic craniofacial reconstruction (CCFR) performed in a blind test on open-source software using computed tomography (CT) data from live subjects. Four CCFRs were produced by one of the researchers, who was provided with information concerning the age, sex, and ethnic group of each subject. The CCFRs were produced using Blender® with 3D models obtained from the CT data and templates from the MakeHuman® program. The evaluation of accuracy was carried out in CloudCompare, by geometric comparison of the CCFR to the subject 3D face model (obtained from the CT data). Recognition was assessed using the Picasa® recognition tool with a standardized frontal photograph, images of the subject CT face model, and the CCFR. Soft-tissue depths and the nose, ears and mouth were based on published data, observing Brazilian facial parameters. The results were presented from all the points that form the CCFR model, with between 63% and 74% of points in each comparison lying within -2.5 ≤ x ≤ 2.5 mm of the skin surface. The average distances ranged from 0.33 to 1.66 mm, and greater distances were observed around the eyes, cheeks, mental and zygomatic regions. Two of the four CCFRs were correctly matched by the Picasa® tool. Free software programs are capable of producing 3D CCFRs with plausible levels of accuracy and recognition and therefore indicate their value for use in forensic applications.
Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen
2017-01-01
Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous and high-precision method, which has been widely used to improve the hitting accuracy and quick-reaction capability of near-Earth flight vehicles. The installation errors between the SINS and the star sensors have been one of the main factors restricting the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on star vector observations is derived that accounts for the star sensor installation error. The star sensor installation error is then accurately estimated by Kalman filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained from the linearized observation equation, and the observability conditions are presented and validated: the number of star vectors should be greater than or equal to 2, and the number of posture adjustments should also be greater than or equal to 2. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of the SINS are less than 7 arc-seconds. This analysis method and conclusion are useful in the ballistic trajectory design of near-Earth flight vehicles. PMID:28275211
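The rank test at the heart of such a local observability analysis can be sketched generically (a hypothetical small linear system; not the paper's actual SINS/CNS error-state matrices):

```python
import numpy as np

def observability_rank(F, H):
    """Rank of the observability matrix O = [H; H F; H F^2; ...; H F^(n-1)]
    for a linear(ized) system x_{k+1} = F x_k, z_k = H x_k."""
    n = F.shape[0]
    blocks = [H]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ F)
    return int(np.linalg.matrix_rank(np.vstack(blocks)))

# Toy 2-state system: observing only the first state still recovers both,
# because the dynamics couple state 2 into state 1
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
H_one = np.array([[1.0, 0.0]])
H_none = np.array([[0.0, 0.0]])
```

A full-rank observability matrix (rank equal to the state dimension) means every error state can in principle be estimated from the measurements.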
Whited, Matthew C; Schneider, Kristin L; Appelhans, Bradley M; Ma, Yunsheng; Waring, Molly E; DeBiasse, Michele A; Busch, Andrew M; Oleski, Jessica L; Merriam, Philip A; Olendzki, Barbara C; Crawford, Sybil L; Ockene, Ira S; Lemon, Stephenie C; Pagoto, Sherry L
2014-01-01
An elevation in symptoms of depression has previously been associated with greater accuracy of reported dietary intake, however this association has not been investigated among individuals with a diagnosis of major depressive disorder. The purpose of this study was to investigate reporting accuracy of dietary intake among a group of women with major depressive disorder in order to determine if reporting accuracy is similarly associated with depressive symptoms among depressed women. Reporting accuracy of dietary intake was calculated based on three 24-hour phone-delivered dietary recalls from the baseline phase of a randomized trial of weight loss treatment for 161 obese women with major depressive disorder. Regression models indicated that higher severity of depressive symptoms was associated with greater reporting accuracy, even when controlling for other factors traditionally associated with reporting accuracy (coefficient = 0.01, 95% CI: 0.01 to 0.02). Seventeen percent of the sample was classified as low energy reporters. Reporting accuracy of dietary intake increases along with depressive symptoms, even among individuals with major depressive disorder. These results suggest that any study investigating associations between diet quality and depression should also include an index of reporting accuracy of dietary intake as accuracy varies with the severity of depressive symptoms.
Borba, Alexandre Meireles; Haupt, Dustin; de Almeida Romualdo, Leiliane Teresinha; da Silva, André Luis Fernandes; da Graça Naclério-Homem, Maria; Miloro, Michael
2016-09-01
Virtual surgical planning (VSP) has become routine practice in orthognathic treatment planning; however, most surgeons do not perform the planning without technical assistance, nor do they routinely evaluate the accuracy of the postoperative outcomes. The purpose of the present study was to propose a reproducible method that would allow surgeons to have an improved understanding of VSP orthognathic planning and to compare the planned surgical movements with the results obtained. A retrospective cohort of bimaxillary orthognathic surgery cases was used to evaluate the variability between the predicted and obtained movements using craniofacial landmarks and McNamara 3-dimensional cephalometric analysis from computed tomography scans. The demographic data (age, gender, and skeletal deformity type) were gathered from the medical records. The data analysis included the level of variability from the predicted to obtained surgical movements as assessed by the mean and standard deviation. For the overall sample, statistical analysis was performed using the 1-sample t test. The statistical analysis between the Class II and III patient groups used an unpaired t test. The study sample consisted of 50 patients who had undergone bimaxillary orthognathic surgery. The overall evaluation of the mean values revealed a discrepancy between the predicted and obtained values of less than 2.0 ± 2.0 mm for all maxillary landmarks, although some mandibular landmarks were greater than this value. An evaluation of the influence of gender and deformity type on the accuracy of surgical movements did not demonstrate statistical significance for most landmarks (P > .05). The method provides a reproducible tool for surgeons who use orthognathic VSP to perform routine evaluation of the postoperative outcomes, permitting the identification of specific variables that could assist in improving the accuracy of surgical planning and execution. 
Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Analysis of spatial distribution of land cover maps accuracy
NASA Astrophysics Data System (ADS)
Khatami, R.; Mountrakis, G.; Stehman, S. V.
2017-12-01
Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. The spectral domain was used for the first time in this research as an explanatory feature space for interpolating classification accuracy. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater.
Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain yielded similar AUC; iv) for the larger sample size (i.e., very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.
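The per-pixel prediction idea above can be illustrated with a minimal sketch, not the authors' implementation: binary correctness values from a test sample are interpolated with a Gaussian kernel in the spatial domain, and the resulting accuracy scores are evaluated with a rank-based AUC. Function names and the bandwidth parameter are hypothetical.

```python
import math

def predict_accuracy(px, py, sample, bandwidth=2.0):
    """Gaussian-kernel spatial interpolation of binary correctness values.

    sample: iterable of (x, y, correct) tuples, correct in {0, 1}.
    Returns the kernel-weighted mean correctness at pixel (px, py).
    """
    num = den = 0.0
    for sx, sy, correct in sample:
        w = math.exp(-((px - sx) ** 2 + (py - sy) ** 2) / (2 * bandwidth ** 2))
        num += w * correct
        den += w
    return num / den if den > 0 else 0.5

def auc(scores_correct, scores_wrong):
    """AUC: probability a correctly classified pixel outscores a wrong one."""
    wins = 0.0
    for a in scores_correct:
        for b in scores_wrong:
            wins += 1.0 if a > b else (0.5 if a == b else 0.0)
    return wins / (len(scores_correct) * len(scores_wrong))
```

A perfectly separable set of scores yields an AUC of 1.0; a prediction no better than chance yields 0.5, which is why the reported improvements of 0.15 or more are substantial.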
Fitzpatrick, Katherine A.
1975-01-01
Accuracy analyses for the land use maps of the Central Atlantic Regional Ecological Test Site were performed for a 1-percent sample of the area. Researchers compared Level II land use maps produced at three scales, 1:24,000, 1:100,000, and 1:250,000, from high-altitude photography, with each other and with point data obtained in the field. They employed the same procedures to determine the accuracy of the Level I land use maps produced at 1:250,000 from high-altitude photography and color composite ERTS imagery. The accuracy of the Level II maps was 84.9 percent at 1:24,000, 77.4 percent at 1:100,000, and 73.0 percent at 1:250,000. The accuracy of the Level I 1:250,000 maps produced from high-altitude aircraft photography was 76.5 percent, and for those produced from ERTS imagery it was 69.5 percent. The cost of Level II land use mapping at 1:24,000 was found to be high ($11.93 per km²). Mapping at 1:100,000 ($1.75 per km²) was about twice as expensive as mapping at 1:250,000 ($0.88 per km²), yet the accuracy increased by only 4.4 percent. Level I land use maps, when mapped from high-altitude photography, were about 4 times as expensive as the maps produced from ERTS imagery, although the accuracy is 7.0 percent greater. The Level I land use category least accurately mapped from ERTS imagery is urban and built-up land in the non-urban areas; in the urbanized areas, built-up land is more reliably mapped.
3.0-T functional brain imaging: a 5-year experience.
Scarabino, T; Giannatempo, G M; Popolizio, T; Tosetti, M; d'Alesio, V; Esposito, F; Di Salle, F; Di Costanzo, A; Bertolino, A; Maggialetti, A; Salvolini, U
2007-02-01
The aim of this paper is to illustrate the technical, methodological and diagnostic features of functional imaging (comprising spectroscopy, diffusion, perfusion and cortical activation techniques) and its principal neuroradiological applications on the basis of the experience gained by the authors in the 5 years since the installation of a high-field magnetic resonance (MR) magnet. These MR techniques are particularly effective at 3.0 Tesla (T) owing to their high signal, resolution and sensitivity, reduced scanning times and overall improved diagnostic ability. In particular, the high-field strength enhances spectroscopic analysis due to a greater signal-to-noise ratio (SNR) and improved spectral, space and time resolution, resulting in the ability to obtain high-resolution spectroscopic studies not only of the more common metabolites, but also, and especially, of those which, due to their smaller concentrations, are difficult to detect using 1.5-T systems. All of these advantages can be obtained with reduced acquisition times. In diffusion studies, the high-field strength results in greater SNR, because 3.0-T magnets enable increased spatial resolution, which enhances accuracy. They also allow exploration in greater detail of more complex phenomena (such as diffusion tensor and tractography), which are not clearly depicted on 1.5-T systems. The most common perfusion study (with intravenous injection of a contrast agent) benefits from the greater SNR and higher magnetic susceptibility by achieving dramatically improved signal changes, and thus greater reliability, using smaller doses of contrast agent. Functional MR imaging (fMRI) is without doubt the modality in which high-field strength has had the greatest impact. Images acquired with the blood-oxygen-level-dependent (BOLD) technique benefit from the greater SNR afforded by 3.0-T magnets and from their stronger magnetic susceptibility effects, providing higher signal and spatial resolution. 
This enhances reliability of the localisation of brain functions, making it possible to map additional areas, even in the millimetre and submillimetre scale. The data presented and results obtained to date show that 3.0-T morphofunctional imaging can become the standard for high-resolution investigation of brain disease.
Concept Mapping Improves Metacomprehension Accuracy among 7th Graders
ERIC Educational Resources Information Center
Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.
2012-01-01
Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…
Hao, Pengyu; Wang, Li; Niu, Zheng
2015-01-01
A range of single classifiers have been proposed to classify crop types using time series vegetation indices, and hybrid classifiers are used to improve discriminatory power. Traditional fusion rules use the product of multiple single classifiers, but that strategy cannot integrate the classification output of machine learning classifiers. In this research, the performance of two hybrid strategies, multiple voting (M-voting) and probabilistic fusion (P-fusion), for crop classification using NDVI time series was tested with different training sample sizes at both pixel and object levels, and two representative counties in northern Xinjiang were selected as the study area. The single classifiers employed in this research included Random Forest (RF), Support Vector Machine (SVM), and See5 (C5.0). The results indicated that classification performance improved substantially with the number of training samples (the mean overall accuracy increased by 5%~10%, and the standard deviation of overall accuracy fell by around 1%), and when the training sample size was small (50 or 100 training samples), hybrid classifiers substantially outperformed single classifiers, with mean overall accuracy higher by 1%~2%. However, when abundant training samples (4,000) were employed, single classifiers could achieve good classification accuracy, and all classifiers obtained similar performances. Additionally, although object-based classification did not improve accuracy, it resulted in greater visual appeal, especially in study areas with a heterogeneous cropping pattern. PMID:26360597
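The abstract does not spell out the fusion rules, so the sketch below is only a plausible minimal illustration of probabilistic fusion: average the per-class probability outputs of several classifiers and pick the class with the highest fused probability. All names and numbers are illustrative, not the paper's.

```python
def p_fusion(prob_maps):
    """Fuse per-class probability outputs of several classifiers by averaging.

    prob_maps: list of dicts, one per classifier, mapping class -> probability.
    Returns (winning class, fused probability dict).
    """
    classes = prob_maps[0].keys()
    fused = {c: sum(p[c] for p in prob_maps) / len(prob_maps) for c in classes}
    return max(fused, key=fused.get), fused
```

With few training samples, averaging lets a confident minority classifier be outvoted only when the others disagree strongly, which is one intuition for why fusion helps most in the small-sample regime.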
The Enigmatic Cornea and Intraocular Lens Calculations: The LXXIII Edward Jackson Memorial Lecture.
Koch, Douglas D
2016-11-01
To review the progress and challenges in obtaining accurate corneal power measurements for intraocular lens (IOL) calculations. Personal perspective, review of literature, case presentations, and personal data. Through literature review findings, case presentations, and data from the author's center, the types of corneal measurement errors that can occur in IOL calculation are categorized and described, along with discussion of future options to improve accuracy. Advances in IOL calculation technology and formulas have greatly increased the accuracy of IOL calculations. Recent reports suggest that over 90% of normal eyes implanted with IOLs may achieve accuracy to within 0.5 diopter (D) of the refractive target. Though errors in estimation of corneal power can cause IOL calculation errors in eyes with normal corneas, greater difficulties in measuring corneal power are encountered in eyes with diseased, scarred, and postsurgical corneas. For these corneas, problematic issues are quantifying anterior corneal power and measuring posterior corneal power and astigmatism. Results in these eyes are improving, but 2 examples illustrate current limitations: (1) spherical accuracy within 0.5 D is achieved in only 70% of eyes with post-refractive surgery corneas, and (2) astigmatism accuracy within 0.5 D is achieved in only 80% of eyes implanted with toric IOLs. Corneal power measurements are a major source of error in IOL calculations. New corneal imaging technology and IOL calculation formulas have improved outcomes and hold the promise of ongoing progress. Copyright © 2016 Elsevier Inc. All rights reserved.
Stiles, Joan; Stern, Catherine; Appelbaum, Mark; Nass, Ruth; Trauner, Doris; Hesselink, John
2008-01-01
Selective deficits in visuospatial processing are present early in development among children with perinatal focal brain lesions (PL). Children with right hemisphere PL (RPL) are impaired in configural processing, while children with left hemisphere PL (LPL) are impaired in featural processing. Deficits associated with LPL are less pervasive than those observed with RPL, but this difference may reflect the structure of the tasks used for assessment. Many of the tasks used to date may place greater demands on configural processing, thus highlighting this deficit in the RPL group. This study employed a task designed to place comparable demands on configural and featural processing, providing the opportunity to obtain within-task evidence of differential deficit. Sixty-two 5- to 14-year-old children (19 RPL, 19 LPL, and 24 matched controls) reproduced from memory a series of hierarchical forms (large forms composed of small forms). Global- and local-level reproduction accuracy was scored. Controls were equally accurate on global- and local-level reproduction. Children with RPL were selectively impaired on global accuracy, and children with LPL on local accuracy, thus documenting a double dissociation in global-local processing.
Application of Template Matching for Improving Classification of Urban Railroad Point Clouds
Arastounia, Mostafa; Oude Elberink, Sander
2016-01-01
This study develops an integrated data-driven and model-driven approach (template matching) that clusters the urban railroad point clouds into three classes of rail track, contact cable, and catenary cable. The employed dataset covers 630 m of the Dutch urban railroad corridors in which there are four rail tracks, two contact cables, and two catenary cables. The dataset includes only geometrical information (three dimensional (3D) coordinates of the points) with no intensity data and no RGB data. The obtained results indicate that all objects of interest are successfully classified at the object level with no false positives and no false negatives. The results also show that an average 97.3% precision and an average 97.7% accuracy at the point cloud level are achieved. The high precision and high accuracy of the rail track classification (both greater than 96%) at the point cloud level stems from the great impact of the employed template matching method on excluding the false positives. The cables also achieve quite high average precision (96.8%) and accuracy (98.4%) due to their high sampling and isolated position in the railroad corridor. PMID:27973452
Sub-pixel localisation of passive micro-coil fiducial markers in interventional MRI.
Rea, Marc; McRobbie, Donald; Elhawary, Haytham; Tse, Zion T H; Lamperth, Michael; Young, Ian
2009-04-01
Electromechanical devices enable increased accuracy in surgical procedures, and the recent development of MRI-compatible mechatronics permits the use of MRI for real-time image guidance. Integrated imaging of resonant micro-coil fiducials provides an accurate method of tracking devices in a scanner with increased flexibility compared to gradient tracking. Here we report on the ability of ten different image-processing algorithms to track micro-coil fiducials with sub-pixel accuracy. Five algorithms: maximum pixel, barycentric weighting, linear interpolation, quadratic fitting and Gaussian fitting were applied both directly to the pixel intensity matrix and to the cross-correlation matrix obtained by 2D convolution with a reference image. Using images of a 3 mm fiducial marker and a pixel size of 1.1 mm, intensity linear interpolation, which calculates the position of the fiducial centre by interpolating the pixel data to find the fiducial edges, was found to give the best performance for minimal computing power; a maximum error of 0.22 mm was observed in fiducial localisation for displacements up to 40 mm. The inherent standard deviation of fiducial localisation was 0.04 mm. This work enables greater accuracy to be achieved in passive fiducial tracking.
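One of the algorithms compared above, quadratic fitting, can be sketched as a classic three-point parabolic refinement of an integer peak location. This is a generic illustration under stated assumptions (1D intensity profile, hypothetical function name), not the authors' code.

```python
def subpixel_peak(profile, i):
    """Refine the integer peak index i with a three-point parabola fit.

    Fits y = a*x^2 + b*x + c through (i-1, i, i+1) and returns the vertex
    location; the sub-pixel offset is clamped to [-0.5, 0.5].
    """
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:  # flat neighbourhood: no refinement possible
        return float(i)
    offset = 0.5 * (y0 - y2) / denom
    return i + max(-0.5, min(0.5, offset))
```

A symmetric profile returns the integer peak unchanged, while an asymmetric one shifts the estimate toward the brighter neighbour, which is how sub-pixel accuracy well below the 1.1 mm pixel size becomes possible.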
Influence of non-level walking on pedometer accuracy.
Leicht, Anthony S; Crowther, Robert G
2009-05-01
The YAMAX Digiwalker pedometer has been previously confirmed as a valid and reliable monitor during level walking; however, little is known about its accuracy during non-level walking activities or between genders. Subsequently, this study examined the influence of non-level walking and gender on pedometer accuracy. Forty-six healthy adults completed 3-min bouts of treadmill walking at their normal walking pace during 11 inclines (0-10%) while another 123 healthy adults completed walking up and down 47 stairs. During walking, participants wore a YAMAX Digiwalker SW-700 pedometer, with the number of steps taken and the number registered by the pedometer recorded. Pedometer difference (steps registered − steps taken), net error (% of steps taken), absolute error (absolute % of steps taken) and gender were examined by repeated measures two-way ANOVA and Tukey's post hoc tests. During incline walking, pedometer accuracy indices were similar between inclines and gender except for a significantly greater step difference (−7 ± 5 steps vs. 1 ± 4 steps) and net error (−2.4 ± 1.8% for the 9% incline vs. 0.4 ± 1.2% for the 2% incline). Step difference and net error were significantly greater during stair descent compared to stair ascent while absolute error was significantly greater during stair ascent compared to stair descent. The current study demonstrated that the YAMAX Digiwalker SW-700 pedometer exhibited good accuracy during incline walking up to 10% while it overestimated steps taken during stair ascent/descent with greater overestimation during stair descent. Stair walking activity should be documented in field studies as the YAMAX Digiwalker SW-700 pedometer overestimates this activity type.
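The three accuracy indices named above follow directly from their definitions; a minimal sketch (illustrative function name):

```python
def pedometer_indices(steps_taken, steps_registered):
    """Pedometer accuracy indices used in validation studies.

    Returns (step difference, net error in % of steps taken,
    absolute error in % of steps taken).
    """
    diff = steps_registered - steps_taken
    net = 100.0 * diff / steps_taken
    return diff, net, abs(net)
```

Note that net error can cancel across participants (under- and over-counts offset), while absolute error cannot, which is why the two indices can rank stair ascent and descent differently.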
Brain structure and verbal function across adulthood while controlling for cerebrovascular risks.
Sanfratello, L; Lundy, S L; Qualls, C; Knoefel, J E; Adair, J C; Caprihan, A; Stephen, J M; Aine, C J
2017-04-08
The development and decline of brain structure and function throughout adulthood is a complex issue, with cognitive aging trajectories influenced by a host of factors including cerebrovascular risk. Neuroimaging studies of age-related cognitive decline typically reveal a linear decrease in gray matter (GM) volume/density in frontal regions across adulthood. However, white matter (WM) tracts mature later than GM, particularly in regions necessary for executive functions and memory. Therefore, it was predicted that a middle-aged group (MC: 35-45 years) would perform best on a verbal working memory task and reveal greater regional WM integrity, compared with both young (YC: 18-25 years) and elder (EC: 60+ years) groups. Diffusion tensor imaging (DTI) and magnetoencephalography (MEG) data were obtained from 80 healthy participants. Objective measures of cerebrovascular risk and cognition were also obtained. As predicted, MC showed the best verbal working memory accuracy overall, indicating some maturation of brain function between YC and MC. However, contrary to the prediction, fractional anisotropy (FA) values, a measure of WM integrity, were not greater in MC (i.e., there were no significant differences in FA between YC and MC, but both groups showed greater FA than EC). An overall multivariate model for MEG ROIs showed greater peak amplitudes for MC and YC, compared with EC. Subclinical cerebrovascular risk factors (systolic blood pressure and blood glucose) were negatively associated with FA in frontal callosal, limbic, and thalamic radiation regions, which correlated with executive dysfunction and slower processing speed, suggesting their contribution to age-related cognitive decline. Hum Brain Mapp, 2017. © 2017 Wiley Periodicals, Inc.
A study on rational function model generation for TerraSAR-X imagery.
Eftekhari, Akram; Saadatseresht, Mohammad; Motagh, Mahdi
2013-09-09
The Rational Function Model (RFM) has been widely used as an alternative to rigorous sensor models of high-resolution optical imagery in photogrammetry and remote sensing geometric processing. However, not much work has been done to evaluate the applicability of the RF model for Synthetic Aperture Radar (SAR) image processing. This paper investigates how to generate a Rational Polynomial Coefficient (RPC) for high-resolution TerraSAR-X imagery using an independent approach. The experimental results demonstrate that the RFM obtained using the independent approach fits the Range-Doppler physical sensor model with an accuracy of greater than 10⁻³ pixel. Because independent RPCs indicate absolute errors in geolocation, two methods can be used to improve the geometric accuracy of the RFM. In the first method, Ground Control Points (GCPs) are used to update SAR sensor orientation parameters, and the RPCs are calculated using the updated parameters. Our experiment demonstrates that by using three control points in the corners of the image, an accuracy of 0.69 pixels in range and 0.88 pixels in the azimuth direction is achieved. For the second method, we tested the use of an affine model for refining RPCs. In this case, by applying four GCPs in the corners of the image, the accuracy reached 0.75 pixels in range and 0.82 pixels in the azimuth direction.
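The second refinement method can be illustrated with a hedged sketch: fit a 2D affine correction from RFM-projected image coordinates to GCP-measured coordinates by least squares. The paper does not specify its solver; here the two independent 3-parameter systems are solved via normal equations and Cramer's rule. Function names and sample points are illustrative.

```python
def fit_affine(src, dst):
    """Least-squares 2D affine map from src (x, y) to dst (x', y').

    Needs >= 3 non-collinear correspondences; the two coordinate
    equations are solved independently via 3x3 normal equations.
    """
    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

    def solve3(A, b):
        d = det3(A)
        out = []
        for i in range(3):  # Cramer's rule: replace column i with b
            M = [row[:] for row in A]
            for r in range(3):
                M[r][i] = b[r]
            out.append(det3(M) / d)
        return out

    # Normal equations for the design matrix with columns [1, x, y]
    n = len(src)
    sx = sum(p[0] for p in src); sy = sum(p[1] for p in src)
    sxx = sum(p[0] * p[0] for p in src); syy = sum(p[1] * p[1] for p in src)
    sxy = sum(p[0] * p[1] for p in src)
    A = [[n, sx, sy], [sx, sxx, sxy], [sy, sxy, syy]]
    coeffs = []
    for k in (0, 1):  # k=0: x' equation, k=1: y' equation
        b = [sum(q[k] for q in dst),
             sum(p[0] * q[k] for p, q in zip(src, dst)),
             sum(p[1] * q[k] for p, q in zip(src, dst))]
        coeffs.append(solve3(A, b))
    return coeffs  # [[a0, a1, a2], [b0, b1, b2]]

def apply_affine(coeffs, x, y):
    (a0, a1, a2), (b0, b1, b2) = coeffs
    return a0 + a1 * x + a2 * y, b0 + b1 * x + b2 * y
```

With four GCPs in the image corners, as in the experiment above, the fit absorbs the systematic translation, scale, and shear components of the RPC geolocation error.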
Dustfall Effect on Hyperspectral Inversion of Chlorophyll Content - a Laboratory Experiment
NASA Astrophysics Data System (ADS)
Chen, Yuteng; Ma, Baodong; Li, Xuexin; Zhang, Song; Wu, Lixin
2018-04-01
Dust pollution is serious in many areas of China. It is of great significance to estimate the chlorophyll content of vegetation accurately by hyperspectral remote sensing, both for assessing vegetation growth status and for monitoring the ecological environment in dusty areas. Using selected vegetation indices, including the Medium Resolution Imaging Spectrometer Terrestrial Chlorophyll Index (MTCI), the Double Difference Index (DD), and the Red Edge Position Index (REP), chlorophyll inversion models were built to study the accuracy of hyperspectral inversion of chlorophyll content based on a laboratory experiment. The results show that: (1) the REP exponential model has the most stable accuracy for inversion of chlorophyll content in a dusty environment. When the dustfall amount is less than 80 g/m², the inversion accuracy based on REP is stable with the variation of dustfall amount; when the dustfall amount is greater than 80 g/m², the inversion accuracy fluctuates slightly. (2) The inversion accuracy of DD is the worst among the three models. (3) The MTCI logarithm model has high inversion accuracy when the dustfall amount is less than 80 g/m²; when the dustfall amount is greater than 80 g/m², its inversion accuracy decreases regularly, while the inversion accuracy of the modified MTCI (mMTCI) increases significantly. The results provide an experimental basis and theoretical reference for hyperspectral remote sensing inversion of chlorophyll content.
NASA Technical Reports Server (NTRS)
Rignot, Eric; Williams, Cynthia; Way, Jobea; Viereck, Leslie
1993-01-01
A maximum a posteriori Bayesian classifier for multifrequency polarimetric SAR data is used to perform a supervised classification of forest types in the floodplains of Alaska. The image classes include white spruce, balsam poplar, black spruce, alder, non-forests, and open water. The authors investigate the effect on classification accuracy of changing environmental conditions and of the frequency and polarization of the signal. The highest classification accuracy (86 percent correctly classified forest pixels, and 91 percent overall) is obtained by combining fully polarimetric L- and C-band data on a date when the forest is just recovering from flooding. The forest map compares favorably with a vegetation map assembled from digitized aerial photos, which took five years to complete and addresses the state of the forest in 1978, ignoring subsequent fires, changes in the course of the river, clear-cutting of trees, and tree growth. HV polarization is the most useful polarization at L- and C-band for classification. C-band VV (ERS-1 mode) and L-band HH (J-ERS-1 mode), alone or combined, yield unsatisfactory classification accuracies. Additional data acquired in the winter season during thawed and frozen days yield classification accuracies respectively 20 percent and 30 percent lower, due to greater confusion between conifers and deciduous trees. Data acquired at the peak of flooding in May 1991 also yield classification accuracies 10 percent lower because of dominant trunk-ground interactions, which mask out finer differences in radar backscatter between tree species. Combining several of these dates does not improve classification accuracy. For comparison, panchromatic optical data acquired by SPOT in the summer season of 1991 are used to classify the same area. 
The classification accuracy (78 percent for the forest types and 90 percent if open water is included) is lower than that obtained with AIRSAR although conifers and deciduous trees are better separated due to the presence of leaves on the deciduous trees. Optical data do not separate black spruce and white spruce as well as SAR data, cannot separate alder from balsam poplar, and are of course limited by the frequent cloud cover in the polar regions. Yet, combining SPOT and AIRSAR offers better chances to identify vegetation types independent of ground truth information using a combination of NDVI indexes from SPOT, biomass numbers from AIRSAR, and a segmentation map from either one.
The Effects of High- and Low-Anxiety Training on the Anticipation Judgments of Elite Performers.
Alder, David; Ford, Paul R; Causer, Joe; Williams, A Mark
2016-02-01
We examined the effects of high- versus low-anxiety conditions during video-based training of anticipation judgments using international-level badminton players facing serves and the transfer to high-anxiety and field-based conditions. Players were assigned to a high-anxiety training (HA), low-anxiety training (LA) or control group (CON) in a pretraining-posttest design. In the pre- and posttest, players anticipated serves from video and on court under high- and low-anxiety conditions. In the video-based high-anxiety pretest, anticipation response accuracy was lower and final fixations shorter when compared with the low-anxiety pretest. In the low-anxiety posttest, HA and LA demonstrated greater accuracy of judgments and longer final fixations compared with pretest and CON. In the high-anxiety posttest, HA maintained accuracy when compared with the low-anxiety posttest, whereas LA had lower accuracy. In the on-court posttest, the training groups demonstrated greater accuracy of judgments compared with the pretest and CON.
Accurate Measurement of Small Airways on Low-Dose Thoracic CT Scans in Smokers
Conradi, Susan H.; Atkinson, Jeffrey J.; Zheng, Jie; Schechtman, Kenneth B.; Senior, Robert M.; Gierada, David S.
2013-01-01
Background: Partial volume averaging and tilt relative to the scan plane on transverse images limit the accuracy of airway wall thickness measurements on CT scan, confounding assessment of the relationship between airway remodeling and clinical status in COPD. The purpose of this study was to assess the effect of partial volume averaging and tilt corrections on airway wall thickness measurement accuracy and on relationships between airway wall thickening and clinical status in COPD. Methods: Airway wall thickness measurements in 80 heavy smokers were obtained on transverse images from low-dose CT scan using the open-source program Airway Inspector. Measurements were corrected for partial volume averaging and tilt effects using an attenuation- and geometry-based algorithm and compared with functional status. Results: The algorithm reduced wall thickness measurements of smaller airways to a greater degree than larger airways, increasing the overall range. When restricted to analyses of airways with an inner diameter < 3.0 mm, for a theoretical airway of 2.0 mm inner diameter, the wall thickness decreased from 1.07 ± 0.07 to 0.29 ± 0.10 mm, and the square root of the wall area decreased from 3.34 ± 0.15 to 1.58 ± 0.29 mm, comparable to histologic measurement studies. Corrected measurements had higher correlation with FEV1, differed more between BMI, airflow obstruction, dyspnea, and exercise capacity (BODE) index scores, and explained a greater proportion of FEV1 variability in multivariate models. Conclusions: Correcting for partial volume averaging improves accuracy of airway wall thickness estimation, allowing direct measurement of the small airways to better define their role in COPD. PMID:23172175
Ghumman, Abul Razzaq; Al-Salamah, Ibrahim Saleh; AlSaleem, Saleem Saleh; Haider, Husnain
2017-02-01
The geomorphological instantaneous unit hydrograph (GIUH) usually uses geomorphologic parameters of a catchment estimated from a digital elevation model (DEM) for rainfall-runoff modeling of ungauged watersheds with limited data. Higher DEM resolutions (e.g., 5 or 10 m) play an important role in the accuracy of rainfall-runoff models; however, such resolutions are expensive to obtain and require much greater effort and time for preparation of inputs. In this research, a modeling framework is developed to evaluate the impact of lower DEM resolutions (i.e., 30 and 90 m) on the accuracy of the Clark GIUH model. Observed rainfall-runoff data of a 202-km² catchment in a semiarid region were used to develop direct runoff hydrographs for nine rainfall events. A geographical information system was used to process both DEMs. Model accuracy and errors were estimated by comparing the model results with the observed data. The study found (i) high model efficiencies, greater than 90%, for both resolutions, and (ii) that the efficiency of the Clark GIUH model does not significantly increase by enhancing the resolution of the DEM from 90 to 30 m. Thus, it is feasible to use lower-resolution (i.e., 90 m) DEMs in the estimation of peak runoff in ungauged catchments with relatively less effort. Through sensitivity analysis (Monte Carlo simulations), the kinematic wave parameter and stream length ratio are found to be the most significant parameters in velocity and peak flow estimations, respectively; thus, they need to be carefully estimated for calculation of direct runoff in ungauged watersheds using the Clark GIUH model.
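The abstract reports model efficiencies without naming the metric; in rainfall-runoff modeling this is conventionally the Nash-Sutcliffe efficiency, sketched below under that assumption.

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.

    1.0 is a perfect fit; values above 0.9 (i.e., 90%) indicate a
    very good hydrograph reproduction.
    """
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst
```

Because the metric normalizes by observed variance, efficiencies above 90% for both DEM resolutions imply the simulated hydrographs explain nearly all the variability in the observed runoff, regardless of catchment scale.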
The Speckle Toolbox: A Powerful Data Reduction Tool for CCD Astrometry
NASA Astrophysics Data System (ADS)
Harshaw, Richard; Rowe, David; Genet, Russell
2017-01-01
Recent advances in high-speed, low-noise CCD and CMOS cameras, coupled with breakthroughs in data reduction software that runs on desktop PCs, have opened the domain of speckle interferometry and high-accuracy CCD measurements of double stars to amateurs, allowing them to do useful science of high quality. This paper describes how to use a speckle interferometry reduction program, the Speckle Tool Box (STB), to achieve this level of result. For over a year the author (Harshaw) has been using STB (and its predecessor, Plate Solve 3) to obtain measurements of double stars based on CCD camera technology for pairs that are either too wide (the stars not sharing the same isoplanatic patch, roughly 5 arc-seconds in diameter) or too faint to image in the coherence time required for speckle (usually under 40 ms). This same approach, using speckle reduction software to measure CCD pairs with greater accuracy than is possible with lucky imaging, has been used, it turns out, for several years by the U.S. Naval Observatory.
Confined turbulent swirling recirculating flow predictions. Ph.D. Thesis. Final Report
NASA Technical Reports Server (NTRS)
Abujelala, M. T.; Lilley, D. G.
1985-01-01
The capability and accuracy of the STARPIC computer code in predicting confined turbulent swirling recirculating flows are presented. Inlet flow boundary conditions were demonstrated to be extremely important in simulating a flowfield via numerical calculations. The degree of swirl strength and the expansion ratio have strong effects on the characteristics of swirling flow. In a nonswirling flow, a large corner recirculation zone exists in the flowfield when the expansion ratio is greater than one. However, as the degree of inlet swirl increases, the size of this zone decreases and a central recirculation zone appears near the inlet. Generally, the size of the central zone increases with swirl strength and expansion ratio. Neither the standard k-epsilon turbulence model nor its previous extensions show effective capability for predicting confined turbulent swirling recirculating flows. However, either reduced optimum values of three parameters in the model, or the empirical C_mu formulation obtained via careful analysis of available turbulence measurements, can provide more acceptable accuracy in the prediction of these swirling flows.
A comparison of altimeter and gravimetric geoids in the Tonga Trench and Indian Ocean areas
NASA Technical Reports Server (NTRS)
Rapp, R. H.
1980-01-01
Geoids computed from GEOS-3 altimeter data are compared with gravimetric geoids computed by various techniques for 30 x 30 deg areas in the Tonga Trench and the Indian Ocean. The gravimetric geoids were calculated using the standard Stokes integration with the Molodenskii truncation procedure, the modified Stokes integration suggested by Ostach (1970) and Meissl (1971) with modified Molodenskii truncation functions, and three sets of potential coefficients, including one complete to degree 180. It is found that the modified Stokes procedure with a cap size of 10 deg provides better results when used with a combined altimeter-terrestrial anomaly field data set. Excellent agreement at the plus or minus 1 m level is obtained between the altimeter and gravimetric geoids using the combined data set, with the modified Stokes procedure having the greater accuracy. Coefficients derived from the 180 x 180 solution are found to be of an accuracy comparable to that of the modified Stokes method, while requiring about one-sixth the computational effort.
Aircraft to aircraft intercomparison during SEMAPHORE
NASA Astrophysics Data System (ADS)
Lambert, Dominique; Durand, Pierre
1998-10-01
During the Structure des Echanges Mer-Atmosphère, Propriétés des Hétérogénéités Océaniques: Recherche Expérimentale (SEMAPHORE) experiment, performed in the Azores region in 1993, two French research aircraft were simultaneously used for in situ measurements in the atmospheric boundary layer. We present the results obtained from one intercomparison flight between the two aircraft. The mean parameters generally agree well, although the temperature had to be slightly shifted to bring the two aircraft into agreement. A detailed comparison of the turbulence parameters revealed no bias. The agreement is good for variances and satisfactory for fluxes and skewness. A thorough study of the errors involved in flux computation revealed that the greatest accuracy is obtained for the latent heat flux. Errors in the sensible heat flux are considerably greater, and the worst results are obtained for the momentum flux. The latter parameter, however, is more accurate than expected from previous parameterizations.
A neural network based model for urban noise prediction.
Genaro, N; Torija, A; Ramos-Ridao, A; Requena, I; Ruiz, D P; Zamorano, M
2010-10-01
Noise is a global problem. In 1972 the World Health Organization (WHO) classified noise as a pollutant. Since then, most industrialized countries have enacted laws and local regulations to prevent and reduce acoustic environmental pollution. A further aim is to alert people to the dangers of this type of pollution. In this context, urban planners need tools that allow them to evaluate the degree of acoustic pollution. Scientists in many countries have modeled urban noise using a wide range of approaches, but their results have not been as good as expected. This paper describes a model developed for the prediction of environmental urban noise using Soft Computing techniques, namely Artificial Neural Networks (ANN). The model is based on the analysis of variables regarded as influential by experts in the field and was applied to data collected on different types of streets. The results were compared to those obtained with other models. The study found that the ANN system was able to predict urban noise with greater accuracy and was thus an improvement over those models. Principal component analysis (PCA) was also used to simplify the model. Although there was a slight decline in the accuracy of the results, the values obtained were still quite acceptable.
Doyle-Thomas, Krissy A.R.; Goldberg, Jeremy; Szatmari, Peter; Hall, Geoffrey B.C.
2013-01-01
Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n = 18) and typically developing controls (n = 16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviors, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that in the absence of engaging integrative emotional networks during audiovisual emotion matching, teens with ASD may have recruited the parietofrontal network as an alternate compensatory system. PMID:23750139
Skinner, Kenneth D.
2011-01-01
High-quality elevation data in riverine environments are important for fisheries management applications, and the accuracy of such data needs to be determined for its proper application. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging), or EAARL, system was used to obtain topographic and bathymetric data along the Deadwood and South Fork Boise Rivers in west-central Idaho. The EAARL data were post-processed into bare earth and bathymetric raster and point datasets. Concurrently with the EAARL surveys, real-time kinematic global positioning system surveys were made in three areas along each of the rivers to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The accuracies of the EAARL-derived raster elevation values, determined in open, flat terrain to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.134 to 0.347 m. Accuracies in the elevation values for the stream hydrogeomorphic settings had root mean square errors ranging from 0.251 to 0.782 m. The greater root mean square errors for the latter data are the result of complex hydrogeomorphic environments within the streams, such as submerged aquatic macrophytes and air bubble entrainment, and those along the banks, such as boulders, woody debris, and steep slopes. These complex environments reduce the accuracy of EAARL bathymetric and topographic measurements. Steep banks emphasize the horizontal location discrepancies between the EAARL and ground-survey data and may not be good representations of vertical accuracy. The EAARL point to ground-survey comparisons produced root mean square errors slightly higher than, but similar to, those for the EAARL raster to ground-survey comparisons, emphasizing that the horizontal offset is minimized by using interpolated values from the raster dataset at the exact location of the ground-survey point rather than an actual EAARL point within a 1-meter distance.
The average error for the wetted stream channel surface areas was -0.5 percent, while the average error for the wetted stream channel volume was -8.3 percent. The volume of the wetted river channel was underestimated by an average of 31 percent in half of the survey areas, and overestimated by an average of 14 percent in the remainder of the survey areas. The EAARL system is an efficient way to obtain topographic and bathymetric data in large areas of remote terrain. The elevation accuracy of the EAARL system varies throughout the area depending upon the hydrogeomorphic setting, preventing the use of a single accuracy value to describe the EAARL system. The elevation accuracy variations should be kept in mind when using the data, such as for hydraulic modeling or aquatic habitat assessments.
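The vertical-accuracy figures above reduce to a root-mean-square error over paired elevations. A minimal sketch; the check-point values below are hypothetical and stand in for the study's ground-survey and EAARL raster data:

```python
import math

def rmse(survey, lidar):
    """Root mean square error (m) between ground-survey and LiDAR elevations."""
    residuals = [s - l for s, l in zip(survey, lidar)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical check points (metres): RTK GPS ground truth vs. LiDAR raster values
survey = [1203.10, 1204.55, 1202.80, 1205.00]
lidar  = [1203.25, 1204.30, 1203.10, 1204.80]
print(round(rmse(survey, lidar), 3))
```

The same routine applied separately to open-terrain and in-stream check points reproduces the kind of setting-by-setting comparison reported above.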
Accuracy of tablet splitting and liquid measurements: an examination of who, what and how.
Abu-Geras, Dana; Hadziomerovic, Dunja; Leau, Andrew; Khan, Ramzan Nazim; Gudka, Sajni; Locher, Cornelia; Razaghikashani, Maryam; Lim, Lee Yong
2017-05-01
To examine factors that might affect the ability of patients to accurately halve tablets or measure a 5-ml liquid dose. Eighty-eight participants split four different placebo tablets by hand and using a tablet splitter, while 85 participants measured 5 ml of water, 0.5% methylcellulose (MC) and 1% MC using a syringe and a dosing cup. Accuracy of manipulation was determined by mass measurements. The general population was less able than pharmacy students to break tablets into equal parts, although age, gender and prior experience were insignificant factors. Greater accuracy of tablet halving was observed with the tablet splitter, with scored tablets split more equally than unscored tablets. Tablet size did not affect the accuracy of splitting. However, >25% of small scored tablets failed to be split by hand, and 41% of large unscored tablets were split into >2 portions in the tablet splitter. In liquid measurement, the syringe provided more accurate volume measurements than the dosing cup, with higher accuracy observed for the more viscous MC solutions than for water. Formulation characteristics and manipulation technique have a greater influence on the accuracy of medication modification and should be considered in off-label drug use in vulnerable populations. © 2016 Royal Pharmaceutical Society.
Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth
2010-01-01
Abstract Background A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results Use of the background current correction of 4 nA led to a substantial improvement in accuracy (improvement of absolute relative difference or absolute difference of 3.5–5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
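The correction described above amounts to subtracting an estimated background current before converting current to glucose. A hedged sketch assuming a one-point calibration and the 4 nA background reported in the abstract; the function names and example currents are illustrative, not the authors' algorithm:

```python
def calibrate_sensitivity(i_cal_nA, g_ref_mgdl, i_bg_nA=4.0):
    """Sensitivity (nA per mg/dL) from a single-point calibration,
    after subtracting the assumed 4 nA background current."""
    return (i_cal_nA - i_bg_nA) / g_ref_mgdl

def estimate_glucose(i_nA, sensitivity, i_bg_nA=4.0):
    """Convert a raw sensor current to a glucose estimate (mg/dL)."""
    return (i_nA - i_bg_nA) / sensitivity

# Calibration: 34 nA measured against a 100 mg/dL reference sample
s = calibrate_sensitivity(34.0, 100.0)   # (34 - 4) / 100 = 0.30 nA per mg/dL
print(estimate_glucose(22.0, s))         # (22 - 4) / 0.30 = 60.0 mg/dL
```

Setting `i_bg_nA=0.0` reproduces the uncorrected baseline the study compares against: the same 22 nA reading would then map to a higher glucose value, which is the accuracy bias the correction removes.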
Integrating conventional and inverse representation for face recognition.
Xu, Yong; Li, Xuelong; Yang, Jian; Lai, Zhihui; Zhang, David
2014-10-01
Representation-based classification methods are all constructed on the basis of the conventional representation, which first expresses the test sample as a linear combination of the training samples and then exploits the deviation between the test sample and the expression result of every class to perform classification. However, this deviation does not always reflect the difference between the test sample and each class well. In this paper, we propose a novel representation-based classification method for face recognition. This method integrates conventional and inverse representation-based classification for better face recognition. It first produces the conventional representation of the test sample, i.e., uses a linear combination of the training samples to represent the test sample. Then it obtains the inverse representation, i.e., provides an approximate representation of each training sample of a subject by exploiting the test sample and the training samples of the other subjects. Finally, the proposed method exploits the conventional and inverse representations to generate two kinds of scores of the test sample with respect to each class and combines them to recognize the face. The paper shows the theoretical foundation and rationale of the proposed method. Moreover, this paper for the first time shows that a basic property of the human face, i.e., its symmetry, can be exploited to generate new training and test samples. As these new samples reflect possible appearances of the face, using them enables us to obtain higher accuracy. The experiments show that the proposed conventional and inverse representation-based linear regression classification (CIRLRC), an improvement to linear regression classification (LRC), can obtain very high accuracy and greatly outperforms naive LRC and other state-of-the-art conventional representation-based face recognition methods. The accuracy of CIRLRC can be 10% greater than that of LRC.
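The conventional representation step that CIRLRC builds on, plain linear regression classification (LRC), can be sketched as follows. The toy 3-D "face" vectors are invented stand-ins for real image features:

```python
import numpy as np

def lrc_predict(test, class_samples):
    """Linear regression classification: express the test vector as a
    least-squares combination of each class's training vectors and
    return the class with the smallest reconstruction residual."""
    best_class, best_err = None, np.inf
    for label, samples in class_samples.items():
        A = np.column_stack(samples)                  # columns = training samples
        coef, *_ = np.linalg.lstsq(A, test, rcond=None)
        err = np.linalg.norm(test - A @ coef)         # class-specific deviation
        if err < best_err:
            best_class, best_err = label, err
    return best_class

# Toy data: class 0 vectors lie near the x-axis, class 1 vectors near the y-axis
classes = {
    0: [np.array([1.0, 0.1, 0.0]), np.array([0.9, 0.0, 0.1])],
    1: [np.array([0.1, 1.0, 0.0]), np.array([0.0, 0.9, 0.1])],
}
print(lrc_predict(np.array([0.95, 0.05, 0.05]), classes))
```

CIRLRC adds a second, inverse score per class and fuses the two; the sketch above covers only the conventional half the abstract describes first.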
Zhang, Xinming; Liu, Xin; Sun, Fengbo; Li, Shouchuan; Gao, Wei; Wang, Ye
2017-02-01
To evaluate the diagnostic value of cytological greater omental milky spot examination for the diagnosis of peritoneal metastasis in gastric cancer patients. A total of 136 patients diagnosed with gastric cancer and without distant metastasis were enrolled in our study. All patients underwent laparoscopy and CH40 suspension liquid dye of peritoneal lymph nodes preoperatively, as well as ascites or peritoneal lavage fluid collection and excision of marked greater omental milky spot tissues perioperatively. According to the laparoscopic results, the patients were divided into a T1-T2 stage group (n = 56), without tumor invasion into the serosal layer, and a T3-T4 stage group (n = 80), with invasion. Among the T1-T2-stage patients, tumor cells could be detected in peritoneal lavage fluids in 2 cases, whereas with greater omental milky spot examination, peritoneal metastasis was detected in 8 cases. Among the 80 cases in the T3-T4 stage, tumor cells were detected in 28 cases via peritoneal lavage cytology and in 43 cases by greater omental milky spot examination, and 4 cases had cancer cell infiltration also in non-milky-spot omental areas. The statistical analysis showed that the staging accuracy rate of exfoliative cytology examination was superior to that of laparoscopic exploration (P < .05), but its sensitivity was significantly lower than that obtained with cytological greater omental milky spot examination (P < .05). Laparoscopic exploration can make a preliminary diagnosis of peritoneal metastasis via serosal layer invasion detection. For further analysis, cytological examination of greater omental milky spots was more sensitive than exfoliative cytology.
NASA Technical Reports Server (NTRS)
Kast, J. R.
1988-01-01
The Upper Atmosphere Research Satellite (UARS) is a three-axis stabilized Earth-pointing spacecraft in a low-Earth orbit. The UARS onboard computer (OBC) uses a Fourier Power Series (FPS) ephemeris representation that includes 42 position and 42 velocity coefficients per axis, with position residuals at 10-minute intervals. New coefficients and 32 hours of residuals are uploaded daily. This study evaluated two backup methods that permit the OBC to compute an approximate spacecraft ephemeris in the event that new ephemeris data cannot be uplinked for several days: (1) extending the use of the FPS coefficients previously uplinked, and (2) switching to a simple circular orbit approximation designed and tested (but not implemented) for LANDSAT-D. The FPS method provides greater accuracy during the backup period and does not require additional ground operational procedures for generating and uplinking an additional ephemeris table. The tradeoff is that the high accuracy of the FPS will be degraded slightly by adopting the longer fit period necessary to obtain backup accuracy for an extended period of time. The results for UARS show that extended use of the FPS is superior to the circular orbit approximation for short-term ephemeris backup.
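A Fourier Power Series ephemeris of the kind described is evaluated by summing harmonics of the orbital frequency. A minimal one-axis sketch with a hypothetical single-harmonic circular orbit; the real UARS representation carries 42 position and 42 velocity coefficients per axis plus tabulated residuals:

```python
import math

def fps_position(t, a0, coeffs, period):
    """Evaluate one axis of a Fourier Power Series ephemeris at time t (s):
    x(t) = a0 + sum_k [a_k*cos(k*w*t) + b_k*sin(k*w*t)], with w = 2*pi/period."""
    w = 2.0 * math.pi / period
    x = a0
    for k, (a_k, b_k) in enumerate(coeffs, start=1):
        x += a_k * math.cos(k * w * t) + b_k * math.sin(k * w * t)
    return x

# Hypothetical circular orbit: 7000 km amplitude, 96-minute period, one harmonic
period = 96 * 60.0
print(round(fps_position(0.0, 0.0, [(7000.0, 0.0)], period), 1))
```

Extending the backup period, as the study proposes, means fitting the same coefficient set over a longer arc, which trades a little short-term accuracy for validity over several days.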
EEG source localization: Sensor density and head surface coverage.
Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don
2015-12-30
The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using commonly adopted head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
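One common linear inverse weight technique of the kind examined is the regularized minimum-norm estimate. A toy sketch with an invented 3-sensor, 2-source lead field; real head models have thousands of sources and conductivity-dependent lead fields:

```python
import numpy as np

def minimum_norm(L, v, lam=1e-2):
    """Regularized minimum-norm inverse: the source estimate s minimizing
    ||L s - v||^2 + lam*||s||^2, i.e. s = (L^T L + lam*I)^-1 L^T v."""
    n = L.shape[1]
    return np.linalg.solve(L.T @ L + lam * np.eye(n), L.T @ v)

# Toy lead field: rows = 3 scalp sensors, columns = 2 candidate sources
L = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
v = L @ np.array([1.0, 0.0])      # scalp potentials from a unit source at site 1
s_hat = minimum_norm(L, v)
print(round(float(s_hat[0]), 2), round(float(s_hat[1]), 2))
```

Denser sampling enlarges the row count of `L`, which is one way to see why added sensors (especially on poorly covered inferior surfaces) constrain the inverse better.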
Heart imaging: the accuracy of the 64-MSCT in the detection of coronary artery disease.
Alessandri, N; Di Matteo, A; Rondoni, G; Petrassi, M; Tufani, F; Ferrari, R; Laghi, A
2009-01-01
At present, coronary angiography represents the gold standard technique for the diagnosis of coronary artery disease. Our aim is to compare conventional coronary angiography to coronary 64-multislice spiral computed tomography (64-MSCT), a new and non-invasive cardiac imaging technique. The latest generation of MSCT scanners shows better imaging quality, due to greater spatial and temporal resolution. Four expert observers (two cardiologists and two radiologists) compared the angiographic data with the accuracy of the 64-MSCT in the detection and evaluation of coronary vessel stenoses. From the data obtained, the sensitivity, the specificity and the accuracy of the coronary 64-MSCT have been defined. We enrolled 75 patients (57 male, 18 female, mean age 61.83 +/- 10.38; range 30-80 years) with known or suspected coronary artery disease. The above population was divided into 3 groups: Group A (Gr. A) with 40 patients (mean age 60.7 +/- 12.5) affected by both non-significant and significant coronary artery disease; Group B (Gr. B) with 25 patients (mean age 60.3 +/- 14.6) who underwent percutaneous coronary intervention (PCI); Group C (Gr. C) with 10 patients (mean age 54.20 +/- 13.7) without any coronary angiographic stenoses. All the patients underwent non-invasive exams, conventional coronary angiography and coronary 64-MSCT. The comparison of the data obtained was carried out according to a per-group analysis, per-patient analysis and per-segment analysis. Moreover, the accuracy of the 64-MSCT was defined for the detection of >75%, 50-75% and <50% coronary stenoses. Coronary angiography identified significant coronary artery disease in 75% of the patients in Gr. A and in 73% of the patients in Gr. B. No coronary stenoses were detected in Gr. C. According to a per-segment analysis, in Gr.
A, 36% of the segments analysed showed a coronary stenosis (37% stenoses >75%, 32% stenoses 50-75% and 31% stenoses <50%). In Gr. B, 32% of the segments showed a coronary stenosis (33% stenoses >75%, 29% stenoses 50-75% and 38% stenoses <50%). In-stent disease was shown in only 4 of the 29 coronary stents identified. In Gr. A, coronary 64-MSCT confirmed the angiographic results in 93% of cases (sensitivity 93%, specificity 100%, positive predictive value 100% and negative predictive value 83%), while in Gr. B this confirmation was obtained in only 64% of cases (sensitivity 64%, specificity 100%, positive predictive value 100% and negative predictive value 50%). In Gr. C, we observed complete agreement between angiographic and CT data (sensitivity, specificity, positive predictive value and negative predictive value 100%). According to a per-segment analysis, the angiographic results were confirmed in 98% of cases in Gr. A (sensitivity 98%, specificity 94%, positive predictive value 90% and negative predictive value 94%) but in only 55% of cases in Gr. B (sensitivity 55%, specificity 90%, positive predictive value 71% and negative predictive value 81%). Moreover, only 1 of the 4 in-stent restenoses was detected (sensitivity 25%, specificity 100%, positive predictive value 100% and negative predictive value 77%). Coronary angiography detected a greater number of coronary stenoses than the 64-MSCT. 64-MSCT demonstrated better accuracy in the study of coronary vessels wider than 2 mm, while its accuracy is lower for smaller vessels (diameter < 2.5 mm) and for the identification of in-stent restenosis, because of the reduced image quality for these vessels and therefore a lower accuracy in coronary stenosis detection. Nevertheless, 64-MSCT shows high accuracy and can be considered a complementary, but not a substitute, examination to coronary angiography.
Several technical limitations of the 64-MSCT are responsible for its lower accuracy compared with conventional coronary angiography, but solving these technical problems could provide a new non-invasive imaging technique for the study of coronary stents.
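The sensitivity, specificity, and predictive values quoted throughout follow directly from a 2x2 confusion table with angiography as the reference standard. A sketch with hypothetical counts, not the study's raw data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy figures from a 2x2 confusion table
    (tp/fp/fn/tn counted against the reference-standard result)."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for illustration only
m = diagnostic_metrics(tp=28, fp=0, fn=2, tn=10)
print(round(m["sensitivity"], 2), round(m["npv"], 2))
```

A specificity and PPV of 100% with a lower NPV, as in the study's Gr. A and Gr. B results, corresponds to `fp=0` with some missed stenoses (`fn > 0`) in a table like this.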
A drag-free Lo-Lo satellite system for improved gravity field measurements
NASA Technical Reports Server (NTRS)
Fischell, R. E.; Pisacane, V. L.
1978-01-01
At very low altitudes, the effect of atmospheric drag results in drastically reduced orbit lifetimes and considerable uncertainty in satellite motions. The concept suggested herein employs a DISturbance COmpensation System (DISCOS) on each of a pair of satellites at very low altitudes to provide refined measurements of the earth's gravitational field. The DISCOS maintains the satellites in orbit and essentially eliminates motion uncertainties due mostly to drag and, to a lesser extent, to solar radiation pressure. By a closed-loop measurement of the relative range rate between the two low satellites, one can determine the earth's gravitational field with considerably greater accuracy than could be obtained by tracking a single satellite.
Frequency standards requirements of the NASA deep space network to support outer planet missions
NASA Technical Reports Server (NTRS)
Fliegel, H. F.; Chao, C. C.
1974-01-01
Navigation of Mariner spacecraft to Jupiter and beyond will require greater accuracy of positional determination than heretofore obtained if the full experimental capabilities of this type of spacecraft are to be utilized. Advanced navigational techniques that will be available by 1977 include Very Long Baseline Interferometry (VLBI), three-way Doppler tracking (sometimes called quasi-VLBI), and two-way Doppler tracking. It is shown that VLBI and quasi-VLBI methods depend on the same basic concept and that they impose nearly the same requirements on the stability of frequency standards at the tracking stations. It is also shown how realistic modelling of spacecraft navigational errors prevents overspecifying the frequency-stability requirements.
An evaluation of superminicomputers for thermal analysis
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Vidal, J. B.; Jones, G. K.
1962-01-01
The feasibility and cost effectiveness of solving thermal analysis problems on superminicomputers is demonstrated. Conventional thermal analysis and the changing computer environment, computer hardware and software used, six thermal analysis test problems, performance of superminicomputers (CPU time, accuracy, turnaround, and cost) and comparison with large computers are considered. Although the CPU times for superminicomputers were 15 to 30 times greater than the fastest mainframe computer, the minimum cost to obtain the solutions on superminicomputers was from 11 percent to 59 percent of the cost of mainframe solutions. The turnaround (elapsed) time is highly dependent on the computer load, but for large problems, superminicomputers produced results in less elapsed time than a typically loaded mainframe computer.
A well-scaling natural orbital theory
Gebauer, Ralph; Cohen, Morrel H.; Car, Roberto
2016-11-01
Here, we introduce an energy functional for ground-state electronic structure calculations. Its variables are the natural spin-orbitals of singlet many-body wave functions and their joint occupation probabilities deriving from controlled approximations to the two-particle density matrix that yield algebraic scaling in general, and Hartree–Fock scaling in its seniority-zero version. Results from the latter version for small molecular systems are compared with those of highly accurate quantum-chemical computations. The energies lie above full configuration interaction calculations, close to doubly occupied configuration interaction calculations. Their accuracy is considerably greater than that obtained from current density-functional theory approximations and from current functionals of the one-particle density matrix.
Analysis of Temperature Maps of Selected Dawn Data Over the Surface of Vesta
NASA Technical Reports Server (NTRS)
Tosi, F.; Capria, M. T.; DeSanctis, M. C.; Palomba, E.; Grassi, D.; Capaccioni, F.; Ammannito, E.; Combe, J.-Ph.; Sunshine, J. M.; McCord, T. B.;
2012-01-01
The thermal behavior of areas of unusual albedo at the surface of Vesta can be related to physical properties that may provide some information about the origin of those materials. Dawn's Visible and Infrared Mapping Spectrometer (VIR) [1] hyperspectral cubes can be used to retrieve surface temperatures. Due to instrumental constraints, high accuracy is obtained only if temperatures are greater than 180 K. Bright and dark surface materials on Vesta are currently investigated by the Dawn team [e.g., 2 and 3 respectively]. Here we present temperature maps of several local-scale features that were observed by Dawn under different illumination conditions and different local solar times.
Sheffield, Catherine A; Kane, Michael P; Bakst, Gary; Busch, Robert S; Abelseth, Jill M; Hamilton, Robert A
2009-09-01
This study compared the accuracy and precision of four value-added glucose meters. Finger stick glucose measurements in diabetes patients were performed using the Abbott Diabetes Care (Alameda, CA) Optium, Diagnostic Devices, Inc. (Miami, FL) DDI Prodigy, Home Diagnostics, Inc. (Fort Lauderdale, FL) HDI True Track Smart System, and Arkray, USA (Minneapolis, MN) HypoGuard Assure Pro. Finger glucose measurements were compared with laboratory reference results. Accuracy was assessed by a Clarke error grid analysis (EGA), a Parkes EGA, and within 5%, 10%, 15%, and 20% of the laboratory value criteria (chi-squared analysis). Meter precision was determined by calculating absolute mean differences in glucose values between duplicate samples (Kruskal-Wallis test). Finger sticks were obtained from 125 diabetes patients, of whom 90.4% were Caucasian and 51.2% were female; 83.2% had type 2 diabetes, with an average age of 59 years (SD 14 years). Mean venipuncture blood glucose was 151 mg/dL (SD +/-65 mg/dL; range, 58-474 mg/dL). Clinical accuracy by Clarke EGA was demonstrated in 94% of Optium, 82% of Prodigy, 61% of True Track, and 77% of Assure Pro samples (P < 0.05 for Optium and True Track compared to all others). By Parkes EGA, the True Track was significantly less accurate than the other meters. Within 5% accuracy was achieved in 34%, 24%, 29%, and 13% of samples, respectively (P < 0.05 for Optium, Prodigy, and Assure Pro compared to True Track). Within 10% accuracy was significantly greater for the Optium, Prodigy, and Assure Pro compared to True Track. Significantly more Optium results demonstrated within 15% and 20% accuracy compared to the other meter systems. The HDI True Track was significantly less precise than the other meter systems. The Abbott Optium was significantly more accurate than the other meter systems, whereas the HDI True Track was significantly less accurate and less precise compared to the other meter systems.
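The "within X%" criterion used above is simply the fraction of meter readings falling within a relative tolerance of the laboratory value. A sketch with invented paired readings:

```python
def within_pct(meter, reference, pct):
    """Fraction of meter readings within +/- pct% of the lab reference value."""
    hits = sum(1 for m, r in zip(meter, reference)
               if abs(m - r) <= (pct / 100.0) * r)
    return hits / len(meter)

# Hypothetical paired readings (mg/dL): meter vs. laboratory venipuncture
meter = [100, 160, 95, 210]
ref   = [104, 150, 110, 200]
print(within_pct(meter, ref, 10))
```

Tightening `pct` from 20 down to 5 reproduces the nested accuracy bands the study reports, with the hit fraction shrinking as the tolerance narrows.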
Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo
2009-04-01
The aim of the current study was to examine whether partly incorrect subtitles that are automatically generated by an Automatic Speech Recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al. 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when presenting partly incorrect, automatically generated subtitles. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. In order to investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years) and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated the listening effort. We examined the influences of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated measures General Linear Model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity and the audiovisual benefit and listening effort. The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. 
Higher ASR accuracy levels resulted in more benefit obtained from the subtitles. Speech comprehension improved even for relatively low ASR accuracy levels; for example, participants obtained about 2 dB SNR audiovisual benefit for ASR accuracies around 74%. Delaying the presentation of the text reduced the benefit and increased the listening effort. Participants with relatively low unimodal speech comprehension obtained greater benefit from the subtitles than participants with better unimodal speech comprehension. We observed an age-related decline in the working-memory capacity of the listeners with normal hearing. A higher age and a lower working memory capacity were associated with increased effort required to use the subtitles to improve speech comprehension. Participants were able to use partly incorrect and delayed subtitles to increase their comprehension of speech in noise, regardless of age and hearing loss. This supports the further development and evaluation of an assistive listening system that displays automatically recognized speech to aid speech comprehension by listeners with hearing impairment.
Analysis of nutrition judgments using the Nutrition Facts Panel.
González-Vallejo, Claudia; Lavins, Bethany D; Carter, Kristina A
2016-10-01
Consumers' judgments and choices of the nutritional value of food products (cereals and snacks) were studied as a function of using information in the Nutrition Facts Panel (NFP, National Labeling and Education Act, 1990). Brunswik's lens model (Brunswik, 1955; Cooksey, 1996; Hammond, 1955; Stewart, 1988) served as the theoretical and analytical tool for examining the judgment process. Lens model analysis was further enriched with the criticality of predictors' technique developed by Azen, Budescu, & Reiser (2001). Judgment accuracy was defined as correspondence between consumers' judgments and the nutritional quality index, NuVal(®), obtained from an expert system. The study also examined several individual level variables (e.g., age, gender, BMI, educational level, health status, health beliefs, etc.) as predictors of lens model indices that measure judgment consistency, judgment accuracy, and knowledge of the environment. Results showed varying levels of consistency and accuracy depending on the food product, but generally the median values of the lens model statistics were moderate. Judgment consistency was higher for more educated individuals; judgment accuracy was predicted from a combination of person level characteristics, and individuals who reported having regular meals had models that were in greater agreement with the expert's model. Lens model methodology is a useful tool for understanding how individuals perceive the nutrition in foods based on the NFP label. Lens model judgment indices were generally low, highlighting that the benefits of the complex NFP label may be more modest than what has been previously assumed. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paret, Paul P; DeVoto, Douglas J; Narumanchi, Sreekant V
Sintered silver has proven to be a promising candidate for use as a die-attach and substrate-attach material in automotive power electronics components. It holds promise of greater reliability than lead-based and lead-free solders, especially at higher temperatures (below 200 degrees Celsius). Accurate predictive lifetime models of sintered silver need to be developed and its failure mechanisms thoroughly characterized before it can be deployed as a die-attach or substrate-attach material in wide-bandgap device-based packages. We present a finite element method (FEM) modeling methodology that can offer greater accuracy in predicting the failure of sintered silver under accelerated thermal cycling. A fracture mechanics-based approach is adopted in the FEM model, and J-integral/thermal cycle values are computed. In this paper, we outline the procedures for obtaining the J-integral/thermal cycle values in a computational model and report on the possible advantage of using these values as modeling parameters in a predictive lifetime model.
Malik, Bilal H.; Jabbour, Joey M.; Maitland, Kristen C.
2015-01-01
Automatic segmentation of nuclei in reflectance confocal microscopy images is critical for visualization and rapid quantification of nuclear-to-cytoplasmic ratio, a useful indicator of epithelial precancer. Reflectance confocal microscopy can provide three-dimensional imaging of epithelial tissue in vivo with sub-cellular resolution. Changes in nuclear density or nuclear-to-cytoplasmic ratio as a function of depth obtained from confocal images can be used to determine the presence or stage of epithelial cancers. However, low nuclear-to-background contrast, low resolution at greater imaging depths, and significant variation in the reflectance signal of nuclei complicate the segmentation required for quantification of nuclear-to-cytoplasmic ratio. Here, we present an automated method to segment nuclei in reflectance confocal images using a pulse coupled neural network algorithm, specifically a spiking cortical model, and an artificial neural network classifier. The segmentation algorithm was applied to an image model of nuclei with varying nuclear-to-background contrast. Greater than 90% of simulated nuclei were detected for contrast of 2.0 or greater. Confocal images of porcine and human oral mucosa were used to evaluate application to epithelial tissue. Segmentation accuracy was assessed using manual segmentation of nuclei as the gold standard. PMID:25816131
Reverse phase HPLC method for detection and quantification of lupin seed γ-conglutin.
Mane, Sharmilee; Bringans, Scott; Johnson, Stuart; Pareek, Vishnu; Utikar, Ranjeet
2017-09-15
A simple, selective and accurate reverse phase HPLC method was developed for detection and quantitation of γ-conglutin from lupin seed extract. A linear gradient of water and acetonitrile containing trifluoroacetic acid (TFA) on a reverse phase column (Agilent Zorbax 300SB C-18), with a flow rate of 0.8 ml/min, was able to produce a sharp and symmetric peak of γ-conglutin with a retention time of 29.16 min. The identity of γ-conglutin in the peak was confirmed by mass spectrometry (MS/MS identification) and sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) analysis. The data obtained from MS/MS analysis was matched against the specified database to obtain the exact match for the protein of interest. The proposed method was validated in terms of specificity, linearity, sensitivity, precision, recovery and accuracy. The analytical parameters revealed that the validated method was capable of selectively performing a good chromatographic separation of γ-conglutin from the lupin seed extract with no interference of the matrix. The detection and quantitation limits of γ-conglutin were found to be 2.68 μg/ml and 8.12 μg/ml, respectively. The accuracy (precision and recovery) analysis of the method was conducted under repeatable conditions on different days. Intra-day and inter-day precision values less than 0.5% and recovery greater than 97% indicated high precision and accuracy of the method for analysis of γ-conglutin. The method validation findings were reproducible and can be successfully applied for routine analysis of γ-conglutin from lupin seed extract. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Sayandev; Campbell, Emily L.; Neiner, Doinita
To date, only limited thermodynamic models describing activity coefficients of the aqueous solutions of lanthanide ions are available. This work expands the existing experimental osmotic coefficient data obtained by the classical isopiestic technique for aqueous binary trivalent lanthanide nitrate Ln(NO3)3 solutions using a combination of water activity and vapor pressure osmometry measurements. The combined osmotic coefficient database for each aqueous lanthanide nitrate at 25°C, consisting of literature-available data as well as data obtained in this work, was used to test the validity of the Pitzer and Bromley thermodynamic models for the accurate prediction of mean molal activity coefficients of the Ln(NO3)3 solutions over wide concentration ranges. New and improved Pitzer and Bromley parameters were calculated. It was established that the Ln(NO3)3 activity coefficients in solutions with ionic strength up to 12 mol kg-1 can be estimated by both the Pitzer and the single-parameter Bromley models, although the latter provides more accurate prediction, particularly in the lower ionic strength regime (up to 6 mol kg-1). On the other hand, for concentrated solutions, the extended three-parameter Bromley model can be employed to predict the Ln(NO3)3 activity coefficients with remarkable accuracy. The accuracy of the extended Bromley model in predicting the activity coefficients was greater than ~95% and ~90% for all solutions with ionic strength up to 12 mol kg-1 and 20 mol kg-1, respectively. This is the first time that the activity coefficients for concentrated lanthanide solutions have been predicted with such remarkable accuracy.
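The single-parameter Bromley model mentioned above estimates the mean molal activity coefficient from one fitted parameter B per electrolyte, added to a Debye-Hückel term. A sketch for a 3:1 salt such as Ln(NO3)3 follows; the Debye-Hückel slope 0.511 applies at 25 °C, and the B value passed in would come from fits such as those reported in this work (the value used below is purely illustrative):

```python
import math

A_GAMMA = 0.511  # Debye-Hückel slope at 25 °C, (kg/mol)^0.5

def bromley_log10_gamma(ionic_strength, B, z_plus=3, z_minus=-1):
    """Single-parameter Bromley estimate of log10 of the mean molal
    activity coefficient for an electrolyte with charges z_plus, z_minus."""
    zz = abs(z_plus * z_minus)
    I = ionic_strength  # mol/kg
    dh = -A_GAMMA * zz * math.sqrt(I) / (1.0 + math.sqrt(I))
    ext = (0.06 + 0.6 * B) * zz * I / (1.0 + 1.5 * I / zz) ** 2
    return dh + ext + B * I

# With an illustrative B = 0.05, the Debye-Hückel term dominates at
# moderate ionic strength, so log10(gamma) is negative.
print(bromley_log10_gamma(1.0, 0.05))
```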
Barton, Zachary J; Rodríguez-López, Joaquín
2017-03-07
We report a method of precisely positioning a Hg-based ultramicroelectrode (UME) for scanning electrochemical microscopy (SECM) investigations of any substrate. Hg-based probes are capable of performing amalgamation reactions with metal cations, which avoid unwanted side reactions and positive feedback mechanisms that can prove problematic for traditional probe positioning methods. However, prolonged collection of ions eventually leads to saturation of the amalgam accompanied by irreversible loss of Hg. In order to obtain negative feedback positioning control without risking damage to the SECM probe, we implement cyclic voltammetry probe approach surfaces (CV-PASs), consisting of CVs performed between incremental motor movements. The amalgamation current, peak stripping current, and integrated stripping charge extracted from a shared CV-PAS give three distinct probe approach curves (CV-PACs), which can be used to determine the tip-substrate gap to within 1% of the probe radius. Using finite element simulations, we establish a new protocol for fitting any CV-PAC and demonstrate its validity with experimental results for sodium and potassium ions in propylene carbonate by obtaining over 3 orders of magnitude greater accuracy and more than 20-fold greater precision than existing methods. Considering the timescales of diffusion and amalgam saturation, we also present limiting conditions for obtaining and fitting CV-PAC data. The ion-specific signals isolated in CV-PACs allow precise and accurate positioning of Hg-based SECM probes over any sample and enable the deployment of CV-PAS SECM as an analytical tool for traditionally challenging conditions.
Salmen, F S; de Oliveira, T F M; Gabrielli, M A C; Pereira Filho, V A; Real Gabrielli, M F
2018-06-01
The aim of this study was to evaluate the precision of bimaxillary surgery performed to correct vertical maxillary excess, when the procedure is sequenced with mandibular surgery first or maxillary surgery first. Thirty-two patients, divided into two groups, were included in this retrospective study. Group 1 comprised patients who received bimaxillary surgery following the classical sequence with repositioning of the maxilla first. Patients in group 2 received bimaxillary surgery, but the mandible was operated on first. The precision of the maxillomandibular repositioning was determined by comparison of the digital prediction and postoperative tracings superimposed on the cranial base. The data were tabulated and analyzed statistically. In this sample, both surgical sequences provided adequate clinical accuracy. The classical sequence, repositioning the maxilla first, resulted in greater accuracy for A-point and the upper incisor edge vertical position. Repositioning the mandible first allowed greater precision in the vertical position of pogonion. In conclusion, although both surgical sequences may be used, repositioning the mandible first will result in greater imprecision in relation to the predictive tracing than repositioning the maxilla first. The classical sequence resulted in greater accuracy in the vertical position of the maxilla, which is key for aesthetics. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Playing vs. nonplaying aerobic training in tennis: physiological and performance outcomes.
Pialoux, Vincent; Genevois, Cyril; Capoen, Arnaud; Forbes, Scott C; Thomas, Jordan; Rogowski, Isabelle
2015-01-01
This study compared the effects of playing and nonplaying high intensity intermittent training (HIIT) on physiological demands and tennis stroke performance in young tennis players. Eleven competitive male players (13.4 ± 1.3 years) completed both a playing and nonplaying HIIT session of equal distance, in random order. During each HIIT session, heart rate (HR), blood lactate, and ratings of perceived exertion (RPE) were monitored. Before and after each HIIT session, the velocity and accuracy of the serve, and forehand and backhand strokes were evaluated. The results demonstrated that both HIIT sessions achieved an average HR greater than 90% HRmax. The physiological demands (average HR) were greater during the playing session compared to the nonplaying session, despite similar lactate concentrations and a lower RPE. The results also indicate a reduction in shot velocity after both HIIT sessions; however, the playing HIIT session had a more deleterious effect on stroke accuracy. These findings suggest that 1) both HIIT sessions may be sufficient to develop maximal aerobic power, 2) playing HIIT sessions provide a greater physiological demand with a lower RPE, and 3) playing HIIT has a greater deleterious effect on stroke performance, and in particular on the accuracy component of the ground stroke performance, and should be incorporated appropriately into a periodization program in young male tennis players.
Juodzbaliene, Vilma; Darbutas, Tomas; Skurvydas, Albertas
2016-01-01
The aim of the study was to determine the effect of different muscle lengths and visual feedback information (VFI) on the accuracy of isometric contraction of elbow flexors in men after an ischemic stroke (IS). Materials and Methods. Maximum voluntary muscle contraction force (MVMCF) and an accurate determinate muscle force (20% of MVMCF) developed during an isometric contraction of elbow flexors at 90° and 60° of elbow flexion were measured by an isokinetic dynamometer in healthy subjects (MH, n = 20) and subjects after an IS during their postrehabilitation period (MS, n = 20). Results. To evaluate the accuracy of the isometric contraction of the elbow flexors, absolute errors were calculated. The absolute errors provided information about the difference between the determinate and achieved muscle force. Conclusions. Both MH and MS subjects tended to make greater absolute errors when generating the determinate force at the greater elbow flexor length, despite the presence of VFI. Absolute errors also increased in both groups at the greater elbow flexor length without VFI. MS subjects made greater absolute errors generating the determinate force without VFI, in comparison with MH subjects, at the shorter elbow flexor length. PMID:27042670
Williams, DeWayne P; Thayer, Julian F; Koenig, Julian
2016-12-01
Intraindividual reaction time variability (IIV), defined as the variability in trial-to-trial response times, is thought to serve as an index of central nervous system function. As such, greater IIV reflects both poorer executive brain function and cognitive control, in addition to lapses in attention. Resting-state vagally mediated heart rate variability (vmHRV), a psychophysiological index of self-regulatory abilities, has been linked with executive brain function and cognitive control such that those with greater resting-state vmHRV often perform better on cognitive tasks. However, research has yet to investigate the direct relationship between resting vmHRV and task IIV. The present study sought to examine this relationship in a sample of 104 young and healthy participants who first completed a 5-min resting-baseline period during which resting-state vmHRV was assessed. Participants then completed an attentional (target detection) task, where reaction time, accuracy, and trial-to-trial IIV were obtained. Results showed resting vmHRV to be significantly related to IIV, such that lower resting vmHRV predicted higher IIV on the task, even when controlling for several covariates (including mean reaction time and accuracy). Overall, our results provide further evidence for the link between resting vmHRV and cognitive control, and extend these notions to the domain of lapses in attention, as indexed by IIV. Implications and recommendations for future research on resting vmHRV and cognition are discussed. © 2016 Society for Psychophysiological Research.
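The two quantities related in this abstract can each be computed simply: IIV is typically the standard deviation of a participant's trial-to-trial reaction times, and vmHRV is often indexed in the time domain by RMSSD over interbeat intervals. A minimal sketch; the exact indices used in the study may differ (e.g., coefficient of variation for IIV, or frequency-domain measures for vmHRV):

```python
import statistics

def iiv(reaction_times_ms):
    """Intraindividual variability: SD of trial-to-trial reaction times."""
    return statistics.stdev(reaction_times_ms)

def rmssd(interbeat_intervals_ms):
    """RMSSD, a common time-domain index of vagally mediated HRV: the root
    mean square of successive differences between interbeat intervals."""
    diffs = [b - a for a, b in zip(interbeat_intervals_ms, interbeat_intervals_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5
```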
NASA Astrophysics Data System (ADS)
Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi
2015-02-01
Recently, small satellites have been employed in various satellite missions such as astronomical observation and remote sensing. During these missions, the attitudes of small satellites should be stabilized to a high accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites should estimate their attitude rate under strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate their angular rate from blurred star images by employing a mission telescope, in order to achieve precise attitude stabilization. In this method, the angular velocity is estimated by assessing the quality of a star image, based on how blurred it appears to be. Because the proposed method utilizes existing mission devices, a satellite does not require additional precise rate sensors, which makes it easier to achieve precise stabilization given the strict constraints on small satellites. The research studied the relationship between estimation accuracy and the parameters used, achieving an attitude rate estimation with a precision better than 1 × 10^-6 rad/s. The method can be applied to all attitude sensors that use optics systems, such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate the problems that are expected to arise with real small satellites by performing numerical simulations.
Evaluation of factors affecting CGMS calibration.
Buckingham, Bruce A; Kollman, Craig; Beck, Roy; Kalajian, Andrea; Fiallo-Scharer, Rosanna; Tansey, Michael J; Fox, Larry A; Wilson, Darrell M; Weinzimer, Stuart A; Ruedy, Katrina J; Tamborlane, William V
2006-06-01
The optimal number/timing of calibrations entered into the CGMS (Medtronic MiniMed, Northridge, CA) continuous glucose monitoring system have not been previously described. Fifty subjects with Type 1 diabetes mellitus (10-18 years old) were hospitalized in a clinical research center for approximately 24 h on two separate days. CGMS and OneTouch Ultra meter (LifeScan, Milpitas, CA) data were obtained. The CGMS was retrospectively recalibrated using the Ultra data varying the number and timing of calibrations. Resulting CGMS values were compared against laboratory reference values. There was a modest improvement in accuracy with increasing number of calibrations. The median relative absolute deviation (RAD) was 14%, 15%, 13%, and 13% when using three, four, five, and seven calibration values, respectively (P < 0.001). Corresponding percentages of CGMS-reference pairs meeting the International Organisation for Standardisation criteria were 66%, 67%, 71%, and 72% (P < 0.001). Nighttime accuracy improved when daytime calibrations (pre-lunch and pre-dinner) were removed leaving only two calibrations at 9 p.m. and 6 a.m. (median difference, -2 vs. -9 mg/dL, P < 0.001; median RAD, 12% vs. 15%, P = 0.001). Accuracy was better on visits where the average absolute rate of glucose change at the times of calibration was lower. On visits with average absolute rates <0.5, 0.5 to <1.0, 1.0 to <1.5, and >or=1.5 mg/dL/min, median RAD values were 13% versus 14% versus 17% versus 19%, respectively (P = 0.05). Although accuracy is slightly improved with more calibrations, the timing of the calibrations appears more important. Modifying the algorithm to put less weight on daytime calibrations for nighttime values and calibrating during times of relative glucose stability may have greater impact on accuracy.
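The relative absolute deviation (RAD) reported above is the absolute sensor error divided by the reference value, which the study summarizes as a median percentage over all CGMS-reference pairs. A minimal sketch (the example values are illustrative, not data from the study):

```python
import statistics

def median_rad_percent(cgms_values, reference_values):
    """Median relative absolute deviation of sensor vs. reference glucose,
    expressed as a percentage of the reference value."""
    rads = [abs(c - r) / r * 100.0
            for c, r in zip(cgms_values, reference_values)]
    return statistics.median(rads)

# Illustrative pairs (mg/dL): per-pair RADs of 10%, 10%, and 0%
# give a median RAD of 10%.
print(median_rad_percent([90, 110, 130], [100, 100, 130]))
```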
10 CFR 72.11 - Completeness and accuracy of information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 72.11 Section 72.11 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR THE INDEPENDENT STORAGE OF SPENT NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C...
10 CFR 72.11 - Completeness and accuracy of information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 72.11 Section 72.11 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR THE INDEPENDENT STORAGE OF SPENT NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C...
10 CFR 72.11 - Completeness and accuracy of information.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 72.11 Section 72.11 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR THE INDEPENDENT STORAGE OF SPENT NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C...
10 CFR 72.11 - Completeness and accuracy of information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 72.11 Section 72.11 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR THE INDEPENDENT STORAGE OF SPENT NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C...
10 CFR 72.11 - Completeness and accuracy of information.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 72.11 Section 72.11 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR THE INDEPENDENT STORAGE OF SPENT NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C...
NASA Technical Reports Server (NTRS)
Ackleson, S. G.; Klemas, V.
1987-01-01
Landsat MSS and TM imagery, obtained simultaneously over Guinea Marsh, VA, was analyzed and compared for its ability to detect submerged aquatic vegetation (SAV). An unsupervised clustering algorithm was applied to each image, where the input classification parameters are defined as functions of apparent sensor noise. Class confidence and accuracy were computed for all water areas by comparing the classified images, pixel by pixel, to rasterized SAV distributions derived from color aerial photography. To illustrate the effect of water depth on classification error, areas of depth greater than 1.9 m were masked, and class confidence and accuracy were recalculated. A single-scattering radiative-transfer model is used to illustrate how percent canopy cover and water depth affect the volume reflectance of a water column containing SAV. For a submerged canopy that is morphologically and optically similar to the Zostera marina inhabiting Lower Chesapeake Bay, dense canopies may be isolated by masking optically deep water. For less dense canopies, the effect of increasing water depth is to increase the apparent percent crown cover, which may result in classification error.
Research on effect of rough surface on FMCW laser radar range accuracy
NASA Astrophysics Data System (ADS)
Tao, Huirong
2018-03-01
The large-scale measurement system for non-cooperative targets based on frequency-modulated continuous-wave (FMCW) laser detection and ranging technology has broad application prospects, because it allows automated measurement without cooperative targets. However, the complexity and diversity of the characteristics of the measured surface directly affect the measurement accuracy. First, a theoretical analysis of the range accuracy of an FMCW laser radar was carried out, and the relationship between surface reflectivity and accuracy was obtained. Then, to verify the effect of surface reflectance on ranging accuracy, a standard tool ball and three standard roughness samples were measured at distances of 7 m to 24 m, and the uncertainty for each target was obtained. The results show that measurement accuracy increases as the surface reflectivity becomes larger, and good agreement was obtained between the theoretical analysis and measurements from rough surfaces. Moreover, when the laser spot diameter is smaller than the surface correlation length, a multi-point averaged measurement can reduce the measurement uncertainty. The experimental results show that this method is feasible.
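For a linear-sweep FMCW system, range follows from the measured beat frequency via R = c·f_b·T/(2B), where B is the sweep bandwidth and T the sweep period; surface reflectivity and roughness affect accuracy through the SNR of the beat-frequency estimate. A minimal sketch of the range relation (the sweep parameters below are illustrative, not those of the instrument in the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def fmcw_range(beat_freq_hz, sweep_bandwidth_hz, sweep_period_s):
    """Target range from the beat frequency of a linear FMCW sweep:
    R = c * f_b * T / (2 * B)."""
    return C * beat_freq_hz * sweep_period_s / (2.0 * sweep_bandwidth_hz)

# Illustrative sweep: B = 1 GHz, T = 1 ms; a 100 kHz beat frequency
# corresponds to a range of roughly 15 m.
print(fmcw_range(1e5, 1e9, 1e-3))
```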
Children's Memories for Painful Cancer Treatment Procedures: Implications for Distress.
ERIC Educational Resources Information Center
Chen, Edith; Zeltzer, Lonnie K.; Craske, Michelle G.; Katz, Ernest R.
2000-01-01
Examined memory of 3- to 18-year-olds with leukemia regarding lumbar punctures (LP). Found that children displayed considerable accuracy for event details, with accuracy increasing with age. Use of Versed (anxiolytic medication described as a "memory blocker") was not related to recall. Higher distress predicted greater exaggerations in…
Modifications to the accuracy assessment analysis routine MLTCRP to produce an output file
NASA Technical Reports Server (NTRS)
Carnes, J. G.
1978-01-01
Modifications are described that were made to the analysis program MLTCRP in the accuracy assessment software system to produce a disk output file. The output files produced by this modified program are used to aggregate data for regions greater than a single segment.
Effects of cognitive training on change in accuracy in inductive reasoning ability.
Boron, Julie Blaskewicz; Turiano, Nicholas A; Willis, Sherry L; Schaie, K Warner
2007-05-01
We investigated cognitive training effects on accuracy and number of items attempted in inductive reasoning performance in a sample of 335 older participants (M = 72.78 years) from the Seattle Longitudinal Study. We assessed the impact of individual characteristics, including chronic disease. The reasoning training group showed significantly greater gain in accuracy and number of attempted items than did the comparison group; gain was primarily due to enhanced accuracy. Reasoning training effects involved a complex interaction of gender, prior cognitive status, and chronic disease. Women with prior decline on reasoning but no heart disease showed the greatest accuracy increase. In addition, stable reasoning-trained women with heart disease demonstrated significant accuracy gain. Comorbidity was associated with less change in accuracy. The results support the effectiveness of cognitive training on improving the accuracy of reasoning performance.
Normative Data on Audiovisual Speech Integration Using Sentence Recognition and Capacity Measures
Altieri, Nicholas; Hudock, Daniel
2016-01-01
Objective: The ability to use visual speech cues and integrate them with auditory information is important, especially in noisy environments and for hearing-impaired (HI) listeners. Providing data on measures of integration skills that encompass accuracy and processing speed will benefit researchers and clinicians. Design: The study consisted of two experiments: first, accuracy scores were obtained using CUNY sentences, and capacity measures that assessed reaction-time distributions were obtained from a monosyllabic word recognition task. Study Sample: We report data on two measures of integration obtained from a sample comprised of 86 young and middle-age adult listeners. Results: To summarize our results, capacity showed a positive correlation with accuracy measures of audiovisual benefit obtained from sentence recognition. More relevant, factor analysis indicated that a single-factor model captured audiovisual speech integration better than models containing more factors. Capacity exhibited strong loadings on the factor, while the accuracy-based measures from sentence recognition exhibited weaker loadings. Conclusions: Results suggest that a listener's integration skills may be assessed optimally using a measure that incorporates both processing speed and accuracy. PMID:26853446
Accurate formula for gaseous transmittance in the infrared.
Gibson, G A; Pierluissi, J H
1971-07-01
By considering the infrared transmittance model of Zachor as the equation for an elliptic cone, a quadratic generalization is proposed that yields significantly greater computational accuracy. The strong-band parameters are obtained by iterative nonlinear curve-fitting methods using a digital computer. The remaining parameters are determined with a linear least-squares technique and a weighting function that yields better results than the one adopted by Zachor. The model is applied to CO₂ over intervals of 50 cm⁻¹ between 550 cm⁻¹ and 9150 cm⁻¹ and to water vapor over similar intervals between 1050 cm⁻¹ and 9950 cm⁻¹, with mean rms deviations from the original data of 2.30 × 10⁻³ and 1.83 × 10⁻³, respectively.
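The two-stage fitting strategy described here (iterative nonlinear fitting for the strong-band parameters, then weighted linear least squares for the rest) can be illustrated with the linear stage alone. The model form, weighting function, and data below are invented for illustration and are not Zachor's actual parameterization:

```python
import numpy as np

# Hypothetical illustration of the weighted linear least-squares step used to
# fix the "remaining" parameters once strong-band parameters are held fixed.
rng = np.random.default_rng(0)
wavenumber = np.linspace(550.0, 9150.0, 200)            # cm^-1 grid
true_params = np.array([0.8, -2.0e-5])                  # assumed coefficients
X = np.column_stack([np.ones_like(wavenumber), wavenumber])
transmittance = X @ true_params + rng.normal(0, 1e-3, wavenumber.size)

w = 1.0 / (1.0 + wavenumber / 5000.0)                   # assumed weighting function
Xw = X * w[:, None]                                     # weight each equation
yw = transmittance * w
fit, *_ = np.linalg.lstsq(Xw, yw, rcond=None)

rms = np.sqrt(np.mean((X @ fit - transmittance) ** 2))  # rms deviation from data
```

The rms deviation plays the same role as the 10⁻³-level deviations quoted for the CO₂ and water vapor fits.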
NASA Technical Reports Server (NTRS)
Trubert, M.; Salama, M.
1979-01-01
Unlike an earlier shock spectra approach, the generalization presented here accounts for the elastic interaction between the spacecraft and launch vehicle, so that accurate bounds on the spacecraft response and structural loads can be obtained. In addition, the modal response from a previous launch vehicle transient analysis (with or without a dummy spacecraft) is exploited to define a modal impulse as a simple idealization of the actual forcing function. The idealized modal forcing function is then used to derive explicit expressions for an estimate of the bound on the spacecraft structural response and forces. Greater accuracy is achieved with the present method than with the earlier shock spectra approach, while saving much computational effort over the transient analysis.
Analysis of ZDDP Content and Thermal Decomposition in Motor Oils Using NAA and NMR
NASA Astrophysics Data System (ADS)
Ferguson, S.; Johnson, J.; Gonzales, D.; Hobbs, C.; Allen, C.; Williams, S.
Zinc dialkyldithiophosphates (ZDDPs) are one of the most common anti-wear additives present in commercially available motor oils. The ZDDP concentrations of motor oils are most commonly determined using inductively coupled plasma atomic emission spectroscopy (ICP-AES). As part of an undergraduate research project, we have determined the Zn concentrations of eight commercially available motor oils and one oil additive using neutron activation analysis (NAA), which has potential for greater accuracy and less sensitivity to matrix effects as compared to ICP-AES. The 31P nuclear magnetic resonance (31P-NMR) spectra were also obtained for several oil additive samples that had been heated to various temperatures in order to study the thermal decomposition of ZDDPs.
Water vapor - The wet blanket of microwave interferometry
NASA Technical Reports Server (NTRS)
Resch, G. M.
1980-01-01
The various techniques that utilize microwave interferometry could be employed to determine distances of several thousand kilometers with an accuracy of 1 cm or 2 cm. Such measurements would be useful for obtaining new knowledge of earth dynamics, greater insight into fundamental astronomical constants, and the ability to accurately navigate a spacecraft in interplanetary flight. There is, however, a basic problem, related to the presence of tropospheric water vapor, which has to be overcome before such measurements can be realized. Differing amounts of water vapor over the interferometer stations cause errors in the differential time of arrival, which is the principal observable quantity. Approaches for overcoming this problem are considered, taking into account requirements for water vapor calibration to support interferometric techniques.
The speed of information processing of 9- to 13-year-old intellectually gifted children.
Duan, Xiaoju; Dan, Zhou; Shi, Jiannong
2013-02-01
In general, intellectually gifted children perform better than non-gifted children across many domains. The present validation study investigated the speed with which intellectually gifted children process information. 184 children, ages 9 to 13 years old (91 gifted, M age = 10.9 yr., SD = 1.8; 93 non-gifted children, M age = 11.0 yr., SD = 1.7) were tested individually on three information processing tasks: an inspection time task, a choice reaction time task, and an abstract matching task. Intellectually gifted children outperformed their non-gifted peers on all three tasks, obtaining shorter reaction times with greater accuracy. The findings supported the validity of information processing speed in identifying intellectually gifted children.
On-line algorithms for forecasting hourly loads of an electric utility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vemuri, S.; Huang, W.L.; Nelson, D.J.
A method that lends itself to on-line forecasting of hourly electric loads is presented, and the results of its use are compared to models developed using the Box-Jenkins method. The method consists of processing the historical hourly loads with a sequential least-squares estimator to identify a finite-order autoregressive model which, in turn, is used to obtain a parsimonious autoregressive-moving average model. The method presented has several advantages in comparison with the Box-Jenkins method, including much less human intervention, improved model identification, and better results. The method is also more robust in that greater confidence can be placed in the accuracy of models based upon the various measures available at the identification stage.
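As a rough sketch of the identification stage, a finite-order autoregressive model of hourly loads can be fit by least squares on lagged values. The AR order, the synthetic load series, and the plain batch least-squares solver below are assumptions for illustration, not the paper's sequential estimator:

```python
import numpy as np

# Fit an AR(p) model to synthetic hourly loads by ordinary least squares.
rng = np.random.default_rng(1)
hours = np.arange(24 * 60)                      # 60 days of hourly samples
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)

p = 26                                          # assumed AR order (daily cycle + margin)
# Lagged design matrix: row t holds load[t-1], ..., load[t-p]
X = np.column_stack([load[p - k - 1 : len(load) - k - 1] for k in range(p)])
y = load[p:]
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # intercept + p AR coefficients

one_step = A @ coef                             # in-sample one-step-ahead forecast
rmse = np.sqrt(np.mean((one_step - y) ** 2))    # should approach the noise level
```

A sequential (recursive) estimator would update `coef` one observation at a time, which is what makes the approach suitable for on-line use.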
NASA Astrophysics Data System (ADS)
Hoang, Nguyen Tien; Koike, Katsuaki
2018-03-01
Hyperspectral remote sensing generally provides more detailed spectral information and greater accuracy than multispectral remote sensing for identification of surface materials. However, there have been no hyperspectral imagers that cover the entire Earth surface. This lack points to a need for producing pseudo-hyperspectral imagery by hyperspectral transformation from multispectral images. We have recently developed such a method, a Pseudo-Hyperspectral Image Transformation Algorithm (PHITA), which transforms Landsat 7 ETM+ images into pseudo-EO-1 Hyperion images using multiple linear regression models of ETM+ and Hyperion band reflectance data. This study extends the PHITA to transform TM, OLI, and EO-1 ALI sensor images into pseudo-Hyperion images. Choosing a part of the Fish Lake Valley geothermal prospect area in the western United States for study, we confirmed that the pseudo-Hyperion images produced from the TM, ETM+, OLI, and ALI images by PHITA are applicable to mineral mapping. Using a reference map as the truth, three main minerals (muscovite and chlorite mixture, opal, and calcite) were identified with high overall accuracies from the pseudo-images (> 95% and > 42% for excluding and including unclassified pixels, respectively). The highest accuracy was obtained from the ALI image, followed by the ETM+, TM, and OLI images in descending order. The TM, OLI, and ALI images can be alternatives to ETM+ imagery for the hyperspectral transformation that aids the production of pseudo-Hyperion images for areas without high-quality ETM+ images because of scan line corrector failure, and for long-term global monitoring of land surfaces.
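The core of such a transformation is one multiple linear regression per target band. A minimal sketch, with synthetic reflectances and invented band counts standing in for the multispectral/Hyperion pairs (not the actual PHITA implementation):

```python
import numpy as np

# Each pseudo-hyperspectral band is a multiple linear regression on the
# multispectral band reflectances; all per-band fits are solved jointly here.
rng = np.random.default_rng(2)
n_pix, n_ms, n_hyp = 500, 6, 50               # pixels, multispectral bands, target bands
ms = rng.uniform(0, 1, (n_pix, n_ms))          # multispectral reflectances
true_B = rng.normal(0, 1, (n_ms + 1, n_hyp))   # assumed true regression coefficients
A = np.column_stack([np.ones(n_pix), ms])
hyp = A @ true_B + rng.normal(0, 0.01, (n_pix, n_hyp))  # "hyperspectral" training data

B_hat, *_ = np.linalg.lstsq(A, hyp, rcond=None)  # one least-squares fit per band
pseudo_hyp = A @ B_hat                           # pseudo-hyperspectral reconstruction
err = np.abs(pseudo_hyp - hyp).mean()
```

In practice the coefficients would be fit on coincident multispectral/Hyperion scenes and then applied to scenes where only the multispectral data exist.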
Chang, Jiun-Yao; Chen, Wen-Cheng; Huang, Ta-Ko; Wang, Jen-Chyan; Fu, Po-Sung; Chen, Jeng-Huey; Hung, Chun-Cheng
2012-09-01
As increasing attention is paid to dental aesthetics, tooth color matching has become an important part of daily dental practice. The aim of this study was to develop a method to enhance the accuracy of a tooth color matching machine. The Munsell color tabs in the range of natural human teeth were measured using a tooth color measuring machine (ShadeEye NCC). The machine's accuracy was analyzed using an analysis of variance test and a Tukey post-hoc test. When matching the Munsell color tabs with the ShadeEye NCC colorimeter, settings of Chroma greater than 6 and Value less than 4 showed unacceptable clinical results. When the CIELAB mode was used, the a* value (which represents the red-green axis in the Commission Internationale de l'Eclairage color space) made no significant difference (p=0.84), the L* value (which represents the lightness) resulted in a negative correlation, and the b* value (which represents the yellow-blue axis) resulted in a positive correlation with ΔE. When the Munsell color tabs and the Vitapan were measured in the same mode and compared, the inaccuracies showed that the Vitapan was not a proper tool for evaluating the stability and accuracy of the ShadeEye NCC. Knowing the limitations of the machine, we evaluated the data using the Munsell color tabs; shades beyond the acceptable range should be reevaluated using a visual shade matching method, or, if measured by another machine, this shade range should be covered to obtain more accurate results. Copyright © 2012. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Galavís, M. E.; Mendoza, C.; Zeippen, C. J.
1998-12-01
Since Burgess et al. (1997) have recently questioned the accuracy of the effective collision strength calculated in the IRON Project for the electron impact excitation of the 3s²3p⁴ ¹D - ¹S quadrupole transition in Ar III, an extended R-matrix calculation has been performed for this transition. The original 24-state target model was maintained, but the energy regime was increased to 100 Ryd. It is shown that in order to ensure convergence of the partial wave expansion at such energies, it is necessary to take into account partial collision strengths up to L=30 and to "top up" with a geometric series procedure. By comparing effective collision strengths, it is found that the differences from the original calculation are not greater than 25% around the upper end of the common temperature range and are much smaller than 20% over most of it. This is consistent with the accuracy rating (20%) previously assigned to transitions in this low ionisation system. Also, the present high-temperature limit agrees fairly well (15%) with the Coulomb-Born limit estimated by Burgess et al., thus confirming our previous accuracy rating. It appears that Burgess et al., in their data assessment, have overextended the low-energy behaviour of our reduced effective collision strength to obtain an extrapolated high-temperature limit that appeared to be in error by a factor of 2.
Dunham, Kylee; Grand, James B.
2016-01-01
We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state-space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
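A toy SISR filter for a scalar population state can make the estimation loop concrete. The growth model, Poisson observation process, and prior below are assumptions chosen only to illustrate the algorithm, not the study's actual state-space model, and the kernel-smoothing step for parameter estimation is omitted:

```python
import numpy as np

# Sequential importance sampling/resampling for a scalar population state.
rng = np.random.default_rng(3)
T, n_part = 25, 2000
true_r = 1.02                                   # assumed annual growth rate
N = np.empty(T); N[0] = 100.0
for t in range(1, T):
    N[t] = N[t - 1] * true_r * np.exp(rng.normal(0, 0.02))  # process noise
counts = rng.poisson(N)                         # observation process

particles = rng.uniform(50, 150, n_part)        # prior on initial population size
est = []
for t in range(T):
    # weight particles by the Poisson likelihood of the observed count
    logw = counts[t] * np.log(particles) - particles
    w = np.exp(logw - logw.max()); w /= w.sum()
    est.append(np.sum(w * particles))           # posterior mean of N_t
    idx = rng.choice(n_part, n_part, p=w)       # resample
    particles = particles[idx] * true_r * np.exp(rng.normal(0, 0.02, n_part))

rel_err = abs(est[-1] - N[-1]) / N[-1]
```

Jointly estimating the growth rate along with the state is where particle depletion bites and where kernel smoothing of the parameter particles would come in.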
Algorithms For Integrating Nonlinear Differential Equations
NASA Technical Reports Server (NTRS)
Freed, A. D.; Walker, K. P.
1994-01-01
Improved algorithms developed for use in numerical integration of systems of nonhomogeneous, nonlinear, first-order, ordinary differential equations. In comparison with prior integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. Accuracies attainable demonstrated by applying them to systems of nonlinear, first-order, differential equations that arise in study of viscoplastic behavior, spread of acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.
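The stability payoff of an asymptotically correct scheme can be seen on a linear model problem: exponential Euler uses the exact decay factor and stays bounded at step sizes where explicit Euler diverges. This is an illustrative sketch of the principle, not the algorithms developed in the report:

```python
import math

# Model problem y' = -lam*(y - 1) with steady state y = 1.  At lam*h = 5,
# explicit Euler is far outside its stability limit (lam*h < 2) and blows up;
# exponential Euler, which is exact for this linear problem, converges.
lam, h, steps = 50.0, 0.1, 100
y_euler, y_exp = 0.0, 0.0
for _ in range(steps):
    y_euler = y_euler + h * (-lam) * (y_euler - 1.0)     # explicit Euler step
    y_exp = 1.0 + (y_exp - 1.0) * math.exp(-lam * h)     # exponential Euler step
```

The same idea underlies integrators for stiff viscoplasticity models: build the known fast decay into the update so large steps remain stable.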
Color, dispersion, and exposure time in performance on rotated figure recognition.
Huang, Kuo-Chen; Lee, Shin-Tsann; Chang, Chun-Chieh
2008-10-01
This study investigated the effects of dispersion, color, and rotation of figures on recognition under varied exposure times. A total of 30 women and 15 men, Taiwanese college students ages 18 to 20 years (M = 19.1, SD = 1.2), participated. Subjects were to recognize a target figure and respond with its location in each stimulus by pressing a mouse button. Analysis showed that the effect of rotation on accuracy was significant. Accuracy for rotation of 180 degrees was greater than that for 60 degrees and 300 degrees. Exposure time also significantly influenced accuracy. Accuracy was greater for 2 and 3 sec. than for 1 sec. No significant effects on accuracy were associated with dispersion and color, and neither had any interactive effect on accuracy. Dispersion significantly affected response time, as response times for dispersion under the 0.4 and 0.5 conditions were shorter than those under the 0.2 and 0.3 conditions. Significantly less response time was needed for rotation of 180 degrees than for the 60 degrees and 300 degrees conditions. Response time was longer for red figures than for blue, green, and yellow figures. No significant effect on response time was associated with duration of exposure. Two interactive two-way effects were found: dispersion x color of figure and dispersion x rotation. Implications for figure or icon design are discussed.
Li, Bingsheng; Gan, Aihua; Chen, Xiaolong; Wang, Xinying; He, Weifeng; Zhang, Xiaohui; Huang, Renxiang; Zhou, Shuzhu; Song, Xiaoxiao; Xu, Angao
2016-01-01
DNA hypermethylation in blood is becoming an attractive candidate marker for colorectal cancer (CRC) detection. To assess the diagnostic accuracy of blood hypermethylation markers for CRC in different clinical settings, we conducted a meta-analysis of published reports. Of 485 publications obtained in the initial literature search, 39 studies were included in the meta-analysis. Hypermethylation markers in peripheral blood showed a high degree of accuracy for the detection of CRC. The summary sensitivity was 0.62 [95% confidence interval (CI), 0.56–0.67] and specificity was 0.91 (95% CI, 0.89–0.93). Subgroup analysis showed significantly greater sensitivity for the methylated Septin 9 gene (SEPT9) subgroup (0.75; 95% CI, 0.67–0.81) than for the non-methylated SEPT9 subgroup (0.58; 95% CI, 0.52–0.64). Sensitivity and specificity were not affected significantly by target gene number, CRC staging, study region, or methylation analysis method. These findings show that hypermethylation markers in blood are highly sensitive and specific for CRC detection, with methylated SEPT9 being particularly robust. The diagnostic performance of hypermethylation markers, which have varied across different studies, can be improved by marker optimization. Future research should examine variation in diagnostic accuracy according to non-neoplastic factors. PMID:27158984
Acceptability and feasibility of a virtual counselor (VICKY) to collect family health histories.
Wang, Catharine; Bickmore, Timothy; Bowen, Deborah J; Norkunas, Tricia; Campion, MaryAnn; Cabral, Howard; Winter, Michael; Paasche-Orlow, Michael
2015-10-01
To overcome literacy-related barriers in the collection of electronic family health histories, we developed an animated Virtual Counselor for Knowing your Family History, or VICKY. This study examined the acceptability and accuracy of using VICKY to collect family histories from underserved patients as compared with My Family Health Portrait (MFHP). Participants were recruited from a patient registry at a safety net hospital and randomized to use either VICKY or MFHP. Accuracy was determined by comparing tool-collected histories with those obtained by a genetic counselor. A total of 70 participants completed this study. Participants rated VICKY as easy to use (91%) and easy to follow (92%), would recommend VICKY to others (83%), and were highly satisfied (77%). VICKY identified 86% of first-degree relatives and 42% of second-degree relatives; combined accuracy was 55%. As compared with MFHP, VICKY identified a greater number of health conditions overall (49% with VICKY vs. 31% with MFHP; incidence rate ratio (IRR): 1.59; 95% confidence interval (95% CI): 1.13-2.25; P = 0.008), in particular, hypertension (47 vs. 15%; IRR: 3.18; 95% CI: 1.66-6.10; P = 0.001) and type 2 diabetes (54 vs. 22%; IRR: 2.47; 95% CI: 1.33-4.60; P = 0.004). These results demonstrate that technological support for documenting family history risks can be highly accepted, feasible, and effective.
Self-refraction accuracy with adjustable spectacles among children in Ghana.
Ilechie, Alex Azuka; Abokyi, Samuel; Owusu-Ansah, Andrew; Boadi-Kusi, Samuel Bert; Denkyira, Andrew Kofi; Abraham, Carl Halladay
2015-04-01
To determine the accuracy of self-refraction (SR) in myopic teenagers, we compared visual and refractive outcomes of self-refracting spectacles (FocusSpecs) with those obtained using cycloplegic subjective refraction (CSR) as a gold standard. A total of 203 eligible schoolchildren (mean [±SD] age, 13.8 [±1.0] years; 59.1% were female) completed an examination consisting of SR with FocusSpecs adjustable spectacles, visual acuity with the logMAR (logarithm of the minimum angle of resolution) chart, cycloplegic retinoscopy, and CSR. Examiners were masked to the SR findings. Wilcoxon signed rank test and paired Student t test were used to compare measures across refraction methods (95% confidence intervals [CIs]). The mean (±SD) spherical equivalent refractive error measured by CSR and SR was -1.22 (±0.49) diopters (D) and -1.66 (±0.73) D, respectively, a statistically significant difference of -0.44 D (p < 0.001, t = 15.517). The greatest proportion of participants was correctable to visual acuity greater than or equal to 6/7.5 (logMAR 0.1) in the better eye by CSR (99.0%; 95% CI, 96.5 to 99.7%), followed by cycloplegic retinoscopy (94.1%; 95% CI, 90.0 to 96.6%) and SR (85.2%; 95% CI, 79.7 to 89.5%). These proportions differed significantly from each other (p < 0.001, Wilcoxon signed rank test). Myopic inaccuracy of greater than 0.50 D and greater than or equal to -1.00 D was present in 29 (15.3%) and 16 (8.4%) right eyes, respectively, with SR. In logistic regression models, failure to achieve visual acuity greater than or equal to 6/7.5 in right eyes with SR was significantly associated with age (odds ratio, 1.92; 95% CI, 1.12 to 3.28; p = 0.017) and spherical power (odds ratio, 0.017; 95% CI, 0.005 to 0.056; p < 0.001). Self-refraction offers acceptable visual and refractive results for young people in a rural setting in Ghana, although myopic inaccuracy in the more negative direction occurred in some children.
Navigation strategy and filter design for solar electric missions
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Hagar, H., Jr.
1972-01-01
Methods which have been proposed to improve the navigation accuracy for the low-thrust space vehicle include modifications to the standard Sequential- and Batch-type orbit determination procedures and the use of inertial measuring units (IMU) which measure directly the acceleration applied to the vehicle. The navigation accuracy obtained using one of the more promising modifications to the orbit determination procedures, dynamic model compensation (DMC), is compared with that of a combined IMU-Standard Orbit Determination algorithm. The unknown accelerations are approximated as both first-order and second-order Gauss-Markov processes. The comparison is based on numerical results obtained in a study of the navigation requirements of a numerically simulated 152-day low-thrust mission to the asteroid Eros. The results obtained in the simulation indicate that the DMC algorithm will yield a significant improvement over the navigation accuracies achieved with previous estimation algorithms. In addition, the DMC algorithms will yield better navigation accuracies than the IMU-Standard Orbit Determination algorithm, except for extremely precise IMU measurements, i.e., gyro platform alignment 0.01 deg and accelerometer signal-to-noise ratio 0.07. Unless these accuracies are achieved, the IMU navigation accuracies are generally unacceptable.
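A first-order Gauss-Markov process of the kind used here to approximate the unknown accelerations is simple to simulate in discrete time; the time constant and noise strength below are invented for illustration and are not mission values:

```python
import numpy as np

# Discrete first-order Gauss-Markov model: a_{k+1} = phi*a_k + w_k,
# phi = exp(-dt/tau).  The driving-noise variance q is chosen so that the
# stationary standard deviation of the process equals sigma.
rng = np.random.default_rng(4)
dt, tau, sigma = 60.0, 3600.0, 1e-7          # step (s), time constant (s), km/s^2
phi = np.exp(-dt / tau)
q = sigma**2 * (1 - phi**2)                  # stationary variance q/(1-phi^2) = sigma^2

n = 200_000
a = np.empty(n); a[0] = 0.0
w = rng.normal(0, np.sqrt(q), n - 1)
for k in range(n - 1):
    a[k + 1] = phi * a[k] + w[k]

stat_std = a[n // 10 :].std()                # sample std after burn-in, near sigma
```

In a DMC-style filter, `a` would be appended to the state vector and estimated along with position and velocity.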
NASA Technical Reports Server (NTRS)
Otterson, D. A.; Seng, G. T.
1985-01-01
A high-performance liquid chromatography (HPLC) method for estimating four aromatic classes in middistillate fuels is presented. Average refractive indices are used in a correlation to obtain the concentrations of each of the aromatic classes from HPLC data. The aromatic class concentrations can be obtained in about 15 min when the concentration of the aromatic group is known. Seven fuels with a wide range of compositions were used to test the method. Relative errors in the concentrations of the two major aromatic classes were not over 10 percent. Absolute errors for the minor classes were all less than 0.3 percent. The data show that errors in group-type analyses using sulfuric acid derived standards are greater for fuels containing high concentrations of polycyclic aromatics. Corrections are based on the change in refractive index of the aromatic fraction which can occur when sulfuric acid and the fuel react. These corrections improved both the precision and the accuracy of the group-type results.
The star 12 Persei and separated fringe packet binaries (SFPB)
NASA Astrophysics Data System (ADS)
Bagnuolo, William G., Jr.; ten Brummelaar, Theo A.; McAlister, H. A.; Gies, Douglas R.; Ridgway, Stephen T.
2006-06-01
We have obtained high resolution orbital data with the CHARA Array for the bright star 12 Persei, a resolved double-lined spectroscopic binary, an example of a Separated Fringe Packet Binary. We describe the data reduction process involved. By using a technique we have developed of 'side-lobe verniering', we can obtain an improved precision in separation of up to 25 micro-arcsec along a given baseline. For this object we find a semi-major axis 0.3 of Barlow, Scarfe, and Fekel (1998) [BSF], but with an increased inclination angle. The revised masses are therefore almost 6% greater than those of BSF. The overall accuracy in the masses is about 1.3%, now primarily limited by the spectroscopically determined radial velocities. The precision of the masses due to the interferometrically derived "visual" orbit alone is only about 0.2%. We expect that improved RVs and improved absolute calibration can bring down the mass errors to below 1%.
Quantification of Finger-Tapping Angle Based on Wearable Sensors
Djurić-Jovičić, Milica; Jovičić, Nenad S.; Roby-Brami, Agnes; Popović, Mirjana B.; Kostić, Vladimir S.; Djordjević, Antonije R.
2017-01-01
We propose a novel simple method for quantitative and qualitative finger-tapping assessment based on miniature inertial sensors (3D gyroscopes) placed on the thumb and index-finger. We propose a simplified description of the finger tapping by using a single angle, describing rotation around a dominant axis. The method was verified on twelve subjects, who performed various tapping tasks, mimicking impaired patterns. The obtained tapping angles were compared with results of a motion capture camera system, demonstrating excellent accuracy. The root-mean-square (RMS) error between the two sets of data is, on average, below 4°, and the intraclass correlation coefficient is, on average, greater than 0.972. Data obtained by the proposed method may be used together with scores from clinical tests to enable a better diagnostic. Along with hardware simplicity, this makes the proposed method a promising candidate for use in clinical practice. Furthermore, our definition of the tapping angle can be applied to all tapping assessment systems. PMID:28125051
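The core computation behind such a tapping angle (projecting 3D gyroscope rates onto a dominant rotation axis and integrating) can be sketched as follows. Extracting the axis as the first principal direction of the rate samples is an assumption about how a dominant axis might be obtained, not necessarily the authors' procedure:

```python
import numpy as np

# Synthetic 2 Hz tapping with 30-degree amplitude about a fixed axis.
rng = np.random.default_rng(5)
fs = 200.0                                    # assumed gyro sample rate, Hz
t = np.arange(0, 2, 1 / fs)
true_rate = 2 * np.pi * 2 * 30 * np.cos(2 * np.pi * 2 * t)     # deg/s
axis_true = np.array([0.9, 0.4, 0.2]); axis_true /= np.linalg.norm(axis_true)
gyro = np.outer(true_rate, axis_true) + rng.normal(0, 0.5, (t.size, 3))

# Dominant axis = first principal direction of the 3D rate samples.
_, _, vt = np.linalg.svd(gyro - gyro.mean(0), full_matrices=False)
axis = vt[0] * np.sign(vt[0] @ axis_true)     # fix sign for comparison only

angle = np.cumsum(gyro @ axis) / fs           # integrated tapping angle, deg
peak = np.max(np.abs(angle))                  # should recover ~30 deg amplitude
```

Over short tapping epochs the gyro bias drift that plagues long integrations stays small, which is part of what makes a single-angle description workable.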
NASA Astrophysics Data System (ADS)
Vagh, Hardik A.; Baghai-Wadji, Alireza
2008-12-01
Current technological challenges in materials science and high-tech device industry require the solution of boundary value problems (BVPs) involving regions of various scales, e.g. multiple thin layers, fibre-reinforced composites, and nano/micro pores. In most cases straightforward application of standard variational techniques to BVPs of practical relevance necessarily leads to unsatisfactorily ill-conditioned analytical and/or numerical results. To remedy the computational challenges associated with sub-sectional heterogeneities various sophisticated homogenization techniques need to be employed. Homogenization refers to the systematic process of smoothing out the sub-structural heterogeneities, leading to the determination of effective constitutive coefficients. Ordinarily, homogenization involves a sophisticated averaging and asymptotic order analysis to obtain solutions. In the majority of the cases only zero-order terms are constructed due to the complexity of the processes involved. In this paper we propose a constructive scheme for obtaining homogenized solutions involving higher order terms, and thus, guaranteeing higher accuracy and greater robustness of the numerical results. We present
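As a concrete example of an effective constitutive coefficient, zero-order homogenization of a 1D two-phase laminate reduces to volume-fraction-weighted means: the harmonic mean for conduction across the layers and the arithmetic mean along them. This is a textbook result used only to illustrate the idea; the coefficients are invented:

```python
# Two-phase laminate: phase conductivities and volume fractions (assumed).
k1, k2 = 1.0, 100.0
f1, f2 = 0.5, 0.5

# Across the layers the phases act in series: harmonic (Reuss-type) mean.
k_eff_series = 1.0 / (f1 / k1 + f2 / k2)

# Along the layers the phases act in parallel: arithmetic (Voigt-type) mean.
k_eff_parallel = f1 * k1 + f2 * k2
```

Higher-order homogenization terms of the kind discussed above correct these leading-order coefficients when the layer scale is not vanishingly small compared with the field variation.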
Liu, Tung-Kuan; Chen, Yeh-Peng; Hou, Zone-Yuan; Wang, Chao-Chih; Chou, Jyh-Horng
2014-06-01
Evaluating and treating stress can substantially benefit people with health problems. Currently, mental stress is evaluated using medical questionnaires. However, the accuracy of this evaluation method is questionable because of variations caused by factors such as cultural differences and individual subjectivity. Measuring biomedical signals is an effective method for estimating mental stress that enables this problem to be overcome. However, the relationship between levels of mental stress and biomedical signals remains poorly understood. A refined rough set algorithm is proposed to determine this relationship; the algorithm combines rough set theory with a hybrid Taguchi-genetic algorithm and is called RS-HTGA. Two parameters were used for evaluating the performance of the proposed RS-HTGA method. A dataset obtained from a practice clinic comprising 362 cases (196 male, 166 female) was adopted to evaluate the performance of the proposed approach. The empirical results indicate that the proposed method can achieve acceptable accuracy in medical practice. Furthermore, the proposed method was successfully used to identify the relationship between mental stress levels and biomedical signals. In addition, a comparison between the RS-HTGA and a support vector machine (SVM) method indicated that both methods yield good results. The total averages for sensitivity, specificity, and precision were greater than 96%, indicating that both algorithms produce highly accurate results, but a substantial difference in discrimination existed among people with Phase 0 stress: the SVM algorithm achieved 89%, whereas the RS-HTGA achieved 96%. Therefore, the RS-HTGA is superior to the SVM algorithm. The kappa test results for both algorithms were greater than 0.936, indicating high accuracy and consistency.
The areas under the receiver operating characteristic curve for both the RS-HTGA and the SVM method were greater than 0.77, indicating good discrimination capability. In this study, crucial attributes in stress evaluation were successfully recognized using biomedical signals, thereby enabling the conservation of medical resources and elucidating the mapping relationship between levels of mental stress and candidate attributes. In addition, we developed a prototype system for mental stress evaluation that can be used to provide benefits in medical practice. Copyright © 2014. Published by Elsevier B.V.
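Cohen's kappa, the agreement statistic reported above, is straightforward to compute from a confusion matrix; the counts below are hypothetical, chosen only to land near the quoted 0.936 level:

```python
import numpy as np

# Cohen's kappa: observed agreement corrected for chance agreement.
def cohen_kappa(cm):
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_obs = np.trace(cm) / n                            # observed agreement
    p_exp = (cm.sum(0) * cm.sum(1)).sum() / n**2        # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical predicted-vs-true counts for a two-class stress screen (n = 362).
cm = [[170, 5], [6, 181]]
kappa = cohen_kappa(cm)
```

Values above roughly 0.8 are conventionally read as almost-perfect agreement, which is why kappa > 0.936 supports the consistency claim.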
Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian; Huliraj, N; Revadi, S S
2017-06-08
Auscultation is a medical procedure used for the initial diagnosis and assessment of lung and heart diseases. From this perspective, we propose assessing the performance of the extreme learning machine (ELM) classifiers for the diagnosis of pulmonary pathology using breath sounds. Energy and entropy features were extracted from the breath sound using the wavelet packet transform. The statistical significance of the extracted features was evaluated by one-way analysis of variance (ANOVA). The extracted features were inputted into the ELM classifier. The maximum classification accuracies obtained for the conventional validation (CV) of the energy and entropy features were 97.36% and 98.37%, respectively, whereas the accuracies obtained for the cross validation (CRV) of the energy and entropy features were 96.80% and 97.91%, respectively. In addition, maximum classification accuracies of 98.25% and 99.25% were obtained for the CV and CRV of the ensemble features, respectively. The results indicate that the classification accuracy obtained with the ensemble features was higher than those obtained with the energy and entropy features.
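An ELM in its basic form needs only a randomly initialized hidden layer and a single least-squares solve for the output weights, which is what makes it fast to train. The sketch below uses synthetic two-class features, not actual wavelet-packet energy or entropy features from breath sounds:

```python
import numpy as np

# Bare-bones extreme learning machine for binary classification.
rng = np.random.default_rng(6)
n, d, hidden = 400, 8, 60
X = rng.normal(0, 1, (n, d))                       # stand-in feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)    # assumed class rule

W = rng.normal(0, 1, (d, hidden))                  # random input weights (never trained)
b = rng.normal(0, 1, hidden)                       # random biases
H = np.tanh(X @ W + b)                             # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)       # only the output weights are fit

acc = np.mean(((H @ beta) > 0.5) == (y == 1))      # training accuracy
```

Cross-validated accuracy on held-out recordings, as reported above, is the meaningful figure; training accuracy is shown here only to verify the mechanics.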
Overconfidence across the psychosis continuum: a calibration approach.
Balzan, Ryan P; Woodward, Todd S; Delfabbro, Paul; Moritz, Steffen
2016-11-01
An 'overconfidence in errors' bias has been consistently observed in people with schizophrenia relative to healthy controls; however, the bias is seldom found to be associated with delusional ideation. Using a more precise confidence-accuracy calibration measure of overconfidence, the present study aimed to explore whether the overconfidence bias is greater in people with higher delusional ideation. A sample of 25 participants with schizophrenia and 50 non-clinical controls (25 high- and 25 low-delusion-prone) completed 30 difficult trivia questions (accuracy <75%); 15 'half-scale' items required participants to indicate their level of confidence in their accuracy, and the remaining 'confidence-range' items asked participants to provide lower/upper bounds within which they were 80% confident the true answer lay. There was a trend towards higher overconfidence for half-scale items in the schizophrenia and high-delusion-prone groups, which reached statistical significance for confidence-range items. However, accuracy was particularly low in the two delusional groups, and a significant negative correlation between clinical delusional scores and overconfidence was observed for half-scale items within the schizophrenia group. Evidence in support of an association between overconfidence and delusional ideation was therefore mixed. Inflated confidence-accuracy miscalibration for the two delusional groups may be better explained by their greater unawareness of their underperformance than by genuinely inflated overconfidence in errors.
Stress and emotional valence effects on children's versus adolescents' true and false memory.
Quas, Jodi A; Rush, Elizabeth B; Yim, Ilona S; Edelstein, Robin S; Otgaar, Henry; Smeets, Tom
2016-01-01
Despite considerable interest in understanding how stress influences memory accuracy and errors, particularly in children, methodological limitations have made it difficult to examine the effects of stress independent of the effects of the emotional valence of to-be-remembered information in developmental populations. In this study, we manipulated stress levels in 7-8- and 12-14-year-olds and then exposed them to negative, neutral, and positive word lists. Shortly afterward, we tested their recognition memory for the words and false memory for non-presented but related words. Adolescents in the high-stress condition were more accurate than those in the low-stress condition, while children's accuracy did not differ across stress conditions. Also, among adolescents, accuracy and errors were higher for the negative than positive words, while in children, word valence was unrelated to accuracy. Finally, increases in children's and adolescents' cortisol responses, especially in the high-stress condition, were related to greater accuracy but not false memories and only for positive emotional words. Findings suggest that stress at encoding, as well as the emotional content of to-be-remembered information, may influence memory in different ways across development, highlighting the need for greater complexity in existing models of true and false memory formation.
Analysis of biochemical phase shift oscillators by a harmonic balancing technique.
Rapp, P
1976-11-25
The use of harmonic balancing techniques for theoretically investigating a large class of biochemical phase shift oscillators is outlined, and the accuracy of this approximate technique for large-dimension nonlinear chemical systems is considered. It is concluded that for the equations under study these techniques can be successfully employed both to find periodic solutions and to identify those cases which cannot oscillate. The technique is a general one, and it is possible to state a step-by-step procedure for its application. It has a substantial advantage in producing results which are immediately valid for arbitrary dimension. As the accuracy of the method increases with dimension, it complements classical small-dimension methods. The results obtained by harmonic balancing analysis are compared with those obtained by studying the local stability properties of the singular points of the differential equation. A general theorem is derived which identifies those special cases where the results of first-order harmonic balancing are identical to those of local stability analysis, and a necessary condition for this equivalence is derived. As a concrete example, the n-dimensional Goodwin oscillator is considered for p, the Hill coefficient of the feedback metabolite, equal to three and four. It is shown that for p = 3 or 4 and n less than or equal to 4, the approximation indicates that it is impossible to construct a set of physically permissible reaction constants such that the system possesses a periodic solution. However, for n greater than or equal to 5 it is always possible to find a large domain in the reaction-constant space giving stable oscillations. A means of constructing such a parameter set is given. The results obtained here are compared with previously derived results for p = 1 and p = 2.
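The n-dimensional Goodwin loop discussed above can be written down and integrated in a few lines. The sketch below uses the standard textbook form of the model with all rate constants set equal and an Euler step; the specific constants and step size are illustrative, not the paper's parameter set:

```python
def goodwin_step(x, p=3, k=1.0, dt=0.01):
    """One Euler step of an n-dimensional Goodwin phase shift oscillator:
    the first species is produced under Hill-type repression (coefficient p)
    by the last metabolite in the loop; each later species is driven linearly
    by its predecessor and degrades at rate k."""
    n = len(x)
    dx = [1.0 / (1.0 + x[-1] ** p) - k * x[0]]
    dx += [x[i - 1] - k * x[i] for i in range(1, n)]
    return [xi + dt * dxi for xi, dxi in zip(x, dx)]

# Integrate a 5-dimensional loop with p = 3, the smallest dimension for
# which the abstract reports that stable oscillation becomes possible.
state = [0.5] * 5
for _ in range(20000):
    state = goodwin_step(state)
```

With production bounded by the Hill term and first-order degradation, trajectories remain positive and bounded, which is what makes the harmonic-balance ansatz of a dominant first harmonic plausible.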
Integrated Multiscale Modeling of Molecular Computing Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregory Beylkin
2012-03-23
Significant advances were made on all objectives of the research program. We have developed fast multiresolution methods for performing electronic structure calculations with emphasis on constructing efficient representations of functions and operators. We extended our approach to problems of scattering in solids, i.e. constructing fast algorithms for computing above the Fermi energy level. Part of the work was done in collaboration with Robert Harrison and George Fann at ORNL. Specific results (in part supported by this grant) are listed here and are described in greater detail. (1) We have implemented a fast algorithm to apply the Green's function for the free space (oscillatory) Helmholtz kernel. The algorithm maintains its speed and accuracy when the kernel is applied to functions with singularities. (2) We have developed a fast algorithm for applying periodic and quasi-periodic, oscillatory Green's functions and those with boundary conditions on simple domains. Importantly, the algorithm maintains its speed and accuracy when applied to functions with singularities. (3) We have developed a fast algorithm for obtaining and applying multiresolution representations of periodic and quasi-periodic Green's functions and Green's functions with boundary conditions on simple domains. (4) We have implemented modifications to improve the speed of adaptive multiresolution algorithms for applying operators which are represented via a Gaussian expansion. (5) We have constructed new nearly optimal quadratures for the sphere that are invariant under the icosahedral rotation group. (6) We obtained new results on approximation of functions by exponential sums and/or rational functions, one of the key methods that allows us to construct separated representations for Green's functions. (7) We developed a new fast and accurate reduction algorithm for obtaining optimal approximation of functions by exponential sums and/or their rational representations.
A kinematic analysis of visually-guided movement in Williams syndrome.
Hocking, Darren R; Rinehart, Nicole J; McGinley, Jennifer L; Moss, Simon A; Bradshaw, John L
2011-02-15
Previous studies have reported that people with the neurodevelopmental disorder Williams syndrome (WS) exhibit difficulties with visuomotor control. In the current study, we examined the extent to which visuomotor deficits were associated with movement planning or feedback-based on-line control. We used a variant of Fitts' reciprocal aiming task on a computerized touchscreen in adults with WS, IQ-matched individuals with Down syndrome (DS), and typically developing controls. By manipulating task difficulty as a function of both target size and amplitude, we were able to vary the requirements for accuracy to examine processes associated with dorsal visual stream and cerebellar functioning. Although a greater increase in movement time as a function of task difficulty was observed in both clinical groups (WS and DS), a greater magnitude in the late kinematic components of movement, specifically time after peak velocity, was revealed in the WS group during increased demands for accuracy. In contrast, the DS group showed a greater speed-accuracy trade-off, with significantly reduced and more variable endpoint accuracy, which may be associated with cerebellar deficits. In addition, the WS group spent more time stationary in the target when task-related features reflected a higher level of difficulty, suggestive of specific deficits in movement planning. Our results indicate that the visuomotor coordination deficits in WS may reflect known impairments of the dorsal stream, but may also indicate a role for the cerebellum in dynamic feed-forward motor control. Copyright © 2010 Elsevier B.V. All rights reserved.
Spatial localization deficits and auditory cortical dysfunction in schizophrenia
Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.
2014-01-01
Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on the ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low-frequency tones generated from seven speakers concavely arranged with 30 degrees of separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl's gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608
Accuracy of digital and analogue cephalometric measurements assessed with the sandwich technique.
Santoro, Margherita; Jarjoura, Karim; Cangialosi, Thomas J
2006-03-01
The purpose of the study was to evaluate the accuracy of cephalometric measurements obtained with digital tracing software compared with equivalent hand-traced measurements. In the sandwich technique, a storage phosphor plate and a conventional radiographic film are placed in the same cassette and exposed simultaneously. The method eliminates positioning errors and potential differences associated with multiple radiographic exposures that affected previous studies. It was used to ensure the equivalence of the digital images to the hard copy radiographs. Cephalometric measurements instead of landmarks were the focus of this investigation in order to acquire data with direct clinical applications. The sample consisted of digital and analog radiographic images from 47 patients after orthodontic treatment. Nine cephalometric landmarks were identified and 13 measurements calculated by 1 operator, both manually and with digital tracing software. Measurement error was assessed for each method by duplicating measurements of 25 randomly selected radiographs and by using Pearson's correlation coefficient. A paired t test was used to detect differences between the manual and digital methods. An overall greater variability in the digital cephalometric measurements was found. Differences between the 2 methods for SNA, ANB, S-Go:N-Me, U1/L1, L1-GoGn, and N-ANS:ANS-Me were statistically significant (P < .05). However, only the U1/L1 and S-Go:N-Me measurements showed differences greater than 2 SE (P < .0001). The 2 tracing methods provide similar clinical results; therefore, efficient digital cephalometric software can be reliably chosen as a routine diagnostic tool. The user-friendly sandwich technique was effective as an option for interoffice communications.
Jarrahian, Courtney; Rein-Weston, Annie; Saxon, Gene; Creelman, Ben; Kachmarik, Greg; Anand, Abhijeet; Zehrung, Darin
2017-03-27
Intradermal delivery of a fractional dose of inactivated poliovirus vaccine (IPV) offers potential benefits compared to intramuscular (IM) delivery, including possible cost reductions and easing of IPV supply shortages. Objectives of this study were to assess intradermal delivery devices for dead space, wastage generated by the filling process, dose accuracy, and total number of doses that can be delivered per vial. Devices tested included syringes with staked (fixed) needles (autodisable syringes and syringes used with intradermal adapters), a luer-slip needle and syringe, a mini-needle syringe, a hollow microneedle device, and disposable-syringe jet injectors with their associated filling adapters. Each device was used to withdraw 0.1-mL fractional doses from single-dose IM glass vials which were then ejected into a beaker. Both vial and device were weighed before and after filling and again after expulsion of liquid to record change in volume at each stage of the process. Data were used to calculate the number of doses that could potentially be obtained from multidose vials. Results show wide variability in dead space, dose accuracy, overall wastage, and total number of doses that can be obtained per vial among intradermal delivery devices. Syringes with staked needles had relatively low dead space and low overall wastage, and could achieve a greater number of doses per vial compared to syringes with a detachable luer-slip needle. Of the disposable-syringe jet injectors tested, one was comparable to syringes with staked needles. If intradermal delivery of IPV is introduced, selection of an intradermal delivery device can have a substantial impact on vaccine wasted during administration, and thus on the required quantity of vaccine that needs to be purchased. 
An ideal intradermal delivery device should be not only safe, reliable, accurate, and acceptable to users and vaccine recipients, but should also have low dead space, high dose accuracy, and low overall wastage to maximize the potential number of doses that can be withdrawn and delivered. Copyright © 2017 PATH. Published by Elsevier Ltd.. All rights reserved.
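The doses-per-vial arithmetic that the study performs gravimetrically can be illustrated with volumes. The dead-space and per-draw figures below are hypothetical placeholders, not measurements from the paper; only the 0.1-mL fractional dose comes from the abstract:

```python
def doses_per_vial(vial_ml, dose_ml=0.1, dead_space_ml=0.02, fill_loss_ml=0.0):
    """Whole fractional doses obtainable from one vial when every withdrawal
    consumes the delivered dose plus the device's dead space, after an
    optional one-time loss during filling/priming."""
    usable = vial_ml - fill_loss_ml
    return int(usable // (dose_ml + dead_space_ml))
```

For instance, under these assumed numbers a 5-mL multidose vial yields 41 fractional doses with 0.02 mL of dead space per draw, but only 29 with 0.07 mL, which is the kind of gap the abstract says drives vaccine-purchase requirements.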
Boerner, Vinzent; Johnston, David J; Tier, Bruce
2014-10-24
The major obstacles for the implementation of genomic selection in Australian beef cattle are the variety of breeds and in general, small numbers of genotyped and phenotyped individuals per breed. The Australian Beef Cooperative Research Center (Beef CRC) investigated these issues by deriving genomic prediction equations (PE) from a training set of animals that covers a range of breeds and crosses including Angus, Murray Grey, Shorthorn, Hereford, Brahman, Belmont Red, Santa Gertrudis and Tropical Composite. This paper presents accuracies of genomically estimated breeding values (GEBV) that were calculated from these PE in the commercial pure-breed beef cattle seed stock sector. PE derived by the Beef CRC from multi-breed and pure-breed training populations were applied to genotyped Angus, Limousin and Brahman sires and young animals, but with no pure-breed Limousin in the training population. The accuracy of the resulting GEBV was assessed by their genetic correlation to their phenotypic target trait in a bi-variate REML approach that models GEBV as trait observations. Accuracies of most GEBV for Angus and Brahman were between 0.1 and 0.4, with accuracies for abattoir carcass traits generally greater than for live animal body composition traits and reproduction traits. Estimated accuracies greater than 0.5 were only observed for Brahman abattoir carcass traits and for Angus carcass rib fat. Averaged across traits within breeds, accuracies of GEBV were highest when PE from the pooled across-breed training population were used. However, for the Angus and Brahman breeds the difference in accuracy from using pure-breed PE was small. For the Limousin breed no reasonable results could be achieved for any trait. Although accuracies were generally low compared to published accuracies estimated within breeds, they are in line with those derived in other multi-breed populations. 
Thus PE developed by the Beef CRC can contribute to the implementation of genomic selection in Australian beef cattle breeding.
Accuracy investigation of phthalate metabolite standards.
Langlois, Éric; Leblanc, Alain; Simard, Yves; Thellen, Claude
2012-05-01
Phthalates are ubiquitous compounds whose metabolites are usually determined in urine for biomonitoring studies. Following suspect and unexplained results from our laboratory in an external quality-assessment scheme, we investigated the accuracy of all phthalate metabolite standards in our possession by comparing them with those of several suppliers. Our findings suggest that commercial phthalate metabolite certified solutions are not always accurate and that lot-to-lot discrepancies significantly affect the accuracy of the results obtained with several of these standards. These observations indicate that the reliability of the results obtained from different lots of standards is not equal, which reduces the possibility of intra-laboratory and inter-laboratory comparisons of results. However, agreements of accuracy have been observed for a majority of neat standards obtained from different suppliers, which indicates that a solution to this issue is available. Data accuracy of phthalate metabolites should be of concern for laboratories performing phthalate metabolite analysis because of the standards used. The results of our investigation are presented from the perspective that laboratories performing phthalate metabolite analysis can obtain accurate and comparable results in the future. Our findings will contribute to improving the quality of future phthalate metabolite analyses and will affect the interpretation of past results.
Przybytek, Michal; Helgaker, Trygve
2013-08-07
We analyze the accuracy of the Coulomb energy calculated using the Gaussian-and-finite-element-Coulomb (GFC) method. In this approach, the electrostatic potential associated with the molecular electronic density is obtained by solving the Poisson equation and then used to calculate matrix elements of the Coulomb operator. The molecular electrostatic potential is expanded in a mixed Gaussian-finite-element (GF) basis set consisting of Gaussian functions of s symmetry centered on the nuclei (with exponents obtained from a full optimization of the atomic potentials generated by the atomic densities from symmetry-averaged restricted open-shell Hartree-Fock theory) and shape functions defined on uniform finite elements. The quality of the GF basis is controlled by means of a small set of parameters; for a given width of the finite elements d, the highest accuracy is achieved at smallest computational cost when tricubic (n = 3) elements are used in combination with two (γ(H) = 2) and eight (γ(1st) = 8) Gaussians on hydrogen and first-row atoms, respectively, with exponents greater than a given threshold (αmin (G)=0.5). The error in the calculated Coulomb energy divided by the number of atoms in the system depends on the system type but is independent of the system size or the orbital basis set, vanishing approximately like d(4) with decreasing d. If the boundary conditions for the Poisson equation are calculated in an approximate way, the GFC method may lose its variational character when the finite elements are too small; with larger elements, it is less sensitive to inaccuracies in the boundary values. As it is possible to obtain accurate boundary conditions in linear time, the overall scaling of the GFC method for large systems is governed by another computational step-namely, the generation of the three-center overlap integrals with three Gaussian orbitals. 
The most unfavorable (nearly quadratic) scaling is observed for compact, truly three-dimensional systems; however, this scaling can be reduced to linear by introducing more effective techniques for recognizing significant three-center overlap distributions.
Proton-nucleus total inelastic cross sections - An empirical formula for E greater than 10 MeV
NASA Technical Reports Server (NTRS)
Letaw, J. R.; Silberberg, R.; Tsao, C. H.
1983-01-01
An empirical formula for the total inelastic cross section of protons on nuclei with charge greater than 1 is presented. The formula is valid with a varying degree of accuracy down to proton energies of 10 MeV. At high energies (equal to or greater than 2 GeV) the formula reproduces experimental data to within reported errors (about 2%).
Yan, Min; Takahashi, Hidekazu; Nishimura, Fumio
2004-12-01
The aim of the present study was to evaluate the dimensional accuracy and surface property of titanium casting obtained using a gypsum-bonded alumina investment. The experimental gypsum-bonded alumina investment with 20 mass% gypsum content mixed with 2 mass% potassium sulfate was used for five cp titanium castings and three Cu-Zn alloy castings. The accuracy, surface roughness (Ra), and reaction layer thickness of these castings were investigated. The accuracy of the castings obtained from the experimental investment ranged from -0.04 to 0.23%, while surface roughness (Ra) ranged from 7.6 to 10.3microm. A reaction layer of about 150 microm thickness under the titanium casting surface was observed. These results suggested that the titanium casting obtained using the experimental investment was acceptable. Although the reaction layer was thin, surface roughness should be improved.
Lenz, Patrick R N; Beaulieu, Jean; Mansfield, Shawn D; Clément, Sébastien; Desponts, Mireille; Bousquet, Jean
2017-04-28
Genomic selection (GS) uses information from genomic signatures consisting of thousands of genetic markers to predict complex traits. As such, GS represents a promising approach to accelerate tree breeding, which is especially relevant for the genetic improvement of boreal conifers characterized by long breeding cycles. In the present study, we tested GS in an advanced-breeding population of the boreal black spruce (Picea mariana [Mill.] BSP) for growth and wood quality traits, and concurrently examined factors affecting GS model accuracy. The study relied on 734 25-year-old trees belonging to 34 full-sib families derived from 27 parents and that were established on two contrasting sites. Genomic profiles were obtained from 4993 Single Nucleotide Polymorphisms (SNPs) representative of as many gene loci distributed among the 12 linkage groups common to spruce. GS models were obtained for four growth and wood traits. Validation using independent sets of trees showed that GS model accuracy was high, related to trait heritability and equivalent to that of conventional pedigree-based models. In forward selection, gains per unit of time were three times higher with the GS approach than with conventional selection. In addition, models were also accurate across sites, indicating little genotype-by-environment interaction in the area investigated. Using information from half-sibs instead of full-sibs led to a significant reduction in model accuracy, indicating that the inclusion of relatedness in the model contributed to its higher accuracies. About 500 to 1000 markers were sufficient to obtain GS model accuracy almost equivalent to that obtained with all markers, whether they were well spread across the genome or from a single linkage group, further confirming the implication of relatedness and potential long-range linkage disequilibrium (LD) in the high accuracy estimates obtained. 
Only slightly higher model accuracy was obtained when using marker subsets that were identified to carry large effects, indicating a minor role for short-range LD in this population. This study supports the integration of GS models in advanced-generation tree breeding programs, given that high genomic prediction accuracy was obtained with a relatively small number of markers due to high relatedness and family structure in the population. In boreal spruce breeding programs and similar ones with long breeding cycles, much larger gain per unit of time can be obtained from genomic selection at an early age than by the conventional approach. GS thus appears highly profitable, especially in the context of forward selection in species which are amenable to mass vegetative propagation of selected stock, such as spruces.
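The "three times higher gain per unit of time" result follows from the breeder's equation expressed per year: response R = i·r·σ_A / L, where i is selection intensity, r the accuracy, σ_A the additive genetic standard deviation, and L the cycle length. The sketch below uses placeholder values for everything except the threefold cycle-time reduction implied by the abstract:

```python
def gain_per_year(accuracy, intensity, sigma_a, cycle_years):
    """Response to selection per unit time: R/yr = i * r * sigma_A / L."""
    return intensity * accuracy * sigma_a / cycle_years

# Hypothetical numbers: same accuracy and intensity in both programs,
# but genomic selection shortens the breeding cycle threefold.
conventional = gain_per_year(accuracy=0.7, intensity=1.4, sigma_a=1.0,
                             cycle_years=30.0)
genomic = gain_per_year(accuracy=0.7, intensity=1.4, sigma_a=1.0,
                        cycle_years=10.0)
```

Because GS model accuracy was found equivalent to pedigree-based accuracy, the entire per-year advantage comes from the shorter cycle in this formulation.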
Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2015-11-01
The rotated staggered-grid finite-difference (RSFD) method is an effective approach for numerical modeling to study the wavefield characteristics in tilted transversely isotropic (TTI) media. However, it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes, based on the sampling approximation (SA) method and the least-squares (LS) method respectively, to overcome this problem. We first briefly introduce the RSFD theory, based on which we derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then different forms of analysis are used to compare the SA-based and LS-based RSFD schemes with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes, and indicates that they can effectively widen the accurate wavenumber range compared with the TE-based RSFD scheme. Further comparisons between the two optimal schemes show that at small wavenumbers the SA-based RSFD scheme performs better, while at large wavenumbers the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based and LS-based RSFD schemes can achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.
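The TE-based and LS-based design principles can be sketched for a generic staggered-grid first derivative (grid spacing h = 1). This is an illustration of the two coefficient constructions, not the paper's TTI-specific formulation; the wavenumber band below is an assumed design parameter:

```python
import numpy as np

def taylor_coeffs(M):
    """TE-based coefficients: match the first M odd Taylor terms of the
    staggered first derivative exactly, i.e. solve
    sum_m c_m (2m-1)^(2j-1) = delta_{j,1} for j = 1..M."""
    A = np.array([[(2 * m - 1) ** (2 * j - 1) for m in range(1, M + 1)]
                  for j in range(1, M + 1)], dtype=float)
    rhs = np.zeros(M)
    rhs[0] = 1.0
    return np.linalg.solve(A, rhs)

def ls_coeffs(M, band=2.4, nk=400):
    """LS-based coefficients: fit the discrete spectral response
    sum_m c_m sin((2m-1)k/2) to the exact value k/2 over a whole
    wavenumber band, in the least-squares sense."""
    k = np.linspace(1e-3, band, nk)
    S = np.sin(np.outer(k / 2.0, 2 * np.arange(1, M + 1) - 1))
    c, *_ = np.linalg.lstsq(S, k / 2.0, rcond=None)
    return c
```

For M = 2 the Taylor route recovers the classic 9/8, -1/24 operator; the LS route accepts slightly larger error at small k in exchange for a wider usable wavenumber range, which is exactly the trade-off the abstract describes.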
Discrimination in measures of knowledge monitoring accuracy
Was, Christopher A.
2014-01-01
Knowledge monitoring predicts academic outcomes in many contexts. However, measures of knowledge monitoring accuracy are often incomplete. In the current study, a measure of students’ ability to discriminate known from unknown information as a component of knowledge monitoring was considered. Undergraduate students’ knowledge monitoring accuracy was assessed and used to predict final exam scores in a specific course. It was found that gamma, a measure commonly used as the measure of knowledge monitoring accuracy, accounted for a small, but significant amount of variance in academic performance whereas the discrimination and bias indexes combined to account for a greater amount of variance in academic performance. PMID:25339979
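For binary known/unknown judgments scored against correct/incorrect outcomes, both gamma and a discrimination index reduce to the four cells of a 2x2 table. The definitions below follow common usage in the metacognition literature and are an illustration, not the study's scoring code:

```python
def monitoring_indices(judged_known, correct):
    """Goodman-Kruskal gamma and a discrimination index computed from
    paired binary judgments and outcomes. Discrimination is taken here as
    P(judged known | correct) minus P(judged known | incorrect)."""
    a = sum(1 for j, c in zip(judged_known, correct) if j and c)        # known, correct
    b = sum(1 for j, c in zip(judged_known, correct) if j and not c)    # known, wrong
    c_ = sum(1 for j, c in zip(judged_known, correct) if not j and c)   # unknown, correct
    d = sum(1 for j, c in zip(judged_known, correct) if not j and not c)
    gamma = (a * d - b * c_) / (a * d + b * c_)
    discrimination = a / (a + c_) - b / (b + d)
    return gamma, discrimination
```

The two indices can disagree: gamma depends only on the product of concordant versus discordant pairs, while the discrimination index is sensitive to the marginal rates, which is one reason they capture different portions of variance in outcomes.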
Cation and anion sequences in dark-adapted Balanus photoreceptor
1977-01-01
Anion and cation permeabilities in dark-adapted Balanus photoreceptors were determined by comparing changes in the membrane potential in response to replacement of the dominant anion (Cl-) or cation (Na+) by test anions or cations in the superfusing solution. The anion permeability sequence obtained was PI greater than PSO4 greater than PBr greater than PCl greater than Pisethionate greater than Pmethanesulfonate. Gluconate, glucuronate, and glutamate generally appeared more permeable and propionate less permeable than Cl-. The alkali-metal cation permeability sequence obtained was PK greater than PRb greater than PCs greater than PNa approximately PLi. This corresponds to Eisenman's sequence IV, which is the same sequence that has been obtained for other classes of nerve cells in the resting state. The values obtained for the permeability ratios of the alkali-metal cations are considered to be minimal. The membrane conductance measured by passing inward current pulses in the different test cations followed the sequence GK greater than GRb greater than GCs greater than GNa greater than GLi. The conductance ratios obtained for a full substitution of the test cation agreed quite well with permeability ratios for all the alkali-metal cations except K+, which was generally higher. PMID:199688
Dyskinesias differentiate autistic disorder from catatonia.
Brasic, J R; Barnett, J Y; Will, M V; Nadrich, R H; Sheitman, B B; Ahmad, R; Mendonca, M de F; Kaplan, D; Brathwaite, C
2000-12-01
Autistic disorder and catatonia are neuropsychiatric syndromes defined by impairments in social interaction, communication, and restricted, stereotypical motor routines. Assessments of children with these disorders are typically restricted in scope by the patients' limited ability to comprehend directions. The authors performed systematic assessments of dyskinesias on six prepubertal boys with autistic disorder and mental retardation and on one adolescent male with catatonia to determine if this type of information could be routinely obtained. The boys with autistic disorder had more stereotypies and tics, a greater degree of akathisia and hyperactivity, and more compulsions than the adolescent with catatonia. Catatonia was associated with catalepsy and dystonic postures. The authors conclude that the diagnostic accuracy and specificity of neuropsychiatric syndromes may be enhanced by the systematic assessment of the dyskinesias associated with each condition.
Strange attractors in weakly turbulent Couette-Taylor flow
NASA Technical Reports Server (NTRS)
Brandstater, A.; Swinney, Harry L.
1987-01-01
An experiment is conducted on the transition from quasi-periodic to weakly turbulent flow of a fluid contained between concentric cylinders with the inner cylinder rotating and the outer cylinder at rest. Power spectra, phase-space portraits, and circle maps obtained from velocity time-series data indicate that the nonperiodic behavior observed is deterministic, that is, it is described by strange attractors. Various problems that arise in computing the dimension of strange attractors constructed from experimental data are discussed and it is shown that these problems impose severe requirements on the quantity and accuracy of data necessary for determining dimensions greater than about 5. In the present experiment the attractor dimension increases from 2 at the onset of turbulence to about 4 at a Reynolds number 50-percent above the onset of turbulence.
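Attractor dimensions of the kind discussed above are commonly estimated from time-series data with a Grassberger-Procaccia correlation sum: delay-embed the signal, count point pairs closer than a radius r, and read the dimension off the slope of log C(r) versus log r. The sketch below is a generic illustration of that estimator, not the authors' analysis pipeline; the embedding parameters and radii are assumptions:

```python
import numpy as np

def correlation_dimension(x, delay, m=2, r1=0.1, r2=0.2):
    """Estimate the attractor dimension as the local slope of
    log C(r) vs log r between two radii, where C(r) is the fraction of
    embedded point pairs separated by less than r."""
    n = len(x) - (m - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(m)])
    diff = emb[:, None, :] - emb[None, :, :]          # all pairwise offsets
    dist = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(n, k=1)]
    c1 = (dist < r1).mean()
    c2 = (dist < r2).mean()
    return np.log(c2 / c1) / np.log(r2 / r1)
```

A periodic (limit-cycle) signal should come out near dimension 1; the abstract's point is that pushing this kind of estimate to dimensions above about 5 demands far more data, and far cleaner data, than the low-dimensional cases.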
Morales, Susana; Barros, Jorge; Echávarri, Orietta; García, Fabián; Osses, Alex; Moya, Claudia; Maino, María Paz; Fischman, Ronit; Núñez, Catalina; Szmulewicz, Tita; Tomicic, Alemka
2017-01-01
In efforts to develop reliable methods to detect the likelihood of impending suicidal behaviors, we aimed to gain a deeper understanding of the state of suicide risk by determining the combination of variables that distinguishes between groups with and without suicide risk. We studied 707 patients consulting for mental health issues in three health centers in Greater Santiago, Chile. Using 345 variables, an analysis was carried out with artificial intelligence tools, Cross Industry Standard Process for Data Mining processes, and decision-tree techniques. The basic algorithm was top-down, and the most suitable division produced by the tree was selected by using the lowest Gini index as a criterion and by looping until the condition of belonging to the group with suicidal behavior was fulfilled. Four trees distinguishing the groups were obtained, of which one was analyzed in greater detail, since it included both clinical and personality variables. This tree consists of six nodes without suicide risk and eight nodes with suicide risk (tree 01: accuracy 0.674, precision 0.652, recall 0.678, specificity 0.670, F-measure 0.665, receiver operating characteristic (ROC) area under the curve (AUC) 73.35%; tree 02: accuracy 0.669, precision 0.642, recall 0.694, specificity 0.647, F-measure 0.667, ROC AUC 68.91%; tree 03: accuracy 0.681, precision 0.675, recall 0.638, specificity 0.721, F-measure 0.656, ROC AUC 65.86%; tree 04: accuracy 0.714, precision 0.734, recall 0.628, specificity 0.792, F-measure 0.677, ROC AUC 58.85%). This study defines the interactions among a group of variables associated with suicidal ideation and behavior. By using these variables, it may be possible to create a quick and easy-to-use tool.
As such, psychotherapeutic interventions could be designed to mitigate the impact of these variables on the emotional state of individuals, thereby reducing eventual risk of suicide. Such interventions may reinforce psychological well-being, feelings of self-worth, and reasons for living, for each individual in certain groups of patients.
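The Gini criterion used to grow these trees is simple impurity arithmetic: a node is scored by 1 minus the sum of squared class proportions, and a candidate split by the size-weighted impurity of its children. The sketch below shows only that arithmetic; the search over the study's 345 candidate variables is omitted, and the label names are placeholders:

```python
def gini(labels):
    """Gini impurity of a node: 1 - sum_k p_k^2 (0 means a pure node)."""
    n = len(labels)
    return 1.0 - sum((labels.count(v) / n) ** 2 for v in set(labels))

def split_impurity(left, right):
    """Size-weighted impurity of a candidate split; the top-down
    tree-growing loop keeps the split with the lowest value."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)
```

A split that cleanly separates at-risk from not-at-risk cases scores 0, so "lowest Gini index" in the abstract means the division whose child nodes are closest to pure.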
Tural, Cristina; Tor, Jordi; Sanvisens, Arantza; Pérez-Alvarez, Núria; Martínez, Elisenda; Ojanguren, Isabel; García-Samaniego, Javier; Rockstroh, Juergen; Barluenga, Eva; Muga, Robert; Planas, Ramon; Sirera, Guillem; Rey-Joly, Celestino; Clotet, Bonaventura
2009-03-01
We assessed the ability of 3 simple biochemical tests to stage liver fibrosis in patients co-infected with human immunodeficiency virus (HIV) and hepatitis C virus (HCV). We analyzed liver biopsy samples from 324 consecutive HIV/HCV-positive patients (72% men; mean age, 38 y; mean CD4+ T-cell count, 548 cells/mm³). Scheuer fibrosis scores were as follows: 30% had F0, 22% had F1, 19% had F2, 23% had F3, and 6% had F4. Logistic regression analyses were used to predict the probability of significant (≥F2) or advanced (≥F3) fibrosis, based on numeric scores from the APRI, FORNS, or FIB-4 tests (alone and in combination). Areas under the receiver operating characteristic curves (AUROCs) were analyzed to assess diagnostic performance. AUROC analyses indicated that the 3 tests had similar abilities to identify F2 and F3; the abilities of APRI, FORNS, and FIB-4 were as follows: F2 or greater: 0.72, 0.67, and 0.72, respectively; F3 or greater: 0.75, 0.73, and 0.78, respectively. The accuracy of each test in predicting which samples were F3 or greater was significantly higher than for F2 or greater (APRI, FORNS, and FIB-4: ≥F3: 75%, 76%, and 76%, respectively; ≥F2: 66%, 62%, and 68%, respectively). By using the lowest cut-off values for all 3 tests, F3 or greater was ruled out with sensitivity and negative predictive values of 79% to 94% and 87% to 91%, respectively, and 47% to 70% accuracy. Advanced liver fibrosis (≥F3) was identified using the highest cut-off value, with specificity and positive predictive values of 90% to 96% and 63% to 73%, respectively, and 75% to 77% accuracy. Simple biochemical tests accurately predicted liver fibrosis in more than half of the HIV/HCV co-infected patients. The absence and presence of liver fibrosis are predicted fairly well using the lowest and highest cut-off levels, respectively.
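APRI and FIB-4 are closed-form indices; a minimal sketch using their commonly published formulas follows (AST and ALT in IU/L, platelets in 10^9/L; the AST upper limit of normal is an input). The study's cut-off values are not reproduced here, and the FORNS index is omitted.

```python
import math

def apri(ast, ast_uln, platelets):
    """AST-to-Platelet Ratio Index: (AST / ULN) * 100 / platelets(10^9/L)."""
    return (ast / ast_uln) * 100 / platelets

def fib4(age, ast, alt, platelets):
    """FIB-4 index: (age * AST) / (platelets(10^9/L) * sqrt(ALT))."""
    return (age * ast) / (platelets * math.sqrt(alt))
```

Scores above or below published thresholds are then mapped to "rule in" or "rule out" decisions for significant or advanced fibrosis, as described in the abstract.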
ERIC Educational Resources Information Center
Matson, Johnny L.; Malone, Carrie J.; Gonzalez, Melissa L.; McClure, David R.; Laud, Rinita B.; Minshawi, Noha F.
2005-01-01
Program rankings and their visibility have taken on greater and greater significance. Rarely is the accuracy of these rankings, which are typically based on a small subset of university faculty impressions, questioned. This paper presents a more comprehensive survey method based on quantifiable measures of faculty publications and citations. The…
[Short-term memory characteristics of vibration intensity tactile perception on human wrist].
Hao, Fei; Chen, Li-Juan; Lu, Wei; Song, Ai-Guo
2014-12-25
In this study, a recall experiment and a recognition experiment were designed to assess the human wrist's short-term memory characteristics of tactile perception of vibration intensity, using a novel homemade vibrotactile display device based on the spatiotemporal combination of vibrations from multiple micro vibration motors as the test device. Based on the experimental data obtained, the short-term memory span, recognition accuracy, and reaction time for vibration intensity were analyzed. The experimental results support several conclusions: (1) the average short-term memory span of tactile perception of vibration intensity is 3 ± 1 items; (2) the greater the difference between two adjacent discrete intensities of vibrotactile stimulation, the better the average short-term memory span of the human wrist; (3) there is a clear difference in the average short-term memory span for vibration intensity between males and females; (4) information extraction from short-term memory of vibrotactile stimuli proceeds by traversal scanning with comparison; (5) the recognition accuracy and reaction time of the vibrotactile display compare unfavorably with those of vision and audition. These results are important for designing vibrotactile display coding schemes.
Kimiskidis, Vasilios; Spanakis, Marios; Niopas, Ioannis; Kazis, Dimitrios; Gabrieli, Chrysi; Kanaze, Feras Imad; Divanoglou, Daniil
2007-01-17
An isocratic reversed-phase HPLC-UV procedure for the determination of oxcarbazepine and its main metabolites, 10-hydroxy-10,11-dihydrocarbamazepine and 10,11-dihydroxy-trans-10,11-dihydrocarbamazepine, in human plasma and cerebrospinal fluid has been developed and validated. After addition of bromazepam as internal standard, the analytes were isolated from plasma and cerebrospinal fluid by liquid-liquid extraction. Separation was achieved on an X-TERRA C18 column using a mobile phase composed of 20 mM KH₂PO₄, acetonitrile, and n-octylamine (76:24:0.05, v/v/v) at 40 °C, with detection at 237 nm. The assay was validated in terms of linearity, accuracy, precision, recovery, and lower limit of quantification according to the FDA validation guidelines. Calibration curves were linear, with correlation coefficients (r) greater than 0.998. Accuracy ranged from 92.3% to 106.0%, and precision was between 2.3% and 8.2%. The method has been applied to plasma and cerebrospinal fluid samples obtained from patients treated with oxcarbazepine, both as monotherapy and as adjunctive therapy.
2010-01-01
Background: Methods for the calculation and application of quantitative electromyographic (EMG) statistics for the characterization of EMG data detected from forearm muscles of individuals with and without pain associated with repetitive strain injury are presented. Methods: A classification procedure using a multi-stage application of Bayesian inference is presented that characterizes a set of motor unit potentials acquired using needle electromyography. The utility of this technique in characterizing EMG data obtained from both normal individuals and those presenting with symptoms of "non-specific arm pain" is explored and validated. The efficacy of the Bayesian technique is compared with simple voting methods. Results: The aggregate Bayesian classifier presented is found to perform with accuracy equivalent to that of majority voting on the test data, with an overall accuracy greater than 0.85. Theoretical foundations of the technique are discussed and related to the observations found. Conclusions: Aggregation of motor unit potential conditional probability distributions, estimated using quantitative electromyographic analysis, may successfully be used to perform electrodiagnostic characterization of "non-specific arm pain." It is expected that these techniques will also be applicable to other types of electrodiagnostic data. PMID:20156353
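The contrast between aggregating per-potential probabilities and simple voting can be sketched as follows. This is a generic illustration assuming conditional independence of the motor unit potentials (combining posteriors via likelihood ratios), not the paper's exact multi-stage formulation; function names and inputs are hypothetical.

```python
def bayesian_aggregate(posteriors, prior=0.5):
    """Combine per-motor-unit-potential posteriors P(disease | MUP_i) into a
    single posterior, assuming the MUPs are conditionally independent."""
    odds = prior / (1 - prior)                      # prior odds
    prior_odds = prior / (1 - prior)
    for p in posteriors:
        odds *= (p / (1 - p)) / prior_odds          # per-MUP likelihood ratio
    return odds / (1 + odds)

def majority_vote(posteriors, threshold=0.5):
    """Classify by counting how many MUPs individually exceed the threshold."""
    votes = [p > threshold for p in posteriors]
    return sum(votes) > len(votes) / 2
```

Both methods agree on clear-cut cases; they can diverge when a few highly confident potentials outweigh a numerical majority of weakly confident ones.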
Mapping coastal vegetation, land use and environmental impact from ERTS-1. [Delaware coastal zone
NASA Technical Reports Server (NTRS)
Klemas, V. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Digital analysis of ERTS-1 imagery was used to map and inventory the significant ecological communities of Delaware's coastal zone. Eight vegetation and land use discrimination classes were selected: (1) Phragmites communis (giant reed grass); (2) Spartina alterniflora (salt marsh cord grass); (3) Spartina patens (salt marsh hay); (4) shallow water and exposed mud; (5) deep water (greater than 2 m); (6) forest; (7) agriculture; and (8) exposed sand and concrete. Canonical analysis showed the following classification accuracies: Spartina alterniflora, exposed sand, concrete, and forested land, 94% to 100%; shallow water and exposed mud, 88%; deep water, 93%; Phragmites communis, 83%; Spartina patens, 52%. Classification accuracy for agriculture was very poor (51%). Limitations of time and available class-memory space restricted the analysis of agriculture to a very gross identification of a class that actually consists of many varied signature classes. Abundant ground truth was available in the form of vegetation maps compiled from color and color infrared photographs. It is believed that, with further refinement of training set selection, sufficiently accurate results can be obtained for all categories.
Crystal growth and piezoelectric properties of Ca3Ta(Ga0.9Sc0.1)3Si2O14 bulk single crystal
NASA Astrophysics Data System (ADS)
Igarashi, Yu; Yokota, Yuui; Ohashi, Yuji; Inoue, Kenji; Yamaji, Akihiro; Shoji, Yasuhiro; Kamada, Kei; Kurosawa, Shunsuke; Yoshikawa, Akira
2018-03-01
A Ca3Ta(Ga0.9Sc0.1)3Si2O14 langasite-type single crystal with a diameter of 1 in. was grown by the Czochralski (Cz) method. The obtained crystal had good crystallinity, and its lattice constants exceeded those of Ca3TaGa3Si2O14 (CTGS) according to X-ray analysis. A crack-free specimen cut from the grown crystal was used for measurements of the dielectric constant ε11T/ε0, electromechanical coupling factor k12, and piezoelectric constant d11. The accuracies of these measurements were better than those for a crystal grown by the micro-pulling-down (μ-PD) method. Substitution of Ga with Sc resulted in modification of these constants in the directions opposite to those observed after partial substitution of Ga (of CTGS) with Al. This suggests that the increase of |d14| was most probably associated with enlargement of the average size of the Ga sites. The crystal reported here had greater dimensions than analogous crystals grown by the μ-PD method; as a result, the accuracy of determination of the acoustic constants of this material may be improved.
Anxiety, anticipation and contextual information: A test of attentional control theory.
Cocks, Adam J; Jackson, Robin C; Bishop, Daniel T; Williams, A Mark
2016-09-01
We tested the assumptions of Attentional Control Theory (ACT) by examining the impact of anxiety on anticipation using a dynamic, time-constrained task. Moreover, we examined the involvement of high- and low-level cognitive processes in anticipation and how their importance may interact with anxiety. Skilled and less-skilled tennis players anticipated the shots of opponents under low- and high-anxiety conditions. Participants viewed three types of video stimuli, each depicting different levels of contextual information. Performance effectiveness (response accuracy) and processing efficiency (response accuracy divided by corresponding mental effort) were measured. Skilled players recorded higher levels of response accuracy and processing efficiency compared to less-skilled counterparts. Processing efficiency significantly decreased under high- compared to low-anxiety conditions. No difference in response accuracy was observed. When reviewing directional errors, anxiety was most detrimental to performance in the condition conveying only contextual information, suggesting that anxiety may have a greater impact on high-level (top-down) cognitive processes, potentially due to a shift in attentional control. Our findings provide partial support for ACT; anxiety elicited greater decrements in processing efficiency than performance effectiveness, possibly due to predominance of the stimulus-driven attentional system.
Bailey, Timothy S; Wallace, Jane F; Pardo, Scott; Warchal-Windham, Mary Ellen; Harrison, Bern; Morin, Robert; Christiansen, Mark
2017-07-01
The new Contour® Plus ONE blood glucose monitoring system (BGMS) features an easy-to-use, wireless-enabled blood glucose meter that links to a smart mobile device via Bluetooth® connectivity and can sync with the Contour™ Diabetes app on a smartphone or tablet. The accuracy of the new BGMS was assessed in 2 studies according to ISO 15197:2013 criteria. In Study 1 (laboratory study), fingertip capillary blood samples from 100 subjects were tested in duplicate using 3 test strip lots. In Study 2 (clinical study), 134 subjects with type 1 or type 2 diabetes were enrolled at 2 clinical sites. BGMS results and YSI analyzer (YSI) reference results were compared for fingertip blood obtained by untrained subjects' self-testing and for study staff-obtained fingertip, subject palm, and venous results. In Study 1, 99.0% (594/600) of combined results for all 3 test strip lots fulfilled ISO 15197:2013 Section 6.3 accuracy criteria. In Study 2, 99.2% (133/134) of subject-obtained capillary fingertip results, 99.2% (133/134) of study staff-obtained fingertip results, 99.2% (125/126) of subject-obtained palm results, and 100% (132/132) of study staff-obtained venous results met ISO 15197:2013 Section 8 accuracy criteria. Moreover, 95.5% (128/134) of subject-obtained fingertip self-test results were within ±10 mg/dl (±0.6 mmol/L) or ±10% of the YSI reference result. Questionnaire results showed that most subjects found the BGMS easy to use. The BGMS exceeded ISO 15197:2013 accuracy criteria both in the laboratory and in a clinical setting when used by untrained subjects with diabetes.
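As commonly summarized, the ISO 15197:2013 per-sample accuracy bound is ±15 mg/dL when the reference concentration is below 100 mg/dL and ±15% at or above it, with at least 95% of results required to comply. A sketch of that acceptance check follows; the standard's full sampling, lot, and system-accuracy requirements are not modeled.

```python
def within_iso15197(meter, reference):
    """True if one meter result meets the commonly cited ISO 15197:2013
    per-sample bound (mg/dL): +/-15 mg/dL below a reference of 100 mg/dL,
    otherwise +/-15% of the reference."""
    if reference < 100:
        return abs(meter - reference) <= 15
    return abs(meter - reference) <= 0.15 * reference

def passes_iso15197(pairs):
    """True if at least 95% of (meter, reference) pairs meet the bound."""
    ok = sum(within_iso15197(m, r) for m, r in pairs)
    return ok / len(pairs) >= 0.95
```

The tighter ±10 mg/dL / ±10% tally reported in the abstract is the same computation with smaller bounds.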
Wang, Qihui; Gao, Pan; Cheng, Fei; Wang, Xiaoyi; Duan, Yixiang
2014-02-01
This study aimed to set up an ultra-performance liquid chromatography-electrospray ionization-mass spectrometry (UPLC-ESI-MS) method for the determination of salivary L-phenylalanine and L-leucine for early diagnosis of oral squamous cell carcinoma (OSCC). In addition, the diagnostic accuracy of both biomarkers was established by using receiver operating characteristic (ROC) analysis. Mean recoveries of L-phenylalanine and L-leucine ranged from 88.9% to 108.6%. Intra- and inter-day precision for both amino acids was less than 7%, with acceptable accuracy. Linear regression coefficients for both biomarkers were greater than 0.99. The diagnostic accuracy of both biomarkers was established by analyzing 60 samples from apparently healthy individuals and 30 samples from OSCC patients. Both potential biomarkers demonstrated significant differences in concentration in distinguishing OSCC from control (P < 0.05). As a single biomarker, L-leucine might have better predictive power in OSCC with T1-2 (early-stage OSCC, including stages I and II), and L-phenylalanine might be used for screening and diagnosis of OSCC with T3-4 (advanced-stage OSCC, including stages III and IV). The combination of L-phenylalanine and L-leucine improves the sensitivity (92.3%) and specificity (91.7%) for early diagnosis of OSCC. The possibility of using salivary metabolite biomarkers for OSCC diagnosis is successfully demonstrated in this study. The developed method is non-invasive, simple, and reliable, and provides low detection limits with excellent precision and accuracy. These non-invasive salivary biomarkers may lead to a simple clinical tool for the early diagnosis of OSCC. © 2013 Published by Elsevier B.V.
Improving the accuracy of acetabular cup implantation using a bulls-eye spirit level.
Macdonald, Duncan; Gupta, Sanjay; Ohly, Nicholas E; Patil, Sanjeev; Meek, R; Mohammed, Aslam
2011-01-01
Acetabular introducers have a built-in inclination of 45 degrees to the handle shaft. With patients in the lateral position, surgeons aim to align the introducer shaft vertical to the floor to implant the acetabulum at 45 degrees. We aimed to determine if a bulls-eye spirit level attached to an introducer improved the accuracy of implantation. A small circular bulls-eye spirit level was attached to the handle of an acetabular introducer. A saw bone hemipelvis was fixed to a horizontal, flat surface. A cement substitute was placed in the acetabulum and subjects were asked to implant a polyethylene cup, aiming to obtain an angle of inclination of 45 degrees. Two attempts were made with the spirit level masked and two with it unmasked. The distance of the air bubble from the spirit level's center was recorded by a single assessor. The angle of inclination of the acetabular component was then calculated. Subjects included both orthopedic consultants and trainees. Twenty-five subjects completed the study. Accuracy of acetabular implantation when using the unmasked spirit level improved significantly in all grades of surgeon. With the spirit level masked, 12 out of 50 attempts were accurate at 45 degrees inclination; 11 out of 50 attempts were "open," with greater than 45 degrees of inclination, and 27 were "closed," with less than 45 degrees. With the spirit level visible, all subjects achieved an inclination angle of exactly 45 degrees. A simple device attached to the handle of an acetabular introducer can significantly improve the accuracy of implantation of a cemented cup into a saw bone pelvis in the lateral position.
Mistry, Binoy; Stewart De Ramirez, Sarah; Kelen, Gabor; Schmitz, Paulo S K; Balhara, Kamna S; Levin, Scott; Martinez, Diego; Psoter, Kevin; Anton, Xavier; Hinson, Jeremiah S
2018-05-01
We assess accuracy and variability of triage score assignment by emergency department (ED) nurses using the Emergency Severity Index (ESI) in 3 countries. In accordance with previous reports and clinical observation, we hypothesize low accuracy and high variability across all sites. This cross-sectional multicenter study enrolled 87 ESI-trained nurses from EDs in Brazil, the United Arab Emirates, and the United States. Standardized triage scenarios published by the Agency for Healthcare Research and Quality (AHRQ) were used. Accuracy was defined by concordance with the AHRQ key and calculated as percentages. Accuracy comparisons were made with one-way ANOVA and paired t test. Interrater reliability was measured with Krippendorff's α. Subanalyses based on nursing experience and triage scenario type were also performed. Mean accuracy pooled across all sites and scenarios was 59.2% (95% confidence interval [CI] 56.4% to 62.0%) and interrater reliability was modest (α=.730; 95% CI .692 to .767). There was no difference in overall accuracy between sites or according to nurse experience. Medium-acuity scenarios were scored with greater accuracy (76.4%; 95% CI 72.6% to 80.3%) than high- or low-acuity cases (44.1%, 95% CI 39.3% to 49.0% and 54%, 95% CI 49.9% to 58.2%), and adult scenarios were scored with greater accuracy than pediatric ones (66.2%, 95% CI 62.9% to 69.7% versus 46.9%, 95% CI 43.4% to 50.3%). In this multinational study, concordance of nurse-assigned ESI score with reference standard was universally poor and variability was high. Although the ESI is the most popular ED triage tool in the United States and is increasingly used worldwide, our findings point to a need for more reliable ED triage tools. Copyright © 2017 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
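Accuracy figures with 95% confidence intervals, as reported above, can be reproduced for a pooled set of scored scenarios with a normal-approximation interval. This is a minimal sketch; the study's exact CI method is not specified in the abstract, and the counts below are illustrative.

```python
import math

def accuracy_ci(correct, total, z=1.96):
    """Percent concordance with the reference key and a normal-approximation
    95% CI, all returned as percentages."""
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)    # standard error of a proportion
    return 100 * p, 100 * (p - z * se), 100 * (p + z * se)
```

Subgroup comparisons (e.g., medium-acuity versus high-acuity scenarios) apply the same computation to each subgroup's correct/total counts.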
A novel modality for intrapartum fetal heart rate monitoring.
Ashwal, Eran; Shinar, Shiri; Aviram, Amir; Orbach, Sharon; Yogev, Yariv; Hiersch, Liran
2017-11-02
Intrapartum fetal heart rate (FHR) monitoring is recommended during labor to assess fetal wellbeing. Though commonly used, the external Doppler and fetal scalp electrode monitors have significant shortcomings. Lately, non-invasive technologies have been developed as possible alternatives. The objective of this study was to compare the accuracy of FHR traces from a novel Electronic Uterine Monitoring (EUM) system to those of the external Doppler and fetal scalp electrode monitors. A comparative study was conducted in a single tertiary medical center. Intrapartum FHR traces were recorded simultaneously using three different methods: internal fetal scalp electrode, external Doppler, and EUM. The latter, a multichannel electromyogram (EMG) device, acquires a uterine signal together with the maternal and fetal electrocardiograms. FHR traces obtained from all devices during the first and second stages of labor were analyzed. Positive percent agreement (PPA) and accuracy (measured as the root mean square error between observed and predicted values) of EUM and external Doppler were both compared with internal scalp electrode monitoring. A Bland-Altman agreement plot was used to compare the differences in FHR traces between all modalities. For momentary recordings of fetal heart rate <110 bpm or >160 bpm, the level of agreement, sensitivity, and specificity were also evaluated. Overall, 712,800 momentary FHR recordings were obtained from 33 parturients. Although both EUM and external Doppler correlated highly with internal scalp electrode monitoring (r² = 0.98, p < .001 for both methods), the accuracy of EUM was significantly higher than that of external Doppler (99.0% versus 96.6%, p < .001). In addition, for fetal heart rate <110 bpm or >160 bpm, the PPA, sensitivity, and specificity of EUM, as compared with the internal fetal scalp electrode, were significantly greater than those of external Doppler (p < .001).
Intrapartum FHR using EUM is both valid and accurate, yielding higher correlations with internal scalp electrode monitoring than external Doppler. As such, it may provide a good framework for non-invasive evaluation of intrapartum FHR.
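The two comparison metrics above, root mean square error against the internal scalp electrode and positive percent agreement on out-of-range recordings, can be sketched as follows; array contents are illustrative.

```python
import numpy as np

def rmse(predicted, observed):
    """Root mean square error between two paired FHR traces (bpm)."""
    p, o = np.asarray(predicted, float), np.asarray(observed, float)
    return float(np.sqrt(np.mean((p - o) ** 2)))

def positive_percent_agreement(test_flags, reference_flags):
    """Of the recordings the reference method flags (e.g., FHR outside
    110-160 bpm), the fraction the test device also flags."""
    t = np.asarray(test_flags, dtype=bool)
    r = np.asarray(reference_flags, dtype=bool)
    return float(t[r].mean())
```

Each momentary recording contributes one pair to the RMSE and, if the reference flags it, one trial to the PPA.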
Optimizing care of ventilated infants by improving weighing accuracy on incubator scales.
El-Kafrawy, Ula; Taylor, R J
2016-01-01
To determine the accuracy of weighing ventilated infants on incubator scales, and whether the accuracy can be improved by the addition of a ventilator tube compensator (VTC) device to counterbalance the force exerted by the ventilator tubing. Body weights on integral incubator scales were compared, in ventilated infants (with and without a VTC), with body weights on standalone electronic scales (true weight). Individual and series of trend weights were obtained for the infants. The method of Bland and Altman was used to assess the introduced bias. The study included 60 ventilated infants; 66% of them weighed <1000 g. A total of 102 paired-weight datasets were obtained for 30 infants undergoing conventional ventilation and 30 undergoing high-frequency oscillatory ventilation (HFOV) supported by a SensorMedics oscillator (with and without a VTC). The mean differences (95% CI for the bias) between the integral and true weighing methods were 60.8 g (49.1 g to 72.5 g) without and -2.8 g (-8.9 g to 3.3 g) with a VTC in HFOV infants, and 41.0 g (32.1 g to 50.0 g) without and -5.1 g (-9.3 g to -0.8 g) with a VTC in conventionally ventilated infants. Differences of greater than 2% were considered clinically relevant and occurred in 93.8% without and 20.8% with a VTC in HFOV infants, and in 81.5% without and 27.8% with a VTC in conventionally ventilated infants. The use of the VTC device represents a substantial improvement on current practice for weighing ventilated infants, particularly for extremely preterm infants, in whom an over- or underestimated weight can have important clinical implications for treatment. A large-scale clinical trial to validate these findings is needed.
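The Bland and Altman method referenced above summarizes method agreement through the mean difference (bias) and dispersion of the paired differences. A minimal sketch follows; the abstract reports a CI for the bias, so both the bias CI and the conventional limits of agreement are shown, with illustrative inputs.

```python
import numpy as np

def bias_ci(method_a, method_b, z=1.96):
    """Mean difference between paired measurements and its 95% CI."""
    d = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = float(d.mean())
    se = float(d.std(ddof=1) / np.sqrt(d.size))   # standard error of the mean
    return bias, bias - z * se, bias + z * se

def limits_of_agreement(method_a, method_b, z=1.96):
    """Bland-Altman 95% limits of agreement: bias +/- 1.96 SD of differences."""
    d = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias, sd = float(d.mean()), float(d.std(ddof=1))
    return bias - z * sd, bias + z * sd
```

Here `method_a` would hold incubator-scale weights and `method_b` the standalone-scale (true) weights for the same infants.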
Multiclass Reduced-Set Support Vector Machines
NASA Technical Reports Server (NTRS)
Tang, Benyang; Mazzoni, Dominic
2006-01-01
There are well-established methods for reducing the number of support vectors in a trained binary support vector machine, often with minimal impact on accuracy. We show how reduced-set methods can be applied to multiclass SVMs made up of several binary SVMs, with significantly better results than reducing each binary SVM independently. Our approach is based on Burges' approach that constructs each reduced-set vector as the pre-image of a vector in kernel space, but we extend this by recomputing the SVM weights and bias optimally using the original SVM objective function. This leads to greater accuracy for a binary reduced-set SVM, and also allows vectors to be 'shared' between multiple binary SVMs for greater multiclass accuracy with fewer reduced-set vectors. We also propose computing pre-images using differential evolution, which we have found to be more robust than gradient descent alone. We show experimental results on a variety of problems and find that this new approach is consistently better than previous multiclass reduced-set methods, sometimes with a dramatic difference.
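For a Gaussian kernel, a pre-image of a kernel-space expansion can be computed by the classic fixed-point iteration of Schölkopf and colleagues. The sketch below shows only that step, not the paper's optimal re-computation of SVM weights, vector sharing across binary SVMs, or the differential-evolution variant; inputs are illustrative.

```python
import numpy as np

def rbf_preimage(alphas, X, sigma, n_iter=100):
    """Fixed-point iteration for a pre-image z of sum_i alpha_i * Phi(x_i)
    under the Gaussian kernel k(x, z) = exp(-||x - z||^2 / (2 sigma^2))."""
    # Initialize at a weighted centroid of the expansion vectors.
    z = np.average(X, axis=0, weights=np.abs(alphas))
    for _ in range(n_iter):
        k = alphas * np.exp(-np.sum((X - z) ** 2, axis=1) / (2 * sigma ** 2))
        denom = k.sum()
        if abs(denom) < 1e-12:      # degenerate expansion; stop iterating
            break
        z = (k[:, None] * X).sum(axis=0) / denom
    return z
```

Each reduced-set vector is obtained this way from the residual expansion, after which the weights over all reduced-set vectors can be re-fit.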
Skinner, Kenneth D.
2009-01-01
Elevation data in riverine environments can be used in various applications for which different levels of accuracy are required. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging), or EAARL, system was used to obtain topographic and bathymetric data along the lower Boise River, southwestern Idaho, for use in hydraulic and habitat modeling. The EAARL data were post-processed into bare-earth and bathymetric raster and point datasets. Concurrently with the EAARL data collection, real-time kinematic global positioning system and total station ground-survey data were collected in three areas within the lower Boise River basin to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The EAARL-derived elevation data, assessed in open, flat terrain to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.082 to 0.138 m. Accuracies for bank, floodplain, and in-stream bathymetric data had root mean square errors ranging from 0.090 to 0.583 m. The greater root mean square errors for the latter data are the result of high levels of turbidity in the downstream ground-survey area, dense tree canopy, and horizontal location discrepancies between the EAARL and ground-survey data in steeply sloping areas such as riverbanks. The EAARL point to ground-survey comparisons produced results similar to those for the EAARL raster to ground-survey comparisons, indicating that the interpolation of the EAARL points to rasters did not introduce significant additional error. The mean percent error for the wetted cross-sectional areas of the two upstream ground-survey areas was 1 percent. The mean percent error increases to -18 percent if the downstream ground-survey area is included, reflecting the influence of turbidity in that area.
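The mean percent error quoted for the wetted cross-sectional areas is a simple signed statistic; a sketch follows, with illustrative values (the function name is hypothetical).

```python
import numpy as np

def mean_percent_error(estimated, surveyed):
    """Mean signed percent error of estimated vs. surveyed values; a negative
    result means the estimates are low on average."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(surveyed, dtype=float)
    return float(np.mean(100.0 * (est - ref) / ref))
```

Because the errors are signed, offsetting over- and underestimates can cancel, which is why a near-zero mean for the upstream areas can coexist with larger per-section errors.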
Cunnington, Joanna; Marshall, Nicola; Hide, Geoff; Bracewell, Claire; Isaacs, John; Platt, Philip; Kane, David
2010-07-01
Most corticosteroid injections into the joint are guided by the clinical examination (CE), but up to 70% are inaccurately placed, which may contribute to an inadequate response. The aim of this study was to investigate whether ultrasound (US) guidance improves the accuracy and clinical outcome of joint injections as compared with CE guidance in patients with inflammatory arthritis. A total of 184 patients with inflammatory arthritis and an inflamed joint (shoulder, elbow, wrist, knee, or ankle) were randomized to receive either US-guided or CE-guided corticosteroid injections. Visual analog scales (VAS) for assessment of function, pain, and stiffness of the target joint, a modified Health Assessment Questionnaire, and the EuroQol 5-domain questionnaire were obtained at baseline and at 2 weeks and 6 weeks postinjection. The erythrocyte sedimentation rate and C-reactive protein level were measured at baseline and 2 weeks. Contrast injected with the steroid was used to assess the accuracy of the joint injection. One-third of CE-guided injections were inaccurate. US-guided injections performed by a trainee rheumatologist were more accurate than the CE-guided injections performed by more senior rheumatologists (83% versus 66%; P = 0.010). There was no significant difference in clinical outcome between the group receiving US-guided injections and the group receiving CE-guided injections. Accurate injections led to greater improvement in joint function, as determined by VAS scores, at 6 weeks, as compared with inaccurate injections (30.6 mm versus 21.2 mm; P = 0.030). Clinicians who used US guidance reliably assessed the accuracy of joint injection (P < 0.001), whereas those who used CE guidance did not (P = 0.29). US guidance significantly improves the accuracy of joint injection, allowing a trainee to rapidly achieve higher accuracy than more experienced rheumatologists. US guidance did not improve the short-term outcome of joint injection.
Relationship between resolution and accuracy of four intraoral scanners in complete-arch impressions
Pascual-Moscardó, Agustín; Camps, Isabel
2018-01-01
Background: The scanner does not measure the dental surface continuously. Instead, it generates a point cloud, and these points are then joined to form the scanned object. This approximation depends on the number of points generated (resolution), which can lead to low accuracy (trueness and precision) when fewer points are obtained. The purpose of this study was to determine the resolution of four intraoral digital imaging systems and to examine the relationship between accuracy and resolution of the intraoral scanner in impressions of a complete dental arch. Material and Methods: A master cast of the complete maxillary arch was prepared with different dental preparations. Using four digital impression systems, the cast was scanned inside a black methacrylate box, obtaining a total of 40 digital impressions from each scanner. The resolution was obtained by dividing the number of points of each digital impression by the total surface area of the cast. Accuracy was evaluated using three-dimensional measurement software, applying the “best alignment” method of the casts against a highly faithful reference model obtained from an industrial scanner. Pearson correlation was used for statistical analysis of the data. Results: Of the intraoral scanners, Omnicam is the system with the best resolution, with 79.82 points per mm², followed by True Definition with 54.68 points per mm², Trios with 41.21 points per mm², and iTero with 34.20 points per mm². However, the study found no relationship between resolution and accuracy of the studied digital impression systems (P > 0.05), except between Omnicam's resolution and its precision. Conclusions: The resolution of the digital impression systems has no relationship with the accuracy they achieve in the impression of a complete dental arch. The study found that the Omnicam scanner is the system that obtains the best resolution and that, as its resolution increases, its precision increases.
Key words: Trueness, precision, accuracy, resolution, intraoral scanner, digital impression. PMID:29750097
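The two computations named in the methods, resolution as point density and the Pearson correlation between resolution and accuracy, can be sketched as follows; sample values are illustrative.

```python
import numpy as np

def resolution(n_points, surface_area_mm2):
    """Scanner resolution as point-cloud density (points per mm^2)."""
    return n_points / surface_area_mm2

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))
```

In the study's design, `x` would hold per-impression resolutions and `y` the corresponding trueness or precision values, computed per scanner.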
Tabor, Rowland W.; Haugerud, Ralph A.; Haeussler, Peter J.; Clark, Kenneth P.
2011-01-01
This map is an interpretation of a 6-ft-resolution (2-m-resolution) lidar (light detection and ranging) digital elevation model combined with the geology depicted on the Geologic Map of the Wildcat Lake 7.5' quadrangle, Kitsap and Mason Counties, Washington (Haeussler and Clark, 2000). Haeussler and Clark described, interpreted, and located the geology on the 1:24,000-scale topographic map of the Wildcat Lake 7.5' quadrangle. This map, derived from 1951 aerial photographs, has 20-ft contours, a nominal horizontal resolution of approximately 40 ft (12 m), and a nominal mean vertical accuracy of approximately 10 ft (3 m). Similar to many geologic maps, much of the geology in the Haeussler and Clark (2000) map, especially the distribution of surficial deposits, was interpreted from landforms portrayed on the topographic map. In 2001, the Puget Sound Lidar Consortium obtained a lidar-derived digital elevation model (DEM) for the Kitsap Peninsula, including all of the Wildcat Lake 7.5' quadrangle. This new DEM has a horizontal resolution of 6 ft (2 m) and a mean vertical accuracy of about 1 ft (0.3 m). The greater resolution and accuracy of the lidar DEM, compared to topography constructed from air photo stereo models, have much improved the interpretation of geology in this heavily vegetated landscape, especially the distribution and relative age of some surficial deposits. Many contacts of surficial deposits are adopted unmodified or slightly modified from Haugerud (2009).
Accuracy of digital American Board of Orthodontics Discrepancy Index measurements.
Dragstrem, Kristina; Galang-Boquiren, Maria Therese S; Obrez, Ales; Costa Viana, Maria Grace; Grubb, John E; Kusnoto, Budi
2015-07-01
A digital analysis that is shown to be accurate will ease the demonstration of initial case complexity. To date, no literature exists on the accuracy of digital American Board of Orthodontics Discrepancy Index (DI) calculations when applied to pretreatment digital models. Plaster models were obtained from 45 previous patients with varying degrees of malocclusion. Total DI scores and the target disorders were computed manually with a periodontal probe on the original plaster casts (gold standard) and digitally using Ortho Insight 3D (Motion View Software, Hixson, Tenn) and OrthoCAD (Cadent, Carlstadt, NJ). Intrarater and interrater reliabilities were assessed for 15 subjects using the Spearman rho correlation test. Accuracies of the DI scores and target disorders were assessed for all 45 subjects using Wilcoxon signed rank tests. Intrarater and interrater reliabilities were high for total DI scores and most target disorders (r > 0.8). No significant difference was found in total DI scores between OrthoCAD measurements and manual calculations. The total DI scores calculated by Ortho Insight 3D were significantly greater than those from manual calculation, by 2.71 points. The findings indicate that a DI calculated by Ortho Insight 3D may lead the clinician to overestimate case complexity. OrthoCAD's DI module was demonstrated to be a clinically acceptable alternative to manual calculation of the total scores. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Lidar-revised geologic map of the Des Moines 7.5' quadrangle, King County, Washington
Tabor, Rowland W.; Booth, Derek B.
2017-11-06
This map is an interpretation of a modern lidar digital elevation model combined with the geology depicted on the Geologic Map of the Des Moines 7.5' Quadrangle, King County, Washington (Booth and Waldron, 2004). Booth and Waldron described, interpreted, and located the geology on the 1:24,000-scale topographic map of the Des Moines 7.5' quadrangle. The base map that they used was originally compiled in 1943 and revised using 1990 aerial photographs; it has 25-ft contours, nominal horizontal resolution of about 40 ft (12 m), and nominal mean vertical accuracy of about 10 ft (3 m). Similar to many geologic maps, much of the geology in the Booth and Waldron (2004) map was interpreted from landforms portrayed on the topographic map. In 2001, the Puget Sound Lidar Consortium obtained a lidar-derived digital elevation model (DEM) for much of the Puget Sound area, including the entire Des Moines 7.5' quadrangle. This new DEM has a horizontal resolution of about 6 ft (2 m) and a mean vertical accuracy of about 1 ft (0.3 m). The greater resolution and accuracy of the lidar DEM compared to topography constructed from air-photo stereo models have much improved the interpretation of geology, even in this heavily developed area, especially the distribution and relative age of some surficial deposits. For a brief description of the light detection and ranging (lidar) remote sensing method and this data acquisition program, see Haugerud and others (2003).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.
2013-10-15
We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double-precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain-replicated and domain-decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy, by rounding double-precision numbers to fewer significant digits. This integer approach, and other extended- and reduced-precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary-precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double-precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time step.
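The underlying floating-point behavior, and the integer-tally remedy described above, can be sketched in a few lines of Python (a hedged illustration; the fixed-point scale below is an arbitrary choice for this sketch, not a value from the paper):

```python
import random

# Double-precision addition is not associative, so summing the same values
# in a different (parallel) order can change the low-order bits of the result.
random.seed(0)
values = [random.uniform(-1e12, 1e12) for _ in range(10000)]
reordered = sorted(values)  # stands in for a different parallel reduction order

float_a = sum(values)
float_b = sum(reordered)    # generally differs from float_a in the last digits

# The integer-tally remedy: round each value to fixed point before
# accumulating. Integer addition is exactly associative, so every summation
# order yields the identical tally, at the cost of the rounding noted above.
SCALE = 2 ** 20  # arbitrary fixed-point resolution for this sketch
int_a = sum(round(v * SCALE) for v in values)
int_b = sum(round(v * SCALE) for v in reordered)
print(int_a == int_b)  # → True, regardless of order
```

The trade-off the abstract describes is visible here: the integer tally is bit-for-bit reproducible, but each contribution is rounded to the chosen fixed-point resolution.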
Effect of black point on accuracy of LCD displays colorimetric characterization
NASA Astrophysics Data System (ADS)
Li, Tong; Xie, Kai; He, Nannan; Ye, Yushan
2018-03-01
The black point is the point at which the digital drive value of each RGB channel is 0. Because of light leakage in liquid-crystal displays (LCDs), the luminance at the black point is not 0. This introduces errors into the colorimetric characterization of LCDs, with a greater effect at low-luminance drive values. This paper describes the accuracy of the polynomial-model characterization method and the effect of the black point on that accuracy, reported as color difference. When the black point is accounted for in the characterization equations, the maximum color difference is 3.246, which is 2.36 lower than the maximum color difference obtained without accounting for the black point. The experimental results show that the accuracy of LCD colorimetric characterization can be improved if the effect of the black point is properly eliminated.
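As a rough illustration of the correction the paper argues for, the black-point leakage can be subtracted from measured tristimulus values before fitting a characterization model. All numbers below are hypothetical placeholders, not measurements from the paper:

```python
import numpy as np

# Hypothetical black-point measurement: the XYZ tristimulus values the
# colorimeter reads when the panel is driven with RGB = (0, 0, 0).
# Every other measurement includes this leakage term additively.
XYZ_black = np.array([0.31, 0.33, 0.35])

def correct_black_point(XYZ_measured):
    """Remove the black-point contribution from a measured XYZ triple,
    clipping at zero so noise cannot produce negative tristimulus values."""
    return np.clip(XYZ_measured - XYZ_black, 0.0, None)

# Hypothetical full-drive red-channel measurement.
XYZ_red_full = np.array([41.5, 21.6, 2.1])
print(correct_black_point(XYZ_red_full))  # leakage-free channel response
```

A polynomial characterization model would then be fitted to the corrected values rather than the raw measurements.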
Evaluating the accuracy of SHAPE-directed RNA secondary structure predictions
Sükösd, Zsuzsanna; Swenson, M. Shel; Kjems, Jørgen; Heitsch, Christine E.
2013-01-01
Recent advances in RNA structure determination include using data from high-throughput probing experiments to improve thermodynamic prediction accuracy. We evaluate the extent and nature of improvements in data-directed predictions for a diverse set of 16S/18S ribosomal sequences using a stochastic model of experimental SHAPE data. The average accuracy for 1000 data-directed predictions always improves over the original minimum free energy (MFE) structure. However, the amount of improvement varies with the sequence, exhibiting a correlation with MFE accuracy. Further analysis of this correlation shows that accurate MFE base pairs are typically preserved in a data-directed prediction, whereas inaccurate ones are not. Thus, the positive predictive value of common base pairs is consistently higher than the directed prediction accuracy. Finally, we confirm sequence dependencies in the directability of thermodynamic predictions and investigate the potential for greater accuracy improvements in the worst performing test sequence. PMID:23325843
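The distinction drawn above between prediction accuracy (sensitivity over reference base pairs) and positive predictive value can be made concrete on sets of base pairs; the pairs below are invented for illustration:

```python
# Hedged sketch of the two base-pair metrics compared in the abstract.
def pair_metrics(predicted, reference):
    """predicted, reference: sets of (i, j) base-pair index tuples.
    Returns (sensitivity, PPV): the fraction of reference pairs recovered,
    and the fraction of predicted pairs that are in the reference."""
    tp = len(predicted & reference)
    sensitivity = tp / len(reference)
    ppv = tp / len(predicted)
    return sensitivity, ppv

# Invented toy structures: 4 reference pairs, 3 predicted pairs, 2 correct.
ref = {(1, 20), (2, 19), (3, 18), (4, 17)}
pred = {(1, 20), (2, 19), (5, 16)}
print(pair_metrics(pred, ref))  # → (0.5, 0.6666666666666666)
```

The abstract's observation that PPV exceeds directed-prediction accuracy corresponds to PPV > sensitivity in this notation, as in the toy example.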
Camera system considerations for geomorphic applications of SfM photogrammetry
Mosbrucker, Adam; Major, Jon J.; Spicer, Kurt R.; Pitlick, John
2017-01-01
The availability of high-resolution, multi-temporal, remotely sensed topographic data is revolutionizing geomorphic analysis. Three-dimensional topographic point measurements acquired from structure-from-motion (SfM) photogrammetry have been shown to be highly accurate and cost-effective compared to laser-based alternatives in some environments. Use of consumer-grade digital cameras to generate terrain models and derivatives is becoming prevalent within the geomorphic community, despite the details of these instruments being largely overlooked in the current SfM literature. A practical discussion of camera system selection, configuration, and image acquisition is presented. The hypothesis that optimizing source imagery can increase digital terrain model (DTM) accuracy is tested by evaluating the accuracies of four SfM datasets acquired over multiple years on a gravel-bed river floodplain, using independent ground check points, with the purpose of comparing morphological sediment budgets computed from SfM- and lidar-derived DTMs. Case study results are compared to existing SfM validation studies in an attempt to deconstruct the principal components of an SfM error budget. Greater information capacity of source imagery was found to increase pixel-matching quality, which produced 8 times greater point density and 6 times greater accuracy. When propagated through volumetric change analysis, individual DTM accuracy (6–37 cm) was sufficient to detect moderate geomorphic change (order 100,000 m3) on an unvegetated fluvial surface; change detection determined from repeat lidar and SfM surveys differed by about 10%. Simple camera selection criteria increased accuracy by 64%; configuration settings or image post-processing techniques increased point density by 5–25% and decreased processing time by 10–30%.
Regression analysis of 67 reviewed datasets revealed that the best explanatory variable for predicting the accuracy of SfM data is photographic scale. Despite the prevalent use of object distance ratios to describe scale, nominal ground sample distance is shown to be a superior metric, explaining 68% of the variability in mean absolute vertical error.
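Nominal ground sample distance, the scale metric favored above, follows directly from camera geometry; the camera parameters below are hypothetical examples, not values from the study:

```python
def nominal_gsd(pixel_pitch_mm, focal_length_mm, object_distance_m):
    """Nominal ground sample distance in m/pixel: the ground footprint of
    one sensor pixel, from similar triangles through the lens."""
    return pixel_pitch_mm / focal_length_mm * object_distance_m

# Hypothetical survey camera: 4.4 µm pixel pitch, 24 mm lens,
# imaging a surface 100 m away.
gsd = nominal_gsd(0.0044, 24.0, 100.0)
print(round(gsd * 100, 2), "cm/pixel")  # → 1.83 cm/pixel
```

Unlike an object-distance ratio, this metric folds in the sensor's pixel pitch, which is one plausible reason it explains more of the vertical-error variability.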
Evaluation of urine culture screening by light-scatter photometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hale, D.C.; Thrupp, L.D.; Matsen, J.M.
1981-08-01
Urine screening for bacteriuria by light-scatter photometry (Autobac) was evaluated for accuracy and compared with a colony count by the calibrated-loop method. Incubation time, inoculum size, precision, and interference of particulate matter were evaluated in an effort to standardize the screening procedure. Results showed that urines could be accurately screened for Enterobacteriaceae by inoculating a single Autobac cuvette chamber with 0.1 or 0.2 ml of urine and determining the voltage change after four hours. A change of greater than or equal to 0.2 units indicates significant bacteriuria. Decreased accuracy was noted for urines having greater than 10^5 cfu/ml of Pseudomonas species or gram-positive cocci, possibly because these organisms grow more slowly.
Modeling Mediterranean forest structure using airborne laser scanning data
NASA Astrophysics Data System (ADS)
Bottalico, Francesca; Chirici, Gherardo; Giannini, Raffaello; Mele, Salvatore; Mura, Matteo; Puxeddu, Michele; McRoberts, Ronald E.; Valbuena, Ruben; Travaglini, Davide
2017-05-01
The conservation of biological diversity is recognized as a fundamental component of sustainable development, and forests contribute greatly to its preservation. Structural complexity increases the potential biological diversity of a forest by creating multiple niches that can host a wide variety of species. To facilitate greater understanding of the contributions of forest structure to forest biological diversity, we modeled relationships between 14 forest structure variables and airborne laser scanning (ALS) data for two Italian study areas representing two common Mediterranean forests: conifer plantations and coppice oaks subjected to irregular intervals of unplanned and non-standard silvicultural interventions. The objectives were twofold: (i) to compare model prediction accuracies when using two types of ALS metrics, echo-based metrics and canopy height model (CHM)-based metrics, and (ii) to construct inferences in the form of confidence intervals for large-area structural complexity parameters. Our results showed that the effects of the two study areas on accuracies were greater than the effects of the two types of ALS metrics. In particular, accuracies were less for the study area that was more complex in terms of species composition and forest structure. However, accuracies achieved using the echo-based metrics were only slightly greater than those using the CHM-based metrics, demonstrating that both options yield reliable and comparable results. Accuracies were greatest for dominant height (Hd) (R2 = 0.91; RMSE% = 8.2%) and mean height weighted by basal area (R2 = 0.83; RMSE% = 10.5%) when using the echo-based metrics, specifically the 99th percentile of the echo height distribution and the interquantile distance. For the forested area, the generalized regression (GREG) estimate of mean Hd was similar to the simple random sampling (SRS) estimate: 15.5 m for GREG and 16.2 m for SRS. Further, the GREG estimator, with a standard error of 0.10 m, was considerably more precise than the SRS estimator, with a standard error of 0.69 m.
NASA Technical Reports Server (NTRS)
Padula, Santo, II
2009-01-01
The ability to sufficiently measure orbiter window defects to allow for window recertification has been an ongoing challenge for the orbiter vehicle program. The recent Columbia accident has forced even tighter constraints on the criteria that must be met in order to recertify windows for flight. As a result, new techniques are being investigated to improve the reliability, accuracy, and resolution of the defect detection process. The methodology devised in this work, which is based on the use of a vertical scanning interferometry (VSI) tool, shows great promise for meeting the ever-increasing requirements for defect detection. This methodology has the potential for 10- to 100-fold greater resolution of the true defect depth than can be obtained from the currently employed micrometer-based methodology. An added benefit is that it also produces a digital elevation map of the defect, thereby providing information about the defect morphology that can be used to ascertain the type of debris that induced the damage. However, in order to successfully implement such a tool, a greater understanding of its resolution capability and measurement repeatability must be obtained. This work focused on assessing the variability of the VSI-based measurement methodology and revealed that the VSI measurement tool was more repeatable and more precise than the current micrometer-based approach, even in situations where operator variation could affect the measurement. The analysis also showed that the VSI technique was relatively insensitive to the hardware and software settings employed, making the technique extremely robust and desirable.
King, Alice; Shipley, Martin; Markus, Hugh
2011-10-01
Improved methods are required to identify patients with asymptomatic carotid stenosis who are at high risk for stroke. The Asymptomatic Carotid Emboli Study recently showed that embolic signals (ES), detected by transcranial Doppler on two 1-hour recordings, independently predict 2-year stroke risk. ES detection is time-consuming, and whether similar predictive information could be obtained from simpler recording protocols is unknown. In a predefined secondary analysis of the Asymptomatic Carotid Emboli Study, we examined the temporal variation of ES. We determined the predictive yield associated with different recording protocols and with the use of a higher threshold to indicate increased risk (≥2 ES). To compare the different recording protocols, sensitivity and specificity analyses were performed using receiver-operator characteristic curves. Of 477 patients, 467 had baseline recordings adequate for analysis; 77 of these had ES on one or both of the 2 recordings. ES status on the 2 recordings was significantly associated (P<0.0001), but agreement between ES positivity on the 2 recordings was poor (κ=0.266). For the primary outcome of ipsilateral stroke or transient ischemic attack, the use of 2 baseline 1-hour recordings had greater predictive accuracy than either the first baseline recording alone (P=0.0005), a single 30-minute recording (P<0.0001), or 2 recordings lasting 30 minutes (P<0.0001). For the outcome of ipsilateral stroke alone, two 1-hour recordings had greater predictive accuracy than all other recording protocols (all P<0.0001). Our analysis demonstrates the relative predictive yield of different recording protocols that can be used when applying the technique in clinical practice. Two baseline 1-hour recordings, as used in the Asymptomatic Carotid Emboli Study, gave the best risk prediction.
Maukonen, Mirkka; Männistö, Satu; Tolonen, Hanna
2018-03-01
Up-to-date information on the accuracy of different anthropometric data collection methods is vital for the reliability of anthropometric data. A previous review on this matter was conducted a decade ago. Our aim was to conduct a literature review on the accuracy of self-reported height, weight, and body mass index (BMI) against measured values for assessing obesity in adults. To obtain an overview of the present situation, we included studies published after the previous review. Differences according to sex, BMI group, and continent were also assessed. Studies published between January 2006 and April 2017 were identified from a literature search on PubMed. Our search retrieved 62 publications on adult populations, which showed a tendency for self-reported height to be overestimated and weight to be underestimated when compared with measured values. The findings were similar for both sexes. BMI derived from self-reported height and weight was underestimated; there was a clear tendency for underestimation of overweight (by 1.8 to 9.8 percentage points) and obesity (by 0.7 to 13.4 percentage points) prevalence by self-report. The bias was greater in overweight and obese participants than in those of normal weight. Studies conducted in North America showed a greater bias, whereas the bias in Asian studies seemed to be lower than in those from other continents. With globally rising obesity rates, accurate estimation of obesity is essential for effective public health policies to support obesity prevention. As self-report bias tends to be higher among overweight and obese individuals, measured anthropometrics provide a more reliable tool for assessing the prevalence of obesity.
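The BMI comparison underlying these findings is simple to reproduce; the sketch below uses an invented participant exhibiting the typical reporting bias (over-reported height, under-reported weight):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

# Hypothetical participant: measured at 1.70 m and 85 kg, but self-reporting
# 1.72 m and 82 kg, i.e. taller and lighter than measured.
measured = bmi(85.0, 1.70)       # from measured values
self_reported = bmi(82.0, 1.72)  # from self-report, biased low
print(round(measured, 1), round(self_reported, 1))  # → 29.4 27.7
```

Because self-report shifts both terms in the same direction, the derived BMI is systematically underestimated, which is how prevalence of overweight and obesity ends up understated in survey data.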
Sapara, Adegboyega; ffytche, Dominic H.; Birchwood, Max; Cooke, Michael A.; Fannon, Dominic; Williams, Steven C.R.; Kuipers, Elizabeth; Kumari, Veena
2014-01-01
Background: Poor insight in schizophrenia has been theorised to reflect a cognitive deficit that is secondary to brain abnormalities localized in the brain regions implicated in higher-order cognitive functions, including working memory (WM). This study investigated WM-related neural substrates of preserved and poor insight in schizophrenia. Method: Forty stable schizophrenia outpatients, 20 with preserved and 20 with poor insight (usable data obtained from 18 preserved- and 14 poor-insight patients), and 20 healthy participants underwent functional magnetic resonance imaging (fMRI) during a parametric 'n-back' task. The three groups were preselected to match on age, education, and predicted IQ, and the two patient groups to have distinct insight levels. Performance and fMRI data were analysed to determine how the groups of patients with preserved and poor insight differed from each other and from healthy participants. Results: Poor-insight patients showed lower performance accuracy relative to healthy participants (p = 0.01) and preserved-insight patients (p = 0.08); the two patient groups were comparable on symptoms and medication. Preserved-insight patients, relative to poor-insight patients, showed greater activity most consistently in the precuneus and cerebellum (both bilateral) during WM; they also showed greater activity than healthy participants in the inferior and superior frontal gyri and cerebellum (bilateral). Group differences in brain activity did not co-vary significantly with performance accuracy. Conclusions: The precuneus and cerebellum contribute to preserved insight in schizophrenia. Preserved insight, as well as normal-range WM capacity, in schizophrenia sub-groups may be achieved via compensatory neural activity in the frontal cortex and cerebellum. PMID:24332795
Consider the source: Children link the accuracy of text-based sources to the accuracy of the author.
Vanderbilt, Kimberly E; Ochoa, Karlena D; Heilbrun, Jayd
2018-05-06
The present research investigated whether young children link the accuracy of text-based information to the accuracy of its author. Across three experiments, three- and four-year-olds (N = 231) received information about object labels from accurate and inaccurate sources who provided information both in text and verbally. Of primary interest was whether young children would selectively rely on information provided by more accurate sources, regardless of the form in which the information was communicated. Experiment 1 tested children's trust in text-based information (e.g., books) written by an author with a history of either accurate or inaccurate verbal testimony and found that children showed greater trust in books written by accurate authors. Experiment 2 replicated the findings of Experiment 1 and extended them by showing that children's selective trust in more accurate text-based sources was not dependent on experience trusting or distrusting the author's verbal testimony. Experiment 3 investigated this understanding in reverse by testing children's trust in verbal testimony communicated by an individual who had authored either accurate or inaccurate text-based information. Experiment 3 revealed that children showed greater trust in individuals who had authored accurate rather than inaccurate books. Experiment 3 also demonstrated that children used the accuracy of text-based sources to make inferences about the mental states of the authors. Taken together, these results suggest that children do indeed link the reliability of text-based sources to the reliability of the author. Statement of Contribution. Existing knowledge: Children use sources' prior accuracy to predict future accuracy in face-to-face verbal interactions. Children who are just learning to read show increased trust in text-based (vs. verbal) information. It is unknown whether children consider authors' prior accuracy when judging the accuracy of text-based information.
New knowledge added by this article: Preschool children track sources' accuracy across communication media, from verbal to text-based modalities and vice versa. Children link the reliability of text-based sources to the reliability of the author. © 2018 The British Psychological Society.
Throwing speed and accuracy in baseball and cricket players.
Freeston, Jonathan; Rooney, Kieron
2014-06-01
Throwing speed and accuracy are both critical to sports performance but cannot be optimized simultaneously. This speed-accuracy trade-off (SATO) is evident across a number of throwing groups but remains poorly understood. The goal was to describe the SATO in baseball and cricket players and determine the speed that optimizes accuracy. Twenty grade-level baseball and cricket players performed 10 throws at 80% and 100% of maximal throwing speed (MTS) toward a cricket stump. Baseball players then performed a further 10 throws at 70%, 80%, 90%, and 100% of MTS toward a circular target. Baseball players threw faster with greater accuracy than cricket players at both speeds. Both groups demonstrated a significant SATO, as vertical error increased with increases in speed; the trade-off was worse for cricket players than for baseball players. Accuracy was optimized at 70% of MTS for baseball players. Throwing athletes should decrease speed when accuracy is critical. Cricket players could adopt baseball training practices to improve throwing performance.
NASA Astrophysics Data System (ADS)
Ma, Lei; Cheng, Liang; Li, Manchun; Liu, Yongxue; Ma, Xiaoxue
2015-04-01
Unmanned Aerial Vehicles (UAVs) have been used increasingly for natural resource applications in recent years due to their greater availability and the miniaturization of sensors. In addition, Geographic Object-Based Image Analysis (GEOBIA) has received more attention as a novel paradigm for remote sensing earth observation data. However, GEOBIA introduces some new problems compared with pixel-based methods. In this study, we developed a strategy for the semi-automatic optimization of object-based classification, which involves an area-based accuracy assessment that analyzes the relationship between scale and training set size. We found that the Overall Accuracy (OA) increased as the training set ratio (the proportion of the segmented objects used for training) increased when the Segmentation Scale Parameter (SSP) was fixed. The OA increased more slowly as the training set ratio became larger, and a similar rule was obtained with pixel-based image analysis. The OA decreased as the SSP increased when the training set ratio was fixed. Consequently, the SSP should not be too large during classification using a small training set ratio. By contrast, a large training set ratio is required if classification is performed using a high SSP. In addition, we suggest that the optimal SSP for each class has a high positive correlation with the mean area obtained by manual interpretation, which can be summarized by a linear correlation equation. We expect that these results will be applicable to UAV imagery classification to determine the optimal SSP for each class.
Lettieri, S.; Zuckerman, D.M.
2011-01-01
Typically, the most time consuming part of any atomistic molecular simulation is due to the repeated calculation of distances, energies and forces between pairs of atoms. However, many molecules contain nearly rigid multi-atom groups such as rings and other conjugated moieties, whose rigidity can be exploited to significantly speed up computations. The availability of GB-scale random-access memory (RAM) offers the possibility of tabulation (pre-calculation) of distance and orientation-dependent interactions among such rigid molecular bodies. Here, we perform an investigation of this energy tabulation approach for a fluid of atomistic – but rigid – benzene molecules at standard temperature and density. In particular, using O(1) GB of RAM, we construct an energy look-up table which encompasses the full range of allowed relative positions and orientations between a pair of whole molecules. We obtain a hardware-dependent speed-up of a factor of 24-50 as compared to an ordinary (“exact”) Monte Carlo simulation and find excellent agreement between energetic and structural properties. Second, we examine the somewhat reduced fidelity of results obtained using energy tables based on much less memory use. Third, the energy table serves as a convenient platform to explore potential energy smoothing techniques, akin to coarse-graining. Simulations with smoothed tables exhibit near atomistic accuracy while increasing diffusivity. The combined speed-up in sampling from tabulation and smoothing exceeds a factor of 100. For future applications greater speed-ups can be expected for larger rigid groups, such as those found in biomolecules. PMID:22120971
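The tabulation idea can be sketched in miniature. The paper tabulates energies over the full relative position and orientation of two rigid benzene molecules; the hedged sketch below reduces this to a single radial axis, with a Lennard-Jones pair potential standing in for the molecular interaction:

```python
import numpy as np

# Stand-in pair potential (Lennard-Jones in reduced units); the paper's
# table stores atomistic benzene-benzene energies instead.
def lj_energy(r, epsilon=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# Build the look-up table once; every subsequent Monte Carlo step then
# replaces the analytic evaluation with an O(1) array lookup.
r_min, r_max, n_bins = 0.8, 3.0, 4096
r_grid = np.linspace(r_min, r_max, n_bins)
table = lj_energy(r_grid)

def tabulated_energy(r):
    """Nearest-bin lookup; accuracy is set by the grid resolution."""
    i = int(round((r - r_min) / (r_max - r_min) * (n_bins - 1)))
    return table[min(max(i, 0), n_bins - 1)]

# The lookup agrees with the exact form to within the bin spacing.
print(abs(tabulated_energy(1.5) - lj_energy(1.5)))
```

The memory/accuracy trade-off the abstract discusses corresponds here to the choice of `n_bins`: a full position-and-orientation table multiplies this one axis into several, which is where the GB-scale RAM goes.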
A Novel Polygonal Finite Element Method: Virtual Node Method
NASA Astrophysics Data System (ADS)
Tang, X. H.; Zheng, C.; Zhang, J. H.
2010-05-01
The polygonal finite element method (PFEM), which can construct shape functions on polygonal elements, provides greater flexibility in mesh generation. However, the non-polynomial form of traditional PFEM variants, such as the Wachspress method and the Mean Value method, leads to inexact numerical integration, since integration techniques for non-polynomial functions are immature. To overcome this shortcoming, a great number of integration points must be used to obtain sufficiently exact results, which increases computational cost. In this paper, a novel polygonal finite element method, called the virtual node method (VNM), is proposed. The features of the present method are: (1) it is a PFEM with polynomial form, so Hammer and Gauss integration can be used naturally to obtain exact numerical integration; (2) the shape functions of VNM satisfy all the requirements of the finite element method. To test the performance of VNM, intensive numerical tests were carried out. In the standard patch test, VNM achieves significantly better results than the Wachspress method and the Mean Value method. Moreover, VNM achieves better results than triangular 3-node elements in the accuracy test.
Implementation of ICP-MS protocols for uranium urinary measurements in worker monitoring.
Baglan, N; Cossonnet, C; Trompier, F; Ritt, J; Bérard, P
1999-10-01
The uranium concentration in human urine spiked with natural uranium and rat urine containing metabolized depleted uranium was determined by ICP-MS. The use of ICP-MS was investigated without any chemical treatment or after the different stages of a purification protocol currently carried out for routine monitoring. In the case of spiked urine, the measured uranium concentrations were consistent with those certified by an intercomparison network in radiotoxicological analysis (PROCORAD) and with those obtained by alpha spectrometry in the case of the urine containing metabolized uranium. The quantitative information which could be obtained in the different protocols investigated shows the extent to which ICP-MS provides greater flexibility for setting up appropriate monitoring approaches in radiation protection routines and accidental situations. This is due to the combination of high sensitivity and the accuracy with which traces of uranium in urine can be determined in a shorter time period. Moreover, it has been shown that ICP-MS measurement can be used to quantify the 235U isotope, which is useful for characterizing the nature of the uranium compound, but difficult to perform using alpha spectrometry.
NEEDLE BIOPSY OF THE LIVER—General Considerations
Molle, William E.; Kaplan, Leo
1952-01-01
Needle biopsy of the liver provides concrete diagnostic information that cannot be as readily obtained in any other way. This report reviews 401 liver biopsies in 312 patients. The major indications for use of this procedure are: to determine the cause of an obscure liver enlargement; to establish the cause of jaundice; to distinguish between malignant disease and cirrhosis of the liver; to determine when hepatitis has subsided; and to evaluate the results of treatment. At times, systemic disease that has not been recognized by other means may be diagnosed by this technique. There is risk in performing this test, and the 0.25 per cent mortality in this series compares favorably with that reported from other clinics. Where the diagnosis by biopsy could be compared with observations at operation or autopsy, the correct diagnosis was made by biopsy in 85 per cent of cases. Greater accuracy was obtained by two or more biopsy examinations in one case than by a single biopsy. In several cases in which surgical operation was considered, biopsy information made it unnecessary, and vice versa. PMID:14886754
Attribute Weighting Based K-Nearest Neighbor Using Gain Ratio
NASA Astrophysics Data System (ADS)
Nababan, A. A.; Sitompul, O. S.; Tulus
2018-04-01
K-Nearest Neighbor (KNN) is a good classifier, but several studies have found that its accuracy is lower than that of other methods. One cause of this low accuracy is that each attribute has the same effect on the classification process, so less relevant attributes can lead to misclassification of new data. In this research, we propose Attribute Weighting Based K-Nearest Neighbor Using Gain Ratio, in which the gain ratio measures the correlation between each attribute and the class, and is used as the basis for weighting each attribute of the dataset. The accuracy of the results is compared to the accuracy of the original KNN method using 10-fold cross-validation on several datasets from the UCI Machine Learning Repository and the KEEL-Dataset Repository: abalone, glass identification, haberman, hayes-roth, and water quality status. Based on the test results, the proposed method was able to increase the classification accuracy of KNN; the greatest improvement, 12.73%, was obtained on the hayes-roth dataset, and the smallest, 0.07%, on the abalone dataset. Averaged over all datasets, the proposed method increases accuracy by 5.33%.
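A minimal sketch of the attribute-weighting idea follows. The weights here are invented for illustration; in the proposed method they would be gain ratios computed from the training data:

```python
from collections import Counter

import numpy as np

# Attribute-weighted KNN: each feature's contribution to the squared
# Euclidean distance is scaled by its weight, so uninformative features
# (low gain ratio) barely influence which neighbors are "nearest".
def weighted_knn_predict(X_train, y_train, x, weights, k=3):
    d = np.sqrt(((X_train - x) ** 2 * weights).sum(axis=1))
    nearest = np.argsort(d)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy data: the first attribute separates the classes, the second is noise.
X = np.array([[1.0, 10.0], [1.1, 50.0], [0.9, 90.0], [5.0, 10.0]])
y = np.array([0, 0, 0, 1])
w = np.array([0.9, 0.1])  # invented weights; gain ratios in the actual method

print(weighted_knn_predict(X, y, np.array([1.05, 80.0]), w, k=3))  # → 0
```

With uniform weights the noisy second attribute would dominate the distance for this query point; down-weighting it is exactly the misclassification fix the abstract describes.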
Development and validation of an Argentine set of facial expressions of emotion.
Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro
2017-02-01
Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.
Zerbini, Talita; da Silva, Luiz Fernando Ferraz; Ferro, Antonio Carlos Gonçalves; Kay, Fernando Uliana; Junior, Edson Amaro; Pasqualucci, Carlos Augusto Gonçalves; do Nascimento Saldiva, Paulo Hilario
2014-01-01
OBJECTIVE: The aim of the present work is to analyze the differences and similarities between the elements of a conventional autopsy and images obtained from postmortem computed tomography in a case of a homicide stab wound. METHOD: Comparison between the findings of different methods: autopsy and postmortem computed tomography. RESULTS: In some aspects, autopsy is still superior to imaging, especially in relation to external examination and the description of lesion vitality. However, the findings of gas embolism, pneumothorax and pulmonary emphysema and the relationship between the internal path of the instrument of aggression and the entry wound are better demonstrated by postmortem computed tomography. CONCLUSIONS: Although multislice computed tomography has greater accuracy than autopsy, we believe that the conventional autopsy method is fundamental for providing evidence in criminal investigations. PMID:25518020
P-Code-Enhanced Encryption-Mode Processing of GPS Signals
NASA Technical Reports Server (NTRS)
Young, Lawrence; Meehan, Thomas; Thomas, Jess B.
2003-01-01
A method of processing signals in a Global Positioning System (GPS) receiver has been invented to enable the receiver to recover some of the information that is otherwise lost when GPS signals are encrypted at the transmitters. The need for this method arises because, at the option of the military, precision GPS code (P-code) is sometimes encrypted by a secret binary code, denoted the A code. Authorized users can recover the full signal with knowledge of the A-code. However, even in the absence of knowledge of the A-code, one can track the encrypted signal by use of an estimate of the A-code. The present invention is a method of making and using such an estimate. In comparison with prior such methods, this method makes it possible to recover more of the lost information and obtain greater accuracy.
Occult Intertrochanteric Fracture Mimicking the Fracture of Greater Trochanter.
Chung, Phil Hyun; Kang, Suk; Kim, Jong Pil; Kim, Young Sung; Lee, Ho Min; Back, In Hwa; Eom, Kyeong Soo
2016-06-01
Occult intertrochanteric fractures are misdiagnosed as isolated greater trochanteric fractures in some cases. We investigated the utility of three-dimensional computed tomography (3D-CT) and magnetic resonance imaging (MRI) in the diagnosis and outcome management of occult intertrochanteric fractures. This study involved 23 cases of greater trochanteric fractures as diagnosed using plain radiographs from January 2004 to July 2013. Until January 2008, 9 cases were examined with 3D-CT only, while 14 cases were screened with both 3D-CT and MRI scans. We analyzed diagnostic accuracy and treatment results following 3D-CT and MRI scanning. Nine cases that underwent 3D-CT only were diagnosed with isolated greater trochanteric fractures without occult intertrochanteric fractures. Of these, a patient with displacement received surgical treatment. Of the 14 patients screened using both CT and MRI, 13 were diagnosed with occult intertrochanteric fractures. Of these, 11 were treated with surgical intervention and 2 with conservative management. Three-dimensional CT has very low diagnostic accuracy in diagnosing occult intertrochanteric fractures. For this reason, MRI is recommended to confirm a suspected occult intertrochanteric fracture and to determine the most appropriate mode of treatment.
Application of a stochastic snowmelt model for probabilistic decisionmaking
NASA Technical Reports Server (NTRS)
Mccuen, R. H.
1983-01-01
A stochastic form of the snowmelt runoff model that can be used for probabilistic decision-making was developed. The use of probabilistic streamflow predictions instead of single-valued deterministic predictions leads to greater accuracy in decisions. While the accuracy of the output function is important in decision-making, it is also important to understand the relative importance of the coefficients. Therefore, a sensitivity analysis was made for each of the coefficients.
ERIC Educational Resources Information Center
Mier, Constance M.
2011-01-01
The accuracy of video analysis of the passive straight-leg raise test (PSLR) and the validity of the sit-and-reach test (SR) were tested in 60 men and women. Computer software measured static hip-joint flexion accurately. High within-session reliability of the PSLR was demonstrated (R greater than 0.97). Test-retest (separate days) reliability for…
Improving the Accuracy of Structural Fatigue Life Tracking Through Dynamic Strain Sensor Calibration
2011-09-01
strength corrosion resistant 7075 -T6 alloy, and included hinge lugs, a bulkhead, spars, and wing skins that were fastened together using welds, rivets...release, distribution unlimited 13. SUPPLEMENTARY NOTES See also ADA580921. International Workshop on Structural Health Monitoring: From Condition -based...greater than 10% under the same loading conditions [1]. These differences must be accounted for to have acceptable accuracy levels in the ultimate
Soldier Performance and Mood States Following a Strenuous Road March
1990-01-01
13) and the more intense the exercise, the greater the elevation (14). Reductions in heart rate through the use of beta - blockers can substantially...extreme physical fatigue. Shooting accuracy degraded severely under these conditions. An increase in body tremors due to fatigue or elevated post...exercise (9) and this may effect shooting accuracy. Muscle tremors increase after brief or prolonged muscular contractions (10, 11) and such tremors
A New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data
NASA Astrophysics Data System (ADS)
Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.
2018-05-01
In remote sensing applications, the accuracy of time-interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time-interval measurement techniques suffer from low measurement accuracy, complicated circuit structure, and large error, and cannot provide high-precision time-interval data. In order to obtain higher-quality remote sensing cloud images based on the time-interval measurement, a higher-accuracy time-interval measurement method is proposed. The method is based on charging a capacitor while simultaneously sampling the capacitor voltage. Firstly, an approximate model of the capacitor voltage curve over the pulse time of flight is fitted to the sampled data. Then, the whole charging time is obtained from the fitting function. In this method, only a high-speed A/D sampler and a capacitor are required in each receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time-interval accuracy by at least 20%.
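The charge-and-fit idea can be illustrated with a small numerical sketch. All component values (time constant, supply voltage, sampling rate) are illustrative assumptions, and the linearized least-squares fit below stands in for whatever fitting function the authors actually used:

```python
import numpy as np

# Illustrative component values -- assumptions for the sketch, not the paper's hardware
RC = 1.0e-6   # charging time constant, seconds
V0 = 3.3      # charging supply voltage, volts
FS = 50e6     # A/D sampling rate, Hz

def charge_voltage(t):
    """Ideal RC charging curve seen by the A/D converter."""
    return V0 * (1.0 - np.exp(-t / RC))

def estimate_interval(sample_times, samples, v_final):
    """Fit the linearized charging model ln(1 - V/V0) = -t/RC to the samples,
    then invert the fitted curve at the held final voltage to recover the
    whole charging time (the pulse time of flight)."""
    y = np.log(1.0 - samples / V0)
    slope = np.sum(sample_times * y) / np.sum(sample_times ** 2)  # LS fit through the origin
    rc_hat = -1.0 / slope
    return -rc_hat * np.log(1.0 - v_final / V0)

# Simulated time of flight: the capacitor charges for t_true, then holds v_final
t_true = 2.35e-6
sample_times = np.arange(1, 60) / FS
samples = charge_voltage(sample_times)   # noiseless samples taken during charging
t_est = estimate_interval(sample_times, samples, charge_voltage(t_true))
```

Because the estimate comes from a fit over many samples rather than from a single threshold crossing, its resolution is not limited to one clock period of the sampler, which is the effect the abstract exploits.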
Hunter, C; Siddiqui, M; Georgiou Delisle, T; Blake, H; Jeyadevan, N; Abulafi, M; Swift, I; Toomey, P; Brown, G
2017-04-01
To compare the preoperative staging accuracy of computed tomography (CT) and 3-T magnetic resonance imaging (MRI) in colon cancer, and to investigate the prognostic significance of identified risk factors. Fifty-eight patients undergoing primary resection of their colon cancer were prospectively recruited, with 53 patients included for final analysis. Accuracy of CT and MRI were compared for two readers, using postoperative histology as the reference standard. Patients were followed-up for a median of 39 months. Risk factors were compared by modality and reader in terms of metachronous metastases and disease-free survival (DFS), stratified for adjuvant chemotherapy. Accuracy for the identification of T3c+ disease was non-significantly greater on MRI (75% and 79%) than CT (70% and 77%). Differences in the accuracy of MRI and CT for identification of T3+ disease (MRI 75% and 57%, CT 72% and 66%) and N+ disease (MRI 62% and 63%, CT 62% and 56%) were also non-significant. Identification of extramural venous invasion (EMVI+) disease was significantly greater on MRI (75% and 75%) than CT (79% and 54%) for one reader (p=0.029). T3c+ disease at histopathology was the only risk factor that demonstrated a significant difference in rate of metachronous metastases (odds ratio [OR] 8.6, p=0.0044) and DFS stratified for adjuvant therapy (OR=4, p=0.048). T3c or greater disease is the strongest risk factor for predicting DFS in colon cancer, and is accurately identified on imaging. T3c+ disease may therefore be the best imaging entry criteria for trials of neoadjuvant treatment. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
de la Coba, Pablo; Bruehl, Stephen; Gálvez-Sánchez, Carmen María; Reyes Del Paso, Gustavo A
2018-05-01
This study examined the diagnostic accuracy and test-retest reliability of a novel dynamic evoked pain protocol (slowly repeated evoked pain; SREP) compared to temporal summation of pain (TSP), a standard index of central sensitization. Thirty-five fibromyalgia (FM) and 30 rheumatoid arthritis (RA) patients completed, in pseudorandomized order, a standard mechanical TSP protocol (10 stimuli of 1s duration at the thenar eminence using a 300g monofilament with 1s interstimulus interval) and the SREP protocol (9 suprathreshold pressure stimuli of 5s duration applied to the fingernail with a 30s interstimulus interval). In order to evaluate reliability for both protocols, they were repeated in a second session 4-7 days later. Evidence for significant pain sensitization over trials (increasing pain intensity ratings) was observed for SREP in FM (p<.001) but not in RA (p=.35), whereas significant sensitization was observed in both diagnostic groups for the TSP protocol (p's<.008). Compared to TSP, SREP demonstrated higher overall diagnostic accuracy (87.7% vs. 64.6%), greater sensitivity (0.89 vs. 0.57), and greater specificity (0.87 vs. 0.73) in discriminating between FM and RA patients. Test-retest reliability of SREP sensitization was good in FM (ICCs: 0.80), and moderate in RA (ICC: 0.68). SREP seems to be a dynamic evoked pain index tapping into pain sensitization that allows for greater diagnostic accuracy in identifying FM patients compared to a standard TSP protocol. Further research is needed to study mechanisms underlying SREP and the potential utility of adding SREP to standard pain evaluation protocols.
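For reference, the reported accuracy, sensitivity, and specificity follow from a standard 2×2 confusion table. The counts below (31 of 35 FM patients correctly identified, 26 of 30 RA patients correctly excluded) are our back-calculation from the reported percentages, not figures taken from the paper:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard diagnostic-accuracy measures from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)                 # detected among those with the condition
    specificity = tn / (tn + fp)                 # excluded among those without it
    accuracy = (tp + tn) / (tp + fn + tn + fp)   # overall correct classifications
    return sensitivity, specificity, accuracy

# Hypothetical counts consistent with the reported SREP results (FM = positive class)
sens, spec, acc = diagnostic_metrics(tp=31, fn=4, tn=26, fp=4)
```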
Scheurich, Rebecca; Zamm, Anna; Palmer, Caroline
2018-01-01
The ability to flexibly adapt one’s behavior is critical for social tasks such as speech and music performance, in which individuals must coordinate the timing of their actions with others. Natural movement frequencies, also called spontaneous rates, constrain synchronization accuracy between partners during duet music performance, whereas musical training enhances synchronization accuracy. We investigated the combined influences of these factors on the flexibility with which individuals can synchronize their actions with sequences at different rates. First, we developed a novel musical task capable of measuring spontaneous rates in both musicians and non-musicians in which participants tapped the rhythm of a familiar melody while hearing the corresponding melody tones. The novel task was validated by similar measures of spontaneous rates generated by piano performance and by the tapping task from the same pianists. We then implemented the novel task with musicians and non-musicians as they synchronized tapping of a familiar melody with a metronome at their spontaneous rates, and at rates proportionally slower and faster than their spontaneous rates. Musicians synchronized more flexibly across rates than non-musicians, indicated by greater synchronization accuracy. Additionally, musicians showed greater engagement of error correction mechanisms than non-musicians. Finally, differences in flexibility were characterized by more recurrent (repetitive) and patterned synchronization in non-musicians, indicative of greater temporal rigidity. PMID:29681872
Hussein, Atef H.; Rashed, Samia M.; El-Hayawan, Ibrahim A.; Aly, Nagwa S. M.; Abou Ouf, Eman A.; Ali, Amira T.
2017-01-01
The aim of the present study was to assess the frequency of intestinal parasitic infection among patients with gastrointestinal tract disorders from the Greater Cairo region, Egypt. In addition, a comparison was made of the accuracy of direct thin and thick smear, formol-ether sedimentation (FEC), centrifugal flotation (CF), and mini-FLOTAC techniques in the diagnosis of infection. Out of 100 patients, the overall prevalence of parasitic infection was 51%. Only 6% had dual infection. Giardia lamblia was the most common parasite (26%), followed by Hymenolepis nana (20%), Entamoeba coli (8%), and Enterobius vermicularis (3%). Except the statistically significant association between E. vermicularis infection and perianal itching and insomnia (P < 0.001), age, gender, and complaints of the examined individuals had no association with prevalence of parasitic infection. Both FEC and CF were equally the most accurate techniques (accuracy = 98.2%, confidence interval [CI] = 0.95–1.0, and κ index = 0.962), whereas the Kato-Katz method was the least accurate (accuracy = 67.5%, CI = 0.57–0.78, and κ index = 0.333). However, mini-FLOTAC-ZnSO4 was the most accurate for diagnosis of helminthic infection, and FEC was more accurate for diagnosis of protozoal infection (accuracy = 100%, CI = 1.0–1.0, and κ index = 1). PMID:28093543
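The κ index reported above is Cohen's kappa, i.e. agreement between each technique and the reference result corrected for chance agreement; a minimal sketch of the computation:

```python
import numpy as np

def cohens_kappa(test, reference):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    test, reference = np.asarray(test), np.asarray(reference)
    po = np.mean(test == reference)                        # observed agreement
    labels = np.union1d(test, reference)
    pe = sum(np.mean(test == c) * np.mean(reference == c)  # agreement expected by chance
             for c in labels)
    return (po - pe) / (1.0 - pe)
```

A kappa of 1 means perfect agreement with the reference standard (as for FEC and CF above, κ = 0.962 ≈ 1), while values near 0.333 (Kato-Katz) indicate agreement only modestly better than chance.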
Monte Carlo based electron treatment planning and cutout output factor calculations
NASA Astrophysics Data System (ADS)
Mitrou, Ellis
Electron radiotherapy (RT) offers a number of advantages over photons. The high surface dose, combined with a rapid dose fall-off beyond the target volume, presents a net increase in tumor control probability and decreases the normal tissue complication for superficial tumors. Electron treatments are normally delivered clinically without previously calculated dose distributions due to the complexity of the electron transport involved and greater error in planning accuracy. This research uses Monte Carlo (MC) methods to model clinical electron beams in order to accurately calculate electron beam dose distributions in patients as well as calculate cutout output factors, reducing the need for a clinical measurement. The present work is incorporated into a research MC calculation system: the McGill Monte Carlo Treatment Planning (MMCTP) system. Measurements of PDDs, profiles, and output factors, in addition to 2D GAFCHROMIC™ EBT2 film measurements in heterogeneous phantoms, were obtained to commission the electron beam model. The use of MC for electron treatment planning will provide more accurate treatments and yield greater knowledge of the electron dose distribution within the patient. The calculation of output factors could invoke a clinical time saving of up to 1 hour per patient.
An Analysis of the Autorotative Performance of a Helicopter Powered by Rotor-Tip Jet Units
NASA Technical Reports Server (NTRS)
Gessow, Alfred
1950-01-01
The autorotative performance of an assumed helicopter was studied to determine the effect of inoperative jet units located at the rotor-blade tip on the helicopter rate of descent. For a representative ramjet design, the effect of the jet drag is to increase the minimum rate of descent of the helicopter from about 1,000 feet per minute to 3,700 feet per minute when the rotor is operating at a tip speed of approximately 600 feet per second. The effect is less if the rotor operates at lower tip speeds, but the rotor kinetic energy and the stall margin available for the landing maneuver are then reduced. Power-off rates of descent of pulse-jet helicopters would be expected to be less than those of ramjet helicopters because pulse jets of current design appear to have greater ratios of net power-on thrust to power-off drag than currently designed ramjets. In order to obtain greater accuracy in studies of autorotative performance, calculations involving high power-off rates of descent should include the weight-supporting effect of the fuselage parasite-drag force and the fact that the rotor thrust does not equal the weight of the helicopter.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., or to temperature simulation devices. (vi) Conduct a visual inspection of each sensor every quarter... sensor values with electronic signal simulations or via relative accuracy testing. (v) Perform accuracy... values with electronic signal simulations or with values obtained via relative accuracy testing. (vi...
Evaluation of the accuracy of GPS as a method of locating traffic collisions.
DOT National Transportation Integrated Search
2004-06-01
The objectives of this study were to determine the accuracy of GPS units as a traffic crash location tool, evaluate the accuracy of the location data obtained using the GPS units, and determine the largest sources of any errors found. The analysis s...
Beaulieu, Jean; Doerksen, Trevor K; MacKay, John; Rainville, André; Bousquet, Jean
2014-12-02
Genomic selection (GS) may improve selection response over conventional pedigree-based selection if markers capture more detailed information than pedigrees in recently domesticated tree species and/or make it more cost effective. Genomic prediction accuracies using 1748 trees and 6932 SNPs representative of as many distinct gene loci were determined for growth and wood traits in white spruce, within and between environments and breeding groups (BG), each with an effective size of Ne ≈ 20. Marker subsets were also tested. Model fits and/or cross-validation (CV) prediction accuracies for ridge regression (RR) and the least absolute shrinkage and selection operator models approached those of pedigree-based models. With strong relatedness between CV sets, prediction accuracies for RR within environment and BG were high for wood (r = 0.71-0.79) and moderately high for growth (r = 0.52-0.69) traits, in line with trends in heritabilities. For both classes of traits, these accuracies achieved between 83% and 92% of those obtained with phenotypes and pedigree information. Prediction into untested environments remained moderately high for wood (r ≥ 0.61) but dropped significantly for growth (r ≥ 0.24) traits, emphasizing the need to phenotype in all test environments and model genotype-by-environment interactions for growth traits. Removing relatedness between CV sets sharply decreased prediction accuracies for all traits and subpopulations, falling near zero between BGs with no known shared ancestry. For marker subsets, similar patterns were observed but with lower prediction accuracies. Given the need for high relatedness between CV sets to obtain good prediction accuracies, we recommend to build GS models for prediction within the same breeding population only. 
Breeding groups could be merged to build genomic prediction models as long as the total effective population size does not exceed 50 individuals in order to obtain high prediction accuracy such as that obtained in the present study. A number of markers limited to a few hundred would not negatively impact prediction accuracies, but these could decrease more rapidly over generations. The most promising short-term approach for genomic selection would likely be the selection of superior individuals within large full-sib families vegetatively propagated to implement multiclonal forestry.
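A toy sketch of genomic prediction in the RR-BLUP spirit described above: phenotypes are ridge-regressed on SNP allele counts, and accuracy is the correlation between predicted and observed values in a held-out fold. The population sizes, the causal-marker fraction, and the shrinkage value λ are all illustrative assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated population: 300 trees genotyped at 200 SNPs (allele counts 0/1/2)
n_trees, n_snps = 300, 200
X = rng.integers(0, 3, size=(n_trees, n_snps)).astype(float)
beta = rng.normal(0.0, 1.0, n_snps) * (rng.random(n_snps) < 0.1)  # a few causal loci
y = X @ beta + rng.normal(0.0, 1.0, n_trees)                      # phenotype = genetics + noise

def ridge_fit(X, y, lam):
    """Closed-form ridge regression estimate of marker effects."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Hold out half the trees as a validation fold. (Relatedness between training
# and validation sets, which drives the high accuracies reported above, is
# not modeled in this unstructured simulation.)
half = n_trees // 2
b_hat = ridge_fit(X[:half], y[:half], lam=50.0)
accuracy = np.corrcoef(X[half:] @ b_hat, y[half:])[0, 1]
```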
Grit blasting and the marginal accuracy of two ceramic veneer systems--a pilot study.
Lim, C; Ironside, J G
1997-04-01
Margins of ceramic restorations can be damaged during removal of investment materials by grit blasting, resulting in relatively large marginal discrepancies and greater exposure of cement to the oral environment. Subsequent dissolution of cement can encourage plaque retention, dental caries, and periodontal problems. This study compared the marginal adaptation of ceramic veneers created by the refractory die technique (R) and the Dicor glass ceramic technique (D), and the effects of grit blasting on their margins. Two groups of ceramic veneers were constructed for each system, one without grit blasting (R−g and D−g) and one with grit blasting (R+g and D+g). Statistical analyses revealed that grit blasting had a greater effect in reducing marginal accuracy for Dicor ceramic veneers than for refractory die ceramic veneers.
Decoding human mental states by whole-head EEG+fNIRS during category fluency task performance
NASA Astrophysics Data System (ADS)
Omurtag, Ahmet; Aghajani, Haleh; Onur Keles, Hasan
2017-12-01
Objective. Concurrent scalp electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), which we refer to as EEG+fNIRS, promises greater accuracy than the individual modalities while remaining nearly as convenient as EEG. We sought to quantify the hybrid system’s ability to decode mental states and compare it with its unimodal components. Approach. We recorded from healthy volunteers taking the category fluency test and applied machine learning techniques to the data. Main results. EEG+fNIRS’s decoding accuracy was greater than that of its subsystems, partly due to the new type of neurovascular features made available by hybrid data. Significance. Availability of an accurate and practical decoding method has potential implications for medical diagnosis, brain-computer interface design, and neuroergonomics.
Lee, J; Kachman, S D; Spangler, M L
2017-08-01
Genomic selection (GS) has become an integral part of genetic evaluation methodology and has been applied to all major livestock species, including beef and dairy cattle, pigs, and chickens. Significant contributions in increased accuracy of selection decisions have been clearly illustrated in dairy cattle after practical application of GS. In the majority of U.S. beef cattle breeds, similar efforts have also been made to increase the accuracy of genetic merit estimates through the inclusion of genomic information into routine genetic evaluations using a variety of methods. However, prediction accuracies can vary relative to panel density, the number of folds used for cross-validation, and the choice of dependent variable (e.g., EBV, deregressed EBV, adjusted phenotypes). The aim of this study was to evaluate the accuracy of genomic predictors for Red Angus beef cattle with different strategies used in training and evaluation. The reference population consisted of 9,776 Red Angus animals whose genotypes were imputed to 2 medium-density panels consisting of over 50,000 (50K) and approximately 80,000 (80K) SNP. Using the imputed panels, we determined the influence of marker density, exclusion (deregressed EPD adjusting for parental information [DEPD-PA]) or inclusion (deregressed EPD without adjusting for parental information [DEPD]) of parental information in the deregressed EPD used as the dependent variable, and the number of clusters used to partition training animals (3, 5, or 10). A BayesC model with π set to 0.99 was used to predict molecular breeding values (MBV) for 13 traits for which EPD existed. The prediction accuracies were measured as genetic correlations between MBV and weighted deregressed EPD. The average accuracies across all traits were 0.540 and 0.552 when using the 50K and 80K SNP panels, respectively, and 0.538, 0.541, and 0.561 when using 3, 5, and 10 folds, respectively, for cross-validation. 
Using DEPD-PA as the response variable resulted in higher accuracies of MBV than those obtained by DEPD for growth and carcass traits. When DEPD were used as the response variable, accuracies were greater for threshold traits and those that are sex limited, likely due to the fact that these traits suffer from a lack of information content and excluding animals in training with only parental information substantially decreases the training population size. It is recommended that the contribution of parental average to deregressed EPD should be removed in the construction of genomic prediction equations. The difference in terms of prediction accuracies between the 2 SNP panels or the number of folds compared herein was negligible.
NASA Astrophysics Data System (ADS)
Robleda Prieto, G.; Pérez Ramos, A.
2015-02-01
It can be difficult to represent an architectural idea, solution, detail, or newly created element "on paper", depending on the complexity of what is to be conveyed through its graphical representation, and it may be even harder to represent the existing reality (a building, a detail, ...), at least with an acceptable degree of definition and accuracy. As a solution to this problem, this paper presents a methodology for collecting measurement data by combining different methods and techniques in order to obtain the characteristic geometry of architectural elements, especially those that are highly decorated and/or geometrically complex, and for assessing the accuracy of the results obtained, at a sufficient level of accuracy and without very high costs. In addition, a 3D recovery model can be obtained that provides strong support for producing orthoimages, beyond the point clouds obtained through more expensive methods such as laser scanning. This methodology was applied to the case study of the 3D virtual reconstruction of the main façade of a medieval church, chosen because of the geometrical complexity of many of its elements, such as the main doorway with archivolts and many details, as well as the rose window located above it, which is inaccessible due to its height.
Clinical versus actuarial judgment.
Dawes, R M; Faust, D; Meehl, P E
1989-03-31
Professionals are frequently consulted to diagnose and predict human behavior; optimal treatment and planning often hinge on the consultant's judgmental accuracy. The consultant may rely on one of two contrasting approaches to decision-making--the clinical and actuarial methods. Research comparing these two approaches shows the actuarial method to be superior. Factors underlying the greater accuracy of actuarial methods, sources of resistance to the scientific findings, and the benefits of increased reliance on actuarial approaches are discussed.
NASA Astrophysics Data System (ADS)
Sciazko, Anna; Komatsu, Yosuke; Brus, Grzegorz; Kimijima, Shinji; Szmyd, Janusz S.
2014-09-01
For a mathematical model based on the results of physical measurements, it becomes possible to determine their influence on the final solution and its accuracy. In classical approaches, however, the influence of different model simplifications on the reliability of the obtained results is usually not comprehensively discussed. This paper presents a novel approach to the study of methane/steam reforming kinetics based on an advanced methodology, the Orthogonal Least Squares method. Previously published kinetics of the reforming process diverge among themselves. To obtain the most probable values of the kinetic parameters and to enable direct and objective model verification, an appropriate calculation procedure needs to be proposed. The applied Generalized Least Squares (GLS) method includes all the experimental results in the mathematical model, which becomes overdetermined, as the number of equations is greater than the number of unknown variables. The GLS method is adopted to select the most probable values of the results and simultaneously determine the uncertainty associated with all the variables in the system. In this paper, the reaction rate was evaluated after its pre-determination by preliminary calculations based on the experimental results obtained over a nickel/yttria-stabilized zirconia catalyst.
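The core GLS step, solving an overdetermined measurement system while propagating measurement uncertainty into the parameter estimates, can be sketched as follows. This is the generic textbook formulation, not the authors' full reconciliation procedure for the reforming kinetics:

```python
import numpy as np

def generalized_least_squares(A, b, cov):
    """Estimate x in the overdetermined system A x ≈ b, where the measurements
    b carry errors with covariance `cov`. Returns the most probable x together
    with its covariance (the parameter uncertainties)."""
    W = np.linalg.inv(cov)                 # weight = inverse measurement covariance
    cov_x = np.linalg.inv(A.T @ W @ A)     # propagated parameter covariance
    x = cov_x @ (A.T @ W @ b)              # weighted least-squares estimate
    return x, cov_x

# Three measurements of a single unknown with unit variance: the GLS estimate
# is their mean, and its variance shrinks to 1/3.
A = np.ones((3, 1))
b = np.array([1.0, 2.0, 3.0])
x, cov_x = generalized_least_squares(A, b, np.eye(3))
```

Because there are more equations than unknowns, the system is internally contradictory, and the inverse-covariance weighting is what selects the "most probable" compromise while quantifying its uncertainty.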
NASA Technical Reports Server (NTRS)
Arnold, Steven M. (Technical Monitor); Bansal, Yogesh; Pindera, Marek-Jerzy
2004-01-01
The High-Fidelity Generalized Method of Cells is a new micromechanics model for unidirectionally reinforced periodic multiphase materials that was developed to overcome the original model's shortcomings. The high-fidelity version predicts the local stress and strain fields with dramatically greater accuracy relative to the original model through the use of a better displacement field representation. Herein, we test the high-fidelity model's predictive capability in estimating the elastic moduli of periodic composites characterized by repeating unit cells obtained by rotation of an infinite square fiber array through an angle about the fiber axis. Such repeating unit cells may contain a few or many fibers, depending on the rotation angle. In order to analyze such multi-inclusion repeating unit cells efficiently, the high-fidelity micromechanics model's framework is reformulated using the local/global stiffness matrix approach. The excellent agreement with the corresponding results obtained from the standard transformation equations confirms the new model's predictive capability for periodic composites characterized by multi-inclusion repeating unit cells lacking planes of material symmetry. Comparison of the effective moduli and local stress fields with the corresponding results obtained from the original Generalized Method of Cells dramatically highlights the original model's shortcomings for certain classes of unidirectional composites.
Multisensor Arrays for Greater Reliability and Accuracy
NASA Technical Reports Server (NTRS)
Immer, Christopher; Eckhoff, Anthony; Lane, John; Perotti, Jose; Randazzo, John; Blalock, Norman; Ree, Jeff
2004-01-01
Arrays of multiple, nominally identical sensors with sensor-output-processing electronic hardware and software are being developed in order to obtain accuracy, reliability, and lifetime greater than those of single sensors. The conceptual basis of this development lies in the statistical behavior of multiple sensors and a multisensor-array (MSA) algorithm that exploits that behavior. In addition, advances in microelectromechanical systems (MEMS) and integrated circuits are exploited. A typical sensor unit according to this concept includes multiple MEMS sensors and sensor-readout circuitry fabricated together on a single chip and packaged compactly with a microprocessor that performs several functions, including execution of the MSA algorithm. In the MSA algorithm, the readings from all the sensors in an array at a given instant of time are compared and the reliability of each sensor is quantified. This comparison of readings and quantification of reliabilities involves the calculation of the ratio between every sensor reading and every other sensor reading, plus calculation of the sum of all such ratios. Then one output reading for the given instant of time is computed as a weighted average of the readings of all the sensors. In this computation, the weight for each sensor is the aforementioned value used to quantify its reliability. In an optional variant of the MSA algorithm that can be implemented easily, a running sum of the reliability value for each sensor at previous time steps as well as at the present time step is used as the weight of the sensor in calculating the weighted average at the present time step. In this variant, the weight of a sensor that continually fails gradually decreases, so that eventually, its influence over the output reading becomes minimal: In effect, the sensor system "learns" which sensors to trust and which not to trust. 
The MSA algorithm incorporates a criterion for deciding whether there remain enough sensor readings that approximate each other sufficiently closely to constitute a majority for the purpose of quantifying reliability. This criterion is, simply, that if there do not exist at least three sensors having weights greater than a prescribed minimum acceptable value, then the array as a whole is deemed to have failed.
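As an illustration, the comparison-and-weighting step described above can be sketched in code. The pairwise ratios, the sum of ratios, the weighted average, the running-sum variant, and the three-sensor failure criterion all come from the description above; the exact formula mapping ratio sums to reliability weights is not given, so the mapping below (and the 0.5 minimum acceptable weight) is a hypothetical choice for illustration only.

```python
import numpy as np

def msa_fuse(readings, min_weight=0.5, history=None):
    """One step of a multisensor-array (MSA) fusion sketch.

    readings : 1-D array of nominally identical sensor readings.
    history  : optional running sums of past reliability values
               (the optional variant described above).
    Returns (fused_reading, weights, ok); ok is False when fewer than
    three sensors exceed min_weight, i.e., the array is deemed failed.
    """
    r = np.asarray(readings, dtype=float)
    n = r.size
    # Ratio of every reading to every other reading (the comparison step);
    # a sensor agreeing with the majority has a ratio-sum near n - 1.
    ratio_sums = (r[:, None] / r[None, :]).sum(axis=1) - 1.0  # drop self-ratio
    # Hypothetical mapping from ratio-sum deviation to a (0, 1] weight.
    weights = 1.0 / (1.0 + np.abs(ratio_sums - (n - 1)))
    # Failure criterion: at least three sensors must remain trustworthy.
    ok = np.count_nonzero(weights > min_weight) >= 3
    if history is not None:
        # Running-sum variant: a chronically failing sensor fades out.
        history += weights
        weights = history
    fused = float(np.average(r, weights=weights))
    return fused, weights, ok
```

For readings [10.0, 10.1, 9.9, 25.0, 10.05], the outlying fourth sensor receives roughly a quarter of the others' weight, so the fused value stays near the majority cluster; repeated application of the running-sum variant suppresses it further.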
Myocardial scar segmentation from magnetic resonance images using convolutional neural network
NASA Astrophysics Data System (ADS)
Zabihollahy, Fatemeh; White, James A.; Ukwatta, Eranga
2018-02-01
Accurate segmentation of myocardial fibrosis or scar may provide important advancements for the prediction and management of malignant ventricular arrhythmias in patients with cardiovascular disease. In this paper, we propose a semi-automated method for segmentation of myocardial scar from late gadolinium enhancement magnetic resonance images (LGE-MRI) using a convolutional neural network (CNN). In contrast to image intensity-based methods, CNN-based algorithms have the potential to improve the accuracy of scar segmentation through the creation of high-level features from a combination of convolutional, detection, and pooling layers. Our algorithm was trained using 2,336,703 image patches extracted from 420 slices of five 3D LGE-MR datasets, then validated on 2,204,178 patches from a testing dataset of seven 3D LGE-MR images comprising 624 slices, all obtained from patients with chronic myocardial infarction. For evaluation of the algorithm, we compared the algorithm-generated segmentations to manual delineations by experts. Our CNN-based method achieved an average Dice similarity coefficient (DSC), precision, and recall of 94.50 ± 3.62%, 96.08 ± 3.10%, and 93.96 ± 3.75%, respectively. Compared with several intensity threshold-based methods for scar segmentation, the results of our method show greater agreement with manual expert segmentation.
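The three accuracy measures reported above are standard overlap statistics computable from binary masks; a minimal sketch:

```python
import numpy as np

def segmentation_scores(pred, truth):
    """Dice similarity coefficient, precision, and recall for binary
    segmentation masks (predicted vs. expert manual delineation)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()    # scar voxels found
    fp = np.logical_and(pred, ~truth).sum()   # false detections
    fn = np.logical_and(~pred, truth).sum()   # missed scar voxels
    dice = 2 * tp / (2 * tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return dice, precision, recall
```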
Weighted least squares techniques for improved received signal strength based localization.
Tarrío, Paula; Bernardos, Ana M; Casar, José R
2011-01-01
The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
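The circular variant of the proposed weighted positioning can be sketched as a weighted least-squares solution of the linearized circle equations. Linearizing against the first anchor is the standard approach; the paper's specific weighting (derived from the accuracies of the individual measurements) is not reproduced here, so the `weights` argument is a stand-in.

```python
import numpy as np

def wls_circular(anchors, dists, weights):
    """Weighted least-squares circular positioning sketch.

    anchors : (n, 2) known node positions.
    dists   : n range estimates (e.g., from an RSS propagation model).
    weights : n measurement confidences (larger = more trusted); the
              reference anchor's weight is absorbed by the linearization
              in this simple sketch.
    """
    p = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Subtract the first circle equation from the others: A x = b.
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum())
    W = np.diag(w[1:])
    # Weighted normal equations: x = (A^T W A)^-1 A^T W b.
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```

With exact ranges the estimate reproduces the true position; with noisy ranges, down-weighting the least reliable measurements reduces the error, which is the point of the technique.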
Investigation of a Coupled Arrhenius-Type/Rossard Equation of AH36 Material.
Qin, Qin; Tian, Ming-Liang; Zhang, Peng
2017-04-13
High-temperature tensile tests of AH36 material over a wide range of temperatures (1173-1573 K) and strain rates (10⁻⁴ to 10⁻² s⁻¹) were performed using a Gleeble system. These experimental stress-strain data were adopted to develop the constitutive equation. Constitutive equations for AH36 were formulated based on the modified Arrhenius-type equation and the modified Rossard equation, respectively. The results indicate that the constitutive behavior is strongly influenced by temperature and strain, especially strain. Moreover, there is good agreement between the predictions of the modified Arrhenius-type equation and the experimental results when the strain is greater than 0.02, and good agreement between the predictions of the Rossard equation and the experimental results when the strain is less than 0.02. Therefore, to improve accuracy, a coupled equation combining the modified Arrhenius-type and Rossard equations according to the strain value has been proposed to describe the constitutive behavior of AH36. The correlation coefficient between the computed and experimental flow stress data was 0.998, and the minimum average absolute relative error shows the high accuracy of the coupled equation compared with either modified equation alone.
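The coupled equation amounts to dispatching between the two modified equations at the 0.02 strain threshold. The sketch below uses the standard sinh-form Arrhenius flow-stress expression (via the Zener-Hollomon parameter) and a generic Rossard-type power law; all parameter values are placeholders rather than the fitted AH36 constants, and the strain-dependent "modified" coefficients are omitted for brevity.

```python
import math

def sigma_arrhenius(strain, strain_rate, T, A=1e12, alpha=0.012, n=5.0, Q=350e3):
    """Arrhenius-type flow stress (standard sinh form); branch used when
    strain > 0.02. Placeholder constants, not the fitted AH36 values."""
    R = 8.314                                   # gas constant, J/(mol K)
    Z = strain_rate * math.exp(Q / (R * T))     # Zener-Hollomon parameter
    x = (Z / A) ** (1.0 / n)
    return (1.0 / alpha) * math.log(x + math.sqrt(x * x + 1.0))

def sigma_rossard(strain, strain_rate, T, K=2.0, m1=0.2, m2=0.1, m3=3000.0):
    """Rossard-type power-law flow stress; branch used when strain <= 0.02.
    Placeholder constants."""
    return K * strain ** m1 * strain_rate ** m2 * math.exp(m3 / T)

def sigma_coupled(strain, strain_rate, T, threshold=0.02):
    """Coupled model: select the branch by strain, as proposed above."""
    model = sigma_arrhenius if strain > threshold else sigma_rossard
    return model(strain, strain_rate, T)
```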
Miniature Convection Cooled Plug-type Heat Flux Gauges
NASA Technical Reports Server (NTRS)
Liebert, Curt H.
1994-01-01
Tests and analysis of a new miniature plug-type heat flux gauge configuration are described. This gauge can simultaneously measure heat flux on two opposed active surfaces when heat flux levels are equal to or greater than about 0.2 MW/m². The performance of this dual active surface gauge was investigated over a wide transient and steady heat flux and temperature range. The tests were performed by radiatively heating the front surface with an argon arc lamp while the back surface was convection cooled with air. Accuracy is about ±20 percent. The gauge is responsive to fast heat flux transients and is designed to withstand the high-temperature (1300 K), high-pressure (15 MPa), erosive, and corrosive environments in modern engines. This gauge can be used to measure heat flux on the surfaces of internally cooled apparatus such as turbine blades and combustors used in jet propulsion systems, and on the surfaces of hypersonic vehicles. Heat flux measurement accuracy is not compromised when design considerations call for various size gauges to be fabricated into alloys of various shapes and properties. Significant gauge temperature reductions (120 K), which can lead to potential gauge durability improvement, were obtained when the gauges were air-cooled by forced convection.
van Valkengoed, I G; Boeke, A J; Morré, S A; van den Brule, A J; Meijer, C J; Devillé, W; Bouter, L M
2000-10-01
In an inner-city population with a low prevalence of Chlamydia trachomatis infection, selective screening may be indicated to increase the efficiency of screening. To evaluate the performance of sets of selective screening criteria for asymptomatic Chlamydia trachomatis infection in an inner-city population, criteria were derived from reports of studies carried out in various settings. A total of 5714 women aged 15 to 40 years living in Amsterdam were invited for screening based on home-obtained urine specimens. The criteria identified from the literature were applied to the screening population. A calculated area under the receiver operating characteristic curve (AUC) of greater than 0.75 was considered a good measure of diagnostic accuracy. Of the four sets of criteria, selection based on the following determinants showed the highest diagnostic accuracy: younger than 25 years, being unmarried, number of partners during the previous 6 months, Surinam or Antillean origin (black), and vaginal douching (AUC, 0.67; 95% CI, 0.65-0.69). Selection based on age alone showed an AUC of 0.57 (95% CI, 0.55-0.69). The performance of selective screening criteria for asymptomatic C trachomatis infection in an inner-city population in Amsterdam was insufficient to recommend its implementation in practice.
Inversion of oceanic constituents in case I and II waters with genetic programming algorithms.
Chami, Malik; Robilliard, Denis
2002-10-20
A stochastic inverse technique based on a genetic programming (GP) algorithm was developed to invert oceanic constituents from simulated data for case I and case II water applications. The simulations were carried out with the Ordre Successifs Ocean Atmosphere (OSOA) radiative transfer model. They include the effects of oceanic substances such as algal-related chlorophyll, nonchlorophyllous suspended matter, and dissolved organic matter. The synthetic data set also takes into account the directional effects of particles through a variation of their phase function that makes the simulated data realistic. It is shown that GP can be successfully applied to the inverse problem with acceptable stability in the presence of realistic noise in the data. GP is compared with neural network methodology for case I waters; GP exhibits similar retrieval accuracy, which is greater than for traditional techniques such as band ratio algorithms. The application of GP to real satellite data [a Sea-viewing Wide Field-of-view Sensor (SeaWiFS)] was also carried out for case I waters as a validation. Good agreement was obtained when GP results were compared with the SeaWiFS empirical algorithm. For case II waters the accuracy of GP is less than 33%, which remains satisfactory, at the present time, for remote-sensing purposes.
Marchetti, Michael A; Codella, Noel C F; Dusza, Stephen W; Gutman, David A; Helba, Brian; Kalloo, Aadi; Mishra, Nabin; Carrera, Cristina; Celebi, M Emre; DeFazio, Jennifer L; Jaimes, Natalia; Marghoob, Ashfaq A; Quigley, Elizabeth; Scope, Alon; Yélamos, Oriol; Halpern, Allan C
2018-02-01
Computer vision may aid in melanoma detection. We sought to compare melanoma diagnostic accuracy of computer algorithms to dermatologists using dermoscopic images. We conducted a cross-sectional study using 100 randomly selected dermoscopic images (50 melanomas, 44 nevi, and 6 lentigines) from an international computer vision melanoma challenge dataset (n = 379), along with individual algorithm results from 25 teams. We used 5 methods (nonlearned and machine learning) to combine individual automated predictions into "fusion" algorithms. In a companion study, 8 dermatologists classified the lesions in the 100 images as either benign or malignant. The average sensitivity and specificity of dermatologists in classification was 82% and 59%. At 82% sensitivity, dermatologist specificity was similar to the top challenge algorithm (59% vs. 62%, P = .68) but lower than the best-performing fusion algorithm (59% vs. 76%, P = .02). Receiver operating characteristic area of the top fusion algorithm was greater than the mean receiver operating characteristic area of dermatologists (0.86 vs. 0.71, P = .001). The dataset lacked the full spectrum of skin lesions encountered in clinical practice, particularly banal lesions. Readers and algorithms were not provided clinical data (eg, age or lesion history/symptoms). Results obtained using our study design cannot be extrapolated to clinical practice. Deep learning computer vision systems classified melanoma dermoscopy images with accuracy that exceeded some but not all dermatologists. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
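The simplest nonlearned fusion of the teams' predictions is a thresholded mean of per-algorithm malignancy scores; whether the study's nonlearned combination methods took exactly this form is an assumption, so the sketch is illustrative only.

```python
import numpy as np

def mean_fusion(scores, threshold=0.5):
    """Nonlearned 'fusion' sketch: average per-algorithm malignancy
    scores across algorithms, then threshold to a benign/malignant call.

    scores : array of shape (n_algorithms, n_lesions), each entry a
             score in [0, 1] from one team's automated classifier.
    """
    s = np.asarray(scores, dtype=float)
    return s.mean(axis=0) >= threshold   # True = called malignant
```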
Akuffo, Kwadwo Owusu; Beatty, Stephen; Stack, Jim; Peto, Tunde; Leung, Irene; Corcoran, Laura; Power, Rebecca; Nolan, John M
2015-12-01
We compared macular pigment (MP) measurements using customized heterochromatic flicker photometry (Macular Metrics Densitometer) and dual-wavelength fundus autofluorescence (Heidelberg Spectralis HRA + OCT MultiColor) in subjects with early age-related macular degeneration (AMD). Macular pigment was measured in 117 subjects with early AMD (age, 44-88 years) using the Densitometer and Spectralis, as part of the Central Retinal Enrichment Supplementation Trial (CREST; ISRCTN13894787). Data from the baseline and 6-month study visits were used for the analyses. Agreement was investigated at four different retinal eccentricities, graphically and using indices of agreement, including the Pearson correlation coefficient (precision), accuracy coefficient, and concordance correlation coefficient (ccc). Agreement was poor between the Densitometer and Spectralis at all eccentricities, at baseline (e.g., at 0.25° eccentricity, accuracy = 0.63, precision = 0.35, ccc = 0.22) and at 6 months (e.g., at 0.25° eccentricity, accuracy = 0.52, precision = 0.43, ccc = 0.22). Agreement between the two devices was significantly greater for males at 0.5° and 1.0° of eccentricity. At all eccentricities, agreement was unaffected by cataract grade. In subjects with early AMD, MP measurements obtained using the Densitometer and Spectralis are not statistically comparable and should not be used interchangeably in either the clinical or research setting. Despite this lack of agreement, statistically significant increases in MP, following 6 months of supplementation with macular carotenoids, were detected with each device, confirming that these devices are capable of measuring change in MP within subjects over time (http://www.controlled-trials.com number, ISRCTN13894787).
Evaluating Rater Accuracy in Rater-Mediated Assessments Using an Unfolding Model
ERIC Educational Resources Information Center
Wang, Jue; Engelhard, George, Jr.; Wolfe, Edward W.
2016-01-01
The number of performance assessments continues to increase around the world, and it is important to explore new methods for evaluating the quality of ratings obtained from raters. This study describes an unfolding model for examining rater accuracy. Accuracy is defined as the difference between observed and expert ratings. Dichotomous accuracy…
NASA Astrophysics Data System (ADS)
Gabor, A.; Jivanescu, A.; Zaharia, C.; Hategan, S.; Topala, F. I.; Levai, C. M.; Negrutiu, M. L.; Sinescu, C.; Duma, V.-F.; Bradu, A.; Podoleanu, A. Gh.
2016-03-01
Digital impressions were introduced to overcome some of the obstacles posed by traditional impression materials and techniques. The aim of this in vitro study is to compare the accuracy of all-ceramic crowns obtained with digital impressions and CAD/CAM technology with the accuracy of those obtained with conventional impression techniques. Two groups of 10 crowns each were considered. The digital data obtained from Group 1 were processed and the all-ceramic crowns were milled with CAD/CAM technology (CEREC MCX, Sirona). The all-ceramic crowns in Group 2 were obtained with the classical pressing technique (e.max, Ivoclar Vivadent). The evaluation of the marginal adaptation was performed with Time Domain Optical Coherence Tomography (TD OCT), working at a wavelength of 1300 nm. Three-dimensional (3D) reconstructions of the selected areas were obtained. Based on the findings of this study, one may conclude that the marginal accuracy of all-ceramic crowns fabricated with digital impressions and the CAD/CAM technique is superior to that of the conventional impression technique.
Compressed Sensing for Chemistry
NASA Astrophysics Data System (ADS)
Sanders, Jacob Nathan
Many chemical applications, from spectroscopy to quantum chemistry, involve measuring or computing a large amount of data, and then compressing this data to retain the most chemically-relevant information. In contrast, compressed sensing is an emergent technique that makes it possible to measure or compute an amount of data that is roughly proportional to its information content. In particular, compressed sensing enables the recovery of a sparse quantity of information from significantly undersampled data by solving an ℓ1-optimization problem. This thesis presents the application of compressed sensing to problems in chemistry. The first half of this thesis is about spectroscopy. Compressed sensing is used to accelerate the computation of vibrational and electronic spectra from real-time time-dependent density functional theory simulations. Using compressed sensing as a drop-in replacement for the discrete Fourier transform, well-resolved frequency spectra are obtained at one-fifth the typical simulation time and computational cost. The technique is generalized to multiple dimensions and applied to two-dimensional absorption spectroscopy using experimental data collected on atomic rubidium vapor. Finally, a related technique known as super-resolution is applied to open quantum systems to obtain realistic models of a protein environment, in the form of atomistic spectral densities, at lower computational cost. The second half of this thesis deals with matrices in quantum chemistry. It presents a new use of compressed sensing for more efficient matrix recovery whenever the calculation of individual matrix elements is the computational bottleneck. The technique is applied to the computation of second-derivative Hessian matrices in electronic structure calculations to obtain the vibrational modes and frequencies of molecules. When applied to anthracene, this technique results in a threefold speed-up, with greater speed-ups possible for larger molecules.
The implementation of the method in the Q-Chem commercial software package is described. Moreover, the method provides a general framework for bootstrapping cheap low-accuracy calculations in order to reduce the required number of expensive high-accuracy calculations.
A fast RCS accuracy assessment method for passive radar calibrators
NASA Astrophysics Data System (ADS)
Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, QI
2016-10-01
In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is often deformed during transportation and installation, or by wind and gravity while permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy assessment method based on a 3-D measuring instrument and RCS simulation was proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm was selected and its simulation accuracy assessed. In the second step, the 3-D measuring instrument was selected and its measuring accuracy evaluated. Once the accuracy of the selected RCS simulation algorithm and 3-D measuring instrument was satisfactory for the RCS accuracy assessment, the 3-D structure of the corner reflector was obtained by the 3-D measuring instrument, and the RCSs of the obtained 3-D structure and the corresponding ideal structure were calculated with the selected RCS simulation algorithm. The final RCS accuracy was the absolute difference of the two RCS calculation results. The advantage of the proposed method is that it can be applied outdoors easily, avoiding the correlation among the plate edge length error, plate orthogonality error, and plate curvature error. The accuracy of this method is higher than that of the method using the distortion equation. At the end of the paper, a measurement example is presented to show the performance of the proposed method.
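The final accuracy figure is the absolute difference between the simulated RCSs of the measured and ideal structures. As a toy illustration of that last step only, the sketch below uses the standard peak-RCS formula for a triangular trihedral corner reflector and models just edge-length deformation; the paper's full method instead simulates the RCS of the complete measured 3-D structure, capturing orthogonality and curvature errors as well.

```python
import math

def trihedral_rcs(edge, wavelength):
    """Peak RCS of an ideal triangular trihedral corner reflector
    (standard physical-optics result: sigma = 4*pi*a^4 / (3*lambda^2))."""
    return 4.0 * math.pi * edge ** 4 / (3.0 * wavelength ** 2)

def rcs_accuracy_db(edge_measured, edge_ideal, wavelength):
    """Absolute RCS difference (dB) between the reflector reconstructed
    from 3-D measurements and the ideal one -- the final accuracy figure,
    here reduced to a single edge-length deformation."""
    s_meas = trihedral_rcs(edge_measured, wavelength)
    s_ideal = trihedral_rcs(edge_ideal, wavelength)
    return abs(10.0 * math.log10(s_meas / s_ideal))
```

A 1% edge-length error at a 3 cm wavelength, for example, already shifts the peak RCS by about 0.17 dB.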
Fanchon, Louise M; Dogan, Snjezana; Moreira, Andre L; Carlin, Sean A; Schmidtlein, C Ross; Yorke, Ellen; Apte, Aditya P; Burger, Irene A; Durack, Jeremy C; Erinjeri, Joseph P; Maybody, Majid; Schöder, Heiko; Siegelbaum, Robert H; Sofocleous, Constantinos T; Deasy, Joseph O; Solomon, Stephen B; Humm, John L; Kirov, Assen S
2015-04-01
Core biopsies obtained using PET/CT guidance contain bound radiotracer and therefore provide information about tracer uptake in situ. Our goal was to develop a method for quantitative autoradiography of biopsy specimens (QABS), to use this method to correlate (18)F-FDG tracer uptake in situ with histopathology findings, and to briefly discuss its potential application. Twenty-seven patients referred for a PET/CT-guided biopsy of (18)F-FDG-avid primary or metastatic lesions in different locations consented to participate in this institutional review board-approved study, which complied with the Health Insurance Portability and Accountability Act. Autoradiography of biopsy specimens obtained using 5 types of needles was performed immediately after extraction. The response of autoradiography imaging plates was calibrated using dummy specimens with known activity obtained using 2 core-biopsy needle sizes. The calibration curves were used to quantify the activity along biopsy specimens obtained with these 2 needles and to calculate the standardized uptake value, SUVARG. Autoradiography images were correlated with histopathologic findings and fused with PET/CT images demonstrating the position of the biopsy needle within the lesion. Logistic regression analysis was performed to search for an SUVARG threshold distinguishing benign from malignant tissue in liver biopsy specimens. Pearson correlation between SUVARG of the whole biopsy specimen and average SUVPET over the voxels intersected by the needle in the fused PET/CT image was calculated. Activity concentrations were obtained using autoradiography for 20 specimens extracted with 18- and 20-gauge needles. The probability of finding malignancy in a specimen is greater than 50% (95% confidence) if SUVARG is greater than 7.3. For core specimens with preserved shape and orientation and in the absence of motion, one can achieve autoradiography, CT, and PET image registration with spatial accuracy better than 2 mm. 
The correlation coefficient between the mean specimen SUVARG and SUVPET was 0.66. Performing QABS on core-biopsy specimens obtained using PET/CT guidance enables in situ correlation of (18)F-FDG tracer uptake and histopathology on a millimeter scale. QABS promises to provide useful information for guiding interventional radiology procedures and localized therapies and for in situ high-spatial-resolution validation of radiopharmaceutical uptake. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Application of high-precision two-way ranging to Galileo Earth-1 encounter navigation
NASA Technical Reports Server (NTRS)
Pollmeier, V. M.; Thurman, S. W.
1992-01-01
The application of precision two-way ranging to orbit determination with relatively short data arcs is investigated for the Galileo spacecraft's approach to its first Earth encounter (December 8, 1990). Analysis of previous S-band (2.3-GHz) ranging data acquired from Galileo indicated that under good signal conditions submeter precision and 10-m ranging accuracy were achieved. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. A range data filtering technique, in which explicit modeling of range measurement bias parameters for each station pass is utilized, is shown to largely remove the systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. The accuracy of the Galileo orbit solutions obtained with S-band Doppler and precision ranging were found to be consistent with simple theoretical calculations, which predicted that angular accuracies of 0.26-0.34 microrad were achievable. In addition, the navigation accuracy achieved with precision ranging was marginally better than that obtained using delta-differenced one-way range (delta DOR), the principal data type that was previously used to obtain spacecraft angular position measurements operationally.
Chen, L; Schenkel, F; Vinsky, M; Crews, D H; Li, C
2013-10-01
In beef cattle, phenotypic data that are difficult and/or costly to measure, such as feed efficiency, and DNA marker genotypes are usually available on a small number of animals of different breeds or populations. To achieve maximal accuracy of genomic prediction using the phenotype and genotype data, strategies for forming a training population to predict genomic breeding values (GEBV) of the selection candidates need to be evaluated. In this study, we examined the accuracy of predicting GEBV for residual feed intake (RFI) based on 522 Angus and 395 Charolais steers genotyped for SNPs using the Illumina BovineSNP50 BeadChip for 3 training population forming strategies: within breed, across breed, and by pooling data from the 2 breeds (i.e., combined). Two other scenarios with the training and validation data split by birth year and by sire family within a breed were also investigated to assess the impact of genetic relationships on the accuracy of genomic prediction. Three statistical methods, including best linear unbiased prediction with the relationship matrix defined based on the pedigree (PBLUP), based on the SNP genotypes (GBLUP), and a Bayesian method (BayesB), were used to predict the GEBV. The results showed that the accuracy of GEBV prediction was highest when the prediction was within breed and when the validation population had greater genetic relationships with the training population, with a maximum of 0.58 for Angus and 0.64 for Charolais. The within-breed prediction accuracies dropped to 0.29 and 0.38, respectively, when the validation populations had a minimal pedigree link with the training population. When the training population of a different breed was used to predict the GEBV of the validation population, that is, across-breed genomic prediction, the accuracies were further reduced to 0.10 to 0.22, depending on the prediction method used.
Pooling data from the 2 breeds to form the training population resulted in accuracies increased to 0.31 and 0.43, respectively, for the Angus and Charolais validation populations. The results suggested that the genetic relationship of selection candidates with the training population has a greater impact on the accuracy of GEBV using the Illumina Bovine SNP50 Beadchip. Pooling data from different breeds to form the training population will improve the accuracy of across breed genomic prediction for RFI in beef cattle.
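The GBLUP prediction compared above can be sketched with the usual mixed-model identity: GEBVs of validation animals follow from the genomic relationship matrix and the training phenotypes. This is the textbook form, not the authors' exact implementation, and the variance ratio `lam` (residual to genetic variance) is assumed known.

```python
import numpy as np

def gblup_gebv(G, y_train, train_idx, valid_idx, lam=1.0):
    """GBLUP sketch: predict GEBV of validation animals.

    G         : genomic relationship matrix over all animals.
    y_train   : (pre-adjusted) phenotypes of the training animals.
    lam       : ratio of residual to genetic variance, assumed known.
    GEBV_valid = G_vt (G_tt + lam*I)^-1 y_train.
    """
    Gtt = G[np.ix_(train_idx, train_idx)]
    Gvt = G[np.ix_(valid_idx, train_idx)]
    alpha = np.linalg.solve(Gtt + lam * np.eye(len(train_idx)), y_train)
    return Gvt @ alpha
```

The formula makes the paper's central finding intuitive: when G_vt is small (weak genomic relationship between validation and training animals), the predicted GEBVs shrink toward zero and prediction accuracy drops.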
Hans-Erik Andersen; Tobey Clarkin; Ken Winterberger; Jacob Strunk
2009-01-01
The accuracy of recreational- and survey-grade global positioning system (GPS) receivers was evaluated across a range of forest conditions in the Tanana Valley of interior Alaska. High-accuracy check points, established using high-order instruments and closed-traverse surveying methods, were then used to evaluate the accuracy of positions acquired in different forest...
Discussion on accuracy degree evaluation of accident velocity reconstruction model
NASA Astrophysics Data System (ADS)
Zou, Tiefang; Dai, Yingbiao; Cai, Ming; Liu, Jike
In order to investigate the applicability of accident velocity reconstruction models in different cases, a method for evaluating the accuracy degree of an accident velocity reconstruction model is given. Based on the theoretical and calculated pre-crash velocities, an accuracy-degree evaluation formula is obtained. In a numerical simulation case, the accuracy degrees and applicability of two accident velocity reconstruction models are analyzed; the results show that the method is feasible in practice.
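The abstract does not state the evaluation formula itself; a plausible form, based on the relative deviation between the theoretical and calculated pre-crash velocities, is sketched below purely as an assumption.

```python
def accuracy_degree(v_theory, v_calc):
    """Hypothetical accuracy degree: 1 minus the relative deviation of
    the reconstructed (calculated) pre-crash velocity from its
    theoretical value; 1.0 means a perfect reconstruction."""
    return 1.0 - abs(v_calc - v_theory) / v_theory

def rank_models(v_theory, v_calcs):
    """Order candidate reconstruction models by accuracy degree
    (best first), mirroring the paper's two-model comparison."""
    return sorted(range(len(v_calcs)),
                  key=lambda i: accuracy_degree(v_theory, v_calcs[i]),
                  reverse=True)
```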
Effectiveness of link prediction for face-to-face behavioral networks.
Tsugawa, Sho; Ohsaki, Hiroyuki
2013-01-01
Research on link prediction for social networks has been actively pursued. In link prediction for a given social network obtained from time-windowed observation, new link formation in the network is predicted from the topology of the obtained network. In contrast, recent advances in sensing technology have made it possible to obtain face-to-face behavioral networks, which are social networks representing face-to-face interactions among people. However, the effectiveness of link prediction techniques for face-to-face behavioral networks has not yet been explored in depth. To clarify this point, here we investigate the accuracy of conventional link prediction techniques for networks obtained from the history of face-to-face interactions among participants at an academic conference. Our findings were (1) that conventional link prediction techniques predict new link formation with a precision of 0.30-0.45 and a recall of 0.10-0.20, (2) that prolonged observation of social networks often degrades the prediction accuracy, (3) that the proposed decaying weight method leads to higher prediction accuracy than can be achieved by observing all records of communication and simply using them unmodified, and (4) that the prediction accuracy for face-to-face behavioral networks is relatively high compared to that for non-social networks, but not as high as for other types of social networks.
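Finding (3)'s decaying-weight method can be sketched by aggregating time-stamped face-to-face contacts into edge weights that decay with age, then scoring candidate links with a weighted common-neighbors index. The exponential decay form and the constant `tau` are assumptions; the abstract does not specify the decay function.

```python
import math
from collections import defaultdict

def decayed_weights(events, now, tau=7.0):
    """Aggregate time-stamped contacts (u, v, t) into edge weights that
    decay exponentially with age, so old interactions count less."""
    w = defaultdict(float)
    for u, v, t in events:
        w[frozenset((u, v))] += math.exp(-(now - t) / tau)
    return w

def weighted_common_neighbors(w, x, y):
    """Weighted common-neighbors link-prediction score for pair (x, y)."""
    nbrs = defaultdict(dict)
    for edge, weight in w.items():
        a, b = tuple(edge)
        nbrs[a][b] = weight
        nbrs[b][a] = weight
    common = (set(nbrs[x]) & set(nbrs[y])) - {x, y}
    return sum(nbrs[x][z] + nbrs[y][z] for z in common)
```

Pairs with high scores are predicted to form a link in the next observation window; unlike using all communication records unmodified, stale contacts contribute little, which is the behavior the abstract credits for the higher accuracy.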
NASA Astrophysics Data System (ADS)
Godah, Walyeldeen; Krynski, Jan; Szelachowska, Malgorzata
2018-05-01
The objective of this paper is to demonstrate the usefulness of absolute gravity data for the validation of Global Geopotential Models (GGMs). It is also aimed at improving quasigeoid heights determined from satellite-only GGMs using absolute gravity data. The area of Poland was selected as the study area because it is uniquely covered by a homogeneously distributed set of absolute gravity data. The gravity anomalies obtained from GGMs were validated using the corresponding ones determined from absolute gravity data. The spectral enhancement method was implemented to overcome the spectral inconsistency in the data being validated. The quasigeoid heights obtained from the satellite-only GGM, as well as from the satellite-only GGM in combination with absolute gravity data, were evaluated against high-accuracy GNSS/levelling data. The estimated accuracy of the gravity anomalies obtained from the GGMs investigated is 1.7 mGal. After accounting for the omitted gravity signal (e.g. from degree and order 101 to 2190), satellite-only GGMs can be validated at the 1 mGal accuracy level using absolute gravity data. An improvement of up to 59% in the accuracy of quasigeoid heights obtained from the satellite-only GGM can be observed when combining the satellite-only GGM with absolute gravity data.
Infrared Imagery of Shuttle (IRIS). Task 2, summary report
NASA Technical Reports Server (NTRS)
Chocol, C. J.
1978-01-01
End-to-end tests of a 16-element indium antimonide sensor array and 10 channels of associated electronic signal processing were completed. Quantitative data were gathered on system responsivity, frequency response, noise, stray capacitance effects, and sensor paralleling. These tests verify that the temperature accuracies predicted in the Task 1 study can be obtained with a very carefully designed electro-optical flight system. High-quality preflight and in-flight calibration is mandatory to obtain these accuracies. Also, optical crosstalk in the array-dewar assembly must be carefully eliminated by its design. Tests of the scaled-up tracking system reticle also demonstrate that the predicted tracking system accuracies can be met in the flight system. In addition, improvements in the reticle pattern and electronics are possible, which will reduce the complexity of the flight system and increase tracking accuracy.
Reducing errors benefits the field-based learning of a fundamental movement skill in children.
Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W
2013-03-01
Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was ES. The ER program reduced errors by incrementally raising task difficulty, while the ES program incrementally lowered task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reducing performance errors in FMS training resulted in greater learning than a program that did not restrict errors. The reduced cognitive processing costs (effective dual-task performance) associated with such an approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.
Gao, Chun-Hua; Wang, Jun-Yun; Shi, Feng; Steverding, Dietmar; Wang, Xia; Yang, Yue-Tao; Zhou, Xiao-Nong
2018-05-23
The larval stages of the tapeworms Echinococcus granulosus and Echinococcus multilocularis are the causative agents of human cystic echinococcosis (CE) and human alveolar echinococcosis (AE), respectively. Both CE and AE are chronic diseases characterised by long asymptomatic periods of many years. However, early diagnosis of the disease is important if treatment and management of echinococcosis patients are to be successful. A previously developed rapid diagnostic test (RDT) for the differential detection of CE and AE was evaluated under field conditions with finger prick blood samples taken from 1502 people living in the Ganzi Tibetan Autonomous Prefecture, China, a region with a high prevalence for both forms of human echinococcosis. The results were compared with simultaneously obtained abdominal ultrasonographic scans of the individuals. Using the ultrasonography as the gold standard, the sensitivity, specificity, and diagnostic accuracy of the RDT were determined to be greater than 94% for both CE and AE. For CE cases, high detection rates (95.6-98.8%) were found with patients having active cysts while lower detection rates (40.0-68.8%) were obtained with patients having transient or inactive cysts. In contrast, detection rates in AE patients were independent of the lesion type. The positive likelihood ratio of the RDT for CE and AE was greater than 20 and thus fairly high, indicating that a patient with a positive test result has a high probability of having echinococcosis. The results suggest that our previously developed RDT is suitable as a screening tool for the early detection of human echinococcosis in endemic areas.
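The reported positive likelihood ratio follows directly from sensitivity and specificity; a minimal sketch with illustrative counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy, and positive likelihood ratio
    (LR+ = sensitivity / (1 - specificity)) from a 2x2 confusion matrix."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + fn + tn)
    lr_pos = sens / (1 - spec)
    return sens, spec, acc, lr_pos

# Illustrative counts only: 95 true positives, 5 false negatives,
# 1360 true negatives, 42 false positives against the ultrasound gold standard.
sens, spec, acc, lr = diagnostic_metrics(tp=95, fp=42, fn=5, tn=1360)
print(round(sens, 3), round(spec, 3), round(lr, 1))
```

An LR+ above 20 means a positive result shifts the pre-test odds of disease by more than a factor of 20, which is why the authors call it fairly high.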
Audience preferences are predicted by temporal reliability of neural processing
Dmochowski, Jacek P.; Bezdek, Matthew A.; Abelson, Brian P.; Johnson, John S.; Schumacher, Eric H.; Parra, Lucas C.
2014-01-01
Naturalistic stimuli evoke highly reliable brain activity across viewers. Here we record neural activity from a group of naive individuals while viewing popular, previously-broadcast television content for which the broad audience response is characterized by social media activity and audience ratings. We find that the level of inter-subject correlation in the evoked encephalographic responses predicts the expressions of interest and preference among thousands. Surprisingly, ratings of the larger audience are predicted with greater accuracy than those of the individuals from whom the neural data is obtained. An additional functional magnetic resonance imaging study employing a separate sample of subjects shows that the level of neural reliability evoked by these stimuli covaries with the amount of blood-oxygenation-level-dependent (BOLD) activation in higher-order visual and auditory regions. Our findings suggest that stimuli which we judge favourably may be those to which our brains respond in a stereotypical manner shared by our peers. PMID:25072833
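Inter-subject correlation of evoked responses can be computed, in its simplest form, as the mean pairwise Pearson correlation between subjects' response time courses. This is a simplified proxy for the paper's measure (which extracts correlated components from multichannel EEG); the toy data are illustrative:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def intersubject_correlation(responses):
    """Mean pairwise Pearson correlation across subjects' time courses --
    one simple way to quantify the reliability of neural responses."""
    n = len(responses)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(pearson(responses[i], responses[j]) for i, j in pairs) / len(pairs)

# Toy time courses: subjects 1-2 track a shared signal, subject 3 does not.
s1 = [1.0, 2.0, 3.0, 2.0, 1.0]
s2 = [1.1, 2.2, 2.9, 2.1, 0.9]
s3 = [2.0, 1.0, 2.0, 3.0, 2.0]
isc = intersubject_correlation([s1, s2, s3])
print(round(isc, 3))  # -> 0.315
```

Stimuli that drive viewers' responses into lockstep yield high ISC; the paper's finding is that this quantity tracks the preferences of the broad audience.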
Development of an X-ray surface analyzer for planetary exploration
NASA Technical Reports Server (NTRS)
Clark, B. C.
1972-01-01
An ultraminiature X-ray fluorescence spectrometer was developed which can obtain data on element composition not provided by present spacecraft instrumentation. The apparatus employs two radioisotope sources (Fe-55 and Cd-109) which irradiate adjacent areas on a soil sample. Fluorescent X-rays emitted by the sample are detected by four thin-window proportional counters. Using pulse-height discrimination, the energy spectra are determined. Virtually all elements above sodium in the periodic table are detected if present at sufficient levels. Minimum detection limits range from 30 ppm to several percent, depending upon the element and the matrix. For most elements, they are below 0.5 percent. Accuracies likewise depend upon the matrix, but are generally better than plus or minus 0.5 percent for all elements of atomic number greater than 14. Elements below sodium are also detected, but as a single group.
Machine Learning to Differentiate Between Positive and Negative Emotions Using Pupil Diameter
Babiker, Areej; Faye, Ibrahima; Prehn, Kristin; Malik, Aamir
2015-01-01
Pupil diameter (PD) has been suggested as a reliable parameter for identifying an individual’s emotional state. In this paper, we introduce a machine learning technique to detect and differentiate between positive and negative emotions. We presented 30 participants with positive and negative sound stimuli and recorded their pupillary responses. The results showed a significant increase in pupil dilation during the processing of negative and positive sound stimuli, with a greater increase for negative stimuli. We also found a more sustained dilation for negative compared to positive stimuli at the end of the trial, which was utilized to differentiate between positive and negative emotions using a machine learning approach that gave an accuracy of 96.5% with a sensitivity of 97.93% and a specificity of 98%. The obtained results were validated using another dataset, designed for a different study, which was recorded while 30 participants processed word pairs with positive and negative emotional valence. PMID:26733912
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kushner, R.F.; Kunigk, A.; Alspaugh, M.
1990-08-01
The bioelectrical-impedance-analysis (BIA) method accurately measures body composition in weight-stable subjects. This study validates the use of BIA to measure change in body composition. Twelve obese females underwent weight loss at a mean rate of 1.16 kg/wk. Body composition was measured by deuterium oxide dilution (D2O), BIA, and skinfold anthropometry (SFA) at baseline and at 5% decrements in weight. Highly significant correlations were obtained between D2O and BIA (r = 0.971) and between D2O and SFA (r = 0.932). Overall, BIA predicted change in fat-free mass with greater accuracy (to 0.4 kg) and precision (+/- 1.28 kg) than did anthropometry (to 0.8 kg and +/- 2.58 kg, respectively). We conclude that BIA is a useful clinical method for measuring change in body composition.
Inukai, Tomoe; Kumada, Takatsune; Kawahara, Jun-ichiro
2010-05-01
The identification of a central visual target is impaired by the onset of a peripheral distractor. This impairment is said to occur because attentional focus is diverted to the peripheral distractor. We examined whether distractor offset would enhance or reduce attentional capture by manipulating the duration of the distractor. Observers identified a color singleton among a rapid stream of homogeneous nontargets. Peripheral distractors disappeared 43 or 172 msec after onset (the short- and long-duration conditions, respectively). Identification accuracy was greater in the long-duration condition than in the short-duration condition. The same pattern of results was obtained when participants identified a target of a designated color among heterogeneous nontargets when the color of the distractor was the same as that of the target. These findings suggest that attentional capture is driven by both stimulus onset and offset, each of which is susceptible to top-down attentional set.
Use of Feedback in Clinical Prediction
ERIC Educational Resources Information Center
Schroeder, Harold E.
1972-01-01
Results indicated that predictive accuracy is greater when feedback is applied to the basis for the prediction than when applied to "gut" impressions. Judges forming hypotheses were also able to learn from experience. (Author)
Skinner, Sarah
2012-11-01
Magnetic resonance imaging (MRI) is the gold standard in noninvasive investigation of knee pain. It has a very high negative predictive value and may assist in avoiding unnecessary knee arthroscopy; its accuracy in the diagnosis of meniscal and anterior cruciate ligament (ACL) tears is greater than 89%; it has a greater than 90% sensitivity for the detection of medial meniscal tears; and it is probably better at assessing the posterior horn than arthroscopy.
Pettersson-Yeo, William; Benetti, Stefania; Marquand, Andre F.; Joules, Richard; Catani, Marco; Williams, Steve C. R.; Allen, Paul; McGuire, Philip; Mechelli, Andrea
2014-01-01
In the pursuit of clinical utility, neuroimaging researchers of psychiatric and neurological illness are increasingly using analyses, such as support vector machine, that allow inference at the single-subject level. Recent studies employing single-modality data, however, suggest that classification accuracies must be improved for such utility to be realized. One possible solution is to integrate different data types to provide a single combined output classification; either by generating a single decision function based on an integrated kernel matrix, or, by creating an ensemble of multiple single modality classifiers and integrating their predictions. Here, we describe four integrative approaches: (1) an un-weighted sum of kernels, (2) multi-kernel learning, (3) prediction averaging, and (4) majority voting, and compare their ability to enhance classification accuracy relative to the best single-modality classification accuracy. We achieve this by integrating structural, functional, and diffusion tensor magnetic resonance imaging data, in order to compare ultra-high risk (n = 19), first episode psychosis (n = 19) and healthy control subjects (n = 23). Our results show that (i) whilst integration can enhance classification accuracy by up to 13%, the frequency of such instances may be limited, (ii) where classification can be enhanced, simple methods may yield greater increases relative to more computationally complex alternatives, and, (iii) the potential for classification enhancement is highly influenced by the specific diagnostic comparison under consideration. In conclusion, our findings suggest that for moderately sized clinical neuroimaging datasets, combining different imaging modalities in a data-driven manner is no “magic bullet” for increasing classification accuracy. 
However, it remains possible that this conclusion is dependent on the use of neuroimaging modalities that had little, or no, complementary information to offer one another, and that the integration of more diverse types of data would have produced greater classification enhancement. We suggest that future studies ideally examine a greater variety of data types (e.g., genetic, cognitive, and neuroimaging) in order to identify the data types and combinations optimally suited to the classification of early stage psychosis. PMID:25076868
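The simplest of the four integrative approaches can be sketched directly: approach (1) sums the per-modality kernel matrices element-wise, and approach (4) takes a majority vote over per-modality classifier outputs. The toy kernels and predictions below are illustrative, not the study's data:

```python
def sum_of_kernels(kernels):
    """Approach (1): element-wise unweighted sum of per-modality kernel
    matrices, yielding one combined kernel for a single classifier."""
    n = len(kernels[0])
    return [[sum(K[i][j] for K in kernels) for j in range(n)] for i in range(n)]

def majority_vote(predictions):
    """Approach (4): each modality's classifier votes; the most frequent
    label per subject wins (ties broken by first-seen label here)."""
    combined = []
    for votes in zip(*predictions):
        combined.append(max(dict.fromkeys(votes), key=votes.count))
    return combined

# Toy linear kernels from two modalities (3 subjects).
K_struct = [[1.0, 0.2, 0.1], [0.2, 1.0, 0.3], [0.1, 0.3, 1.0]]
K_func = [[1.0, 0.5, 0.0], [0.5, 1.0, 0.1], [0.0, 0.1, 1.0]]
print(sum_of_kernels([K_struct, K_func])[0])

# Per-modality predictions for 4 subjects (labels: 0 = control, 1 = patient).
preds = [[1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 0]]
print(majority_vote(preds))  # -> [1, 1, 1, 0]
```

These two "simple methods" are exactly the kind the authors found can match or beat the more computationally complex multi-kernel learning in their data.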
Oguma, Tsuyoshi; Hirai, Toyohiro; Niimi, Akio; Matsumoto, Hisako; Muro, Shigeo; Shigematsu, Michio; Nishimura, Takashi; Kubo, Yoshiro; Mishima, Michiaki
2013-01-01
Objectives: (a) To assess the effects of computed tomography (CT) scanners, scanning conditions, airway size, and phantom composition on airway dimension measurement and (b) to investigate the limitations of accurate quantitative assessment of small airways using CT images. Methods: An airway phantom, which was constructed using various types of material and with various tube sizes, was scanned using four CT scanner types under different conditions to calculate airway dimensions, luminal area (Ai), and the wall area percentage (WA%). To investigate the limitations of accurate airway dimension measurement, we then developed a second airway phantom with a thinner tube wall, and compared the clinical CT images of healthy subjects with the phantom images scanned using the same CT scanner. The study using clinical CT images was approved by the local ethics committee, and written informed consent was obtained from all subjects. Data were statistically analyzed using one-way ANOVA. Results: Errors noted in airway dimension measurement were greater in the tube of small inner radius made of material with a high CT density and on images reconstructed by body algorithm (p<0.001), and there was some variation in error among CT scanners under different fields of view. Airway wall thickness had the maximum effect on the accuracy of measurements with all CT scanners under all scanning conditions, and the magnitude of errors for WA% and Ai varied depending on wall thickness when airways of <1.0-mm wall thickness were measured. Conclusions: The parameters of airway dimensions measured were affected by airway size, reconstruction algorithm, composition of the airway phantom, and CT scanner types. In dimension measurement of small airways with wall thickness of <1.0 mm, the accuracy of measurement according to quantitative CT parameters can decrease as the walls become thinner. PMID:24116105
Habchi, Baninia; Alves, Sandra; Jouan-Rimbaud Bouveresse, Delphine; Appenzeller, Brice; Paris, Alain; Rutledge, Douglas N; Rathahao-Paris, Estelle
2018-01-01
Due to the presence of pollutants in the environment and food, the assessment of human exposure is required. This necessitates high-throughput approaches enabling large-scale analysis and, as a consequence, the use of high-performance analytical instruments to obtain highly informative metabolomic profiles. In this study, direct introduction mass spectrometry (DIMS) was performed using a Fourier transform ion cyclotron resonance (FT-ICR) instrument equipped with a dynamically harmonized cell. Data quality was evaluated based on mass resolving power (RP), mass measurement accuracy, and ion intensity drifts from the repeated injections of a quality control sample (QC) along the analytical process. The large DIMS data size entails the use of bioinformatic tools for the automatic selection of common ions found in all QC injections and for robustness assessment and correction of any technical drifts. RP values greater than 10^6 and mass measurement accuracies better than 1 ppm were obtained using broadband mode, resulting in the detection of isotopic fine structure. Hence, a very accurate relative isotopic mass defect (RΔm) value was calculated. This significantly reduces the number of elemental composition (EC) candidates and greatly improves compound annotation. A very satisfactory estimate of the repeatability of both peak intensity and mass measurement was demonstrated. Although a non-negligible ion intensity drift was observed for negative ion mode data, a normalization procedure was easily applied to correct this phenomenon. This study illustrates the performance and robustness of the dynamically harmonized FT-ICR cell to perform large-scale high-throughput metabolomic analyses in routine conditions. Graphical abstract Analytical performance of FT-ICR instrument equipped with a dynamically harmonized cell.
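The sub-ppm criterion is simply the relative deviation of measured from theoretical mass; a minimal sketch (the protonated-caffeine m/z is an illustrative example, not from the study):

```python
def mass_error_ppm(measured, theoretical):
    """Mass measurement accuracy in parts per million:
    (m_measured - m_theoretical) / m_theoretical * 1e6."""
    return (measured - theoretical) / theoretical * 1e6

# Theoretical monoisotopic m/z of protonated caffeine, [C8H10N4O2 + H]+;
# a measured value 0.1 mDa away stays well under the 1 ppm criterion.
theoretical = 195.08765
measured = 195.08775
print(round(mass_error_ppm(measured, theoretical), 3))  # -> 0.513
```

At this accuracy level, combining the exact mass with the relative isotopic mass defect sharply narrows the list of candidate elemental compositions.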
Kim, Mingue; Eom, Youngsub; Lee, Hwa; Suh, Young-Woo; Song, Jong Suk; Kim, Hyo Myung
2018-02-01
To evaluate the accuracy of IOL power calculation using adjusted corneal power according to the posterior/anterior corneal curvature radii ratio. Nine hundred twenty-eight eyes from 928 reference subjects and 158 eyes from 158 cataract patients who underwent phacoemulsification surgery were enrolled. Adjusted corneal power of cataract patients was calculated using the fictitious refractive index that was obtained from the geometric mean posterior/anterior corneal curvature radii ratio of reference subjects and adjusted anterior and predicted posterior corneal curvature radii from conventional keratometry (K) using the posterior/anterior corneal curvature radii ratio. The median absolute error (MedAE) based on the adjusted corneal power was compared with that based on conventional K in the Haigis and SRK/T formulae. The geometric mean posterior/anterior corneal curvature radii ratio was 0.808, and the fictitious refractive index of the cornea for a single Scheimpflug camera was 1.3275. The mean difference between adjusted corneal power and conventional K was 0.05 diopter (D). The MedAE based on adjusted corneal power (0.31 D in the Haigis formula and 0.32 D in the SRK/T formula) was significantly smaller than that based on conventional K (0.41 D and 0.40 D, respectively; P < 0.001 and P < 0.001, respectively). The percentage of eyes with refractive prediction error within ± 0.50 D calculated using adjusted corneal power (74.7%) was significantly greater than that obtained using conventional K (62.7%) in the Haigis formula (P = 0.029). IOL power calculation using adjusted corneal power according to the posterior/anterior corneal curvature radii ratio provided more accurate refractive outcomes than calculation using conventional K.
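For context, keratometric corneal power is obtained from the anterior curvature radius via P = (n − 1)/r. The sketch below contrasts the standard keratometric index (1.3375) with the study's fictitious index (1.3275) at a fixed, illustrative radius; note that the study's adjusted power also adjusts the anterior radius and predicts the posterior radius, which is why its mean difference from conventional K is only 0.05 D, whereas holding the radius fixed here isolates the index effect alone:

```python
def corneal_power(radius_m, refractive_index):
    """Keratometric corneal power in diopters: P = (n - 1) / r,
    with r the anterior corneal curvature radius in meters."""
    return (refractive_index - 1.0) / radius_m

r_anterior = 0.0078  # 7.8 mm anterior radius (illustrative value)
conventional_k = corneal_power(r_anterior, 1.3375)  # standard keratometric index
adjusted_k = corneal_power(r_anterior, 1.3275)      # fictitious index from the study
print(round(conventional_k, 2), round(adjusted_k, 2))  # -> 43.27 41.99
```

The fictitious index encodes the measured posterior/anterior radii ratio (0.808) so that a single anterior measurement yields a total corneal power closer to the true value.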
Matsuda, H; Mizumura, S; Nemoto, K; Yamashita, F; Imabayashi, E; Sato, N; Asada, T
2012-06-01
The necessity for structural MRI is greater than ever to both diagnose AD in its early stage and objectively evaluate its progression. We propose a new VBM-based software program for automatic detection of early specific atrophy in AD. A target VOI was determined by group comparison of 30 patients with very mild AD and 40 age-matched healthy controls by using SPM. Then this target VOI was incorporated into a newly developed automated software program independently running on a Windows PC for VBM by using SPM8 plus DARTEL. ROC analysis was performed for discrimination of 116 other patients with AD with very mild stage (n = 45), mild stage (n = 30) and moderate-to-advanced stages (n = 41) from 40 other age-matched healthy controls by using a z score map in the target VOI. Medial temporal structures involving the entire region of the entorhinal cortex, hippocampus, and amygdala showed significant atrophy in the patients with very mild AD and were determined as a target VOI. When we used the severity score of atrophy in this target VOI, 91.6%, 95.8%, and 98.2% accuracies were obtained in the very mild AD, mild AD, and moderate-to-severe AD groups, respectively. In the very mild AD group, a high specificity of 97.5% with a sensitivity of 86.4% was obtained, and age at onset of AD did not influence this accuracy. This software program with application of SPM8 plus DARTEL to VBM provides a high performance for AD diagnosis by using MRI.
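The z score map underlying the severity score is the standard VBM comparison of a patient's gray-matter value against the control distribution; the densities below are illustrative numbers, not the study's data:

```python
def atrophy_z_score(patient_gm, control_mean, control_sd):
    """Voxel-wise atrophy severity as used in VBM-based assessment:
    z = (control mean - patient value) / control SD, so a larger z
    indicates greater gray matter loss relative to healthy controls."""
    return (control_mean - patient_gm) / control_sd

# Illustrative gray-matter densities within the medial temporal target VOI.
z = atrophy_z_score(patient_gm=0.42, control_mean=0.60, control_sd=0.08)
print(round(z, 2))  # -> 2.25
```

Averaging such z scores over the target VOI gives a single severity value, which is then thresholded (via ROC analysis) to discriminate AD patients from controls.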
Teledermatology. Current status and future directions.
Whited, J D
2001-01-01
Teledermatology is becoming an increasingly common means of delivering dermatologic healthcare worldwide and will almost certainly play a greater role in the future. The type of technology used distinguishes the 2 modes of teledermatology consultation. The store and forward technique uses still digital images generated by a digital camera. Consultations of this type are considered asynchronous since the images are obtained, sent, and reviewed at different times. In contrast, real-time interactive consultations are synchronous. Patients and clinicians interact in real-time through an audio-video communication link. Each modality has its advantages and disadvantages, and studies appear in the literature that assess both technologies. Although diagnostic reliability (precision) assessments for teledermatology are subject to limitations, existing information indicates that both store and forward and real-time interactive technology result in reliable diagnostic outcomes when compared with clinic-based evaluations. Less information regarding diagnostic accuracy is available; however, one evaluation that used store and forward technology found comparable diagnostic accuracy between teledermatology consultations and clinic-based examinations. Currently, little information is available regarding cost effectiveness and patient outcomes. Existing evidence, while inconclusive, suggests that teledermatology may be more costly than traditional clinic-based care, especially when using real-time interactive technology. Teledermatology has been shown to have utility as a triage mechanism for determining the urgency or need for a clinic-based consultation. Overall, patients appear to accept teledermatology and are satisfied with it as a means of obtaining healthcare. Clinicians have also generally reported positive experiences with teledermatology. 
Future studies that focus on cost effectiveness, patient outcomes, and patient and clinician satisfaction will help further define the potential of teledermatology as a means of dermatologic healthcare delivery.
Evaluation of direct-to-consumer low-volume lab tests in healthy adults.
Kidd, Brian A; Hoffman, Gabriel; Zimmerman, Noah; Li, Li; Morgan, Joseph W; Glowe, Patricia K; Botwin, Gregory J; Parekh, Samir; Babic, Nikolina; Doust, Matthew W; Stock, Gregory B; Schadt, Eric E; Dudley, Joel T
2016-05-02
Clinical laboratory tests are now being prescribed and made directly available to consumers through retail outlets in the USA. Concerns have been raised about these tests regarding the uncertainty of the testing methods used in these venues and the lack of open, scientific validation of the technical accuracy and clinical equivalency of results obtained through these services. We conducted a cohort study of 60 healthy adults to compare the uncertainty and accuracy in 22 common clinical lab tests between one company offering blood tests obtained from finger prick (Theranos) and 2 major clinical testing services that require standard venipuncture draws (Quest and LabCorp). Samples were collected in Phoenix, Arizona, at an ambulatory clinic and at retail outlets with point-of-care services. Theranos flagged tests outside their normal range 1.6× more often than other testing services (P < 0.0001). Of the 22 lab measurements evaluated, 15 (68%) showed significant interservice variability (P < 0.002). We found nonequivalent lipid panel test results between Theranos and other clinical services. Variability in testing services, sample collection times, and subjects markedly influenced lab results. While laboratory practice standards exist to control this variability, the disparities between testing services we observed could potentially alter clinical interpretation and health care utilization. Greater transparency and evaluation of testing technologies would increase their utility in personalized health management. This work was supported by the Icahn Institute for Genomics and Multiscale Biology, a gift from the Harris Family Charitable Foundation (to J.T. Dudley), and grants from the NIH (R01 DK098242 and U54 CA189201, to J.T. Dudley, and R01 AG046170 and U01 AI111598, to E.E. Schadt).
NASA Astrophysics Data System (ADS)
Shah, Abhay G.; Friedman, John L.; Whiting, Bernard F.
2014-03-01
We present a novel analytic extraction of high-order post-Newtonian (pN) parameters that govern quasicircular binary systems. Coefficients in the pN expansion of the energy of a binary system can be found from corresponding coefficients in an extreme-mass-ratio inspiral computation of the change ΔU in the redshift factor of a circular orbit at fixed angular velocity. Remarkably, by computing this essentially gauge-invariant quantity to accuracy greater than one part in 10^225, and by assuming that a subset of pN coefficients are rational numbers or products of π and a rational, we obtain the exact analytic coefficients. We find the previously unexpected result that the post-Newtonian expansions of ΔU (and of the change ΔΩ in the angular velocity at fixed redshift factor) have conservative terms at half-integral pN order beginning with a 5.5 pN term. This implies the existence of a corresponding 5.5 pN term in the expansion of the energy of a binary system. Coefficients in the pN series that do not belong to the subset just described are obtained to accuracy better than 1 part in 10^(265-23n) at nth pN order. We work in a radiation gauge, finding the radiative part of the metric perturbation from the gauge-invariant Weyl scalar ψ0 via a Hertz potential. We use mode-sum renormalization, and find high-order renormalization coefficients by matching a series in L=ℓ+1/2 to the large-L behavior of the expression for ΔU. The nonradiative parts of the perturbed metric associated with changes in mass and angular momentum are calculated in the Schwarzschild gauge.
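The extraction step described above, identifying an exact rational coefficient from a numerical value known to very high precision, can be illustrated with continued-fraction rationalization. A minimal Python sketch with invented numbers, not coefficients from the paper (a coefficient of the form π × rational would first be divided by π):

```python
from fractions import Fraction

def recover_rational(x, max_den=10**6):
    """Recover the simplest rational p/q approximating a numerical value.
    With enough correct input digits, the continued-fraction convergent
    is exact for a genuinely rational coefficient."""
    return Fraction(x).limit_denominator(max_den)

# Illustrative only: a supposed high-precision estimate of -73/24.
approx = -3.0416666666666665
print(recover_rational(approx))  # -> -73/24
```

The paper's hundreds of digits play the role of `max_den` here: the more digits are trusted, the larger the denominators that can be pinned down unambiguously.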
Manzanilla-Pech, C I V; Veerkamp, R F; de Haas, Y; Calus, M P L; Ten Napel, J
2017-11-01
Given the interest in including dry matter intake (DMI) in the breeding goal, accurate estimated breeding values (EBV) for DMI are needed, preferably for separate lactations. Due to the limited amount of records available on DMI, 2 main approaches have been suggested to compute those EBV: (1) the inclusion of predictor traits, such as fat- and protein-corrected milk (FPCM) and live weight (LW), and (2) the addition of genomic information of animals using what is called genomic prediction. Recently, several methodologies to estimate EBV utilizing genomic information have become available. In this study, a new method known as single-step ridge-regression BLUP (SSRR-BLUP) is suggested. The SSRR-BLUP method does not have an imposed limit on the number of genotyped animals, as the commonly used methods do. The objective of this study was to estimate genetic parameters using a relatively large data set with DMI records, as well as to compare the accuracies of the EBV for DMI. These accuracies were obtained using 4 different methods: BLUP (using pedigree for all animals with phenotypes), genomic BLUP (GBLUP; only for genotyped animals), single-step GBLUP (SS-GBLUP), and SSRR-BLUP (for genotyped and nongenotyped animals). Records from different lactations, with or without predictor traits (FPCM and LW), were used in the model. Accuracies of EBV for DMI (defined as the correlation between the EBV and pre-adjusted DMI phenotypes divided by the average accuracy of those phenotypes) ranged between 0.21 and 0.38 across methods and scenarios. Accuracies of EBV for DMI using BLUP were the lowest obtained across methods. Meanwhile, accuracies of EBV for DMI were similar in SS-GBLUP and SSRR-BLUP, and lower for the GBLUP method. Hence, SSRR-BLUP could be used when the number of genotyped animals is large, avoiding the construction of the inverse of the genomic relationship matrix.
Adding information on DMI from different lactations in the reference population gave higher accuracies compared with including only lactation 1. Finally, no benefit was obtained by adding information on predictor traits to the reference population when DMI was already included. However, in the absence of DMI records, having records on FPCM and LW from different lactations is a good way to obtain EBV with relatively good accuracy. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
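The validation accuracy defined in the abstract, the correlation between EBV and pre-adjusted phenotypes divided by the average accuracy of those phenotypes, can be sketched as follows. The data below are simulated stand-ins, not dairy records:

```python
import numpy as np

def ebv_accuracy(ebv, phenotypes, mean_phenotype_accuracy):
    """Validation accuracy as defined in the abstract: correlation between
    EBV and pre-adjusted phenotypes, scaled by the average accuracy of
    those phenotypes."""
    r = np.corrcoef(ebv, phenotypes)[0, 1]
    return r / mean_phenotype_accuracy

rng = np.random.default_rng(0)
true_bv = rng.normal(size=500)                      # simulated true breeding values
ebv = true_bv + rng.normal(scale=1.5, size=500)     # noisy predictions
pheno = true_bv + rng.normal(scale=1.0, size=500)   # pre-adjusted phenotypes
print(round(ebv_accuracy(ebv, pheno, 0.9), 2))
```

The scaling by phenotype accuracy corrects for the fact that the validation phenotypes are themselves imperfect measures of the true breeding value.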
Gesteme-free context-aware adaptation of robot behavior in human-robot cooperation.
Nessi, Federico; Beretta, Elisa; Gatti, Cecilia; Ferrigno, Giancarlo; De Momi, Elena
2016-11-01
Cooperative robotics is receiving greater acceptance because the typical advantages provided by manipulators are combined with intuitive usage. In particular, hands-on robotics may benefit from adapting the assistant's behavior to the activity currently performed by the user. A fast and reliable classification of human activities is required, as well as strategies to smoothly modify the control of the manipulator. In this scenario, gesteme-based motion classification is inadequate because it needs the observation of a wide signal percentage and the definition of a rich vocabulary. In this work, a system is presented that recognizes the user's current activity without a vocabulary of gestemes and adapts the manipulator's dynamic behavior accordingly. An underlying stochastic model fits variations in the user's guidance forces and the resulting trajectories of the manipulator's end-effector with a set of Gaussian distributions. The high-level switching between these distributions is captured with hidden Markov models. The dynamics of the KUKA light-weight robot, a torque-controlled manipulator, are modified with respect to the classified activity using sigmoidal-shaped functions. The presented system is validated on a pool of 12 naïve users in a scenario that addresses surgical targeting tasks on soft tissue. The robot's assistance is adapted to obtain a stiff behavior during activities that require critical accuracy constraints, and higher compliance during wide movements. Both the ability to provide the correct classification at each moment (sample accuracy) and the capability to identify the correct sequence of activities (sequence accuracy) were evaluated. The proposed classifier is fast and accurate in all the experiments conducted (80% sample accuracy after the observation of ∼450 ms of signal). 
Moreover, the ability to recognize the correct sequence of activities without unwanted transitions is guaranteed (sequence accuracy ∼90% when computed far away from user-desired transitions). Finally, the proposed activity-based adaptation of the robot's dynamics does not lead to non-smooth behavior (high smoothness, i.e., normalized jerk score <0.01). The proposed system is able to dynamically assist the operator during cooperation in the presented scenario. Copyright © 2016 Elsevier B.V. All rights reserved.
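The sigmoidal adaptation of the robot's stiffness between a compliant and a stiff regime can be sketched as below. The stiffness values, gain, and threshold are invented for illustration and are not parameters from the paper's controller:

```python
import math

def blended_stiffness(p_accurate, k_soft=200.0, k_stiff=2000.0,
                      gain=10.0, threshold=0.5):
    """Sigmoidal blending between a compliant and a stiff Cartesian
    stiffness (N/m), driven by the classifier's probability that the
    current activity needs critical accuracy. All constants are
    illustrative assumptions."""
    s = 1.0 / (1.0 + math.exp(-gain * (p_accurate - threshold)))
    return k_soft + (k_stiff - k_soft) * s

print(blended_stiffness(0.05))  # near k_soft: compliant for wide motions
print(blended_stiffness(0.95))  # near k_stiff: stiff for targeting
```

The smooth sigmoid, rather than a hard switch on the classified activity, is what keeps the normalized jerk low during transitions.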
Wang, Hubiao; Chai, Hua; Bao, Lifeng; Wang, Yong
2017-01-01
An experiment comparing the location accuracy of gravity matching-aided navigation in the ocean and in simulation is very important to evaluate the feasibility and the performance of an INS/gravity-integrated navigation system (IGNS) in underwater navigation. Based on a 1′ × 1′ marine gravity anomaly reference map and a multi-model adaptive Kalman filtering algorithm, a matching location experiment of IGNS was conducted using data obtained with a marine gravimeter. The location accuracy under actual ocean conditions was 2.83 nautical miles (n miles). Several groups of simulated marine gravity anomaly data were obtained by adding normally distributed random error N(u,σ²) with varying mean u and noise variance σ². Thereafter, the matching location of IGNS was simulated. The results show that changes in u had little effect on the location accuracy. However, an increase in σ² resulted in a significant decrease in the location accuracy. A comparison between the actual ocean experiment and the simulation along the same route demonstrated the effectiveness of the proposed simulation method and quantitative analysis results. In addition, given the gravimeter (1–2 mGal accuracy) and the reference map (resolution 1′ × 1′; accuracy 3–8 mGal), the location accuracy of IGNS reached ~1.0–3.0 n miles in the South China Sea. PMID:29261136
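The reported insensitivity to the noise mean u can be reproduced with a toy 1-D matching experiment: if the measured profile and the map segment are demeaned before comparison, a constant bias cancels exactly, while per-sample noise σ does not. The map, window length, and least-squares matching rule below are illustrative stand-ins for the multi-model adaptive Kalman filter:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 1-D gravity anomaly "map" along a track (mGal); illustrative only.
track = np.cumsum(rng.normal(scale=2.0, size=2000))
window = 100          # length of the measured profile
true_pos = 700        # where the vehicle actually is

def match_position(measured, track, window):
    """Slide the demeaned measured profile along the demeaned map and
    return the start index minimising the sum of squared differences.
    Demeaning makes the match insensitive to a constant bias u."""
    best, best_sse = 0, np.inf
    m = measured - measured.mean()
    for start in range(len(track) - window):
        seg = track[start:start + window]
        sse = np.sum((seg - seg.mean() - m) ** 2)
        if sse < best_sse:
            best, best_sse = start, sse
    return best

truth = track[true_pos:true_pos + window]
for u, sigma in [(0.0, 0.5), (5.0, 0.5), (0.0, 20.0)]:
    measured = truth + rng.normal(loc=u, scale=sigma, size=window)
    est = match_position(measured, track, window)
    print(f"u={u}, sigma={sigma}: position error = {abs(est - true_pos)}")
```

With small σ the match is exact even under a large bias u; only when σ becomes comparable to the map's within-window variability does the estimated position degrade, mirroring the abstract's finding.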
Magaraggia, Jessica; Wei, Wei; Weiten, Markus; Kleinszig, Gerhard; Vetter, Sven; Franke, Jochen; John, Adrian; Egli, Adrian; Barth, Karl; Angelopoulou, Elli; Hornegger, Joachim
2017-01-01
During a standard fracture reduction and fixation procedure of the distal radius, only fluoroscopic images are available for planning of the screw placement and monitoring of the drill bit trajectory. Our prototype intra-operative framework integrates planning and drill guidance for a simplified and improved planning transfer. Guidance information is extracted using a video camera mounted onto a surgical drill. Real-time feedback of the drill bit position is provided using an augmented view of the planning X-rays. We evaluate the accuracy of the placed screws on plastic bones and on healthy and fractured forearm specimens. We also investigate the difference in accuracy between guided and freehand screw placement. Moreover, the accuracy of the real-time position feedback of the drill bit is evaluated. A total of 166 screws were placed. On 37 plastic bones, our obtained accuracy was [Formula: see text] mm, [Formula: see text] and [Formula: see text] in tip position and orientation (azimuth and elevation), respectively. On the three healthy forearm specimens, our obtained accuracy was [Formula: see text] mm, [Formula: see text] and [Formula: see text]. On the two fractured specimens, we attained [Formula: see text] mm, [Formula: see text] and [Formula: see text]. When screw plans were applied freehand (without our guidance system), the achieved accuracy was [Formula: see text] mm, [Formula: see text], while when they were transferred under guidance, we obtained [Formula: see text] mm, [Formula: see text]. Our results show that our framework is expected to increase the accuracy of screw positioning and to improve robustness compared with freehand placement.
He, Jun; Xu, Jiaqi; Wu, Xiao-Lin; Bauck, Stewart; Lee, Jungjae; Morota, Gota; Kachman, Stephen D; Spangler, Matthew L
2018-04-01
SNP chips are commonly used for genotyping animals in genomic selection but strategies for selecting low-density (LD) SNPs for imputation-mediated genomic selection have not been addressed adequately. The main purpose of the present study was to compare the performance of eight LD (6K) SNP panels, each selected by a different strategy exploiting a combination of three major factors: evenly-spaced SNPs, increased minor allele frequencies, and SNP-trait associations either for single traits independently or for all the three traits jointly. The imputation accuracies from 6K to 80K SNP genotypes were between 96.2 and 98.2%. Genomic prediction accuracies obtained using imputed 80K genotypes were between 0.817 and 0.821 for daughter pregnancy rate, between 0.838 and 0.844 for fat yield, and between 0.850 and 0.863 for milk yield. The two SNP panels optimized on the three major factors had the highest genomic prediction accuracy (0.821-0.863), and these accuracies were very close to those obtained using observed 80K genotypes (0.825-0.868). Further exploration of the underlying relationships showed that genomic prediction accuracies did not respond linearly to imputation accuracies, but were significantly affected by genotype (imputation) errors of SNPs in association with the traits to be predicted. SNPs optimal for map coverage and MAF were favorable for obtaining accurate imputation of genotypes whereas trait-associated SNPs improved genomic prediction accuracies. Thus, optimal LD SNP panels were the ones that combined both strengths. The present results have practical implications on the design of LD SNP chips for imputation-enabled genomic prediction.
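One simple way to combine the abstract's "even spacing" and "high MAF" criteria for building a low-density panel is to split the map into evenly spaced bins and keep the highest-MAF SNP in each bin. This is an illustrative sketch, not the authors' exact selection procedure, and the positions and frequencies below are simulated:

```python
import numpy as np

def select_ld_panel(positions, maf, n_select):
    """Split the chromosome into n_select evenly spaced bins and keep
    the SNP with the highest minor allele frequency in each bin."""
    edges = np.linspace(positions.min(), positions.max(), n_select + 1)
    chosen = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = np.where((positions >= lo) & (positions <= hi))[0]
        if in_bin.size:
            chosen.append(in_bin[np.argmax(maf[in_bin])])
    return np.array(sorted(set(chosen)))

rng = np.random.default_rng(0)
pos = np.sort(rng.uniform(0, 1e6, size=5000))   # bp positions on one chromosome
maf = rng.uniform(0.01, 0.5, size=5000)
panel = select_ld_panel(pos, maf, 300)
print(len(panel))
```

A trait-aware variant would rank SNPs within each bin by association statistics instead of (or in addition to) MAF, matching the single- and multi-trait panels compared in the study.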
Accuracy of endoscopic intraoperative assessment of urologic stone size.
Patel, Nishant; Chew, Ben; Knudsen, Bodo; Lipkin, Michael; Wenzler, David; Sur, Roger L
2014-05-01
Endoscopic treatment of renal calculi relies on surgeon assessment of residual stone fragment size for either basket removal or for the passage of fragments postoperatively. We therefore sought to determine the accuracy of endoscopic assessment of renal calculi size. Between January and May 2013, five board-certified endourologists participated in an ex vivo artificial endoscopic simulation. A total of 10 stones (pebbles) were measured (mm) by a nonparticipating urologist (N.D.P.) with electronic calipers and placed into separate labeled opaque test tubes to prevent visualization of the stones through the side of the tube. Endourologists were blinded to the actual size of the stones. A flexible digital ureteroscope, with a 200-μm core laser fiber in the working channel as a size reference, was placed into the test tube to estimate the stone size (mm). Accuracy was determined by obtaining the correlation coefficient (r) and constructing a Bland-Altman plot. Endourologists tended to overestimate actual stone size by a margin of 0.05 mm. The Pearson correlation coefficient was r=0.924, with a p-value<0.01. The estimation of small stones (<4 mm) had greater accuracy than large stones (≥4 mm), r=0.911 vs r=0.666. Bland-Altman analysis suggests that surgeons are able to accurately estimate stone size within a range of -1.8 to +1.9 mm. This ex vivo simulation study demonstrates that endoscopic assessment is reliable when assessing stone size. On average, there was a slight tendency to overestimate stone size by 0.05 mm. Most endourologists could visually estimate stone size within 2 mm of the actual size. These findings could be generalized to state that endourologists are able to accurately assess residual stone fragment size intraoperatively to guide decision making.
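The Bland-Altman analysis used above summarises agreement as the mean bias and the 95% limits of agreement (bias ± 1.96 SD of the paired differences). A minimal sketch with made-up measurements, not the study's data:

```python
import numpy as np

def bland_altman_limits(estimated, actual):
    """Mean bias and 95% limits of agreement (bias +/- 1.96 SD of the
    paired differences) for two measurement methods."""
    diff = np.asarray(estimated) - np.asarray(actual)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

est = [2.1, 3.4, 4.0, 5.2, 2.8, 6.1, 3.9, 4.4, 1.9, 5.0]   # endoscopic estimates, mm
act = [2.0, 3.5, 3.8, 5.5, 2.6, 6.0, 4.1, 4.0, 2.0, 4.8]   # caliper sizes, mm
bias, lo, hi = bland_altman_limits(est, act)
print(f"bias={bias:+.2f} mm, LoA=({lo:+.2f}, {hi:+.2f}) mm")
```

A small bias with narrow limits of agreement is what supports the abstract's conclusion that visual estimates fall within about 2 mm of the true size.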
Hopkins, D L; Safari, E; Thompson, J M; Smith, C R
2004-06-01
A wide selection of lamb types of mixed sex (ewes and wethers) were slaughtered at a commercial abattoir and during this process images of 360 carcasses were obtained online using the VIAScan® system developed by Meat and Livestock Australia. Soft tissue depth at the GR site (thickness of tissue over the 12th rib 110 mm from the midline) was measured by an abattoir employee using the AUS-MEAT sheep probe (PGR). Another measure of this thickness was taken in the chiller using a GR knife (NGR). Each carcass was subsequently broken down to a range of trimmed boneless retail cuts and the lean meat yield determined. The current industry model for predicting meat yield uses hot carcass weight (HCW) and tissue depth at the GR site. A low level of accuracy and precision was found when HCW and PGR were used to predict lean meat yield (R(2)=0.19, r.s.d.=2.80%), which could be improved markedly when PGR was replaced by NGR (R(2)=0.41, r.s.d.=2.39%). If the GR measures were replaced by 8 VIAScan® measures then greater prediction accuracy could be achieved (R(2)=0.52, r.s.d.=2.17%). A similar result was achieved when the model was based on principal components (PCs) computed from the 8 VIAScan® measures (R(2)=0.52, r.s.d.=2.17%). The use of PCs also improved the stability of the model compared to a regression model based on HCW and NGR. The transportability of the models was tested by randomly dividing the data set and comparing coefficients and the level of accuracy and precision. Those models based on PCs were superior to those based on regression. It is demonstrated that with the appropriate modeling the VIAScan® system offers a workable method for predicting lean meat yield automatically.
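The PC-based models in the abstract regress lean meat yield on principal component scores of the VIAScan® measurements rather than on the raw measurements. A minimal sketch with simulated data standing in for carcass records (the dimensions and noise levels are invented):

```python
import numpy as np

def pc_regression_r2(X, y, n_pc):
    """Fit yield on the first n_pc principal-component scores of X by
    ordinary least squares and return the in-sample R^2."""
    Xc = X - X.mean(axis=0)
    # PCA via SVD; rows of Vt are component directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_pc].T
    A = np.column_stack([np.ones(len(y)), scores])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid.var() / np.var(y)

rng = np.random.default_rng(11)
latent = rng.normal(size=(360, 2))                                # underlying carcass shape
X = latent @ rng.normal(size=(2, 8)) + 0.3 * rng.normal(size=(360, 8))  # 8 "VIAScan" measures
y = latent @ np.array([1.0, -0.5]) + rng.normal(scale=0.8, size=360)    # lean meat yield
print(round(pc_regression_r2(X, y, n_pc=3), 2))
```

Because the PCs decorrelate the highly collinear tissue-depth measures, coefficients fitted to them tend to be more stable across random splits of the data, which is the transportability advantage the abstract reports.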
Dench, Rosalie; Sulistyo, Fransiska; Fahroni, Agus; Philippa, Joost
2015-12-01
The tuberculin skin test (TST) has been the mainstay of tuberculosis (TB) testing in primates for decades, but its interpretation in orangutans (Pongo spp.) is challenging, because many animals react strongly, without evidence of infection with Mycobacterium tuberculosis complex. One explanation is cross-reactivity with environmental nontuberculous mycobacteria (NTM). The use of a comparative TST (CTST), comparing reactivity to avian (representing NTM) and bovine (representing tuberculous mycobacteria) tuberculins, aids in distinguishing cross-reactivity due to sensitization by NTM from shared antigens. The specificity of the TST can be increased with the use of CTST. We considered three interpretations of the TST in rehabilitant Bornean orangutans (Pongo pygmaeus) using avian purified protein derivative (APPD; 25,000 IU/ml) and two concentrations of bovine purified protein derivative (BPPD; 100,000 and 32,500 IU/ml). The tests were evaluated for their ability to accurately identify seven orangutans previously diagnosed with and treated for TB from a group of presumed negative individuals (n = 288 and n = 161 for the two respective BPPD concentrations). BPPD at 32,500 IU/ml had poor diagnostic capacity, whereas BPPD at 100,000 IU/ml performed better. The BPPD-only interpretation had moderate sensitivity (57%) and poor specificity (40%) and accuracy (41%). The comparative interpretation at 72 hr had similar sensitivity (57%) but improved specificity (95%) and accuracy (94%). However, the best results were obtained by a comparative interpretation incorporating the 48- and 72-hr scores, which had good sensitivity (86%), specificity (95%), and accuracy (95%). These data reinforce recommendations that a CTST be used in orangutans and support the use of APPD at 25,000 IU/ml and BPPD at 100,000 IU/ml. The highest score at each site from the 48- and 72-hr checks should be considered the result for that tuberculin. 
If the bovine result is greater than the avian result, the animal should be considered a TB suspect.
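The sensitivity, specificity, and accuracy figures quoted for each TST interpretation come from a standard 2×2 confusion table. A sketch using counts roughly consistent with the best interpretation (86%/95%/95%); the exact cell counts are an assumption, not published values:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and overall accuracy from a 2x2
    confusion table of test results against true disease status."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc

# 6 of 7 treated-TB orangutans positive; 15 of 288 presumed negatives flagged.
sens, spec, acc = diagnostic_metrics(tp=6, fn=1, tn=273, fp=15)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} accuracy={acc:.0%}")
```

Note that with only 7 true positives, each misclassified animal moves sensitivity by about 14 percentage points, which is why the comparative interpretations differ so sharply.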
Sadi, M V; Barrack, E R
1993-04-15
Reliable predictors of the response of prostate cancer to androgen ablation therapy are lacking. The goals of this study were to determine whether nuclear androgen receptor (AR) concentrations in metastatic prostate cancer varied within and between specimens and to correlate this information with the response to therapy. AR concentration was evaluated by computer-assisted image analysis of immunohistochemical staining intensity in 200 malignant epithelial nuclei of each of 17 specimens of Stage D2 prostate cancer obtained before hormonal therapy. The data were correlated with the time to tumor progression (relapse) after hormonal therapy. AR staining intensity varied within specimens, and the variance of staining intensity was significantly greater (P = 0.03) in the poor responders (n = 8; time to progression, < 20 months) than in the good responders (n = 9; time to progression, > or = 20 months). The kurtosis was significantly lower in poor responders (P = 0.04). However, the mean AR staining intensity was not significantly different among patients. The frequency distribution plots of good responders were generally uniform and unimodal, but those of poor responders were flattened (more platykurtic), dispersed, and highly variable. Thus, the AR concentration per cell was significantly more heterogeneous in poor responders. Variance was a significant predictor of response. Five of 6 patients with a high variance (defined as variance greater than the mean) were poor responders, whereas 8 of 11 patients with a low variance were good responders (an overall classification accuracy of 13 of 17, 76%). The greater AR heterogeneity in poor responders may reflect a greater genetic instability in tumors that have progressed further toward androgen independence and may be a valuable predictor of progression.
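The per-specimen statistics driving the result above, variance and kurtosis of the 200 nuclear staining intensities, can be sketched as follows. The simulated flattened (uniform-like) profile mimics the platykurtic distributions reported for poor responders; it is not the study's data:

```python
import numpy as np

def staining_summary(intensities):
    """Mean, variance, and excess kurtosis of a specimen's nuclear AR
    staining intensities. Platykurtic (flattened) distributions give
    negative excess kurtosis."""
    x = np.asarray(intensities, dtype=float)
    m = x.mean()
    v = x.var(ddof=1)
    z = (x - m) / x.std(ddof=0)
    kurt = (z ** 4).mean() - 3.0          # excess kurtosis
    return m, v, kurt

rng = np.random.default_rng(7)
uniform_like = rng.uniform(50, 150, size=200)   # flattened, dispersed profile
m, v, k = staining_summary(uniform_like)
print(f"mean={m:.1f} var={v:.1f} excess_kurtosis={k:.2f}")
```

Consistent with the abstract, such a flattened profile has an unremarkable mean but high variance and low kurtosis, which is exactly the signature that separated poor from good responders.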
Bauer, Jan Stefan; Noël, Peter Benjamin; Vollhardt, Christiane; Much, Daniela; Degirmenci, Saliha; Brunner, Stefanie; Rummeny, Ernst Josef; Hauner, Hans
2015-01-01
PURPOSE. MR might be well suited to obtain reproducible and accurate measures of fat tissues in infants. This study evaluates MR-measurements of adipose tissue in young infants in vitro and in vivo. MATERIAL AND METHODS. MR images of ten phantoms simulating subcutaneous fat of an infant’s torso were obtained using a 1.5T MR scanner with and without simulated breathing. Scans consisted of a cartesian water-suppression turbo spin echo (wsTSE) sequence, and a PROPELLER wsTSE sequence. Fat volume was quantified directly and by MR imaging using k-means clustering and threshold-based segmentation procedures to calculate accuracy in vitro. Whole body MR was obtained in sleeping young infants (average age 67±30 days). This study was approved by the local review board. All parents gave written informed consent. To obtain reproducibility in vivo, cartesian and PROPELLER wsTSE sequences were repeated in seven and four young infants, respectively. Overall, 21 repetitions were performed for the cartesian sequence and 13 repetitions for the PROPELLER sequence. RESULTS. In vitro accuracy errors depended on the chosen segmentation procedure, ranging from 5.4% to 76%, while the sequence showed no significant influence. Artificial breathing increased the minimal accuracy error to 9.1%. In vivo reproducibility errors for total fat volume of the sleeping infants ranged from 2.6% to 3.4%. Neither segmentation nor sequence significantly influenced reproducibility. CONCLUSION. With both cartesian and PROPELLER sequences an accurate and reproducible measure of body fat was achieved. Adequate segmentation was mandatory for high accuracy. PMID:25706876
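The threshold-based segmentation step, counting voxels brighter than a cutoff in the water-suppressed image and converting the count to a volume, can be sketched on a toy phantom. The threshold, image geometry, and intensities below are invented, not values from the study:

```python
import numpy as np

def fat_volume_threshold(image, threshold, voxel_volume_ml):
    """Threshold-based segmentation: voxels brighter than `threshold`
    are counted as fat and converted to a volume."""
    return int(np.count_nonzero(image > threshold)) * voxel_volume_ml

rng = np.random.default_rng(3)
img = rng.normal(loc=100, scale=20, size=(32, 32, 16))   # background tissue
img[8:16, 8:16, 4:8] += 400                              # bright "fat" block
vol = fat_volume_threshold(img, threshold=300, voxel_volume_ml=0.01)
print(round(vol, 2), "ml")   # true block is 8*8*4 = 256 voxels
```

On this well-separated phantom the threshold recovers the fat volume exactly; the study's 5.4%-76% spread of in vitro errors reflects how sensitive the result becomes when the intensity distributions overlap and the threshold (or cluster assignment) is chosen poorly.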
Ender, Andreas; Mehl, Albert
2015-01-01
To investigate the accuracy of conventional and digital impression methods used to obtain full-arch impressions by using an in-vitro reference model. Eight different conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; and irreversible hydrocolloid, ALG) and digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; and Lava COS, LAV) full-arch impressions were obtained from a reference model with a known morphology, using a highly accurate reference scanner. The impressions obtained were then compared with the original geometry of the reference model and within each test group. A point-to-point measurement of the surface of the model using the signed nearest neighbour method resulted in a mean (10%-90%)/2 percentile value for the difference between the impression and original model (trueness) as well as the difference between impressions within a test group (precision). Trueness values ranged from 11.5 μm (VSE) to 60.2 μm (POE), and precision ranged from 12.3 μm (VSE) to 66.7 μm (POE). Among the test groups, VSE, VSES, and CER showed the highest trueness and precision. The deviation pattern varied with the impression method. Conventional impressions showed high accuracy across the full dental arch in all groups, except POE and ALG. Conventional and digital impression methods show differences regarding full-arch accuracy. Digital impression systems reveal higher local deviations of the full-arch model. Digital intraoral impression systems do not show superior accuracy compared to highly accurate conventional impression techniques. However, they provide excellent clinical results within their indications applying the correct scanning technique.
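The "(10%-90%)/2 percentile value" used to score trueness and precision is half the spread between the 10th and 90th percentiles of the signed point-to-point deviations, a summary that is robust to outliers at the arch ends. A sketch with simulated deviations rather than real scan data:

```python
import numpy as np

def percentile_spread(signed_deviations):
    """Half the spread between the 10th and 90th percentiles of the
    signed surface deviations -- the (10%-90%)/2 percentile value."""
    p10, p90 = np.percentile(signed_deviations, [10, 90])
    return (p90 - p10) / 2.0

rng = np.random.default_rng(5)
dev_um = rng.normal(loc=2.0, scale=15.0, size=10000)  # simulated deviations, micrometres
print(round(percentile_spread(dev_um), 1), "um")
```

Applied to deviations against the reference model it yields trueness; applied to deviations between repeated impressions within a group it yields precision.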
Zhao, Yi-Jiao; Xiong, Yu-Xue; Wang, Yong
2017-01-01
In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in the oral clinic was evaluated. Ten patients with a variety of facial deformities from the oral clinic were included in the study. For each patient, a three-dimensional (3D) face model was acquired with a high-accuracy industrial "line-laser" scanner (Faro) as the reference model, and two test models were obtained with a "stereophotography" system (3dMD) and a "structured light" facial scanner (FaceScan), respectively. Registration based on the iterative closest point (ICP) algorithm was executed to overlap the test models onto the reference models, and "3D error", a new measurement indicator calculated by reverse engineering software (Geomagic Studio), was used to evaluate the 3D global and partial (upper, middle, and lower parts of the face) PA of each facial scanner. The 3D accuracy of the stereophotography and structured light facial scanners for facial deformities was 0.58±0.11 mm and 0.57±0.07 mm, respectively. The 3D accuracy of the different facial partitions was inconsistent; the middle face had the best performance. Although the PA of the two facial scanners was lower than their nominal accuracy (NA), both met the requirements for oral clinic use.
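After ICP registration, a "3D error" of the kind computed here can be summarised as the mean distance from each test-model point to its nearest reference-model point. A brute-force sketch on toy point sets, not real face scans and not Geomagic's exact metric:

```python
import numpy as np

def mean_3d_error(test_pts, ref_pts):
    """Mean nearest-neighbour distance from test points to reference
    points (brute force; fine for a few thousand points)."""
    d2 = ((test_pts[:, None, :] - ref_pts[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1)).mean()

rng = np.random.default_rng(2)
ref = rng.uniform(0, 100, size=(1000, 3))            # reference surface points, mm
test = ref + rng.normal(scale=0.5, size=ref.shape)   # scanner noise ~0.5 mm
print(round(mean_3d_error(test, ref), 2), "mm")
```

Evaluating the same metric on point subsets (upper, middle, lower face) gives the partial PA values compared in the study.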
Wong, Jessica T; Cramer, Stefanie J; Gallo, David A
2012-12-01
We investigated age-related reductions in episodic metamemory accuracy. Participants studied pictures and words in different colors and then took forced-choice recollection tests. These tests required recollection of the earlier presentation color, holding familiarity of the response options constant. Metamemory accuracy was assessed for each participant by comparing recollection test accuracy with corresponding confidence judgments. We found that recollection test accuracy was greater in younger than older adults and also for pictures than font color. Metamemory accuracy tracked each of these recollection differences, as well as individual differences in recollection test accuracy within each age group, suggesting that recollection ability affects metamemory accuracy. Critically, the age-related impairment in metamemory accuracy persisted even when the groups were matched on recollection test accuracy, suggesting that metamemory declines were not entirely due to differences in recollection frequency or quantity, but that differences in recollection quality and/or monitoring also played a role. We also found that age-related impairments in recollection and metamemory accuracy were equivalent for pictures and font colors. This result contrasted with previous false recognition findings, which predicted that older adults would be differentially impaired when monitoring memory for less distinctive memories. These and other results suggest that age-related reductions in metamemory accuracy are not entirely attributable to false recognition effects, but also depend heavily on deficient recollection and/or monitoring of specific details associated with studied stimuli. 2013 APA, all rights reserved
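Metamemory accuracy of the kind assessed above, the trial-by-trial alignment between confidence judgments and test accuracy, is often scored with the Goodman-Kruskal gamma over pairs of trials. The abstract does not name its exact measure, so this is one standard choice, sketched with invented ratings:

```python
def goodman_kruskal_gamma(confidence, correct):
    """Goodman-Kruskal gamma between confidence ratings and accuracy:
    (concordant - discordant) / (concordant + discordant) over all
    trial pairs that differ on both variables."""
    conc = disc = 0
    n = len(confidence)
    for i in range(n):
        for j in range(i + 1, n):
            dc = confidence[i] - confidence[j]
            da = correct[i] - correct[j]
            if dc * da > 0:
                conc += 1
            elif dc * da < 0:
                disc += 1
    return (conc - disc) / (conc + disc) if conc + disc else 0.0

ratings = [4, 3, 1, 2, 4, 1]     # 1-4 confidence judgments
answers = [1, 1, 0, 0, 1, 0]     # recollection test correct?
print(goodman_kruskal_gamma(ratings, answers))  # -> 1.0 (perfect monitoring)
```

A participant whose confidence perfectly tracks accuracy scores +1; matching groups on test accuracy while gamma still differs is what lets the study attribute the age effect to monitoring rather than to recollection frequency alone.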
Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Chen, Charles; Porth, Ilga; El-Kassaby, Yousry A
2015-05-09
Genomic selection (GS) in forestry can substantially reduce the length of the breeding cycle and increase gain per unit time through early selection and greater selection intensity, particularly for traits of low heritability and late expression. Affordable next-generation sequencing technologies have made it possible to genotype large numbers of trees at a reasonable cost. Genotyping-by-sequencing was used to genotype 1,126 Interior spruce trees representing 25 open-pollinated families planted over three sites in British Columbia, Canada. Four imputation algorithms were compared (mean value (MI), singular value decomposition (SVD), expectation maximization (EM), and a newly derived, family-based k-nearest neighbor (kNN-Fam)). Trees were phenotyped for several yield and wood attributes. Single- and multi-site GS prediction models were developed using the Ridge Regression Best Linear Unbiased Predictor (RR-BLUP) and the Generalized Ridge Regression (GRR) to test different assumptions about trait architecture. Finally, using PCA, multi-trait GS prediction models were developed. The EM and kNN-Fam imputation methods were superior for 30 and 60% missing data, respectively. The RR-BLUP GS prediction model produced better accuracies than the GRR, indicating that the genetic architecture for these traits is complex. GS prediction accuracies for multi-site models were high and better than those of single-site models, while cross-site predictability produced the lowest accuracies, reflecting type-b genetic correlations, and was deemed unreliable. The incorporation of genomic information in quantitative genetics analyses produced more realistic heritability estimates, as the half-sib pedigree tended to inflate the additive genetic variance and subsequently both heritability and gain estimates. Principal component scores as representatives of multi-trait GS prediction models produced surprising results, where negatively correlated traits could be concurrently selected for using PCA2 and PCA3. 
The application of GS to open-pollinated family testing, the simplest form of tree improvement evaluation, was proven to be effective. The prediction accuracies obtained for all traits greatly support the integration of GS in tree breeding. While the within-site GS prediction accuracies were high, the results clearly indicate that single-site GS models' ability to predict other sites is unreliable, supporting the utilization of a multi-site approach. Principal component scores provided an opportunity for the concurrent selection of traits with different phenotypic optima.
Recognition memory and awareness: A high-frequency advantage in the accuracy of knowing.
Gregg, Vernon H; Gardiner, John M; Karayianni, Irene; Konstantinou, Ira
2006-04-01
The well-established advantage of low-frequency words over high-frequency words in recognition memory has been found to occur in remembering and not knowing. Two experiments employed remember and know judgements, and divided attention to investigate the possibility of an effect of word frequency on know responses given appropriate study conditions. With undivided attention at study, the usual low-frequency advantage in the accuracy of remember responses, but no effect on know responses, was obtained. Under a demanding divided attention task at encoding, a high-frequency advantage in the accuracy of know responses was obtained. The results are discussed in relation to theories of knowing, particularly those incorporating perceptual and conceptual fluency.
Landenburger, L.; Lawrence, R.L.; Podruzny, S.; Schwartz, C.C.
2008-01-01
Moderate resolution satellite imagery traditionally has been thought to be inadequate for mapping vegetation at the species level. This has made comprehensive mapping of the regional distributions of sensitive species, such as whitebark pine, either impractical or extremely time consuming. We sought to determine whether a combination of moderate resolution satellite imagery (Landsat Enhanced Thematic Mapper Plus), extensive stand data collected by land management agencies for other purposes, and modern statistical classification techniques (boosted classification trees) could result in successful mapping of whitebark pine. Overall classification accuracies exceeded 90%, with similar individual class accuracies. Accuracies on a localized basis varied with elevation. Accuracies also varied among administrative units, although we were not able to determine whether these differences related to inherent spatial variations or to differences in the quality of available reference data.
Yakubova, Gulnoza; Hughes, Elizabeth M; Hornberger, Erin
2015-09-01
The purpose of this study was to determine the effectiveness of a point-of-view video modeling intervention to teach mathematics problem-solving when working on word problems involving subtracting mixed fractions with uncommon denominators. Using a multiple-probe across students design of single-case methodology, three high school students with ASD completed the study. All three students demonstrated greater accuracy in solving fraction word problems and maintained accuracy levels at a 1-week follow-up.
The use of Landsat data to inventory cotton and soybean acreage in North Alabama
NASA Technical Reports Server (NTRS)
Downs, S. W., Jr.; Faust, N. L.
1980-01-01
This study was performed to determine if Landsat data could be used to improve the accuracy of the estimation of cotton acreage. A linear classification algorithm and a maximum likelihood algorithm were used for computer classification of the area, and the classification was compared with ground truth. The classification accuracy for some fields was greater than 90 percent; however, the overall accuracy was 71 percent for cotton and 56 percent for soybeans. The results of this research indicate that computer analysis of Landsat data has potential for improving upon the methods presently being used to determine cotton acreage; however, additional experiments and refinements are needed before the method can be used operationally.
High-accuracy reference standards for two-photon absorption in the 680–1050 nm wavelength range
de Reguardati, Sophie; Pahapill, Juri; Mikhailov, Alexander; Stepanenko, Yuriy; Rebane, Aleksander
2016-01-01
Degenerate two-photon absorption (2PA) of a series of organic fluorophores is measured using a femtosecond fluorescence excitation method in the wavelength range λ2PA = 680–1050 nm at a ~100 MHz pulse repetition rate. The relative 2PA spectral shape is obtained with an estimated accuracy of 5%, and the absolute 2PA cross section is measured at selected wavelengths with an accuracy of 8%. Significant improvement in accuracy is achieved by rigorous evaluation of the quadratic dependence of the fluorescence signal on the incident photon flux over the whole wavelength range, by comparing results obtained from two independent experiments, and by meticulous evaluation of critical experimental parameters, including the spatial and temporal pulse shape of the excitation, laser power, and sample geometry. Application of the reference standards in nonlinear transmittance measurements is discussed. PMID:27137334
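The quadratic-dependence check described above can be illustrated with a log-log fit: for a two-photon process the fluorescence signal scales as the square of the incident photon flux, so the slope of log(signal) versus log(flux) should be close to 2. A minimal sketch with hypothetical, noise-free values (not the authors' data or analysis code):

```python
import numpy as np

# Hypothetical fluorescence data (arbitrary units): an ideal two-photon
# response is quadratic in the incident photon flux.
flux = np.linspace(1.0, 5.0, 20)       # incident photon flux
signal = 0.8 * flux ** 2               # ideal quadratic 2PA response
slope, intercept = np.polyfit(np.log(flux), np.log(signal), 1)
# A log-log slope close to 2 confirms quadratic (two-photon) excitation;
# deviations would flag saturation, re-absorption, or one-photon artifacts.
```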
Reddy, Jagan Mohan; Prashanti, E; Kumar, G Vinay; Suresh Sajjan, M C; Mathew, Xavier
2009-01-01
The dual-arch impression technique is convenient in that it captures the required maxillary and mandibular impressions, as well as the inter-occlusal record, in one procedure. The accuracy of the inter-abutment distance in dies fabricated from the dual-arch impression technique remains in question because little information is available in the literature. This study was conducted to compare the accuracy of the inter-abutment distance in dies obtained from full-arch dual-arch trays with that of dies obtained from full-arch stock metal trays. The metal dual-arch trays showed the best accuracy, followed by the plastic dual-arch and stock dentulous trays, respectively, though the differences were not statistically significant. The pouring sequence had no statistically significant effect on the inter-abutment distance, though pouring the non-working side of the dual-arch impression first showed better accuracy.
Aiming in adults: sex and laterality effects.
Barral, Jérôme; Debû, Bettina
2004-07-01
The purpose of the study was twofold: to investigate gender-related differences in the asymmetry of aiming with the preferred and non-preferred hand in right-handed adults, and to examine the effect of the spatial requirements of the task on these asymmetries. The hypothesis was that if cognitive functions are more asymmetrically localised in men than in women, one should observe greater left-right differences on some variables in men than in women. Eleven men and eleven women were required to aim quickly and accurately at one of three possible targets under a choice reaction time protocol. Performance and kinematic data were analysed. Results revealed an effect of target location on the left-hand advantage in reaction time, and gender-related effects on movement time, accuracy, and the velocity profiles. Overall, women performed more slowly and more accurately than men. This gender-related effect could not be accounted for by differential strategies with regard to speed or accuracy, lending support to the idea that differences exist between the two genders in the neural mechanisms of movement control. Finally, although the results show a hand effect on terminal accuracy in men only, they do not support the hypothesis of greater asymmetry of movement control in men.
Diagnostic value of computed tomography in dogs with chronic nasal disease.
Saunders, Jimmy H; van Bree, Henri; Gielen, Ingrid; de Rooster, Hilde
2003-01-01
Computed tomographic (CT) studies of 80 dogs with chronic nasal disease (nasal neoplasia (n = 19), nasal aspergillosis (n = 46), nonspecific rhinitis (n = 11), and foreign body rhinitis (n = 4)) were reviewed retrospectively by two independent observers. Each observer filled out a custom-designed list to record his or her interpretation of the CT signs and selected a diagnosis. Accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated for the diagnosis of each disease, and the agreement between observers was evaluated. The CT signs corresponded to those previously described in the literature. CT had an accuracy greater than 90% for each observer in all disease processes. The sensitivity, specificity, PPV, and NPV were greater than 80% in all dogs, with the exception of the PPV for foreign body rhinitis (80% for observer A and 44% for observer B). There was substantial to almost-perfect agreement between the two observers regarding the CT signs and diagnosis. This study indicates a high accuracy of CT for the diagnosis of dogs with chronic nasal disease. Differentiation between nasal aspergillosis restricted to the nasal passages and foreign body rhinitis may be difficult when the foreign body is not visible.
Are friends electric?: A review of the electric handpiece in clinical dental practice.
Campbell, Stuart C
2013-04-01
Contemporary restorative procedures demand precise detail in tooth preparation to achieve optimal results. Inadequate tooth preparation is a frequent cause of failure. This review considers the electric high-speed, high-torque handpiece and how it may assist clinicians in achieving greater accuracy in tooth preparation. The electric handpiece provides a satisfactory alternative to the air-turbine and may be considered by clinicians who wish greater control with operative procedures.
Chen, Tien-En; Kwon, Susan H; Enriquez-Sarano, Maurice; Wong, Benjamin F; Mankad, Sunil V
2013-10-01
Three-dimensional (3D) color Doppler echocardiography (CDE) provides directly measured vena contracta area (VCA). However, a large comprehensive 3D color Doppler echocardiographic study with sufficiently severe tricuspid regurgitation (TR) to verify its value in determining TR severity in comparison with conventional quantitative and semiquantitative two-dimensional (2D) parameters has not been previously conducted. The aim of this study was to examine the utility and feasibility of directly measured VCA by 3D transthoracic CDE, its correlation with 2D echocardiographic measurements of TR, and its ability to determine severe TR. Ninety-two patients with mild or greater TR prospectively underwent 2D and 3D transthoracic echocardiography. Two-dimensional evaluation of TR severity included the ratio of jet area to right atrial area, vena contracta width, and quantification of effective regurgitant orifice area using the flow convergence method. Full-volume breath-hold 3D color data sets of TR were obtained using a real-time 3D echocardiography system. VCA was directly measured by 3D-guided direct planimetry of the color jet. Subgroup analysis included the presence of a pacemaker, eccentricity of the TR jet, ellipticity of the orifice shape, underlying TR mechanism, and baseline rhythm. Three-dimensional VCA correlated well with effective regurgitant orifice area (r = 0.62, P < .0001), moderately with vena contracta width (r = 0.42, P < .0001), and weakly with jet area/right atrial area ratio. Subgroup analysis comparing 3D VCA with 2D effective regurgitant orifice area demonstrated excellent correlation for organic TR (r = 0.86, P < .0001), regular rhythm (r = 0.78, P < .0001), and circular orifice (r = 0.72, P < .0001) but poor correlation in atrial fibrillation rhythm (r = 0.23, P = .0033). Receiver operating characteristic curve analysis for 3D VCA demonstrated good accuracy for severe TR determination. 
Three-dimensional VCA measurement is feasible and obtainable in the majority of patients with mild or greater TR. Three-dimensional VCA measurement is also feasible in patients with atrial fibrillation but performed poorly even with <20% cycle length variation. Three-dimensional VCA has good cutoff accuracy in determining severe TR. This simple, straightforward 3D color Doppler measurement shows promise as an alternative for the quantification of TR. Copyright © 2013 American Society of Echocardiography. Published by Mosby, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Situmorang, B. H.; Setiawan, M. P.; Tosida, E. T.
2017-01-01
Refractive errors are abnormalities of the refraction of light in which images do not focus precisely on the retina, resulting in blurred vision [1]. Refractive errors require the patient to wear glasses or contact lenses for eyesight to return to normal. The appropriate glasses or contact lenses differ from person to person, influenced by patient age, tear production, vision prescription, and astigmatism. Because the eye is a vitally important organ of the human body, accuracy in determining which glasses or contact lenses to use is required. This research aims to develop a decision support system that can recommend the right contact lenses for refractive error patients with 100% accuracy. The Iterative Dichotomiser 3 (ID3) classification method generates gain and entropy values for attributes that include the sample code, patient age, astigmatism, tear production rate, vision prescription, and class, which together determine the resulting decision tree. Against the eye specialist's assessment, the training data yielded an accuracy of 96.7% and an error rate of 3.3%; the confusion matrix test yielded an accuracy of 96.1% and an error rate of 3.1%; and the testing data yielded an accuracy of 100% and an error rate of 0%.
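The ID3 criterion mentioned above selects, at each node, the attribute whose split yields the highest information gain (class entropy minus the weighted entropy remaining after the split). A minimal sketch follows; the toy records and lens classes are hypothetical, loosely mirroring the paper's attributes, not its training data.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr_index):
    """ID3 splitting criterion: class entropy minus the weighted entropy
    remaining after partitioning the rows by one attribute's values."""
    n = len(labels)
    by_value = {}
    for row, lab in zip(rows, labels):
        by_value.setdefault(row[attr_index], []).append(lab)
    remainder = sum(len(sub) / n * entropy(sub) for sub in by_value.values())
    return entropy(labels) - remainder

# Hypothetical records: (age, astigmatic, tear_production) -> lens class
rows = [("young", "no", "normal"), ("young", "yes", "reduced"),
        ("presbyopic", "no", "normal"), ("presbyopic", "yes", "normal")]
labels = ["soft", "none", "soft", "hard"]
gains = [info_gain(rows, labels, i) for i in range(3)]
root = max(range(3), key=lambda i: gains[i])   # attribute chosen as the root
```

In this toy case the astigmatism attribute (index 1) has the highest gain and would become the root of the decision tree.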
NASA Technical Reports Server (NTRS)
Mazarico, Erwan M.; Genova, Antonio; Goossens, Sander; Lemoine, Gregory; Neumann, Gregory A.; Zuber, Maria T.; Smith, David E.; Solomon, Sean C.
2014-01-01
We have analyzed three years of radio tracking data from the MESSENGER spacecraft in orbit around Mercury and determined the gravity field, planetary orientation, and ephemeris of the innermost planet. With improvements in spatial coverage, force modeling, and data weighting, we refined an earlier global gravity field both in quality and resolution, and we present here a spherical harmonic solution to degree and order 50. In this field, termed HgM005, uncertainties in low-degree coefficients are reduced by an order of magnitude relative to the earlier global field, and we obtained a preliminary value of the tidal Love number k2 of 0.451 +/- 0.014. We also estimated Mercury's pole position, and we obtained an obliquity value of 2.06 +/- 0.16 arcmin, in good agreement with analysis of Earth-based radar observations. From our updated rotation period (58.646146 +/- 0.000011 days) and Mercury ephemeris, we verified experimentally the planet's 3:2 spin-orbit resonance to greater accuracy than previously possible. We present a detailed analysis of the HgM005 covariance matrix, and we describe some near-circular frozen orbits around Mercury that could be advantageous for future exploration.
NASA Astrophysics Data System (ADS)
Hegde, Ganesh; Povolotskyi, Michael; Kubis, Tillmann; Boykin, Timothy; Klimeck, Gerhard
2014-03-01
Semi-empirical Tight Binding (TB) is known to be a scalable and accurate atomistic representation for electron transport in realistically extended nano-scale semiconductor devices that might contain millions of atoms. In this paper, an environment-aware and transferable TB model suitable for electronic structure and transport simulations in technologically relevant metals, metallic alloys, metal nanostructures, and metallic interface systems is described. Part I of this paper describes the development and validation of the new TB model. The new model incorporates intra-atomic diagonal and off-diagonal elements for implicit self-consistency and greater transferability across bonding environments. The dependence of the on-site energies on strain has been obtained by appealing to the Moments Theorem, which links closed electron paths in the system to energy moments of the angular-momentum-resolved local density of states obtained ab initio. The model matches self-consistent density functional theory electronic structure results for bulk face-centered cubic metals with and without strain, metallic alloys, metallic interfaces, and metallic nanostructures with high accuracy and can be used in predictive electronic structure and transport problems in metallic systems at realistically extended length scales.
Enforcing realizability in explicit multi-component species transport
McDermott, Randall J.; Floyd, Jason E.
2015-01-01
We propose a strategy to guarantee realizability of species mass fractions in explicit time integration of the partial differential equations governing fire dynamics, which is a multi-component transport problem (realizability requires that all mass fractions be greater than or equal to zero and that they sum to unity). For a mixture of n species, the conventional strategy is to solve for n − 1 species mass fractions and to obtain the nth (or “background”) species mass fraction from one minus the sum of the others. The numerical difficulties inherent in the background species approach are discussed and the potential for realizability violations is illustrated. The new strategy solves all n species transport equations and obtains density from the sum of the species mass densities. To guarantee realizability, the species mass densities must remain positive (semidefinite). A scalar boundedness correction is proposed that is based on a minimal diffusion operator. The overall scheme is implemented in a publicly available large-eddy simulation code called the Fire Dynamics Simulator. A set of test cases is presented to verify that the new strategy enforces realizability, does not generate spurious mass, and maintains second-order accuracy for transport. PMID:26692634
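The core idea above (solve all n species, keep the species mass densities nonnegative, and recover the mixture density as their sum, so the mass fractions sum to one by construction) can be sketched as follows. This is an illustration of the strategy described in the abstract, not the Fire Dynamics Simulator source code, and the state values are hypothetical.

```python
import numpy as np

def enforce_realizability(rho_species):
    """Given per-species mass densities from an explicit transport step,
    clip small negative values (positivity), recover the mixture density
    as the sum of species densities (no "background" species), and form
    realizable mass fractions."""
    rho_s = np.maximum(rho_species, 0.0)   # species densities semidefinite
    rho = rho_s.sum(axis=0)                # density = sum over all n species
    Y = rho_s / rho                        # mass fractions sum to one exactly
    return rho, Y

# Hypothetical state: n = 3 species in 2 cells, with one value slightly
# negative due to numerical transport error.
rho_species = np.array([[1.0,    0.2],
                        [-1e-12, 0.1],
                        [0.5,    0.3]])
rho, Y = enforce_realizability(rho_species)
```

Note that in the actual scheme the clipped mass is redistributed by a minimal diffusion operator so that no spurious mass is created; the simple clip here only illustrates the realizability constraints.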
Kraus, Jodi; Gupta, Rupal; Yehl, Jenna; Lu, Manman; Case, David A; Gronenborn, Angela M; Akke, Mikael; Polenova, Tatyana
2018-03-22
Magic angle spinning NMR spectroscopy is uniquely suited to probe the structure and dynamics of insoluble proteins and protein assemblies at atomic resolution, with NMR chemical shifts containing rich information about biomolecular structure. Access to this information, however, is problematic, since accurate quantum mechanical calculation of chemical shifts in proteins remains challenging, particularly for 15NH. Here we report on isotropic chemical shift predictions for the carbohydrate recognition domain of microcrystalline galectin-3, obtained using hybrid quantum mechanics/molecular mechanics (QM/MM) calculations implemented with an automated fragmentation approach, and using very high resolution (0.86 Å lactose-bound and 1.25 Å apo form) X-ray crystal structures. The resolution of the X-ray crystal structure used as input into the AF-NMR program did not affect the accuracy of the chemical shift calculations to any significant extent. Excellent agreement between experimental and computed shifts is obtained for 13Cα, while larger scatter is observed for 15NH chemical shifts, which are influenced to a greater extent by electrostatic interactions, hydrogen bonding, and solvation.
Raju, K V S N; Pavan Kumar, K S R; Siva Krishna, N; Madhava Reddy, P; Sreenivas, N; Kumar Sharma, Hemant; Himabindu, G; Annapurna, N
2016-01-01
A capillary gas chromatography method with a short run time, using a flame ionization detector, has been developed for the quantitative trace-level determination of mesityl oxide and diacetone alcohol in the atazanavir sulfate drug substance. Separation was achieved on a fused silica capillary column coated with a 5% diphenyl / 95% dimethyl polysiloxane stationary phase (Rtx-5, 30 m x 0.53 mm x 5.0 µm). The run time was 20 min employing a programmed temperature with a split mode (1:5), and the method was validated for specificity, sensitivity, precision, linearity, and accuracy. The detection and quantitation limits were 5 µg/g and 10 µg/g, respectively, for both analytes. The method was found to be linear in the range between 10 µg/g and 150 µg/g with a correlation coefficient greater than 0.999, and the average recoveries obtained in atazanavir sulfate were 102.0% and 103.7% for mesityl oxide and diacetone alcohol, respectively. The developed method was found to be robust and rugged. The detailed experimental results are discussed in this research paper.
NASA Astrophysics Data System (ADS)
Zhao, Yinjian
2017-09-01
Aiming at high simulation accuracy, a Particle-Particle (PP) Coulombic molecular dynamics model is implemented to study electron-ion temperature relaxation. In this model, Coulomb's law is applied directly in a bounded system with two cutoffs at short and long length scales. By increasing the range between the two cutoffs, it is found that the relaxation rate deviates from the BPS theory and approaches the LS theory and the GMS theory. The effective minimum and maximum impact parameters (bmin* and bmax*) are also obtained. For the simulated plasma condition, bmin* is about 6.352 times smaller than the Landau length (bC), and bmax* is about 2 times larger than the Debye length (λD), where bC and λD are the values used in the LS theory. Surprisingly, the effective relaxation time obtained from the PP model is very close to the LS theory and the GMS theory, even though the effective Coulomb logarithm is two times greater than the one used in the LS theory. In addition, this work shows that the PP model, commonly considered computationally expensive, is becoming practicable via GPU parallel computing techniques.
Probabilistic Design of a Plate-Like Wing to Meet Flutter and Strength Requirements
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson; Krishnamurthy, T.; Mason, Brian H.; Smith, Steven A.; Naser, Ahmad S.
2002-01-01
An approach is presented for carrying out reliability-based design of a metallic, plate-like wing to meet strength and flutter requirements that are given in terms of risk/reliability. The design problem is to determine the thickness distribution such that wing weight is a minimum and the probability of failure is less than a specified value. Failure is assumed to occur if either the flutter speed is less than a specified allowable or the stress caused by a pressure loading is greater than a specified allowable. Four uncertain quantities are considered: wing thickness, calculated flutter speed, allowable stress, and magnitude of a uniform pressure load. The reliability-based design optimization approach described herein starts with a design obtained using conventional deterministic design optimization with margins on the allowables. Reliability is calculated using Monte Carlo simulation with response surfaces that provide values of stresses and flutter speed. During the reliability-based design optimization, the response surfaces and move limits are coordinated to ensure accuracy of the response surfaces. Studies carried out in the paper show the relationship between reliability and weight and indicate that, for the design problem considered, increases in reliability can be obtained with modest increases in weight.
Optimization of Scan Parameters to Reduce Acquisition Time for Diffusion Kurtosis Imaging at 1.5T.
Yokosawa, Suguru; Sasaki, Makoto; Bito, Yoshitaka; Ito, Kenji; Yamashita, Fumio; Goodwin, Jonathan; Higuchi, Satomi; Kudo, Kohsuke
2016-01-01
To shorten the acquisition time of diffusion kurtosis imaging (DKI) in 1.5-tesla magnetic resonance (MR) imaging, we investigated the effects of the number of b-values, diffusion directions, and number of signal averages (NSA) on the accuracy of DKI metrics. We obtained 2 image datasets with 30 gradient directions, 6 b-values up to 2500 s/mm(2), and 2 signal averages from 5 healthy volunteers and generated DKI metrics, i.e., mean, axial, and radial kurtosis (MK, K∥, and K⊥) maps, from various combinations of the datasets. These maps were compared with those from the full datasets by using the intraclass correlation coefficient (ICC). The MK and K⊥ maps generated from the datasets including only the b-value of 2500 s/mm(2) showed excellent agreement (ICC, 0.96 to 0.99). For the same acquisition time and diffusion directions, agreement was better for MK, K∥, and K⊥ maps obtained with 3 b-values (0, 1000, and 2500 s/mm(2)) and 4 signal averages than for maps obtained with any other combination of the number of b-values and NSA. Good agreement (ICC > 0.6) required at least 20 diffusion directions for all the metrics. MK and K⊥ maps with ICC greater than 0.95 can be obtained at 1.5T within 10 min (b-values = 0, 1000, and 2500 s/mm(2); 20 diffusion directions; 4 signal averages; slice thickness, 6 mm with no interslice gap; number of slices, 12).
Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis
Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.
2015-01-01
Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon-based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon-based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision varied greatly between media. The high-viscosity media tested showed very low accuracy and precision, and most other compounds showed either the same pattern, low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid-viscosity President Jet Regular Body and low-viscosity President Jet Light Body (Coltène Whaledent) were the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505
Harris, Brett S; De Cecco, Carlo N; Schoepf, U Joseph; Steinberg, Daniel H; Bayer, Richard R; Krazinski, Aleksander W; Dyer, Kevin T; Sandhu, Monique K; Zile, Michael R; Meinel, Felix G
2015-04-01
To assess the accuracy of computed tomographic (CT) examinations performed for the purpose of transcatheter aortic valve replacement (TAVR) planning to diagnose obstructive coronary artery disease (CAD). With institutional review board approval, waivers of informed consent, and in compliance with HIPAA, 100 consecutive TAVR candidates (61 men; mean age, 79.6 ± 9.9 years) who underwent both TAVR planning CT (with a dual-source CT system) and coronary catheter (CC) angiographic imaging were retrospectively analyzed. At both modalities, the presence of stenosis in the native coronary arteries was assessed. Additionally, all coronary bypass grafts were rated as patent or occluded. With CC angiographic imaging as the reference standard, the accuracy of CT for lesion detection on a per-vessel and per-patient basis was calculated. The accuracy of CT for the assessment of graft patency was also analyzed. For per-vessel and per-patient detection of stenosis of 50% or more in the native coronary arteries, CT imaging had, respectively, 94.4% and 98.6% sensitivity, 68.4% and 55.6% specificity, 94.7% and 93.8% negative predictive value (NPV), and 67.0% and 85.7% positive predictive value. The per-patient sensitivity of CT (at a threshold of 50% or greater stenosis) for detecting stenosis greater than 70% at CC angiographic imaging was 100%. All 12 vessels in which percutaneous coronary intervention was performed were correctly identified as demonstrating stenosis of 50% or greater with CT. There was agreement between CT and CC angiographic imaging regarding graft patency in 114 of 115 grafts identified with CC angiographic imaging. TAVR planning CT has high sensitivity and NPV for excluding obstructive CAD. An additional preprocedural CC angiographic examination may not be required in TAVR candidates with a CT examination that does not show obstructive CAD. © RSNA, 2014 Online supplemental material is available for this article.
SU-E-T-438: Frameless Cranial Stereotactic Radiosurgery Immobilization Effectiveness Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tseng, T; Green, S; Sheu, R
Purpose: To evaluate the immobilization effectiveness of the Brainlab frameless mask in cranial stereotactic radiosurgery (SRS). Methods: Two sets of setup images were collected pre- and post-treatment for 24 frameless SRS cases. The pre-treatment images were obtained after applying 2D-2D kV image-guided shifts with patients in treatment position and approved by physicians; the post-treatment images were taken immediately after treatment completion. All cases were treated on a Novalis linac with the ExacTrac positioning system and Exact Couch. The two image sets were compared with the correctional shifts measured by ExacTrac 6D auto-fusion. The shift differences were considered patient motion within the frameless mask and were used to evaluate its effectiveness for immobilization. A two-tailed paired t-test was applied for significance testing. Results: The correctional shifts (mean±STD, median) of pre- and post-treatment images were 0.33±0.27mm, 0.26mm and 0.34±0.27mm, 0.23mm (p=0.740) in the lateral direction; 0.32±0.29mm, 0.22mm and 0.48±0.30mm, 0.50mm (p=0.012) in the longitudinal direction; and 0.31±0.22mm, 0.24mm and 0.33±0.21mm, 0.36mm (p=0.623) in the vertical direction. The radial correctional shifts (mean±STD, median) of pre- and post-treatment images were 0.60±0.38mm, 0.45mm and 0.75±0.31mm, 0.66mm (p=0.033). The shift differences (mean±STD, median, maximum) were 0.35±0.28mm, 0.3mm, 1.05mm; 0.34±0.28mm, 0.3mm, 1.00mm; 0.24±0.15mm, 0.21mm, 0.60mm; and 0.61±0.32mm, 0.57mm, 1.40mm in the lateral, longitudinal, vertical, and radial directions, respectively. Two shifts greater than 1 mm (1.06mm and 1.02mm) were acquired from post-treatment images; however, the shift differences were only 0.09 and 0.19mm for these two shifts. Two patients with shift differences greater than 1mm (1.05 and 1.04mm) were observed, and these did not coincide with the two who had post-correctional shifts greater than 1mm.
Conclusion: Image-guided SRS allowed us to set up patients with sub-millimeter accuracy relative to the simulation position. However, patient motion during treatment could affect treatment accuracy. Our results show that the Brainlab frameless mask provides reasonable patient immobilization and maintains the mean post-treatment position within sub-millimeter accuracy, with some borderline results observed.
Effectiveness of Link Prediction for Face-to-Face Behavioral Networks
Tsugawa, Sho; Ohsaki, Hiroyuki
2013-01-01
Research on link prediction for social networks has been actively pursued. In link prediction for a given social network obtained from time-windowed observation, new link formation in the network is predicted from the topology of the obtained network. In contrast, recent advances in sensing technology have made it possible to obtain face-to-face behavioral networks, which are social networks representing face-to-face interactions among people. However, the effectiveness of link prediction techniques for face-to-face behavioral networks has not yet been explored in depth. To clarify this point, here we investigate the accuracy of conventional link prediction techniques for networks obtained from the history of face-to-face interactions among participants at an academic conference. Our findings were (1) that conventional link prediction techniques predict new link formation with a precision of 0.30–0.45 and a recall of 0.10–0.20, (2) that prolonged observation of social networks often degrades the prediction accuracy, (3) that the proposed decaying weight method leads to higher prediction accuracy than can be achieved by observing all records of communication and simply using them unmodified, and (4) that the prediction accuracy for face-to-face behavioral networks is relatively high compared to that for non-social networks, but not as high as for other types of social networks. PMID:24339956
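The conventional link-prediction setup described above can be sketched with one common index, the number of common neighbors, scored on unlinked pairs and evaluated by precision and recall against the links that later formed. The toy network and "future" links below are hypothetical, and this index is one conventional choice, not necessarily the exact technique evaluated in the paper.

```python
import itertools

def common_neighbor_scores(adj):
    """Score every unlinked node pair by its number of common neighbors
    (a classic link-prediction index). adj maps node -> set of neighbors."""
    scores = {}
    for u, v in itertools.combinations(sorted(adj), 2):
        if v not in adj[u]:                      # only currently unlinked pairs
            scores[(u, v)] = len(adj[u] & adj[v])
    return scores

def precision_recall(predicted, actual_new_links):
    """Precision and recall of a predicted set of new links."""
    tp = len(predicted & actual_new_links)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual_new_links) if actual_new_links else 0.0
    return precision, recall

# Network observed during a time window, and links that formed afterwards:
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
scores = common_neighbor_scores(adj)
top_k = set(sorted(scores, key=scores.get, reverse=True)[:2])  # top-2 predictions
p, r = precision_recall(top_k, {(1, 4), (2, 5)})
```

The decaying-weight idea from the abstract would enter by replacing the raw neighbor counts with interaction weights that decay with the age of each recorded contact.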
Emotion perception accuracy and bias in face-to-face versus cyberbullying.
Ciucci, Enrica; Baroncelli, Andrea; Nowicki, Stephen
2014-01-01
The authors investigated the association of traditional and cyber forms of bullying and victimization with emotion perception accuracy and emotion perception bias. Four basic emotions were considered (i.e., happiness, sadness, anger, and fear); 526 middle school students (280 females; M age = 12.58 years, SD = 1.16 years) were recruited, and emotionality was controlled. Results indicated no significant findings for girls. Boys with higher levels of traditional bullying did not show any deficit in perception accuracy of emotions, but they were prone to identify happiness and fear in faces when a different emotion was expressed; in addition, male cyberbullying was related to greater accuracy in recognizing fear. In terms of the victims, cyber victims had a global problem in recognizing emotions and a specific problem in processing anger and fear. It was concluded that emotion perception accuracy and bias were associated with bullying and victimization for boys not only in traditional settings but also in the electronic ones. Implications of these findings for possible intervention are discussed.
3D Higher Order Modeling in the BEM/FEM Hybrid Formulation
NASA Technical Reports Server (NTRS)
Fink, P. W.; Wilton, D. R.
2000-01-01
Higher order divergence- and curl-conforming bases have been shown to provide significant benefits, in both convergence rate and accuracy, in the 2D hybrid finite element/boundary element formulation (P. Fink and D. Wilton, National Radio Science Meeting, Boulder, CO, Jan. 2000). A critical issue in achieving the potential for accuracy of the approach is the accurate evaluation of all matrix elements. These involve products of high order polynomials and, in some instances, singular Green's functions. In the 2D formulation, the use of a generalized Gaussian quadrature method was found to greatly facilitate the computation and to improve the accuracy of the boundary integral equation self-terms. In this paper, a 3D, hybrid electric field formulation employing higher order bases and higher order elements is presented. The improvements in convergence rate and accuracy, compared to those resulting from lower order modeling, are established. Techniques developed to facilitate the computation of the boundary integral self-terms are also shown to improve the accuracy of these terms. Finally, simple preconditioning techniques are used in conjunction with iterative solution procedures to solve the resulting linear system efficiently. In order to handle the boundary integral singularities in the 3D formulation, the parent element (either a triangle or a rectangle) is subdivided into a set of sub-triangles with a common vertex at the singularity. The contribution to the integral from each of the sub-triangles is computed using the Duffy transformation to remove the singularity. This method is shown to greatly facilitate the self-term computation when the bases are of higher order. In addition, the sub-triangles can be further divided to achieve near-arbitrary accuracy in the self-term computation. An efficient method for subdividing the parent element is presented.
The accuracy obtained using higher order bases is compared to that obtained using lower order bases when the number of unknowns is approximately equal. Also, convergence rates obtained using higher order bases are compared to those obtained with lower order bases for selected sample
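The singularity-cancelling step this entry describes can be illustrated in miniature. For a 1/r kernel with the singular vertex at the origin of the triangle {0 ≤ y ≤ x ≤ 1}, the Duffy map x = u, y = ut has Jacobian u, which cancels the singularity and leaves a smooth integrand on the unit square, so ordinary Gauss-Legendre quadrature converges rapidly. A sketch; the specific kernel and triangle are our own choices, picked so the exact value ln(1 + √2) is known:

```python
import math
import numpy as np

def duffy_integrate(f, n=16):
    """Integrate f(x, y) over the triangle {0 <= y <= x <= 1} when f is
    singular at the origin vertex.  The Duffy map x = u, y = u*t has
    Jacobian u, cancelling a 1/r singularity, so plain Gauss-Legendre
    on the unit square converges rapidly."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    nodes = 0.5 * (nodes + 1.0)      # map nodes from [-1, 1] to [0, 1]
    weights = 0.5 * weights
    total = 0.0
    for u, wu in zip(nodes, weights):
        for t, wt in zip(nodes, weights):
            total += wu * wt * u * f(u, u * t)   # the factor u is the Jacobian
    return total

# 1/r kernel; the exact integral over this triangle is ln(1 + sqrt(2))
val = duffy_integrate(lambda x, y: 1.0 / math.sqrt(x * x + y * y))
```

After the change of variables the integrand here is simply 1/sqrt(1 + t^2), which is analytic, so even a modest quadrature order reaches near machine precision; this is the sense in which the transformation "removes" the singularity.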
Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.
Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa
2010-01-21
Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
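The common random number idea can be sketched for the simplest birth-death network (0 → X at rate k, X → 0 at rate γ·X); this toy model and its parameter values are our own, chosen because the stationary sensitivity dE[X]/dk = 1/γ is known in closed form:

```python
import random

def ssa_birth_death(k, gamma, x0, t_end, rng):
    """Gillespie SSA for the birth-death network: 0 -(k)-> X, X -(gamma*X)-> 0."""
    t, x = 0.0, x0
    while True:
        birth, death = k, gamma * x
        total = birth + death
        t += rng.expovariate(total)          # time to the next reaction
        if t > t_end:
            return x
        if rng.random() * total < birth:     # choose which reaction fired
            x += 1
        else:
            x -= 1

def crn_sensitivity(k, gamma, x0, t_end, dk, n_runs):
    """Finite-difference estimate of d E[X(t_end)] / dk in which the nominal
    and perturbed simulations are driven by the SAME seeds (common random
    numbers), keeping the sample paths positively correlated and reducing
    the variance of the difference."""
    acc = 0.0
    for i in range(n_runs):
        x_nom = ssa_birth_death(k, gamma, x0, t_end, random.Random(i))
        x_per = ssa_birth_death(k + dk, gamma, x0, t_end, random.Random(i))
        acc += (x_per - x_nom) / dk
    return acc / n_runs
```

With k = 10, γ = 1 the estimate should land near 1; replacing `random.Random(i)` with independent generators for the two runs inflates the variance of the estimator, which is the effect the CRN (and, further, the CRP) method exploits.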
NASA Technical Reports Server (NTRS)
Smith, D. R.
1982-01-01
The Purdue Regional Objective Analysis of the Mesoscale (PROAM) is a Barnes-type scheme for the analysis of surface meteorological data. Modifications are introduced to the original version in order to increase its flexibility and to permit greater ease of usage. The code was rewritten for an interactive computer environment. Furthermore, a multiple iteration technique suggested by Barnes was implemented for greater accuracy. PROAM was subjected to a series of experiments in order to evaluate its performance under a variety of analysis conditions. The tests include use of a known analytic temperature distribution in order to quantify error bounds for the scheme. Similar experiments were conducted using actual atmospheric data. Results indicate that the multiple iteration technique increases the accuracy of the analysis. Furthermore, the tests verify appropriate values for the analysis parameters in resolving meso-beta scale phenomena.
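The multiple-iteration Barnes scheme follows a simple successive-correction pattern: a first Gaussian-weighted pass interpolates the observations, then each further pass spreads the observation-point residuals with a sharpened weight function. A schematic version; the parameter names and the convergence factor γ are illustrative, not PROAM's actual settings:

```python
import math

def barnes_analysis(obs, grid, kappa, gamma=0.3, passes=2):
    """Barnes-type successive-correction analysis of scattered observations.
    obs:   list of (x, y, value) observation triples
    grid:  list of (x, y) analysis points
    kappa: Gaussian weight parameter; gamma shrinks it on each correction pass."""
    def interp(points, values, kap):
        out = []
        for px, py in points:
            w = [math.exp(-((px - ox) ** 2 + (py - oy) ** 2) / kap)
                 for ox, oy, _ in obs]
            out.append(sum(wi * v for wi, v in zip(w, values)) / sum(w))
        return out

    vals = [v for _, _, v in obs]
    obs_xy = [(ox, oy) for ox, oy, _ in obs]
    kap = kappa
    grid_an = interp(grid, vals, kap)     # first (smoothing) pass
    obs_an = interp(obs_xy, vals, kap)
    for _ in range(passes - 1):           # correction passes
        kap *= gamma                      # sharper response each iteration
        resid = [v - a for v, a in zip(vals, obs_an)]
        grid_an = [g + c for g, c in zip(grid_an, interp(grid, resid, kap))]
        obs_an = [a + c for a, c in zip(obs_an, interp(obs_xy, resid, kap))]
    return grid_an
```

Because each extra pass feeds back the residuals at the observation sites, the analysis converges toward the observed values there, which is the sense in which the multiple iteration technique "increases the accuracy of the analysis".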
Price, Jodi; Hertzog, Christopher; Dunlosky, John
2008-09-01
Age-related differences in updating knowledge about strategy effectiveness after task experience have not been consistently found, perhaps because the magnitude of observed knowledge updating has been rather meager for both age groups. We examined whether creating homogeneous blocks of recall tests based on two strategies used at encoding (imagery and repetition) would enhance people's learning about strategy effects on recall. Younger and older adults demonstrated greater knowledge updating (as measured by questionnaire ratings of strategy effectiveness and by global judgments of performance) with blocked (versus random) testing. The benefit of blocked testing for absolute accuracy of global predictions was smaller for older than younger adults. However, individual differences in correlations of strategy effectiveness ratings and postdictions showed similar upgrades for both age groups. Older adults learn about imagery's superior effectiveness but do not accurately estimate the magnitude of its benefit, even after blocked testing.
Prior familiarity with components enhances unconscious learning of relations.
Scott, Ryan B; Dienes, Zoltan
2010-03-01
The influence of prior familiarity with components on the implicit learning of relations was examined using artificial grammar learning. Prior to training on grammar strings, participants were familiarized with either the novel symbols used to construct the strings or with irrelevant geometric shapes. Participants familiarized with the relevant symbols showed greater accuracy when judging the correctness of new grammar strings. Familiarity with elemental components did not increase conscious awareness of the basis for discriminations (structural knowledge) but increased accuracy even in its absence. The subjective familiarity of test strings predicted grammaticality judgments. However, prior exposure to relevant symbols did not increase overall test string familiarity or reliance on familiarity when making grammaticality judgments. Familiarity with the symbols increased the learning of relations between them (bigrams and trigrams) thus resulting in greater familiarity for grammatical versus ungrammatical strings. The results have important implications for models of implicit learning.
Matrix-vector multiplication using digital partitioning for more accurate optical computing
NASA Technical Reports Server (NTRS)
Gary, C. K.
1992-01-01
Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
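The partitioning idea can be illustrated purely digitally: each operand is split into small base-b digits, the digit-level products (the part an analog optical processor would compute at low precision) are formed, and the partials are recombined with powers of the base. A toy integer version; the base and digit count are arbitrary choices for illustration:

```python
def to_digits(n, base, ndig):
    """Split a nonnegative integer into ndig base-`base` digits, LSB first."""
    digits = []
    for _ in range(ndig):
        digits.append(n % base)
        n //= base
    return digits

def partitioned_matvec(A, x, base=16, ndig=4):
    """Matrix-vector product assembled from low-precision digit products.
    Each operand (a nonnegative integer below base**ndig) is split into
    small digits; every digit-level product is below base**2, so it fits
    the limited dynamic range of an analog multiplier, and the partial
    products are recombined digitally with powers of the base."""
    y = [0] * len(A)
    for i, row in enumerate(A):
        for j, xj in enumerate(x):
            a_d = to_digits(row[j], base, ndig)
            x_d = to_digits(xj, base, ndig)
            for p, ad in enumerate(a_d):
                for q, xd in enumerate(x_d):
                    y[i] += ad * xd * base ** (p + q)   # ad*xd < base**2
    return y
```

The recombination is exact, so the result equals the full-precision product; the accuracy of the overall scheme is then limited only by the (much easier) requirement that each digit-level product be computed correctly.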
Comparison of two head-up displays in simulated standard and noise abatement night visual approaches
NASA Technical Reports Server (NTRS)
Cronn, F.; Palmer, E. A., III
1975-01-01
Situation and command head-up displays were evaluated for both standard and two segment noise abatement night visual approaches in a fixed base simulation of a DC-8 transport aircraft. The situation display provided glide slope and pitch attitude information. The command display provided glide slope information and flight path commands to capture a 3 deg glide slope. Landing approaches were flown in both zero wind and wind shear conditions. For both standard and noise abatement approaches, the situation display provided greater glidepath accuracy in the initial phase of the landing approaches, whereas the command display was more effective in the final approach phase. Glidepath accuracy was greater for the standard approaches than for the noise abatement approaches in all phases of the landing approach. Most of the pilots preferred the command display and the standard approach. Substantial agreement was found between each pilot's judgment of his performance and his actual performance.
Prediction algorithms for urban traffic control
DOT National Transportation Integrated Search
1979-02-01
The objectives of this study are to 1) review and assess the state-of-the-art of prediction algorithms for urban traffic control in terms of their accuracy and application, and 2) determine the prediction accuracy obtainable by examining the performa...
High-accuracy direct aerial platform orientation with tightly coupled GPS/INS system.
DOT National Transportation Integrated Search
2004-09-01
Obtaining sensor orientation by direct measurements is a rapidly emerging mapping technology. Modern GPS and INS systems allow for the direct determination of platform position and orientation at an unprecedented accuracy. In airborne surveying, airc...
Fan, Yong; Du, Jin Peng; Liu, Ji Jun; Zhang, Jia Nan; Qiao, Huan Huan; Liu, Shi Chang; Hao, Ding Jun
2018-06-01
A miniature spine-mounted robot has recently been introduced to further improve the accuracy of pedicle screw placement in spine surgery. However, the differences in accuracy between the robotic-assisted (RA) technique and the free-hand with fluoroscopy-guided (FH) method for pedicle screw placement are controversial. A meta-analysis was conducted to focus on this problem. Several randomized controlled trials (RCTs) and cohort studies involving RA and FH and published before January 2017 were searched for using the Cochrane Library, Ovid, Web of Science, PubMed, and EMBASE databases. A total of 55 papers were selected. After the full-text assessment, 45 clinical trials were excluded. The final meta-analysis included 10 articles. The accuracy of pedicle screw placement in the RA group was significantly greater than that in the FH group ("perfect accuracy": odds ratio, 95% confidence interval 1.38-2.07, P < .01; "clinically acceptable": odds ratio, 95% confidence interval 1.17-2.08, P < .01). There are significant differences in accuracy between RA surgery and FH surgery. It was demonstrated that the RA technique is superior to the conventional method in terms of the accuracy of pedicle screw placement.
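As a reminder of the arithmetic behind such pooled estimates, a fixed-effect (inverse-variance, Woolf) pooling of study odds ratios can be sketched as follows; this is a generic illustration, not necessarily the meta-analytic model the authors used:

```python
import math

def pooled_odds_ratio(studies, z=1.96):
    """Fixed-effect (inverse-variance, Woolf) pooling of study odds ratios.
    Each study is a 2x2 table (a, b, c, d): events/non-events in the two
    arms.  Returns the pooled OR and its 95% confidence interval."""
    num = den = 0.0
    for a, b, c, d in studies:
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d   # Woolf variance of the log OR
        num += log_or / var
        den += 1 / var
    pooled = num / den
    se = math.sqrt(1 / den)
    return math.exp(pooled), (math.exp(pooled - z * se),
                              math.exp(pooled + z * se))
```

A pooled confidence interval lying entirely above 1, as both intervals reported here do, is what licenses the conclusion that RA placement is significantly more accurate than FH placement.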
McGarvey, Ciaran; Harb, Ziad; Smith, Christian; Houghton, Russell; Corbett, Steven; Ajuied, Adil
2016-02-01
To compare the diagnostic accuracy of magnetic resonance imaging (MRI), 2-dimensional magnetic resonance arthrogram (MRA) and 3-dimensional isotropic MRA in the diagnosis of rotator cuff tears when performed exclusively at 3-T. A systematic review was undertaken of the Cochrane, MEDLINE and PubMed databases in accordance with the PRISMA guidelines. Studies comparing 3-T MRI or 3-T MRA (index tests) to arthroscopic surgical findings (reference test) were included. Methodological appraisal was performed using QUADAS 2. Pooled sensitivity and specificity were calculated and summary receiver-operating curves generated. Kappa coefficients quantified inter-observer reliability. Fourteen studies comprising 1332 patients were identified for inclusion. Twelve studies were retrospective and there were concerns regarding index test bias and applicability in nine and six studies respectively. Reference test bias was a concern in all studies. Both 3-T MRI and 3-T MRA showed similar excellent diagnostic accuracy for full-thickness supraspinatus tears. Concerning partial-thickness supraspinatus tears, 3-T 2D MRA was significantly more sensitive (86.6 vs. 80.5 %, p = 0.014) but significantly less specific (95.2 vs. 100 %, p < 0.001). There was a trend towards greater accuracy in the diagnosis of subscapularis tears with 3-T MRA. Three-Tesla 3D isotropic MRA showed similar accuracy to 3-T conventional 2D MRA. Three-Tesla MRI appeared equivalent to 3-T MRA in the diagnosis of full- and partial-thickness tears, although there was a trend towards greater accuracy in the diagnosis of subscapularis tears with 3-T MRA. Three-Tesla 3D isotropic MRA appears equivalent to 3-T 2D MRA for all types of tears.
Effects of accuracy constraints on reach-to-grasp movements in cerebellar patients.
Rand, M K; Shimansky, Y; Stelmach, G E; Bracha, V; Bloedel, J R
2000-11-01
Reach-to-grasp movements of patients with pathology restricted to the cerebellum were compared with those of normal controls. Two types of paradigms with different accuracy constraints were used to examine whether cerebellar impairment disrupts the stereotypic relationship between arm transport and grip aperture and whether the variability of this relationship is altered when greater accuracy is required. The movements were made to either a vertical dowel or to a cross bar of a small cross. All subjects were asked to reach for either target at a fast but comfortable speed, grasp the object between the index finger and thumb, and lift it a short distance off the table. In terms of the relationship between arm transport and grip aperture, the control subjects showed a high consistency in grip aperture and wrist velocity profiles from trial to trial for movements to both the dowel and the cross. The relationship between the maximum velocity of the wrist and the time at which grip aperture was maximal during the reach was highly consistent throughout the experiment. In contrast, the time of maximum grip aperture and maximum wrist velocity of the cerebellar patients was quite variable from trial to trial, and the relationship of these measurements also varied considerably. These abnormalities were present regardless of the accuracy requirement. In addition, the cerebellar patients required a significantly longer time to grasp and lift the objects than the control subjects. Furthermore, the patients exhibited a greater grip aperture during reach than the controls. These data indicate that the cerebellum contributes substantially to the coordination of movements required to perform reach-to-grasp movements. Specifically, the cerebellum is critical for executing this behavior with a consistent, well-timed relationship between the transport and grasp components. This contribution is apparent even when accuracy demands are minimal.
Kuchibhatla, Maragatha N.; Whitson, Heather E.; Batch, Bryan C.; Svetkey, Laura P.; Pieper, Carl F.; Kraus, William E.; Cohen, Harvey J.; Blazer, Dan G.
2010-01-01
Background. To ascertain accuracy of self-reported height, weight (and hence body mass index) in African American and white women and men older than 70 years of age. Method. The sample consisted of cognitively intact participants at the third in-person wave (1992–1993) of the Duke Established Populations for Epidemiologic Studies of the Elderly (age 71 and older, N = 1761; residents of five adjacent counties, one urban, four rural). During in-person, in-home interviews using trained interviewers, height and weight were self-reported (and measured later in the same visit using a standardized protocol), and information were obtained on race, sex, and age. Results. Accuracy of self-reported height and weight was high (intraclass correlation coefficient 0.85 and 0.97, respectively) but differed as a function of race and age. On average, all groups overestimated their height; whereas (non-Hispanic) white men and women underestimated their weight, African Americans overestimated their weight. Overestimation of height and weight was more marked in persons 85 years and older. Specificity for overweight (body mass index [kg/m2] ≥ 25) and obesity (body mass index ≥ 30) ranged from 0.90 to 0.99 for African Americans and whites, but sensitivity was better for African Americans (overweight: 0.81, obesity: 0.89), than for whites (0.66 and 0.57, respectively). Conclusions. Height and weight self-reported by African Americans and whites over the age of 70 can be used in epidemiological studies, with greater caution needed for self-reports of whites, and of persons 85 years of age or older. PMID:20530243
Wallace, Jason A; Wang, Yuhang; Shi, Chuanyin; Pastoor, Kevin J; Nguyen, Bao-Linh; Xia, Kai; Shen, Jana K
2011-12-01
Proton uptake or release controls many important biological processes, such as energy transduction, virus replication, and catalysis. Accurate pK(a) prediction informs about proton pathways, thereby revealing detailed acid-base mechanisms. Physics-based methods in the framework of molecular dynamics simulations not only offer pK(a) predictions but also inform about the physical origins of pK(a) shifts and provide details of ionization-induced conformational relaxation and large-scale transitions. One such method is the recently developed continuous constant pH molecular dynamics (CPHMD) method, which has been shown to be an accurate and robust pK(a) prediction tool for naturally occurring titratable residues. To further examine the accuracy and limitations of CPHMD, we blindly predicted the pK(a) values for 87 titratable residues introduced in various hydrophobic regions of staphylococcal nuclease and variants. The predictions gave a root-mean-square deviation of 1.69 pK units from experiment, and there were only two pK(a)'s with errors greater than 3.5 pK units. Analysis of the conformational fluctuation of titrating side-chains in the context of the errors of calculated pK(a) values indicate that explicit treatment of conformational flexibility and the associated dielectric relaxation gives CPHMD a distinct advantage. Analysis of the sources of errors suggests that more accurate pK(a) predictions can be obtained for the most deeply buried residues by improving the accuracy in calculating desolvation energies. Furthermore, it is found that the generalized Born implicit-solvent model underlying the current CPHMD implementation slightly distorts the local conformational environment such that the inclusion of an explicit-solvent representation may offer improvement of accuracy. Copyright © 2011 Wiley-Liss, Inc.
Kaiju, Taro; Doi, Keiichi; Yokota, Masashi; Watanabe, Kei; Inoue, Masato; Ando, Hiroshi; Takahashi, Kazutaka; Yoshida, Fumiaki; Hirata, Masayuki; Suzuki, Takafumi
2017-01-01
Electrocorticogram (ECoG) has great potential as a source signal, especially for clinical BMI. Until recently, ECoG electrodes were commonly used for identifying epileptogenic foci in clinical situations, and such electrodes were low-density and large. Increasing the number and density of recording channels could enable the collection of richer motor/sensory information, and may enhance the precision of decoding and increase opportunities for controlling external devices. Several reports have aimed to increase the number and density of channels. However, few studies have discussed the actual validity of high-density ECoG arrays. In this study, we developed novel high-density flexible ECoG arrays and conducted decoding analyses with monkey somatosensory evoked potentials (SEPs). Using MEMS technology, we made 96-channel Parylene electrode arrays with an inter-electrode distance of 700 μm and recording site area of 350 μm 2 . The arrays were mainly placed onto the finger representation area in the somatosensory cortex of the macaque, and partially inserted into the central sulcus. With electrical finger stimulation, we successfully recorded and visualized finger SEPs with a high spatiotemporal resolution. We conducted offline analyses in which the stimulated fingers and intensity were predicted from recorded SEPs using a support vector machine. We obtained the following results: (1) Very high accuracy (~98%) was achieved with just a short segment of data (~15 ms from stimulus onset). (2) High accuracy (~96%) was achieved even when only a single channel was used. This result indicated placement optimality for decoding. (3) Higher channel counts generally improved prediction accuracy, but the efficacy was small for predictions with feature vectors that included time-series information. These results suggest that ECoG signals with high spatiotemporal resolution could enable greater decoding precision or external device control.
NASA Astrophysics Data System (ADS)
Siahpolo, Navid; Gerami, Mohsen; Vahdani, Reza
2016-09-01
Evaluating the capability of elastic Load Patterns (LPs), including seismic-code patterns and modified LPs such as the Method of Modal Combination (MMC) and Upper Bound Pushover Analysis (UBPA), in estimating inelastic demands of non-deteriorating steel moment frames is the main objective of this study. The Nonlinear Static Procedure (NSP) is implemented and its results are compared with Nonlinear Time History Analysis (NTHA). The focus is on the effects of near-fault pulse-like ground motions. The primary demands of interest are the maximum floor displacement, the maximum story drift angle over the height, the maximum global ductility, the maximum inter-story ductility, and the capacity curves. Five types of LPs are selected and the inelastic demands are calculated under four levels of inter-story target ductility (μt) using OpenSees software. The results show that the increase in μt coincides with the migration of the peak demands over the height from the top to the bottom stories; therefore, all LPs estimate the story lateral displacement accurately at the lower stories, and the results are almost independent of the number of stories. The inter-story drift angle (IDR) obtained from the MMC method is the most accurate among the LPs considered, although its accuracy decreases with increasing μt, so that with an increasing number of stories the IDR is smaller or greater than the NTHA values depending on the location considered. In addition, increasing μt decreases the accuracy of all LPs in identifying the critical story position; in this case, the MMC method agrees best with the distribution of inter-story ductility over the height.
Simon, Liliana; Saint-Louis, Patrick; Amre, Devendra K; Lacroix, Jacques; Gauvin, France
2008-07-01
To compare the accuracy of procalcitonin and C-reactive protein as diagnostic markers of bacterial infection in critically ill children at the onset of systemic inflammatory response syndrome (SIRS). Prospective cohort study. Tertiary care, university-affiliated pediatric intensive care unit (PICU). Consecutive patients with SIRS. From June to December 2002, all PICU patients were screened daily to include cases of SIRS. At inclusion (onset of SIRS), procalcitonin and C-reactive protein levels as well as an array of cultures were obtained. Diagnosis of bacterial infection was made a posteriori by an adjudicating process (consensus of experts unaware of the results of procalcitonin and C-reactive protein). Baseline and daily data on severity of illness, organ dysfunction, and outcome were collected. Sixty-four patients were included in the study and were a posteriori divided into the following groups: bacterial SIRS (n = 25) and nonbacterial SIRS (n = 39). Procalcitonin levels were significantly higher in patients with bacterial infection compared with patients without bacterial infection (p = .01). The area under the receiver operating characteristic curve for procalcitonin was greater than that for C-reactive protein (0.71 vs. 0.65, respectively). A positive procalcitonin level (>or=2.5 ng/mL), when added to bedside clinical judgment, increased the likelihood of bacterial infection from 39% to 92%, while a negative C-reactive protein level (<40 mg/L) decreased the probability of bacterial infection from 39% to 2%. Procalcitonin is better than C-reactive protein for differentiating bacterial from nonbacterial SIRS in critically ill children, although the accuracy of both tests is moderate. Diagnostic accuracy could be enhanced by combining these tests with bedside clinical judgment.
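The bedside-probability updates quoted above (39% to 92% after a positive procalcitonin, 39% to 2% after a negative C-reactive protein) follow directly from likelihood ratios: post-test odds = pre-test odds × LR. A small sketch; the sensitivity and specificity values in the example are hypothetical, not the study's:

```python
def post_test_probability(pre, sens, spec, positive):
    """Update a pre-test disease probability from a test result using
    likelihood ratios: post-test odds = pre-test odds * LR, where
    LR+ = sens/(1-spec) for a positive result and LR- = (1-sens)/spec
    for a negative one."""
    lr = sens / (1.0 - spec) if positive else (1.0 - sens) / spec
    pre_odds = pre / (1.0 - pre)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)
```

With the study's 39% pre-test probability, a test with a large LR+ pushes the probability toward certainty of infection, while a small LR- pushes it toward exclusion, which is exactly how the authors combine the markers with clinical judgment.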
Chen, Baisheng; Wu, Huanan; Li, Sam Fong Yau
2014-03-01
To overcome the challenging task of selecting an appropriate pathlength for wastewater chemical oxygen demand (COD) monitoring with high accuracy by UV-vis spectroscopy in the wastewater treatment process, a variable pathlength approach combined with partial-least-squares regression (PLSR) was developed in this study. Two new strategies were proposed to extract relevant information from the UV-vis spectral data of variable pathlength measurements. The first strategy was data fusion, with two data fusion levels: low-level data fusion (LLDF) and mid-level data fusion (MLDF). Predictive accuracy was found to improve, indicated by lower root-mean-square errors of prediction (RMSEP) compared with those obtained for single pathlength measurements. Both fusion levels were found to deliver very robust PLSR models, with residual predictive deviations (RPD) greater than 3 (i.e., 3.22 and 3.29, respectively). The second strategy involved calculating the slopes of absorbance against pathlength at each wavelength to generate slope-derived spectra. Without the requirement to select an optimal pathlength, the predictive accuracy (RMSEP) was improved by 20-43% as compared to single pathlength spectroscopy. Compared with the nine-factor models from the fusion strategy, the PLSR model from slope-derived spectroscopy was found to be more parsimonious, with only five factors, and more robust, with a residual predictive deviation (RPD) of 3.72. It also offered excellent correlation between predicted and measured COD values, with R(2) of 0.936. In sum, variable pathlength spectroscopy with the two proposed data analysis strategies proved successful in enhancing the prediction performance of COD in wastewater and showed high potential for application in on-line water quality monitoring. Copyright © 2013 Elsevier B.V. All rights reserved.
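The slope-derived strategy reduces, at each wavelength, to an ordinary least-squares slope of absorbance against pathlength; by the Beer-Lambert law that slope is proportional to the absorber concentration, which is why no single "optimal" pathlength must be chosen. A minimal sketch with made-up numbers:

```python
def slope_spectrum(pathlengths, absorbance):
    """Least-squares slope of absorbance versus pathlength at each wavelength.
    absorbance[i][k] is the absorbance at pathlength i and wavelength k.
    By the Beer-Lambert law A = eps * c * L, so the slope dA/dL at each
    wavelength is proportional to concentration."""
    n = len(pathlengths)
    mean_l = sum(pathlengths) / n
    sxx = sum((l - mean_l) ** 2 for l in pathlengths)
    slopes = []
    for k in range(len(absorbance[0])):
        col = [absorbance[i][k] for i in range(n)]
        mean_a = sum(col) / n
        sxy = sum((l - mean_l) * (a - mean_a)
                  for l, a in zip(pathlengths, col))
        slopes.append(sxy / sxx)
    return slopes
```

The resulting slope spectrum (one value per wavelength) is what would then be fed to PLSR in place of any single-pathlength spectrum; the intercepts absorb pathlength-independent offsets such as baseline drift.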
Accuracy of straight leg raise and slump tests in detecting lumbar disc herniation: a pilot study.
M'kumbuzi, V R P; Ntawukuriryayo, J T; Haminana, J D; Munyandamutsa, J; Nzakizwanimana, E
2012-01-01
To determine the accuracy of the Straight Leg Raise (SLR) and slump tests in detecting Lumbar Disc Herniation (LDH). Cross-sectional diagnostic accuracy study. Two referral hospitals in Kigali, Rwanda: King Faisal Hospital and Centre Hospitalier Universitaire de Kigali. All patients aged 18 to 70 who had an MRI and who were experiencing pain in the low back, leg, or low back and leg. Closed Magnetic Resonance Imaging (MRI) investigations for each patient, as read by a radiologist, were recorded by the first researcher and blinded to the other researchers. The SLR and slump tests were performed three times on each patient by independent testers who were blinded to the result of the first test. The test order was randomized for each subject, and the two tests were separated by a one-day wash-out period. Data were analyzed using a 2x2 table to ascertain diagnostic statistics, including sensitivity and specificity with 95% confidence intervals. Thirty-three of a possible 37 patients (mean age 41.58 ± 10 years) completed all of the tests. The sensitivity of the SLR test was greater (0.875; CI: 0.690-0.957) than that of the slump test (0.800; CI: 0.6087-0.911) (p = 0.01) in detecting LDH. The specificity of the SLR test was 0.429 (CI: 0.158-0.750) and of the slump test 0.714 (CI: 0.359-0.918). Substantial agreement (K = 0.774) was obtained between the SLR and MRI. The SLR was more accurate in detecting LDH. Further validation of this pilot finding is required by studying a larger sample.
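The quoted SLR sensitivity of 0.875 (CI 0.690-0.957) is reproducible from a 2x2 table with 21 of 24 MRI-positive patients testing positive, and the specificity of 0.429 (CI 0.158-0.750) from 3 of 7 MRI-negative patients, with the intervals matching Wilson score intervals; these counts are inferred for illustration, not taken from the paper's table:

```python
import math

def diagnostic_stats(tp, fp, fn, tn, z=1.96):
    """Sensitivity and specificity from a 2x2 diagnostic table,
    each with a Wilson score confidence interval."""
    def wilson(k, n):
        p = k / n
        centre = (p + z * z / (2 * n)) / (1 + z * z / n)
        half = (z / (1 + z * z / n)) * math.sqrt(
            p * (1 - p) / n + z * z / (4 * n * n))
        return centre - half, centre + half

    sens = tp / (tp + fn)       # positives correctly detected
    spec = tn / (tn + fp)       # negatives correctly excluded
    return {"sensitivity": (sens, wilson(tp, tp + fn)),
            "specificity": (spec, wilson(tn, tn + fp))}
```

The Wilson interval is preferred over the simple normal approximation for the small cell counts of a pilot study like this one, since it stays inside [0, 1] and remains sensible when a proportion is near 0 or 1.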
Ma, Liheng; Zhan, Dejun; Jiang, Guangwen; Fu, Sihua; Jia, Hui; Wang, Xingshu; Huang, Zongsheng; Zheng, Jiaxing; Hu, Feng; Wu, Wei; Qin, Shiqiao
2015-09-01
The attitude accuracy of a star sensor decreases rapidly when star images become motion-blurred under dynamic conditions. Existing techniques concentrate on a single frame of star images to solve this problem, and improvements are obtained to a certain extent. An attitude-correlated frames (ACF) approach, which concentrates on the features of the attitude transforms of adjacent star image frames, is proposed to improve upon the existing techniques. The attitude transforms between different star image frames are measured precisely by the strap-down gyro unit. With the ACF method, a much larger star image frame is obtained through the combination of adjacent frames. As a result, the degradation of attitude accuracy caused by motion blurring is compensated for. The improvement in attitude accuracy is approximately proportional to the square root of the number of correlated star image frames. Simulations and experimental results indicate that the ACF approach is effective in removing random noise and improving the attitude determination accuracy of the star sensor under highly dynamic conditions.
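The square-root scaling claimed for the ACF approach is the standard noise-averaging result: combining N frames with independent random errors reduces the error standard deviation by a factor of √N. A toy Monte Carlo (not the authors' simulation) illustrates it:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0       # per-frame random attitude error (arbitrary units)
n_frames = 16     # number of correlated frames combined

# many trials, each combining n_frames independent noisy measurements
trials = rng.normal(0.0, sigma, size=(100_000, n_frames))
single = trials[:, 0].std()          # error of a single-frame estimate
combined = trials.mean(axis=1).std() # error after combining n_frames
print(single / combined)             # ≈ sqrt(16) = 4
```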
Estimation of accuracy of time synchronization obtained by means of clock transportation
NASA Astrophysics Data System (ADS)
Zhang, Yuzhen; Ma, Dekang; Jin, Wenjing; Zhao, Gang; Huang, Peicheng
A portable clock experiment was carried out in October 1985 between Shanghai Observatory and Beijing Observatory using a small quartz clock made in Switzerland. The accuracy of time synchronization over 5 days is 70.18 microseconds, and the accuracy of determining the transmission time of short-wave signals is satisfactory for reducing the astronomical observations to the same master clock.
NASA Technical Reports Server (NTRS)
Folkner, W. M.; Border, J. S.; Nandi, S.; Zukor, K. S.
1993-01-01
A new radio metric positioning technique has demonstrated improved orbit determination accuracy for the Magellan and Pioneer Venus orbiters. The new technique, known as Same-Beam Interferometry (SBI), is applicable to the positioning of multiple planetary rovers, landers, and orbiters that may be observed simultaneously in the same beamwidth of Earth-based radio antennas. Measurements of carrier phase are differenced between spacecraft and between receiving stations to determine the plane-of-sky components of the separation vector(s) between the spacecraft. The SBI measurements complement the information contained in line-of-sight Doppler measurements, leading to improved orbit determination accuracy. Orbit determination solutions have been obtained for a number of 48-hour data arcs using combinations of Doppler, differenced-Doppler, and SBI data acquired in the spring of 1991. Orbit determination accuracy is assessed by comparing orbit solutions from adjacent data arcs. The orbit solution differences are shown to agree with expected orbit determination uncertainties. The results from this demonstration show that the orbit determination accuracy for Magellan obtained by using Doppler plus SBI data is better than the accuracy achieved using Doppler plus differenced-Doppler by a factor of four, and better than the accuracy achieved using only Doppler by a factor of eighteen. The orbit determination accuracy for Pioneer Venus Orbiter using Doppler plus SBI data is better than the accuracy using only Doppler data by 30 percent.
Optimizing Tsunami Forecast Model Accuracy
NASA Astrophysics Data System (ADS)
Whitmore, P.; Nyland, D. L.; Huang, P. Y.
2015-12-01
Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models is compared for seven events since 2006, based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after the fact provide improved methods for real-time forecasting of future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that assimilating sea level data into the models increases accuracy by approximately 15% for the events examined.
Ultrasound versus liver function tests for diagnosis of common bile duct stones.
Gurusamy, Kurinchi Selvan; Giljaca, Vanja; Takwoingi, Yemisi; Higgie, David; Poropat, Goran; Štimac, Davor; Davidson, Brian R
2015-02-26
Ultrasound and liver function tests (serum bilirubin and serum alkaline phosphatase) are used as screening tests for the diagnosis of common bile duct stones in people suspected of having common bile duct stones. There has been no systematic review of the diagnostic accuracy of ultrasound and liver function tests. To determine and compare the accuracy of ultrasound versus liver function tests for the diagnosis of common bile duct stones. We searched MEDLINE, EMBASE, Science Citation Index Expanded, BIOSIS, and Clinicaltrials.gov to September 2012. We searched the references of included studies to identify further studies and systematic reviews identified from various databases (Database of Abstracts of Reviews of Effects, Health Technology Assessment, Medion, and ARIF (Aggressive Research Intelligence Facility)). We did not restrict studies based on language or publication status, or whether data were collected prospectively or retrospectively. We included studies that provided the number of true positives, false positives, false negatives, and true negatives for ultrasound, serum bilirubin, or serum alkaline phosphatase. We only accepted studies that confirmed the presence of common bile duct stones by extraction of the stones (irrespective of whether this was done by surgical or endoscopic methods) for a positive test result, and absence of common bile duct stones by surgical or endoscopic negative exploration of the common bile duct, or symptom-free follow-up for at least six months for a negative test result as the reference standard in people suspected of having common bile duct stones. We included participants with or without prior diagnosis of cholelithiasis; with or without symptoms and complications of common bile duct stones, with or without prior treatment for common bile duct stones; and before or after cholecystectomy. At least two authors screened abstracts and selected studies for inclusion independently. 
Two authors independently collected data from each study. Where meta-analysis was possible, we used the bivariate model to summarise sensitivity and specificity. Five studies including 523 participants reported the diagnostic accuracy of ultrasound. One study (262 participants) compared the accuracy of ultrasound, serum bilirubin, and serum alkaline phosphatase in the same participants. All the studies included people with symptoms. One study included only participants without previous cholecystectomy; this information was not available from the remaining studies. All the studies were of poor methodological quality. The sensitivities for ultrasound ranged from 0.32 to 1.00, and the specificities ranged from 0.77 to 0.97. The summary sensitivity was 0.73 (95% CI 0.44 to 0.90) and the specificity was 0.91 (95% CI 0.84 to 0.95). At the median pre-test probability of common bile duct stones of 0.408, the post-test probability (95% CI) associated with positive ultrasound tests was 0.85 (95% CI 0.75 to 0.91), and with negative ultrasound tests was 0.17 (95% CI 0.08 to 0.33). The single study of liver function tests reported diagnostic accuracy at two cut-offs for bilirubin (greater than 22.23 μmol/L and greater than twice the normal limit) and two cut-offs for alkaline phosphatase (greater than 125 IU/L and greater than twice the normal limit). This study also assessed ultrasound and reported higher sensitivities for bilirubin and alkaline phosphatase at both cut-offs, but the specificities of the markers were higher only at the greater-than-twice-the-normal-limit cut-off. The sensitivity for ultrasound was 0.32 (95% CI 0.15 to 0.54), for bilirubin (cut-off greater than 22.23 μmol/L) 0.84 (95% CI 0.64 to 0.95), and for alkaline phosphatase (cut-off greater than 125 IU/L) 0.92 (95% CI 0.74 to 0.99).
The specificity for ultrasound was 0.95 (95% CI 0.91 to 0.97), for bilirubin (cut-off greater than 22.23 μmol/L) 0.91 (95% CI 0.86 to 0.94), and for alkaline phosphatase (cut-off greater than 125 IU/L) 0.79 (95% CI 0.74 to 0.84). No study reported the diagnostic accuracy of a combination of bilirubin and alkaline phosphatase, or combinations with ultrasound. Many people may have common bile duct stones in spite of having a negative ultrasound or liver function test. Such people may have to be re-tested with other modalities if the clinical suspicion of common bile duct stones is very high because of their symptoms. False-positive results are also possible, and further non-invasive testing is recommended to confirm common bile duct stones and avoid the risks of invasive testing. It should be noted that these results were based on few studies of poor methodological quality, and the results for ultrasound varied considerably between studies. Therefore, the results should be interpreted with caution. Further studies of high methodological quality are necessary to determine the diagnostic accuracy of ultrasound and liver function tests.
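The post-test probabilities quoted in this review follow from Bayes' rule in odds form: pre-test odds times the likelihood ratio gives post-test odds. A minimal sketch using the review's summary estimates (the helper function is ours, not the review's software):

```python
def post_test_probability(pre_p, sensitivity, specificity, positive):
    """Convert a pre-test probability to a post-test probability using the
    positive or negative likelihood ratio of the test."""
    pre_odds = pre_p / (1 - pre_p)
    if positive:
        lr = sensitivity / (1 - specificity)   # LR+ for a positive result
    else:
        lr = (1 - sensitivity) / specificity   # LR- for a negative result
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# ultrasound summary estimates: pre-test 0.408, sens 0.73, spec 0.91
print(round(post_test_probability(0.408, 0.73, 0.91, True), 2))   # 0.85
print(round(post_test_probability(0.408, 0.73, 0.91, False), 2))  # 0.17
```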
NASA Astrophysics Data System (ADS)
Guha, Daipayan; Jakubovic, Raphael; Gupta, Shaurya; Yang, Victor X. D.
2017-02-01
Computer-assisted navigation (CAN) may guide spinal surgeries, reliably reducing screw breach rates. Definitions of screw breach, if reported at all, vary widely across studies. Absolute quantitative error is theoretically a more precise and generalizable metric of navigation accuracy, but it has been computed variably and reported in fewer than 25% of clinical studies of CAN-guided pedicle screw accuracy. We reviewed a prospectively collected series of 209 pedicle screws placed with CAN guidance to characterize the correlation between clinical pedicle screw accuracy, based on postoperative imaging, and absolute quantitative navigation accuracy. We found that acceptable screw accuracy was achieved for significantly fewer screws based on the 2-mm grade than on the Heary grade, particularly in the lumbar spine. Inter-rater agreement was good for the Heary classification and moderate for the 2-mm grade, and significantly greater among radiologist than surgeon raters. Mean absolute translational/angular accuracies were 1.75 mm/3.13° and 1.20 mm/3.64° in the axial and sagittal planes, respectively. There was no correlation between clinical and absolute navigation accuracy, in part because surgeons appear to compensate for perceived translational navigation error by adjusting the screw medialization angle. Future studies of navigation accuracy should therefore report absolute translational and angular errors. Clinical screw grades based on postoperative imaging, if reported, may be more reliable if assigned by multiple radiologist raters.
DOT National Transportation Integrated Search
2004-09-01
Obtaining sensor orientation by direct measurements is a rapidly emerging mapping technology. Modern GPS and INS systems allow for the direct determination of platform position and orientation at an unprecedented accuracy. In airborne surveyi...
Generation of a high-accuracy regional DEM based on ALOS/PRISM imagery of East Antarctica
NASA Astrophysics Data System (ADS)
Shiramizu, Kaoru; Doi, Koichiro; Aoyama, Yuichi
2017-12-01
A digital elevation model (DEM) is used to estimate ice-flow velocities for an ice sheet and glaciers via Differential Interferometric Synthetic Aperture Radar (DInSAR) processing. The accuracy of DInSAR-derived displacement estimates depends upon the accuracy of the DEM. Therefore, we used stereo optical images, obtained with the panchromatic remote-sensing instrument for stereo mapping (PRISM) sensor mounted onboard the Advanced Land Observing Satellite (ALOS), to produce a new DEM ("PRISM-DEM") of part of the coastal region of Lützow-Holm Bay in Dronning Maud Land, East Antarctica. We verified the accuracy of the PRISM-DEM by comparing ellipsoidal heights with those of existing DEMs and with values obtained by satellite laser altimetry (ICESat/GLAS) and Global Navigation Satellite System surveying. The accuracy of the PRISM-DEM is estimated to be 2.80 m over the ice sheet, 4.86 m over individual glaciers, and 6.63 m over rock outcrops. By comparison, the estimated accuracy of the ASTER-GDEM, widely used in polar regions, is 33.45 m over the ice sheet, 14.61 m over glaciers, and 19.95 m over rock outcrops. For displacement measurements made along the radar line-of-sight by DInSAR, in conjunction with ALOS/PALSAR data, the accuracies of the PRISM-DEM and ASTER-GDEM correspond to estimation errors of <6.3 mm and <31.8 mm, respectively.
Factors affecting GEBV accuracy with single-step Bayesian models.
Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng
2018-01-01
A single-step approach to obtaining genomic predictions was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (GBLUP; SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more strongly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP in the 5- and 50-QTL scenarios. The SS-BayesB model obtained the lowest accuracy in the 500-QTL scenario. The SS-BayesA model was the most efficient and robust across all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait was controlled by fewer QTL.
Training and Required Reading Management Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Jerel
2009-08-13
This tool manages training and required reading for groups, facilities, etc., with abilities beyond the site training systems. TRRMTool imports training data from controlled site data sources/systems and provides greater management and reporting capability. Clients have been able to greatly reduce the time and effort required to manage training, achieve greater accuracy, foster individual accountability, and be proactive in verifying the training of support personnel to maintain compliance.
Dundas, Eva M.; Plaut, David C.; Behrmann, Marlene
2014-01-01
The adult human brain would appear to have specialized and independent neural systems for the visual processing of words and faces. Extensive evidence has demonstrated greater selectivity for written words in the left over right hemisphere, and, conversely, greater selectivity for faces in the right over left hemisphere. This study examines the emergence of these complementary neural profiles, as well as the possible relationship between them. Using behavioral and neurophysiological measures, in adults, we observed the standard finding of greater accuracy and a larger N170 ERP component in the left over right hemisphere for words, and conversely, greater accuracy and a larger N170 in the right over the left hemisphere for faces. We also found that, although children aged 7-12 years revealed the adult hemispheric pattern for words, they showed neither a behavioral nor a neural hemispheric superiority for faces. Of particular interest, the magnitude of their N170 for faces in the right hemisphere was related to that of the N170 for words in their left hemisphere. These findings suggest that the hemispheric organization of face recognition and of word recognition do not develop independently, and that word lateralization may precede and drive later face lateralization. A theoretical account for the findings, in which competition for visual representations unfolds over the course of development, is discussed. PMID:24933662
Assessing the accuracy of weather radar to track intense rain cells in the Greater Lyon area, France
NASA Astrophysics Data System (ADS)
Renard, Florent; Chapon, Pierre-Marie; Comby, Jacques
2012-01-01
Greater Lyon is a dense conurbation located in the Rhône Valley in the south-east of France. It counts 1.3 million inhabitants, and the rainfall hazard is of great concern. However, until now, studies of rainfall over Greater Lyon have been based only on the network of rain gauges, despite the presence of a C-band radar in the close vicinity. Consequently, the first aim of this study was to investigate the hydrological quality of this radar. This assessment, based on a comparison of radar estimates and rain-gauge values, concludes that the radar data have been of good overall quality since 2006. Given this good accuracy, the study took a further step and investigated the characteristics of the intense rain cells that are responsible for the majority of floods in the Greater Lyon area. Improved knowledge of these rain cells is important for anticipating dangerous events and improving the monitoring of the sewage system. This paper discusses the analysis of the ten most intense rainfall events in the 2001-2010 period. Spatial statistics pointed towards straight, linear movements of intense rain cells, independent of the ground surface conditions and the topography underneath. The speed of these cells was found to be nearly constant during a rainfall event but to vary from event to event, ranging on average from 25 to 66 km/h.
NASA Astrophysics Data System (ADS)
Kara, I. V.
This paper describes a simplified numerical model of passive artificial Earth satellite (AES) motion. The model accuracy is determined using the International Laser Ranging Service (ILRS) high-precision coordinates, which are freely available at http://ilrs.gsfc.nasa.gov. The differential equations of AES motion are solved by the Everhart numerical method of 17th and 19th orders with automatic correction of the integration step. The comparison between the AES coordinates computed with the motion model and the ILRS coordinates made it possible to determine the accuracy of the ephemerides obtained. As a result, the discrepancy of the computed Etalon-1 ephemerides from the ILRS data is about 10'' for a one-year ephemeris.
Effects of Emotion on Associative Recognition: Valence and Retention Interval Matter
Pierce, Benton H.; Kensinger, Elizabeth A.
2011-01-01
In two experiments, we examined the effects of emotional valence and arousal on associative binding. Participants studied negative, positive, and neutral word pairs, followed by an associative recognition test. In Experiment 1, with a short-delayed test, accuracy for intact pairs was equivalent across valences, whereas accuracy for rearranged pairs was lower for negative than for positive and neutral pairs. In Experiment 2, we tested participants after a one-week delay and found that accuracy was greater for intact negative than for intact neutral pairs, whereas rearranged pair accuracy was equivalent across valences. These results suggest that, although negative emotional valence impairs associative binding after a short delay, it may improve binding after a longer delay. The results also suggest that valence, as well as arousal, needs to be considered when examining the effects of emotion on associative memory. PMID:21401233
Testing of a technique for remotely measuring water salinity in an estuarine environment
NASA Technical Reports Server (NTRS)
Thomann, G. C.
1975-01-01
An aircraft experiment was flown on November 7, 1973 to test a technique for remote water salinity measurement. Apparent temperatures at 21 cm and 8-14 micron wavelengths were recorded on eight runs over a line along which the salinity varied from 5 to 30%. Boat measurements were used for calibration and accuracy calculations. Overall RMS accuracy over the complete range of salinities was 3.6%. Overall RMS accuracy for salinities greater than 10%, where the technique is more sensitive, was 2.6%. Much of this error is believed to be due to the inability to locate boat and aircraft positions exactly. The standard deviation over the eight runs for salinities greater than or equal to 10% is 1.4%; this error also contains a component due to mislocation of the aircraft. It is believed that operational use of the technique is possible with accuracies of 1-2%.
High accuracy wavelength calibration for a scanning visible spectrometer.
Scotti, Filippo; Bell, Ronald E
2010-10-01
Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ∼0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (∼0.005 Å) is possible, allowing absolute velocity measurements within ∼0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
Badke, Yvonne M; Bates, Ronald O; Ernst, Catherine W; Fix, Justin; Steibel, Juan P
2014-04-16
Genomic selection has the potential to increase genetic progress. Genotype imputation of high-density single-nucleotide polymorphism (SNP) genotypes can improve the cost efficiency of genomic breeding value (GEBV) prediction for pig breeding. Consequently, the objectives of this work were to: (1) estimate accuracy of genomic evaluation and GEBV for three traits in a Yorkshire population and (2) quantify the loss of accuracy of genomic evaluation and GEBV when genotypes were imputed under two scenarios: a high-cost, high-accuracy scenario in which only selection candidates were imputed from a low-density platform and a low-cost, low-accuracy scenario in which all animals were imputed using a small reference panel of haplotypes. Phenotypes and genotypes obtained with the PorcineSNP60 BeadChip were available for 983 Yorkshire boars. Genotypes of selection candidates were masked and imputed using tagSNP in the GeneSeek Genomic Profiler (10K). Imputation was performed with BEAGLE using 128 or 1800 haplotypes as reference panels. GEBV were obtained through an animal-centric ridge regression model using de-regressed breeding values as response variables. Accuracy of genomic evaluation was estimated as the correlation between estimated breeding values and GEBV in a 10-fold cross validation design. Accuracy of genomic evaluation using observed genotypes was high for all traits (0.65-0.68). Using genotypes imputed from a large reference panel (accuracy: R(2) = 0.95) for genomic evaluation did not significantly decrease accuracy, whereas a scenario with genotypes imputed from a small reference panel (R(2) = 0.88) did show a significant decrease in accuracy. Genomic evaluation based on imputed genotypes in selection candidates can be implemented at a fraction of the cost of a genomic evaluation using observed genotypes and still yield virtually the same accuracy. 
On the other hand, using a very small reference panel of haplotypes to impute both training animals and selection candidates results in lower accuracy of genomic evaluation.
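The abstract mentions GEBV prediction via ridge regression on marker genotypes. As a generic sketch of that idea (a plain SNP-effect ridge regression, not the authors' animal-centric model, de-regression, or BEAGLE imputation; all data below are toy values):

```python
import numpy as np

def ridge_marker_effects(X, y, lam):
    """Estimate SNP effects by ridge regression: b = (X'X + lam*I)^-1 X'y.
    GEBV for animals with genotypes X_new are then X_new @ b."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# toy data: 200 animals, 10 markers coded 0/1/2, purely additive phenotype
rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(200, 10)).astype(float)
b_true = rng.normal(0.0, 1.0, size=10)   # hypothetical marker effects
y = X @ b_true                           # noise-free phenotypes

b_hat = ridge_marker_effects(X, y, lam=1e-8)
gebv = X @ b_hat   # genomic breeding values for the same animals
```

In practice lam is set from the trait heritability and y would be noisy de-regressed breeding values; with noise-free toy data the estimated effects recover the true ones.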
Modeling Individual Differences in Response Time and Accuracy in Numeracy
Ratcliff, Roger; Thompson, Clarissa A.; McKoon, Gail
2015-01-01
In the study of numeracy, some hypotheses have been based on response time (RT) as a dependent variable and some on accuracy, and considerable controversy has arisen about the presence or absence of correlations between RT and accuracy, between RT or accuracy and individual differences such as IQ and math ability, and between various numeracy tasks. In this article, we show that an integration of the two dependent variables is required, which we accomplish with a theory-based model of decision making. We report data from four tasks: numerosity discrimination, number discrimination, memory for two-digit numbers, and memory for three-digit numbers. Accuracy correlated across tasks, as did RTs. However, the negative correlations that might be expected between RT and accuracy were not obtained; if a subject was accurate, it did not mean that they were fast (and vice versa). When the diffusion decision-making model (Ratcliff, 1978) was applied to the data, we found significant correlations across the tasks both in the quality of the numeracy information (drift rate) driving the decision process and in the speed/accuracy criterion settings, suggesting that similar numeracy skills and similar speed-accuracy settings are involved in the four tasks. In the model, accuracy is related to drift rate and RT is related to speed-accuracy criteria, but drift rate and criteria are not related to each other across subjects. This provides a theoretical basis for understanding why negative correlations were not obtained between accuracy and RT. We also manipulated criteria by instructing subjects to maximize either speed or accuracy, but still found correlations between the criteria settings between and within tasks, suggesting that the settings may represent an individual trait that can be modulated but not equated across subjects.
Our results demonstrate that a decision-making model may provide a way to reconcile inconsistent and sometimes contradictory results in numeracy research. PMID:25637690
Investigation of a Coupled Arrhenius-Type/Rossard Equation of AH36 Material
Qin, Qin; Tian, Ming-Liang; Zhang, Peng
2017-01-01
High-temperature tensile testing of AH36 material over a wide range of temperatures (1173-1573 K) and strain rates (10⁻⁴ to 10⁻² s⁻¹) was performed using a Gleeble system. These experimental stress-strain data were used to develop the constitutive equation. The constitutive equation of AH36 material was formulated based on the modified Arrhenius-type equation and the modified Rossard equation, respectively. The results indicate that the constitutive equation is strongly influenced by temperature and strain, especially strain. Moreover, there is good agreement between the predictions of the modified Arrhenius-type equation and the experimental results when the strain is greater than 0.02, and good agreement between the predictions of the modified Rossard equation and the experimental results when the strain is less than 0.02. Therefore, a coupled equation combining the modified Arrhenius-type and Rossard equations, switched according to the strain value, has been proposed to describe the constitutive behavior of AH36 material and improve accuracy. The correlation coefficient between the computed and experimental flow stress data was 0.998. The minimum value of the average absolute relative error shows the high accuracy of the coupled equation compared with the two modified equations. PMID:28772767
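For context, constitutive equations of this type are conventionally built on the Sellars-Tegart (Arrhenius-type) relation through the Zener-Hollomon parameter. The abstract does not give the paper's modified forms, so only the standard unmodified relation is sketched here for reference:

```latex
% Standard Arrhenius-type (Sellars-Tegart) hot-deformation relation
\dot{\varepsilon} = A\,[\sinh(\alpha\sigma)]^{n}\exp\!\left(-\frac{Q}{RT}\right),
\qquad
Z = \dot{\varepsilon}\exp\!\left(\frac{Q}{RT}\right) = A\,[\sinh(\alpha\sigma)]^{n},
\qquad
\sigma = \frac{1}{\alpha}\,\operatorname{arcsinh}\!\left[\left(\frac{Z}{A}\right)^{1/n}\right]
```

Here Q is the deformation activation energy, R the gas constant, and A, α, n material constants; the "modified" equations in the paper make such constants strain-dependent.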
Tidal deformation, Orbital Dynamics and JIMO
NASA Astrophysics Data System (ADS)
Ratcliff, J. T.; Wu, X.; Williams, J. G.
2003-12-01
Observations of Europa, Ganymede, and Callisto obtained during encounters by the Galileo spacecraft strongly suggest the possibility of liquid oceans under the icy shells of these Jovian satellites. The strong tidal environments in which these moons reside, and the fact that a planetary body with internal fluid undergoes greater deformation than an otherwise solid body, make a compelling case for using tidal observations as a method of ocean detection. Given the high degree of uncertainty in our knowledge of the interiors of these moons, a comprehensive geodetic program measuring different physical signatures related to tidal deformation and interior structure is preferable to using separate and various interior parameters that may not be as closely tied to actual measurable quantities. Potential and displacement tidal Love numbers, libration amplitudes of the surface ice shell and rocky mantle, static topography and gravity fields, and other quantities should all be included in the measurement objectives. Many geodetic techniques rely heavily upon the orbital position of the spacecraft, whose accurate determination depends on factors such as the orbital configuration, the gravity fields of the icy moons, and the duration and geometry of tracking. Given the competing science, engineering, and planetary protection demands, orbital accuracy subject to constraints has become a critical mission design issue. Orbit determination simulations and covariance analyses will be used to investigate the achievable accuracies of spacecraft position and geodetic signatures under different orbital and tracking scenarios.
Dynamic adaptive chemistry with operator splitting schemes for reactive flow simulations
NASA Astrophysics Data System (ADS)
Ren, Zhuyin; Xu, Chao; Lu, Tianfeng; Singer, Michael A.
2014-04-01
A numerical technique that uses dynamic adaptive chemistry (DAC) with operator splitting schemes to solve the equations governing reactive flows is developed and demonstrated. Strang-based splitting schemes are used to separate the governing equations into transport fractional substeps and chemical reaction fractional substeps. The DAC method expedites the numerical integration of reaction fractional substeps by using locally valid skeletal mechanisms that are obtained using the directed relation graph (DRG) reduction method to eliminate unimportant species and reactions from the full mechanism. Second-order temporal accuracy of the Strang-based splitting schemes with DAC is demonstrated on one-dimensional, unsteady, freely-propagating, premixed methane/air laminar flames with detailed chemical kinetics and realistic transport. The use of DAC dramatically reduces the CPU time required to perform the simulation, and there is minimal impact on solution accuracy. It is shown that with DAC the starting species and resulting skeletal mechanisms strongly depend on the local composition in the flames. In addition, the number of retained species may be significant only near the flame front region where chemical reactions are significant. For the one-dimensional methane/air flame considered, speed-up factors of three and five are achieved over the entire simulation for GRI-Mech 3.0 and USC-Mech II, respectively. Greater speed-up factors are expected for larger chemical kinetics mechanisms.
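The Strang construction itself can be sketched on a toy linear system: one step of size h for du/dt = (A + B)u is exp(h/2 A) exp(h B) exp(h/2 A), which is second-order accurate when A and B do not commute. This is a generic illustration of the splitting, not the paper's reactive-flow solver, and the matrices are arbitrary toy values:

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via truncated Taylor series (adequate for the
    small-norm matrices used here)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# non-commuting toy "transport" and "reaction" operators
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[-0.5, 0.0], [0.0, -0.1]])
u0 = np.array([1.0, 0.0])

def strang_solve(h, t_end=1.0):
    """Integrate du/dt = (A + B) u to t_end with Strang splitting steps."""
    n = int(round(t_end / h))
    step = expm(0.5 * h * A) @ expm(h * B) @ expm(0.5 * h * A)
    u = u0.copy()
    for _ in range(n):
        u = step @ u
    return u

exact = expm(A + B) @ u0                        # exact solution at t = 1
err1 = np.linalg.norm(strang_solve(0.10) - exact)
err2 = np.linalg.norm(strang_solve(0.05) - exact)
print(err1 / err2)  # ≈ 4: halving h quarters the error (second order)
```

In the paper's setting, the exponentials are replaced by transport and (DAC-reduced) chemistry sub-integrations, but the same error-halving check underlies the demonstrated second-order temporal accuracy.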
Thirumala, Parthasarathy D; Thiagarajan, Karthy; Gedela, Satyanarayana; Crammond, Donald J; Balzer, Jeffrey R
2016-03-01
The 30 day stroke rate following carotid endarterectomy (CEA) ranges from 2% to 6%. Such periprocedural strokes are associated with a three-fold increased risk of mortality. Our primary aim was to determine the diagnostic accuracy of electroencephalogram (EEG) in predicting perioperative strokes through meta-analysis of existing literature. An extensive search for relevant literature was undertaken using the PubMed and Web of Science databases. Studies were included after screening using predetermined criteria. Data were extracted and analyzed. Summary sensitivity, specificity, and diagnostic odds ratio were obtained. Subgroup analysis of studies using eight or more EEG channels was done. The perioperative stroke rate for the cohort of 8765 patients was 1.75%. Pooled sensitivity and specificity of EEG changes in predicting these strokes were 52% (95% confidence interval [CI], 43-61%) and 84% (95% CI, 81-86%), respectively. Summary estimates of the subgroup were similar. The diagnostic odds ratio was 5.85 (95% CI, 3.71-9.22). For the observed stroke rate, the positive likelihood ratio was 3.25 while the negative predictive value was 98.99%. According to these results, patients with perioperative strokes have six times greater odds of experiencing an intraoperative change in EEG during CEA. EEG monitoring was found to be highly specific in predicting perioperative strokes after CEA. Copyright © 2015 Elsevier Ltd. All rights reserved.
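The likelihood ratio and negative predictive value quoted above follow directly from the pooled sensitivity, specificity, and observed stroke rate; a quick check of that arithmetic (values taken from the abstract):

```python
sens, spec, prev = 0.52, 0.84, 0.0175  # pooled sensitivity/specificity, observed stroke rate

lr_pos = sens / (1 - spec)  # positive likelihood ratio
# NPV at the observed stroke rate: P(no stroke | no EEG change), via Bayes' rule
npv = spec * (1 - prev) / ((1 - sens) * prev + spec * (1 - prev))

print(round(lr_pos, 2), round(100 * npv, 2))  # → 3.25 98.99
```

The very high NPV reflects the low baseline stroke rate as much as the test itself.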
Comparative analysis of semantic localization accuracies between adult and pediatric DICOM CT images
NASA Astrophysics Data System (ADS)
Robertson, Duncan; Pathak, Sayan D.; Criminisi, Antonio; White, Steve; Haynor, David; Chen, Oliver; Siddiqui, Khan
2012-02-01
Existing literature describes a variety of techniques for semantic annotation of DICOM CT images, i.e. the automatic detection and localization of anatomical structures. Semantic annotation facilitates enhanced image navigation, linkage of DICOM image content and non-image clinical data, content-based image retrieval, and image registration. A key challenge for semantic annotation algorithms is inter-patient variability. However, while the algorithms described in published literature have been shown to cope adequately with the variability in test sets comprising adult CT scans, the problem presented by the even greater variability in pediatric anatomy has received very little attention. Most existing semantic annotation algorithms can only be extended to work on scans of both adult and pediatric patients by adapting parameters heuristically in light of patient size. In contrast, our approach, which uses random regression forests ('RRF'), learns an implicit model of scale variation automatically using training data. In consequence, anatomical structures can be localized accurately in both adult and pediatric CT studies without the need for parameter adaptation or additional information about patient scale. We show how the RRF algorithm is able to learn scale invariance from a combined training set containing a mixture of pediatric and adult scans. Resulting localization accuracy for both adult and pediatric data remains comparable with that obtained using RRFs trained and tested using only adult data.
Characterization and speciation of mercury-bearing mine wastes using X-ray absorption spectroscopy
Kim, C.S.; Brown, Gordon E.; Rytuba, J.J.
2000-01-01
Mining of mercury deposits located in the California Coast Range has resulted in the release of mercury to the local environment and water supplies. The solubility, transport, and potential bioavailability of mercury are controlled by its chemical speciation, which can be directly determined for samples with total mercury concentrations greater than 100 mg kg-1 (ppm) using X-ray absorption spectroscopy (XAS). This technique has the additional benefits of being non-destructive to the sample, element-specific, relatively sensitive at low concentrations, and requiring minimal sample preparation. In this study, Hg L(III)-edge extended X-ray absorption fine structure (EXAFS) spectra were collected for several mercury mine tailings (calcines) in the California Coast Range. Total mercury concentrations of samples analyzed ranged from 230 to 1060 ppm. Speciation data (mercury phases present and relative abundances) were obtained by comparing the spectra from heterogeneous, roasted (calcined) mine tailings samples with a spectral database of mercury minerals and sorbed mercury complexes. Speciation analyses were also conducted on known mixtures of pure mercury minerals in order to assess the quantitative accuracy of the technique. While some calcine samples were found to consist exclusively of mercuric sulfide, others contain additional, more soluble mercury phases, indicating a greater potential for the release of mercury into solution. Also, a correlation was observed between samples from hot-spring mercury deposits, in which chloride levels are elevated, and the presence of mercury-chloride species as detected by the speciation analysis. The speciation results demonstrate the ability of XAS to identify multiple mercury phases in a heterogeneous sample, with a quantitative accuracy of ±25% for the mercury-containing phases considered.
Use of this technique, in conjunction with standard microanalytical techniques such as X-ray diffraction and electron probe microanalysis, is beneficial in the prioritization and remediation of mercury-contaminated mine sites. (C) 2000 Elsevier Science B.V.
Quantifying the Validity of Routine Neonatal Healthcare Data in the Greater Accra Region, Ghana
Kayode, Gbenga A.; Amoakoh-Coleman, Mary; Brown-Davies, Charles; Grobbee, Diederick E.; Agyepong, Irene Akua; Ansah, Evelyn; Klipstein-Grobusch, Kerstin
2014-01-01
Objectives The District Health Information Management System-2 (DHIMS-2) is the database for storing health service data in Ghana; as in other low- and middle-income countries, paper-based data collection is used by the Ghana Health Service. As the DHIMS-2 database had not been validated before, this study aimed to evaluate its validity. Methods Seven out of ten districts in the Greater Accra Region were randomly sampled; the district hospital and a polyclinic in each district were recruited for validation. Seven pre-specified neonatal health indicators were considered for validation: antenatal registrants, deliveries, total births, live births, stillbirths, low birthweight, and neonatal deaths. Data were extracted on these health indicators from the primary data (hospital paper registers) recorded from January to March 2012. We examined all the data captured during this period, as these data had been uploaded to the DHIMS-2 database. The differences between the values of the health indicators obtained from the primary data and those of the facility and DHIMS-2 database were used to assess the accuracy of the database, while its completeness was estimated by the percentage of missing data in the primary data. Results About 41,000 data values were assessed and, in almost all the districts, the error rates of the DHIMS-2 data were less than 2.1% while the percentages of missing data were below 2%. At the regional level, almost all the health indicators had an error rate below 1%; the overall error rate of the DHIMS-2 database was 0.68% (95% CI = 0.61-0.75) and the percentage of missing data was 3.1% (95% CI = 2.96-3.24). Conclusion This study demonstrated that the percentage of missing data in the DHIMS-2 database was negligible while its accuracy was close to the acceptable range for high quality data. PMID:25144222
2009-01-01
Background The characterisation, or binning, of metagenome fragments is an important first step to further downstream analysis of microbial consortia. Here, we propose a one-dimensional signature, OFDEG, derived from the oligonucleotide frequency profile of a DNA sequence, and show that it is possible to obtain a meaningful phylogenetic signal for relatively short DNA sequences. The one-dimensional signal is essentially a compact representation of higher dimensional feature spaces of greater complexity and is intended to improve on the tetranucleotide frequency feature space preferred by current compositional binning methods. Results We compare the fidelity of OFDEG against tetranucleotide frequency in both an unsupervised and semi-supervised setting on simulated metagenome benchmark data. Four tests were conducted using assembler output of Arachne and phrap, and for each, performance was evaluated on contigs which are greater than or equal to 8 kbp in length and contigs which are composed of at least 10 reads. Using G-C content in conjunction with OFDEG gave an average accuracy of 96.75% (semi-supervised) and 95.19% (unsupervised), versus 94.25% (semi-supervised) and 82.35% (unsupervised) for tetranucleotide frequency. Conclusion We have presented an observation of an alternative characteristic of DNA sequences. The proposed feature representation has proven to be more beneficial than the existing tetranucleotide frequency space for the metagenome binning problem. We do note, however, that our observation of OFDEG deserves further analysis and investigation. Unsupervised clustering revealed that OFDEG-related features performed better than standard tetranucleotide frequency in representing a relevant organism-specific signal. Further improvement in binning accuracy is given by semi-supervised classification using OFDEG.
The emphasis on a feature-driven, bottom-up approach to the problem of binning reveals promising avenues for future development of techniques to characterise short environmental sequences without bias toward cultivable organisms. PMID:19958473
Kim, Jung Hoon; Lee, Jae Young; Baek, Jee Hyun; Eun, Hyo Won; Kim, Young Jae; Han, Joon Koo; Choi, Byung Ihn
2015-02-01
OBJECTIVE. The purposes of this study were to compare the staging accuracy of high-resolution sonography (HRUS) with combined low- and high-MHz transducers with that of conventional sonography for gallbladder cancer and to investigate the differences in the imaging findings of neoplastic and nonneoplastic gallbladder polyps. MATERIALS AND METHODS. Our study included 37 surgically proven gallbladder cancers (T1a = 7, T1b = 2, T2 = 22, T3 = 6), including 15 malignant neoplastic polyps, and 73 surgically proven polyps (neoplastic = 31, nonneoplastic = 42) that underwent HRUS and conventional transabdominal sonography. Two radiologists assessed T category and predefined polyp findings on HRUS and conventional transabdominal sonography. Statistical analyses were performed using chi-square and McNemar tests. RESULTS. The diagnostic accuracy for the T category using HRUS was T1a = 92-95%, T1b = 89-95%, T2 = 78-86%, and T3 = 84-89%, all with good agreement (κ = 0.642). The diagnostic accuracy for differentiating T1 from T2 or greater was 92% and 89% on HRUS versus 65% and 70% with conventional transabdominal sonography. Findings significantly associated with neoplastic polyps included size greater than 1 cm, single lobular surface, vascular core, hypoechoic polyp, and hypoechoic foci (p < 0.05). In the differential diagnosis of gallbladder polyps, HRUS depicted internal echo foci more clearly than conventional transabdominal sonography (39 vs 21). A polyp size greater than 1 cm was independently associated with a neoplastic polyp (odds ratio = 7.5, p = 0.02). The AUC of a polyp size greater than 1 cm was 0.877; the sensitivity and specificity were 66.67% and 89.13%, respectively. CONCLUSION. HRUS is a simple method that enables accurate T categorization of gallbladder carcinoma. It provides high-resolution images of gallbladder polyps and may have a role in stratifying the risk for malignancy.
Des Roches, Carrie A; Mitko, Annette; Kiran, Swathi
2017-01-01
An advantage of rehabilitation administered on computers or tablets is that the tasks can be self-administered and the cueing required to complete the tasks can be monitored. Though there are many types of cueing, few studies have examined how participants' response to rehabilitation is influenced by self-administered cueing, which is cueing that is always available but the individual decides when and which cue to administer. In this study, participants received a tablet-based rehabilitation where the tasks were self-paced and remotely monitored by a clinician. The effectiveness results of this study were published previously (Des Roches et al., 2015). The current study looks at the effect of cues on accuracy and rehabilitation outcomes. Fifty-one individuals with aphasia completed a 10-week program using Constant Therapy on an iPad targeted at improving language and cognitive deficits. Three questions were examined. The first examined the effect of cues on accuracy collapsed across time. Results showed a trend where the greater the cue use, the lower the accuracy, although some participants showed the opposite effect. This analysis divided participants into profiles based on cue use and accuracy. The second question examined how each profile differed in percent cue use and on standardized measures at baseline. Results showed that the four profiles were significantly different in frequency of cues and scores on the WAB-R, CLQT, BNT, and ASHA-FACS, indicating that participants with lower scores on the standardized tests used a higher percentage of cues, which were not beneficial, while participants with higher scores on the standardized tests used a lower frequency of cues, which were beneficial. The third question examined how the relationship between cues and accuracy was affected by the course of treatment.
Results showed that both more and less severe participants showed a decrease in cue use and an increase in accuracy over time, though more severe participants continued to use a greater number of cues. It is possible that self-administered cues help some individuals to access information that is otherwise inaccessible, even if there is not an immediate effect. Ultimately, the results demonstrate the need for individually modifying the levels of assistance during rehabilitation.
Baxter, Suzanne D; Smith, Albert F; Hitchcock, David B; Guinn, Caroline H; Royer, Julie A; Collins, Kathleen L; Smith, Alyssa L; Puryear, Megan P; Vaadi, Kate K; Finney, Christopher J; Miller, Patricia H
2015-09-01
Dietary recall accuracy is related to retention interval (RI) (i.e., time between to-be-reported meals and the interview), and possibly to prompts. To the best of our knowledge, no study has evaluated their combined effect. The combined influence of RI and prompts on children's recall accuracy was investigated in this study. Two RIs [short (prior-24-h recall obtained in afternoon) and long (previous-day recall obtained in morning)] were crossed with 4 prompts [forward (distant-to-recent), meal-name (breakfast, lunch, etc.), open (no instructions), and reverse (recent-to-distant)], creating 8 conditions. Fourth-grade children (n = 480; 50% girls) were randomly selected from consenting children at 10 schools in 4 districts in a southern state during 3 school years (2011-2012, 2012-2013, and 2013-2014). Each child was observed eating school-provided breakfast and lunch, and interviewed one time under 1 of the 8 conditions. Condition assignment was constrained so that each had 60 children (30 girls). Accuracy measures were food-item omission and intrusion rates, and energy correspondence rate and inflation ratio. For each measure, linear models determined effects of RI, prompt, gender, and interactions (2-way, 3-way); race/ethnicity, school year, and district were control variables. RI (P values < 0.015) and prompt (P values < 0.005) were significant for all 4 accuracy measures. RI × prompt (P values < 0.001) was significant for 3 accuracy measures (not intrusion rate). Prompt × gender (P = 0.005) was significant for omission rate. RI × prompt × gender was significant for intrusion rate and inflation ratio (P values < 0.001). For the short vs. long RI across prompts and genders, accuracy was better by 33-50% for each accuracy measure. To obtain the most accurate recalls possible from children, studies should be designed to use a short rather than long RI. 
Prompts affect children's recall accuracy, although the effectiveness of different prompts depends on RI and varies by gender: at a short RI, the choice of prompts has little systematic effect on accuracy, whereas at a long RI, reverse prompts may elicit the most accurate recalls. © 2015 American Society for Nutrition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Yongjun; Lim, Jonghyuck; Kim, Namkug
2013-05-15
Purpose: To investigate the effect of using different computed tomography (CT) scanners on the accuracy of high-resolution CT (HRCT) images in classifying regional disease patterns in patients with diffuse lung disease, support vector machine (SVM) and Bayesian classifiers were applied to multicenter data. Methods: Two experienced radiologists marked sets of 600 rectangular 20 × 20 pixel regions of interest (ROIs) on HRCT images obtained from two scanners (GE and Siemens), including 100 ROIs for each of six local lung patterns: normal lung and five regional pulmonary disease patterns (ground-glass opacity, reticular opacity, honeycombing, emphysema, and consolidation). Each ROI was assessed using 22 quantitative features belonging to one of the following descriptors: histogram, gradient, run-length, gray level co-occurrence matrix, low-attenuation area cluster, and top-hat transform. For automatic classification, a Bayesian classifier and a SVM classifier were compared under three different conditions. First, classification accuracies were estimated using data from each scanner. Next, data from the GE and Siemens scanners were used for training and testing, respectively, and vice versa. Finally, all ROI data were integrated regardless of the scanner type and were then trained and tested together. All experiments were performed based on forward feature selection and fivefold cross-validation with 20 repetitions. Results: For each scanner, better classification accuracies were achieved with the SVM classifier than the Bayesian classifier (92% and 82%, respectively, for the GE scanner; and 92% and 86%, respectively, for the Siemens scanner). The classification accuracies were 82%/72% for training with GE data and testing with Siemens data, and 79%/72% for the reverse.
The use of training and test data obtained from the HRCT images of different scanners lowered the classification accuracy compared to the use of HRCT images from the same scanner. For integrated ROI data obtained from both scanners, the classification accuracies with the SVM and Bayesian classifiers were 92% and 77%, respectively. The selected features resulting from the classification process differed by scanner, with more features included for the classification of the integrated HRCT data than for the classification of the HRCT data from each scanner. For the integrated data, consisting of HRCT images of both scanners, the classification accuracy based on the SVM was statistically similar to the accuracy of the data obtained from each scanner. However, the classification accuracy of the integrated data using the Bayesian classifier was significantly lower than the classification accuracy of the ROI data of each scanner. Conclusions: The use of an integrated dataset along with a SVM classifier rather than a Bayesian classifier has benefits in terms of the classification accuracy of HRCT images acquired with more than one scanner. This finding is of relevance in studies involving large numbers of images, as is the case in a multicenter trial with different scanners.
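The cross-scanner accuracy drop reported above is a general pattern: a classifier trained on features from one scanner degrades when the test features carry a different scanner-dependent offset. A self-contained illustration with synthetic two-class "ROI features" and a minimal Gaussian naive Bayes classifier (the feature values and offset are invented for the demonstration; the study itself used SVM and Bayesian classifiers on 22 texture features):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rois(n_per_class, scanner_offset):
    """Synthetic 2-feature 'ROIs' for two disease patterns; the offset mimics a
    scanner-dependent intensity shift (all values invented for illustration)."""
    x0 = rng.normal([0.0, 0.0], 0.7, (n_per_class, 2)) + scanner_offset
    x1 = rng.normal([2.0, 2.0], 0.7, (n_per_class, 2)) + scanner_offset
    labels = np.array([0] * n_per_class + [1] * n_per_class)
    return np.vstack([x0, x1]), labels

class GaussianNaiveBayes:
    """Minimal Gaussian naive Bayes: per-class feature means and variances."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        return self

    def predict(self, X):
        log_lik = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                          + np.log(2 * np.pi * self.var)).sum(axis=-1)
        return self.classes[log_lik.argmax(axis=1)]

X_a, y_a = make_rois(200, scanner_offset=0.0)  # "scanner A" (training)
X_b, y_b = make_rois(200, scanner_offset=1.2)  # "scanner B" (shifted features)

clf = GaussianNaiveBayes().fit(X_a, y_a)
within_scanner = (clf.predict(X_a) == y_a).mean()
cross_scanner = (clf.predict(X_b) == y_b).mean()
# within_scanner is high; cross_scanner drops because the learned class means
# no longer match the shifted feature distribution.
```

Pooling data from both scanners at training time, as the study's integrated condition does, lets the classifier absorb the offset into its learned class statistics.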
The diagnostic accuracy of multiparametric MRI to determine pediatric brain tumor grades and types.
Koob, Mériam; Girard, Nadine; Ghattas, Badih; Fellah, Slim; Confort-Gouny, Sylviane; Figarella-Branger, Dominique; Scavarda, Didier
2016-04-01
Childhood brain tumors show great histological variability. The goal of this retrospective study was to assess the diagnostic accuracy of multimodal MR imaging (diffusion, perfusion, MR spectroscopy) in the distinction of pediatric brain tumor grades and types. Seventy-six patients (range 1 month to 18 years) with brain tumors underwent multimodal MR imaging. Tumors were categorized by grade (I-IV) and by histological type (A-H). Multivariate statistical analysis was performed to evaluate the diagnostic accuracy of single and combined MR modalities, and of single imaging parameters to distinguish the different groups. The highest diagnostic accuracy for tumor grading was obtained with diffusion-perfusion (73.24%) and for tumor typing with diffusion-perfusion-MR spectroscopy (55.76%). The best diagnostic accuracy was obtained for tumor grading in I and IV and for tumor typing in embryonal tumor and pilocytic astrocytoma. Poor accuracy was seen in other grades and types. ADC and rADC were the best parameters for tumor grading and typing followed by choline level with an intermediate echo time, CBV for grading and Tmax for typing. Multiparametric MR imaging can be accurate in determining tumor grades (primarily grades I and IV) and types (mainly pilocytic astrocytomas and embryonal tumors) in children.
Differences between wavefront and subjective refraction for infrared light.
Teel, Danielle F W; Jacobs, Robert J; Copland, James; Neal, Daniel R; Thibos, Larry N
2014-10-01
To determine the accuracy of objective wavefront refractions for predicting subjective refractions for monochromatic infrared light. Objective refractions were obtained with a commercial wavefront aberrometer (COAS, Wavefront Sciences). Subjective refractions were obtained for 30 subjects with a speckle optometer validated against objective Zernike wavefront refractions on a physical model eye (Teel et al., Design and validation of an infrared Badal optometer for laser speckle, Optom Vis Sci 2008;85:834-42). Both instruments used near-infrared (NIR) radiation (835 nm for COAS, 820 nm for the speckle optometer) to avoid correction for ocular chromatic aberration. A 3-mm artificial pupil was used to reduce complications attributed to higher-order ocular aberrations. For comparison with paraxial (Seidel) and minimum root-mean-square (Zernike) wavefront refractions, objective refractions were also determined for a battery of 29 image quality metrics by computing the correcting lens that optimizes retinal image quality. Objective Zernike refractions were more myopic than subjective refractions for 29 of 30 subjects. The population mean discrepancy was -0.26 diopters (D) (SEM = 0.03 D). Paraxial (Seidel) objective refractions tended to be hyperopically biased (mean discrepancy = +0.20 D, SEM = 0.06 D). Refractions based on retinal image quality were myopically biased for 28 of 29 metrics. The mean bias across all 31 measures was -0.24 D (SEM = 0.03). Myopic bias of objective refractions was greater for eyes with brown irises compared with eyes with blue irises. Our experimental results are consistent with the hypothesis that reflected NIR light captured by the aberrometer originates from scattering sources located posterior to the entrance apertures of cone photoreceptors, near the retinal pigment epithelium. The larger myopic bias for brown eyes suggests that a greater fraction of NIR light is reflected from choroidal melanin in brown eyes compared with blue eyes.
Debnath, Mithu; Iungo, G. Valerio; Ashton, Ryan; ...
2017-02-06
Vertical profiles of 3-D wind velocity are retrieved from triple range-height-indicator (RHI) scans performed with multiple simultaneous scanning Doppler wind lidars. This test is part of the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign carried out at the Boulder Atmospheric Observatory. The three wind velocity components are retrieved and then compared with the data acquired through various profiling wind lidars and high-frequency wind data obtained from sonic anemometers installed on a 300 m meteorological tower. The results show that the magnitude of the horizontal wind velocity and the wind direction obtained from the triple RHI scans are generally retrieved with good accuracy. Furthermore, poor accuracy is obtained for the evaluation of the vertical velocity, which is mainly due to its typically smaller magnitude and to the error propagation connected with the data retrieval procedure and accuracy in the experimental setup.
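Each lidar measures only the projection of the wind vector onto its beam, so retrieving the full 3-D vector from three intersecting beams amounts to solving a small linear system; the abstract's note about poor vertical accuracy reflects the small weight the vertical component carries at shallow elevation angles. A sketch with an invented three-beam geometry (the azimuth/elevation values are illustrative, not the XPIA setup):

```python
import numpy as np

def beam_unit_vector(az_deg, el_deg):
    """Unit pointing vector (east, north, up) for a beam at given azimuth/elevation."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el)])

# Hypothetical geometry: three lidars staring at one point from azimuths 120 deg apart
beams = np.array([beam_unit_vector(a, 30.0) for a in (0.0, 120.0, 240.0)])

u_true = np.array([5.0, 3.0, 0.4])  # true wind vector (east, north, up), m/s
v_los = beams @ u_true              # simulated line-of-sight (radial) measurements

# Retrieval: least squares recovers the wind exactly from noise-free projections
u_est, *_ = np.linalg.lstsq(beams, v_los, rcond=None)

# At 30 deg elevation the vertical component enters each measurement with weight
# sin(30 deg) = 0.5, so line-of-sight noise is amplified in the vertical estimate.
```

With more than three beams the same least-squares retrieval averages down measurement noise, but the elevation-angle geometry still limits vertical accuracy.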
Competitive Deep-Belief Networks for Underwater Acoustic Target Recognition
Shen, Sheng; Yao, Xiaohui; Sheng, Meiping; Wang, Chen
2018-01-01
Underwater acoustic target recognition based on ship-radiated noise belongs to the small-sample-size recognition problems. A competitive deep-belief network is proposed to learn features with more discriminative information from labeled and unlabeled samples. The proposed model consists of four stages: (1) A standard restricted Boltzmann machine is pretrained using a large number of unlabeled data to initialize its parameters; (2) the hidden units are grouped according to categories, which provides an initial clustering model for competitive learning; (3) competitive training and back-propagation algorithms are used to update the parameters to accomplish the task of clustering; (4) by applying layer-wise training and supervised fine-tuning, a deep neural network is built to obtain features. Experimental results show that the proposed method can achieve classification accuracy of 90.89%, which is 8.95% higher than the accuracy obtained by the compared methods. In addition, the highest accuracy of our method is obtained with fewer features than other methods. PMID:29570642
The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1981-01-01
Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometrics standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations, such as modeling error. Improved computations of the Cramer-Rao bound to correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of the use of the Cramer-Rao bound are given. Engineering judgement, aided by such analytical tools, is the final arbiter of accuracy estimation.
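The Cramér-Rao bound discussed above sets a floor on the variance of any unbiased estimator via the inverse of the Fisher information. For a toy linear-in-parameter model with Gaussian noise (not the flight-dynamics model itself), a Monte Carlo check shows the maximum likelihood estimate attaining the bound:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: y_i = theta * x_i + Gaussian noise (illustrative stand-in)
x = np.linspace(0.0, 1.0, 50)
theta_true, sigma = 2.0, 0.3

# Fisher information for theta is sum(x^2) / sigma^2; the bound is its inverse
crb = sigma ** 2 / np.sum(x ** 2)

# Monte Carlo: the ML (least-squares) estimator's variance should sit at the bound
estimates = []
for _ in range(2000):
    y = theta_true * x + rng.normal(0.0, sigma, x.size)
    estimates.append(np.sum(x * y) / np.sum(x * x))
empirical_var = np.var(estimates)
```

The paper's point is that in real flight data, colored noise and modeling error break the assumptions behind this calculation, which is why the corrected bound is needed.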
Task motivation influences alpha suppression following errors.
Compton, Rebecca J; Bissey, Bryn; Worby-Selim, Sharoda
2014-07-01
The goal of the present research is to examine the influence of motivation on a novel error-related neural marker, error-related alpha suppression (ERAS). Participants completed an attentionally demanding flanker task under conditions that emphasized either speed or accuracy or under conditions that manipulated the monetary value of errors. Conditions in which errors had greater motivational value produced greater ERAS, that is, greater alpha suppression following errors compared to correct trials. A second study found that a manipulation of task difficulty did not affect ERAS. Together, the results confirm that ERAS is both a robust phenomenon and one that is sensitive to motivational factors. Copyright © 2014 Society for Psychophysiological Research.
Accuracy of pulse oximetry in children.
Ross, Patrick A; Newth, Christopher J L; Khemani, Robinder G
2014-01-01
For children with cyanotic congenital heart disease or acute hypoxemic respiratory failure, providers frequently make decisions based on pulse oximetry, in the absence of an arterial blood gas. The study objective was to measure the accuracy of pulse oximetry in the saturations from pulse oximetry (SpO2) range of 65% to 97%. This institutional review board-approved prospective, multicenter observational study in 5 PICUs included 225 mechanically ventilated children with an arterial catheter. With each arterial blood gas sample, SpO2 from pulse oximetry and arterial oxygen saturations from CO-oximetry (SaO2) were simultaneously obtained if the SpO2 was ≤ 97%. The lowest SpO2 obtained in the study was 65%. In the range of SpO2 65% to 97%, 1980 simultaneous values for SpO2 and SaO2 were obtained. The bias (SpO2 - SaO2) varied through the range of SpO2 values. The bias was greatest in the SpO2 range 81% to 85% (336 samples, median 6%, mean 6.6%, accuracy root mean squared 9.1%). SpO2 measurements were close to SaO2 in the SpO2 range 91% to 97% (901 samples, median 1%, mean 1.5%, accuracy root mean squared 4.2%). Previous studies on pulse oximeter accuracy in children present a single number for bias. This study identified that the accuracy of pulse oximetry varies significantly as a function of the SpO2 range. Saturations measured by pulse oximetry on average overestimate SaO2 from CO-oximetry in the SpO2 range of 76% to 90%. Better pulse oximetry algorithms are needed for accurate assessment of children with saturations in the hypoxemic range.
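The bias and "accuracy root mean squared" (Arms) statistics reported per SpO2 bin are simple functions of paired oximeter and CO-oximeter readings. A sketch with invented paired values (not the study's data):

```python
import numpy as np

# Invented paired readings in one SpO2 bin (illustrative values only)
spo2 = np.array([82.0, 84.0, 83.0, 85.0, 81.0])  # pulse oximeter
sao2 = np.array([76.0, 78.0, 75.0, 80.0, 74.0])  # CO-oximetry reference

bias = spo2 - sao2                  # positive bias => oximeter overestimates SaO2
arms = np.sqrt(np.mean(bias ** 2))  # "accuracy root mean squared"
print(float(np.mean(bias)), round(float(arms), 2))  # → 6.4 6.48
```

Unlike a single pooled bias figure, computing these statistics bin by bin is what reveals the range-dependent accuracy the study reports.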
Estimating discharge in rivers using remotely sensed hydraulic information
Bjerklie, D.M.; Moller, D.; Smith, L.C.; Dingman, S.L.
2005-01-01
A methodology to estimate in-bank river discharge exclusively from remotely sensed hydraulic data is developed. Water-surface width and maximum channel width measured from 26 aerial and digital orthophotos of 17 single-channel rivers and 41 SAR images of three braided rivers were coupled with channel slope data obtained from topographic maps to estimate the discharge. The standard error of the discharge estimates was within a factor of 1.5-2 (50-100%) of the observed, with the mean estimate accuracy within 10%. This level of accuracy was achieved using calibration functions developed from observed discharge. The calibration functions use reach-specific geomorphic variables, the maximum channel width and the channel slope, to predict a correction factor. The calibration functions are related to channel type. Surface velocity and width information, obtained from a single C-band image obtained by the Jet Propulsion Laboratory's (JPL's) AirSAR, was also used to estimate discharge for a reach of the Missouri River. Without using a calibration function, the estimate accuracy was ±72% of the observed discharge, which is within the expected range of uncertainty for the method. However, using the observed velocity to calibrate the initial estimate improved the estimate accuracy to within ±10% of the observed. Remotely sensed discharge estimates with accuracies reported in this paper could be useful for regional or continental scale hydrologic studies, or in regions where ground-based data are lacking. © 2004 Elsevier B.V. All rights reserved.
ION BEAM FOCUSING MEANS FOR CALUTRON
Backus, J.G.
1959-06-01
An ion beam focusing arrangement for calutrons is described. It provides a virtual focus of origin for the ion beam so that the ions may be withdrawn from an arc plasma of considerable width, providing greater beam current and accuracy. (T.R.H.)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Gregory; Mistrick, Ph.D., Richard; Lee, Eleanor
2011-01-21
We describe two methods which rely on bidirectional scattering distribution functions (BSDFs) to model the daylighting performance of complex fenestration systems (CFS), enabling greater flexibility and accuracy in evaluating arbitrary assemblies of glazing, shading, and other optically-complex coplanar window systems. Two tools within Radiance enable a) efficient annual performance evaluations of CFS, and b) accurate renderings of CFS despite the loss of spatial resolution associated with low-resolution BSDF datasets for inhomogeneous systems. Validation, accuracy, and limitations of the methods are discussed.
Wollaston prism phase-stepping point diffraction interferometer and method
Rushford, Michael C.
2004-10-12
A Wollaston prism phase-stepping point diffraction interferometer for testing a test optic. The Wollaston prism shears light into reference and signal beams, and provides phase stepping at increased accuracy by translating the Wollaston prism in a lateral direction with respect to the optical path. The reference beam produced by the Wollaston prism is directed through a pinhole of a diaphragm to produce a perfect spherical reference wave. The spherical reference wave is recombined with the signal beam to produce an interference fringe pattern of greater accuracy.
Feasibility of developing LSI microcircuit reliability prediction models
NASA Technical Reports Server (NTRS)
Ryerson, C. M.
1972-01-01
In the proposed modeling approach, when any of the essential key factors are not known initially, they can be approximated in various ways with a known impact on the accuracy of the final predictions. For example, on any program where reliability predictions are started at interim stages of project completion, a priori approximate estimates of the key factors are established for making preliminary predictions. Later these are refined for greater accuracy as subsequent program information of a more definitive nature becomes available. Specific steps to develop, validate, and verify these new models are described.
Medication reconciliation in a rural trauma population.
Miller, S Lee; Miller, Stephanie; Balon, Jennifer; Helling, Thomas S
2008-11-01
Medication errors during hospitalization can lead to adverse drug events. Because of preoccupation by health care providers with life-threatening injuries, trauma patients may be particularly prone to medication errors. Medication reconciliation on admission can result in decreased medication errors and adverse drug events in this patient population. The purpose of this study is to determine the accuracy of medication histories obtained on trauma patients by initial health care providers compared to a medication reconciliation process by a designated clinical pharmacist after the patient's admission and secondarily to determine whether trauma-associated factors affected medication accuracy. This was a prospective enrollment study during 13 months in which trauma patients admitted to a Level I trauma center were enrolled in a stepwise medication reconciliation process by the clinical pharmacist. The setting was a rural Level I trauma center. Patients admitted to the trauma service were studied. The intervention was medication reconciliation by a clinical pharmacist. The main outcome measure was accuracy of medication history by initial trauma health care providers compared to a medication reconciliation process by a clinical pharmacist who compared all sources, including telephone calls to pharmacies. Patients taking no medications (whether correctly identified as such or not) were not analyzed in these results. Variables examined included admission medication list accuracy, age, trauma team activation mode, Injury Severity Score, and Glasgow Coma Scale (GCS) score. Two hundred thirty-four patients were enrolled. Eighty-four of 234 patients (36%) had an Injury Severity Score greater than 15. Medications were reconciled within an average of 3 days of admission (range 1 to 8) by the clinical pharmacist. Overall, medications as reconciled by the clinical pharmacist were recorded correctly for 15% of patients. 
Admission trauma team medication lists were inaccurate in 224 of 234 cases (96%). Admitting nurses' lists were more accurate than the trauma team's (11% versus 4%; 95% confidence interval 2.5% to 11.2%). Errors were found by the clinical pharmacist in medication name, strength, route, and frequency. No patients (0/20) with admission GCS less than 13 had accurate medication lists. Seventy of 84 patients (83%) with an Injury Severity Score greater than 15 had inaccurate medication lists. Ten of 234 patients (4%) were ordered wrong medications, and 1 adverse drug event (hypoglycemia) occurred. The median duration of the reconciliation process was 2 days. Only 12% of cases were completed in 1 day, and almost 25% required 3 or more (maximum 8) days. This study showed that medication history recorded on admission was inaccurate. This patient population overall was susceptible to medication inaccuracies from multiple sources, even with duplication of medication histories by initial health care providers. Medication reconciliation for trauma patients by a clinical pharmacist may improve safety and prevent adverse drug events but did not occur quickly in this setting.
NASA Astrophysics Data System (ADS)
Dabove, Paolo; Manzino, Ambrogio Maria
2015-04-01
The use of GPS/GNSS instruments is common practice worldwide at both the commercial and academic research level. Over the last ten years, networks of Continuously Operating Reference Stations (CORSs) have been established to extend precise positioning more than 15 km from the master station. In this context, the Geomatics Research Group of DIATI at the Politecnico di Torino has carried out several experiments to evaluate the precision achievable with different GNSS receivers (geodetic and mass-market) and antennas when a CORSs network is used. This work builds on that research, focusing in particular on the usefulness of single-frequency permanent stations for densifying existing CORSs, especially for monitoring purposes. Two types of CORSs network are available today in Italy: the so-called "regional network" and the "national network", with mean inter-station distances of about 25/30 km and 50/70 km respectively. These distances are adequate for many applications (e.g. mobile mapping) if geodetic instruments are used, but become less so if mass-market instruments are used or if the inter-station distance between master and rover increases. In this context, some innovative GNSS networks were developed and tested, analyzing the performance of rover positioning in terms of quality, accuracy, and reliability in both real-time and post-processing approaches. Single-frequency GNSS receivers impose some limits, chiefly the restricted baseline length and the need to fix the phase ambiguity correctly both for the network and for the rover. These factors play a crucial role in reaching a position of good accuracy (centimetric or better) in a short time and with high reliability. 
The goal of this work is to investigate the real contribution of L1 mass-market permanent stations to the CORSs network for both geodetic and low-cost receivers; in particular, we describe how the products generated by the network (in real time and in post-processing) can improve the accuracy and precision of a rover located 5, 10, and 15 km from the nearest station. Tests were carried out with different types of receivers (geodetic and mass-market) and antennas (patch and geodetic), and with several positioning approaches (static, stop-and-go, and real-time) to make the analysis more complete. Good and interesting results were obtained: the approach will be useful for many types of applications (landslide monitoring, traffic control), especially where the inter-station distances of GNSS permanent stations are greater than 30 km.
Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng
2017-06-20
The linear array push-broom imaging mode is widely used for high-resolution optical satellites (HROS). Using double cameras attached by a high-rigidity support along with push-broom imaging is one method to enlarge the field of view while ensuring high resolution. High-accuracy image mosaicking is the key factor in the geometric quality of the complete stitched satellite imagery. This paper proposes a high-accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The approach uses the concept of the big virtual camera to obtain a stitched image and the corresponding high-accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining the geometric accuracy.
NASA Astrophysics Data System (ADS)
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. 
In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when a comparable numerical accuracy is required.
Outcome Prediction in Mathematical Models of Immune Response to Infection.
Mai, Manuel; Wang, Kun; Huber, Greg; Kirby, Michael; Shattuck, Mark D; O'Hern, Corey S
2015-01-01
Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models for the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e. the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier, increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
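The competition between early prognosis and accuracy that the abstract describes can be reproduced with a toy stand-in. The sketch below is not the paper's models: it uses a single bistable ODE whose basin boundary is perturbed by a patient-to-patient variability parameter v, and a one-dimensional threshold classifier in place of one-versus-all logistic regression; all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x0, a, t_end=8.0, dt=0.01):
    """Euler-integrate the bistable ODE dx/dt = -x(x-a)(x-1)."""
    steps = int(t_end / dt)
    traj = np.empty(steps + 1)
    traj[0] = x = x0
    for i in range(steps):
        x += -x * (x - a) * (x - 1.0) * dt
        traj[i + 1] = x
    return traj

def cohort_accuracy(v, n=200, t_obs=1.0, dt=0.01):
    """Best 1-D threshold accuracy from the state observed at early time t_obs.

    Variability v perturbs the basin boundary a across 'virtual patients',
    standing in for the paper's coefficient-of-variation parameter.
    """
    a = np.clip(0.5 * (1.0 + v * rng.standard_normal(n)), 0.05, 0.95)
    x0 = rng.uniform(0.0, 1.0, n)
    feats = np.empty(n)
    outcome = np.empty(n, dtype=int)
    for i in range(n):
        traj = simulate(x0[i], a[i], dt=dt)
        feats[i] = traj[int(t_obs / dt)]      # early observation
        outcome[i] = int(traj[-1] > 0.5)      # steady-state class
    order = np.argsort(feats)
    f, y = feats[order], outcome[order]
    cands = np.concatenate(([f[0] - 1.0], (f[:-1] + f[1:]) / 2.0, [f[-1] + 1.0]))
    return max(float(np.mean((f > t) == y)) for t in cands)
```

At v = 0 all patients share one basin boundary, the early observation separates the outcomes perfectly, and accuracy is 100%; for v > 0 the basins overlap in the space of early observations and accuracy drops, mirroring the paper's qualitative result.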
Deep Learning Method for Denial of Service Attack Detection Based on Restricted Boltzmann Machine.
Imamverdiyev, Yadigar; Abdullayeva, Fargana
2018-06-01
In this article, the application of the deep learning method based on Gaussian-Bernoulli type restricted Boltzmann machine (RBM) to the detection of denial of service (DoS) attacks is considered. To increase the DoS attack detection accuracy, seven additional layers are added between the visible and the hidden layers of the RBM. Accurate results in DoS attack detection are obtained by optimization of the hyperparameters of the proposed deep RBM model. The form of the RBM that allows application of the continuous data is used. In this type of RBM, the probability distribution of the visible layer is replaced by a Gaussian distribution. Comparative analysis of the accuracy of the proposed method with Bernoulli-Bernoulli RBM, Gaussian-Bernoulli RBM, deep belief network type deep learning methods on DoS attack detection is provided. Detection accuracy of the methods is verified on the NSL-KDD data set. Higher accuracy from the proposed multilayer deep Gaussian-Bernoulli type RBM is obtained.
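For orientation, a Gaussian-Bernoulli RBM can be sketched in a few lines: the visible layer is real-valued with a Gaussian conditional (its conditional mean replaces the usual sigmoid), the hidden layer is Bernoulli, and training uses one step of contrastive divergence (CD-1). This is a minimal single-layer sketch, not the seven-layer deep model of the paper; dimensions and the learning rate are illustrative, and the visible variance is fixed at one.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GaussBernoulliRBM:
    """Gaussian visible units (unit variance), Bernoulli hidden units."""

    def __init__(self, n_vis, n_hid, lr=0.01):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.a = np.zeros(n_vis)   # visible biases
        self.b = np.zeros(n_hid)   # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b)

    def visible_mean(self, h):
        # Gaussian visible layer: conditional mean instead of a sigmoid
        return h @ self.W.T + self.a

    def cd1_step(self, v0):
        """One contrastive-divergence update on a mini-batch v0."""
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        v1 = self.visible_mean(h0)             # mean-field reconstruction
        ph1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.a += self.lr * (v0 - v1).mean(axis=0)
        self.b += self.lr * (ph0 - ph1).mean(axis=0)
        return float(np.mean((v0 - v1) ** 2))  # reconstruction error
```

On data with a strong one-factor structure, the reconstruction error should fall as the weights capture the shared factor; stacking such layers (as the paper does with seven) builds the deep model.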
A high-accuracy optical linear algebra processor for finite element applications
NASA Technical Reports Server (NTRS)
Casasent, D.; Taylor, B. K.
1984-01-01
Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
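The "multiplication by digital convolution" idea can be illustrated numerically: each operand is encoded as a vector of low-radix digits (so each elementary product needs only a small dynamic range), the optical system forms the linear convolution of the digit vectors, and carries are propagated afterward. A minimal software sketch, with an illustrative radix:

```python
def to_digits(n, base):
    """Little-endian digit vector of a non-negative integer n."""
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(r)
    return digits or [0]

def convolve(a, b):
    """Linear convolution of two digit vectors (the optically computed step)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def from_digits(digits, base):
    """Carry-propagate a (possibly un-normalized) digit vector to an integer."""
    total, place = 0, 1
    for d in digits:
        total += d * place
        place *= base
    return total

def multiply(x, y, base=16):
    """Full-precision product via digit encoding + convolution + carries."""
    return from_digits(convolve(to_digits(x, base), to_digits(y, base)), base)
```

With radix 16, each convolution term stays small even though the final product is a full-precision integer, which is the point of the encoding: the analog hardware only ever sees small digit products.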
Towards SSVEP-based, portable, responsive Brain-Computer Interface.
Kaczmarek, Piotr; Salomon, Pawel
2015-08-01
A Brain-Computer Interface in a motion control application requires high system responsiveness and accuracy. An SSVEP interface consisting of 2-8 stimuli and a 2-channel EEG amplifier is presented in this paper. The observed stimulus is recognized based on a canonical correlation calculated in a 1-second window, ensuring high interface responsiveness. A threshold classifier with hysteresis (T-H) was proposed for recognition purposes. The results suggest that the T-H classifier significantly increases classifier performance (yielding an accuracy of 76%, while maintaining an average false-positive detection rate for stimuli other than the observed one of 2-13%, depending on stimulus frequency). It was shown that the parameters of the T-H classifier that maximize the true-positive rate can be estimated by gradient-based search, since a single maximum was observed. Moreover, preliminary results obtained on a test group (N=4) suggest that for the T-H classifier there exists a set of parameters for which the system accuracy is similar to that obtained with a user-trained classifier.
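The threshold-with-hysteresis rule can be sketched as a small state machine: detection of a stimulus switches on when its canonical-correlation score exceeds an upper threshold and switches off only when the score falls below a lower one, which suppresses chatter near a single cut-off. The threshold values below are illustrative, not the paper's fitted parameters:

```python
def hysteresis_classify(scores, t_high, t_low):
    """Per-window detection state for one stimulus from a stream of
    canonical-correlation scores: on above t_high, off below t_low."""
    assert t_low < t_high, "hysteresis needs a gap between thresholds"
    active, states = False, []
    for s in scores:
        if not active and s > t_high:
            active = True
        elif active and s < t_low:
            active = False
        states.append(active)
    return states
```

A score of 0.45 that would flicker around a single 0.5 threshold stays classified as "detected" here, because deactivation requires dropping below the lower threshold.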
Zhang, Xiaoheng; Wang, Lirui; Cao, Yao; Wang, Pin; Zhang, Cheng; Yang, Liuyang; Li, Yongming; Zhang, Yanling; Cheng, Oumei
2018-02-01
Diagnosis of Parkinson's disease (PD) based on speech data has proved to be an effective approach in recent years. However, current research focuses on feature extraction and classifier design and does not consider instance selection. Previous work by the authors showed that instance selection can improve classification accuracy. However, no attention has been paid to the relationship between speech samples and features until now. Therefore, a new PD diagnosis algorithm is proposed in this paper that simultaneously selects speech samples and features, based on a relevant-feature-weighting algorithm and a multiple kernel method, so as to exploit their synergy and thereby improve classification accuracy. Experimental results showed that the proposed algorithm yields a clear improvement in classification accuracy: it obtained a mean classification accuracy of 82.5%, which was 30.5% higher than the related algorithm. Moreover, the proposed algorithm detected synergy effects between speech samples and features, which is valuable for speech marker extraction.
High-pressure liquid chromatography analysis of antibiotic susceptibility disks.
Hagel, R B; Waysek, E H; Cort, W M
1979-01-01
The analysis of antibiotic susceptibility disks by high-pressure liquid chromatography (HPLC) was investigated. Methods are presented for the potency determination of mecillinam, ampicillin, carbenicillin, and cephalothin alone and in various combinations. Good agreement between HPLC and microbiological data is observed for potency determinations with recoveries of greater than 95%. Relative standard deviations of lower than 2% are recorded for each HPLC method. HPLC methods offer improved accuracy and greater precision when compared to the standard microbiological methods of analysis for susceptibility disks. PMID:507793
Willoughby, Karen A; McAndrews, Mary Pat; Rovet, Joanne F
2014-07-01
Autobiographical memory (AM) is a highly constructive cognitive process that often contains memory errors. No study has specifically examined AM accuracy in children with abnormal development of the hippocampus, a crucial brain region for AM retrieval. Thus, the present study investigated AM accuracy in 68 typically and atypically developing children using a staged autobiographical event, the Children's Autobiographical Interview, and structural magnetic resonance imaging. The atypically developing group consisted of 17 children (HYPO) exposed during gestation to insufficient maternal thyroid hormone (TH), a critical substrate for hippocampal development, and 25 children with congenital hypothyroidism (CH), who were compared to 26 controls. Groups differed significantly in the number of accurate episodic details recalled and proportion accuracy scores, with controls having more accurate recollections of the staged event than both TH-deficient groups. Total hippocampal volumes and anterior hippocampal volumes were positively correlated with proportion accuracy scores, but not total accurate episodic details, in HYPO and CH. In addition, greater severity of TH deficiency predicted lower proportion accuracy scores in both HYPO and CH. Overall, these results indicate that children with early TH deficiency have deficits in AM accuracy and that the anterior hippocampus may play a particularly important role in accurate AM retrieval. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
Cost and accuracy of advanced breeding trial designs in apple
Harshman, Julia M; Evans, Kate M; Hardner, Craig M
2016-01-01
Trialing advanced candidates in tree fruit crops is expensive due to the long-term nature of the plantings and the labor-intensive evaluations required to make selection decisions. How closely the trait evaluations approximate the true trait value must be balanced against the cost of the program. Field-trial designs for advanced apple candidates with reduced numbers of locations, years, and harvests per year were modeled to investigate the effect on cost and accuracy in an operational breeding program. The aim was to find designs that would allow evaluation of the most additional candidates while sacrificing the least accuracy. Critical percentage difference, response to selection, and correlated response were used to examine changes in the accuracy of trait evaluations. For the quality traits evaluated, accuracy and response to selection were not substantially reduced for most trial designs. Risk management influences the decision to change trial design, and some designs had greater risk associated with them. Balancing cost and accuracy against risk yields valuable insight into advanced breeding trial design. The methods outlined in this analysis would be well suited to other horticultural crop breeding programs. PMID:27019717
Facial emotion recognition and borderline personality pathology.
Meehan, Kevin B; De Panfilis, Chiara; Cain, Nicole M; Antonucci, Camilla; Soliani, Antonio; Clarkin, John F; Sambataro, Fabio
2017-09-01
The impact of borderline personality pathology on facial emotion recognition has been in dispute; with impaired, comparable, and enhanced accuracy found in high borderline personality groups. Discrepancies are likely driven by variations in facial emotion recognition tasks across studies (stimuli type/intensity) and heterogeneity in borderline personality pathology. This study evaluates facial emotion recognition for neutral and negative emotions (fear/sadness/disgust/anger) presented at varying intensities. Effortful control was evaluated as a moderator of facial emotion recognition in borderline personality. Non-clinical multicultural undergraduates (n = 132) completed a morphed facial emotion recognition task of neutral and negative emotional expressions across different intensities (100% Neutral; 25%/50%/75% Emotion) and self-reported borderline personality features and effortful control. Greater borderline personality features related to decreased accuracy in detecting neutral faces, but increased accuracy in detecting negative emotion faces, particularly at low-intensity thresholds. This pattern was moderated by effortful control; for individuals with low but not high effortful control, greater borderline personality features related to misattributions of emotion to neutral expressions, and enhanced detection of low-intensity emotional expressions. Individuals with high borderline personality features may therefore exhibit a bias toward detecting negative emotions that are not or barely present; however, good self-regulatory skills may protect against this potential social-cognitive vulnerability. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Ramratan, Wendy S; Rabin, Laura A; Wang, Cuiling; Zimmerman, Molly E; Katz, Mindy J; Lipton, Richard B; Buschke, Herman
2012-03-01
Individuals with amnestic mild cognitive impairment (aMCI) show deficits on traditional episodic memory tasks and reductions in speed of performance on reaction time tasks. We present results on a novel task, the Cued-Recall Retrieval Speed Task (CRRST), designed to simultaneously measure level and speed of retrieval. A total of 390 older adults (mean age, 80.2 years), learned 16 words based on corresponding categorical cues. In the retrieval phase, we measured accuracy (% correct) and retrieval speed/reaction time (RT; time from cue presentation to voice onset of a correct response) across 6 trials. Compared to healthy elderly adults (HEA, n = 303), those with aMCI (n = 87) exhibited poorer performance in retrieval speed (difference = -0.13; p < .0001) and accuracy on the first trial (difference = -0.19; p < .0001), and their rate of improvement in retrieval speed was slower over subsequent trials. Those with aMCI also had greater within-person variability in processing speed (variance ratio = 1.22; p = .0098) and greater between-person variability in accuracy (variance ratio = 2.08; p = .0001) relative to HEA. Results are discussed in relation to the possibility that computer-based measures of cued-learning and processing speed variability may facilitate early detection of dementia in at-risk older adults.
Online Information Search Performance and Search Strategies in a Health Problem-Solving Scenario.
Sharit, Joseph; Taha, Jessica; Berkowsky, Ronald W; Profita, Halley; Czaja, Sara J
2015-01-01
Although access to Internet health information can be beneficial, solving complex health-related problems online is challenging for many individuals. In this study, we investigated the performance of a sample of 60 adults ages 18 to 85 years in using the Internet to resolve a relatively complex health information problem. The impact of age, Internet experience, and cognitive abilities on measures of search time, amount of search, and search accuracy was examined, and a model of Internet information seeking was developed to guide the characterization of participants' search strategies. Internet experience was found to have no impact on performance measures. Older participants exhibited longer search times and lower amounts of search but similar search accuracy performance as their younger counterparts. Overall, greater search accuracy was related to an increased amount of search but not to increased search duration and was primarily attributable to higher cognitive abilities, such as processing speed, reasoning ability, and executive function. There was a tendency for those who were younger, had greater Internet experience, and had higher cognitive abilities to use a bottom-up (i.e., analytic) search strategy, although use of a top-down (i.e., browsing) strategy was not necessarily unsuccessful. Implications of the findings for future studies and design interventions are discussed.
McGinley, Jennifer L; Goldie, Patricia A; Greenwood, Kenneth M; Olney, Sandra J
2003-02-01
Physical therapists routinely observe gait in clinical practice. The purpose of this study was to determine the accuracy and reliability of observational assessments of push-off in gait after stroke. Eighteen physical therapists and 11 subjects with hemiplegia following a stroke participated in the study. Measurements of ankle power generation were obtained from subjects following stroke using a gait analysis system. Concurrent videotaped gait performances were observed by the physical therapists on 2 occasions. Ankle power generation at push-off was scored as either normal or abnormal using two 11-point rating scales. These observational ratings were correlated with the measurements of peak ankle power generation. A high correlation was obtained between the observational ratings and the measurements of ankle power generation (mean Pearson r=.84). Interobserver reliability was moderately high (mean intraclass correlation coefficient [ICC (2,1)]=.76). Intraobserver reliability also was high, with a mean ICC (2,1) of .89 obtained. Physical therapists were able to make accurate and reliable judgments of push-off in videotaped gait of subjects following stroke using observational assessment. Further research is indicated to explore the accuracy and reliability of data obtained with observational gait analysis as it occurs in clinical practice.
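The headline statistic here, the Pearson correlation between observational ratings and measured ankle power generation, can be computed directly from its definition; a minimal sketch (the ICC computation, which requires a two-way ANOVA decomposition, is omitted):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)
```

Ratings that track the measured power linearly give r near 1, as in the study's mean r of .84; a perfectly linear relation gives r = 1 exactly.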
An accuracy assessment of Cartesian-mesh approaches for the Euler equations
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A critical assessment of the accuracy of Cartesian-mesh approaches for steady, transonic solutions of the Euler equations of gas dynamics is made. An exact solution of the Euler equations (Ringleb's flow) is used not only to infer the order of the truncation error of the Cartesian-mesh approaches, but also to compare the magnitude of the discrete error directly to that obtained with a structured mesh approach. Uniformly and adaptively refined solutions using a Cartesian-mesh approach are obtained and compared to each other and to uniformly refined structured mesh results. The effect of cell merging is investigated as well as the use of two different K-exact reconstruction procedures. The solution methodology of the schemes is explained and tabulated results are presented to compare the solution accuracies.
A Detailed Evaluation of a Laser Triangulation Ranging System for Mobile Robots
1983-08-01
[Only fragments of this report were extracted, including table-of-contents entries on system accuracy factors, the detector "cone of vision" problem, and the justification for laser triangulation.] The project began in 1968 under a NASA grant and has since undergone many changes in both design goals and implementation. The accuracy of the data obtained by the triangulation system depends on essentially three independent factors.
NASA Astrophysics Data System (ADS)
Tumbur, O.; Safri, Z.; Hassan, R.
2018-03-01
Different types of left ventricular hypertrophy (LVH) geometry are associated with different risks of cardiovascular disease. The purpose of this study was to determine the ability of various ECG voltage criteria for LVH to distinguish the type of LVH geometry. A cross-sectional study was conducted from June to November 2015 on 100 patients at Adam Malik Hospital, Medan. When the Sokolow-Lyon LVH ECG criterion was not met, normal left ventricular geometry was identified with 60% sensitivity, 72.22% specificity, and 71% accuracy. The eccentric type of LVH was identified when the Cornell voltage criterion was not met, with 25% sensitivity, 71.88% specificity, and 55% accuracy. Concentric geometric hypertrophy was identified when the RV6/V5 > 1 ratio was satisfied, with 55.56% sensitivity, 56.36% specificity, and 56% accuracy. When the RV6/V5 > 1 ratio was not met, the concentric remodeling type was identified with 55.56% sensitivity, 49.45% specificity, and 50% accuracy. In conclusion, the various LVH ECG criteria distinguish the type of LVH geometry; the sensitivity and specificity of the Sokolow-Lyon and Cornell voltage criteria are better than those of the RV6/V5 ratio.
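Each sensitivity/specificity/accuracy triple above comes directly from a 2x2 contingency table of criterion result versus echocardiographic geometry. A minimal sketch (the counts below are illustrative, not reconstructed from the study):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and overall accuracy from a 2x2 table."""
    sensitivity = tp / (tp + fn)                   # true-positive rate
    specificity = tn / (tn + fp)                   # true-negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)     # fraction classified correctly
    return sensitivity, specificity, accuracy
```

Note that overall accuracy depends on the prevalence of the condition in the sample, which is why it can sit between, above, or below the sensitivity and specificity.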
NASA Technical Reports Server (NTRS)
Haynie, C. C.
1980-01-01
Simple gage, used with template, can help inspectors determine whether three-dimensional curved surface has correct contour. Gage was developed as aid in explosive forming of Space Shuttle emergency-escape hatch. For even greater accuracy, wedge can be made of metal and calibrated by indexing machine.
Validation of geometric accuracy of Global Land Survey (GLS) 2000 data
Rengarajan, Rajagopalan; Sampath, Aparajithan; Storey, James C.; Choate, Michael J.
2015-01-01
The Global Land Survey (GLS) 2000 data were generated from Geocover™ 2000 data with the aim of producing a global data set of accuracy better than 25 m Root Mean Square Error (RMSE). An assessment and validation of the accuracy of the GLS 2000 data set, and of its co-registration with the Geocover™ 2000 data set, is presented here. Since few global data sets with higher nominal accuracy than the GLS 2000 are available, the data sets were assessed in three tiers. In the first tier, the data were compared with the Geocover™ 2000 data. This comparison provided a means of localizing regions of higher differences. In the second tier, the GLS 2000 data were compared with systematically corrected Landsat-7 scenes that were obtained in a time period when the spacecraft pointing information was extremely accurate. These comparisons localize regions where the data are consistently off, which may indicate regions of higher errors. The third tier consisted of comparing the GLS 2000 data against higher accuracy reference data. The reference data were the Digital Ortho Quads over the United States, orthorectified SPOT data over Australia, and high accuracy check points obtained using triangulation bundle adjustment of Landsat-7 images over selected sites around the world. The study reveals that the geometric errors in Geocover™ 2000 data have been rectified in GLS 2000 data, and that the accuracy of GLS 2000 data can be expected to be better than 25 m RMSE for most of its constituent scenes.
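The 25 m RMSE figure is a horizontal root-mean-square error over check-point offsets between the image and the reference. A minimal sketch of that computation (hypothetical per-point offsets in meters; the function name is illustrative):

```python
import numpy as np

def horizontal_rmse(dx, dy):
    """Horizontal RMSE from per-check-point easting/northing offsets (m)."""
    dx = np.asarray(dx, dtype=float)
    dy = np.asarray(dy, dtype=float)
    # root mean square of the 2-D positional error magnitudes
    return float(np.sqrt(np.mean(dx ** 2 + dy ** 2)))
```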
Stranieri, Andrew; Abawajy, Jemal; Kelarev, Andrei; Huda, Shamsul; Chowdhury, Morshed; Jelinek, Herbert F
2013-07-01
This article addresses the problem of determining optimal sequences of tests for the clinical assessment of cardiac autonomic neuropathy (CAN). We investigate the accuracy of using only one of the recommended Ewing tests to classify CAN and the additional accuracy obtained by adding the remaining tests of the Ewing battery. This is important as not all five Ewing tests can always be applied in each situation in practice. We used a new and unique database from the diabetes screening research initiative project, which is more than ten times larger than the data set used by Ewing in his original investigation of CAN. We utilized decision trees and the optimal decision path finder (ODPF) procedure for identifying optimal sequences of tests. We present experimental results on the accuracy of using each one of the recommended Ewing tests to classify CAN and the additional accuracy that can be achieved by adding the remaining tests of the Ewing battery. We found the best sequences of tests for a cost-function equal to the number of tests. The accuracies achieved by the initial segments of the optimal sequences after one, two, three and four tests are 80.80, 91.33, 93.97 and 94.14 for 2 categories of CAN; 79.86, 89.29, 91.16 and 91.76 for 3 categories; and 78.90, 86.21, 88.15 and 88.93 for 4 categories. They show significant improvement compared to the sequence considered previously in the literature and to the mathematical expectations of the accuracies of a random sequence of tests. The complete outcomes obtained for all subsets of the Ewing features are required for determining optimal sequences of tests for any cost-function with the use of the ODPF procedure. We have also found the two most significant additional features that can increase the accuracy when some of the Ewing attributes cannot be obtained. The outcomes obtained can be used to determine the optimal sequences of tests for each individual cost-function by following the ODPF procedure.
The results show that the best single Ewing test for diagnosing CAN is the deep breathing heart rate variation test. Optimal sequences found for the cost-function equal to the number of tests guarantee that the best accuracy is achieved after any number of tests and provide an improvement in comparison with the previous ordering of tests or a random sequence. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Kanning, G.; Cicolani, L. S.; Schmidt, S. F.
1983-01-01
Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system.
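The paper's filter estimates a full translational state from multiple sensors; as an illustration of the filtering principle only, a minimal scalar Kalman filter for a near-constant state (random-walk process model, with assumed process and measurement noise variances) can be sketched as:

```python
import numpy as np

def kalman_1d(zs, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk state observed with noise variance r."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        p += q                     # predict: state unchanged, variance grows by q
        k = p / (p + r)            # Kalman gain weighs prediction vs measurement
        x += k * (z - x)           # update state toward the new measurement
        p *= (1.0 - k)             # update (shrink) the error variance
        estimates.append(x)
    return np.array(estimates)
```

The gain k settles to a small steady-state value, so the filter behaves like a long moving average; trading q against r is the same accuracy-versus-responsiveness tradeoff the abstract describes between estimation accuracy and real-time feasibility.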
Elkovitch, Natasha; Viljoen, Jodi L; Scalora, Mario J; Ullman, Daniel
2008-01-01
As courts often rely on clinicians when differentiating between sexually abusive youth at a low versus high risk of reoffense, understanding factors that contribute to accuracy in assessment of risk is imperative. The present study built on existing research by examining (1) the accuracy of clinical judgments of risk made after completing risk assessment instruments, (2) whether instrument-informed clinical judgments made with a high degree of confidence are associated with greater accuracy, and (3) the risk assessment instruments and subscales most predictive of clinical judgments. Raters assessed each youth's (n = 166) risk of reoffending after completing the SAVRY and J-SOAP-II. Raters were not able to predict detected cases of either sexual recidivism or nonsexual violent recidivism above chance, and a high degree of rater confidence was not associated with higher levels of accuracy. Total scores on the J-SOAP-II were predictive of instrument-informed clinical judgments of sexual risk, and total scores on the SAVRY of nonsexual risk.
Design considerations and validation of the MSTAR absolute metrology system
NASA Astrophysics Data System (ADS)
Peters, Robert D.; Lay, Oliver P.; Dubovitsky, Serge; Burger, Johan; Jeganathan, Muthu
2004-08-01
Absolute metrology measures the actual distance between two optical fiducials. A number of methods have been employed, including pulsed time-of-flight, intensity-modulated optical beam, and two-color interferometry. The rms accuracy is currently limited to ~5 microns. Resolving the integer number of wavelengths requires a 1-sigma range accuracy of ~0.1 microns. Closing this gap has a large pay-off: the range (length measurement) accuracy can be increased substantially using the unambiguous optical phase. The MSTAR sensor (Modulation Sideband Technology for Absolute Ranging) is a new system for measuring absolute distance, capable of resolving the integer cycle ambiguity of standard interferometers, and making it possible to measure distance with sub-nanometer accuracy. In this paper, we present recent experiments that use dispersed white light interferometry to independently validate the zero-point of the system. We also describe progress towards reducing the size of optics, and stabilizing the laser wavelength for operation over larger target ranges. MSTAR is a general-purpose tool for conveniently measuring length with much greater accuracy than was previously possible, and has a wide range of possible applications.
NASA Astrophysics Data System (ADS)
Blake, Samantha L.; Walker, S. Hunter; Muddiman, David C.; Hinks, David; Beck, Keith R.
2011-12-01
Color Index Disperse Yellow 42 (DY42), a high-volume disperse dye for polyester, was used to compare the capabilities of the LTQ-Orbitrap XL and the LTQ-FT-ICR with respect to mass measurement accuracy (MMA), spectral accuracy, and sulfur counting. The results of this research will be used in the construction of a dye database for forensic purposes; the additional spectral information will increase the confidence in the identification of unknown dyes found in fibers at crime scenes. Initial LTQ-Orbitrap XL data showed MMAs greater than 3 ppm and poor spectral accuracy. Modification of several Orbitrap installation parameters (e.g., deflector voltage) resulted in a significant improvement of the data. The LTQ-FT-ICR and LTQ-Orbitrap XL (after installation parameters were modified) exhibited MMA ≤ 3 ppm, good spectral accuracy (χ2 values for the isotopic distribution ≤ 2), and were correctly able to ascertain the number of sulfur atoms in the compound at all resolving powers investigated for AGC targets of 5.00 × 105 and 1.00 × 106.
Role of interoceptive accuracy in topographical changes in emotion-induced bodily sensations
Jung, Won-Mo; Ryu, Yeonhee; Lee, Ye-Seul; Wallraven, Christian; Chae, Younbyoung
2017-01-01
The emotion-associated bodily sensation map is composed of a specific topographical distribution of bodily sensations to categorical emotions. The present study investigated whether or not interoceptive accuracy was associated with topographical changes in this map following emotion-induced bodily sensations. This study included 31 participants who observed short video clips containing emotional stimuli and then reported their sensations on the body map. Interoceptive accuracy was evaluated with a heartbeat detection task and the spatial patterns of bodily sensations to specific emotions, including anger, fear, disgust, happiness, sadness, and neutral, were visualized using Statistical Parametric Mapping (SPM) analyses. Distinct patterns of bodily sensations were identified for different emotional states. In addition, positive correlations were found between the magnitude of sensation in emotion-specific regions and interoceptive accuracy across individuals. A greater degree of interoceptive accuracy was associated with more specific topographical changes after emotional stimuli. These results suggest that the awareness of one’s internal bodily states might play a crucial role as a required messenger of sensory information during the affective process. PMID:28877218
A new method of differential structural analysis of gamma-family basic parameters
NASA Technical Reports Server (NTRS)
Melkumian, L. G.; Ter-Antonian, S. V.; Smorodin, Y. A.
1985-01-01
The maximum likelihood method is used for the first time to restore the parameters of electron-photon cascades registered on X-ray films. The method permits a structural analysis of the darkening spots of a gamma-quanta family regardless of the degree of gamma-quanta overlap, and yields the maximum admissible accuracies in estimating the energies of the gamma quanta composing a family. The parameter estimation accuracy depends only weakly on the values of the parameters themselves and exceeds by an order of magnitude the accuracies obtained by integral methods.
High Accuracy, Two-Dimensional Read-Out in Multiwire Proportional Chambers
DOE R&D Accomplishments Database
Charpak, G.; Sauli, F.
1973-02-14
In most applications of proportional chambers, especially in high-energy physics, separate chambers are used for measuring different coordinates. In general one coordinate is obtained by recording the pulses from the anode wires around which avalanches have grown. Several methods have been devised for obtaining the position of an avalanche along a wire. In this article a method is proposed which leads to the same range of accuracies and may be preferred in some cases. The problem of accurate measurements for large-size chambers is also discussed.
Mikhailova, E S; Slavutskaya, A V; Gerasimenko, N Yu
2012-08-30
The gender differences in accuracy, reaction time (RT) and amplitude of the early P1 and N1 components of ERPs during recognition of previously memorized objects after their spatial transformation were examined. We used three levels of spatial transformation: a displacement of object details in the radial direction, and a displacement in combination with rotation of the details by ±0° to 45° and ±45° to 90°. The accuracy and RT data showed similar task performance in males and females. The effect of rotation was significantly greater than the effect of simple displacement; accuracy decreased, and RT increased, with the rotation angle in both genders. At the same time, we found significant sex differences in the early stage of visual processing. In males the P1 peak amplitude at the P3/P4 sites increased significantly during the recognition of spatially transformed objects, and the wider the angle of rotation the greater the P1 peak amplitude. In contrast, in females the P1 peak amplitude did not depend on the rotation of figure details. The N1 amplitude revealed no gender differences, although the object transformation evoked somewhat greater changes in the N1 at the O1/O2 sites in females compared to males. The new finding that only males demonstrated sensitivity of the early perceptual stage to the transformation of objects adds information about the neurobiological basis of the different strategies of visual processing used by each gender. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Comparison of temporal to pulmonary artery temperature in febrile patients.
Furlong, Donna; Carroll, Diane L; Finn, Cynthia; Gay, Diane; Gryglik, Christine; Donahue, Vivian
2015-01-01
As a routine part of clinical care, temperature measurement is a key indicator of illness. The criterion standard of temperature measurement is the pulmonary artery catheter thermistor (PAT), but insertion of a PAT carries significant risk to the patient, so a noninvasive method that is accurate and precise is needed. The purpose of this study was to measure the precision and accuracy of 2 commonly used methods of collecting body temperature: the PAT, considered the criterion standard, and the temporal artery thermometer (TAT), in patients with a temperature greater than 100.4°F. This is a repeated-measures design, with each patient with a PAT in the intensive care unit acting as their own control, to investigate the difference between PAT readings and readings from a TAT in the core mode. Accuracy and precision were analyzed. There were 60 subjects, 41 males and 19 females, with a mean age of 60.8 years; 97% (n = 58) were post-cardiac surgery. There was a statistically significant difference between PAT and TAT (101.0°F [SD, 0.5°F] vs 100.5°F [SD, 0.8°F]; bias, -0.49°F; P < .001). Differences in temperature between the 2 methods were clinically significant (ie, >0.9°F) in 15 of 60 cases (25%). No TAT measurement was 0.9°F greater than the corresponding PAT measurement (0%; 95% confidence interval, 0%-6%). These data demonstrate the accuracy of TAT when compared with PAT in those with temperatures of 100.4°F or greater. This study demonstrates that TAT set to core mode is accurate, reading 0.5°F lower than PAT, with 25% variability in the precision of TAT.
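The reported bias (-0.49°F) is the Bland-Altman mean difference between paired readings; the same analysis also yields 95% limits of agreement. A minimal sketch under assumed paired data (the three reading pairs below are invented for illustration):

```python
import numpy as np

def bland_altman(reference, test):
    """Bland-Altman bias and 95% limits of agreement for paired methods."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    diff = test - reference              # per-pair method difference
    bias = float(diff.mean())            # mean difference (bias)
    sd = float(diff.std(ddof=1))         # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```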
Lempert, Henrietta
2016-05-01
Second language (L2) learners often have persistent difficulty with agreement between the number of the subject and the number of the verb. This study tested whether deviant L2 verb number agreement reflects maturational constraints on acquiring new grammatical features or resource limitations that impede access to the representations of L2 grammatical features. L1-Chinese undergraduate students at three age of arrival (AoA) levels were tested for online verb agreement accuracy by completing preambles in three animacy combinations: animate-inanimate [AI; e.g., The officer(s) from the station(s)], inanimate-animate [IA; e.g., The letters from the lawyer(s)], and inanimate-inanimate [II; e.g., The poster(s) from the museum(s)]. AI should be less costly to process than IA or II sequences, because animacy supports the subject in AI but competes with the subject for control of agreement in IA sequences, and is neutralized in II. Agreement accuracy was greater overall for AI than for IA or II, and although an AoA-related increase in erroneous agreement after plural subjects occurred for IA and II, there were no AoA effects for AI. Higher scores on memory tasks were associated with greater agreement accuracy, and the memory tasks significantly predicted variance in erroneous agreement when AoA was partialed out. The fact that even late learners can do verb agreement in the case of AI demonstrates that they can acquire new grammatical features. The greater difficulty with agreement in the case of IA or II than of AI, in conjunction with the results for the memory tasks, supports the resource limitation hypothesis.
Large scale Wyoming transportation data: a resource planning tool
O'Donnell, Michael S.; Fancher, Tammy S.; Freeman, Aaron T.; Ziegler, Abra E.; Bowen, Zachary H.; Aldridge, Cameron L.
2014-01-01
The U.S. Geological Survey Fort Collins Science Center created statewide roads data for the Bureau of Land Management Wyoming State Office using 2009 aerial photography from the National Agriculture Imagery Program. The updated roads data resolves known concerns of omission, commission, and inconsistent representation of map scale, attribution, and ground reference dates which were present in the original source data. To ensure a systematic and repeatable approach of capturing roads on the landscape using on-screen digitizing from true color National Agriculture Imagery Program imagery, we developed a photogrammetry key and quality assurance/quality control protocols. Therefore, the updated statewide roads data will support the Bureau of Land Management’s resource management requirements with a standardized map product representing 2009 ground conditions. The updated Geographic Information System roads data set product, represented at 1:4,000 and +/- 10 meters spatial accuracy, contains 425,275 kilometers within eight attribute classes. The quality control of these products indicated a 97.7 percent accuracy of aspatial information and 98.0 percent accuracy of spatial locations. Approximately 48 percent of the updated roads data was corrected for spatial errors of greater than 1 meter relative to the pre-existing road data. Twenty-six percent of the updated roads involved correcting spatial errors of greater than 5 meters and 17 percent of the updated roads involved correcting spatial errors of greater than 9 meters. The Bureau of Land Management, other land managers, and researchers can use these new statewide roads data set products to support important studies and management decisions regarding land use changes, transportation and planning needs, transportation safety, wildlife applications, and other studies.
Gong, Gordon; Mattevada, Sravan; O'Bryant, Sid E
2014-04-01
Exposure to arsenic causes many diseases. Most Americans in rural areas use groundwater for drinking, which may contain arsenic above the currently allowable level, 10 µg/L. It is cost-effective to estimate groundwater arsenic levels based on data from wells with known arsenic concentrations. We compared the accuracy of several commonly used interpolation methods in estimating arsenic concentrations in >8000 wells in Texas by the leave-one-out cross-validation technique. The correlation coefficient between measured and estimated arsenic levels was greater with inverse distance weighted (IDW) than with kriging Gaussian, kriging spherical or cokriging interpolations when analyzing data from wells in the entire state of Texas (p<0.0001). The correlation coefficient was significantly lower with cokriging than with any other method (p<0.006) for wells in Texas, east Texas or the Edwards aquifer. The correlation coefficient was significantly greater for wells in the southwestern Texas Panhandle than in east Texas, and was higher for wells in the Ogallala aquifer than in the Edwards aquifer (p<0.0001) regardless of interpolation method. In regression analysis, the best models were those in which well depth and/or elevation were entered as covariates, regardless of area/aquifer or interpolation method, and models with IDW were better than those with kriging in every area/aquifer. In conclusion, the accuracy in estimating groundwater arsenic levels depends on both the interpolation method and the wells' geographic distributions and characteristics in Texas. Taking well depth and elevation into regression analysis as covariates significantly increases the accuracy in estimating groundwater arsenic levels in Texas, with IDW in particular. Published by Elsevier Inc.
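IDW interpolation and the leave-one-out check lend themselves to a compact sketch. The function names, synthetic coordinates, and the power-2 weighting below are illustrative choices, not necessarily the study's exact configuration:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted estimate at one query point."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0):                   # query coincides with a known well
        return float(z_known[np.argmin(d)])
    w = 1.0 / d ** power                 # nearer wells get larger weights
    return float(np.sum(w * z_known) / np.sum(w))

def loocv_idw(xy, z, power=2.0):
    """Leave-one-out cross-validation: predict each well from all the others."""
    n = len(z)
    preds = [idw(xy[np.arange(n) != i], z[np.arange(n) != i], xy[i], power)
             for i in range(n)]
    return np.array(preds)
```

Correlating the held-out predictions with the measured concentrations (as the study does) then quantifies how well each interpolation scheme generalizes.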
Skill in Precipitation Forecasting in the National Weather Service.
NASA Astrophysics Data System (ADS)
Charba, Jerome P.; Klein, William H.
1980-12-01
All known long-term records of forecasting performance for different types of precipitation forecasts in the National Weather Service were examined for relative skill and secular trends in skill. The largest upward trends were achieved by local probability of precipitation (PoP) forecasts for the periods 24-36 h and 36-48 h after 0000 and 1200 GMT. Over the last 13 years, the skill of these forecasts has improved at an average rate of 7.2% per 10-year interval. Over the same period, improvement has been smaller in local PoP skill in the 12-24 h range (2.0% per 10 years) and in the accuracy of "Yes/No" forecasts of measurable precipitation. The overall trend in accuracy of centralized quantitative precipitation forecasts of 0.5 in and 1.0 in has been slightly upward at the 0-24 h range and strongly upward at the 24-48 h range. Most of the improvement in these forecasts has been achieved from the early 1970s to the present. Strong upward accuracy trends in all types of precipitation forecasts within the past eight years are attributed primarily to improvements in numerical and statistical centralized guidance forecasts. The skill and accuracy of both measurable and quantitative precipitation forecasts are 35-55% greater during the cool season than during the warm season. Also, the secular rate of improvement of the cool season precipitation forecasts is 50-110% greater than that of the warm season. This seasonal difference in performance reflects the relative difficulty of forecasting the predominantly stratiform precipitation of the cool season and the convective precipitation of the warm season.
Synthesis fidelity and time-varying spectral change in vowels
NASA Astrophysics Data System (ADS)
Assmann, Peter F.; Katz, William F.
2005-02-01
Recent studies have shown that synthesized versions of American English vowels are less accurately identified when the natural time-varying spectral changes are eliminated by holding the formant frequencies constant over the duration of the vowel. A limitation of these experiments has been that vowels produced by formant synthesis are generally less accurately identified than the natural vowels after which they are modeled. To overcome this limitation, a high-quality speech analysis-synthesis system (STRAIGHT) was used to synthesize versions of 12 American English vowels spoken by adults and children. Vowels synthesized with STRAIGHT were identified as accurately as the natural versions, in contrast with previous results from our laboratory showing identification rates 9%-12% lower for the same vowels synthesized using the cascade formant model. Consistent with earlier studies, identification accuracy was not reduced when the fundamental frequency was held constant across the vowel. However, elimination of time-varying changes in the spectral envelope using STRAIGHT led to a greater reduction in accuracy (23%) than was previously found with cascade formant synthesis (11%). A statistical pattern recognition model, applied to acoustic measurements of the natural and synthesized vowels, predicted both the higher identification accuracy for vowels synthesized using STRAIGHT compared to formant synthesis, and the greater effects of holding the formant frequencies constant over time with STRAIGHT synthesis. Taken together, the experiment and modeling results suggest that formant estimation errors and incorrect rendering of spectral and temporal cues by cascade formant synthesis contribute to lower identification accuracy and underestimation of the role of time-varying spectral change in vowels.
Raico Gallardo, Yolanda Natali; da Silva-Olivio, Isabela Rodrigues Teixeira; Mukai, Eduardo; Morimoto, Susana; Sesma, Newton; Cordaro, Luca
2017-05-01
To systematically assess the current dental literature comparing the accuracy of computer-aided implant surgery when using different supporting tissues (tooth, mucosa, or bone). Two reviewers searched PubMed (1972 to January 2015) and the Cochrane Central Register of Controlled Trials (Central) (2002 to January 2015). For the assessment of accuracy, studies were included with the following outcome measures: (i) angle deviation, (ii) deviation at the entry point, and (iii) deviation at the apex. Eight clinical studies from the 1602 articles initially identified met the inclusion criteria for the qualitative analysis. Four studies (n = 599 implants) were evaluated using meta-analysis. The bone-supported guides showed a statistically significantly greater deviation in angle (P < 0.001), at the entry point (P = 0.01), and at the apex (P = 0.001) when compared to the tooth-supported guides. Conversely, when only retrospective studies were analyzed, no significant differences were revealed in the deviations at the entry point and apex. The mucosa-supported guides showed statistically significantly smaller deviations in angle (P = 0.02), at the entry point (P = 0.002), and at the apex (P = 0.04) when compared to the bone-supported guides. Between the mucosa- and tooth-supported guides, there were no statistically significant differences for any of the outcome measures. It can be concluded that the tissue supporting the guide influences the accuracy of computer-aided implant surgery. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Shah, Shabir A; Naqash, Talib Amin; Padmanabhan, T V; Subramanium; Lambodaran; Nazir, Shazana
2014-03-01
The sole objective of the casting procedure is to provide a metallic duplication of missing tooth structure with as great an accuracy as possible. The ability to produce well-fitting castings requires strict adherence to certain fundamentals. A study was undertaken to comparatively evaluate the effect on casting accuracy of subjecting the invested wax patterns to burnout after different time intervals. The effect on casting accuracy of placing a metal ring into a preheated burnout furnace versus using a split ring was also evaluated. The readings obtained were tabulated and subjected to statistical analysis.
Modelling Accuracy of a Car Steering Mechanism with Rack and Pinion and McPherson Suspension
NASA Astrophysics Data System (ADS)
Knapczyk, J.; Kucybała, P.
2016-08-01
Modelling accuracy of a car steering mechanism with a rack and pinion and McPherson suspension is analyzed. Geometrical parameters of the model are described by using the coordinates of centers of spherical joints, directional unit vectors and axis points of revolute, cylindrical and prismatic joints. Modelling accuracy is assumed as the differences between the values of the wheel knuckle position and orientation coordinates obtained using a simulation model and the corresponding measured values. The sensitivity analysis of the parameters on the model accuracy is illustrated by two numerical examples.
Rutvisuttinunt, Wiriya; Chinnawirotpisan, Piyawan; Simasathien, Sriluck; Shrestha, Sanjaya K; Yoon, In-Kyu; Klungthong, Chonticha; Fernandez, Stefan
2013-11-01
Active global surveillance and characterization of influenza viruses are essential for better preparation against possible pandemic events. Obtaining comprehensive information about the influenza genome can improve our understanding of the evolution of influenza viruses and the emergence of new strains, and improve accuracy when designing preventive vaccines. This study investigated the use of deep sequencing by the next-generation sequencing (NGS) Illumina MiSeq Platform to obtain complete genome sequence information from influenza virus isolates. The influenza virus isolates were cultured from 6 acute respiratory clinical specimens collected in Thailand and Nepal. DNA libraries obtained from each viral isolate were mixed and all were sequenced simultaneously. A total of 2.6 Gbases of information was obtained from a cluster density of 455±14 K/mm2, with 95.76% (8,571,655/8,950,724 clusters) of the clusters passing quality control (QC) filters. Approximately 93.7% of all sequences from Read 1 and 83.5% from Read 2 contained high quality sequences that were ≥Q30, a base-calling QC score standard. Alignment analysis identified three seasonal influenza A H3N2 strains, one 2009 pandemic influenza A H1N1 strain and two influenza B strains. The nearly entire genomes of all six virus isolates yielded sequence coverage depths equal to or greater than 600-fold. The MiSeq Platform efficiently identified seasonal influenza A H3N2, 2009 pandemic influenza A H1N1 and influenza B in the DNA library mixtures. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
Essa, Essa; Makki, Nader; Bittenbender, Peter; Capers, Quinn; George, Barry; Rushing, Gregory; Crestanello, Juan; Boudoulas, Konstantinos Dean; Lilly, Scott M
2016-12-01
Assessment of the femoral and iliac arteries is essential prior to transcatheter aortic valve replacement (TAVR). It is critical for establishing candidacy for a femoral approach, and can help predict vascular complications. Although computed tomography angiography (CTA) is the standard imaging modality, it has limitations. This study compared CTA with intravascular ultrasound (IVUS) in patients undergoing TAVR evaluation. Fifteen patients undergoing pre-TAVR coronary angiography and hemodynamic assessment were recruited. Following coronary angiography, patients underwent distal aortography, bilateral iliac and femoral arteriography, and IVUS assessment. Vascular tortuosity, minimum lumen diameter, and cross-sectional area were obtained and the findings were compared with those obtained from CTA. Correlation between IVUS and CTA was strong for minimum luminal diameter (r=0.62). Concordance was also strong between CTA and invasive iliofemoral angiography for assessment of tortuosity (r=0.75). Utilizing Bland-Altman analysis, vessel diameters obtained by IVUS were consistently greater than those obtained by CTA. The angiography and IVUS strategy was associated with a lower overall mean contrast utilization (29 cc vs 100 cc; P<.001), reduced mean radiation exposure (527 mGy vs 998 mGy; P=.045), and no significant difference in mean test duration (13.3 minutes vs 10 minutes; P=.12). For femoral and iliac arterial assessment prior to TAVR, IVUS is a viable alternative to CTA with comparable accuracy, and the potential for less contrast use and less radiation exposure. IVUS is also a valuable adjunct to CTA in patients with borderline femoral access diameters or considerable CTA artifacts.
Mendonca, Derick A; Naidoo, Sybill D; Skolnick, Gary; Skladman, Rachel; Woo, Albert S
2013-07-01
Craniofacial anthropometry by direct caliper measurements is a common method of quantifying the morphology of the cranial vault. New digital imaging modalities including computed tomography and three-dimensional photogrammetry are similarly being used to obtain craniofacial surface measurements. This study sought to compare the accuracy of anthropometric measurements obtained by calipers versus 2 methods of digital imaging. Standard anterior-posterior, biparietal, and cranial index measurements were directly obtained on 19 participants with an age range of 1 to 20 months. Computed tomographic scans and three-dimensional photographs were both obtained on each child within 2 weeks of the clinical examination. Two analysts measured the anterior-posterior and biparietal distances on the digital images. Measures of reliability and bias between the modalities were calculated and compared. Caliper measurements were found to underestimate the anterior-posterior and biparietal distances as compared with those of the computed tomography and the three-dimensional photogrammetry (P < 0.001). Cranial index measurements between the computed tomography and the calipers differed by up to 6%. The difference between the 2 modalities was statistically significant (P = 0.021). The biparietal and cranial index results were similar between the digital modalities, but the anterior-posterior measurement was greater with the three-dimensional photogrammetry (P = 0.002). The coefficients of variation for repeated measures based on the computed tomography and the three-dimensional photogrammetry were 0.008 and 0.007, respectively. In conclusion, measurements based on digital modalities are generally reliable and interchangeable. Caliper measurements lead to underestimation of anterior-posterior and biparietal values compared with digital imaging.
Henderson, Heather A.; Newell, Lisa; Jaime, Mark; Mundy, Peter
2015-01-01
Higher-functioning participants with and without autism spectrum disorder (ASD) viewed a series of face stimuli, made decisions regarding the affect of each face, and indicated their confidence in each decision. Confidence significantly predicted accuracy across all participants, but this relation was stronger for participants with typical development than participants with ASD. In the hierarchical linear modeling analysis, there were no differences in face processing accuracy between participants with and without ASD, but participants with ASD were more confident in their decisions. These results suggest that individuals with ASD have metacognitive impairments and are overconfident in face processing. Additionally, greater metacognitive awareness was predictive of better face processing accuracy, suggesting that metacognition may be a pivotal skill to teach in interventions. PMID:26496991
A new method to obtain ground control points based on SRTM data
NASA Astrophysics Data System (ADS)
Wang, Pu; An, Wei; Deng, Xin-pu; Zhang, Xi
2013-09-01
Ground control points (GCPs) are widely used in remote sensing image registration and geometric correction. Normally, digital raster graphics (DRG) and digital orthophoto maps (DOM) are the major data sources from which GCPs are extracted, but high-accuracy DRG and DOM products are usually costly to obtain; some products are free, yet carry no accuracy guarantee. To balance cost and accuracy, this paper proposes a method for extracting GCPs from SRTM data. The method consists of manual assistance, binarization, resampling, and reshaping. Manual assistance identifies which parts of the SRTM data can serve as GCPs, such as islands or sharp coastlines. A binarization algorithm then retains the shape information of the region while excluding everything else. The binary data are next resampled to the resolution required by the specific application. Finally, the data are reshaped according to the satellite imaging geometry to yield usable GCPs. The proposed method has three advantages. First, it is easy to implement: unlike DRG or DOM data, which can be expensive, SRTM data are freely accessible without restriction. Second, SRTM has a stated accuracy of about 90 m, so GCPs derived from it are of correspondingly good quality. Finally, because SRTM covers nearly all land surfaces between latitudes -60° and +60°, GCPs produced by the method cover most important regions of the world. The method is suited to meteorological satellite imagery and similar applications with relatively modest accuracy requirements. Extensive simulation tests show the method to be convenient and effective.
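The binarize-and-resample step described above can be sketched in a few lines. This is a minimal illustration under our own assumptions (a toy elevation tile, sea level as the threshold, nearest-neighbour resampling), not the authors' implementation:

```python
import numpy as np

def extract_gcp_mask(srtm_tile, sea_level=0.0, out_shape=(64, 64)):
    """Binarize an SRTM elevation tile (land vs. sea) and resample the
    mask to a target resolution -- a sketch of the paper's
    binarize-then-resample step."""
    # Binarization: keep only the land/sea shape, discard elevation detail.
    mask = (srtm_tile > sea_level).astype(np.float32)
    # Nearest-neighbour resampling to the application's resolution.
    rows = (np.arange(out_shape[0]) * srtm_tile.shape[0]) // out_shape[0]
    cols = (np.arange(out_shape[1]) * srtm_tile.shape[1]) // out_shape[1]
    return mask[np.ix_(rows, cols)]

# A toy "island": positive elevation in the centre of a sea-level tile.
tile = np.zeros((128, 128))
tile[48:80, 48:80] = 200.0
gcp_mask = extract_gcp_mask(tile, out_shape=(32, 32))
```

The resulting binary mask preserves the island's outline, which is the shape feature the matching step would lock onto.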
Use of a BOD oxygen probe for estimating primary productivity
Raymond L. Czaplewski; Michael Parker
1973-01-01
The accuracy of a BOD oxygen probe for field measurements of primary production by the light and dark bottle oxygen technique is analyzed. A figure is presented with which to estimate the number of replicate bottles needed to obtain a given accuracy in estimating photosynthetic rates.
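The figure the authors present relates replicate count to estimation accuracy; the textbook sample-size calculation below is the same kind of computation, offered only as a sketch (the values of the variability and the target half-width are our own, not from the paper):

```python
import math

def replicates_needed(sigma, half_width, z=1.96):
    """Number of replicate bottles needed so that the 95% confidence
    half-width of the mean photosynthetic rate is at most half_width,
    given a standard deviation sigma between bottles."""
    return math.ceil((z * sigma / half_width) ** 2)

# Hypothetical example: sd = 0.5 mg O2/L/h, desired half-width 0.2.
n = replicates_needed(sigma=0.5, half_width=0.2)
```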
Demystifying the Clinical Diagnosis of Greater Trochanteric Pain Syndrome in Women.
Ganderton, Charlotte; Semciw, Adam; Cook, Jill; Pizzari, Tania
2017-06-01
To evaluate the diagnostic accuracy of 10 clinical tests that can be used in the diagnosis of greater trochanteric pain syndrome (GTPS) in women, and to compare these clinical tests to magnetic resonance imaging (MRI) findings. Twenty-eight participants with GTPS (mean age ± standard deviation [SD], 49.5 ± 22.0 years) and 18 asymptomatic participants (52.5 ± 22.8 years) were included. A blinded physiotherapist performed 10 pain provocation tests potentially diagnostic for GTPS: palpation of the greater trochanter, the resisted external derotation test, the modified resisted external derotation test, the standard and modified Ober's tests, Patrick's (FABER) test, resisted hip abduction, the single-leg stance test, and the resisted hip internal rotation test. A sample of 16 symptomatic and 17 asymptomatic women undertook a hip MRI scan. Gluteal tendons were evaluated and categorized as no pathology, mild tendinosis, moderate tendinosis/partial tear, or full-thickness tear. Clinical test analyses show high specificity, high positive predictive value, and low to moderate sensitivity and negative predictive value for most clinical tests. All symptomatic and 88% of asymptomatic participants had pathological gluteal tendon changes on MRI, ranging from mild tendinosis to full-thickness tear. The study found Patrick's (FABER) test, palpation of the greater trochanter, resisted hip abduction, and the resisted external derotation test to have the highest diagnostic test accuracy for GTPS. Tendon pathology on MRI is seen in both symptomatic and asymptomatic women.
A Hybrid Approach for the Automated Finishing of Bacterial Genomes
Robins, William P.; Chin, Chen-Shan; Webster, Dale; Paxinos, Ellen; Hsu, David; Ashby, Meredith; Wang, Susana; Peluso, Paul; Sebra, Robert; Sorenson, Jon; Bullard, James; Yen, Jackie; Valdovino, Marie; Mollova, Emilia; Luong, Khai; Lin, Steven; LaMay, Brianna; Joshi, Amruta; Rowe, Lori; Frace, Michael; Tarr, Cheryl L.; Turnsek, Maryann; Davis, Brigid M; Kasarskis, Andrew; Mekalanos, John J.; Waldor, Matthew K.; Schadt, Eric E.
2013-01-01
Dramatic improvements in DNA sequencing technology have revolutionized our ability to characterize most genomic diversity. However, accurate resolution of large structural events has remained challenging due to the comparatively shorter read lengths of second-generation technologies. Emerging third-generation sequencing technologies, which yield markedly increased read length on rapid time scales and for low cost, have the potential to address assembly limitations. Here we combine sequencing data from second- and third-generation DNA sequencing technologies to assemble the two-chromosome genome of a recent Haitian cholera outbreak strain into two nearly finished contigs at > 99.9% accuracy. Complex regions with clinically significant structure were completely resolved. In separate control assemblies on experimental and simulated data for the canonical N16961 reference we obtain 14 and 8 scaffolds greater than 1kb, respectively, correcting several errors in the underlying source data. This work provides a blueprint for the next generation of rapid microbial identification and full-genome assembly. PMID:22750883
Efficient Calculation of Exact Exchange Within the Quantum Espresso Software Package
NASA Astrophysics Data System (ADS)
Barnes, Taylor; Kurth, Thorsten; Carrier, Pierre; Wichmann, Nathan; Prendergast, David; Kent, Paul; Deslippe, Jack
Accurate simulation of condensed matter at the nanoscale requires careful treatment of the exchange interaction between electrons. In the context of plane-wave DFT, these interactions are typically represented through the use of approximate functionals. Greater accuracy can often be obtained through the use of functionals that incorporate some fraction of exact exchange; however, evaluation of the exact exchange potential is often prohibitively expensive. We present an improved algorithm for the parallel computation of exact exchange in Quantum Espresso, an open-source software package for plane-wave DFT simulation. Through the use of aggressive load balancing and on-the-fly transformation of internal data structures, our code exhibits speedups of approximately an order of magnitude for practical calculations. Additional optimizations are presented targeting the many-core Intel Xeon-Phi "Knights Landing" architecture, which largely powers NERSC's new Cori system. We demonstrate the successful application of the code to difficult problems, including simulation of water at a platinum interface and computation of the X-ray absorption spectra of transition metal oxides.
New results and techniques in space radio astronomy.
NASA Technical Reports Server (NTRS)
Alexander, J. K.
1971-01-01
The methods and results of early space radio astronomy experiments are reviewed, with emphasis on the RAE 1 spacecraft, which was designed specifically and exclusively for radio astronomical studies. The RAE 1 carries two gravity-gradient-stabilized 229-m traveling-wave V-antennas, a 37-m dipole antenna, and a number of radiometer systems to provide measurements over the 0.2 to 9.2 MHz frequency range with a time resolution of 0.5 sec and an absolute accuracy of plus or minus 25%. Observations of solar bursts at frequencies down to 0.2 MHz provide new information on the density, plasma velocity, and dynamics of coronal streamers out to distances greater than 50 solar radii. New information on the distribution of the ionized component of the interstellar medium is being obtained from galactic continuum background maps at frequencies around 4 MHz. Cosmic noise background spectra measured down to 0.5 MHz produce new estimates on the interstellar flux of cosmic rays, on magnetic fields in the galactic halo, and on distant extragalactic radio sources.
Pandey, Shilpa; Hakky, Michael; Kwak, Ellie; Jara, Hernan; Geyer, Carl A; Erbay, Sami H
2013-05-01
Neurovascular imaging studies are routinely used for the assessment of headaches and changes in mental status, stroke workup, and evaluation of the arteriovenous structures of the head and neck. These imaging studies are being performed with greater frequency as the aging population continues to increase. Magnetic resonance (MR) angiographic imaging techniques are helpful in this setting. However, mastering these techniques requires an in-depth understanding of the basic principles of physics, complex flow patterns, and the correlation of MR angiographic findings with conventional MR imaging findings. More than one imaging technique may be used to solve difficult cases, with each technique contributing unique information. Unfortunately, incorporating findings obtained with multiple imaging modalities may add to the diagnostic challenge. To ensure diagnostic accuracy, it is essential that the radiologist carefully evaluate the details provided by these modalities in light of basic physics principles, the fundamentals of various imaging techniques, and common neurovascular imaging pitfalls. ©RSNA, 2013.
Intentionality in Healing--The Voices of Men in Nursing: A Grounded Theory Investigation.
Zahourek, Rothlyn P
2015-12-01
The purpose of this study was to evaluate and potentially modify or expand a previously developed theory, Intentionality: The Matrix of Healing (IMH), using a sample of men in nursing. A modified grounded theory approach, as described by Chen and Boore (2009) and by Amsteus (2014), was used. Twelve men in nursing were recruited. Each was interviewed at least once, and their feedback was solicited to determine the accuracy of interpretation. Results were compared and contrasted with those obtained from the earlier research with six female nurses and their patients. Both groups viewed intentionality as different from, and greater than, intention. Intentionality reflects the whole person's values, goals, and experiences. The men emphasized the importance of reflective spiritual practices, developing self-awareness, being aware of the stress experienced by males in a predominantly female profession, and the role of action in manifesting intentionality in healing. The theory is substantiated with minor changes in emphases. Further study is warranted to expand the understanding of this basic concept in nursing and healing. © The Author(s) 2015.
Prediction of sweetness and amino acid content in soybean crops from hyperspectral imagery
NASA Astrophysics Data System (ADS)
Monteiro, Sildomar Takahashi; Minekawa, Yohei; Kosugi, Yukio; Akazawa, Tsuneya; Oda, Kunio
Hyperspectral image data provides a powerful tool for non-destructive crop analysis. This paper investigates a hyperspectral image data-processing method to predict the sweetness and amino acid content of soybean crops. Regression models based on artificial neural networks were developed in order to calculate the level of sucrose, glucose, fructose, and nitrogen concentrations, which can be related to the sweetness and amino acid content of vegetables. A performance analysis was conducted comparing regression models obtained using different preprocessing methods, namely, raw reflectance, second derivative, and principal components analysis. This method is demonstrated using high-resolution hyperspectral data of wavelengths ranging from the visible to the near infrared acquired from an experimental field of green vegetable soybeans. The best predictions were achieved using a nonlinear regression model of the second derivative transformed dataset. Glucose could be predicted with greater accuracy, followed by sucrose, fructose and nitrogen. The proposed method provides the possibility to provide relatively accurate maps predicting the chemical content of soybean crop fields.
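The second-derivative preprocessing that yielded the paper's best regression models can be sketched with a simple finite-difference transform; the regression network itself is omitted, and the toy spectrum below is our own assumption, not real soybean reflectance data:

```python
import numpy as np

def second_derivative(spectra, wavelengths):
    """Second-derivative preprocessing of reflectance spectra: a plain
    numerical gradient applied twice along the spectral axis. This is
    the transform reported to give the best predictions, sketched with
    numpy rather than the authors' exact implementation."""
    d1 = np.gradient(spectra, wavelengths, axis=-1)
    return np.gradient(d1, wavelengths, axis=-1)

wl = np.linspace(400.0, 1000.0, 301)          # visible to near-infrared, nm
refl = np.exp(-((wl - 700.0) / 60.0) ** 2)    # toy absorption-like band
d2 = second_derivative(refl[None, :], wl)[0]
```

The second derivative is strongly negative at the band centre (maximum curvature of the peak), which is why the transform sharpens absorption features before regression.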
Determination of patulin in commercial apple juice by micellar electrokinetic chromatography.
Murillo, M; González-Peñas, E; Amézqueta, S
2008-01-01
A novel and validated micellar electrokinetic capillary chromatography (MEKC) method with ultraviolet (UV) detection has been applied to the quantitative analysis of patulin (PAT) in commercial apple juice. Patulin was extracted from samples with an ethyl acetate solution. The MEKC parameters studied for method optimization were buffer composition, voltage, and temperature, with the aim of achieving baseline separation between the PAT and 5-hydroxymethylfurfural (HMF) peaks, HMF being the main interference in apple juice PAT analysis. The method passed a series of validation tests including selectivity, linearity, limits of detection and quantification (0.7 and 2.5 µg L⁻¹, respectively), precision (within- and between-day variability), recovery (80.2%, RSD = 4%), accuracy, and robustness. The method was successfully applied to the measurement of 20 apple juice samples obtained from different supermarkets. One hundred percent of the samples were contaminated at a level greater than the limit of detection, with mean and median values of 41.3 and 35.7 µg L⁻¹, respectively.
Extended census transform histogram for land-use scene classification
NASA Astrophysics Data System (ADS)
Yuan, Baohua; Li, Shijin
2017-04-01
With the popular use of high-resolution satellite images, more and more research efforts have been focused on land-use scene classification. In scene classification, effective visual features can significantly boost the final performance. As a typical texture descriptor, the census transform histogram (CENTRIST) has emerged as a very powerful tool due to its effective representation ability. However, the most prominent limitation of CENTRIST is its small spatial support area, which may not necessarily be adept at capturing the key texture characteristics. We propose an extended CENTRIST (eCENTRIST), which is made up of three subschemes in a greater neighborhood scale. The proposed eCENTRIST not only inherits the advantages of CENTRIST but also encodes the more useful information of local structures. Meanwhile, multichannel eCENTRIST, which can capture the interactions from multichannel images, is developed to obtain higher categorization accuracy rates. Experimental results demonstrate that the proposed method can achieve competitive performance when compared to state-of-the-art methods.
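The census transform underlying CENTRIST can be written compactly. The 3x3 base case below (the bit ordering is our own choice) is what eCENTRIST generalizes to larger neighbourhood scales; CENTRIST itself is then the 256-bin histogram of these codes:

```python
import numpy as np

def census_transform(img):
    """Census transform underlying CENTRIST: each interior pixel becomes
    an 8-bit code, one bit per 3x3 neighbour, set when the neighbour's
    intensity does not exceed the centre pixel's. A minimal sketch, not
    the authors' multichannel eCENTRIST implementation."""
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.int64)
    # Neighbours visited top-left to bottom-right, MSB first.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for dy, dx in offsets:
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes = codes * 2 + (neigh <= centre)
    return codes

img = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.float64)
code = census_transform(img)  # single interior pixel, centre value 5
```

For the toy 3x3 image the four neighbours above and left of the centre are smaller (bits set) and the four below and right are larger (bits clear), giving the code 11110000b = 240.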
NASA Technical Reports Server (NTRS)
Allred, J. W.; Fleck, V. J.
1992-01-01
A new lightweight Rotary Balance System is presently being fabricated and installed as part of a major upgrade to the existing 20 Foot Vertical Spin Tunnel. This upgrade to improve model testing productivity of the only free spinning vertical wind tunnel includes a modern fan/drive and tunnel control system, an updated video recording system, and the new rotary balance system. The rotary balance is a mechanical apparatus which enables the measurement of aerodynamic force and moment data under spinning conditions (100 rpm). This data is used in spin analysis and is vital to the implementation of large amplitude maneuvering simulations required for all new high performance aircraft. The new rotary balance system described in this report will permit greater test efficiency and improved data accuracy. Rotary Balance testing with the model enclosed in a tare bag can also be performed to obtain resulting model forces from the spinning operation. The rotary balance system will be stored against the tunnel sidewall during free flight model testing.
Experimental identification of closely spaced modes using NExT-ERA
NASA Astrophysics Data System (ADS)
Hosseini Kordkheili, S. A.; Momeni Massouleh, S. H.; Hajirezayi, S.; Bahai, H.
2018-01-01
This article presents a study on the capability of the time-domain operational modal analysis (OMA) method NExT-ERA to identify closely spaced structural dynamic modes. A survey of the literature reveals that few experimental studies have specifically examined the effectiveness of the NExT-ERA methodology for closely spaced modes. In this paper we present the formulation of NExT-ERA. This formulation is then implemented in an algorithm and an in-house code that identifies the modal parameters of different systems from their time history data. Numerical models are first investigated to validate the code. Two case studies are then presented: a plate with closely spaced modes and a pulley ring whose repeated modes are still more closely spaced. Both structures were excited by random impulses under laboratory conditions. The resulting acceleration time responses were used as input to the developed code to extract the modal parameters of the structures. The accuracy of the results is checked against those obtained from experimental tests.
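The ERA half of NExT-ERA can be sketched as a Hankel-matrix realization. The minimal single-channel version below is our own simplification with a hypothetical 5 Hz mode; it recovers the modal frequency from a free-decay-like sequence, which in NExT would be a cross-correlation function of ambient responses:

```python
import numpy as np

def era_frequencies(y, dt, order=2, rows=20, cols=20):
    """Minimal Eigensystem Realization Algorithm (ERA) sketch: Hankel
    matrices built from a free-decay (or NExT correlation) sequence,
    SVD truncation to `order`, and eigenvalues of the realized state
    matrix converted to natural frequencies in Hz."""
    H0 = np.array([[y[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[y[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, sr, Vr = U[:, :order], s[:order], Vt[:order, :].T
    Sinv = np.diag(1.0 / np.sqrt(sr))
    A = Sinv @ Ur.T @ H1 @ Vr @ Sinv          # discrete-time state matrix
    lam = np.linalg.eigvals(A)
    return np.abs(np.log(lam) / dt) / (2 * np.pi)

dt = 0.01
t = np.arange(100) * dt
f_true = 5.0                                   # hypothetical 5 Hz mode
y = np.exp(-0.05 * 2 * np.pi * f_true * t) * np.cos(2 * np.pi * f_true * t)
f_est = era_frequencies(y, dt)
```

With order 2 the realization returns a complex-conjugate pole pair, so both estimated frequencies equal the 5 Hz natural frequency of the toy mode; closely spaced modes would require a higher model order and are where the identification becomes delicate.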
NASA Astrophysics Data System (ADS)
Podestá, R.; Pacheco, A. M.; Alvis Rojas, H.; Quinteros, J.; Podestá, F.; Albornoz, E.; Navarro, A.; Luna, M.
2018-01-01
This work describes the strategy followed for the co-location of the Satellite Laser Ranging (SLR) ILRS 7406 telescope and the antenna of the permanent Global Positioning System (GPS) station located at the Félix Aguilar Astronomical Observatory (OAFA) in San Juan, Argentina. The co-location involved the design, construction, measurement, adjustment, and compensation of a geodetic network between the SLR and GPS stations, anchored to support points solidly built into the ground. The co-location allows the coordinates of the station to be obtained by combining the data of both the SLR and GPS techniques, achieving greater accuracy than either technique alone. The International Earth Rotation and Reference Systems Service (IERS) considers co-located stations the most valuable and important points for the maintenance of terrestrial reference systems and their connection with celestial ones. The 3 mm precision required by the IERS has been successfully achieved.
NASA Astrophysics Data System (ADS)
Zhang, Ka; Sheng, Yehua; Wang, Meizhen; Fu, Suxia
2018-05-01
The traditional multi-view vertical line locus (TMVLL) matching method is an object-space-based method that is commonly used to directly acquire spatial 3D coordinates of ground objects in photogrammetry. However, the TMVLL method can only obtain one elevation and lacks an accurate means of validating the matching results. In this paper, we propose an enhanced multi-view vertical line locus (EMVLL) matching algorithm based on positioning consistency for aerial or space images. The algorithm involves three components: confirming candidate pixels of the ground primitive in the base image, multi-view image matching based on the object space constraints for all candidate pixels, and validating the consistency of the object space coordinates with the multi-view matching result. The proposed algorithm was tested using actual aerial images and space images. Experimental results show that the EMVLL method successfully solves the problems associated with the TMVLL method, and has greater reliability, accuracy and computing efficiency.
Automatic load forecasting. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, D.J.; Vemuri, S.
A method which lends itself to on-line forecasting of hourly electric loads is presented and the results of its use are compared to models developed using the Box-Jenkins method. The method consists of processing the historical hourly loads with a sequential least-squares estimator to identify a finite-order autoregressive model, which in turn is used to obtain a parsimonious autoregressive-moving average model. A procedure is also defined for incorporating temperature as a variable to improve forecasts where loads are temperature dependent. The method presented has several advantages in comparison to the Box-Jenkins method, including much less human intervention and improved model identification. The method has been tested using three-hourly data from the Lincoln Electric System, Lincoln, Nebraska. In the exhaustive analyses performed on this data base, this method produced significantly better results than the Box-Jenkins method. The method also proved to be more robust, in that greater confidence could be placed in the accuracy of models based upon the various measures available at the identification stage.
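The identification step, fitting a finite-order autoregressive model to historical loads by least squares, can be illustrated as follows. This batch ordinary-least-squares sketch stands in for the report's sequential estimator, and the synthetic series is our own:

```python
import numpy as np

def fit_ar_least_squares(y, order):
    """Fit an autoregressive model y[t] = a1*y[t-1] + ... + ap*y[t-p]
    by ordinary least squares -- a batch stand-in for the report's
    sequential least-squares identification step."""
    rows = [y[t - order:t][::-1] for t in range(order, len(y))]
    X = np.array(rows)
    target = y[order:]
    coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coeffs

# Synthetic "load" series: a known AR(2) process, noise-free for clarity.
a_true = np.array([1.2, -0.4])
y = np.zeros(60)
y[0], y[1] = 1.0, 1.5
for t in range(2, 60):
    y[t] = a_true[0] * y[t - 1] + a_true[1] * y[t - 2]
a_hat = fit_ar_least_squares(y, 2)
```

A sequential (recursive) estimator updates the same coefficients one observation at a time, which is what makes the approach suitable for on-line forecasting.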
NASA Astrophysics Data System (ADS)
Gao, Yan; Marpu, Prashanth; Morales Manila, Luis M.
2014-11-01
This paper assesses the suitability of 8-band WorldView-2 (WV2) satellite data and an object-based random forest algorithm for the classification of avocado growth stages in Mexico. We tested both pixel-based classification, with minimum distance (MD) and maximum likelihood (MLC), and object-based classification with the random forest (RF) algorithm. Training samples and verification data were selected by visually interpreting the WV2 images for seven thematic classes: fully grown, middle-stage, and early-stage avocado crops; bare land; two types of natural forest; and water. To examine the contribution of the four new spectral bands of the WV2 sensor, all classifications were carried out both with and without them. Accuracy assessment shows that object-based classification with the RF algorithm obtained higher overall accuracy (93.06%) than the pixel-based MD (69.37%) and MLC (64.03%) methods. For both pixel-based and object-based methods, the classifications with the four new spectral bands obtained higher accuracy than those without (object-based RF: 93.06% vs 83.59%; pixel-based MD: 69.37% vs 67.2%; pixel-based MLC: 64.03% vs 36.05%), suggesting that the four new spectral bands of the WV2 sensor contributed to the increase in classification accuracy.
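The pixel-based minimum-distance (MD) baseline used in the comparison is straightforward to sketch; the 4-band toy data below are our own, not WV2 imagery:

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Pixel-based minimum-distance classifier, one baseline in the
    study: each pixel is assigned to the class whose spectral mean is
    nearest in Euclidean distance. A sketch on toy data."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)

# Two hypothetical classes in a 4-band feature space.
means = np.array([[0.1, 0.2, 0.3, 0.4],
                  [0.5, 0.6, 0.7, 0.8]])
pixels = np.array([[0.12, 0.18, 0.31, 0.42],   # near class 0
                   [0.52, 0.61, 0.68, 0.79]])  # near class 1
labels = minimum_distance_classify(pixels, means)
```

Because MD ignores class covariance and object context, it is the weakest of the three methods compared, which is consistent with the accuracies reported above.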
Bias in estimating accuracy of a binary screening test with differential disease verification
Brinton, John T.; Ringham, Brandy M.; Glueck, Deborah H.
2011-01-01
Sensitivity, specificity, and positive and negative predictive value are typically used to quantify the accuracy of a binary screening test. In some studies it may not be ethical or feasible to obtain definitive disease ascertainment for all subjects using a gold standard test. When a gold standard test cannot be used, an imperfect reference test that is less than 100% sensitive and specific may be used instead. In breast cancer screening, for example, follow-up for cancer diagnosis is used as an imperfect reference test for women in whom it is not possible to obtain gold standard results. This incomplete ascertainment of true disease, or differential disease verification, can result in biased estimates of accuracy. In this paper, we derive the apparent accuracy values for studies subject to differential verification. We determine how the bias is affected by the accuracy of the imperfect reference test, the percentage of subjects who receive the imperfect reference test rather than the gold standard, the prevalence of the disease, and the correlation between the results of the screening test and the imperfect reference test. It is shown that designs with differential disease verification can yield biased estimates of accuracy. Estimates of sensitivity in cancer screening trials may be substantially biased. However, careful design decisions, including selection of the imperfect reference test, can help to minimize bias. A hypothetical breast cancer screening study is used to illustrate the problem. PMID:21495059
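The flavor of the bias can be seen with the standard conditional-independence formula for apparent sensitivity measured against an imperfect reference. This is an illustrative special case, not the paper's full derivation (which also treats correlated errors and partial verification):

```python
def apparent_sensitivity(prev, se_s, sp_s, se_r, sp_r):
    """Apparent sensitivity of a screening test when judged against an
    imperfect reference rather than a gold standard, assuming the two
    tests err independently given true disease status.
    prev: disease prevalence; se_s/sp_s: screening test sensitivity and
    specificity; se_r/sp_r: reference test sensitivity and specificity."""
    # P(screen+ and reference+), marginalizing over true disease status.
    joint_pos = prev * se_s * se_r + (1 - prev) * (1 - sp_s) * (1 - sp_r)
    # P(reference+).
    ref_pos = prev * se_r + (1 - prev) * (1 - sp_r)
    return joint_pos / ref_pos

# A perfectly sensitive screen (se_s = 1.0) already appears imperfect
# once the reference misses 10% of true cases.
biased = apparent_sensitivity(prev=0.1, se_s=1.0, sp_s=0.9,
                              se_r=0.9, sp_r=0.95)
```

With these illustrative values the apparent sensitivity drops to 0.70 even though the screen's true sensitivity is 1.0, showing how an imperfect reference alone can substantially bias the estimate.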
Locketz, Garrett D; Li, Peter M M C; Fischbein, Nancy J; Holdsworth, Samantha J; Blevins, Nikolas H
2016-10-01
A method to optimize imaging of cholesteatoma by combining the strengths of available modalities will improve diagnostic accuracy and help to target treatment. To assess whether fusing Periodically Rotated Overlapping Parallel Lines With Enhanced Reconstruction (PROPELLER) diffusion-weighted magnetic resonance imaging (DW-MRI) with corresponding temporal bone computed tomography (CT) images could increase cholesteatoma diagnostic and localization accuracy across 6 distinct anatomical regions of the temporal bone. Case series and preliminary technology evaluation of adults with preoperative temporal bone CT and PROPELLER DW-MRI scans who underwent surgery for clinically suggested cholesteatoma at a tertiary academic hospital. When cholesteatoma was encountered surgically, the precise location was recorded in a diagram of the middle ear and mastoid. For each patient, the 3 image data sets (CT, PROPELLER DW-MRI, and CT-MRI fusion) were reviewed in random order for the presence or absence of cholesteatoma by an investigator blinded to operative findings. If cholesteatoma was deemed present on review of each imaging modality, the location of the lesion was mapped presumptively. Image analysis was then compared with surgical findings. Twelve adults (5 women and 7 men; median [range] age, 45.5 [19-77] years) were included. The use of CT-MRI fusion had greater diagnostic sensitivity (0.88 vs 0.75), positive predictive value (0.88 vs 0.86), and negative predictive value (0.75 vs 0.60) than PROPELLER DW-MRI alone. Image fusion also showed increased overall localization accuracy when stratified across 6 distinct anatomical regions of the temporal bone (localization sensitivity and specificity, 0.76 and 0.98 for CT-MRI fusion vs 0.58 and 0.98 for PROPELLER DW-MRI). For PROPELLER DW-MRI, there were 15 true-positive, 45 true-negative, 1 false-positive, and 11 false-negative results; overall accuracy was 0.83. 
For CT-MRI fusion, there were 20 true-positive, 45 true-negative, 1 false-positive, and 6 false-negative results; overall accuracy was 0.90. The poor anatomical spatial resolution of DW-MRI makes precise localization of cholesteatoma within the middle ear and mastoid a diagnostic challenge. This study suggests that the bony anatomic detail obtained via CT coupled with the excellent sensitivity and specificity of PROPELLER DW-MRI for cholesteatoma can improve both preoperative identification and localization of disease over DW-MRI alone.
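The reported accuracies follow from the stated confusion counts; a small helper (the names are ours) reproduces them, and the same counts also yield the per-region localization sensitivity and specificity quoted above:

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Standard diagnostic-accuracy metrics from confusion counts,
    matching the per-region tallies given in the abstract."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

dwmri = diagnostic_metrics(tp=15, tn=45, fp=1, fn=11)   # PROPELLER DW-MRI
fusion = diagnostic_metrics(tp=20, tn=45, fp=1, fn=6)   # CT-MRI fusion
```

The computed overall accuracies (60/72 = 0.83 and 65/72 = 0.90) and the DW-MRI localization sensitivity (15/26 = 0.58) match the figures stated in the abstract.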
The efficiency of genome-wide selection for genetic improvement of net merit.
Togashi, K; Lin, C Y; Yamazaki, T
2011-10-01
Four methods of selection for net merit comprising 2 correlated traits were compared in this study: 1) EBV-only index (I₁), which consists of the EBV of both traits (i.e., traditional 2-trait BLUP selection); 2) GEBV-only index (I₂), which comprises the genomic EBV (GEBV) of both traits; 3) GEBV-assisted index (I₃), which combines both the EBV and the GEBV of both traits; and 4) GBV-assisted index (I₄), which combines both the EBV and the true genomic breeding value (GBV) of both traits. Comparisons of these indices were based on 3 evaluation criteria [selection accuracy, genetic response (ΔH), and relative efficiency] under 64 scenarios that arise from combining 2 levels of genetic correlation (r(G)), 2 ratios of genetic variances between traits, 2 ratios of the genomic variance to total genetic variance for trait 1, 4 accuracies of EBV, and 2 proportions of r(G) explained by the GBV. Both the selection accuracy and the genetic responses of the indices I₁, I₃, and I₄ increased as the accuracy of EBV increased, but the efficiency of the indices I₃ and I₄ relative to I₁ decreased as the accuracy of EBV increased. The relative efficiency of both I₃ and I₄ was generally greater when the accuracy of EBV was 0.6 than when it was 0.9, suggesting that genomic markers are most useful in assisting selection when the accuracy of EBV is low. The GBV-assisted index I₄ was superior to the GEBV-assisted index I₃ in all 64 cases examined, indicating the importance of improving the accuracy of prediction of genomic breeding values. Other parameters being identical, increasing the genetic variance of a high-heritability trait would increase the genetic response of the genomic indices (I₂, I₃, and I₄). The genetic responses to I₂, I₃, and I₄ were greater when the genetic correlation between traits was positive (r(G) = 0.5) than when it was negative (r(G) = -0.5).
The results of this study indicate that the effectiveness of the GEBV-assisted index I₃ is affected by heritability of and genetic correlation between traits, the ratio of genetic variances between traits, the genomic-genetic variance ratio of each index trait, the proportion of genetic correlation accounted for by the genomic markers, and the accuracy of predictions of both EBV and GBV. However, most of these affecting factors are genetic characteristics of a population that is beyond the control of the breeders. The key factor subject to manipulation is to maximize both the proportion of the genetic variance explained by GEBV and the accuracy of both GEBV and EBV. The developed procedures provide means to investigate the efficiency of various genomic indices for any given combination of the genetic factors studied.
NASA Astrophysics Data System (ADS)
Dondurur, Mehmet
The primary objective of this study was to determine the degree to which modern SAR systems can be used to obtain information about the Earth's vegetative resources. Information obtainable from microwave synthetic aperture radar (SAR) data was compared with that obtainable from LANDSAT-TM and SPOT data. Three hypotheses were tested: (a) Classification of land cover/use from SAR data can be accomplished on a pixel-by-pixel basis with the same overall accuracy as from LANDSAT-TM and SPOT data. (b) Classification accuracy for individual land cover/use classes will differ between sensors. (c) Combining information derived from optical and SAR data into an integrated monitoring system will improve overall and individual land cover/use class accuracies. The study was conducted with three data sets for the Sleeping Bear Dunes test site in the northwestern part of Michigan's lower peninsula, including an October 1982 LANDSAT-TM scene, a June 1989 SPOT scene and C-, L- and P-Band radar data from the Jet Propulsion Laboratory AIRSAR. Reference data were derived from the Michigan Resource Information System (MIRIS) and available color infrared aerial photos. Classification and rectification of data sets were done using ERDAS Image Processing Programs. Classification algorithms included Maximum Likelihood, Mahalanobis Distance, Minimum Spectral Distance, ISODATA, Parallelepiped, and Sequential Cluster Analysis. Classified images were rectified as necessary so that all were at the same scale and oriented north-up. Results were analyzed with contingency tables and percent correctly classified (PCC) and Cohen's Kappa (CK) as accuracy indices using CSLANT and ImagePro programs developed for this study. Accuracy analyses were based upon a 1.4 by 6.5 km area with its long axis east-west. 
Reference data for this subscene total 55,770 15 by 15 m pixels with sixteen cover types, including seven level III forest classes, three level III urban classes, two level II range classes, two water classes, one wetland class and one agriculture class. An initial analysis was made without correcting the 1978 MIRIS reference data to the different dates of the TM, SPOT and SAR data sets. In this analysis, highest overall classification accuracy (PCC) was 87% with the TM data set, with both SPOT and C-Band SAR at 85%, a difference statistically significant at the 0.05 level. When the reference data were corrected for land cover change between 1978 and 1991, classification accuracy with the C-Band SAR data increased to 87%. Classification accuracy differed from sensor to sensor for individual land cover classes. Combining sensors into hypothetical multi-sensor systems resulted in higher accuracies than for any single sensor. Combining LANDSAT-TM and C-Band SAR yielded an overall classification accuracy (PCC) of 92%. The results of this study indicate that C-Band SAR data provide an acceptable substitute for LANDSAT-TM or SPOT data when land cover information is desired for areas where cloud cover obscures the terrain. Even better results can be obtained by integrating TM and C-Band SAR data into a multi-sensor system.
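The two accuracy indices used above, percent correctly classified (PCC) and Cohen's Kappa (CK), are both derived from a classification contingency table. A generic sketch (not the study's CSLANT/ImagePro programs; the 2-class matrix is invented for illustration):

```python
import numpy as np

# PCC is the fraction of pixels on the diagonal of the confusion matrix;
# Cohen's Kappa corrects that observed agreement for the agreement expected
# by chance from the row and column totals. Rows = reference classes,
# columns = classified classes.
def pcc_and_kappa(cm):
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                           # observed agreement (PCC)
    pe = (cm.sum(axis=1) @ cm.sum(axis=0)) / n**2   # chance agreement
    return po, (po - pe) / (1 - pe)

cm = [[45, 5],
      [10, 40]]          # hypothetical 2-class contingency table
pcc, ck = pcc_and_kappa(cm)   # pcc = 0.85, ck = 0.70
```

Kappa is the more conservative of the two because a classifier that merely reproduces the class proportions scores zero.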
Simple and Effective Algorithms: Computer-Adaptive Testing.
ERIC Educational Resources Information Center
Linacre, John Michael
Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…
Ecological risk assessors face increasing demands to assess more chemicals, with greater speed and accuracy, and to do so using fewer resources and experimental animals. New approaches in biological and computational sciences are being developed to generate mechanistic informatio...
Ecological risk assessors face increasing demands to assess more chemicals, with greater speed and accuracy, and to do so using fewer resources and experimental animals. New approaches in biological and computational sciences may be able to generate mechanistic information that ...
Laukka, Petri; Neiberg, Daniel; Elfenbein, Hillary Anger
2014-06-01
The possibility of cultural differences in the fundamental acoustic patterns used to express emotion through the voice is an unanswered question central to the larger debate about the universality versus cultural specificity of emotion. This study used emotionally inflected standard-content speech segments expressing 11 emotions produced by 100 professional actors from 5 English-speaking cultures. Machine learning simulations were employed to classify expressions based on their acoustic features, using conditions where training and testing were conducted on stimuli coming from either the same or different cultures. A wide range of emotions were classified with above-chance accuracy in cross-cultural conditions, suggesting vocal expressions share important characteristics across cultures. However, classification showed an in-group advantage with higher accuracy in within- versus cross-cultural conditions. This finding demonstrates cultural differences in expressive vocal style, and supports the dialect theory of emotions according to which greater recognition of expressions from in-group members results from greater familiarity with culturally specific expressive styles.
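The within- versus cross-cultural design above can be illustrated with a minimal stand-in for the authors' machine learning simulations: train a classifier on acoustic feature vectors from one culture, then test it on the same culture versus a different one. Everything here is synthetic (a nearest-centroid classifier on made-up features), intended only to show the shape of the comparison:

```python
import numpy as np

# Two "cultures" share the same emotion categories but culture B's features
# are shifted to mimic a culturally specific expressive style.
rng = np.random.default_rng(0)

def make_culture(shift, n=40, n_emotions=4, n_feats=6):
    X, y = [], []
    for e in range(n_emotions):
        X.append(rng.normal(loc=e + shift, scale=0.3, size=(n, n_feats)))
        y.append(np.full(n, e))
    return np.vstack(X), np.concatenate(y)

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    cents = np.stack([Xtr[ytr == e].mean(0) for e in np.unique(ytr)])
    pred = np.argmin(((Xte[:, None] - cents) ** 2).sum(-1), axis=1)
    return (pred == yte).mean()

Xa, ya = make_culture(shift=0.0)   # "culture A"
Xb, yb = make_culture(shift=0.4)   # "culture B": same emotions, shifted style
within = nearest_centroid_acc(Xa, ya, Xa, ya)   # train and test on culture A
cross  = nearest_centroid_acc(Xa, ya, Xb, yb)   # train on A, test on B
```

Both conditions stay well above chance (0.25 for four emotions), but the cross-cultural condition loses accuracy to the style shift, mirroring the in-group advantage reported above.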
Evaluating the Accuracy of Results for Teacher Implemented Trial-Based Functional Analyses.
Rispoli, Mandy; Ninci, Jennifer; Burke, Mack D; Zaini, Samar; Hatton, Heather; Sanchez, Lisa
2015-09-01
Trial-based functional analysis (TBFA) allows for the systematic and experimental assessment of challenging behavior in applied settings. The purpose of this study was to evaluate a professional development package focused on training three Head Start teachers to conduct TBFAs with fidelity during ongoing classroom routines. To assess the accuracy of the TBFA results, the effects of a function-based intervention derived from the TBFA were compared with the effects of a non-function-based intervention. Data were collected on child challenging behavior and appropriate communication. An A-B-A-C-D design was utilized in which A represented baseline, B and C consisted of either the function-based or the non-function-based intervention counterbalanced across participants, and D represented teacher implementation of the most effective intervention. Results showed that the function-based intervention produced greater decreases in challenging behavior and greater increases in appropriate communication than the non-function-based intervention for all three children. © The Author(s) 2015.
Spacesuit glove manufacturing enhancements through the use of advanced technologies
NASA Astrophysics Data System (ADS)
Cadogan, David; Bradley, David; Kosmo, Joseph
The success of astronauts performing extravehicular activity (EVA) on orbit is highly dependent upon the performance of their spacesuit gloves. A study has recently been conducted to advance the development and manufacture of spacesuit gloves. The process replaces the manual techniques of spacesuit glove manufacture by utilizing emerging technologies such as laser scanning, Computer Aided Design (CAD), computer generation of two-dimensional patterns from three-dimensional surfaces, rapid prototyping technology, and laser cutting of materials to manufacture the new gloves. Results of the program indicate that the baseline process will not increase the cost of the gloves as compared to the existing styles and, in production, may reduce their cost. Perhaps the most important outcome of the Laserscan process is that greater accuracy and design control can be realized. Greater accuracy was achieved in the baseline anthropometric measurements and CAD data measurements, which subsequently improved the design features. This effectively enhances glove performance through better fit and comfort.
Implant alignment in total elbow arthroplasty: conventional vs. navigated techniques
NASA Astrophysics Data System (ADS)
McDonald, Colin P.; Johnson, James A.; King, Graham J. W.; Peters, Terry M.
2009-02-01
Incorrect selection of the native flexion-extension axis during implant alignment in elbow replacement surgery is likely a significant contributor to failure of the prosthesis. Computer and image-assisted surgery is emerging as a useful surgical tool for improving the accuracy of orthopaedic procedures. This study evaluated the accuracy of implant alignment using an image-based navigation technique compared with a conventional non-navigated approach. Implant alignment error was 0.8 +/- 0.3 mm in translation and 1.1 +/- 0.4° in rotation for the navigated alignment, compared with 3.1 +/- 1.3 mm and 5.0 +/- 3.8° for the non-navigated alignment. Five of the 11 non-navigated alignments were malaligned by greater than 5°, while none of the navigated alignments was placed with an error of greater than 2.0°. It is likely that improved implant positioning will lead to reduced implant loading and wear, resulting in fewer implant-related complications and revision surgeries.
Assessment of craniometric traits in South Indian dry skulls for sex determination.
Ramamoorthy, Balakrishnan; Pai, Mangala M; Prabhu, Latha V; Muralimanju, B V; Rai, Rajalakshmi
2016-01-01
The skeleton plays an important role in sex determination in forensic anthropology. The skull is considered second best after the pelvic bone for sex determination because of its better retention of morphological features. Different populations have varying skeletal characteristics, making population-specific analysis for sex determination essential. Hence, the objective of this investigation was to determine, with the highest possible accuracy, the sex of adult skulls from cranial parameters in a South Indian population and to provide baseline data for sex determination in South India. Seventy preserved adult human skulls were examined and, based on morphological traits, classified into 43 male and 27 female skulls. A total of 26 craniometric parameters were studied. The data were analyzed using SPSS discriminant function analysis. Stepwise, multivariate, and univariate discriminant function analyses gave accuracies of 77.1%, 85.7%, and 72.9%, respectively. Multivariate direct discriminant function analysis classified skulls into male and female with the highest level of accuracy. Using stepwise discriminant function analysis, the most dimorphic variable for determining the sex of the skull was biauricular breadth, followed by weight. Subjecting the best dimorphic variables to univariate discriminant analysis yielded high levels of accuracy of sexual dimorphism. The high classification accuracies obtained in this study indicate a high level of sexual dimorphism in the crania, yielding specific discriminant equations for sex determination in South Indian people. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
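A two-group linear discriminant of the kind described above can be sketched in a few lines. This is a generic Fisher discriminant on synthetic stand-ins for two measurements (the means, spreads, and units below are invented for illustration, not the study's data):

```python
import numpy as np

# Synthetic "craniometric" data: two measurements per skull, 43 males and
# 27 females, mimicking variables such as biauricular breadth (mm) and
# weight (g). All parameter values are hypothetical.
rng = np.random.default_rng(1)
male   = rng.normal([125.0, 700.0], [4.0, 60.0], size=(43, 2))
female = rng.normal([118.0, 600.0], [4.0, 60.0], size=(27, 2))

X = np.vstack([male, female])
y = np.concatenate([np.ones(43), np.zeros(27)])   # 1 = male, 0 = female

# Fisher's linear discriminant: w = Sw^-1 (mu_m - mu_f), with the cutoff
# midway between the projected group means.
mu_m, mu_f = male.mean(0), female.mean(0)
Sw = np.cov(male.T) * (len(male) - 1) + np.cov(female.T) * (len(female) - 1)
w = np.linalg.solve(Sw, mu_m - mu_f)
c = w @ (mu_m + mu_f) / 2
pred = (X @ w > c).astype(float)
accuracy = (pred == y).mean()    # resubstitution accuracy, as in the study
```

The stepwise and multivariate analyses reported above amount to choosing which measurements enter `X` and how many discriminant functions are retained.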
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopich, Irina V.
2015-01-21
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
Gopich, Irina V.
2015-01-01
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated. PMID:25612692
[Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].
Krimmel, M; Kluba, S; Dietz, K; Reinert, S
2005-03-01
The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations.
NASA Technical Reports Server (NTRS)
Oza, D. H.; Jones, T. L.; Hodjatzadeh, M.; Samii, M. V.; Doll, C. E.; Hart, R. C.; Mistretta, G. D.
1991-01-01
The development of the Real-Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination on a Disk Operating System (DOS) based Personal Computer (PC) is addressed. The results of a study to compare the orbit determination accuracy of a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), are addressed. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for the Earth Radiation Budget Satellite (ERBS); the maximum solution differences were less than 25 m after the filter had reached steady state.
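The "maximum solution difference" figure above is simply the largest magnitude of the position difference between the two orbit solutions sampled at common epochs. A minimal sketch of that metric (not the GTDS/RTOD/E tooling itself):

```python
import numpy as np

# Given two orbit solutions as arrays of (x, y, z) positions at the same
# epochs, return the maximum position difference in the same length units.
def max_position_difference(sol_a, sol_b):
    sol_a, sol_b = np.asarray(sol_a, float), np.asarray(sol_b, float)
    return np.linalg.norm(sol_a - sol_b, axis=1).max()

# Toy two-epoch example: the solutions differ by 5 units at the first epoch
# and agree at the second, so the metric returns 5.0.
d = max_position_difference([[0, 0, 0], [1, 0, 0]],
                            [[0, 3, 4], [1, 0, 0]])
```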
Shokri, Abbas; Eskandarloo, Amir; Norouzi, Marouf; Poorolajal, Jalal; Majidi, Gelareh; Aliyaly, Alireza
2018-03-01
This study compared the diagnostic accuracy of cone-beam computed tomography (CBCT) scans obtained with 2 CBCT systems with high- and low-resolution modes for the detection of root perforations in endodontically treated mandibular molars. The root canals of 72 mandibular molars were cleaned and shaped. Perforations measuring 0.2, 0.3, and 0.4 mm in diameter were created at the furcation area of 48 roots, simulating strip perforations, or on the external surfaces of 48 roots, simulating root perforations. Forty-eight roots remained intact (control group). The roots were filled using gutta-percha (Gapadent, Tianjin, China) and AH26 sealer (Dentsply Maillefer, Ballaigues, Switzerland). The CBCT scans were obtained using the NewTom 3G (QR srl, Verona, Italy) and Cranex 3D (Soredex, Helsinki, Finland) CBCT systems in high- and low-resolution modes, and were evaluated by 2 observers. The chi-square test was used to assess the nominal variables. For strip perforations, the accuracies of the low- and high-resolution modes were 75% and 83% for the NewTom 3G and 67% and 69% for the Cranex 3D. For root perforations, the accuracies of the low- and high-resolution modes were 79% and 83% for the NewTom 3G and 56% and 73% for the Cranex 3D. The accuracy of the 2 CBCT systems differed for the detection of strip and root perforations: the NewTom 3G had non-significantly higher accuracy than the Cranex 3D. In both scanners, the high-resolution mode yielded significantly higher accuracy than the low-resolution mode. The diagnostic accuracy of CBCT scans was not affected by the perforation diameter.
Influence of Pedometer Position on Pedometer Accuracy at Various Walking Speeds: A Comparative Study
Lovis, Christian
2016-01-01
Background Demographic growth in conjunction with the rise of chronic diseases is increasing the pressure on health care systems in most OECD countries. Physical activity is known to be an essential factor in improving or maintaining good health. Walking is especially recommended, as it is an activity that can easily be performed by most people without constraints. Pedometers have been extensively used as an incentive to motivate people to become more active. However, a recognized problem with these devices is their diminishing accuracy associated with decreased walking speed. The arrival on the consumer market of new devices, worn indifferently either at the waist, wrist, or as a necklace, gives rise to new questions regarding their accuracy at these different positions. Objective Our objective was to assess the performance of 4 pedometers (iHealth activity monitor, Withings Pulse O2, Misfit Shine, and Garmin vívofit) and compare their accuracy according to their position worn, and at various walking speeds. Methods We conducted this study in a controlled environment with 21 healthy adults required to walk 100 m at 3 different paces (0.4 m/s, 0.6 m/s, and 0.8 m/s) regulated by means of a string attached between their legs at the level of their ankles and a metronome ticking the cadence. To obtain baseline values, we asked the participants to walk 200 m at their own pace. Results A decrease of accuracy was positively correlated with reduced speed for all pedometers (12% mean error at self-selected pace, 27% mean error at 0.8 m/s, 52% mean error at 0.6 m/s, and 76% mean error at 0.4 m/s). 
Although the position of the pedometer on the person did not significantly influence its accuracy, some interesting tendencies can be highlighted in 2 settings: (1) positioning the pedometer at the waist at a speed greater than 0.8 m/s or as a necklace at preferred speed tended to produce lower mean errors than at the wrist position; and (2) at a slow speed (0.4 m/s), pedometers worn at the wrist tended to produce a lower mean error than in the other positions. Conclusions At all positions, all tested pedometers generated significant errors at slow speeds and therefore cannot be used reliably to evaluate the amount of physical activity for people walking slower than 0.6 m/s (2.16 km/h, or 1.34 mph). At slow speeds, the better accuracy observed with pedometers worn at the wrist could constitute a valuable line of inquiry for the future development of devices adapted to elderly people. PMID:27713114
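The percentage errors reported above are mean relative errors of the device's step count against the true count. A minimal sketch of the metric (the step counts below are illustrative, not the study's data):

```python
# Mean relative error of measured step counts against the true counts,
# expressed as a fraction (multiply by 100 for the percentages quoted above).
def mean_relative_error(measured, actual):
    return sum(abs(m - a) / a for m, a in zip(measured, actual)) / len(measured)

true_steps = [120, 118, 121]      # observed counts over a fixed course
device     = [60, 55, 64]         # heavy undercount, as at very slow speeds
err = mean_relative_error(device, true_steps)   # ~0.50, i.e. ~50% error
```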
Kiang, Richard; Adimi, Farida; Soika, Valerii; Nigro, Joseph; Singhasivanon, Pratap; Sirichaisinthop, Jeeraphat; Leemingsawat, Somjai; Apiwathnasorn, Chamnarn; Looareesuwan, Sornchai
2006-11-01
In many malarious regions malaria transmission roughly coincides with rainy seasons, which provide for more abundant larval habitats. In addition to precipitation, other meteorological and environmental factors may also influence malaria transmission. These factors can be remotely sensed using earth observing environmental satellites and estimated with seasonal climate forecasts. The use of remote sensing as an early warning tool for malaria epidemics has been broadly studied in recent years, especially for Africa, where the majority of the world's malaria occurs. Although the Greater Mekong Subregion (GMS), which includes Thailand and the surrounding countries, is an epicenter of multidrug resistant falciparum malaria, the meteorological and environmental factors affecting malaria transmission in the GMS have not been examined in detail. In this study, the parasitological data used consisted of the monthly malaria epidemiology data at the provincial level compiled by the Thai Ministry of Public Health. Precipitation, temperature, relative humidity, and vegetation index obtained from both climate time series and satellite measurements were used as independent variables to model malaria. We used neural network methods, an artificial-intelligence technique, to model the dependency of malaria transmission on these variables. The average training accuracy of the neural network analysis for three provinces (Kanchanaburi, Mae Hong Son, and Tak), which are among the provinces most endemic for malaria, is 72.8% and the average testing accuracy is 62.9% based on the 1994-1999 data. A more complex neural network architecture resulted in higher training accuracy but also lower testing accuracy. Taking into account the uncertainty regarding reported malaria cases, we divided the malaria cases into bands (classes) to compute training accuracy.
Using the same neural network architecture on the 19 most endemic provinces for years 1994 to 2000, the mean training accuracy weighted by provincial malaria cases was 73%. Prediction of malaria cases for 2001 using neural networks trained for 1994-2000 gave a weighted accuracy of 53%. Because there was a significant decrease (31%) in the number of malaria cases in the 19 provinces from 2000 to 2001, the networks overestimated malaria transmission. The decrease in transmission was not due to climatic or environmental changes. Thailand is a country with long borders. Migrant populations from the neighboring countries enlarge the human malaria reservoir because these populations have more limited access to health care. This issue further complicates modeling malaria on the basis of meteorological and environmental variables alone. In spite of the relatively low resolution of the data and the impact of migrant populations, we have uncovered a reasonably clear dependency of malaria on meteorological and environmental remote sensing variables. When other contextual determinants do not vary significantly, using neural network analysis along with remote sensing variables to predict malaria endemicity should be feasible.
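The banding idea mentioned above (scoring a prediction as correct when it falls in the same case-count band as the observation, to absorb reporting uncertainty) can be sketched directly. The band edges and counts below are invented for illustration:

```python
import numpy as np

# Hypothetical monthly case-count bands per province; a prediction is
# "correct" when predicted and observed counts land in the same band.
bands = [0, 50, 200, 1000, np.inf]

def to_band(cases):
    # np.digitize returns the 1-based bin index with bands[i-1] <= x < bands[i]
    return np.digitize(cases, bands) - 1

observed  = np.array([12, 75, 640, 1500, 30])
predicted = np.array([40, 180, 950, 800, 55])
accuracy = (to_band(observed) == to_band(predicted)).mean()   # 3 of 5 match
```

Wider bands trade resolution for robustness against under- or over-reporting of cases.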
Gandhi, Neha; Jain, Sandeep; Kumar, Manish; Rupakar, Pratik; Choyal, Kanaram; Prajapati, Seema
2015-01-01
Age assessment may be a crucial step in postmortem profiling leading to confirmative identification. In children, Demirjian's method based on eight developmental stages was developed to determine maturity scores as a function of age and polynomial functions to determine age as a function of score. The aim of this study was to evaluate the reliability of age estimation using Demirjian's eight-teeth method following the French maturity scores and an Indian-specific formula from the developmental stages of the third molar on orthopantomograms. Dental panoramic tomograms from 30 subjects each of known chronological age and sex were collected and evaluated according to Demirjian's criteria. Age calculations were performed using Demirjian's formula and the Indian formula. Statistical analysis used the chi-square and ANOVA tests, and the P values obtained were statistically significant. There was an average underestimation of age with both the Indian and Demirjian's formulas. The mean absolute error was lower using the Indian formula; hence, it can be applied for age estimation in the present Gujarati population. Also, females were ahead of males in achieving dental maturity; thus, completion of dental development is attained earlier in females. Greater accuracy can be obtained if population-specific formulas considering ethnic and environmental variation are derived by performing regression analysis.
Liao, Xiaolei; Zhao, Juanjuan; Jiao, Cheng; Lei, Lei; Qiang, Yan; Cui, Qiang
2016-01-01
Background Lung parenchyma segmentation is often performed as an important pre-processing step in the computer-aided diagnosis of lung nodules based on CT image sequences. However, existing lung parenchyma image segmentation methods cannot fully segment all lung parenchyma images and have a slow processing speed, particularly for images in the top and bottom of the lung and the images that contain lung nodules. Method Our proposed method first uses the position of the lung parenchyma image features to obtain lung parenchyma ROI image sequences. A gradient and sequential linear iterative clustering algorithm (GSLIC) for sequence image segmentation is then proposed to segment the ROI image sequences and obtain superpixel samples. The SGNF, which is optimized by a genetic algorithm (GA), is then utilized for superpixel clustering. Finally, the grey and geometric features of the superpixel samples are used to identify and segment all of the lung parenchyma image sequences. Results Our proposed method achieves higher segmentation precision and greater accuracy in less time. It has an average processing time of 42.21 seconds for each dataset and an average volume pixel overlap ratio of 92.22 ± 4.02% for four types of lung parenchyma image sequences. PMID:27532214
Polidori, David; Rowley, Clarence
2014-07-22
The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
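The traditional mono-exponential back-extrapolation step described above fits ln C(t) = ln C0 - k·t to the early dye samples, extrapolates to t = 0, and takes plasma volume as dose / C0. A minimal sketch of that step (the dose, sampling times, and decay rate are synthetic, and this is the traditional method, not the authors' optimized variant):

```python
import math

# Fit a line to (t, ln C) by least squares, back-extrapolate to t = 0,
# and estimate plasma volume as dose / C(0).
def plasma_volume(times, concs, dose_mg):
    n = len(times)
    xbar = sum(times) / n
    ybar = sum(math.log(c) for c in concs) / n
    sxy = sum((t - xbar) * (math.log(c) - ybar) for t, c in zip(times, concs))
    sxx = sum((t - xbar) ** 2 for t in times)
    slope = sxy / sxx                       # equals -k, the decay rate
    c0 = math.exp(ybar - slope * xbar)      # back-extrapolated C(0)
    return dose_mg / c0                     # mL if concentrations are mg/mL

# Synthetic example: 25 mg dose, samples at 2-5 min on C(t) = 0.01*exp(-0.1 t)
# mg/mL, so C(0) = 0.01 mg/mL and the estimated volume is 2500 mL.
times = [2, 3, 4, 5]
concs = [0.01 * math.exp(-0.1 * t) for t in times]
pv = plasma_volume(times, concs, 25.0)
```

The paper's point is that real indocyanine green kinetics are not mono-exponential in the first minutes, so where the fit window starts (and how C0 is extrapolated) materially changes the estimate.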
Stress fractures: diagnosis, treatment, and prevention.
Patel, Deepak S; Roth, Matt; Kapil, Neha
2011-01-01
Stress fractures are common injuries in athletes and military recruits. These injuries occur more commonly in lower extremities than in upper extremities. Stress fractures should be considered in patients who present with tenderness or edema after a recent increase in activity or repeated activity with limited rest. The differential diagnosis varies based on location, but commonly includes tendinopathy, compartment syndrome, and nerve or artery entrapment syndrome. Medial tibial stress syndrome (shin splints) can be distinguished from tibial stress fractures by diffuse tenderness along the length of the posteromedial tibial shaft and a lack of edema. When stress fracture is suspected, plain radiography should be obtained initially and, if negative, may be repeated after two to three weeks for greater accuracy. If an urgent diagnosis is needed, triple-phase bone scintigraphy or magnetic resonance imaging should be considered. Both modalities have a similar sensitivity, but magnetic resonance imaging has greater specificity. Treatment of stress fractures consists of activity modification, including the use of nonweight-bearing crutches if needed for pain relief. Analgesics are appropriate to relieve pain, and pneumatic bracing can be used to facilitate healing. After the pain is resolved and the examination shows improvement, patients may gradually increase their level of activity. Surgical consultation may be appropriate for patients with stress fractures in high-risk locations, nonunion, or recurrent stress fractures. Prevention of stress fractures has been studied in military personnel, but more research is needed in other populations.
Zhao, Yinzhi; Zhang, Peng; Guo, Jiming; Li, Xin; Wang, Jinling; Yang, Fei; Wang, Xinzhe
2018-06-20
Due to the great influence of multipath effects, noise, and clock error on pseudorange measurements, the carrier phase double difference equation is widely used in high-precision indoor pseudolite positioning. The initial position is determined mostly by the known point initialization (KPI) method, and then the ambiguities can be fixed with the LAMBDA method. In this paper, a new method that achieves high-precision indoor pseudolite positioning without using the KPI is proposed. The initial coordinates can be quickly obtained to meet the accuracy requirement of the indoor LAMBDA method. The detailed process of the method is as follows: For a low-cost single-frequency pseudolite system, the static differential pseudolite system (DPL) method is first used to quickly obtain low-accuracy positioning coordinates of the rover station. Then, the ambiguity function method (AFM) is used to search for the coordinates in the corresponding epoch. The coordinates obtained by the AFM can meet the initial accuracy requirement of the LAMBDA method, so that the double difference carrier phase ambiguities can be correctly fixed. Following the above steps, high-precision indoor pseudolite positioning can be realized. Several experiments, including static and dynamic tests, are conducted to verify the feasibility of the new method. According to the results of the experiments, initial coordinates with decimeter-level accuracy can be obtained through the DPL. For the AFM part, a one-meter search scope and two-centimeter or four-centimeter search steps are used to ensure centimeter-level precision and high search efficiency. After dealing with the problem of multiple peaks caused by the ambiguity cosine function, the coordinate information of the maximum ambiguity function value (AFV) is taken as the initial value of the LAMBDA method, and the ambiguities can be fixed quickly.
The new method provides accuracies at the centimeter level for dynamic experiments and at the millimeter level for static ones.
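The AFM step described above can be sketched as a brute-force grid search that maximizes a sum of cosines of phase residuals, which is insensitive to the unknown integer ambiguities. The snippet below is a simplified single-receiver illustration (the paper works with double-differenced observations), and the geometry, wavelength, search scope, and step are all hypothetical values chosen for illustration:

```python
import math

def ambiguity_function_value(candidate, pseudolites, phases, wavelength):
    """Sum of cosines of phase residuals; unaffected by integer
    ambiguities because cos(2*pi*n) = 1 for any integer n."""
    afv = 0.0
    for position, phi in zip(pseudolites, phases):
        rho = math.dist(candidate, position)      # geometric range (m)
        residual = phi - rho / wavelength         # residual in cycles
        afv += math.cos(2.0 * math.pi * residual)
    return afv

def afm_grid_search(pseudolites, phases, wavelength, center,
                    half_span=0.5, step=0.02):
    """Search a cubic grid around an approximate position (e.g. a 1 m
    scope with a 2 cm step, as in the paper) and return the grid point
    with the maximum ambiguity function value (AFV)."""
    best, best_afv = None, -float("inf")
    n = int(half_span / step)
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                cand = (center[0] + i * step,
                        center[1] + j * step,
                        center[2] + k * step)
                afv = ambiguity_function_value(cand, pseudolites,
                                               phases, wavelength)
                if afv > best_afv:
                    best, best_afv = cand, afv
    return best, best_afv
```

The point of maximum AFV would then seed the LAMBDA search; in the paper, handling of multiple cosine peaks precedes this hand-off.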
Meteorological correction of optical beam refraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lukin, V.P.; Melamud, A.E.; Mironov, V.L.
1986-02-01
At the present time laser reference systems (LRSs) are widely used in agrotechnology and in geodesy. The demands for accuracy in LRSs constantly increase, so that a study of error sources and means of considering and correcting them is of practical importance. A theoretical algorithm is presented for correction of the regular component of atmospheric refraction for various types of hydrostatic stability of the atmospheric layer adjacent to the earth. The algorithm obtained is compared to regression equations obtained by processing an experimental data base. It is shown that within admissible accuracy limits the refraction correction algorithm permits construction of correction tables and design of optical systems with programmable correction for atmospheric refraction on the basis of rapid meteorological measurements.
Alter, Adam L; Oppenheimer, Daniel M; Epley, Nicholas
2013-08-01
In this issue of Cognition, Thompson and her colleagues challenge the results from a paper we published several years ago (Alter, Oppenheimer, Epley, & Eyre, 2007). That paper demonstrated that metacognitive difficulty or disfluency can trigger more analytical thinking as measured by accuracy on several reasoning tasks. In their experiments, Thompson et al. find evidence that people process information more deeply-but not necessarily more accurately-when they experience disfluency. These results are consistent with our original theorizing, but the authors misinterpret it as counter-evidence because they suggest that accuracy (and even confidence) is a measure of deeper processing rather than a contingent outcome of such processing. We further suggest that Thompson et al. err when they discriminate between "perceptual fluency" and "answer fluency," the former of which is an element of the latter. Thompson et al. advance research by adding reaction time as a measure of deeper cognitive processing, but we caution against misinterpreting the meaning of accuracy. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Mulligan, P. J.; Gervin, J. C.; Lu, Y. C.
1985-01-01
An area bordering the Eastern Shore of the Chesapeake Bay was selected for study and classified using unsupervised techniques applied to LANDSAT-2 MSS data and several band combinations of LANDSAT-4 TM data. The accuracies of these Level I land cover classifications were verified using the Taylor's Island USGS 7.5 minute topographic map, which was photointerpreted, digitized and rasterized. For the Taylor's Island map, comparing the MSS and TM three band (2 3 4) classifications, the increased resolution of TM produced a small improvement in overall accuracy of 1% correct, due primarily to small improvements, of 1% and 3%, in areas such as water and woodland. This was expected, as the MSS data typically produce high accuracies for categories which cover large contiguous areas. However, in the categories covering smaller areas within the map there was generally an improvement of at least 10%. Classification of the important residential category improved 12%, and wetlands were mapped with 11% greater accuracy.
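Accuracy figures like those above are pixel-wise agreement rates between the classified image and the rasterized reference map, computed overall and per reference category. A minimal sketch with made-up labels:

```python
def classification_accuracy(classified, reference):
    """Overall accuracy: percent of pixels whose label matches the
    reference map; per-class: the same rate within each reference class
    (producer's accuracy)."""
    assert len(classified) == len(reference)
    overall = 100.0 * sum(c == r for c, r in zip(classified, reference)) / len(reference)
    per_class = {}
    for k in set(reference):
        pairs = [(c, r) for c, r in zip(classified, reference) if r == k]
        per_class[k] = 100.0 * sum(c == r for c, r in pairs) / len(pairs)
    return overall, per_class
```

For example, a reference map of four water and two woodland pixels, with one of each misclassified, gives 66.7% overall, 75% for water, and 50% for woodland.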
Tsukagoshi, Mariko; Araki, Kenichiro; Saito, Fumiyoshi; Kubo, Norio; Watanabe, Akira; Igarashi, Takamichi; Ishii, Norihiro; Yamanaka, Takahiro; Shirabe, Ken; Kuwano, Hiroyuki
2018-04-01
International consensus guidelines for intraductal papillary mucinous neoplasms (IPMNs) were revised in 2012. We aimed to evaluate the clinical utility of each predictor in the 2006 and 2012 guidelines and validate the diagnostic value and surgical indications. Forty-two patients with surgically resected IPMNs were included. Each predictor was applied to evaluate its diagnostic value. The 2012 guidelines had greater accuracy for invasive carcinoma than the 2006 guidelines (64.3 vs. 31.0%). Moreover, the accuracy for high-grade dysplasia also increased (from 48.6 to 77.1%). When a main pancreatic duct (MPD) size ≥8 mm was substituted for MPD size ≥10 mm in the 2012 guidelines, the accuracy for high-grade dysplasia was 80.0%. The 2012 guidelines exhibited increased diagnostic accuracy for invasive IPMN. It is important to consider surgical resection prior to invasive carcinoma, and high-risk stigmata might be a useful diagnostic criterion. Furthermore, MPD size ≥8 mm may be predictive of high-grade dysplasia.
Evaluation of airborne image data for mapping riparian vegetation within the Grand Canyon
Davis, Philip A.; Staid, Matthew I.; Plescia, Jeffrey B.; Johnson, Jeffrey R.
2002-01-01
This study examined various types of remote-sensing data that were acquired during a 12-month period over a portion of the Colorado River corridor to determine the type of data and the conditions for data acquisition that provide the optimum classification results for mapping riparian vegetation. Issues related to vegetation mapping included time of year, number and positions of wavelength bands, and spatial resolution for data acquisition to produce accurate vegetation maps versus cost of data. Image data considered in the study consisted of scanned color-infrared (CIR) film, digital CIR, and digital multispectral data, with resolutions ranging from 11 cm (photographic film) to 100 cm (multispectral), acquired during the Spring, Summer, and Fall seasons in 2000 for five long-term monitoring sites containing riparian vegetation. Results show that digitally acquired data produce higher and more consistent classification accuracies for mapping vegetation units than do film products. The highest accuracies were obtained from nine-band multispectral data; however, a four-band subset of these data, which did not include the short-wave infrared bands, produced comparable mapping results. The four-band subset consisted of the wavelength bands 0.52-0.59 µm, 0.59-0.62 µm, 0.67-0.72 µm, and 0.73-0.85 µm. Use of only three of these bands to simulate digital CIR sensors produced accuracies for several vegetation units that were 10% lower than those obtained using the full multispectral data set. Classification tests using band ratios produced lower accuracies than those using band reflectance for scanned film data, a result attributed to the relatively poor radiometric fidelity maintained by the film scanning process; calibrated multispectral data, in contrast, produced similar classification accuracies using band reflectance and band ratios.
This suggests that the intrinsic band reflectance of the vegetation is more important than inter-band reflectance differences in attaining high mapping accuracies. These results also indicate that radiometrically calibrated sensors that record a wide range of radiance produce superior results and that such sensors should be used for monitoring purposes. When texture (spatial variance) at near-infrared wavelengths was combined with spectral data in classification, accuracy increased most markedly (20-30%) for the highest resolution (11-cm) CIR film data, but the effect on accuracy decreased for lower-resolution multispectral image data, a result observed in previous studies (Franklin and McDermid 1993, Franklin et al. 2000, 2001). While many classification unit accuracies obtained from the 11-cm film CIR band and texture data were in fact higher than those produced using the 100-cm, nine-band multispectral data with texture, the 11-cm film CIR data produced much lower accuracies than the 100-cm multispectral data for the more sparsely populated vegetation units, due to saturation of picture elements during the film scanning process in vegetation units with a high proportion of alluvium. Overall classification accuracies obtained from spectral band and texture data range from 36% to 78% for all databases considered, from 57% to 71% for the 11-cm film CIR data, and from 54% to 78% for the 100-cm multispectral data. Classification results obtained from 20-cm film CIR band and texture data, which were produced by applying a Gaussian filter to the 11-cm film CIR data, showed increases in accuracy due to texture that were similar to those observed using the original 11-cm film CIR data. This suggests that data can be collected at the lower resolution and still retain the added power of vegetation texture.
Classification accuracies for the riparian vegetation units examined in this study do not appear to be influenced by season of data acquisition, although data acquired under direct sunlight produced higher overall accuracies than data acquired under overcast conditions. The latter observation, in addition to the importance of band reflectance for classification, implies that data should be acquired near summer solstice, when sun elevation and reflectance are highest and when shadows cast by steep canyon walls are minimized.
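The texture measure used above, spatial variance, is simply the per-pixel variance of band values over a small moving window. A minimal pure-Python sketch; the 3×3 window size and the edge handling (truncating the window at image borders) are assumptions for illustration:

```python
def local_variance(image, win=3):
    """Texture band: variance of pixel values in a win x win
    neighborhood centered on each pixel (truncated at image edges)."""
    h, w = len(image), len(image[0])
    r = win // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            mean = sum(vals) / len(vals)
            out[y][x] = sum((v - mean) ** 2 for v in vals) / len(vals)
    return out
```

A uniform image yields zero texture everywhere, while high-frequency detail (e.g. canopy structure in the near-infrared band) yields large values.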
Magnussen, Svein; Safer, Martin A.; Sartori, Giuseppe; Wise, Richard A.
2013-01-01
We surveyed 100 Italian defense attorneys about their knowledge and beliefs about factors affecting eyewitness accuracy. The results of similar surveys show that U.S. defense attorneys were significantly more knowledgeable than other legal professionals, including U.S. prosecutors and U.S. and European judges. The present survey of Italian defense attorneys produced similar results. However, the results suggest that the defense attorney’s superior performance may be due at least in part to their skepticism of eyewitness testimony rather than their greater knowledge of eyewitness factors. PMID:23720639
Schrangl, Patrick; Reiterer, Florian; Heinemann, Lutz; Freckmann, Guido; Del Re, Luigi
2018-05-18
Systems for continuous glucose monitoring (CGM) are evolving quickly, and the data obtained are expected to become the basis for clinical decisions for many patients with diabetes in the near future. However, this requires that their analytical accuracy is sufficient. This accuracy is usually determined with clinical studies by comparing the data obtained by the given CGM system with blood glucose (BG) point measurements made with a so-called reference method. The latter is assumed to indicate the correct value of the target quantity. Unfortunately, due to the nature of the clinical trials and the approach used, such a comparison is subject to several effects which may lead to misleading results. While some reasons for the differences between the values obtained with CGM and BG point measurements are relatively well-known (e.g., measurement in different body compartments), others related to the clinical study protocols are less visible, but also quite important. In this review, we present a general picture of the topic as well as tools that allow one to correct, or at least estimate, the uncertainty of measures of CGM system performance.
Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models
NASA Astrophysics Data System (ADS)
Saha, Debasish; Kemanian, Armen R.; Rau, Benjamin M.; Adler, Paul R.; Montes, Felipe
2017-04-01
Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (corn-soybean rotation), College Station, TX (corn-vetch rotation), Fort Collins, CO (irrigated corn), and Pullman, WA (winter wheat), representing diverse agro-ecoregions of the United States. Fertilization source, rate, and timing were site-specific. These simulated fluxes served as surrogates for daily measurements in the analysis. We "sampled" the fluxes using a fixed-interval (1-32 days) or a rule-based (decision tree-based) sampling method. Two types of decision trees were built: a high-input tree (HI) that included soil inorganic nitrogen (SIN) as a predictor variable, and a low-input tree (LI) that excluded SIN. Other predictor variables were identified with Random Forest. The decision trees were inverted to be used as rules for sampling a representative number of members from each terminal node. The uncertainty of the annual N2O flux estimate increased with the fixed-interval length. Fixed sampling intervals of 4 and 8 days were required at College Station and Ames, respectively, to yield ±20% accuracy in the flux estimate; a 12-day interval rendered the same accuracy at Fort Collins and Pullman. Both the HI and the LI rule-based methods provided the same accuracy as the fixed-interval method with up to a 60% reduction in sampling events, particularly at locations with greater temporal flux variability. For instance, at Ames, the HI rule-based and the fixed-interval methods required 16 and 91 sampling events, respectively, to achieve the same absolute bias of 0.2 kg N ha⁻¹ yr⁻¹ in estimating cumulative N2O flux.
These results suggest that using simulation models along with decision trees can reduce the cost and improve the accuracy of the estimations of cumulative N2O fluxes using the discrete chamber-based method.
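The fixed-interval estimate amounts to interpolating between sampling dates and integrating. A sketch under the assumption of trapezoidal interpolation between sampled days (the study's exact upscaling rule may differ); the daily series are invented:

```python
def cumulative_flux(daily):
    """'True' cumulative flux: the sum of the daily fluxes."""
    return sum(daily)

def fixed_interval_estimate(daily, interval):
    """Sample every `interval` days (always including the last day) and
    integrate by the trapezoid rule between sampling dates."""
    days = list(range(0, len(daily), interval))
    if days[-1] != len(daily) - 1:
        days.append(len(daily) - 1)
    return sum((daily[a] + daily[b]) / 2.0 * (b - a)
               for a, b in zip(days, days[1:]))

def relative_bias_percent(daily, interval):
    """Percent bias of the sampled estimate against the daily truth."""
    true = cumulative_flux(daily)
    return 100.0 * (fixed_interval_estimate(daily, interval) - true) / true
```

The bias grows with interval length mainly because short-lived emission pulses (e.g. after fertilization or rain) fall between sampling dates, which is why rule-based sampling that targets those conditions can match a much denser fixed schedule.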
Gemignani, Jessica; Middell, Eike; Barbour, Randall L; Graber, Harry L; Blankertz, Benjamin
2018-04-04
The statistical analysis of functional near infrared spectroscopy (fNIRS) data based on the general linear model (GLM) is often made difficult by serial correlations, high inter-subject variability of the hemodynamic response, and the presence of motion artifacts. In this work we propose to extract information on the pattern of hemodynamic activations without using any a priori model for the data, by classifying the channels as 'active' or 'not active' with a multivariate classifier based on linear discriminant analysis (LDA). This work is developed in two steps. First we compared the performance of the two analyses, using a synthetic approach in which simulated hemodynamic activations were combined with either simulated or real resting-state fNIRS data. This procedure allowed for exact quantification of the classification accuracies of GLM and LDA. In the case of real resting-state data, the correlations between classification accuracy and demographic characteristics were investigated by means of a Linear Mixed Model. In the second step, to further characterize the reliability of the newly proposed analysis method, we conducted an experiment in which participants had to perform a simple motor task and data were analyzed with the LDA-based classifier as well as with the standard GLM analysis. The results of the simulation study show that the LDA-based method achieves higher classification accuracies than the GLM analysis, and that the LDA results are more uniform across different subjects and, in contrast to the accuracies achieved by the GLM analysis, have no significant correlations with any of the demographic characteristics. Findings from the real-data experiment are consistent with the results of the real-plus-simulation study, in that the GLM-analysis results show greater inter-subject variability than do the corresponding LDA results. 
The results obtained suggest that the outcome of GLM analysis is highly vulnerable to violations of theoretical assumptions, and that therefore a data-driven approach such as that provided by the proposed LDA-based method is to be favored.
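As an illustration of the channel-classification idea, here is a minimal two-class Fisher discriminant applied to synthetic channel features; the feature extraction, the ridge regularization constant, and the data are placeholders, not the authors' pipeline:

```python
import numpy as np

def fit_lda(X, y):
    """Two-class Fisher LDA: w = Sw^-1 (mu1 - mu0), with the decision
    threshold at the midpoint of the projected class means."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    Sw += 1e-6 * np.eye(X.shape[1])   # small ridge term for stability (assumption)
    w = np.linalg.solve(Sw, mu1 - mu0)
    threshold = w @ (mu0 + mu1) / 2.0
    return w, threshold

def predict_lda(w, threshold, X):
    """Label a channel 'active' (1) when its projected feature exceeds the threshold."""
    return (X @ w > threshold).astype(int)

# Synthetic 'not active' (class 0) and 'active' (class 1) channel features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(3.0, 1.0, (100, 2))])
y = np.repeat([0, 1], 100)
w, thr = fit_lda(X, y)
accuracy = (predict_lda(w, thr, X) == y).mean()
```

Unlike a GLM analysis, nothing here assumes a hemodynamic response model; the discriminant is learned from the data themselves.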
Auditory and visual localization accuracy in young children and adults.
Martin, Karen; Johnstone, Patti; Hedrick, Mark
2015-06-01
This study aimed to measure and compare sound and light source localization ability in young children and adults who have normal hearing and normal/corrected vision in order to determine the extent to which age, type of stimuli, and stimulus order affects sound localization accuracy. Two experiments were conducted. The first involved a group of adults only. The second involved a group of 30 children aged 3 to 5 years. Testing occurred in a sound-treated booth containing a semi-circular array of 15 loudspeakers set at 10° intervals from -70° to 70° azimuth. Each loudspeaker had a tiny light bulb and a small picture fastened underneath. Seven of the loudspeakers were used to randomly test sound and light source identification. The sound stimulus was the word "baseball". The light stimulus was a flashing of a light bulb triggered by the digital signal of the word "baseball". Each participant was asked to face 0° azimuth, and identify the location of the test stimulus upon presentation. Adults used a computer mouse to click on an icon; children responded by verbally naming or walking toward the picture underneath the corresponding loudspeaker or light. A mixed experimental design using repeated measures was used to determine the effect of age and stimulus type on localization accuracy in children and adults. A mixed experimental design was used to compare the effect of stimulus order (light first/last) and varying or fixed intensity sound on localization accuracy in children and adults. Localization accuracy was significantly better for light stimuli than sound stimuli for children and adults. Children, compared to adults, showed significantly greater localization errors for audition. Three-year-old children had significantly greater sound localization errors compared to 4- and 5-year olds. Adults performed better on the sound localization task when the light localization task occurred first. 
Young children can understand and attend to localization tasks, but show poorer localization accuracy than adults in sound localization. This may be a reflection of differences in sensory modality development and/or central processes in young children, compared to adults. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
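Localization accuracy in tasks like this is often summarized as the root-mean-square deviation between response and source azimuths; the study's exact error metric is not specified in the abstract, so the following is only a generic sketch:

```python
import math

def rms_localization_error(source_az, response_az):
    """Root-mean-square localization error (degrees) between source
    and response azimuths over a block of trials."""
    assert len(source_az) == len(response_az)
    squared = [(s - r) ** 2 for s, r in zip(source_az, response_az)]
    return math.sqrt(sum(squared) / len(squared))
```

With the 10° loudspeaker spacing used here, an RMS error below 10° would mean responses typically land on or next to the correct loudspeaker.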
Multiply charged particles of the primary cosmic rays with energies greater than about 2 TeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanenko, I.P.; Grigorov, N.L.; Shestoperov, V.IA.
1986-08-01
Data on the energy spectra and charge composition of primary cosmic ray particles with energies greater than about 2 TeV are analyzed. The equipment on the Kosmos 1543 satellite used to obtain the data is described. Protons and alpha particles are detected, and the nuclei are separated into H, M, VH, and alpha groups. It is determined that the charge compositions of the primary nuclei with z greater than about 2 at energies greater than about 2 TeV compare well with data obtained at energies greater than about 1-10 GeV/nucleon. 8 references.
A novel approach to reduce environmental noise in microgravity measurements using a Scintrex CG5
NASA Astrophysics Data System (ADS)
Boddice, Daniel; Atkins, Phillip; Rodgers, Anthony; Metje, Nicole; Goncharenko, Yuriy; Chapman, David
2018-05-01
The accuracy and repeatability of microgravity measurements for surveying purposes are affected by two main sources of noise: instrument noise from the sensor and electronics, and environmental noise from anthropogenic activity, wind, microseismic activity and other sources of vibration. There is little information in the literature on the quantitative values of these different noise sources and their significance for microgravity measurements. Experiments were conducted to quantify these sources of noise with multiple instruments, and to develop methodologies to reduce these unwanted signals, thereby improving the accuracy or speed of microgravity measurements. External environmental sources of noise were found to be concentrated at higher frequencies (> 0.1 Hz), well within the instrument's bandwidth. In contrast, the internal instrumental noise was dominant at frequencies much lower than the reciprocal of the maximum integration time, and was identified as the limiting factor for current instruments. The optimum integration time was found to be between 120 and 150 s for the instruments tested. In order to reduce the effects of external environmental noise on microgravity measurements, a filtering and despiking technique was created using data from noisy environments next to a main road and outside on a windy day. The technique showed a significant improvement in the repeatability of measurements, with 40% to 50% lower standard deviations obtained over numerous different data sets. The filtering technique was then tested in field conditions using an anomaly of known size, and a comparison was made between different filtering methods. Results showed improvements, with the proposed method performing better than a conventional (boxcar) averaging process. The proposed despiking process was generally found to be ineffective, with greater gains obtained when complete measurement records were discarded.
Field survey results were worse than static measurement results, possibly due to the actions of moving the Scintrex during the survey which caused instability and elastic relaxation in the sensor, or the liquid tilt sensors, which generated additional low frequency instrument noise. However, the technique will result in significant improvements to accuracy and a reduction of measurement time, both for static measurements, for example at reference sites and observatories, and for field measurements using the next generation of instruments based on new technology, such as atom interferometry, resulting in time and cost savings.
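The contrast between conventional boxcar averaging and a despiked average can be sketched as follows; the robust rejection threshold (median ± k · 1.4826 · MAD) is an assumption for illustration, not necessarily the filter evaluated in the paper:

```python
def boxcar_average(samples):
    """Conventional averaging over the integration window."""
    return sum(samples) / len(samples)

def despiked_average(samples, k=3.0):
    """Discard samples farther than k robust standard deviations
    (1.4826 * MAD) from the median, then average the remainder."""
    ordered = sorted(samples)
    median = ordered[len(ordered) // 2]
    deviations = sorted(abs(x - median) for x in samples)
    mad = deviations[len(deviations) // 2]
    sigma = 1.4826 * mad if mad > 0 else 1e-12
    kept = [x for x in samples if abs(x - median) <= k * sigma]
    return sum(kept) / len(kept)
```

A single vibration spike in an otherwise quiet record shifts the boxcar mean but leaves the despiked average untouched; the paper's finding is that, in practice, discarding whole contaminated records often outperformed per-sample despiking.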
Laffel, Lori
2016-02-01
This study was designed to evaluate accuracy, performance, and safety of the Dexcom (San Diego, CA) G4® Platinum continuous glucose monitoring (CGM) system (G4P) compared with the Dexcom G4 Platinum with Software 505 algorithm (SW505) when used as adjunctive management to blood glucose (BG) monitoring over a 7-day period in youth, 2-17 years of age, with diabetes. Youth wore either one or two sensors placed on the abdomen or upper buttocks for 7 days, calibrating the device twice daily with a uniform BG meter. Participants had one in-clinic session on Day 1, 4, or 7, during which fingerstick BG measurements (self-monitoring of blood glucose [SMBG]) were obtained every 30 ± 5 min for comparison with CGM, and in youth 6-17 years of age, reference YSI glucose measurements were obtained from arterialized venous blood collected every 15 ± 5 min for comparison with CGM. The sensor was removed by the participant/family after 7 days. In comparison of 2,922 temporally paired points of CGM with the reference YSI measurement for G4P and 2,262 paired points for SW505, the mean absolute relative difference (MARD) was 17% for G4P versus 10% for SW505 (P < 0.0001). In comparison of 16,318 temporally paired points of CGM with SMBG for G4P and 4,264 paired points for SW505, MARD was 15% for G4P versus 13% for SW505 (P < 0.0001). Similarly, error grid analyses indicated superior performance with SW505 compared with G4P in comparison of CGM with YSI and CGM with SMBG results, with greater percentages of SW505 results falling within error grid Zone A or the combined Zones A plus B. There were no serious adverse events or device-related serious adverse events for either the G4P or the SW505, and there was no sensor breakoff. The updated algorithm offers substantial improvements in accuracy and performance in pediatric patients with diabetes. Use of CGM with improved performance has potential to increase glucose time in range and improve glycemic outcomes for youth.
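The MARD figures quoted above are mean absolute relative differences over temporally paired CGM and reference readings. A minimal sketch with made-up glucose values (mg/dL):

```python
def mard_percent(cgm_values, reference_values):
    """Mean absolute relative difference (%) between paired CGM and
    reference (YSI or SMBG) glucose readings."""
    assert len(cgm_values) == len(reference_values)
    relative = [abs(c - r) / r for c, r in zip(cgm_values, reference_values)]
    return 100.0 * sum(relative) / len(relative)
```

For example, CGM readings of 110 and 90 mg/dL against references of 100 and 100 mg/dL give a MARD of 10%; lower MARD indicates better analytical accuracy.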
Broadband Ultrasonic Transducers
NASA Technical Reports Server (NTRS)
Heyser, R. C.
1986-01-01
New geometry spreads out resonance region of piezoelectric crystal. In new transducer, crystal surfaces made nonparallel. One surface planar; other, concave. Geometry designed to produce nearly uniform response over a predetermined band of frequencies and to attenuate strongly frequencies outside band. Greater bandwidth improves accuracy of sonar and ultrasonic imaging equipment.
Education for Effective Case Management Practice.
ERIC Educational Resources Information Center
Dickerson, Pamela S.; Mansfield, Jerry A.
2003-01-01
Managed care organization employees (n=115) attended case management training that included case studies, problem solving and communication skills, and focus on internal capability. Three-month follow-up showed that case managers now ask more questions, have more confidence, mentor new employees, and work with greater accuracy. (SK)
Effects of Multiple Crimps and Cable Length on Reflection Signatures from Long Cables
DOT National Transportation Integrated Search
2002-03-19
The accuracy of time domain reflectometry (TDR) measurements of rock shearing with cable lengths greater than 60 m has not been adequately documented. This paper presents the results of controlled crimping and shearing of a 530 m long, 22.2mm diamete...
Adverse Effects in Dual-Star Interferometry
NASA Technical Reports Server (NTRS)
Colavita, M. Mark
2008-01-01
Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, exploiting the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews: the key aspects of the dual-star approach and implementation; the main contributors to the
NASA Astrophysics Data System (ADS)
Jiménez, César; Carbonel, Carlos; Rojas, Joel
2018-04-01
We have implemented a numerical procedure to forecast the parameters of a tsunami, such as the arrival time of the front of the first wave and the maximum wave height at real and virtual tidal stations along the Peruvian coast. For this purpose, a database of pre-computed synthetic tsunami waveforms (or Green functions) was obtained from numerical simulation of seismic unit sources (dimension: 50 × 50 km²) for subduction zones from southern Chile to northern Mexico. A bathymetry resolution of 30 arc-sec (approximately 927 m) was used. The resulting tsunami waveform is obtained from the superposition of the synthetic waveforms corresponding to the seismic unit sources contained within the tsunami source geometry. The numerical procedure was applied to the Chilean tsunami of April 1, 2014. The results show a very good correlation for stations with wave amplitude greater than 1 m: for the Arica tide station, an error (relative to the maximum height of the observed and simulated waveforms) of 3.5% was obtained; for the Callao station the error was 12%; and the largest error, 53.5%, occurred at Chimbote. However, due to the low amplitude of the Chimbote wave (<1 m), the overestimated error in this case is not important for evacuation purposes. The aim of the present research is tsunami early warning, where speed is required rather than accuracy, so the results should be taken as preliminary.
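The superposition step is linear: the forecast waveform at a station is a weighted sum of the pre-computed unit-source waveforms. A minimal sketch with hypothetical time series and weights (the weighting in practice comes from the slip assigned to each subfault of the source model):

```python
def superpose(unit_waveforms, weights):
    """Forecast waveform at a station as a linear combination of
    pre-computed unit-source (Green function) waveforms."""
    n = len(unit_waveforms[0])
    assert all(len(w) == n for w in unit_waveforms)
    assert len(weights) == len(unit_waveforms)
    return [sum(s * w[i] for s, w in zip(weights, unit_waveforms))
            for i in range(n)]

def max_wave_height(waveform):
    """Maximum wave height, one of the forecast parameters compared
    against the tide-gauge observations."""
    return max(waveform)
```

Because the superposition is a simple sum over a pre-computed database, the forecast is nearly instantaneous, which is the point for early warning, where speed is required rather than accuracy.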
NASA Astrophysics Data System (ADS)
Ishihara, Miya; Horiguchi, Akio; Shinmoto, Hiroshi; Tsuda, Hitoshi; Irisawa, Kaku; Wada, Takatsugu; Asano, Tomohiko
2016-03-01
Transrectal ultrasonography (TRUS) is the most popular imaging modality for diagnosing and treating prostate cancer. TRUS-guided prostate biopsy is mandatory for the histological diagnosis of patients with elevated serum prostate-specific antigen (PSA), but its diagnostic accuracy is not satisfactory due to TRUS's low resolution. As a result, a considerable number of patients are required to undergo an unnecessary repeated biopsy. Photoacoustic imaging (PAI) can be used to provide microvascular network imaging using hemoglobin as an intrinsic, optical absorption molecule. We developed an original TRUS-type PAI probe consisting of a micro-convex array transducer with an optical illumination system to provide superimposed PAI and ultrasound images. TRUS-type PAI has the advantage of having much higher resolution and greater contrast than does Doppler TRUS. The purpose of this study was to demonstrate the clinical feasibility of the transrectal PAI system. We performed a clinical trial to compare the image of the cancerous area obtained by transrectal PAI with that obtained by TRUS Doppler during prostate biopsy. The obtained prostate biopsy cores were stained with anti-CD34 antibodies to provide a microvascular distribution map. We also confirmed its consistency with PAI and pre-biopsy MRI findings. Our study demonstrated that transrectal identification of tumor angiogenesis under superimposed photoacoustic and ultrasound images was easier than that under TRUS alone. We recognized a consistent relationship between PAI and MRI findings in most cases. However, there were no correspondences in some cases.
Detection of white spot lesions by segmenting laser speckle images using computer vision methods.
Gavinho, Luciano G; Araujo, Sidnei A; Bussadori, Sandra K; Silva, João V P; Deana, Alessandro M
2018-05-05
This paper aims to develop a method for laser speckle image segmentation of tooth surfaces for the diagnosis of early-stage caries. The method, applied directly to a raw image obtained by digital photography, is based on the difference between the speckle pattern of a carious lesion on the tooth surface and that of a sound area. Each image is divided into blocks, which are identified in a working matrix by the χ² distances between the block histograms of the analyzed image and reference histograms previously obtained by K-means from healthy (h_Sound) and lesioned (h_Decay) areas, separately. If the χ² distance between a block histogram and h_Sound is greater than the distance to h_Decay, the block is marked as decayed. The experiments showed that the method can provide effective segmentation for initial lesions. We used 64 images to test the algorithm and achieved 100% accuracy in segmentation. Differences between the speckle pattern of a sound tooth surface region and a carious region, even at an early stage, can be evidenced by the χ² distance between histograms. The method proves effective for segmenting the laser speckle image, which enhances the contrast between sound and lesioned tissues. The results were obtained with low computational cost. The method has the potential for early diagnosis in a clinical environment, through the development of low-cost portable equipment.
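The block decision rule reduces to comparing two χ² histogram distances. A sketch of that rule; the histogram binning, normalization, and the small ε guard against empty bins are assumptions for illustration:

```python
def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two histograms of equal length."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def classify_block(block_hist, h_sound, h_decay):
    """Mark a block 'decay' when its histogram is farther from the
    sound reference than from the decay reference, per the rule above."""
    if chi2_distance(block_hist, h_sound) > chi2_distance(block_hist, h_decay):
        return "decay"
    return "sound"
```

For example, a block whose histogram mass sits in the same bins as h_Sound is labeled sound, while one matching h_Decay's bins is labeled decayed.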
The effect of speed-accuracy strategy on response interference control in Parkinson's disease.
Wylie, S A; van den Wildenberg, W P M; Ridderinkhof, K R; Bashore, T R; Powell, V D; Manning, C A; Wooten, G F
2009-07-01
Studies that used conflict paradigms such as the Eriksen flanker task show that many individuals with Parkinson's disease (PD) have pronounced difficulty resolving the conflict that arises from the simultaneous activation of mutually exclusive responses. This finding fits well with contemporary views that postulate a key role for the basal ganglia in action selection. The present experiment aims to specify the cognitive processes that underlie action selection deficits among PD patients in the context of variations in speed-accuracy strategy. PD patients (n=28) and healthy controls (n=17) performed an arrow version of the flanker task under task instructions that emphasized either speed or accuracy of responses. Reaction time (RT) and accuracy rates decreased under speed compared to accuracy instructions, although to a lesser extent for the PD group. Differences in flanker interference effects between PD patients and healthy controls depended on speed-accuracy strategy: compared to healthy controls, PD patients showed larger flanker interference effects under speed stress. RT distribution analyses suggested that PD patients have greater difficulty suppressing incorrect response activation when pressing for speed. These initial findings point to an important interaction between strategic and computational aspects of interference control in accounting for cognitive impairments of PD. The results are also compatible with recent brain imaging studies that demonstrate basal ganglia activity to co-vary with speed-accuracy adjustments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... light-sport aircraft that has a VH greater than 87 knots CAS? 61.327 Section 61.327 Aeronautics and...: PILOTS, FLIGHT INSTRUCTORS, AND GROUND INSTRUCTORS Sport Pilots § 61.327 How do I obtain privileges to operate a light-sport aircraft that has a VH greater than 87 knots CAS? If you hold a sport pilot...
Horizontal Temperature Variability in the Stratosphere: Global Variations Inferred from CRISTA Data
NASA Technical Reports Server (NTRS)
Eidmann, G.; Offermann, D.; Jarisch, M.; Preusse, P.; Eckermann, S. D.; Schmidlin, F. J.
2001-01-01
In two separate orbital campaigns (November 1994 and August 1997), the Cryogenic Infrared Spectrometers and Telescopes for the Atmosphere (CRISTA) instrument acquired global stratospheric data of high accuracy and high spatial resolution. The standard limb-scanned CRISTA measurements resolved atmospheric spatial structures with vertical dimensions greater than or equal to 1.5-2 km and horizontal dimensions greater than or equal to 100-200 km. A fluctuation analysis of horizontal temperature distributions derived from these data is presented. This method is somewhat complementary to conventional power-spectral analysis techniques.
Simultaneous optimization method for absorption spectroscopy postprocessing.
Simms, Jean M; An, Xinliang; Brittelle, Mack S; Ramesh, Varun; Ghandhi, Jaal B; Sanders, Scott T
2015-05-10
A simultaneous optimization method is proposed for absorption spectroscopy postprocessing. This method is particularly useful for thermometry measurements based on congested spectra, as commonly encountered in combustion applications of H2O absorption spectroscopy. A comparison test demonstrated that the simultaneous optimization method was more accurate, more precise, and less user-dependent than the common step-wise postprocessing method previously used by the authors. The simultaneous optimization method was also used to process experimental data from an environmental chamber and a constant-volume combustion chamber, producing results with errors on the order of only 1%.
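The contrast between step-wise and simultaneous postprocessing can be sketched abstractly: rather than fixing one parameter and then fitting the next, all parameters are varied together against a single residual. This is a hedged toy illustration only; the linear "spectrum" model, the grid search, and the parameter names `scale` and `offset` are assumptions, not the authors' spectroscopic model or solver.

```python
# Fit two model parameters simultaneously by minimizing one sum of squared
# errors over a joint parameter grid (toy stand-in for spectral fitting).

def model(x, scale, offset):
    return scale * x + offset  # placeholder for a simulated spectrum

def sse(params, xs, ys):
    scale, offset = params
    return sum((model(x, scale, offset) - y) ** 2 for x, y in zip(xs, ys))

def fit_simultaneous(xs, ys, scales, offsets):
    """Grid search over all parameter combinations at once."""
    return min(((s, o) for s in scales for o in offsets),
               key=lambda p: sse(p, xs, ys))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]           # generated with scale=2, offset=1
grid = [i * 0.5 for i in range(9)]  # 0.0, 0.5, ..., 4.0
print(fit_simultaneous(xs, ys, grid, grid))  # → (2.0, 1.0)
```

In a step-wise scheme, an early parameter fixed at a slightly wrong value biases every later fit; optimizing jointly avoids that, which is consistent with the accuracy and user-independence gains reported above.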
Variance estimates and confidence intervals for the Kappa measure of classification accuracy
M. A. Kalkhan; R. M. Reich; R. L. Czaplewski
1997-01-01
The Kappa statistic is frequently used to characterize the results of an accuracy assessment used to evaluate land use and land cover classifications obtained by remotely sensed data. This statistic allows comparisons of alternative sampling designs, classification algorithms, photo-interpreters, and so forth. In order to make these comparisons, it is...
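The Kappa statistic itself is computed from the error (confusion) matrix of an accuracy assessment. The sketch below shows only the point estimate; the variance and confidence-interval formulas the article addresses are omitted, and the matrix values are illustrative, not data from the article.

```python
# Cohen's Kappa from a square error matrix: agreement beyond chance.
# Rows = reference classes, columns = classified (map) classes.

def kappa(matrix):
    n = sum(sum(row) for row in matrix)
    k = len(matrix)
    po = sum(matrix[i][i] for i in range(k)) / n            # observed agreement
    pe = sum(sum(matrix[i]) * sum(r[i] for r in matrix)
             for i in range(k)) / n ** 2                    # chance agreement
    return (po - pe) / (1 - pe)

# Toy 2-class assessment: 85/100 samples on the diagonal
m = [[45, 5],
     [10, 40]]
print(round(kappa(m), 3))  # → 0.7
```

A Kappa of 0 means agreement no better than chance and 1 means perfect agreement, which is what makes it a fairer basis than raw accuracy for comparing sampling designs or classifiers.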
ERIC Educational Resources Information Center
Pena, Elizabeth D.; Gillam, Ronald B.; Malek, Melynn; Ruiz-Felter, Roxanna; Resendiz, Maria; Fiestas, Christine; Sabel, Tracy
2006-01-01
Two experiments examined reliability and classification accuracy of a narration-based dynamic assessment task. Purpose: The first experiment evaluated whether parallel results were obtained from stories created in response to 2 different wordless picture books. If so, the tasks and measures would be appropriate for assessing pretest and posttest…
NASA Astrophysics Data System (ADS)
Ha, Jin Gwan; Moon, Hyeonjoon; Kwak, Jin Tae; Hassan, Syed Ibrahim; Dang, Minh; Lee, O. New; Park, Han Yong
2017-10-01
Recently, unmanned aerial vehicles (UAVs) have gained much attention. In particular, there is a growing interest in utilizing UAVs for agricultural applications such as crop monitoring and management. We propose a computerized system that is capable of detecting Fusarium wilt of radish with high accuracy. The system adopts computer vision and machine learning techniques, including deep learning, to process the images captured by UAVs at low altitudes and to identify the infected radish. The whole radish field is first segmented into three distinctive regions (radish, bare ground, and mulching film) via a softmax classifier and K-means clustering. Then, the identified radish regions are further classified into healthy radish and Fusarium wilt of radish using a deep convolutional neural network (CNN). In identifying radish, bare ground, and mulching film from a radish field, we achieved an accuracy of ≥97.4%. In detecting Fusarium wilt of radish, the CNN obtained an accuracy of 93.3%, outperforming a standard machine learning algorithm, which obtained 82.9% accuracy. Therefore, UAVs equipped with computational techniques are promising tools for improving the quality and efficiency of agriculture today.
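The K-means step in the pipeline above groups pixels by appearance before the CNN sees them. A full reproduction would need image data and a deep learning framework; the hedged toy below shows only 1-D K-means on pixel intensities, with made-up values standing in for the radish, bare-ground, and mulching-film regions.

```python
# Minimal 1-D K-means: iteratively assign values to the nearest center,
# then move each center to the mean of its cluster.

def kmeans_1d(values, centers, iters=20):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Toy pixel intensities from three visually distinct regions
pixels = [0.1, 0.12, 0.11, 0.5, 0.52, 0.49, 0.9, 0.88, 0.91]
print(kmeans_1d(pixels, centers=[0.0, 0.5, 1.0]))
```

Real field imagery would cluster multi-channel color features rather than a single intensity, but the assignment/update loop is the same.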
Percutaneous CT-guided biopsy of the spine: results of 430 biopsies
Rimondi, Eugenio; Errani, Costantino; Bianchi, Giuseppe; Casadei, Roberto; Alberghini, Marco; Malaguti, Maria Cristina; Rossi, Giuseppe; Durante, Stefano; Mercuri, Mario
2008-01-01
Biopsies of lesions in the spine are often challenging procedures with a significant risk of complications. CT-guided needle biopsies can lower these risks, but uncertainties still exist about their diagnostic accuracy. The aim of this retrospective study was to evaluate the diagnostic accuracy of CT-guided needle biopsies for bone lesions of the spine. We retrieved and examined the results of 430 core needle biopsies carried out over the past fifteen years at the authors' institute. Of the 430 biopsies performed, the correct diagnosis was made with the first CT-guided needle biopsy in 401 cases (93.3% accuracy rate). The highest accuracy rates were obtained in primary and secondary malignant lesions. Most false negative results were found in cervical lesions and in benign, pseudotumoral, inflammatory, and systemic pathologies. There were only 9 complications (5 cases of transient paresis and 4 haematomas that resolved spontaneously), none of which influenced the treatment strategy or the patient's outcome. In conclusion, this technique is reliable and safe and should be considered the gold standard for biopsies of the spine. PMID:18463900